Dataset schema (one row per column):

| column | dtype | range / values |
|---|---|---|
| id | string | lengths 6–113 |
| author | string | lengths 2–36 |
| task_category | string | 39 classes |
| tags | sequence | lengths 1–4.05k |
| created_time | int64 | 1,646B–1,742B |
| last_modified | timestamp[s] | 2020-05-14 13:13:12 – 2025-03-18 10:01:09 |
| downloads | int64 | 0–118M |
| likes | int64 | 0–4.86k |
| README | string | lengths 30–1.01M |
| matched_task | sequence | lengths 1–10 |
| is_bionlp | string | 3 classes |
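A minimal sketch of loading this dump and pulling the MTEB `main_score` per task out of each model card's front matter, assuming the table is published as a Hugging Face dataset; the repo ID `example-org/model-card-dump` is a placeholder, not the actual dataset path, and the front-matter parsing simply follows the `model-index` layout visible in the row below.

```python
# Minimal sketch, assuming the dump is available as a Hugging Face dataset.
import yaml  # PyYAML
from datasets import load_dataset

# Placeholder repo ID -- substitute the real dataset path.
ds = load_dataset("example-org/model-card-dump", split="train")
print(ds.column_names)
# Expected, per the schema above:
# ['id', 'author', 'task_category', 'tags', 'created_time', 'last_modified',
#  'downloads', 'likes', 'README', 'matched_task', 'is_bionlp']

row = ds[0]
readme = row["README"]

# The README column holds the raw model card; its YAML front matter sits
# between the leading '---' and the next '---'.
if readme.startswith("---"):
    front_matter = readme.split("---", 2)[1]
    meta = yaml.safe_load(front_matter)
    # model-index -> results -> metrics, as in the card reproduced below.
    for entry in meta.get("model-index", []):
        for result in entry.get("results", []):
            task_name = result["dataset"]["name"]
            scores = {m["type"]: m["value"] for m in result["metrics"]}
            print(task_name, scores.get("main_score"))
```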
The first row of the dump (leading fields):

| id | author | task_category | tags | created_time | last_modified | downloads | likes |
|---|---|---|---|---|---|---|---|
| benayad7/concat-e5-small-bge-small-01 | benayad7 | null | ["mteb", "model-index", "region:us"] | 1,728,551,241,000 | 2024-10-14T09:37:01 | 0 | 0 |

Its README field is reproduced below, beginning with the model card's YAML front matter:

---
tags:
- mteb
model-index:
- name: no_model_name_available
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.58470764617691
- type: ap
value: 24.719701151617723
- type: ap_weighted
value: 24.719701151617723
- type: f1
value: 63.00164246074738
- type: f1_weighted
value: 80.03796552199202
- type: main_score
value: 75.58470764617691
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.34328358208955
- type: ap
value: 37.50929758783498
- type: ap_weighted
value: 37.50929758783498
- type: f1
value: 68.47468266207234
- type: f1_weighted
value: 76.71536156910686
- type: main_score
value: 74.34328358208955
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: validation
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.10810810810811
- type: ap
value: 21.095894998268182
- type: ap_weighted
value: 21.095894998268182
- type: f1
value: 59.88562259265849
- type: f1_weighted
value: 78.24218318628027
- type: main_score
value: 73.10810810810811
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: validation
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.76119402985076
- type: ap
value: 33.462242773250075
- type: ap_weighted
value: 33.462242773250075
- type: f1
value: 66.50228790953409
- type: f1_weighted
value: 76.66423272035549
- type: main_score
value: 73.76119402985076
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification (default)
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 92.99744999999999
- type: ap
value: 89.7770669227447
- type: ap_weighted
value: 89.7770669227447
- type: f1
value: 92.98870898689393
- type: f1_weighted
value: 92.98870898689394
- type: main_score
value: 92.99744999999999
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 49.364000000000004
- type: f1
value: 48.09686892529694
- type: f1_weighted
value: 48.09686892529693
- type: main_score
value: 49.364000000000004
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: validation
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.797999999999995
- type: f1
value: 47.572308082658886
- type: f1_weighted
value: 47.572308082658886
- type: main_score
value: 48.797999999999995
- task:
type: Retrieval
dataset:
name: MTEB ArguAna (default)
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: main_score
value: 59.345000000000006
- type: map_at_1
value: 34.993
- type: map_at_10
value: 50.93299999999999
- type: map_at_100
value: 51.653999999999996
- type: map_at_1000
value: 51.659
- type: map_at_20
value: 51.503
- type: map_at_3
value: 46.539
- type: map_at_5
value: 49.078
- type: mrr_at_1
value: 35.34850640113798
- type: mrr_at_10
value: 51.08029081713292
- type: mrr_at_100
value: 51.79453395523629
- type: mrr_at_1000
value: 51.79922640990794
- type: mrr_at_20
value: 51.64015533841996
- type: mrr_at_3
value: 46.68089141773353
- type: mrr_at_5
value: 49.20578473210052
- type: nauc_map_at_1000_diff1
value: 10.777019519798255
- type: nauc_map_at_1000_max
value: -7.407917463470924
- type: nauc_map_at_1000_std
value: -7.563268405241759
- type: nauc_map_at_100_diff1
value: 10.779545140640199
- type: nauc_map_at_100_max
value: -7.401298063310178
- type: nauc_map_at_100_std
value: -7.556601235531485
- type: nauc_map_at_10_diff1
value: 10.721539302065686
- type: nauc_map_at_10_max
value: -7.166136768327753
- type: nauc_map_at_10_std
value: -7.293662115267615
- type: nauc_map_at_1_diff1
value: 14.156914156584854
- type: nauc_map_at_1_max
value: -9.498793111705812
- type: nauc_map_at_1_std
value: -9.726351399489474
- type: nauc_map_at_20_diff1
value: 10.761980363958246
- type: nauc_map_at_20_max
value: -7.3011716552322445
- type: nauc_map_at_20_std
value: -7.474633798851565
- type: nauc_map_at_3_diff1
value: 10.32402309373244
- type: nauc_map_at_3_max
value: -7.789968158157112
- type: nauc_map_at_3_std
value: -8.459911202243127
- type: nauc_map_at_5_diff1
value: 9.950359837131687
- type: nauc_map_at_5_max
value: -7.54700058691015
- type: nauc_map_at_5_std
value: -7.4852438758641515
- type: nauc_mrr_at_1000_diff1
value: 9.545661892556165
- type: nauc_mrr_at_1000_max
value: -8.17995767306064
- type: nauc_mrr_at_1000_std
value: -7.690544153747644
- type: nauc_mrr_at_100_diff1
value: 9.54837560401051
- type: nauc_mrr_at_100_max
value: -8.173227861584765
- type: nauc_mrr_at_100_std
value: -7.683867106566364
- type: nauc_mrr_at_10_diff1
value: 9.522795074409384
- type: nauc_mrr_at_10_max
value: -7.9063165040486645
- type: nauc_mrr_at_10_std
value: -7.431873185012491
- type: nauc_mrr_at_1_diff1
value: 13.130099478615803
- type: nauc_mrr_at_1_max
value: -9.992669947319378
- type: nauc_mrr_at_1_std
value: -10.051618087478493
- type: nauc_mrr_at_20_diff1
value: 9.541270804874118
- type: nauc_mrr_at_20_max
value: -8.058828250931276
- type: nauc_mrr_at_20_std
value: -7.598979599209439
- type: nauc_mrr_at_3_diff1
value: 9.116533214412966
- type: nauc_mrr_at_3_max
value: -8.540266568222016
- type: nauc_mrr_at_3_std
value: -8.693222832504544
- type: nauc_mrr_at_5_diff1
value: 8.694955232599785
- type: nauc_mrr_at_5_max
value: -8.34107055995193
- type: nauc_mrr_at_5_std
value: -7.6629161714950484
- type: nauc_ndcg_at_1000_diff1
value: 10.526533346595262
- type: nauc_ndcg_at_1000_max
value: -6.703732662610731
- type: nauc_ndcg_at_1000_std
value: -6.603623582089393
- type: nauc_ndcg_at_100_diff1
value: 10.561172146895252
- type: nauc_ndcg_at_100_max
value: -6.514791218416639
- type: nauc_ndcg_at_100_std
value: -6.354781956088969
- type: nauc_ndcg_at_10_diff1
value: 10.361381171866082
- type: nauc_ndcg_at_10_max
value: -5.2230158943462985
- type: nauc_ndcg_at_10_std
value: -5.007524284028199
- type: nauc_ndcg_at_1_diff1
value: 14.156914156584854
- type: nauc_ndcg_at_1_max
value: -9.498793111705812
- type: nauc_ndcg_at_1_std
value: -9.726351399489474
- type: nauc_ndcg_at_20_diff1
value: 10.517676064474847
- type: nauc_ndcg_at_20_max
value: -5.711765067868173
- type: nauc_ndcg_at_20_std
value: -5.605824861630572
- type: nauc_ndcg_at_3_diff1
value: 9.186600380601536
- type: nauc_ndcg_at_3_max
value: -7.056576818596352
- type: nauc_ndcg_at_3_std
value: -7.812761606548313
- type: nauc_ndcg_at_5_diff1
value: 8.432418101937406
- type: nauc_ndcg_at_5_max
value: -6.480411547015269
- type: nauc_ndcg_at_5_std
value: -5.777975379414998
- type: nauc_precision_at_1000_diff1
value: 13.955746514246906
- type: nauc_precision_at_1000_max
value: 28.8840202640091
- type: nauc_precision_at_1000_std
value: 65.36756825837523
- type: nauc_precision_at_100_diff1
value: 14.686199038527814
- type: nauc_precision_at_100_max
value: 31.31632878025447
- type: nauc_precision_at_100_std
value: 54.07113844512418
- type: nauc_precision_at_10_diff1
value: 9.662188427513989
- type: nauc_precision_at_10_max
value: 9.22345326975617
- type: nauc_precision_at_10_std
value: 11.831872077040245
- type: nauc_precision_at_1_diff1
value: 14.156914156584854
- type: nauc_precision_at_1_max
value: -9.498793111705812
- type: nauc_precision_at_1_std
value: -9.726351399489474
- type: nauc_precision_at_20_diff1
value: 11.432148213771988
- type: nauc_precision_at_20_max
value: 18.514764981962074
- type: nauc_precision_at_20_std
value: 22.78510644336147
- type: nauc_precision_at_3_diff1
value: 5.6317217760949685
- type: nauc_precision_at_3_max
value: -4.676598241685319
- type: nauc_precision_at_3_std
value: -5.684836764145716
- type: nauc_precision_at_5_diff1
value: 2.5659150784472753
- type: nauc_precision_at_5_max
value: -2.1551693896958524
- type: nauc_precision_at_5_std
value: 1.384881159297645
- type: nauc_recall_at_1000_diff1
value: 13.955746514247625
- type: nauc_recall_at_1000_max
value: 28.884020264002835
- type: nauc_recall_at_1000_std
value: 65.36756825836953
- type: nauc_recall_at_100_diff1
value: 14.686199038527365
- type: nauc_recall_at_100_max
value: 31.31632878025263
- type: nauc_recall_at_100_std
value: 54.071138445123225
- type: nauc_recall_at_10_diff1
value: 9.662188427513836
- type: nauc_recall_at_10_max
value: 9.22345326975619
- type: nauc_recall_at_10_std
value: 11.83187207704014
- type: nauc_recall_at_1_diff1
value: 14.156914156584854
- type: nauc_recall_at_1_max
value: -9.498793111705812
- type: nauc_recall_at_1_std
value: -9.726351399489474
- type: nauc_recall_at_20_diff1
value: 11.432148213772072
- type: nauc_recall_at_20_max
value: 18.514764981961978
- type: nauc_recall_at_20_std
value: 22.785106443361688
- type: nauc_recall_at_3_diff1
value: 5.63172177609501
- type: nauc_recall_at_3_max
value: -4.676598241685221
- type: nauc_recall_at_3_std
value: -5.684836764145642
- type: nauc_recall_at_5_diff1
value: 2.5659150784473406
- type: nauc_recall_at_5_max
value: -2.1551693896959048
- type: nauc_recall_at_5_std
value: 1.3848811592975896
- type: ndcg_at_1
value: 34.993
- type: ndcg_at_10
value: 59.345000000000006
- type: ndcg_at_100
value: 62.324999999999996
- type: ndcg_at_1000
value: 62.437
- type: ndcg_at_20
value: 61.36899999999999
- type: ndcg_at_3
value: 50.381
- type: ndcg_at_5
value: 54.923
- type: precision_at_1
value: 34.993
- type: precision_at_10
value: 8.599
- type: precision_at_100
value: 0.988
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.694
- type: precision_at_3
value: 20.507
- type: precision_at_5
value: 14.495
- type: recall_at_1
value: 34.993
- type: recall_at_10
value: 85.989
- type: recall_at_100
value: 98.791
- type: recall_at_1000
value: 99.644
- type: recall_at_20
value: 93.88300000000001
- type: recall_at_3
value: 61.522
- type: recall_at_5
value: 72.475
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P (default)
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: main_score
value: 47.552697319955676
- type: v_measure
value: 47.552697319955676
- type: v_measure_std
value: 13.808952106577216
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S (default)
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: main_score
value: 40.03337707775063
- type: v_measure
value: 40.03337707775063
- type: v_measure_std
value: 14.249888078941146
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions (default)
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: main_score
value: 61.8126291529566
- type: map
value: 61.8126291529566
- type: mrr
value: 74.36705799586686
- type: nAUC_map_diff1
value: 8.967467836084921
- type: nAUC_map_max
value: 26.579764001539115
- type: nAUC_map_std
value: 17.40600362828969
- type: nAUC_mrr_diff1
value: 17.598155509582792
- type: nAUC_mrr_max
value: 39.897111892695534
- type: nAUC_mrr_std
value: 22.494399933018467
- task:
type: STS
dataset:
name: MTEB BIOSSES (default)
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cosine_pearson
value: 86.08936662443202
- type: cosine_spearman
value: 84.89832000798314
- type: euclidean_pearson
value: 84.81392595158542
- type: euclidean_spearman
value: 84.89832000798314
- type: main_score
value: 84.89832000798314
- type: manhattan_pearson
value: 85.13291943945366
- type: manhattan_spearman
value: 85.16820567558344
- type: pearson
value: 86.08936662443202
- type: spearman
value: 84.89832000798314
- task:
type: Classification
dataset:
name: MTEB Banking77Classification (default)
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 82.18831168831169
- type: f1
value: 81.5323786025089
- type: f1_weighted
value: 81.5323786025089
- type: main_score
value: 82.18831168831169
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P (default)
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: main_score
value: 38.67384534802573
- type: v_measure
value: 38.67384534802573
- type: v_measure_std
value: 0.7568897383721817
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S (default)
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: main_score
value: 34.355519780975385
- type: v_measure
value: 34.355519780975385
- type: v_measure_std
value: 1.110838684148089
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval (default)
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: main_score
value: 49.34
- type: map_at_1
value: 31.978
- type: map_at_10
value: 42.963
- type: map_at_100
value: 44.446999999999996
- type: map_at_1000
value: 44.584
- type: map_at_20
value: 43.804
- type: map_at_3
value: 39.422000000000004
- type: map_at_5
value: 41.504000000000005
- type: mrr_at_1
value: 39.77110157367668
- type: mrr_at_10
value: 49.26595817153757
- type: mrr_at_100
value: 49.978002791230615
- type: mrr_at_1000
value: 50.02634861972985
- type: mrr_at_20
value: 49.71356312876928
- type: mrr_at_3
value: 46.49499284692418
- type: mrr_at_5
value: 48.22603719599428
- type: nauc_map_at_1000_diff1
value: 47.79618096338393
- type: nauc_map_at_1000_max
value: 41.176156568022584
- type: nauc_map_at_1000_std
value: -2.896933596181026
- type: nauc_map_at_100_diff1
value: 47.809435131565934
- type: nauc_map_at_100_max
value: 41.16647280007524
- type: nauc_map_at_100_std
value: -2.8831579711166193
- type: nauc_map_at_10_diff1
value: 48.10973144108661
- type: nauc_map_at_10_max
value: 40.664863006508625
- type: nauc_map_at_10_std
value: -3.578886025167837
- type: nauc_map_at_1_diff1
value: 51.708891898356256
- type: nauc_map_at_1_max
value: 36.28682951162398
- type: nauc_map_at_1_std
value: -7.4668723971868065
- type: nauc_map_at_20_diff1
value: 47.9555643209195
- type: nauc_map_at_20_max
value: 40.916180069472304
- type: nauc_map_at_20_std
value: -3.0932267113062433
- type: nauc_map_at_3_diff1
value: 48.98794466901641
- type: nauc_map_at_3_max
value: 39.681057169237455
- type: nauc_map_at_3_std
value: -5.066827772334784
- type: nauc_map_at_5_diff1
value: 48.646328680462275
- type: nauc_map_at_5_max
value: 40.477824460870764
- type: nauc_map_at_5_std
value: -4.491443351489804
- type: nauc_mrr_at_1000_diff1
value: 45.616911771663744
- type: nauc_mrr_at_1000_max
value: 41.74404269617856
- type: nauc_mrr_at_1000_std
value: -4.559184319493835
- type: nauc_mrr_at_100_diff1
value: 45.59611866833938
- type: nauc_mrr_at_100_max
value: 41.72604055232334
- type: nauc_mrr_at_100_std
value: -4.558345700839123
- type: nauc_mrr_at_10_diff1
value: 45.55894061838237
- type: nauc_mrr_at_10_max
value: 41.77360038611892
- type: nauc_mrr_at_10_std
value: -4.618036976051173
- type: nauc_mrr_at_1_diff1
value: 48.15582854302911
- type: nauc_mrr_at_1_max
value: 39.82659893618731
- type: nauc_mrr_at_1_std
value: -8.183852381071691
- type: nauc_mrr_at_20_diff1
value: 45.57723701387737
- type: nauc_mrr_at_20_max
value: 41.6914875419549
- type: nauc_mrr_at_20_std
value: -4.4669938868552865
- type: nauc_mrr_at_3_diff1
value: 46.254550893686755
- type: nauc_mrr_at_3_max
value: 41.4719065274011
- type: nauc_mrr_at_3_std
value: -5.655174401673945
- type: nauc_mrr_at_5_diff1
value: 46.042457649709675
- type: nauc_mrr_at_5_max
value: 42.03643705756859
- type: nauc_mrr_at_5_std
value: -4.958139821587032
- type: nauc_ndcg_at_1000_diff1
value: 45.81204094950804
- type: nauc_ndcg_at_1000_max
value: 42.73689566776186
- type: nauc_ndcg_at_1000_std
value: -0.7066382518858306
- type: nauc_ndcg_at_100_diff1
value: 45.28640379554905
- type: nauc_ndcg_at_100_max
value: 42.681083144594815
- type: nauc_ndcg_at_100_std
value: -0.09688585685774474
- type: nauc_ndcg_at_10_diff1
value: 45.71414379972128
- type: nauc_ndcg_at_10_max
value: 41.53626843173395
- type: nauc_ndcg_at_10_std
value: -1.8507612518474037
- type: nauc_ndcg_at_1_diff1
value: 48.15582854302911
- type: nauc_ndcg_at_1_max
value: 39.82659893618731
- type: nauc_ndcg_at_1_std
value: -8.183852381071691
- type: nauc_ndcg_at_20_diff1
value: 45.39833040860289
- type: nauc_ndcg_at_20_max
value: 41.55849686342755
- type: nauc_ndcg_at_20_std
value: -1.0215221134520232
- type: nauc_ndcg_at_3_diff1
value: 46.87312912585575
- type: nauc_ndcg_at_3_max
value: 41.057497209218255
- type: nauc_ndcg_at_3_std
value: -3.9052157576939535
- type: nauc_ndcg_at_5_diff1
value: 46.52903875914039
- type: nauc_ndcg_at_5_max
value: 41.793701014010786
- type: nauc_ndcg_at_5_std
value: -2.9870402822330866
- type: nauc_precision_at_1000_diff1
value: -19.737292446155067
- type: nauc_precision_at_1000_max
value: -10.181883018676627
- type: nauc_precision_at_1000_std
value: -3.068587468550703
- type: nauc_precision_at_100_diff1
value: -11.377068363169922
- type: nauc_precision_at_100_max
value: 6.3871026306788705
- type: nauc_precision_at_100_std
value: 8.872198838850938
- type: nauc_precision_at_10_diff1
value: 9.599810544198046
- type: nauc_precision_at_10_max
value: 26.62316097540815
- type: nauc_precision_at_10_std
value: 8.686741005293698
- type: nauc_precision_at_1_diff1
value: 48.15582854302911
- type: nauc_precision_at_1_max
value: 39.82659893618731
- type: nauc_precision_at_1_std
value: -8.183852381071691
- type: nauc_precision_at_20_diff1
value: 0.25496998553714123
- type: nauc_precision_at_20_max
value: 18.471523812532052
- type: nauc_precision_at_20_std
value: 10.351704374808387
- type: nauc_precision_at_3_diff1
value: 30.617775870803055
- type: nauc_precision_at_3_max
value: 38.01125065426583
- type: nauc_precision_at_3_std
value: -0.7623574434648546
- type: nauc_precision_at_5_diff1
value: 20.450147138491545
- type: nauc_precision_at_5_max
value: 33.18114326263472
- type: nauc_precision_at_5_std
value: 2.998578407482621
- type: nauc_recall_at_1000_diff1
value: 22.91280255517604
- type: nauc_recall_at_1000_max
value: 64.0102572612793
- type: nauc_recall_at_1000_std
value: 51.549096370979406
- type: nauc_recall_at_100_diff1
value: 28.635647358302503
- type: nauc_recall_at_100_max
value: 44.00473186289832
- type: nauc_recall_at_100_std
value: 19.912314783541138
- type: nauc_recall_at_10_diff1
value: 37.808472531928786
- type: nauc_recall_at_10_max
value: 38.67030494361658
- type: nauc_recall_at_10_std
value: 4.063425477456253
- type: nauc_recall_at_1_diff1
value: 51.708891898356256
- type: nauc_recall_at_1_max
value: 36.28682951162398
- type: nauc_recall_at_1_std
value: -7.4668723971868065
- type: nauc_recall_at_20_diff1
value: 34.51072533447235
- type: nauc_recall_at_20_max
value: 37.73082104160229
- type: nauc_recall_at_20_std
value: 8.219053762621884
- type: nauc_recall_at_3_diff1
value: 44.99432786987943
- type: nauc_recall_at_3_max
value: 39.55995143135335
- type: nauc_recall_at_3_std
value: -2.4914190692599165
- type: nauc_recall_at_5_diff1
value: 42.547651108851106
- type: nauc_recall_at_5_max
value: 40.85551908930608
- type: nauc_recall_at_5_std
value: 0.4962059846390094
- type: ndcg_at_1
value: 39.771
- type: ndcg_at_10
value: 49.34
- type: ndcg_at_100
value: 54.595000000000006
- type: ndcg_at_1000
value: 56.666000000000004
- type: ndcg_at_20
value: 51.492000000000004
- type: ndcg_at_3
value: 44.308
- type: ndcg_at_5
value: 46.838
- type: precision_at_1
value: 39.771
- type: precision_at_10
value: 9.442
- type: precision_at_100
value: 1.5010000000000001
- type: precision_at_1000
value: 0.2
- type: precision_at_20
value: 5.651
- type: precision_at_3
value: 21.125
- type: precision_at_5
value: 15.479000000000001
- type: recall_at_1
value: 31.978
- type: recall_at_10
value: 61.129
- type: recall_at_100
value: 83.052
- type: recall_at_1000
value: 96.378
- type: recall_at_20
value: 68.72500000000001
- type: recall_at_3
value: 46.518
- type: recall_at_5
value: 53.59
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval (default)
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: main_score
value: 45.286
- type: map_at_1
value: 30.169
- type: map_at_10
value: 39.711999999999996
- type: map_at_100
value: 40.898
- type: map_at_1000
value: 41.038999999999994
- type: map_at_20
value: 40.357
- type: map_at_3
value: 36.949
- type: map_at_5
value: 38.473
- type: mrr_at_1
value: 37.57961783439491
- type: mrr_at_10
value: 46.031847133757964
- type: mrr_at_100
value: 46.65912851493955
- type: mrr_at_1000
value: 46.707042666709604
- type: mrr_at_20
value: 46.40068681036863
- type: mrr_at_3
value: 43.7791932059448
- type: mrr_at_5
value: 45.10084925690022
- type: nauc_map_at_1000_diff1
value: 50.85056542222115
- type: nauc_map_at_1000_max
value: 31.05651616405919
- type: nauc_map_at_1000_std
value: -2.2368126065205987
- type: nauc_map_at_100_diff1
value: 50.89417988965435
- type: nauc_map_at_100_max
value: 30.997137971767142
- type: nauc_map_at_100_std
value: -2.3409324916593492
- type: nauc_map_at_10_diff1
value: 51.142437175292955
- type: nauc_map_at_10_max
value: 30.227559741209287
- type: nauc_map_at_10_std
value: -3.7930421273349264
- type: nauc_map_at_1_diff1
value: 55.941637622487214
- type: nauc_map_at_1_max
value: 25.080533324385613
- type: nauc_map_at_1_std
value: -8.864138917095795
- type: nauc_map_at_20_diff1
value: 50.96714330539312
- type: nauc_map_at_20_max
value: 30.56073364246607
- type: nauc_map_at_20_std
value: -3.1109111786530703
- type: nauc_map_at_3_diff1
value: 51.56324965582417
- type: nauc_map_at_3_max
value: 28.83764240000901
- type: nauc_map_at_3_std
value: -5.496574943583724
- type: nauc_map_at_5_diff1
value: 51.2643900871776
- type: nauc_map_at_5_max
value: 29.42492887150313
- type: nauc_map_at_5_std
value: -4.7906573624880675
- type: nauc_mrr_at_1000_diff1
value: 49.81188202883573
- type: nauc_mrr_at_1000_max
value: 32.758906990991626
- type: nauc_mrr_at_1000_std
value: 1.5051321484514517
- type: nauc_mrr_at_100_diff1
value: 49.808094282032876
- type: nauc_mrr_at_100_max
value: 32.75870267430557
- type: nauc_mrr_at_100_std
value: 1.5117004894545576
- type: nauc_mrr_at_10_diff1
value: 49.86964428608597
- type: nauc_mrr_at_10_max
value: 32.75447791816241
- type: nauc_mrr_at_10_std
value: 1.0472130007711005
- type: nauc_mrr_at_1_diff1
value: 54.63093984058127
- type: nauc_mrr_at_1_max
value: 31.486608661031973
- type: nauc_mrr_at_1_std
value: -1.3741363436543739
- type: nauc_mrr_at_20_diff1
value: 49.790685542152005
- type: nauc_mrr_at_20_max
value: 32.75477912841072
- type: nauc_mrr_at_20_std
value: 1.338641256922409
- type: nauc_mrr_at_3_diff1
value: 50.17281274015403
- type: nauc_mrr_at_3_max
value: 32.55859057277966
- type: nauc_mrr_at_3_std
value: 0.6635403402684905
- type: nauc_mrr_at_5_diff1
value: 49.99493486356793
- type: nauc_mrr_at_5_max
value: 32.61428342888774
- type: nauc_mrr_at_5_std
value: 0.7430591391457354
- type: nauc_ndcg_at_1000_diff1
value: 48.22576941926604
- type: nauc_ndcg_at_1000_max
value: 32.93511771089829
- type: nauc_ndcg_at_1000_std
value: 3.129856319572661
- type: nauc_ndcg_at_100_diff1
value: 48.62921110376326
- type: nauc_ndcg_at_100_max
value: 32.991588269891764
- type: nauc_ndcg_at_100_std
value: 3.0722970856766
- type: nauc_ndcg_at_10_diff1
value: 49.18605872592804
- type: nauc_ndcg_at_10_max
value: 32.24106173386995
- type: nauc_ndcg_at_10_std
value: -0.2906781273981978
- type: nauc_ndcg_at_1_diff1
value: 54.63093984058127
- type: nauc_ndcg_at_1_max
value: 31.486608661031973
- type: nauc_ndcg_at_1_std
value: -1.3741363436543739
- type: nauc_ndcg_at_20_diff1
value: 48.67444292759105
- type: nauc_ndcg_at_20_max
value: 32.38356642801882
- type: nauc_ndcg_at_20_std
value: 1.0417910616522417
- type: nauc_ndcg_at_3_diff1
value: 49.64405616341687
- type: nauc_ndcg_at_3_max
value: 31.851912727510662
- type: nauc_ndcg_at_3_std
value: -1.3378234982644532
- type: nauc_ndcg_at_5_diff1
value: 49.41698121410308
- type: nauc_ndcg_at_5_max
value: 31.796598394828706
- type: nauc_ndcg_at_5_std
value: -1.356552455719212
- type: nauc_precision_at_1000_diff1
value: -16.185681544062707
- type: nauc_precision_at_1000_max
value: 15.06181269349989
- type: nauc_precision_at_1000_std
value: 29.16661769411087
- type: nauc_precision_at_100_diff1
value: -4.50092289449142
- type: nauc_precision_at_100_max
value: 26.889499343262663
- type: nauc_precision_at_100_std
value: 34.61625224772955
- type: nauc_precision_at_10_diff1
value: 16.845250924167054
- type: nauc_precision_at_10_max
value: 33.18165476286763
- type: nauc_precision_at_10_std
value: 20.01633509478702
- type: nauc_precision_at_1_diff1
value: 54.63093984058127
- type: nauc_precision_at_1_max
value: 31.486608661031973
- type: nauc_precision_at_1_std
value: -1.3741363436543739
- type: nauc_precision_at_20_diff1
value: 7.904987701430007
- type: nauc_precision_at_20_max
value: 31.66538848828912
- type: nauc_precision_at_20_std
value: 27.373203146189333
- type: nauc_precision_at_3_diff1
value: 32.98074210677402
- type: nauc_precision_at_3_max
value: 35.70551590699247
- type: nauc_precision_at_3_std
value: 9.64957518804935
- type: nauc_precision_at_5_diff1
value: 25.68789006898062
- type: nauc_precision_at_5_max
value: 34.528632431840606
- type: nauc_precision_at_5_std
value: 13.758348844848392
- type: nauc_recall_at_1000_diff1
value: 27.292499993748585
- type: nauc_recall_at_1000_max
value: 33.50856923877545
- type: nauc_recall_at_1000_std
value: 32.219345025751586
- type: nauc_recall_at_100_diff1
value: 37.33505162034011
- type: nauc_recall_at_100_max
value: 33.97748076845698
- type: nauc_recall_at_100_std
value: 21.716723075947385
- type: nauc_recall_at_10_diff1
value: 42.67650896437499
- type: nauc_recall_at_10_max
value: 30.930396085103517
- type: nauc_recall_at_10_std
value: 2.5045964341943163
- type: nauc_recall_at_1_diff1
value: 55.941637622487214
- type: nauc_recall_at_1_max
value: 25.080533324385613
- type: nauc_recall_at_1_std
value: -8.864138917095795
- type: nauc_recall_at_20_diff1
value: 39.97433575808996
- type: nauc_recall_at_20_max
value: 31.44609230387655
- type: nauc_recall_at_20_std
value: 7.736221159385454
- type: nauc_recall_at_3_diff1
value: 46.21571868804605
- type: nauc_recall_at_3_max
value: 28.893654020460204
- type: nauc_recall_at_3_std
value: -3.6376291909998493
- type: nauc_recall_at_5_diff1
value: 44.64111423499069
- type: nauc_recall_at_5_max
value: 29.065352256728865
- type: nauc_recall_at_5_std
value: -2.0155177145574905
- type: ndcg_at_1
value: 37.580000000000005
- type: ndcg_at_10
value: 45.286
- type: ndcg_at_100
value: 49.518
- type: ndcg_at_1000
value: 51.778999999999996
- type: ndcg_at_20
value: 46.921
- type: ndcg_at_3
value: 41.237
- type: ndcg_at_5
value: 43.085
- type: precision_at_1
value: 37.580000000000005
- type: precision_at_10
value: 8.452
- type: precision_at_100
value: 1.358
- type: precision_at_1000
value: 0.186
- type: precision_at_20
value: 4.987
- type: precision_at_3
value: 19.66
- type: precision_at_5
value: 13.936000000000002
- type: recall_at_1
value: 30.169
- type: recall_at_10
value: 54.657999999999994
- type: recall_at_100
value: 72.631
- type: recall_at_1000
value: 86.99799999999999
- type: recall_at_20
value: 60.549
- type: recall_at_3
value: 42.703
- type: recall_at_5
value: 47.871
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval (default)
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: main_score
value: 57.935
- type: map_at_1
value: 39.459
- type: map_at_10
value: 52.041000000000004
- type: map_at_100
value: 53.123
- type: map_at_1000
value: 53.174
- type: map_at_20
value: 52.715999999999994
- type: map_at_3
value: 48.652
- type: map_at_5
value: 50.526
- type: mrr_at_1
value: 45.141065830721004
- type: mrr_at_10
value: 55.4189680051749
- type: mrr_at_100
value: 56.12734090607992
- type: mrr_at_1000
value: 56.15035665102205
- type: mrr_at_20
value: 55.870980474719914
- type: mrr_at_3
value: 52.92580982236154
- type: mrr_at_5
value: 54.380355276907
- type: nauc_map_at_1000_diff1
value: 49.350343037507805
- type: nauc_map_at_1000_max
value: 39.98281162376331
- type: nauc_map_at_1000_std
value: -3.6565717425989255
- type: nauc_map_at_100_diff1
value: 49.342545452256
- type: nauc_map_at_100_max
value: 39.98916109913233
- type: nauc_map_at_100_std
value: -3.643010722158061
- type: nauc_map_at_10_diff1
value: 49.45960781144599
- type: nauc_map_at_10_max
value: 39.68189953288183
- type: nauc_map_at_10_std
value: -4.405767261039445
- type: nauc_map_at_1_diff1
value: 52.089101330688045
- type: nauc_map_at_1_max
value: 33.85881212484232
- type: nauc_map_at_1_std
value: -8.418190685614235
- type: nauc_map_at_20_diff1
value: 49.32978868149478
- type: nauc_map_at_20_max
value: 39.75645643846
- type: nauc_map_at_20_std
value: -3.9860531109091193
- type: nauc_map_at_3_diff1
value: 50.04151871930599
- type: nauc_map_at_3_max
value: 37.79961474082032
- type: nauc_map_at_3_std
value: -6.670028333711345
- type: nauc_map_at_5_diff1
value: 49.71975459031514
- type: nauc_map_at_5_max
value: 38.817567597269466
- type: nauc_map_at_5_std
value: -5.434776325022176
- type: nauc_mrr_at_1000_diff1
value: 48.947499409111096
- type: nauc_mrr_at_1000_max
value: 40.93741059921989
- type: nauc_mrr_at_1000_std
value: -3.1825006442699553
- type: nauc_mrr_at_100_diff1
value: 48.942343488274815
- type: nauc_mrr_at_100_max
value: 40.95031134407549
- type: nauc_mrr_at_100_std
value: -3.160377489400296
- type: nauc_mrr_at_10_diff1
value: 48.840125595383455
- type: nauc_mrr_at_10_max
value: 41.02170174858195
- type: nauc_mrr_at_10_std
value: -3.197031134091893
- type: nauc_mrr_at_1_diff1
value: 52.064927865645075
- type: nauc_mrr_at_1_max
value: 38.44984483881011
- type: nauc_mrr_at_1_std
value: -6.416298888465512
- type: nauc_mrr_at_20_diff1
value: 48.906114080185866
- type: nauc_mrr_at_20_max
value: 40.87883825337818
- type: nauc_mrr_at_20_std
value: -3.255032101669121
- type: nauc_mrr_at_3_diff1
value: 49.32815249648534
- type: nauc_mrr_at_3_max
value: 40.432904378326384
- type: nauc_mrr_at_3_std
value: -4.370615209045244
- type: nauc_mrr_at_5_diff1
value: 48.83993033552693
- type: nauc_mrr_at_5_max
value: 40.71824702891057
- type: nauc_mrr_at_5_std
value: -3.655659400980567
- type: nauc_ndcg_at_1000_diff1
value: 48.474352967441945
- type: nauc_ndcg_at_1000_max
value: 41.96481325646208
- type: nauc_ndcg_at_1000_std
value: -0.6346960977825105
- type: nauc_ndcg_at_100_diff1
value: 48.3009196475395
- type: nauc_ndcg_at_100_max
value: 42.349365306663664
- type: nauc_ndcg_at_100_std
value: 0.16696421606061568
- type: nauc_ndcg_at_10_diff1
value: 48.22259923512447
- type: nauc_ndcg_at_10_max
value: 41.81263561142292
- type: nauc_ndcg_at_10_std
value: -1.6113767450068854
- type: nauc_ndcg_at_1_diff1
value: 52.064927865645075
- type: nauc_ndcg_at_1_max
value: 38.44984483881011
- type: nauc_ndcg_at_1_std
value: -6.416298888465512
- type: nauc_ndcg_at_20_diff1
value: 48.07530674920143
- type: nauc_ndcg_at_20_max
value: 41.481398876880355
- type: nauc_ndcg_at_20_std
value: -1.1959186932073145
- type: nauc_ndcg_at_3_diff1
value: 48.97047357315626
- type: nauc_ndcg_at_3_max
value: 39.3764554710483
- type: nauc_ndcg_at_3_std
value: -4.737405650853368
- type: nauc_ndcg_at_5_diff1
value: 48.465073616087444
- type: nauc_ndcg_at_5_max
value: 40.48611129423828
- type: nauc_ndcg_at_5_std
value: -3.2733514958040946
- type: nauc_precision_at_1000_diff1
value: -9.539304087056678
- type: nauc_precision_at_1000_max
value: 15.041788011496934
- type: nauc_precision_at_1000_std
value: 22.84867087898786
- type: nauc_precision_at_100_diff1
value: -5.179075253413082
- type: nauc_precision_at_100_max
value: 22.968148583194907
- type: nauc_precision_at_100_std
value: 28.16118112550293
- type: nauc_precision_at_10_diff1
value: 12.68864903314593
- type: nauc_precision_at_10_max
value: 33.402306007402416
- type: nauc_precision_at_10_std
value: 15.954788657247748
- type: nauc_precision_at_1_diff1
value: 52.064927865645075
- type: nauc_precision_at_1_max
value: 38.44984483881011
- type: nauc_precision_at_1_std
value: -6.416298888465512
- type: nauc_precision_at_20_diff1
value: 4.086070276980883
- type: nauc_precision_at_20_max
value: 28.466186216262646
- type: nauc_precision_at_20_std
value: 21.548953975789946
- type: nauc_precision_at_3_diff1
value: 31.827793894919836
- type: nauc_precision_at_3_max
value: 38.13215690903121
- type: nauc_precision_at_3_std
value: 2.3490498723729787
- type: nauc_precision_at_5_diff1
value: 23.36248316835303
- type: nauc_precision_at_5_max
value: 36.6085689865106
- type: nauc_precision_at_5_std
value: 8.730583249465882
- type: nauc_recall_at_1000_diff1
value: 35.58748183758336
- type: nauc_recall_at_1000_max
value: 63.481971018253866
- type: nauc_recall_at_1000_std
value: 50.255552349121324
- type: nauc_recall_at_100_diff1
value: 38.30162180279426
- type: nauc_recall_at_100_max
value: 55.32704556841503
- type: nauc_recall_at_100_std
value: 33.05803540444057
- type: nauc_recall_at_10_diff1
value: 42.88255606095961
- type: nauc_recall_at_10_max
value: 45.08979716365479
- type: nauc_recall_at_10_std
value: 6.080592095781171
- type: nauc_recall_at_1_diff1
value: 52.089101330688045
- type: nauc_recall_at_1_max
value: 33.85881212484232
- type: nauc_recall_at_1_std
value: -8.418190685614235
- type: nauc_recall_at_20_diff1
value: 40.717620176248104
- type: nauc_recall_at_20_max
value: 43.889494702908635
- type: nauc_recall_at_20_std
value: 9.674760090258394
- type: nauc_recall_at_3_diff1
value: 46.97768558746882
- type: nauc_recall_at_3_max
value: 38.285091697616465
- type: nauc_recall_at_3_std
value: -4.852209401420014
- type: nauc_recall_at_5_diff1
value: 44.54361378218751
- type: nauc_recall_at_5_max
value: 40.49656738354392
- type: nauc_recall_at_5_std
value: -0.39210246739263926
- type: ndcg_at_1
value: 45.141
- type: ndcg_at_10
value: 57.935
- type: ndcg_at_100
value: 62.121
- type: ndcg_at_1000
value: 63.086
- type: ndcg_at_20
value: 59.907999999999994
- type: ndcg_at_3
value: 52.358000000000004
- type: ndcg_at_5
value: 55.027
- type: precision_at_1
value: 45.141
- type: precision_at_10
value: 9.347999999999999
- type: precision_at_100
value: 1.232
- type: precision_at_1000
value: 0.136
- type: precision_at_20
value: 5.251
- type: precision_at_3
value: 23.344
- type: precision_at_5
value: 16.05
- type: recall_at_1
value: 39.459
- type: recall_at_10
value: 71.968
- type: recall_at_100
value: 89.81
- type: recall_at_1000
value: 96.55
- type: recall_at_20
value: 79.33099999999999
- type: recall_at_3
value: 57.099999999999994
- type: recall_at_5
value: 63.637
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval (default)
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: main_score
value: 39.839999999999996
- type: map_at_1
value: 26.821
- type: map_at_10
value: 34.993
- type: map_at_100
value: 36.009
- type: map_at_1000
value: 36.083
- type: map_at_20
value: 35.554
- type: map_at_3
value: 32.228
- type: map_at_5
value: 33.904
- type: mrr_at_1
value: 28.926553672316384
- type: mrr_at_10
value: 36.986368935521476
- type: mrr_at_100
value: 37.895698883680886
- type: mrr_at_1000
value: 37.95031449676498
- type: mrr_at_20
value: 37.48535917350526
- type: mrr_at_3
value: 34.293785310734464
- type: mrr_at_5
value: 35.983050847457626
- type: nauc_map_at_1000_diff1
value: 43.31690408477667
- type: nauc_map_at_1000_max
value: 30.862014860762795
- type: nauc_map_at_1000_std
value: -0.8551391908390162
- type: nauc_map_at_100_diff1
value: 43.312902470408524
- type: nauc_map_at_100_max
value: 30.862986571062383
- type: nauc_map_at_100_std
value: -0.8637722733314122
- type: nauc_map_at_10_diff1
value: 43.385672679389984
- type: nauc_map_at_10_max
value: 30.714784436737087
- type: nauc_map_at_10_std
value: -1.2522744939678505
- type: nauc_map_at_1_diff1
value: 47.86681025046955
- type: nauc_map_at_1_max
value: 29.132194162422586
- type: nauc_map_at_1_std
value: -5.002201483950293
- type: nauc_map_at_20_diff1
value: 43.37717003110337
- type: nauc_map_at_20_max
value: 30.78438235738698
- type: nauc_map_at_20_std
value: -0.8953950024164806
- type: nauc_map_at_3_diff1
value: 44.38299520601522
- type: nauc_map_at_3_max
value: 29.83366218086086
- type: nauc_map_at_3_std
value: -2.5693730585913297
- type: nauc_map_at_5_diff1
value: 43.51921405125155
- type: nauc_map_at_5_max
value: 30.22504627106614
- type: nauc_map_at_5_std
value: -1.5972020248683592
- type: nauc_mrr_at_1000_diff1
value: 42.49108453787739
- type: nauc_mrr_at_1000_max
value: 31.972602761246655
- type: nauc_mrr_at_1000_std
value: 1.0231286981944512
- type: nauc_mrr_at_100_diff1
value: 42.479172722773896
- type: nauc_mrr_at_100_max
value: 31.987222969415342
- type: nauc_mrr_at_100_std
value: 1.0274146182701183
- type: nauc_mrr_at_10_diff1
value: 42.526290626365196
- type: nauc_mrr_at_10_max
value: 32.01830237899739
- type: nauc_mrr_at_10_std
value: 0.9161077778836639
- type: nauc_mrr_at_1_diff1
value: 46.621348392565736
- type: nauc_mrr_at_1_max
value: 31.470289340095604
- type: nauc_mrr_at_1_std
value: -2.4591738933621867
- type: nauc_mrr_at_20_diff1
value: 42.534853876861575
- type: nauc_mrr_at_20_max
value: 31.996618075721905
- type: nauc_mrr_at_20_std
value: 1.1174044583485392
- type: nauc_mrr_at_3_diff1
value: 43.24815151493605
- type: nauc_mrr_at_3_max
value: 31.282640177477422
- type: nauc_mrr_at_3_std
value: -0.2891247404070639
- type: nauc_mrr_at_5_diff1
value: 42.730970720344594
- type: nauc_mrr_at_5_max
value: 31.60546178005326
- type: nauc_mrr_at_5_std
value: 0.5275202736677842
- type: nauc_ndcg_at_1000_diff1
value: 41.11037892060558
- type: nauc_ndcg_at_1000_max
value: 31.981322365939096
- type: nauc_ndcg_at_1000_std
value: 2.5104274168043115
- type: nauc_ndcg_at_100_diff1
value: 40.72420815298269
- type: nauc_ndcg_at_100_max
value: 32.12478900255941
- type: nauc_ndcg_at_100_std
value: 2.823745879247313
- type: nauc_ndcg_at_10_diff1
value: 41.248176699595774
- type: nauc_ndcg_at_10_max
value: 31.76223462957885
- type: nauc_ndcg_at_10_std
value: 1.325254903938574
- type: nauc_ndcg_at_1_diff1
value: 46.621348392565736
- type: nauc_ndcg_at_1_max
value: 31.470289340095604
- type: nauc_ndcg_at_1_std
value: -2.4591738933621867
- type: nauc_ndcg_at_20_diff1
value: 41.2264296695926
- type: nauc_ndcg_at_20_max
value: 31.880018569087127
- type: nauc_ndcg_at_20_std
value: 2.5771307591404953
- type: nauc_ndcg_at_3_diff1
value: 43.14220610066874
- type: nauc_ndcg_at_3_max
value: 30.506711094581906
- type: nauc_ndcg_at_3_std
value: -1.32756254984396
- type: nauc_ndcg_at_5_diff1
value: 41.766023996364815
- type: nauc_ndcg_at_5_max
value: 30.89844409312977
- type: nauc_ndcg_at_5_std
value: 0.3788803778583815
- type: nauc_precision_at_1000_diff1
value: -1.6666285719678695
- type: nauc_precision_at_1000_max
value: 16.874851634956954
- type: nauc_precision_at_1000_std
value: 13.467634314792958
- type: nauc_precision_at_100_diff1
value: 12.127904836990574
- type: nauc_precision_at_100_max
value: 29.832469022456742
- type: nauc_precision_at_100_std
value: 17.83171142960893
- type: nauc_precision_at_10_diff1
value: 28.629632098778924
- type: nauc_precision_at_10_max
value: 36.382552111288504
- type: nauc_precision_at_10_std
value: 10.527329328113375
- type: nauc_precision_at_1_diff1
value: 46.621348392565736
- type: nauc_precision_at_1_max
value: 31.470289340095604
- type: nauc_precision_at_1_std
value: -2.4591738933621867
- type: nauc_precision_at_20_diff1
value: 26.30904506190692
- type: nauc_precision_at_20_max
value: 35.806707688621934
- type: nauc_precision_at_20_std
value: 15.568878952628618
- type: nauc_precision_at_3_diff1
value: 37.18723505833023
- type: nauc_precision_at_3_max
value: 33.61785834920449
- type: nauc_precision_at_3_std
value: 2.9164041375719263
- type: nauc_precision_at_5_diff1
value: 32.815958974052634
- type: nauc_precision_at_5_max
value: 34.22763811245109
- type: nauc_precision_at_5_std
value: 6.903458965358602
- type: nauc_recall_at_1000_diff1
value: 19.113079964836775
- type: nauc_recall_at_1000_max
value: 34.179678630291214
- type: nauc_recall_at_1000_std
value: 32.767248399630766
- type: nauc_recall_at_100_diff1
value: 25.8811372897862
- type: nauc_recall_at_100_max
value: 34.080364921744646
- type: nauc_recall_at_100_std
value: 19.89467893613937
- type: nauc_recall_at_10_diff1
value: 33.89551559779046
- type: nauc_recall_at_10_max
value: 32.010002950841205
- type: nauc_recall_at_10_std
value: 6.940984864057547
- type: nauc_recall_at_1_diff1
value: 47.86681025046955
- type: nauc_recall_at_1_max
value: 29.132194162422586
- type: nauc_recall_at_1_std
value: -5.002201483950293
- type: nauc_recall_at_20_diff1
value: 33.23701340255753
- type: nauc_recall_at_20_max
value: 32.4122108288123
- type: nauc_recall_at_20_std
value: 12.716938752356722
- type: nauc_recall_at_3_diff1
value: 40.13928334370612
- type: nauc_recall_at_3_max
value: 29.763781596658056
- type: nauc_recall_at_3_std
value: 0.34341106017726875
- type: nauc_recall_at_5_diff1
value: 36.11418609771827
- type: nauc_recall_at_5_max
value: 30.444939213226345
- type: nauc_recall_at_5_std
value: 4.2497080620670244
- type: ndcg_at_1
value: 28.927000000000003
- type: ndcg_at_10
value: 39.839999999999996
- type: ndcg_at_100
value: 45.044000000000004
- type: ndcg_at_1000
value: 46.937
- type: ndcg_at_20
value: 41.821999999999996
- type: ndcg_at_3
value: 34.455000000000005
- type: ndcg_at_5
value: 37.37
- type: precision_at_1
value: 28.927000000000003
- type: precision_at_10
value: 6.045
- type: precision_at_100
value: 0.914
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_20
value: 3.4799999999999995
- type: precision_at_3
value: 14.313
- type: precision_at_5
value: 10.26
- type: recall_at_1
value: 26.821
- type: recall_at_10
value: 52.917
- type: recall_at_100
value: 77.093
- type: recall_at_1000
value: 91.316
- type: recall_at_20
value: 60.56700000000001
- type: recall_at_3
value: 38.51
- type: recall_at_5
value: 45.536
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval (default)
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: main_score
value: 29.98
- type: map_at_1
value: 16.006999999999998
- type: map_at_10
value: 24.477
- type: map_at_100
value: 25.668000000000003
- type: map_at_1000
value: 25.788
- type: map_at_20
value: 25.048
- type: map_at_3
value: 21.656
- type: map_at_5
value: 23.34
- type: mrr_at_1
value: 20.024875621890548
- type: mrr_at_10
value: 29.197020848140255
- type: mrr_at_100
value: 30.11227723049729
- type: mrr_at_1000
value: 30.18189915906596
- type: mrr_at_20
value: 29.656175097568173
- type: mrr_at_3
value: 26.409618573797676
- type: mrr_at_5
value: 28.094941956882252
- type: nauc_map_at_1000_diff1
value: 29.373123176645237
- type: nauc_map_at_1000_max
value: 21.445648024191712
- type: nauc_map_at_1000_std
value: 4.405281916537104
- type: nauc_map_at_100_diff1
value: 29.35310092597499
- type: nauc_map_at_100_max
value: 21.40489526072531
- type: nauc_map_at_100_std
value: 4.380387058523312
- type: nauc_map_at_10_diff1
value: 29.141697351596253
- type: nauc_map_at_10_max
value: 21.03434833947413
- type: nauc_map_at_10_std
value: 4.144922514001805
- type: nauc_map_at_1_diff1
value: 38.13496031071367
- type: nauc_map_at_1_max
value: 21.901814324794117
- type: nauc_map_at_1_std
value: 0.4814120186616989
- type: nauc_map_at_20_diff1
value: 29.32299833637584
- type: nauc_map_at_20_max
value: 21.306214454816754
- type: nauc_map_at_20_std
value: 4.300356249155504
- type: nauc_map_at_3_diff1
value: 29.06177418598614
- type: nauc_map_at_3_max
value: 20.811300750566513
- type: nauc_map_at_3_std
value: 3.4414335533909166
- type: nauc_map_at_5_diff1
value: 29.338077175083153
- type: nauc_map_at_5_max
value: 20.905802602838854
- type: nauc_map_at_5_std
value: 3.718624565651613
- type: nauc_mrr_at_1000_diff1
value: 30.008651430755055
- type: nauc_mrr_at_1000_max
value: 22.107612668457936
- type: nauc_mrr_at_1000_std
value: 2.722385123849279
- type: nauc_mrr_at_100_diff1
value: 29.99004753126145
- type: nauc_mrr_at_100_max
value: 22.094848691097166
- type: nauc_mrr_at_100_std
value: 2.732788863788039
- type: nauc_mrr_at_10_diff1
value: 29.88442282413889
- type: nauc_mrr_at_10_max
value: 22.02646483854745
- type: nauc_mrr_at_10_std
value: 2.6648957296233644
- type: nauc_mrr_at_1_diff1
value: 37.967956104647755
- type: nauc_mrr_at_1_max
value: 21.44460113476502
- type: nauc_mrr_at_1_std
value: -1.7530679840582375
- type: nauc_mrr_at_20_diff1
value: 29.927851260160853
- type: nauc_mrr_at_20_max
value: 22.01880502582662
- type: nauc_mrr_at_20_std
value: 2.708224642328409
- type: nauc_mrr_at_3_diff1
value: 30.266105364910807
- type: nauc_mrr_at_3_max
value: 21.89072561010228
- type: nauc_mrr_at_3_std
value: 2.1435490321496875
- type: nauc_mrr_at_5_diff1
value: 30.15575881656933
- type: nauc_mrr_at_5_max
value: 22.16515882306906
- type: nauc_mrr_at_5_std
value: 2.577739748796098
- type: nauc_ndcg_at_1000_diff1
value: 27.90510789867005
- type: nauc_ndcg_at_1000_max
value: 22.530968354985028
- type: nauc_ndcg_at_1000_std
value: 6.781805354595171
- type: nauc_ndcg_at_100_diff1
value: 27.380292435540532
- type: nauc_ndcg_at_100_max
value: 21.809545194883373
- type: nauc_ndcg_at_100_std
value: 6.466454400897538
- type: nauc_ndcg_at_10_diff1
value: 26.787786754146648
- type: nauc_ndcg_at_10_max
value: 21.1259284562756
- type: nauc_ndcg_at_10_std
value: 5.5137140693199065
- type: nauc_ndcg_at_1_diff1
value: 37.967956104647755
- type: nauc_ndcg_at_1_max
value: 21.44460113476502
- type: nauc_ndcg_at_1_std
value: -1.7530679840582375
- type: nauc_ndcg_at_20_diff1
value: 27.263083511878065
- type: nauc_ndcg_at_20_max
value: 21.679228236914504
- type: nauc_ndcg_at_20_std
value: 6.089674507905745
- type: nauc_ndcg_at_3_diff1
value: 27.146357683499524
- type: nauc_ndcg_at_3_max
value: 21.104681584239923
- type: nauc_ndcg_at_3_std
value: 3.7675309462600217
- type: nauc_ndcg_at_5_diff1
value: 27.331511358161997
- type: nauc_ndcg_at_5_max
value: 21.18337870362178
- type: nauc_ndcg_at_5_std
value: 4.57529049323361
- type: nauc_precision_at_1000_diff1
value: 3.239653820602742
- type: nauc_precision_at_1000_max
value: 7.26961590562058
- type: nauc_precision_at_1000_std
value: 2.2025825300071125
- type: nauc_precision_at_100_diff1
value: 10.44430363394719
- type: nauc_precision_at_100_max
value: 14.264783558128372
- type: nauc_precision_at_100_std
value: 7.062602811028171
- type: nauc_precision_at_10_diff1
value: 18.25356714400902
- type: nauc_precision_at_10_max
value: 19.640874340636955
- type: nauc_precision_at_10_std
value: 7.720476038645774
- type: nauc_precision_at_1_diff1
value: 37.967956104647755
- type: nauc_precision_at_1_max
value: 21.44460113476502
- type: nauc_precision_at_1_std
value: -1.7530679840582375
- type: nauc_precision_at_20_diff1
value: 18.054556815828192
- type: nauc_precision_at_20_max
value: 20.132496075135354
- type: nauc_precision_at_20_std
value: 8.992246975170467
- type: nauc_precision_at_3_diff1
value: 20.99569005515048
- type: nauc_precision_at_3_max
value: 20.986073606132656
- type: nauc_precision_at_3_std
value: 4.447517886269654
- type: nauc_precision_at_5_diff1
value: 20.501261498730983
- type: nauc_precision_at_5_max
value: 20.506697758500973
- type: nauc_precision_at_5_std
value: 6.836691174582398
- type: nauc_recall_at_1000_diff1
value: 17.117021891023526
- type: nauc_recall_at_1000_max
value: 28.098336442218425
- type: nauc_recall_at_1000_std
value: 32.03347662796732
- type: nauc_recall_at_100_diff1
value: 17.981711126081034
- type: nauc_recall_at_100_max
value: 18.60340216104814
- type: nauc_recall_at_100_std
value: 14.313017781949211
- type: nauc_recall_at_10_diff1
value: 18.464197753039507
- type: nauc_recall_at_10_max
value: 18.474261038624018
- type: nauc_recall_at_10_std
value: 9.090322373993915
- type: nauc_recall_at_1_diff1
value: 38.13496031071367
- type: nauc_recall_at_1_max
value: 21.901814324794117
- type: nauc_recall_at_1_std
value: 0.4814120186616989
- type: nauc_recall_at_20_diff1
value: 19.45317881386327
- type: nauc_recall_at_20_max
value: 19.791890415922335
- type: nauc_recall_at_20_std
value: 11.195653035396559
- type: nauc_recall_at_3_diff1
value: 19.58052222735834
- type: nauc_recall_at_3_max
value: 18.390514209031757
- type: nauc_recall_at_3_std
value: 6.285979231354349
- type: nauc_recall_at_5_diff1
value: 19.957569014017217
- type: nauc_recall_at_5_max
value: 18.605256807993243
- type: nauc_recall_at_5_std
value: 6.7220712737663755
- type: ndcg_at_1
value: 20.025000000000002
- type: ndcg_at_10
value: 29.98
- type: ndcg_at_100
value: 35.69
- type: ndcg_at_1000
value: 38.568000000000005
- type: ndcg_at_20
value: 31.820999999999998
- type: ndcg_at_3
value: 24.891
- type: ndcg_at_5
value: 27.46
- type: precision_at_1
value: 20.025000000000002
- type: precision_at_10
value: 5.759
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.136
- type: precision_at_20
value: 3.3640000000000003
- type: precision_at_3
value: 12.106
- type: precision_at_5
value: 9.179
- type: recall_at_1
value: 16.006999999999998
- type: recall_at_10
value: 42.081
- type: recall_at_100
value: 67.305
- type: recall_at_1000
value: 87.957
- type: recall_at_20
value: 48.704
- type: recall_at_3
value: 28.354000000000003
- type: recall_at_5
value: 34.605000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval (default)
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: main_score
value: 45.073
- type: map_at_1
value: 28.136
- type: map_at_10
value: 38.868
- type: map_at_100
value: 40.108
- type: map_at_1000
value: 40.221000000000004
- type: map_at_20
value: 39.564
- type: map_at_3
value: 35.449000000000005
- type: map_at_5
value: 37.385000000000005
- type: mrr_at_1
value: 34.93743984600577
- type: mrr_at_10
value: 44.622385688314466
- type: mrr_at_100
value: 45.34143739001397
- type: mrr_at_1000
value: 45.39414063155817
- type: mrr_at_20
value: 45.00770434975747
- type: mrr_at_3
value: 41.88322104587745
- type: mrr_at_5
value: 43.50016041065127
- type: nauc_map_at_1000_diff1
value: 52.26823337938297
- type: nauc_map_at_1000_max
value: 37.37245329468808
- type: nauc_map_at_1000_std
value: 4.72166664574236
- type: nauc_map_at_100_diff1
value: 52.24616463751152
- type: nauc_map_at_100_max
value: 37.33426712740629
- type: nauc_map_at_100_std
value: 4.681771946538098
- type: nauc_map_at_10_diff1
value: 52.161361569838085
- type: nauc_map_at_10_max
value: 36.73617207301781
- type: nauc_map_at_10_std
value: 3.8276263103624397
- type: nauc_map_at_1_diff1
value: 58.609107153987026
- type: nauc_map_at_1_max
value: 35.02757021351409
- type: nauc_map_at_1_std
value: -0.6936621804258922
- type: nauc_map_at_20_diff1
value: 52.20038315316372
- type: nauc_map_at_20_max
value: 37.24718279507634
- type: nauc_map_at_20_std
value: 4.433483954204022
- type: nauc_map_at_3_diff1
value: 52.98890960683455
- type: nauc_map_at_3_max
value: 36.627227662545636
- type: nauc_map_at_3_std
value: 2.1473003055776303
- type: nauc_map_at_5_diff1
value: 52.138722935493796
- type: nauc_map_at_5_max
value: 36.57267129483463
- type: nauc_map_at_5_std
value: 2.9487505674940846
- type: nauc_mrr_at_1000_diff1
value: 52.727499174808976
- type: nauc_mrr_at_1000_max
value: 39.660587784745125
- type: nauc_mrr_at_1000_std
value: 7.215620440931916
- type: nauc_mrr_at_100_diff1
value: 52.71227861389539
- type: nauc_mrr_at_100_max
value: 39.65944541613258
- type: nauc_mrr_at_100_std
value: 7.232920570021006
- type: nauc_mrr_at_10_diff1
value: 52.605293878188306
- type: nauc_mrr_at_10_max
value: 39.39789652621602
- type: nauc_mrr_at_10_std
value: 6.994445305721646
- type: nauc_mrr_at_1_diff1
value: 58.87987158821745
- type: nauc_mrr_at_1_max
value: 39.883000294981386
- type: nauc_mrr_at_1_std
value: 4.525740385718022
- type: nauc_mrr_at_20_diff1
value: 52.709991373902476
- type: nauc_mrr_at_20_max
value: 39.63900552236034
- type: nauc_mrr_at_20_std
value: 7.180374629477749
- type: nauc_mrr_at_3_diff1
value: 52.87253562158551
- type: nauc_mrr_at_3_max
value: 39.8526236943088
- type: nauc_mrr_at_3_std
value: 6.312802069469314
- type: nauc_mrr_at_5_diff1
value: 52.4218392724399
- type: nauc_mrr_at_5_max
value: 39.31643340619565
- type: nauc_mrr_at_5_std
value: 6.497151104380261
- type: nauc_ndcg_at_1000_diff1
value: 50.896715084702
- type: nauc_ndcg_at_1000_max
value: 39.03137903719933
- type: nauc_ndcg_at_1000_std
value: 9.120093464579284
- type: nauc_ndcg_at_100_diff1
value: 50.57128970408371
- type: nauc_ndcg_at_100_max
value: 38.57734282233673
- type: nauc_ndcg_at_100_std
value: 8.948100860484162
- type: nauc_ndcg_at_10_diff1
value: 50.10362861621041
- type: nauc_ndcg_at_10_max
value: 36.81788741482043
- type: nauc_ndcg_at_10_std
value: 6.248168006196132
- type: nauc_ndcg_at_1_diff1
value: 58.87987158821745
- type: nauc_ndcg_at_1_max
value: 39.883000294981386
- type: nauc_ndcg_at_1_std
value: 4.525740385718022
- type: nauc_ndcg_at_20_diff1
value: 50.226356261645435
- type: nauc_ndcg_at_20_max
value: 38.11050217929487
- type: nauc_ndcg_at_20_std
value: 7.486965774997671
- type: nauc_ndcg_at_3_diff1
value: 51.11099202906786
- type: nauc_ndcg_at_3_max
value: 37.9283909559886
- type: nauc_ndcg_at_3_std
value: 4.702254100554942
- type: nauc_ndcg_at_5_diff1
value: 50.09568890455949
- type: nauc_ndcg_at_5_max
value: 36.9698296373855
- type: nauc_ndcg_at_5_std
value: 5.188587998085958
- type: nauc_precision_at_1000_diff1
value: -5.397384708148253
- type: nauc_precision_at_1000_max
value: 7.658538116479907
- type: nauc_precision_at_1000_std
value: 19.419371334393016
- type: nauc_precision_at_100_diff1
value: 4.13777630264156
- type: nauc_precision_at_100_max
value: 18.076564064280507
- type: nauc_precision_at_100_std
value: 24.333555979164466
- type: nauc_precision_at_10_diff1
value: 20.55388046991252
- type: nauc_precision_at_10_max
value: 29.41800073419997
- type: nauc_precision_at_10_std
value: 18.580366767122076
- type: nauc_precision_at_1_diff1
value: 58.87987158821745
- type: nauc_precision_at_1_max
value: 39.883000294981386
- type: nauc_precision_at_1_std
value: 4.525740385718022
- type: nauc_precision_at_20_diff1
value: 14.242304182638074
- type: nauc_precision_at_20_max
value: 28.10797068496045
- type: nauc_precision_at_20_std
value: 21.90112335945266
- type: nauc_precision_at_3_diff1
value: 37.46947895260127
- type: nauc_precision_at_3_max
value: 38.006705277755955
- type: nauc_precision_at_3_std
value: 12.158729151102953
- type: nauc_precision_at_5_diff1
value: 28.328393033049203
- type: nauc_precision_at_5_max
value: 33.15639175834532
- type: nauc_precision_at_5_std
value: 14.406771010005597
- type: nauc_recall_at_1000_diff1
value: 34.31859222623266
- type: nauc_recall_at_1000_max
value: 45.13722035429492
- type: nauc_recall_at_1000_std
value: 53.791310222471886
- type: nauc_recall_at_100_diff1
value: 38.78662480990974
- type: nauc_recall_at_100_max
value: 36.173587813938305
- type: nauc_recall_at_100_std
value: 26.619442060940923
- type: nauc_recall_at_10_diff1
value: 40.329036336117454
- type: nauc_recall_at_10_max
value: 30.912126881840678
- type: nauc_recall_at_10_std
value: 9.814880445182029
- type: nauc_recall_at_1_diff1
value: 58.609107153987026
- type: nauc_recall_at_1_max
value: 35.02757021351409
- type: nauc_recall_at_1_std
value: -0.6936621804258922
- type: nauc_recall_at_20_diff1
value: 40.18277885997754
- type: nauc_recall_at_20_max
value: 34.8149234226211
- type: nauc_recall_at_20_std
value: 14.12527070299619
- type: nauc_recall_at_3_diff1
value: 45.40990517015726
- type: nauc_recall_at_3_max
value: 34.69882134310449
- type: nauc_recall_at_3_std
value: 3.732266451734092
- type: nauc_recall_at_5_diff1
value: 41.766916839112206
- type: nauc_recall_at_5_max
value: 32.129792239721716
- type: nauc_recall_at_5_std
value: 5.6211961779763415
- type: ndcg_at_1
value: 34.937
- type: ndcg_at_10
value: 45.073
- type: ndcg_at_100
value: 50.182
- type: ndcg_at_1000
value: 52.276
- type: ndcg_at_20
value: 47.01
- type: ndcg_at_3
value: 39.612
- type: ndcg_at_5
value: 42.267
- type: precision_at_1
value: 34.937
- type: precision_at_10
value: 8.287
- type: precision_at_100
value: 1.268
- type: precision_at_1000
value: 0.163
- type: precision_at_20
value: 4.832
- type: precision_at_3
value: 18.704
- type: precision_at_5
value: 13.474
- type: recall_at_1
value: 28.136
- type: recall_at_10
value: 57.791
- type: recall_at_100
value: 79.521
- type: recall_at_1000
value: 93.176
- type: recall_at_20
value: 64.487
- type: recall_at_3
value: 42.427
- type: recall_at_5
value: 49.39
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval (default)
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: main_score
value: 40.65
- type: map_at_1
value: 24.381
- type: map_at_10
value: 34.522
- type: map_at_100
value: 35.823
- type: map_at_1000
value: 35.939
- type: map_at_20
value: 35.153
- type: map_at_3
value: 31.079
- type: map_at_5
value: 33.167
- type: mrr_at_1
value: 30.47945205479452
- type: mrr_at_10
value: 40.03556026672465
- type: mrr_at_100
value: 40.859486284310805
- type: mrr_at_1000
value: 40.91385465888402
- type: mrr_at_20
value: 40.44289694965349
- type: mrr_at_3
value: 37.29071537290716
- type: mrr_at_5
value: 38.86605783866057
- type: nauc_map_at_1000_diff1
value: 45.08645907290653
- type: nauc_map_at_1000_max
value: 37.93984277049077
- type: nauc_map_at_1000_std
value: 4.348018171635654
- type: nauc_map_at_100_diff1
value: 45.103728679899895
- type: nauc_map_at_100_max
value: 37.99535887007234
- type: nauc_map_at_100_std
value: 4.357619716783864
- type: nauc_map_at_10_diff1
value: 45.00658982142447
- type: nauc_map_at_10_max
value: 37.31815420707169
- type: nauc_map_at_10_std
value: 3.3382278188396106
- type: nauc_map_at_1_diff1
value: 52.252891830215134
- type: nauc_map_at_1_max
value: 32.77061213394142
- type: nauc_map_at_1_std
value: -2.8010640035289143
- type: nauc_map_at_20_diff1
value: 45.091235458676685
- type: nauc_map_at_20_max
value: 37.61754711974
- type: nauc_map_at_20_std
value: 3.980483359486563
- type: nauc_map_at_3_diff1
value: 45.96759132853502
- type: nauc_map_at_3_max
value: 35.70107657784562
- type: nauc_map_at_3_std
value: 0.5893514980003082
- type: nauc_map_at_5_diff1
value: 44.76256760745472
- type: nauc_map_at_5_max
value: 36.837428148742184
- type: nauc_map_at_5_std
value: 2.739539166736263
- type: nauc_mrr_at_1000_diff1
value: 42.92232217227843
- type: nauc_mrr_at_1000_max
value: 40.17925571552386
- type: nauc_mrr_at_1000_std
value: 8.148960526298065
- type: nauc_mrr_at_100_diff1
value: 42.91455614925039
- type: nauc_mrr_at_100_max
value: 40.20045379839933
- type: nauc_mrr_at_100_std
value: 8.18118615824958
- type: nauc_mrr_at_10_diff1
value: 42.813559222481004
- type: nauc_mrr_at_10_max
value: 40.22809168556719
- type: nauc_mrr_at_10_std
value: 7.862450240887059
- type: nauc_mrr_at_1_diff1
value: 49.85954283512805
- type: nauc_mrr_at_1_max
value: 39.100124047592615
- type: nauc_mrr_at_1_std
value: 4.308285447853536
- type: nauc_mrr_at_20_diff1
value: 42.916739579172855
- type: nauc_mrr_at_20_max
value: 40.18884672645215
- type: nauc_mrr_at_20_std
value: 8.070953502233257
- type: nauc_mrr_at_3_diff1
value: 43.33588918249767
- type: nauc_mrr_at_3_max
value: 40.02185259938533
- type: nauc_mrr_at_3_std
value: 6.611771804932232
- type: nauc_mrr_at_5_diff1
value: 42.31798087627212
- type: nauc_mrr_at_5_max
value: 39.95836322628087
- type: nauc_mrr_at_5_std
value: 7.8252604777639565
- type: nauc_ndcg_at_1000_diff1
value: 42.58508571989192
- type: nauc_ndcg_at_1000_max
value: 39.911165427637954
- type: nauc_ndcg_at_1000_std
value: 9.035223072322985
- type: nauc_ndcg_at_100_diff1
value: 42.85331157305972
- type: nauc_ndcg_at_100_max
value: 40.901343091681056
- type: nauc_ndcg_at_100_std
value: 10.085512946952077
- type: nauc_ndcg_at_10_diff1
value: 42.40611041378945
- type: nauc_ndcg_at_10_max
value: 39.0395878685548
- type: nauc_ndcg_at_10_std
value: 6.603225480063732
- type: nauc_ndcg_at_1_diff1
value: 49.85954283512805
- type: nauc_ndcg_at_1_max
value: 39.100124047592615
- type: nauc_ndcg_at_1_std
value: 4.308285447853536
- type: nauc_ndcg_at_20_diff1
value: 42.77709403486501
- type: nauc_ndcg_at_20_max
value: 39.45047866326879
- type: nauc_ndcg_at_20_std
value: 8.324410425742744
- type: nauc_ndcg_at_3_diff1
value: 43.358589542818706
- type: nauc_ndcg_at_3_max
value: 37.91671645220353
- type: nauc_ndcg_at_3_std
value: 3.548176766967823
- type: nauc_ndcg_at_5_diff1
value: 41.56594514959379
- type: nauc_ndcg_at_5_max
value: 38.42086968355651
- type: nauc_ndcg_at_5_std
value: 6.066217005485217
- type: nauc_precision_at_1000_diff1
value: -4.792438397619244
- type: nauc_precision_at_1000_max
value: 2.7129923853733118
- type: nauc_precision_at_1000_std
value: 11.67095620060032
- type: nauc_precision_at_100_diff1
value: 4.188981090197261
- type: nauc_precision_at_100_max
value: 24.721360175955237
- type: nauc_precision_at_100_std
value: 25.081372551205
- type: nauc_precision_at_10_diff1
value: 18.811203609613973
- type: nauc_precision_at_10_max
value: 38.14203413304164
- type: nauc_precision_at_10_std
value: 21.933474701361284
- type: nauc_precision_at_1_diff1
value: 49.85954283512805
- type: nauc_precision_at_1_max
value: 39.100124047592615
- type: nauc_precision_at_1_std
value: 4.308285447853536
- type: nauc_precision_at_20_diff1
value: 14.825255525788132
- type: nauc_precision_at_20_max
value: 33.96942297730217
- type: nauc_precision_at_20_std
value: 24.843048642950983
- type: nauc_precision_at_3_diff1
value: 31.399689348786293
- type: nauc_precision_at_3_max
value: 42.36936733454805
- type: nauc_precision_at_3_std
value: 13.606543019531983
- type: nauc_precision_at_5_diff1
value: 22.95129561216099
- type: nauc_precision_at_5_max
value: 40.15375346390098
- type: nauc_precision_at_5_std
value: 19.844337655067264
- type: nauc_recall_at_1000_diff1
value: 18.250357722466067
- type: nauc_recall_at_1000_max
value: 46.49889896497268
- type: nauc_recall_at_1000_std
value: 49.015133298216114
- type: nauc_recall_at_100_diff1
value: 34.14073541913647
- type: nauc_recall_at_100_max
value: 47.880871182499654
- type: nauc_recall_at_100_std
value: 33.394450460167654
- type: nauc_recall_at_10_diff1
value: 34.10055202985094
- type: nauc_recall_at_10_max
value: 37.29000268460915
- type: nauc_recall_at_10_std
value: 10.719966977592444
- type: nauc_recall_at_1_diff1
value: 52.252891830215134
- type: nauc_recall_at_1_max
value: 32.77061213394142
- type: nauc_recall_at_1_std
value: -2.8010640035289143
- type: nauc_recall_at_20_diff1
value: 34.77711534255729
- type: nauc_recall_at_20_max
value: 37.8830564545796
- type: nauc_recall_at_20_std
value: 17.204302583685262
- type: nauc_recall_at_3_diff1
value: 37.38919386647721
- type: nauc_recall_at_3_max
value: 34.570111627131695
- type: nauc_recall_at_3_std
value: 2.2380100020283096
- type: nauc_recall_at_5_diff1
value: 32.187904365676324
- type: nauc_recall_at_5_max
value: 35.77757062335063
- type: nauc_recall_at_5_std
value: 8.796247583408153
- type: ndcg_at_1
value: 30.479
- type: ndcg_at_10
value: 40.65
- type: ndcg_at_100
value: 46.366
- type: ndcg_at_1000
value: 48.689
- type: ndcg_at_20
value: 42.576
- type: ndcg_at_3
value: 35.116
- type: ndcg_at_5
value: 37.909
- type: precision_at_1
value: 30.479
- type: precision_at_10
value: 7.603
- type: precision_at_100
value: 1.226
- type: precision_at_1000
value: 0.16
- type: precision_at_20
value: 4.458
- type: precision_at_3
value: 17.123
- type: precision_at_5
value: 12.58
- type: recall_at_1
value: 24.381
- type: recall_at_10
value: 53.591
- type: recall_at_100
value: 78.31
- type: recall_at_1000
value: 93.952
- type: recall_at_20
value: 60.466
- type: recall_at_3
value: 38.189
- type: recall_at_5
value: 45.425
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval (default)
type: CQADupstackRetrieval_is_a_combined_dataset
config: default
split: test
revision: CQADupstackRetrieval_is_a_combined_dataset
metrics:
- type: main_score
value: 40.26875
- type: ndcg_at_10
value: 40.26875
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval (default)
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: main_score
value: 34.89
- type: map_at_1
value: 24.343
- type: map_at_10
value: 31.065
- type: map_at_100
value: 32.013
- type: map_at_1000
value: 32.11
- type: map_at_20
value: 31.584
- type: map_at_3
value: 29.288
- type: map_at_5
value: 30.262
- type: mrr_at_1
value: 27.453987730061353
- type: mrr_at_10
value: 33.89503603077223
- type: mrr_at_100
value: 34.76452500547403
- type: mrr_at_1000
value: 34.831641998976096
- type: mrr_at_20
value: 34.39433378717007
- type: mrr_at_3
value: 32.20858895705521
- type: mrr_at_5
value: 33.09049079754601
- type: nauc_map_at_1000_diff1
value: 52.16433709147209
- type: nauc_map_at_1000_max
value: 41.895031426126124
- type: nauc_map_at_1000_std
value: 7.981597857750285
- type: nauc_map_at_100_diff1
value: 52.151218712376554
- type: nauc_map_at_100_max
value: 41.845879262079926
- type: nauc_map_at_100_std
value: 7.961900934081761
- type: nauc_map_at_10_diff1
value: 52.402838490484406
- type: nauc_map_at_10_max
value: 41.77705031749437
- type: nauc_map_at_10_std
value: 7.46409351133074
- type: nauc_map_at_1_diff1
value: 58.5759193069631
- type: nauc_map_at_1_max
value: 40.228075475738066
- type: nauc_map_at_1_std
value: -0.331391740789906
- type: nauc_map_at_20_diff1
value: 52.24118427663996
- type: nauc_map_at_20_max
value: 41.717633721989124
- type: nauc_map_at_20_std
value: 7.713122213513287
- type: nauc_map_at_3_diff1
value: 53.00130785485122
- type: nauc_map_at_3_max
value: 41.257807025801554
- type: nauc_map_at_3_std
value: 5.77564250625532
- type: nauc_map_at_5_diff1
value: 52.765771056206034
- type: nauc_map_at_5_max
value: 41.17385278218844
- type: nauc_map_at_5_std
value: 6.267751055469911
- type: nauc_mrr_at_1000_diff1
value: 50.542579627409346
- type: nauc_mrr_at_1000_max
value: 43.84384933961597
- type: nauc_mrr_at_1000_std
value: 8.9290783799145
- type: nauc_mrr_at_100_diff1
value: 50.52367459214615
- type: nauc_mrr_at_100_max
value: 43.83317399067154
- type: nauc_mrr_at_100_std
value: 8.93848179915739
- type: nauc_mrr_at_10_diff1
value: 50.592180069200076
- type: nauc_mrr_at_10_max
value: 43.86512615662472
- type: nauc_mrr_at_10_std
value: 8.648594684024122
- type: nauc_mrr_at_1_diff1
value: 56.64925132242809
- type: nauc_mrr_at_1_max
value: 44.302509524664245
- type: nauc_mrr_at_1_std
value: 2.8736722205582566
- type: nauc_mrr_at_20_diff1
value: 50.621524856832025
- type: nauc_mrr_at_20_max
value: 43.85010163029952
- type: nauc_mrr_at_20_std
value: 8.767065605432117
- type: nauc_mrr_at_3_diff1
value: 51.38022486401255
- type: nauc_mrr_at_3_max
value: 43.690091517144886
- type: nauc_mrr_at_3_std
value: 7.583000007208131
- type: nauc_mrr_at_5_diff1
value: 51.026555577484345
- type: nauc_mrr_at_5_max
value: 43.40937882095473
- type: nauc_mrr_at_5_std
value: 7.870192951985875
- type: nauc_ndcg_at_1000_diff1
value: 48.946807624632214
- type: nauc_ndcg_at_1000_max
value: 43.63846016728263
- type: nauc_ndcg_at_1000_std
value: 13.07112902361304
- type: nauc_ndcg_at_100_diff1
value: 48.46009023670086
- type: nauc_ndcg_at_100_max
value: 42.84578351440459
- type: nauc_ndcg_at_100_std
value: 13.048673429979113
- type: nauc_ndcg_at_10_diff1
value: 49.80098562249132
- type: nauc_ndcg_at_10_max
value: 42.46231067480857
- type: nauc_ndcg_at_10_std
value: 10.5896193945906
- type: nauc_ndcg_at_1_diff1
value: 56.64925132242809
- type: nauc_ndcg_at_1_max
value: 44.302509524664245
- type: nauc_ndcg_at_1_std
value: 2.8736722205582566
- type: nauc_ndcg_at_20_diff1
value: 49.32237998567543
- type: nauc_ndcg_at_20_max
value: 42.23892720118917
- type: nauc_ndcg_at_20_std
value: 11.311668576433599
- type: nauc_ndcg_at_3_diff1
value: 51.01125725288161
- type: nauc_ndcg_at_3_max
value: 42.12827055031322
- type: nauc_ndcg_at_3_std
value: 7.952435809082195
- type: nauc_ndcg_at_5_diff1
value: 50.88410133484621
- type: nauc_ndcg_at_5_max
value: 41.565359240466975
- type: nauc_ndcg_at_5_std
value: 8.314306379070254
- type: nauc_precision_at_1000_diff1
value: 4.413508451597973
- type: nauc_precision_at_1000_max
value: 33.762398288256094
- type: nauc_precision_at_1000_std
value: 25.98231326304456
- type: nauc_precision_at_100_diff1
value: 17.203842054243655
- type: nauc_precision_at_100_max
value: 41.105989573268204
- type: nauc_precision_at_100_std
value: 31.631978351351798
- type: nauc_precision_at_10_diff1
value: 34.696132889148664
- type: nauc_precision_at_10_max
value: 45.80034445639316
- type: nauc_precision_at_10_std
value: 25.084569022460418
- type: nauc_precision_at_1_diff1
value: 56.64925132242809
- type: nauc_precision_at_1_max
value: 44.302509524664245
- type: nauc_precision_at_1_std
value: 2.8736722205582566
- type: nauc_precision_at_20_diff1
value: 29.865679363368436
- type: nauc_precision_at_20_max
value: 43.51573363065152
- type: nauc_precision_at_20_std
value: 26.69033832110468
- type: nauc_precision_at_3_diff1
value: 41.389196232922316
- type: nauc_precision_at_3_max
value: 46.28755529729629
- type: nauc_precision_at_3_std
value: 16.909849769292716
- type: nauc_precision_at_5_diff1
value: 39.087938584206306
- type: nauc_precision_at_5_max
value: 43.92003534457625
- type: nauc_precision_at_5_std
value: 18.560228476365946
- type: nauc_recall_at_1000_diff1
value: 26.669464596984678
- type: nauc_recall_at_1000_max
value: 46.54456322772547
- type: nauc_recall_at_1000_std
value: 44.24504504678441
- type: nauc_recall_at_100_diff1
value: 31.188372051167907
- type: nauc_recall_at_100_max
value: 39.33606079794924
- type: nauc_recall_at_100_std
value: 31.351107197440655
- type: nauc_recall_at_10_diff1
value: 42.00542257322957
- type: nauc_recall_at_10_max
value: 40.557783865813306
- type: nauc_recall_at_10_std
value: 17.578789968761935
- type: nauc_recall_at_1_diff1
value: 58.5759193069631
- type: nauc_recall_at_1_max
value: 40.228075475738066
- type: nauc_recall_at_1_std
value: -0.331391740789906
- type: nauc_recall_at_20_diff1
value: 39.65715933448972
- type: nauc_recall_at_20_max
value: 38.93829722046506
- type: nauc_recall_at_20_std
value: 20.20996021737043
- type: nauc_recall_at_3_diff1
value: 46.87588712984084
- type: nauc_recall_at_3_max
value: 39.797140704347825
- type: nauc_recall_at_3_std
value: 10.153636337649985
- type: nauc_recall_at_5_diff1
value: 46.08928170734386
- type: nauc_recall_at_5_max
value: 39.01544586906766
- type: nauc_recall_at_5_std
value: 11.453087165740614
- type: ndcg_at_1
value: 27.454
- type: ndcg_at_10
value: 34.89
- type: ndcg_at_100
value: 39.612
- type: ndcg_at_1000
value: 42.045
- type: ndcg_at_20
value: 36.738
- type: ndcg_at_3
value: 31.63
- type: ndcg_at_5
value: 33.056
- type: precision_at_1
value: 27.454
- type: precision_at_10
value: 5.383
- type: precision_at_100
value: 0.833
- type: precision_at_1000
value: 0.112
- type: precision_at_20
value: 3.152
- type: precision_at_3
value: 13.65
- type: precision_at_5
value: 9.233
- type: recall_at_1
value: 24.343
- type: recall_at_10
value: 43.837
- type: recall_at_100
value: 65.202
- type: recall_at_1000
value: 83.206
- type: recall_at_20
value: 50.829
- type: recall_at_3
value: 34.695
- type: recall_at_5
value: 38.33
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval (default)
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: main_score
value: 28.595
- type: map_at_1
value: 17.134
- type: map_at_10
value: 24.104
- type: map_at_100
value: 25.238
- type: map_at_1000
value: 25.368
- type: map_at_20
value: 24.691
- type: map_at_3
value: 21.803
- type: map_at_5
value: 23.066
- type: mrr_at_1
value: 20.750172057811426
- type: mrr_at_10
value: 27.87088017129311
- type: mrr_at_100
value: 28.81791033306077
- type: mrr_at_1000
value: 28.899375497257516
- type: mrr_at_20
value: 28.378447148229803
- type: mrr_at_3
value: 25.699701766460198
- type: mrr_at_5
value: 26.892062399632945
- type: nauc_map_at_1000_diff1
value: 36.89782887137058
- type: nauc_map_at_1000_max
value: 33.08016929719372
- type: nauc_map_at_1000_std
value: 3.3099635444764735
- type: nauc_map_at_100_diff1
value: 36.8780819112453
- type: nauc_map_at_100_max
value: 33.050931677418774
- type: nauc_map_at_100_std
value: 3.2972109847369
- type: nauc_map_at_10_diff1
value: 37.08595684018923
- type: nauc_map_at_10_max
value: 32.816323824163476
- type: nauc_map_at_10_std
value: 2.552386021070936
- type: nauc_map_at_1_diff1
value: 43.51132829968084
- type: nauc_map_at_1_max
value: 32.3415757101499
- type: nauc_map_at_1_std
value: 1.1401002178840585
- type: nauc_map_at_20_diff1
value: 36.94143359488415
- type: nauc_map_at_20_max
value: 32.982623511734424
- type: nauc_map_at_20_std
value: 2.948810693672891
- type: nauc_map_at_3_diff1
value: 38.71612128344212
- type: nauc_map_at_3_max
value: 32.637048522654034
- type: nauc_map_at_3_std
value: 1.156009064619756
- type: nauc_map_at_5_diff1
value: 37.78942760585617
- type: nauc_map_at_5_max
value: 32.75604432178518
- type: nauc_map_at_5_std
value: 2.0173303861590925
- type: nauc_mrr_at_1000_diff1
value: 36.135612852709095
- type: nauc_mrr_at_1000_max
value: 34.12476601616417
- type: nauc_mrr_at_1000_std
value: 3.845954153422737
- type: nauc_mrr_at_100_diff1
value: 36.12068139187418
- type: nauc_mrr_at_100_max
value: 34.119358854394214
- type: nauc_mrr_at_100_std
value: 3.849886746617628
- type: nauc_mrr_at_10_diff1
value: 36.21596271906166
- type: nauc_mrr_at_10_max
value: 34.0449176526415
- type: nauc_mrr_at_10_std
value: 3.392757560646604
- type: nauc_mrr_at_1_diff1
value: 42.4688892662091
- type: nauc_mrr_at_1_max
value: 34.61638632573713
- type: nauc_mrr_at_1_std
value: 1.842575033698782
- type: nauc_mrr_at_20_diff1
value: 36.100856934569805
- type: nauc_mrr_at_20_max
value: 34.09556229697657
- type: nauc_mrr_at_20_std
value: 3.6620323814000684
- type: nauc_mrr_at_3_diff1
value: 37.47429125012118
- type: nauc_mrr_at_3_max
value: 34.16045576255616
- type: nauc_mrr_at_3_std
value: 2.2755120232411903
- type: nauc_mrr_at_5_diff1
value: 36.65972061354651
- type: nauc_mrr_at_5_max
value: 33.96708531025307
- type: nauc_mrr_at_5_std
value: 2.8464790894241885
- type: nauc_ndcg_at_1000_diff1
value: 33.78518836462285
- type: nauc_ndcg_at_1000_max
value: 33.63185976032734
- type: nauc_ndcg_at_1000_std
value: 6.989447169026228
- type: nauc_ndcg_at_100_diff1
value: 33.39381937519511
- type: nauc_ndcg_at_100_max
value: 33.27132063948536
- type: nauc_ndcg_at_100_std
value: 6.813560048322648
- type: nauc_ndcg_at_10_diff1
value: 34.231562121939554
- type: nauc_ndcg_at_10_max
value: 32.957787193957664
- type: nauc_ndcg_at_10_std
value: 3.9141441526477867
- type: nauc_ndcg_at_1_diff1
value: 42.4688892662091
- type: nauc_ndcg_at_1_max
value: 34.61638632573713
- type: nauc_ndcg_at_1_std
value: 1.842575033698782
- type: nauc_ndcg_at_20_diff1
value: 33.73777473522033
- type: nauc_ndcg_at_20_max
value: 33.210514059183375
- type: nauc_ndcg_at_20_std
value: 5.073301185091096
- type: nauc_ndcg_at_3_diff1
value: 36.878575344351624
- type: nauc_ndcg_at_3_max
value: 33.32312056613959
- type: nauc_ndcg_at_3_std
value: 1.4148329239930337
- type: nauc_ndcg_at_5_diff1
value: 35.53121227641059
- type: nauc_ndcg_at_5_max
value: 32.97583732206957
- type: nauc_ndcg_at_5_std
value: 2.7296938204105046
- type: nauc_precision_at_1000_diff1
value: 5.048094785844507
- type: nauc_precision_at_1000_max
value: 20.72305226491182
- type: nauc_precision_at_1000_std
value: 13.166659843282705
- type: nauc_precision_at_100_diff1
value: 11.403111213178537
- type: nauc_precision_at_100_max
value: 25.876626218425784
- type: nauc_precision_at_100_std
value: 15.750263914866009
- type: nauc_precision_at_10_diff1
value: 21.812128842605574
- type: nauc_precision_at_10_max
value: 32.36024330484034
- type: nauc_precision_at_10_std
value: 8.30153625256048
- type: nauc_precision_at_1_diff1
value: 42.4688892662091
- type: nauc_precision_at_1_max
value: 34.61638632573713
- type: nauc_precision_at_1_std
value: 1.842575033698782
- type: nauc_precision_at_20_diff1
value: 18.126781090521444
- type: nauc_precision_at_20_max
value: 30.879595883417306
- type: nauc_precision_at_20_std
value: 10.9632189477538
- type: nauc_precision_at_3_diff1
value: 30.857088332242842
- type: nauc_precision_at_3_max
value: 34.435211511210596
- type: nauc_precision_at_3_std
value: 2.1743174606141897
- type: nauc_precision_at_5_diff1
value: 26.70684474302077
- type: nauc_precision_at_5_max
value: 33.881041843033714
- type: nauc_precision_at_5_std
value: 4.964700828623661
- type: nauc_recall_at_1000_diff1
value: 15.179090310075448
- type: nauc_recall_at_1000_max
value: 31.177239498655414
- type: nauc_recall_at_1000_std
value: 32.817986166465225
- type: nauc_recall_at_100_diff1
value: 19.791440992074865
- type: nauc_recall_at_100_max
value: 29.00647998704242
- type: nauc_recall_at_100_std
value: 19.00290442690933
- type: nauc_recall_at_10_diff1
value: 25.400689588358937
- type: nauc_recall_at_10_max
value: 29.67225586682777
- type: nauc_recall_at_10_std
value: 6.326653531699302
- type: nauc_recall_at_1_diff1
value: 43.51132829968084
- type: nauc_recall_at_1_max
value: 32.3415757101499
- type: nauc_recall_at_1_std
value: 1.1401002178840585
- type: nauc_recall_at_20_diff1
value: 23.158667708575777
- type: nauc_recall_at_20_max
value: 29.65979097031932
- type: nauc_recall_at_20_std
value: 10.004891834963093
- type: nauc_recall_at_3_diff1
value: 32.631467317064406
- type: nauc_recall_at_3_max
value: 30.531285335243492
- type: nauc_recall_at_3_std
value: 0.9187457345239257
- type: nauc_recall_at_5_diff1
value: 29.18624452088816
- type: nauc_recall_at_5_max
value: 29.708509257799818
- type: nauc_recall_at_5_std
value: 3.6369154905030765
- type: ndcg_at_1
value: 20.75
- type: ndcg_at_10
value: 28.595
- type: ndcg_at_100
value: 34.047
- type: ndcg_at_1000
value: 37.039
- type: ndcg_at_20
value: 30.527
- type: ndcg_at_3
value: 24.46
- type: ndcg_at_5
value: 26.339
- type: precision_at_1
value: 20.75
- type: precision_at_10
value: 5.231
- type: precision_at_100
value: 0.94
- type: precision_at_1000
value: 0.137
- type: precision_at_20
value: 3.192
- type: precision_at_3
value: 11.482
- type: precision_at_5
value: 8.369
- type: recall_at_1
value: 17.134
- type: recall_at_10
value: 38.316
- type: recall_at_100
value: 62.913
- type: recall_at_1000
value: 84.233
- type: recall_at_20
value: 45.465
- type: recall_at_3
value: 26.84
- type: recall_at_5
value: 31.616
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval (default)
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: main_score
value: 39.89
- type: map_at_1
value: 25.712
- type: map_at_10
value: 34.615
- type: map_at_100
value: 35.846
- type: map_at_1000
value: 35.953
- type: map_at_20
value: 35.322
- type: map_at_3
value: 32.056
- type: map_at_5
value: 33.315
- type: mrr_at_1
value: 29.850746268656714
- type: mrr_at_10
value: 38.71860933428097
- type: mrr_at_100
value: 39.61660285528755
- type: mrr_at_1000
value: 39.676734545015016
- type: mrr_at_20
value: 39.20100616633673
- type: mrr_at_3
value: 36.53606965174129
- type: mrr_at_5
value: 37.604166666666664
- type: nauc_map_at_1000_diff1
value: 44.53604492112556
- type: nauc_map_at_1000_max
value: 39.23951280655509
- type: nauc_map_at_1000_std
value: -0.5395031277467611
- type: nauc_map_at_100_diff1
value: 44.51770266447437
- type: nauc_map_at_100_max
value: 39.24922673974566
- type: nauc_map_at_100_std
value: -0.5418148801230667
- type: nauc_map_at_10_diff1
value: 44.65794786125487
- type: nauc_map_at_10_max
value: 38.73649462647244
- type: nauc_map_at_10_std
value: -1.2439485316781116
- type: nauc_map_at_1_diff1
value: 51.3015448718258
- type: nauc_map_at_1_max
value: 37.68162132042243
- type: nauc_map_at_1_std
value: -4.942523398750195
- type: nauc_map_at_20_diff1
value: 44.56752919145453
- type: nauc_map_at_20_max
value: 39.15671461993185
- type: nauc_map_at_20_std
value: -0.69839546492355
- type: nauc_map_at_3_diff1
value: 45.85932349411764
- type: nauc_map_at_3_max
value: 38.36052842023208
- type: nauc_map_at_3_std
value: -2.2972931157260588
- type: nauc_map_at_5_diff1
value: 45.05340840012778
- type: nauc_map_at_5_max
value: 38.498629564114125
- type: nauc_map_at_5_std
value: -1.5490287999131676
- type: nauc_mrr_at_1000_diff1
value: 43.86315579499585
- type: nauc_mrr_at_1000_max
value: 41.22972681259589
- type: nauc_mrr_at_1000_std
value: 0.2312354368485346
- type: nauc_mrr_at_100_diff1
value: 43.82688559618802
- type: nauc_mrr_at_100_max
value: 41.228924353114486
- type: nauc_mrr_at_100_std
value: 0.23800385006789107
- type: nauc_mrr_at_10_diff1
value: 43.83189862351878
- type: nauc_mrr_at_10_max
value: 41.10756388047098
- type: nauc_mrr_at_10_std
value: -0.21685246034735925
- type: nauc_mrr_at_1_diff1
value: 51.230432406020476
- type: nauc_mrr_at_1_max
value: 41.48741446620127
- type: nauc_mrr_at_1_std
value: -2.7496538177528658
- type: nauc_mrr_at_20_diff1
value: 43.83356230557604
- type: nauc_mrr_at_20_max
value: 41.22449434055463
- type: nauc_mrr_at_20_std
value: 0.1017557136662017
- type: nauc_mrr_at_3_diff1
value: 44.642204252525374
- type: nauc_mrr_at_3_max
value: 41.394135395117
- type: nauc_mrr_at_3_std
value: -0.3102479679512395
- type: nauc_mrr_at_5_diff1
value: 44.03177178118225
- type: nauc_mrr_at_5_max
value: 41.14617895311199
- type: nauc_mrr_at_5_std
value: -0.15778767756702847
- type: nauc_ndcg_at_1000_diff1
value: 41.931781429588355
- type: nauc_ndcg_at_1000_max
value: 40.19130298587713
- type: nauc_ndcg_at_1000_std
value: 2.4213241669516314
- type: nauc_ndcg_at_100_diff1
value: 41.47736552910553
- type: nauc_ndcg_at_100_max
value: 40.35512795681438
- type: nauc_ndcg_at_100_std
value: 2.797704103101513
- type: nauc_ndcg_at_10_diff1
value: 41.89774080166355
- type: nauc_ndcg_at_10_max
value: 39.068973811839285
- type: nauc_ndcg_at_10_std
value: 0.1409703112345929
- type: nauc_ndcg_at_1_diff1
value: 51.230432406020476
- type: nauc_ndcg_at_1_max
value: 41.48741446620127
- type: nauc_ndcg_at_1_std
value: -2.7496538177528658
- type: nauc_ndcg_at_20_diff1
value: 41.654873436264864
- type: nauc_ndcg_at_20_max
value: 40.02385895251709
- type: nauc_ndcg_at_20_std
value: 1.6616675841903024
- type: nauc_ndcg_at_3_diff1
value: 43.677208022052135
- type: nauc_ndcg_at_3_max
value: 39.355842345181706
- type: nauc_ndcg_at_3_std
value: -0.9994914032687997
- type: nauc_ndcg_at_5_diff1
value: 42.63394828100176
- type: nauc_ndcg_at_5_max
value: 38.78287388555721
- type: nauc_ndcg_at_5_std
value: -0.40383353126040594
- type: nauc_precision_at_1000_diff1
value: -12.720363964831158
- type: nauc_precision_at_1000_max
value: 1.780615695736265
- type: nauc_precision_at_1000_std
value: 6.9259933939590015
- type: nauc_precision_at_100_diff1
value: 1.5494816839626728
- type: nauc_precision_at_100_max
value: 21.16486694935512
- type: nauc_precision_at_100_std
value: 13.63358160875128
- type: nauc_precision_at_10_diff1
value: 21.022460223564163
- type: nauc_precision_at_10_max
value: 33.517939417026845
- type: nauc_precision_at_10_std
value: 6.777527986277892
- type: nauc_precision_at_1_diff1
value: 51.230432406020476
- type: nauc_precision_at_1_max
value: 41.48741446620127
- type: nauc_precision_at_1_std
value: -2.7496538177528658
- type: nauc_precision_at_20_diff1
value: 14.686385148466163
- type: nauc_precision_at_20_max
value: 31.18430505462782
- type: nauc_precision_at_20_std
value: 12.077548081645366
- type: nauc_precision_at_3_diff1
value: 33.70051545579217
- type: nauc_precision_at_3_max
value: 39.60907345133216
- type: nauc_precision_at_3_std
value: 4.2539653363781476
- type: nauc_precision_at_5_diff1
value: 27.941480760698635
- type: nauc_precision_at_5_max
value: 37.31639952677206
- type: nauc_precision_at_5_std
value: 6.219978856190591
- type: nauc_recall_at_1000_diff1
value: 20.47974719909093
- type: nauc_recall_at_1000_max
value: 40.904077508578574
- type: nauc_recall_at_1000_std
value: 36.68929060623499
- type: nauc_recall_at_100_diff1
value: 26.82928665193226
- type: nauc_recall_at_100_max
value: 39.0725275666681
- type: nauc_recall_at_100_std
value: 18.856939694213875
- type: nauc_recall_at_10_diff1
value: 32.48097141976132
- type: nauc_recall_at_10_max
value: 34.921002737123445
- type: nauc_recall_at_10_std
value: 2.8847239359758596
- type: nauc_recall_at_1_diff1
value: 51.3015448718258
- type: nauc_recall_at_1_max
value: 37.68162132042243
- type: nauc_recall_at_1_std
value: -4.942523398750195
- type: nauc_recall_at_20_diff1
value: 31.155212544928517
- type: nauc_recall_at_20_max
value: 37.903451497060075
- type: nauc_recall_at_20_std
value: 8.052860930292049
- type: nauc_recall_at_3_diff1
value: 38.338624569740745
- type: nauc_recall_at_3_max
value: 36.16696793299773
- type: nauc_recall_at_3_std
value: -0.5499014670399716
- type: nauc_recall_at_5_diff1
value: 35.47491917247829
- type: nauc_recall_at_5_max
value: 34.98125952824827
- type: nauc_recall_at_5_std
value: 1.18095697584272
- type: ndcg_at_1
value: 29.851
- type: ndcg_at_10
value: 39.89
- type: ndcg_at_100
value: 45.327
- type: ndcg_at_1000
value: 47.773
- type: ndcg_at_20
value: 42.023
- type: ndcg_at_3
value: 35.284
- type: ndcg_at_5
value: 37.062
- type: precision_at_1
value: 29.851
- type: precision_at_10
value: 6.707
- type: precision_at_100
value: 1.054
- type: precision_at_1000
value: 0.137
- type: precision_at_20
value: 3.969
- type: precision_at_3
value: 16.107
- type: precision_at_5
value: 10.97
- type: recall_at_1
value: 25.712
- type: recall_at_10
value: 51.754
- type: recall_at_100
value: 75.229
- type: recall_at_1000
value: 92.352
- type: recall_at_20
value: 59.224
- type: recall_at_3
value: 38.942
- type: recall_at_5
value: 43.608
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval (default)
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: main_score
value: 39.275
- type: map_at_1
value: 25.207
- type: map_at_10
value: 33.73
- type: map_at_100
value: 35.314
- type: map_at_1000
value: 35.539
- type: map_at_20
value: 34.485
- type: map_at_3
value: 30.751
- type: map_at_5
value: 32.458
- type: mrr_at_1
value: 30.237154150197625
- type: mrr_at_10
value: 38.160330008156095
- type: mrr_at_100
value: 39.19604519073853
- type: mrr_at_1000
value: 39.25828944824003
- type: mrr_at_20
value: 38.76011256052923
- type: mrr_at_3
value: 35.573122529644266
- type: mrr_at_5
value: 37.02569169960474
- type: nauc_map_at_1000_diff1
value: 43.30050536386893
- type: nauc_map_at_1000_max
value: 34.49322247549182
- type: nauc_map_at_1000_std
value: 11.480704685860331
- type: nauc_map_at_100_diff1
value: 43.23374152493401
- type: nauc_map_at_100_max
value: 34.63190473420833
- type: nauc_map_at_100_std
value: 11.327553372568829
- type: nauc_map_at_10_diff1
value: 43.30961094445623
- type: nauc_map_at_10_max
value: 34.03516491801115
- type: nauc_map_at_10_std
value: 9.930448779277498
- type: nauc_map_at_1_diff1
value: 48.52093859650541
- type: nauc_map_at_1_max
value: 31.75748436789384
- type: nauc_map_at_1_std
value: 6.295511700792233
- type: nauc_map_at_20_diff1
value: 43.20196516265475
- type: nauc_map_at_20_max
value: 34.40279536967664
- type: nauc_map_at_20_std
value: 10.502521746353215
- type: nauc_map_at_3_diff1
value: 44.95075655761481
- type: nauc_map_at_3_max
value: 34.26130085326597
- type: nauc_map_at_3_std
value: 8.112284718354838
- type: nauc_map_at_5_diff1
value: 44.07530466847515
- type: nauc_map_at_5_max
value: 34.29301885967441
- type: nauc_map_at_5_std
value: 9.578236763789624
- type: nauc_mrr_at_1000_diff1
value: 43.74672886274306
- type: nauc_mrr_at_1000_max
value: 34.91906226000147
- type: nauc_mrr_at_1000_std
value: 11.902601352714218
- type: nauc_mrr_at_100_diff1
value: 43.723509090577245
- type: nauc_mrr_at_100_max
value: 34.90821615383906
- type: nauc_mrr_at_100_std
value: 11.929277924338157
- type: nauc_mrr_at_10_diff1
value: 43.79863834518147
- type: nauc_mrr_at_10_max
value: 34.77481237076968
- type: nauc_mrr_at_10_std
value: 11.621191309978766
- type: nauc_mrr_at_1_diff1
value: 47.691221572943014
- type: nauc_mrr_at_1_max
value: 33.83642419108169
- type: nauc_mrr_at_1_std
value: 9.150897325122848
- type: nauc_mrr_at_20_diff1
value: 43.72762657840614
- type: nauc_mrr_at_20_max
value: 34.93505549897513
- type: nauc_mrr_at_20_std
value: 11.77349360531995
- type: nauc_mrr_at_3_diff1
value: 44.801537918309464
- type: nauc_mrr_at_3_max
value: 35.19185952684632
- type: nauc_mrr_at_3_std
value: 10.129116464250496
- type: nauc_mrr_at_5_diff1
value: 44.002923041903024
- type: nauc_mrr_at_5_max
value: 35.04910165721431
- type: nauc_mrr_at_5_std
value: 11.398743784126784
- type: nauc_ndcg_at_1000_diff1
value: 42.1442239971562
- type: nauc_ndcg_at_1000_max
value: 35.56548085015422
- type: nauc_ndcg_at_1000_std
value: 14.555455012386773
- type: nauc_ndcg_at_100_diff1
value: 41.050980420258334
- type: nauc_ndcg_at_100_max
value: 35.2027650239632
- type: nauc_ndcg_at_100_std
value: 15.344655139587024
- type: nauc_ndcg_at_10_diff1
value: 41.471409940051366
- type: nauc_ndcg_at_10_max
value: 34.09128642447279
- type: nauc_ndcg_at_10_std
value: 12.90078883047037
- type: nauc_ndcg_at_1_diff1
value: 47.691221572943014
- type: nauc_ndcg_at_1_max
value: 33.83642419108169
- type: nauc_ndcg_at_1_std
value: 9.150897325122848
- type: nauc_ndcg_at_20_diff1
value: 41.1905793713784
- type: nauc_ndcg_at_20_max
value: 34.647479344052016
- type: nauc_ndcg_at_20_std
value: 13.402886235852945
- type: nauc_ndcg_at_3_diff1
value: 44.297355376548786
- type: nauc_ndcg_at_3_max
value: 35.81339314801316
- type: nauc_ndcg_at_3_std
value: 10.823231337841902
- type: nauc_ndcg_at_5_diff1
value: 42.800976621550305
- type: nauc_ndcg_at_5_max
value: 35.31553747718296
- type: nauc_ndcg_at_5_std
value: 12.999770768672233
- type: nauc_precision_at_1000_diff1
value: 1.3287822595564707
- type: nauc_precision_at_1000_max
value: -7.831380419769612
- type: nauc_precision_at_1000_std
value: 22.16155818707822
- type: nauc_precision_at_100_diff1
value: 7.9573230301645275
- type: nauc_precision_at_100_max
value: 8.250711189680114
- type: nauc_precision_at_100_std
value: 27.281502698532794
- type: nauc_precision_at_10_diff1
value: 20.32301793773069
- type: nauc_precision_at_10_max
value: 26.292089297975213
- type: nauc_precision_at_10_std
value: 22.04086433136549
- type: nauc_precision_at_1_diff1
value: 47.691221572943014
- type: nauc_precision_at_1_max
value: 33.83642419108169
- type: nauc_precision_at_1_std
value: 9.150897325122848
- type: nauc_precision_at_20_diff1
value: 15.599818198246682
- type: nauc_precision_at_20_max
value: 22.892539838177527
- type: nauc_precision_at_20_std
value: 25.01036353128196
- type: nauc_precision_at_3_diff1
value: 34.886829985942896
- type: nauc_precision_at_3_max
value: 35.42710937207887
- type: nauc_precision_at_3_std
value: 15.365089091648043
- type: nauc_precision_at_5_diff1
value: 28.15586527983956
- type: nauc_precision_at_5_max
value: 32.56389381889207
- type: nauc_precision_at_5_std
value: 21.509648911296196
- type: nauc_recall_at_1000_diff1
value: 26.26755669559851
- type: nauc_recall_at_1000_max
value: 47.26359554231266
- type: nauc_recall_at_1000_std
value: 49.17271507988052
- type: nauc_recall_at_100_diff1
value: 24.907171416372044
- type: nauc_recall_at_100_max
value: 31.73666795365669
- type: nauc_recall_at_100_std
value: 34.72430395157542
- type: nauc_recall_at_10_diff1
value: 31.950390817058445
- type: nauc_recall_at_10_max
value: 30.2098918820224
- type: nauc_recall_at_10_std
value: 14.899223627845695
- type: nauc_recall_at_1_diff1
value: 48.52093859650541
- type: nauc_recall_at_1_max
value: 31.75748436789384
- type: nauc_recall_at_1_std
value: 6.295511700792233
- type: nauc_recall_at_20_diff1
value: 30.085789421997823
- type: nauc_recall_at_20_max
value: 31.331626697063587
- type: nauc_recall_at_20_std
value: 17.995054512768437
- type: nauc_recall_at_3_diff1
value: 40.13064450428089
- type: nauc_recall_at_3_max
value: 34.31621377237918
- type: nauc_recall_at_3_std
value: 8.678472209664326
- type: nauc_recall_at_5_diff1
value: 36.077286485853236
- type: nauc_recall_at_5_max
value: 33.299301282489374
- type: nauc_recall_at_5_std
value: 13.612662967959412
- type: ndcg_at_1
value: 30.237
- type: ndcg_at_10
value: 39.275
- type: ndcg_at_100
value: 45.333
- type: ndcg_at_1000
value: 48.037
- type: ndcg_at_20
value: 41.37
- type: ndcg_at_3
value: 34.64
- type: ndcg_at_5
value: 36.909
- type: precision_at_1
value: 30.237
- type: precision_at_10
value: 7.372
- type: precision_at_100
value: 1.528
- type: precision_at_1000
value: 0.241
- type: precision_at_20
value: 4.733
- type: precision_at_3
value: 16.074
- type: precision_at_5
value: 11.7
- type: recall_at_1
value: 25.207
- type: recall_at_10
value: 49.727
- type: recall_at_100
value: 76.936
- type: recall_at_1000
value: 94.299
- type: recall_at_20
value: 57.589
- type: recall_at_3
value: 35.994
- type: recall_at_5
value: 42.406
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval (default)
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: main_score
value: 32.471
- type: map_at_1
value: 19.356
- type: map_at_10
value: 27.231
- type: map_at_100
value: 28.081
- type: map_at_1000
value: 28.181
- type: map_at_20
value: 27.559
- type: map_at_3
value: 24.171
- type: map_at_5
value: 26.058
- type: mrr_at_1
value: 21.072088724584106
- type: mrr_at_10
value: 29.30647243493824
- type: mrr_at_100
value: 30.00869418929891
- type: mrr_at_1000
value: 30.080405060029037
- type: mrr_at_20
value: 29.578807931610335
- type: mrr_at_3
value: 26.247689463955638
- type: mrr_at_5
value: 28.188539741219966
- type: nauc_map_at_1000_diff1
value: 36.02401315504141
- type: nauc_map_at_1000_max
value: 31.376220748052575
- type: nauc_map_at_1000_std
value: 0.1346247536534226
- type: nauc_map_at_100_diff1
value: 35.97288988366299
- type: nauc_map_at_100_max
value: 31.37343097714007
- type: nauc_map_at_100_std
value: 0.0958351635147478
- type: nauc_map_at_10_diff1
value: 36.15899620024802
- type: nauc_map_at_10_max
value: 31.361037177575795
- type: nauc_map_at_10_std
value: -0.4041283486899738
- type: nauc_map_at_1_diff1
value: 45.43410283777835
- type: nauc_map_at_1_max
value: 32.58678422458408
- type: nauc_map_at_1_std
value: -2.6851172017575133
- type: nauc_map_at_20_diff1
value: 35.934404619775286
- type: nauc_map_at_20_max
value: 31.31030383249301
- type: nauc_map_at_20_std
value: -0.14717896898491972
- type: nauc_map_at_3_diff1
value: 37.86423915954705
- type: nauc_map_at_3_max
value: 32.2809012047829
- type: nauc_map_at_3_std
value: -2.1708148155059592
- type: nauc_map_at_5_diff1
value: 36.84265522390141
- type: nauc_map_at_5_max
value: 31.37157808476922
- type: nauc_map_at_5_std
value: -0.29963826158644064
- type: nauc_mrr_at_1000_diff1
value: 36.20544738283434
- type: nauc_mrr_at_1000_max
value: 32.30208954588085
- type: nauc_mrr_at_1000_std
value: 1.2838715864936316
- type: nauc_mrr_at_100_diff1
value: 36.15904411259169
- type: nauc_mrr_at_100_max
value: 32.327938256466766
- type: nauc_mrr_at_100_std
value: 1.2657067250082925
- type: nauc_mrr_at_10_diff1
value: 36.2543779320316
- type: nauc_mrr_at_10_max
value: 32.24904810717847
- type: nauc_mrr_at_10_std
value: 0.9726935050696278
- type: nauc_mrr_at_1_diff1
value: 45.20429144833244
- type: nauc_mrr_at_1_max
value: 33.82767894871078
- type: nauc_mrr_at_1_std
value: -1.8704335003125725
- type: nauc_mrr_at_20_diff1
value: 36.113279457295576
- type: nauc_mrr_at_20_max
value: 32.21594845332526
- type: nauc_mrr_at_20_std
value: 1.2095516871175558
- type: nauc_mrr_at_3_diff1
value: 38.20188328518054
- type: nauc_mrr_at_3_max
value: 33.4670227017699
- type: nauc_mrr_at_3_std
value: -0.9236814295925724
- type: nauc_mrr_at_5_diff1
value: 36.931570063812494
- type: nauc_mrr_at_5_max
value: 32.13489737875822
- type: nauc_mrr_at_5_std
value: 1.2076368928061894
- type: nauc_ndcg_at_1000_diff1
value: 33.10609773784356
- type: nauc_ndcg_at_1000_max
value: 31.453190993700264
- type: nauc_ndcg_at_1000_std
value: 4.163506021728012
- type: nauc_ndcg_at_100_diff1
value: 32.17682816382762
- type: nauc_ndcg_at_100_max
value: 31.138666275698608
- type: nauc_ndcg_at_100_std
value: 3.479378119538843
- type: nauc_ndcg_at_10_diff1
value: 32.36757537451326
- type: nauc_ndcg_at_10_max
value: 30.337414154328656
- type: nauc_ndcg_at_10_std
value: 1.2414375322956122
- type: nauc_ndcg_at_1_diff1
value: 45.20429144833244
- type: nauc_ndcg_at_1_max
value: 33.82767894871078
- type: nauc_ndcg_at_1_std
value: -1.8704335003125725
- type: nauc_ndcg_at_20_diff1
value: 31.63077246604112
- type: nauc_ndcg_at_20_max
value: 30.145549351574726
- type: nauc_ndcg_at_20_std
value: 2.2258665508440916
- type: nauc_ndcg_at_3_diff1
value: 35.7601957137602
- type: nauc_ndcg_at_3_max
value: 32.14158639451499
- type: nauc_ndcg_at_3_std
value: -1.5690858449662577
- type: nauc_ndcg_at_5_diff1
value: 33.96138687625853
- type: nauc_ndcg_at_5_max
value: 30.334046334206526
- type: nauc_ndcg_at_5_std
value: 1.6084690034355629
- type: nauc_precision_at_1000_diff1
value: 0.9129162358156773
- type: nauc_precision_at_1000_max
value: 0.31132302385509963
- type: nauc_precision_at_1000_std
value: 13.514346244208935
- type: nauc_precision_at_100_diff1
value: 9.48976640458681
- type: nauc_precision_at_100_max
value: 20.18267841145738
- type: nauc_precision_at_100_std
value: 19.747863553039213
- type: nauc_precision_at_10_diff1
value: 20.33134020850462
- type: nauc_precision_at_10_max
value: 27.91758448136032
- type: nauc_precision_at_10_std
value: 8.727850582611723
- type: nauc_precision_at_1_diff1
value: 45.20429144833244
- type: nauc_precision_at_1_max
value: 33.82767894871078
- type: nauc_precision_at_1_std
value: -1.8704335003125725
- type: nauc_precision_at_20_diff1
value: 15.896166469695693
- type: nauc_precision_at_20_max
value: 25.364993587711677
- type: nauc_precision_at_20_std
value: 13.114218169676196
- type: nauc_precision_at_3_diff1
value: 29.003771079583746
- type: nauc_precision_at_3_max
value: 32.65304146225246
- type: nauc_precision_at_3_std
value: 1.1340423196142706
- type: nauc_precision_at_5_diff1
value: 25.57516378319094
- type: nauc_precision_at_5_max
value: 28.99827810579553
- type: nauc_precision_at_5_std
value: 7.945608340859795
- type: nauc_recall_at_1000_diff1
value: 17.592328574711598
- type: nauc_recall_at_1000_max
value: 36.04732369555508
- type: nauc_recall_at_1000_std
value: 37.39190072119301
- type: nauc_recall_at_100_diff1
value: 17.82377173461608
- type: nauc_recall_at_100_max
value: 28.7134171033536
- type: nauc_recall_at_100_std
value: 14.674088397323942
- type: nauc_recall_at_10_diff1
value: 20.986778993947823
- type: nauc_recall_at_10_max
value: 25.73837718906844
- type: nauc_recall_at_10_std
value: 4.164768619671828
- type: nauc_recall_at_1_diff1
value: 45.43410283777835
- type: nauc_recall_at_1_max
value: 32.58678422458408
- type: nauc_recall_at_1_std
value: -2.6851172017575133
- type: nauc_recall_at_20_diff1
value: 17.951879051749504
- type: nauc_recall_at_20_max
value: 24.485751903899704
- type: nauc_recall_at_20_std
value: 7.330820639157978
- type: nauc_recall_at_3_diff1
value: 29.347120526930233
- type: nauc_recall_at_3_max
value: 30.40588230331169
- type: nauc_recall_at_3_std
value: -0.8229385769060911
- type: nauc_recall_at_5_diff1
value: 25.655758092986265
- type: nauc_recall_at_5_max
value: 26.529657553507846
- type: nauc_recall_at_5_std
value: 5.6563620697950405
- type: ndcg_at_1
value: 21.072
- type: ndcg_at_10
value: 32.471
- type: ndcg_at_100
value: 37.183
- type: ndcg_at_1000
value: 39.644
- type: ndcg_at_20
value: 33.559
- type: ndcg_at_3
value: 26.52
- type: ndcg_at_5
value: 29.773
- type: precision_at_1
value: 21.072
- type: precision_at_10
value: 5.397
- type: precision_at_100
value: 0.856
- type: precision_at_1000
value: 0.118
- type: precision_at_20
value: 2.976
- type: precision_at_3
value: 11.522
- type: precision_at_5
value: 8.799
- type: recall_at_1
value: 19.356
- type: recall_at_10
value: 46.636
- type: recall_at_100
value: 69.663
- type: recall_at_1000
value: 87.789
- type: recall_at_20
value: 50.819
- type: recall_at_3
value: 30.745
- type: recall_at_5
value: 38.558
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER (default)
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: main_score
value: 28.83
- type: map_at_1
value: 11.912
- type: map_at_10
value: 20.47
- type: map_at_100
value: 22.519
- type: map_at_1000
value: 22.723
- type: map_at_20
value: 21.597
- type: map_at_3
value: 17.14
- type: map_at_5
value: 18.784
- type: mrr_at_1
value: 26.97068403908795
- type: mrr_at_10
value: 38.490486531203146
- type: mrr_at_100
value: 39.53396099832615
- type: mrr_at_1000
value: 39.57210684594996
- type: mrr_at_20
value: 39.15425856866292
- type: mrr_at_3
value: 35.11400651465798
- type: mrr_at_5
value: 37.045602605863195
- type: nauc_map_at_1000_diff1
value: 24.76538935736538
- type: nauc_map_at_1000_max
value: 33.971343734250524
- type: nauc_map_at_1000_std
value: 15.514286902461812
- type: nauc_map_at_100_diff1
value: 24.779729311279336
- type: nauc_map_at_100_max
value: 33.919396423104644
- type: nauc_map_at_100_std
value: 15.447044791174381
- type: nauc_map_at_10_diff1
value: 24.846700880282587
- type: nauc_map_at_10_max
value: 33.009555706514895
- type: nauc_map_at_10_std
value: 13.466161448314931
- type: nauc_map_at_1_diff1
value: 30.77619025618349
- type: nauc_map_at_1_max
value: 29.34711506798961
- type: nauc_map_at_1_std
value: 5.370506534257179
- type: nauc_map_at_20_diff1
value: 24.887170838281627
- type: nauc_map_at_20_max
value: 33.417660516280556
- type: nauc_map_at_20_std
value: 14.573987916991893
- type: nauc_map_at_3_diff1
value: 26.268565940718595
- type: nauc_map_at_3_max
value: 31.393759098190273
- type: nauc_map_at_3_std
value: 9.903530484775908
- type: nauc_map_at_5_diff1
value: 25.92807753940896
- type: nauc_map_at_5_max
value: 31.75547205926257
- type: nauc_map_at_5_std
value: 11.144523593563873
- type: nauc_mrr_at_1000_diff1
value: 21.169799923106503
- type: nauc_mrr_at_1000_max
value: 30.197960966240423
- type: nauc_mrr_at_1000_std
value: 14.9456013284975
- type: nauc_mrr_at_100_diff1
value: 21.163213044565072
- type: nauc_mrr_at_100_max
value: 30.203598251460463
- type: nauc_mrr_at_100_std
value: 14.965088745621394
- type: nauc_mrr_at_10_diff1
value: 21.288223958797573
- type: nauc_mrr_at_10_max
value: 30.216972741035725
- type: nauc_mrr_at_10_std
value: 14.843389003635199
- type: nauc_mrr_at_1_diff1
value: 23.727724539614726
- type: nauc_mrr_at_1_max
value: 25.48232837843042
- type: nauc_mrr_at_1_std
value: 7.3658686447303365
- type: nauc_mrr_at_20_diff1
value: 21.169422189267745
- type: nauc_mrr_at_20_max
value: 30.301266953756706
- type: nauc_mrr_at_20_std
value: 15.075334986433173
- type: nauc_mrr_at_3_diff1
value: 21.196573573020927
- type: nauc_mrr_at_3_max
value: 28.977465265081033
- type: nauc_mrr_at_3_std
value: 13.332883636713632
- type: nauc_mrr_at_5_diff1
value: 21.357024988142246
- type: nauc_mrr_at_5_max
value: 29.813915226240674
- type: nauc_mrr_at_5_std
value: 14.24321875462318
- type: nauc_ndcg_at_1000_diff1
value: 21.61770986869092
- type: nauc_ndcg_at_1000_max
value: 36.60187844333304
- type: nauc_ndcg_at_1000_std
value: 22.795099112621724
- type: nauc_ndcg_at_100_diff1
value: 21.776142474396803
- type: nauc_ndcg_at_100_max
value: 36.2414089305107
- type: nauc_ndcg_at_100_std
value: 22.4799584915147
- type: nauc_ndcg_at_10_diff1
value: 22.481037341715847
- type: nauc_ndcg_at_10_max
value: 34.06739169204709
- type: nauc_ndcg_at_10_std
value: 17.57021576387562
- type: nauc_ndcg_at_1_diff1
value: 23.727724539614726
- type: nauc_ndcg_at_1_max
value: 25.48232837843042
- type: nauc_ndcg_at_1_std
value: 7.3658686447303365
- type: nauc_ndcg_at_20_diff1
value: 22.498496087588748
- type: nauc_ndcg_at_20_max
value: 34.892054494351775
- type: nauc_ndcg_at_20_std
value: 19.86842474039087
- type: nauc_ndcg_at_3_diff1
value: 23.227651070229673
- type: nauc_ndcg_at_3_max
value: 30.75938173648469
- type: nauc_ndcg_at_3_std
value: 12.463565273811932
- type: nauc_ndcg_at_5_diff1
value: 23.799000279668764
- type: nauc_ndcg_at_5_max
value: 32.0660491516285
- type: nauc_ndcg_at_5_std
value: 13.990017306982494
- type: nauc_precision_at_1000_diff1
value: -6.21355360115824
- type: nauc_precision_at_1000_max
value: 14.235538664122721
- type: nauc_precision_at_1000_std
value: 25.265885010833472
- type: nauc_precision_at_100_diff1
value: 0.562403756165029
- type: nauc_precision_at_100_max
value: 23.924206050764756
- type: nauc_precision_at_100_std
value: 31.338663218741008
- type: nauc_precision_at_10_diff1
value: 10.354652757659355
- type: nauc_precision_at_10_max
value: 30.758798212425408
- type: nauc_precision_at_10_std
value: 26.7031344564241
- type: nauc_precision_at_1_diff1
value: 23.727724539614726
- type: nauc_precision_at_1_max
value: 25.48232837843042
- type: nauc_precision_at_1_std
value: 7.3658686447303365
- type: nauc_precision_at_20_diff1
value: 7.953520958455318
- type: nauc_precision_at_20_max
value: 27.65892195364336
- type: nauc_precision_at_20_std
value: 29.388872379134785
- type: nauc_precision_at_3_diff1
value: 16.808332567181218
- type: nauc_precision_at_3_max
value: 29.936324350885325
- type: nauc_precision_at_3_std
value: 17.73176704168099
- type: nauc_precision_at_5_diff1
value: 14.763444123101097
- type: nauc_precision_at_5_max
value: 29.010028453553105
- type: nauc_precision_at_5_std
value: 20.023906594546354
- type: nauc_recall_at_1000_diff1
value: 7.556266790051787
- type: nauc_recall_at_1000_max
value: 39.68519315547223
- type: nauc_recall_at_1000_std
value: 42.629307490289136
- type: nauc_recall_at_100_diff1
value: 11.82811136427535
- type: nauc_recall_at_100_max
value: 34.385603696492915
- type: nauc_recall_at_100_std
value: 32.712899819080086
- type: nauc_recall_at_10_diff1
value: 16.862688054209933
- type: nauc_recall_at_10_max
value: 32.3375156464964
- type: nauc_recall_at_10_std
value: 20.728901631388972
- type: nauc_recall_at_1_diff1
value: 30.77619025618349
- type: nauc_recall_at_1_max
value: 29.34711506798961
- type: nauc_recall_at_1_std
value: 5.370506534257179
- type: nauc_recall_at_20_diff1
value: 16.155103050272285
- type: nauc_recall_at_20_max
value: 32.5683721009582
- type: nauc_recall_at_20_std
value: 24.833541877340508
- type: nauc_recall_at_3_diff1
value: 21.31158530879841
- type: nauc_recall_at_3_max
value: 30.2555482302795
- type: nauc_recall_at_3_std
value: 12.723641578903559
- type: nauc_recall_at_5_diff1
value: 20.646276690604772
- type: nauc_recall_at_5_max
value: 30.46478603020548
- type: nauc_recall_at_5_std
value: 14.74543343019217
- type: ndcg_at_1
value: 26.971
- type: ndcg_at_10
value: 28.83
- type: ndcg_at_100
value: 36.742
- type: ndcg_at_1000
value: 40.228
- type: ndcg_at_20
value: 31.983
- type: ndcg_at_3
value: 23.546
- type: ndcg_at_5
value: 25.292
- type: precision_at_1
value: 26.971
- type: precision_at_10
value: 9.075
- type: precision_at_100
value: 1.75
- type: precision_at_1000
value: 0.24
- type: precision_at_20
value: 5.912
- type: precision_at_3
value: 17.59
- type: precision_at_5
value: 13.485
- type: recall_at_1
value: 11.912
- type: recall_at_10
value: 34.746
- type: recall_at_100
value: 61.869
- type: recall_at_1000
value: 81.178
- type: recall_at_20
value: 43.575
- type: recall_at_3
value: 21.706
- type: recall_at_5
value: 26.826
- task:
type: Retrieval
dataset:
name: MTEB DBPedia (default)
type: mteb/dbpedia
config: default
split: dev
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: main_score
value: 46.941
- type: map_at_1
value: 11.128
- type: map_at_10
value: 21.468
- type: map_at_100
value: 31.257
- type: map_at_1000
value: 32.994
- type: map_at_20
value: 25.85
- type: map_at_3
value: 15.385
- type: map_at_5
value: 17.918
- type: mrr_at_1
value: 74.6268656716418
- type: mrr_at_10
value: 82.71736555318644
- type: mrr_at_100
value: 82.86968222071188
- type: mrr_at_1000
value: 82.87289889595635
- type: mrr_at_20
value: 82.81686804074863
- type: mrr_at_3
value: 82.33830845771143
- type: mrr_at_5
value: 82.33830845771143
- type: nauc_map_at_1000_diff1
value: 28.205873220137107
- type: nauc_map_at_1000_max
value: 32.58677206955515
- type: nauc_map_at_1000_std
value: 5.320112182691084
- type: nauc_map_at_100_diff1
value: 30.2072705535807
- type: nauc_map_at_100_max
value: 31.398441843394203
- type: nauc_map_at_100_std
value: 0.3721987672572358
- type: nauc_map_at_10_diff1
value: 43.658171592848966
- type: nauc_map_at_10_max
value: 20.163715859146205
- type: nauc_map_at_10_std
value: -27.377423723665643
- type: nauc_map_at_1_diff1
value: 53.67643301898103
- type: nauc_map_at_1_max
value: 9.619896030610045
- type: nauc_map_at_1_std
value: -39.90622904765458
- type: nauc_map_at_20_diff1
value: 37.89256707694639
- type: nauc_map_at_20_max
value: 23.572362119543783
- type: nauc_map_at_20_std
value: -17.725673906501697
- type: nauc_map_at_3_diff1
value: 44.79713521587412
- type: nauc_map_at_3_max
value: 11.458612145657254
- type: nauc_map_at_3_std
value: -38.4860514263039
- type: nauc_map_at_5_diff1
value: 44.27015282190561
- type: nauc_map_at_5_max
value: 16.420591916821895
- type: nauc_map_at_5_std
value: -32.85173443465263
- type: nauc_mrr_at_1000_diff1
value: 40.68406246743924
- type: nauc_mrr_at_1000_max
value: 54.557403984682374
- type: nauc_mrr_at_1000_std
value: 40.971924667245645
- type: nauc_mrr_at_100_diff1
value: 40.68710053046815
- type: nauc_mrr_at_100_max
value: 54.561036423251075
- type: nauc_mrr_at_100_std
value: 40.97834383095643
- type: nauc_mrr_at_10_diff1
value: 41.01499432410833
- type: nauc_mrr_at_10_max
value: 54.97000112752338
- type: nauc_mrr_at_10_std
value: 41.318724395876444
- type: nauc_mrr_at_1_diff1
value: 38.93343855503621
- type: nauc_mrr_at_1_max
value: 44.716366048532024
- type: nauc_mrr_at_1_std
value: 31.486396303111274
- type: nauc_mrr_at_20_diff1
value: 40.64197845924939
- type: nauc_mrr_at_20_max
value: 54.6852354691744
- type: nauc_mrr_at_20_std
value: 41.1230179065546
- type: nauc_mrr_at_3_diff1
value: 40.82958545456673
- type: nauc_mrr_at_3_max
value: 55.153817526702355
- type: nauc_mrr_at_3_std
value: 41.650161545663664
- type: nauc_mrr_at_5_diff1
value: 40.82958545456673
- type: nauc_mrr_at_5_max
value: 55.153817526702355
- type: nauc_mrr_at_5_std
value: 41.650161545663664
- type: nauc_ndcg_at_1000_diff1
value: 23.929983013223506
- type: nauc_ndcg_at_1000_max
value: 44.75276524735193
- type: nauc_ndcg_at_1000_std
value: 29.517675569258962
- type: nauc_ndcg_at_100_diff1
value: 29.624406725667374
- type: nauc_ndcg_at_100_max
value: 41.6779897397469
- type: nauc_ndcg_at_100_std
value: 13.705531184382655
- type: nauc_ndcg_at_10_diff1
value: 31.416257175446887
- type: nauc_ndcg_at_10_max
value: 38.589902876894975
- type: nauc_ndcg_at_10_std
value: 11.728256480525854
- type: nauc_ndcg_at_1_diff1
value: 28.46337552050302
- type: nauc_ndcg_at_1_max
value: 38.225886384997445
- type: nauc_ndcg_at_1_std
value: 21.338500059461598
- type: nauc_ndcg_at_20_diff1
value: 33.256723866450166
- type: nauc_ndcg_at_20_max
value: 33.537144781805374
- type: nauc_ndcg_at_20_std
value: -0.33743018664631347
- type: nauc_ndcg_at_3_diff1
value: 13.969404562448364
- type: nauc_ndcg_at_3_max
value: 30.565751196256763
- type: nauc_ndcg_at_3_std
value: 17.30174026819281
- type: nauc_ndcg_at_5_diff1
value: 18.872272865937877
- type: nauc_ndcg_at_5_max
value: 35.263519230922064
- type: nauc_ndcg_at_5_std
value: 19.33016308269416
- type: nauc_precision_at_1000_diff1
value: -33.04092483049479
- type: nauc_precision_at_1000_max
value: 11.49095198021317
- type: nauc_precision_at_1000_std
value: 60.12678457700677
- type: nauc_precision_at_100_diff1
value: -31.078345270653546
- type: nauc_precision_at_100_max
value: 24.725068437496166
- type: nauc_precision_at_100_std
value: 67.57419232501738
- type: nauc_precision_at_10_diff1
value: -22.292617190022344
- type: nauc_precision_at_10_max
value: 24.54113659577851
- type: nauc_precision_at_10_std
value: 50.62947534355667
- type: nauc_precision_at_1_diff1
value: 38.93343855503621
- type: nauc_precision_at_1_max
value: 44.716366048532024
- type: nauc_precision_at_1_std
value: 31.486396303111274
- type: nauc_precision_at_20_diff1
value: -25.92502457178481
- type: nauc_precision_at_20_max
value: 23.621039548137382
- type: nauc_precision_at_20_std
value: 52.21318384886264
- type: nauc_precision_at_3_diff1
value: -15.971548173967095
- type: nauc_precision_at_3_max
value: 24.699430304690164
- type: nauc_precision_at_3_std
value: 35.799215461434265
- type: nauc_precision_at_5_diff1
value: -19.615035551746058
- type: nauc_precision_at_5_max
value: 30.732478067399917
- type: nauc_precision_at_5_std
value: 49.57700533096512
- type: nauc_recall_at_1000_diff1
value: 2.3453680689225975
- type: nauc_recall_at_1000_max
value: 41.49529473708931
- type: nauc_recall_at_1000_std
value: 41.33189951068062
- type: nauc_recall_at_100_diff1
value: 20.244471165146653
- type: nauc_recall_at_100_max
value: 37.4543481569782
- type: nauc_recall_at_100_std
value: 7.801110449891264
- type: nauc_recall_at_10_diff1
value: 46.105545914294964
- type: nauc_recall_at_10_max
value: 21.930607285926236
- type: nauc_recall_at_10_std
value: -29.972280971809862
- type: nauc_recall_at_1_diff1
value: 53.67643301898103
- type: nauc_recall_at_1_max
value: 9.619896030610045
- type: nauc_recall_at_1_std
value: -39.90622904765458
- type: nauc_recall_at_20_diff1
value: 31.224305541777962
- type: nauc_recall_at_20_max
value: 18.835055220233627
- type: nauc_recall_at_20_std
value: -23.34247163762115
- type: nauc_recall_at_3_diff1
value: 42.73205224851371
- type: nauc_recall_at_3_max
value: 9.89768785740234
- type: nauc_recall_at_3_std
value: -40.04768970629589
- type: nauc_recall_at_5_diff1
value: 43.48392547485277
- type: nauc_recall_at_5_max
value: 16.208422216255546
- type: nauc_recall_at_5_std
value: -33.79681691846111
- type: ndcg_at_1
value: 64.925
- type: ndcg_at_10
value: 46.941
- type: ndcg_at_100
value: 53.242999999999995
- type: ndcg_at_1000
value: 60.87200000000001
- type: ndcg_at_20
value: 47.138999999999996
- type: ndcg_at_3
value: 54.533
- type: ndcg_at_5
value: 50.536
- type: precision_at_1
value: 74.627
- type: precision_at_10
value: 34.477999999999994
- type: precision_at_100
value: 10.776
- type: precision_at_1000
value: 1.737
- type: precision_at_20
value: 26.939999999999998
- type: precision_at_3
value: 53.234
- type: precision_at_5
value: 45.074999999999996
- type: recall_at_1
value: 11.128
- type: recall_at_10
value: 27.354
- type: recall_at_100
value: 60.211999999999996
- type: recall_at_1000
value: 84.485
- type: recall_at_20
value: 38.645
- type: recall_at_3
value: 16.72
- type: recall_at_5
value: 20.648
- task:
type: Retrieval
dataset:
name: MTEB DBPedia (default)
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: main_score
value: 40.463
- type: map_at_1
value: 9.029
- type: map_at_10
value: 19.262999999999998
- type: map_at_100
value: 27.32
- type: map_at_1000
value: 28.898000000000003
- type: map_at_20
value: 22.285
- type: map_at_3
value: 13.741999999999999
- type: map_at_5
value: 16.184
- type: mrr_at_1
value: 66.5
- type: mrr_at_10
value: 74.58015873015873
- type: mrr_at_100
value: 74.90889577505533
- type: mrr_at_1000
value: 74.91723981152577
- type: mrr_at_20
value: 74.81400519415226
- type: mrr_at_3
value: 72.75
- type: mrr_at_5
value: 74.0125
- type: nauc_map_at_1000_diff1
value: 18.09558210846325
- type: nauc_map_at_1000_max
value: 23.2203289549629
- type: nauc_map_at_1000_std
value: 26.956962795986172
- type: nauc_map_at_100_diff1
value: 19.26318278180542
- type: nauc_map_at_100_max
value: 21.385042416697452
- type: nauc_map_at_100_std
value: 24.102985219136333
- type: nauc_map_at_10_diff1
value: 23.01034372544496
- type: nauc_map_at_10_max
value: 10.878304152216096
- type: nauc_map_at_10_std
value: 2.362436293740753
- type: nauc_map_at_1_diff1
value: 37.366256622356744
- type: nauc_map_at_1_max
value: 3.3385920525541053
- type: nauc_map_at_1_std
value: -13.380038032859368
- type: nauc_map_at_20_diff1
value: 21.282079555449165
- type: nauc_map_at_20_max
value: 14.770333252524978
- type: nauc_map_at_20_std
value: 10.07082791626879
- type: nauc_map_at_3_diff1
value: 27.510938193340262
- type: nauc_map_at_3_max
value: 4.965379678339785
- type: nauc_map_at_3_std
value: -8.210306316829763
- type: nauc_map_at_5_diff1
value: 25.261898710191527
- type: nauc_map_at_5_max
value: 6.767168819826101
- type: nauc_map_at_5_std
value: -4.755049267878645
- type: nauc_mrr_at_1000_diff1
value: 43.97215491866585
- type: nauc_mrr_at_1000_max
value: 57.2564604463254
- type: nauc_mrr_at_1000_std
value: 37.52158604598497
- type: nauc_mrr_at_100_diff1
value: 43.9520395412017
- type: nauc_mrr_at_100_max
value: 57.25472510851986
- type: nauc_mrr_at_100_std
value: 37.512313370242985
- type: nauc_mrr_at_10_diff1
value: 44.149055538533176
- type: nauc_mrr_at_10_max
value: 57.33498205903583
- type: nauc_mrr_at_10_std
value: 37.47474466090973
- type: nauc_mrr_at_1_diff1
value: 47.2190330933805
- type: nauc_mrr_at_1_max
value: 54.17417392251676
- type: nauc_mrr_at_1_std
value: 32.26038934374714
- type: nauc_mrr_at_20_diff1
value: 43.968230171527594
- type: nauc_mrr_at_20_max
value: 57.29041263736231
- type: nauc_mrr_at_20_std
value: 37.544977698244445
- type: nauc_mrr_at_3_diff1
value: 43.34027046577654
- type: nauc_mrr_at_3_max
value: 56.66557913521474
- type: nauc_mrr_at_3_std
value: 38.76901326294043
- type: nauc_mrr_at_5_diff1
value: 44.12604461577999
- type: nauc_mrr_at_5_max
value: 57.612606984606195
- type: nauc_mrr_at_5_std
value: 37.82836737434154
- type: nauc_ndcg_at_1000_diff1
value: 17.44988479317236
- type: nauc_ndcg_at_1000_max
value: 37.7482696046901
- type: nauc_ndcg_at_1000_std
value: 38.65638419071411
- type: nauc_ndcg_at_100_diff1
value: 19.507315827241328
- type: nauc_ndcg_at_100_max
value: 30.400138333090588
- type: nauc_ndcg_at_100_std
value: 31.113058945606742
- type: nauc_ndcg_at_10_diff1
value: 20.815636696375392
- type: nauc_ndcg_at_10_max
value: 33.68288613631258
- type: nauc_ndcg_at_10_std
value: 29.671362930314803
- type: nauc_ndcg_at_1_diff1
value: 38.11376341528132
- type: nauc_ndcg_at_1_max
value: 44.403414549347204
- type: nauc_ndcg_at_1_std
value: 25.52384786407678
- type: nauc_ndcg_at_20_diff1
value: 20.918758451069003
- type: nauc_ndcg_at_20_max
value: 30.06195313262468
- type: nauc_ndcg_at_20_std
value: 26.54741096206399
- type: nauc_ndcg_at_3_diff1
value: 24.372445814427298
- type: nauc_ndcg_at_3_max
value: 37.527105863035004
- type: nauc_ndcg_at_3_std
value: 29.627357113821652
- type: nauc_ndcg_at_5_diff1
value: 22.24317304883715
- type: nauc_ndcg_at_5_max
value: 36.01451546088971
- type: nauc_ndcg_at_5_std
value: 29.681491475278758
- type: nauc_precision_at_1000_diff1
value: -16.787088918126038
- type: nauc_precision_at_1000_max
value: 7.469815188782062
- type: nauc_precision_at_1000_std
value: 20.347279526863016
- type: nauc_precision_at_100_diff1
value: -7.767846738327802
- type: nauc_precision_at_100_max
value: 29.03244278470077
- type: nauc_precision_at_100_std
value: 46.382880462091606
- type: nauc_precision_at_10_diff1
value: -0.5232913303652262
- type: nauc_precision_at_10_max
value: 38.64476507974731
- type: nauc_precision_at_10_std
value: 47.74014794238241
- type: nauc_precision_at_1_diff1
value: 47.2190330933805
- type: nauc_precision_at_1_max
value: 54.17417392251676
- type: nauc_precision_at_1_std
value: 32.26038934374714
- type: nauc_precision_at_20_diff1
value: -2.1834578501338147
- type: nauc_precision_at_20_max
value: 35.998198621553904
- type: nauc_precision_at_20_std
value: 48.111771749288465
- type: nauc_precision_at_3_diff1
value: 10.263223401237676
- type: nauc_precision_at_3_max
value: 38.014854436938116
- type: nauc_precision_at_3_std
value: 38.4358555773606
- type: nauc_precision_at_5_diff1
value: 4.413009169159884
- type: nauc_precision_at_5_max
value: 39.10496606345797
- type: nauc_precision_at_5_std
value: 42.78105363038704
- type: nauc_recall_at_1000_diff1
value: 5.614459025569145
- type: nauc_recall_at_1000_max
value: 30.609110099594243
- type: nauc_recall_at_1000_std
value: 41.7665129006611
- type: nauc_recall_at_100_diff1
value: 11.30144179728448
- type: nauc_recall_at_100_max
value: 16.59742852717951
- type: nauc_recall_at_100_std
value: 24.50296358615415
- type: nauc_recall_at_10_diff1
value: 16.490149625173352
- type: nauc_recall_at_10_max
value: 3.832770249980029
- type: nauc_recall_at_10_std
value: -2.374496776149457
- type: nauc_recall_at_1_diff1
value: 37.366256622356744
- type: nauc_recall_at_1_max
value: 3.3385920525541053
- type: nauc_recall_at_1_std
value: -13.380038032859368
- type: nauc_recall_at_20_diff1
value: 15.292191881271883
- type: nauc_recall_at_20_max
value: 8.415679565455628
- type: nauc_recall_at_20_std
value: 5.074936206972553
- type: nauc_recall_at_3_diff1
value: 24.152242185735222
- type: nauc_recall_at_3_max
value: 1.506727274505061
- type: nauc_recall_at_3_std
value: -10.019411125629754
- type: nauc_recall_at_5_diff1
value: 21.7463835872855
- type: nauc_recall_at_5_max
value: 2.645640036075683
- type: nauc_recall_at_5_std
value: -7.864417712754525
- type: ndcg_at_1
value: 54.625
- type: ndcg_at_10
value: 40.463
- type: ndcg_at_100
value: 45.693
- type: ndcg_at_1000
value: 53.21900000000001
- type: ndcg_at_20
value: 39.948
- type: ndcg_at_3
value: 44.546
- type: ndcg_at_5
value: 42.370000000000005
- type: precision_at_1
value: 66.5
- type: precision_at_10
value: 31.724999999999998
- type: precision_at_100
value: 10.308
- type: precision_at_1000
value: 2.015
- type: precision_at_20
value: 24.013
- type: precision_at_3
value: 47.5
- type: precision_at_5
value: 40.8
- type: recall_at_1
value: 9.029
- type: recall_at_10
value: 25.064999999999998
- type: recall_at_100
value: 52.578
- type: recall_at_1000
value: 76.43100000000001
- type: recall_at_20
value: 32.012
- type: recall_at_3
value: 15.046000000000001
- type: recall_at_5
value: 19.012
- task:
type: Classification
dataset:
name: MTEB EmotionClassification (default)
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 50.815
- type: f1
value: 44.094634894021816
- type: f1_weighted
value: 52.60818465807737
- type: main_score
value: 50.815
- task:
type: Classification
dataset:
name: MTEB EmotionClassification (default)
type: mteb/emotion
config: default
split: validation
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 50.25999999999999
- type: f1
value: 44.29688695357058
- type: f1_weighted
value: 51.782894585455274
- type: main_score
value: 50.25999999999999
- task:
type: Retrieval
dataset:
name: MTEB FEVER (default)
type: mteb/fever
config: default
split: dev
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: main_score
value: 86.581
- type: map_at_1
value: 74.727
- type: map_at_10
value: 82.856
- type: map_at_100
value: 83.08
- type: map_at_1000
value: 83.095
- type: map_at_20
value: 82.98
- type: map_at_3
value: 81.874
- type: map_at_5
value: 82.528
- type: mrr_at_1
value: 81.11311131113112
- type: mrr_at_10
value: 88.08008777068184
- type: mrr_at_100
value: 88.15267896396102
- type: mrr_at_1000
value: 88.15440036957291
- type: mrr_at_20
value: 88.12826180824881
- type: mrr_at_3
value: 87.42624262426241
- type: mrr_at_5
value: 87.90179017901791
- type: nauc_map_at_1000_diff1
value: 50.36356059540196
- type: nauc_map_at_1000_max
value: 23.9992915254945
- type: nauc_map_at_1000_std
value: -5.351570096789122
- type: nauc_map_at_100_diff1
value: 50.32379212854651
- type: nauc_map_at_100_max
value: 23.98098961038551
- type: nauc_map_at_100_std
value: -5.348874939403815
- type: nauc_map_at_10_diff1
value: 49.99523888098324
- type: nauc_map_at_10_max
value: 23.709118817821636
- type: nauc_map_at_10_std
value: -5.655330318910658
- type: nauc_map_at_1_diff1
value: 54.8096391515275
- type: nauc_map_at_1_max
value: 20.61234382722085
- type: nauc_map_at_1_std
value: -7.831309271791796
- type: nauc_map_at_20_diff1
value: 50.19276270347539
- type: nauc_map_at_20_max
value: 23.8325417359291
- type: nauc_map_at_20_std
value: -5.4936298430989
- type: nauc_map_at_3_diff1
value: 50.28796474395995
- type: nauc_map_at_3_max
value: 23.45783355551859
- type: nauc_map_at_3_std
value: -6.885753631015993
- type: nauc_map_at_5_diff1
value: 49.96283216222825
- type: nauc_map_at_5_max
value: 23.555049015922823
- type: nauc_map_at_5_std
value: -6.090659860215219
- type: nauc_mrr_at_1000_diff1
value: 69.17421533744194
- type: nauc_mrr_at_1000_max
value: 33.02158730991583
- type: nauc_mrr_at_1000_std
value: -8.948358014186812
- type: nauc_mrr_at_100_diff1
value: 69.17553004498271
- type: nauc_mrr_at_100_max
value: 33.028331950038684
- type: nauc_mrr_at_100_std
value: -8.942802854470754
- type: nauc_mrr_at_10_diff1
value: 69.13328163893561
- type: nauc_mrr_at_10_max
value: 33.18335904223495
- type: nauc_mrr_at_10_std
value: -8.89478907359742
- type: nauc_mrr_at_1_diff1
value: 69.59435273515031
- type: nauc_mrr_at_1_max
value: 28.41439149035527
- type: nauc_mrr_at_1_std
value: -9.895055694379428
- type: nauc_mrr_at_20_diff1
value: 69.19362398254414
- type: nauc_mrr_at_20_max
value: 33.09302969870915
- type: nauc_mrr_at_20_std
value: -8.903618952902336
- type: nauc_mrr_at_3_diff1
value: 69.03641198083353
- type: nauc_mrr_at_3_max
value: 33.689976336215125
- type: nauc_mrr_at_3_std
value: -9.53929739319604
- type: nauc_mrr_at_5_diff1
value: 69.03744477223476
- type: nauc_mrr_at_5_max
value: 33.343224412356946
- type: nauc_mrr_at_5_std
value: -9.107354899986388
- type: nauc_ndcg_at_1000_diff1
value: 52.37043460910987
- type: nauc_ndcg_at_1000_max
value: 26.868515910590585
- type: nauc_ndcg_at_1000_std
value: -3.8173698040799926
- type: nauc_ndcg_at_100_diff1
value: 51.59304592952214
- type: nauc_ndcg_at_100_max
value: 26.582916315809957
- type: nauc_ndcg_at_100_std
value: -3.507246327999668
- type: nauc_ndcg_at_10_diff1
value: 50.56748641091843
- type: nauc_ndcg_at_10_max
value: 25.937839511946397
- type: nauc_ndcg_at_10_std
value: -4.317117533070399
- type: nauc_ndcg_at_1_diff1
value: 69.59435273515031
- type: nauc_ndcg_at_1_max
value: 28.41439149035527
- type: nauc_ndcg_at_1_std
value: -9.895055694379428
- type: nauc_ndcg_at_20_diff1
value: 51.07942544100161
- type: nauc_ndcg_at_20_max
value: 26.079691268086176
- type: nauc_ndcg_at_20_std
value: -3.9867583589809534
- type: nauc_ndcg_at_3_diff1
value: 52.08099666063276
- type: nauc_ndcg_at_3_max
value: 26.743415311234692
- type: nauc_ndcg_at_3_std
value: -6.45797306431994
- type: nauc_ndcg_at_5_diff1
value: 50.763723596631735
- type: nauc_ndcg_at_5_max
value: 26.016889338180306
- type: nauc_ndcg_at_5_std
value: -5.248574264404534
- type: nauc_precision_at_1000_diff1
value: -5.387965863532961
- type: nauc_precision_at_1000_max
value: 10.372523367921854
- type: nauc_precision_at_1000_std
value: 13.415209647969087
- type: nauc_precision_at_100_diff1
value: -5.685856344844705
- type: nauc_precision_at_100_max
value: 13.882370685075996
- type: nauc_precision_at_100_std
value: 17.947725196159116
- type: nauc_precision_at_10_diff1
value: 1.4713194150281539
- type: nauc_precision_at_10_max
value: 20.264317861307276
- type: nauc_precision_at_10_std
value: 12.524533346684677
- type: nauc_precision_at_1_diff1
value: 69.59435273515031
- type: nauc_precision_at_1_max
value: 28.41439149035527
- type: nauc_precision_at_1_std
value: -9.895055694379428
- type: nauc_precision_at_20_diff1
value: -1.040212171206397
- type: nauc_precision_at_20_max
value: 17.76967921210208
- type: nauc_precision_at_20_std
value: 15.86703101590195
- type: nauc_precision_at_3_diff1
value: 27.397567762305414
- type: nauc_precision_at_3_max
value: 29.983353547368306
- type: nauc_precision_at_3_std
value: -2.024560146034635
- type: nauc_precision_at_5_diff1
value: 11.978185255474472
- type: nauc_precision_at_5_max
value: 25.336475474417213
- type: nauc_precision_at_5_std
value: 6.079864267936165
- type: nauc_recall_at_1000_diff1
value: 5.978402010181869
- type: nauc_recall_at_1000_max
value: 26.578352090329005
- type: nauc_recall_at_1000_std
value: 31.804174323172713
- type: nauc_recall_at_100_diff1
value: 9.67678765671401
- type: nauc_recall_at_100_max
value: 21.918365367812797
- type: nauc_recall_at_100_std
value: 24.820139168438597
- type: nauc_recall_at_10_diff1
value: 21.60125165788751
- type: nauc_recall_at_10_max
value: 21.565110839293144
- type: nauc_recall_at_10_std
value: 8.449173954921543
- type: nauc_recall_at_1_diff1
value: 54.8096391515275
- type: nauc_recall_at_1_max
value: 20.61234382722085
- type: nauc_recall_at_1_std
value: -7.831309271791796
- type: nauc_recall_at_20_diff1
value: 19.681237121526223
- type: nauc_recall_at_20_max
value: 20.77874943741462
- type: nauc_recall_at_20_std
value: 12.46872212646974
- type: nauc_recall_at_3_diff1
value: 35.25648317456749
- type: nauc_recall_at_3_max
value: 24.513936377413255
- type: nauc_recall_at_3_std
value: -3.46424213085049
- type: nauc_recall_at_5_diff1
value: 27.636657051463775
- type: nauc_recall_at_5_max
value: 22.76892845457896
- type: nauc_recall_at_5_std
value: 1.819321132167595
- type: ndcg_at_1
value: 81.113
- type: ndcg_at_10
value: 86.581
- type: ndcg_at_100
value: 87.37700000000001
- type: ndcg_at_1000
value: 87.634
- type: ndcg_at_20
value: 86.90299999999999
- type: ndcg_at_3
value: 85.126
- type: ndcg_at_5
value: 85.992
- type: precision_at_1
value: 81.113
- type: precision_at_10
value: 10.35
- type: precision_at_100
value: 1.102
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_20
value: 5.281000000000001
- type: precision_at_3
value: 32.543
- type: precision_at_5
value: 20.150000000000002
- type: recall_at_1
value: 74.727
- type: recall_at_10
value: 92.93
- type: recall_at_100
value: 96.122
- type: recall_at_1000
value: 97.722
- type: recall_at_20
value: 94.038
- type: recall_at_3
value: 88.785
- type: recall_at_5
value: 91.143
- task:
type: Retrieval
dataset:
name: MTEB FEVER (default)
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: main_score
value: 85.319
- type: map_at_1
value: 72.161
- type: map_at_10
value: 81.238
- type: map_at_100
value: 81.46
- type: map_at_1000
value: 81.473
- type: map_at_20
value: 81.365
- type: map_at_3
value: 80.01599999999999
- type: map_at_5
value: 80.857
- type: mrr_at_1
value: 77.93279327932792
- type: mrr_at_10
value: 85.94110601536343
- type: mrr_at_100
value: 86.02247464267394
- type: mrr_at_1000
value: 86.02424083776694
- type: mrr_at_20
value: 85.99713844897458
- type: mrr_at_3
value: 85.10351035103511
- type: mrr_at_5
value: 85.74182418241824
- type: nauc_map_at_1000_diff1
value: 48.87942532850865
- type: nauc_map_at_1000_max
value: 19.207999873933872
- type: nauc_map_at_1000_std
value: -6.426739011393975
- type: nauc_map_at_100_diff1
value: 48.844165528106586
- type: nauc_map_at_100_max
value: 19.19923211737889
- type: nauc_map_at_100_std
value: -6.413953838280856
- type: nauc_map_at_10_diff1
value: 48.66909865150514
- type: nauc_map_at_10_max
value: 19.08422286721995
- type: nauc_map_at_10_std
value: -6.550612019706119
- type: nauc_map_at_1_diff1
value: 52.38347452384079
- type: nauc_map_at_1_max
value: 15.321152180069644
- type: nauc_map_at_1_std
value: -9.404483536334594
- type: nauc_map_at_20_diff1
value: 48.75447470976026
- type: nauc_map_at_20_max
value: 19.146566952856375
- type: nauc_map_at_20_std
value: -6.433993868501403
- type: nauc_map_at_3_diff1
value: 48.848389883387796
- type: nauc_map_at_3_max
value: 18.375903836766348
- type: nauc_map_at_3_std
value: -8.115547043380014
- type: nauc_map_at_5_diff1
value: 48.45908591701036
- type: nauc_map_at_5_max
value: 18.91547083095089
- type: nauc_map_at_5_std
value: -7.0216129648180345
- type: nauc_mrr_at_1000_diff1
value: 61.86764697591185
- type: nauc_mrr_at_1000_max
value: 23.37259724729802
- type: nauc_mrr_at_1000_std
value: -11.240785012446858
- type: nauc_mrr_at_100_diff1
value: 61.86299894038223
- type: nauc_mrr_at_100_max
value: 23.377993723874425
- type: nauc_mrr_at_100_std
value: -11.234547507145425
- type: nauc_mrr_at_10_diff1
value: 61.89745030747914
- type: nauc_mrr_at_10_max
value: 23.51179181879897
- type: nauc_mrr_at_10_std
value: -11.21542754410092
- type: nauc_mrr_at_1_diff1
value: 62.87718289404487
- type: nauc_mrr_at_1_max
value: 20.036585845650016
- type: nauc_mrr_at_1_std
value: -11.824468808796672
- type: nauc_mrr_at_20_diff1
value: 61.87965159241426
- type: nauc_mrr_at_20_max
value: 23.46555093254262
- type: nauc_mrr_at_20_std
value: -11.184929381359039
- type: nauc_mrr_at_3_diff1
value: 61.96154941592062
- type: nauc_mrr_at_3_max
value: 23.30622207138397
- type: nauc_mrr_at_3_std
value: -12.526039371134765
- type: nauc_mrr_at_5_diff1
value: 61.71707334295513
- type: nauc_mrr_at_5_max
value: 23.5270336630759
- type: nauc_mrr_at_5_std
value: -11.623135779913035
- type: nauc_ndcg_at_1000_diff1
value: 50.409731512636114
- type: nauc_ndcg_at_1000_max
value: 21.65102021635264
- type: nauc_ndcg_at_1000_std
value: -4.92217385213998
- type: nauc_ndcg_at_100_diff1
value: 49.61567352404278
- type: nauc_ndcg_at_100_max
value: 21.66920626618109
- type: nauc_ndcg_at_100_std
value: -4.449124212637381
- type: nauc_ndcg_at_10_diff1
value: 48.92696345322944
- type: nauc_ndcg_at_10_max
value: 21.43134916536533
- type: nauc_ndcg_at_10_std
value: -4.875245880370844
- type: nauc_ndcg_at_1_diff1
value: 62.87718289404487
- type: nauc_ndcg_at_1_max
value: 20.036585845650016
- type: nauc_ndcg_at_1_std
value: -11.824468808796672
- type: nauc_ndcg_at_20_diff1
value: 49.11732939114983
- type: nauc_ndcg_at_20_max
value: 21.599262409124997
- type: nauc_ndcg_at_20_std
value: -4.420171263096915
- type: nauc_ndcg_at_3_diff1
value: 50.45116917733788
- type: nauc_ndcg_at_3_max
value: 20.80003117781856
- type: nauc_ndcg_at_3_std
value: -8.452770588263899
- type: nauc_ndcg_at_5_diff1
value: 48.748106275538724
- type: nauc_ndcg_at_5_max
value: 21.211994383559222
- type: nauc_ndcg_at_5_std
value: -6.225634212256297
- type: nauc_precision_at_1000_diff1
value: -2.65994713494688
- type: nauc_precision_at_1000_max
value: 9.355224820997055
- type: nauc_precision_at_1000_std
value: 6.93245511496358
- type: nauc_precision_at_100_diff1
value: -4.855736259082823
- type: nauc_precision_at_100_max
value: 13.36331723677647
- type: nauc_precision_at_100_std
value: 11.902586270338599
- type: nauc_precision_at_10_diff1
value: 3.216750400462425
- type: nauc_precision_at_10_max
value: 21.25636976697531
- type: nauc_precision_at_10_std
value: 12.137963028905675
- type: nauc_precision_at_1_diff1
value: 62.87718289404487
- type: nauc_precision_at_1_max
value: 20.036585845650016
- type: nauc_precision_at_1_std
value: -11.824468808796672
- type: nauc_precision_at_20_diff1
value: -1.4118811423454232
- type: nauc_precision_at_20_max
value: 18.692537535200373
- type: nauc_precision_at_20_std
value: 14.30532206890571
- type: nauc_precision_at_3_diff1
value: 30.339371349812698
- type: nauc_precision_at_3_max
value: 26.408309865804767
- type: nauc_precision_at_3_std
value: -4.0916660664063365
- type: nauc_precision_at_5_diff1
value: 12.879323748381793
- type: nauc_precision_at_5_max
value: 25.69174163377847
- type: nauc_precision_at_5_std
value: 6.093421699798719
- type: nauc_recall_at_1000_diff1
value: 12.273439585549289
- type: nauc_recall_at_1000_max
value: 36.35356580077124
- type: nauc_recall_at_1000_std
value: 48.56058221377234
- type: nauc_recall_at_100_diff1
value: 11.024787894935523
- type: nauc_recall_at_100_max
value: 32.31032622508141
- type: nauc_recall_at_100_std
value: 37.7793363899341
- type: nauc_recall_at_10_diff1
value: 23.70968015354158
- type: nauc_recall_at_10_max
value: 26.928818056185793
- type: nauc_recall_at_10_std
value: 15.603934216247126
- type: nauc_recall_at_1_diff1
value: 52.38347452384079
- type: nauc_recall_at_1_max
value: 15.321152180069644
- type: nauc_recall_at_1_std
value: -9.404483536334594
- type: nauc_recall_at_20_diff1
value: 19.803465715185133
- type: nauc_recall_at_20_max
value: 29.156847915934563
- type: nauc_recall_at_20_std
value: 23.60980197996787
- type: nauc_recall_at_3_diff1
value: 36.64320565102789
- type: nauc_recall_at_3_max
value: 21.36937034204157
- type: nauc_recall_at_3_std
value: -4.728003775515406
- type: nauc_recall_at_5_diff1
value: 27.617739447790917
- type: nauc_recall_at_5_max
value: 24.241178855068966
- type: nauc_recall_at_5_std
value: 4.7483438956009385
- type: ndcg_at_1
value: 77.93299999999999
- type: ndcg_at_10
value: 85.319
- type: ndcg_at_100
value: 86.13300000000001
- type: ndcg_at_1000
value: 86.378
- type: ndcg_at_20
value: 85.668
- type: ndcg_at_3
value: 83.41
- type: ndcg_at_5
value: 84.61999999999999
- type: precision_at_1
value: 77.93299999999999
- type: precision_at_10
value: 10.318
- type: precision_at_100
value: 1.095
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_20
value: 5.2620000000000005
- type: precision_at_3
value: 32.063
- type: precision_at_5
value: 20.009
- type: recall_at_1
value: 72.161
- type: recall_at_10
value: 93.195
- type: recall_at_100
value: 96.455
- type: recall_at_1000
value: 97.98400000000001
- type: recall_at_20
value: 94.405
- type: recall_at_3
value: 88.061
- type: recall_at_5
value: 91.149
- task:
type: Retrieval
dataset:
name: MTEB FEVER (default)
type: mteb/fever
config: default
split: train
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: main_score
value: 86.26899999999999
- type: map_at_1
value: 72.449
- type: map_at_10
value: 82.03
- type: map_at_100
value: 82.296
- type: map_at_1000
value: 82.314
- type: map_at_20
value: 82.186
- type: map_at_3
value: 80.83500000000001
- type: map_at_5
value: 81.623
- type: mrr_at_1
value: 80.61287678717785
- type: mrr_at_10
value: 88.20712760713671
- type: mrr_at_100
value: 88.2553288872469
- type: mrr_at_1000
value: 88.2557710463964
- type: mrr_at_20
value: 88.24322934725306
- type: mrr_at_3
value: 87.53498467049145
- type: mrr_at_5
value: 88.02728956075644
- type: nauc_map_at_1000_diff1
value: 39.95106411898475
- type: nauc_map_at_1000_max
value: 9.654728508229924
- type: nauc_map_at_1000_std
value: -12.151917182587324
- type: nauc_map_at_100_diff1
value: 39.90468852636486
- type: nauc_map_at_100_max
value: 9.63167786931566
- type: nauc_map_at_100_std
value: -12.13835017665434
- type: nauc_map_at_10_diff1
value: 39.46904405606303
- type: nauc_map_at_10_max
value: 9.325543240387914
- type: nauc_map_at_10_std
value: -12.262834163465582
- type: nauc_map_at_1_diff1
value: 45.26755146712848
- type: nauc_map_at_1_max
value: 7.630523437842046
- type: nauc_map_at_1_std
value: -13.605176488921838
- type: nauc_map_at_20_diff1
value: 39.66853756850951
- type: nauc_map_at_20_max
value: 9.466678245833826
- type: nauc_map_at_20_std
value: -12.1759657705622
- type: nauc_map_at_3_diff1
value: 39.50530889629205
- type: nauc_map_at_3_max
value: 8.935845976271647
- type: nauc_map_at_3_std
value: -13.490923397297127
- type: nauc_map_at_5_diff1
value: 39.30464767323601
- type: nauc_map_at_5_max
value: 9.171957030191257
- type: nauc_map_at_5_std
value: -12.606276076261402
- type: nauc_mrr_at_1000_diff1
value: 61.318394082054084
- type: nauc_mrr_at_1000_max
value: 13.942830061515366
- type: nauc_mrr_at_1000_std
value: -23.553800119123004
- type: nauc_mrr_at_100_diff1
value: 61.31872034357897
- type: nauc_mrr_at_100_max
value: 13.94419608612362
- type: nauc_mrr_at_100_std
value: -23.553055204470468
- type: nauc_mrr_at_10_diff1
value: 61.31278822068778
- type: nauc_mrr_at_10_max
value: 14.031207581756833
- type: nauc_mrr_at_10_std
value: -23.584266855456125
- type: nauc_mrr_at_1_diff1
value: 62.02492634987989
- type: nauc_mrr_at_1_max
value: 12.426103478237204
- type: nauc_mrr_at_1_std
value: -21.462128885638194
- type: nauc_mrr_at_20_diff1
value: 61.315947046589194
- type: nauc_mrr_at_20_max
value: 13.974382096839708
- type: nauc_mrr_at_20_std
value: -23.54265969965065
- type: nauc_mrr_at_3_diff1
value: 61.15068247478852
- type: nauc_mrr_at_3_max
value: 14.166286324555182
- type: nauc_mrr_at_3_std
value: -24.539638368220956
- type: nauc_mrr_at_5_diff1
value: 61.27021769164674
- type: nauc_mrr_at_5_max
value: 14.18154187157247
- type: nauc_mrr_at_5_std
value: -23.86452902597003
- type: nauc_ndcg_at_1000_diff1
value: 42.33233119047479
- type: nauc_ndcg_at_1000_max
value: 11.565331883315682
- type: nauc_ndcg_at_1000_std
value: -12.178748604304223
- type: nauc_ndcg_at_100_diff1
value: 41.37693942834703
- type: nauc_ndcg_at_100_max
value: 11.328077098906448
- type: nauc_ndcg_at_100_std
value: -11.65003452480089
- type: nauc_ndcg_at_10_diff1
value: 39.766907902897806
- type: nauc_ndcg_at_10_max
value: 10.419530131217563
- type: nauc_ndcg_at_10_std
value: -12.020911860422277
- type: nauc_ndcg_at_1_diff1
value: 62.02492634987989
- type: nauc_ndcg_at_1_max
value: 12.426103478237204
- type: nauc_ndcg_at_1_std
value: -21.462128885638194
- type: nauc_ndcg_at_20_diff1
value: 40.21705035603188
- type: nauc_ndcg_at_20_max
value: 10.683814012195125
- type: nauc_ndcg_at_20_std
value: -11.69979166757856
- type: nauc_ndcg_at_3_diff1
value: 41.841810239951
- type: nauc_ndcg_at_3_max
value: 11.064226884457492
- type: nauc_ndcg_at_3_std
value: -14.913305538472887
- type: nauc_ndcg_at_5_diff1
value: 40.04853934119725
- type: nauc_ndcg_at_5_max
value: 10.57426706775158
- type: nauc_ndcg_at_5_std
value: -13.00820458664088
- type: nauc_precision_at_1000_diff1
value: 0.3851195665415394
- type: nauc_precision_at_1000_max
value: 10.719664087222139
- type: nauc_precision_at_1000_std
value: 4.060546631755005
- type: nauc_precision_at_100_diff1
value: -1.1672675937708246
- type: nauc_precision_at_100_max
value: 11.76706994923555
- type: nauc_precision_at_100_std
value: 6.682502752333491
- type: nauc_precision_at_10_diff1
value: -2.442502050324997
- type: nauc_precision_at_10_max
value: 9.982032022937407
- type: nauc_precision_at_10_std
value: 2.253707666893988
- type: nauc_precision_at_1_diff1
value: 62.02492634987989
- type: nauc_precision_at_1_max
value: 12.426103478237204
- type: nauc_precision_at_1_std
value: -21.462128885638194
- type: nauc_precision_at_20_diff1
value: -2.8909153065947506
- type: nauc_precision_at_20_max
value: 10.381422532792685
- type: nauc_precision_at_20_std
value: 4.880699672256141
- type: nauc_precision_at_3_diff1
value: 13.983067176933947
- type: nauc_precision_at_3_max
value: 11.941084260114476
- type: nauc_precision_at_3_std
value: -12.615897998044437
- type: nauc_precision_at_5_diff1
value: 2.6828558906895688
- type: nauc_precision_at_5_max
value: 10.78931350518598
- type: nauc_precision_at_5_std
value: -3.6318676409745203
- type: nauc_recall_at_1000_diff1
value: -8.672439956073752
- type: nauc_recall_at_1000_max
value: 23.244662839438533
- type: nauc_recall_at_1000_std
value: 40.18478080701204
- type: nauc_recall_at_100_diff1
value: -4.465450402228265
- type: nauc_recall_at_100_max
value: 15.424894718754112
- type: nauc_recall_at_100_std
value: 28.33923562045188
- type: nauc_recall_at_10_diff1
value: 5.824940232374557
- type: nauc_recall_at_10_max
value: 9.55369331654998
- type: nauc_recall_at_10_std
value: 8.705857917052512
- type: nauc_recall_at_1_diff1
value: 45.26755146712848
- type: nauc_recall_at_1_max
value: 7.630523437842046
- type: nauc_recall_at_1_std
value: -13.605176488921838
- type: nauc_recall_at_20_diff1
value: 1.1759544592030455
- type: nauc_recall_at_20_max
value: 10.197822183591066
- type: nauc_recall_at_20_std
value: 15.269883968698228
- type: nauc_recall_at_3_diff1
value: 21.902162866702668
- type: nauc_recall_at_3_max
value: 9.183669197010431
- type: nauc_recall_at_3_std
value: -8.694680120818228
- type: nauc_recall_at_5_diff1
value: 13.653229973783526
- type: nauc_recall_at_5_max
value: 9.716681689699136
- type: nauc_recall_at_5_std
value: -0.2880989596678906
- type: ndcg_at_1
value: 80.613
- type: ndcg_at_10
value: 86.26899999999999
- type: ndcg_at_100
value: 87.12100000000001
- type: ndcg_at_1000
value: 87.40299999999999
- type: ndcg_at_20
value: 86.651
- type: ndcg_at_3
value: 84.64500000000001
- type: ndcg_at_5
value: 85.59
- type: precision_at_1
value: 80.613
- type: precision_at_10
value: 10.763
- type: precision_at_100
value: 1.16
- type: precision_at_1000
value: 0.121
- type: precision_at_20
value: 5.525
- type: precision_at_3
value: 33.415
- type: precision_at_5
value: 20.808
- type: recall_at_1
value: 72.449
- type: recall_at_10
value: 93.29899999999999
- type: recall_at_100
value: 96.421
- type: recall_at_1000
value: 98.053
- type: recall_at_20
value: 94.53200000000001
- type: recall_at_3
value: 88.594
- type: recall_at_5
value: 91.268
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018 (default)
type: mteb/fiqa
config: default
split: dev
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: main_score
value: 42.362
- type: map_at_1
value: 22.462
- type: map_at_10
value: 34.717999999999996
- type: map_at_100
value: 36.561
- type: map_at_1000
value: 36.716
- type: map_at_20
value: 35.716
- type: map_at_3
value: 30.692000000000004
- type: map_at_5
value: 32.553
- type: mrr_at_1
value: 40.400000000000006
- type: mrr_at_10
value: 49.666428571428575
- type: mrr_at_100
value: 50.487418944361295
- type: mrr_at_1000
value: 50.51970264235359
- type: mrr_at_20
value: 50.13681738969665
- type: mrr_at_3
value: 47.56666666666667
- type: mrr_at_5
value: 48.85666666666666
- type: nauc_map_at_1000_diff1
value: 48.907747941693266
- type: nauc_map_at_1000_max
value: 33.36065026293396
- type: nauc_map_at_1000_std
value: 3.721763378995018
- type: nauc_map_at_100_diff1
value: 48.84727756808333
- type: nauc_map_at_100_max
value: 33.24694910436772
- type: nauc_map_at_100_std
value: 3.697147651138032
- type: nauc_map_at_10_diff1
value: 49.12331607215452
- type: nauc_map_at_10_max
value: 32.834495636560405
- type: nauc_map_at_10_std
value: 1.842046947246096
- type: nauc_map_at_1_diff1
value: 55.677889443633454
- type: nauc_map_at_1_max
value: 29.14429236432774
- type: nauc_map_at_1_std
value: -6.340931692625591
- type: nauc_map_at_20_diff1
value: 48.94402984571876
- type: nauc_map_at_20_max
value: 32.97842670695263
- type: nauc_map_at_20_std
value: 2.719721699786715
- type: nauc_map_at_3_diff1
value: 50.345833937052284
- type: nauc_map_at_3_max
value: 30.884637973451046
- type: nauc_map_at_3_std
value: -1.7032612225578052
- type: nauc_map_at_5_diff1
value: 49.4967472092422
- type: nauc_map_at_5_max
value: 32.04823415779001
- type: nauc_map_at_5_std
value: 0.11348259159930259
- type: nauc_mrr_at_1000_diff1
value: 53.4664833188161
- type: nauc_mrr_at_1000_max
value: 43.94130528629785
- type: nauc_mrr_at_1000_std
value: 6.3073911252297625
- type: nauc_mrr_at_100_diff1
value: 53.45784380239337
- type: nauc_mrr_at_100_max
value: 43.93103386484263
- type: nauc_mrr_at_100_std
value: 6.326857176825003
- type: nauc_mrr_at_10_diff1
value: 53.16999005935199
- type: nauc_mrr_at_10_max
value: 43.96148608534777
- type: nauc_mrr_at_10_std
value: 6.060709004613991
- type: nauc_mrr_at_1_diff1
value: 60.454425921819976
- type: nauc_mrr_at_1_max
value: 44.854772426097114
- type: nauc_mrr_at_1_std
value: 1.7461856169787138
- type: nauc_mrr_at_20_diff1
value: 53.4253064364276
- type: nauc_mrr_at_20_max
value: 43.93067697716607
- type: nauc_mrr_at_20_std
value: 6.132078452692403
- type: nauc_mrr_at_3_diff1
value: 53.75537772130037
- type: nauc_mrr_at_3_max
value: 43.612665358637784
- type: nauc_mrr_at_3_std
value: 5.3924645903759165
- type: nauc_mrr_at_5_diff1
value: 53.23878310443892
- type: nauc_mrr_at_5_max
value: 44.154494509820275
- type: nauc_mrr_at_5_std
value: 6.128619405551008
- type: nauc_ndcg_at_1000_diff1
value: 48.74084938103878
- type: nauc_ndcg_at_1000_max
value: 38.28143671294916
- type: nauc_ndcg_at_1000_std
value: 9.48205302587039
- type: nauc_ndcg_at_100_diff1
value: 47.86688156492628
- type: nauc_ndcg_at_100_max
value: 36.73585329637074
- type: nauc_ndcg_at_100_std
value: 9.886842243260414
- type: nauc_ndcg_at_10_diff1
value: 47.874511239978354
- type: nauc_ndcg_at_10_max
value: 35.7706902177999
- type: nauc_ndcg_at_10_std
value: 5.509978475259395
- type: nauc_ndcg_at_1_diff1
value: 60.454425921819976
- type: nauc_ndcg_at_1_max
value: 44.854772426097114
- type: nauc_ndcg_at_1_std
value: 1.7461856169787138
- type: nauc_ndcg_at_20_diff1
value: 47.98814179482003
- type: nauc_ndcg_at_20_max
value: 35.80004813629657
- type: nauc_ndcg_at_20_std
value: 6.632931898598267
- type: nauc_ndcg_at_3_diff1
value: 48.153795894304174
- type: nauc_ndcg_at_3_max
value: 36.28196005109544
- type: nauc_ndcg_at_3_std
value: 2.793405120536969
- type: nauc_ndcg_at_5_diff1
value: 47.88157624782777
- type: nauc_ndcg_at_5_max
value: 35.8935279838606
- type: nauc_ndcg_at_5_std
value: 4.0533111045329955
- type: nauc_precision_at_1000_diff1
value: -2.8869159343467605
- type: nauc_precision_at_1000_max
value: 21.13113140923961
- type: nauc_precision_at_1000_std
value: 24.212722433897408
- type: nauc_precision_at_100_diff1
value: 1.1594774420038931
- type: nauc_precision_at_100_max
value: 22.26831206413967
- type: nauc_precision_at_100_std
value: 29.81314341287538
- type: nauc_precision_at_10_diff1
value: 16.05087694540557
- type: nauc_precision_at_10_max
value: 29.809963401808616
- type: nauc_precision_at_10_std
value: 19.84954878024915
- type: nauc_precision_at_1_diff1
value: 60.454425921819976
- type: nauc_precision_at_1_max
value: 44.854772426097114
- type: nauc_precision_at_1_std
value: 1.7461856169787138
- type: nauc_precision_at_20_diff1
value: 9.748587851192008
- type: nauc_precision_at_20_max
value: 26.594687094795994
- type: nauc_precision_at_20_std
value: 24.150090984390086
- type: nauc_precision_at_3_diff1
value: 32.12647129845763
- type: nauc_precision_at_3_max
value: 38.608852867058395
- type: nauc_precision_at_3_std
value: 10.859746017200495
- type: nauc_precision_at_5_diff1
value: 23.827901543995793
- type: nauc_precision_at_5_max
value: 35.67460349135147
- type: nauc_precision_at_5_std
value: 15.9392352657892
- type: nauc_recall_at_1000_diff1
value: 35.2270895294026
- type: nauc_recall_at_1000_max
value: 42.19353048661261
- type: nauc_recall_at_1000_std
value: 38.04743729975647
- type: nauc_recall_at_100_diff1
value: 30.40548686084564
- type: nauc_recall_at_100_max
value: 25.92198350281053
- type: nauc_recall_at_100_std
value: 26.42423804923528
- type: nauc_recall_at_10_diff1
value: 36.90410693124825
- type: nauc_recall_at_10_max
value: 28.372843635416388
- type: nauc_recall_at_10_std
value: 7.8175786831627905
- type: nauc_recall_at_1_diff1
value: 55.677889443633454
- type: nauc_recall_at_1_max
value: 29.14429236432774
- type: nauc_recall_at_1_std
value: -6.340931692625591
- type: nauc_recall_at_20_diff1
value: 35.99767274556454
- type: nauc_recall_at_20_max
value: 27.04852589764223
- type: nauc_recall_at_20_std
value: 11.072120738834073
- type: nauc_recall_at_3_diff1
value: 41.25393288041596
- type: nauc_recall_at_3_max
value: 26.97488751275594
- type: nauc_recall_at_3_std
value: 0.21950460796902102
- type: nauc_recall_at_5_diff1
value: 38.26235012673011
- type: nauc_recall_at_5_max
value: 28.830247312497054
- type: nauc_recall_at_5_std
value: 4.072132303371853
- type: ndcg_at_1
value: 40.400000000000006
- type: ndcg_at_10
value: 42.362
- type: ndcg_at_100
value: 48.982
- type: ndcg_at_1000
value: 51.581999999999994
- type: ndcg_at_20
value: 44.937
- type: ndcg_at_3
value: 38.922000000000004
- type: ndcg_at_5
value: 39.596
- type: precision_at_1
value: 40.400000000000006
- type: precision_at_10
value: 11.18
- type: precision_at_100
value: 1.796
- type: precision_at_1000
value: 0.22300000000000003
- type: precision_at_20
value: 6.710000000000001
- type: precision_at_3
value: 24.532999999999998
- type: precision_at_5
value: 17.52
- type: recall_at_1
value: 22.462
- type: recall_at_10
value: 49.382999999999996
- type: recall_at_100
value: 74.494
- type: recall_at_1000
value: 90.427
- type: recall_at_20
value: 57.524
- type: recall_at_3
value: 36.028999999999996
- type: recall_at_5
value: 41.089
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018 (default)
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: main_score
value: 40.558
- type: map_at_1
value: 20.451
- type: map_at_10
value: 33.041
- type: map_at_100
value: 34.945
- type: map_at_1000
value: 35.136
- type: map_at_20
value: 34.114
- type: map_at_3
value: 29.03
- type: map_at_5
value: 31.423000000000002
- type: mrr_at_1
value: 39.50617283950617
- type: mrr_at_10
value: 48.47491671565746
- type: mrr_at_100
value: 49.25528865170789
- type: mrr_at_1000
value: 49.30492546288068
- type: mrr_at_20
value: 48.90574296444466
- type: mrr_at_3
value: 46.16769547325103
- type: mrr_at_5
value: 47.47170781893004
- type: nauc_map_at_1000_diff1
value: 42.33223793532006
- type: nauc_map_at_1000_max
value: 28.178269647825054
- type: nauc_map_at_1000_std
value: 4.694684846288862
- type: nauc_map_at_100_diff1
value: 42.23585502790571
- type: nauc_map_at_100_max
value: 28.09325262656699
- type: nauc_map_at_100_std
value: 4.635779769876692
- type: nauc_map_at_10_diff1
value: 42.29892463032549
- type: nauc_map_at_10_max
value: 26.885463946159042
- type: nauc_map_at_10_std
value: 3.439834407388629
- type: nauc_map_at_1_diff1
value: 48.29160916418846
- type: nauc_map_at_1_max
value: 20.558365030918516
- type: nauc_map_at_1_std
value: -0.44854024391070973
- type: nauc_map_at_20_diff1
value: 42.1753422786911
- type: nauc_map_at_20_max
value: 27.631926253435857
- type: nauc_map_at_20_std
value: 4.002081405774815
- type: nauc_map_at_3_diff1
value: 44.56188133973965
- type: nauc_map_at_3_max
value: 25.936921564225862
- type: nauc_map_at_3_std
value: 2.3757170812866804
- type: nauc_map_at_5_diff1
value: 43.183022909231234
- type: nauc_map_at_5_max
value: 26.115662732325145
- type: nauc_map_at_5_std
value: 2.584799983984164
- type: nauc_mrr_at_1000_diff1
value: 49.90329578326087
- type: nauc_mrr_at_1000_max
value: 35.86029877981383
- type: nauc_mrr_at_1000_std
value: 6.305323054154965
- type: nauc_mrr_at_100_diff1
value: 49.86929948930004
- type: nauc_mrr_at_100_max
value: 35.850698951924386
- type: nauc_mrr_at_100_std
value: 6.3311346338729
- type: nauc_mrr_at_10_diff1
value: 49.943023388207195
- type: nauc_mrr_at_10_max
value: 35.98541218521815
- type: nauc_mrr_at_10_std
value: 6.0382083805515805
- type: nauc_mrr_at_1_diff1
value: 53.793790143858146
- type: nauc_mrr_at_1_max
value: 33.46215151905759
- type: nauc_mrr_at_1_std
value: 4.310648959342706
- type: nauc_mrr_at_20_diff1
value: 49.78255203343183
- type: nauc_mrr_at_20_max
value: 35.67982988010112
- type: nauc_mrr_at_20_std
value: 6.097005492945753
- type: nauc_mrr_at_3_diff1
value: 50.78094748778293
- type: nauc_mrr_at_3_max
value: 36.31047156376032
- type: nauc_mrr_at_3_std
value: 5.551964274549184
- type: nauc_mrr_at_5_diff1
value: 50.22070037219334
- type: nauc_mrr_at_5_max
value: 35.82391876186752
- type: nauc_mrr_at_5_std
value: 5.989440675787551
- type: nauc_ndcg_at_1000_diff1
value: 43.29067795693791
- type: nauc_ndcg_at_1000_max
value: 32.770853091465156
- type: nauc_ndcg_at_1000_std
value: 9.075882750305011
- type: nauc_ndcg_at_100_diff1
value: 41.92437892838288
- type: nauc_ndcg_at_100_max
value: 31.977508922932984
- type: nauc_ndcg_at_100_std
value: 9.345565382601217
- type: nauc_ndcg_at_10_diff1
value: 41.86314487224504
- type: nauc_ndcg_at_10_max
value: 29.565216296817827
- type: nauc_ndcg_at_10_std
value: 5.173875408923697
- type: nauc_ndcg_at_1_diff1
value: 53.793790143858146
- type: nauc_ndcg_at_1_max
value: 33.46215151905759
- type: nauc_ndcg_at_1_std
value: 4.310648959342706
- type: nauc_ndcg_at_20_diff1
value: 41.426074027410706
- type: nauc_ndcg_at_20_max
value: 30.01328944145506
- type: nauc_ndcg_at_20_std
value: 6.169601231532484
- type: nauc_ndcg_at_3_diff1
value: 44.27395561376024
- type: nauc_ndcg_at_3_max
value: 31.503088046380313
- type: nauc_ndcg_at_3_std
value: 5.208532778972112
- type: nauc_ndcg_at_5_diff1
value: 42.81322677405853
- type: nauc_ndcg_at_5_max
value: 29.407079874158846
- type: nauc_ndcg_at_5_std
value: 4.483826520898586
- type: nauc_precision_at_1000_diff1
value: 1.621703595454423
- type: nauc_precision_at_1000_max
value: 23.880102194550204
- type: nauc_precision_at_1000_std
value: 13.80100184324797
- type: nauc_precision_at_100_diff1
value: 6.575602396019008
- type: nauc_precision_at_100_max
value: 28.264948074327478
- type: nauc_precision_at_100_std
value: 17.684872532407283
- type: nauc_precision_at_10_diff1
value: 16.8328055869745
- type: nauc_precision_at_10_max
value: 30.24323560145753
- type: nauc_precision_at_10_std
value: 11.304688858560379
- type: nauc_precision_at_1_diff1
value: 53.793790143858146
- type: nauc_precision_at_1_max
value: 33.46215151905759
- type: nauc_precision_at_1_std
value: 4.310648959342706
- type: nauc_precision_at_20_diff1
value: 12.904056330326577
- type: nauc_precision_at_20_max
value: 28.969423687460438
- type: nauc_precision_at_20_std
value: 12.029389701897678
- type: nauc_precision_at_3_diff1
value: 31.1308248469768
- type: nauc_precision_at_3_max
value: 33.86869420849858
- type: nauc_precision_at_3_std
value: 8.474707630694647
- type: nauc_precision_at_5_diff1
value: 23.15042207925308
- type: nauc_precision_at_5_max
value: 29.597450076869325
- type: nauc_precision_at_5_std
value: 9.148285720232177
- type: nauc_recall_at_1000_diff1
value: 30.844750307544324
- type: nauc_recall_at_1000_max
value: 30.58773950265899
- type: nauc_recall_at_1000_std
value: 35.28925560349968
- type: nauc_recall_at_100_diff1
value: 24.148211328845242
- type: nauc_recall_at_100_max
value: 26.79994369269315
- type: nauc_recall_at_100_std
value: 24.218335174944343
- type: nauc_recall_at_10_diff1
value: 28.95261846685996
- type: nauc_recall_at_10_max
value: 22.666727959324515
- type: nauc_recall_at_10_std
value: 5.513769892416629
- type: nauc_recall_at_1_diff1
value: 48.29160916418846
- type: nauc_recall_at_1_max
value: 20.558365030918516
- type: nauc_recall_at_1_std
value: -0.44854024391070973
- type: nauc_recall_at_20_diff1
value: 26.837633000701228
- type: nauc_recall_at_20_max
value: 22.535529588484504
- type: nauc_recall_at_20_std
value: 8.42648285205937
- type: nauc_recall_at_3_diff1
value: 38.25026260610774
- type: nauc_recall_at_3_max
value: 25.050690202547578
- type: nauc_recall_at_3_std
value: 3.1080253121438766
- type: nauc_recall_at_5_diff1
value: 34.50423249642314
- type: nauc_recall_at_5_max
value: 22.85592363545583
- type: nauc_recall_at_5_std
value: 3.640758629683203
- type: ndcg_at_1
value: 39.506
- type: ndcg_at_10
value: 40.558
- type: ndcg_at_100
value: 47.247
- type: ndcg_at_1000
value: 50.437
- type: ndcg_at_20
value: 43.162
- type: ndcg_at_3
value: 37.261
- type: ndcg_at_5
value: 38.391999999999996
- type: precision_at_1
value: 39.506
- type: precision_at_10
value: 11.142000000000001
- type: precision_at_100
value: 1.813
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_20
value: 6.7589999999999995
- type: precision_at_3
value: 25.051000000000002
- type: precision_at_5
value: 18.395
- type: recall_at_1
value: 20.451
- type: recall_at_10
value: 46.79
- type: recall_at_100
value: 71.353
- type: recall_at_1000
value: 90.536
- type: recall_at_20
value: 54.517
- type: recall_at_3
value: 33.672000000000004
- type: recall_at_5
value: 39.678999999999995
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018 (default)
type: mteb/fiqa
config: default
split: train
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: main_score
value: 42.217
- type: map_at_1
value: 21.418
- type: map_at_10
value: 34.437
- type: map_at_100
value: 36.254999999999995
- type: map_at_1000
value: 36.424
- type: map_at_20
value: 35.449999999999996
- type: map_at_3
value: 30.122
- type: map_at_5
value: 32.46
- type: mrr_at_1
value: 40.21818181818182
- type: mrr_at_10
value: 49.078658008658
- type: mrr_at_100
value: 49.9159011457498
- type: mrr_at_1000
value: 49.956067513298244
- type: mrr_at_20
value: 49.59237755036657
- type: mrr_at_3
value: 46.62424242424243
- type: mrr_at_5
value: 48.02424242424242
- type: nauc_map_at_1000_diff1
value: 40.17457855495732
- type: nauc_map_at_1000_max
value: 25.311659240262102
- type: nauc_map_at_1000_std
value: 5.059430044056764
- type: nauc_map_at_100_diff1
value: 40.151268380342515
- type: nauc_map_at_100_max
value: 25.21648990114121
- type: nauc_map_at_100_std
value: 5.026493196551854
- type: nauc_map_at_10_diff1
value: 40.040465022431576
- type: nauc_map_at_10_max
value: 24.355474347016965
- type: nauc_map_at_10_std
value: 3.5640713532110886
- type: nauc_map_at_1_diff1
value: 44.23533684826564
- type: nauc_map_at_1_max
value: 16.26051705102941
- type: nauc_map_at_1_std
value: -0.9304514636140965
- type: nauc_map_at_20_diff1
value: 40.07253921544327
- type: nauc_map_at_20_max
value: 24.840445695825426
- type: nauc_map_at_20_std
value: 4.3692544013620225
- type: nauc_map_at_3_diff1
value: 40.802965930319765
- type: nauc_map_at_3_max
value: 21.578173190568766
- type: nauc_map_at_3_std
value: 0.6624363947378223
- type: nauc_map_at_5_diff1
value: 40.41105488775194
- type: nauc_map_at_5_max
value: 22.962784936768788
- type: nauc_map_at_5_std
value: 2.061146134529408
- type: nauc_mrr_at_1000_diff1
value: 45.39271593141525
- type: nauc_mrr_at_1000_max
value: 32.71504880799005
- type: nauc_mrr_at_1000_std
value: 7.416955987371169
- type: nauc_mrr_at_100_diff1
value: 45.37668004444169
- type: nauc_mrr_at_100_max
value: 32.71589234567116
- type: nauc_mrr_at_100_std
value: 7.429449409497287
- type: nauc_mrr_at_10_diff1
value: 45.296174959712616
- type: nauc_mrr_at_10_max
value: 32.717197303838624
- type: nauc_mrr_at_10_std
value: 7.283098786281713
- type: nauc_mrr_at_1_diff1
value: 49.480470078543185
- type: nauc_mrr_at_1_max
value: 31.68605495492876
- type: nauc_mrr_at_1_std
value: 4.5814903060120304
- type: nauc_mrr_at_20_diff1
value: 45.36317431590108
- type: nauc_mrr_at_20_max
value: 32.706079440239364
- type: nauc_mrr_at_20_std
value: 7.382450020586175
- type: nauc_mrr_at_3_diff1
value: 46.001809326251454
- type: nauc_mrr_at_3_max
value: 32.46816837141833
- type: nauc_mrr_at_3_std
value: 6.707194036403839
- type: nauc_mrr_at_5_diff1
value: 45.46057942438407
- type: nauc_mrr_at_5_max
value: 32.408646791629394
- type: nauc_mrr_at_5_std
value: 7.11971218876357
- type: nauc_ndcg_at_1000_diff1
value: 40.315538048774236
- type: nauc_ndcg_at_1000_max
value: 29.577764682607636
- type: nauc_ndcg_at_1000_std
value: 9.802905608930201
- type: nauc_ndcg_at_100_diff1
value: 39.69359815845272
- type: nauc_ndcg_at_100_max
value: 28.64875657558225
- type: nauc_ndcg_at_100_std
value: 10.074769927489001
- type: nauc_ndcg_at_10_diff1
value: 39.42499828447507
- type: nauc_ndcg_at_10_max
value: 27.18259534858955
- type: nauc_ndcg_at_10_std
value: 6.306777798983758
- type: nauc_ndcg_at_1_diff1
value: 49.480470078543185
- type: nauc_ndcg_at_1_max
value: 31.68605495492876
- type: nauc_ndcg_at_1_std
value: 4.5814903060120304
- type: nauc_ndcg_at_20_diff1
value: 39.51807220300204
- type: nauc_ndcg_at_20_max
value: 27.62717063554887
- type: nauc_ndcg_at_20_std
value: 7.612068802546053
- type: nauc_ndcg_at_3_diff1
value: 41.04629803524383
- type: nauc_ndcg_at_3_max
value: 27.84818206495344
- type: nauc_ndcg_at_3_std
value: 4.828135065879767
- type: nauc_ndcg_at_5_diff1
value: 40.01813972950631
- type: nauc_ndcg_at_5_max
value: 26.524741885273233
- type: nauc_ndcg_at_5_std
value: 5.068529272139508
- type: nauc_precision_at_1000_diff1
value: 0.5470497141171098
- type: nauc_precision_at_1000_max
value: 24.170306355297317
- type: nauc_precision_at_1000_std
value: 18.50896415433208
- type: nauc_precision_at_100_diff1
value: 7.241268035021293
- type: nauc_precision_at_100_max
value: 28.939185497818976
- type: nauc_precision_at_100_std
value: 24.22708062648789
- type: nauc_precision_at_10_diff1
value: 18.067403140646473
- type: nauc_precision_at_10_max
value: 33.44992314337708
- type: nauc_precision_at_10_std
value: 16.388600267880083
- type: nauc_precision_at_1_diff1
value: 49.480470078543185
- type: nauc_precision_at_1_max
value: 31.68605495492876
- type: nauc_precision_at_1_std
value: 4.5814903060120304
- type: nauc_precision_at_20_diff1
value: 14.651721506297456
- type: nauc_precision_at_20_max
value: 32.54709281419177
- type: nauc_precision_at_20_std
value: 20.017505605751566
- type: nauc_precision_at_3_diff1
value: 30.04485384205624
- type: nauc_precision_at_3_max
value: 34.145525798671464
- type: nauc_precision_at_3_std
value: 9.385558743613709
- type: nauc_precision_at_5_diff1
value: 24.483639848034873
- type: nauc_precision_at_5_max
value: 33.41057528286542
- type: nauc_precision_at_5_std
value: 12.384466658411828
- type: nauc_recall_at_1000_diff1
value: 21.123402730271586
- type: nauc_recall_at_1000_max
value: 26.29127489133915
- type: nauc_recall_at_1000_std
value: 41.48297645436926
- type: nauc_recall_at_100_diff1
value: 23.190745050134876
- type: nauc_recall_at_100_max
value: 20.96730541491135
- type: nauc_recall_at_100_std
value: 24.579783248787745
- type: nauc_recall_at_10_diff1
value: 28.523056048729163
- type: nauc_recall_at_10_max
value: 21.334211258862197
- type: nauc_recall_at_10_std
value: 7.805738098876508
- type: nauc_recall_at_1_diff1
value: 44.23533684826564
- type: nauc_recall_at_1_max
value: 16.26051705102941
- type: nauc_recall_at_1_std
value: -0.9304514636140965
- type: nauc_recall_at_20_diff1
value: 27.256571523147745
- type: nauc_recall_at_20_max
value: 21.103408026023196
- type: nauc_recall_at_20_std
value: 11.445449145769603
- type: nauc_recall_at_3_diff1
value: 34.31934638156186
- type: nauc_recall_at_3_max
value: 18.78403262209476
- type: nauc_recall_at_3_std
value: 1.4901630112302768
- type: nauc_recall_at_5_diff1
value: 31.63689568492167
- type: nauc_recall_at_5_max
value: 19.123482920636295
- type: nauc_recall_at_5_std
value: 3.880507575208801
- type: ndcg_at_1
value: 40.217999999999996
- type: ndcg_at_10
value: 42.217
- type: ndcg_at_100
value: 48.762
- type: ndcg_at_1000
value: 51.63100000000001
- type: ndcg_at_20
value: 44.767
- type: ndcg_at_3
value: 38.057
- type: ndcg_at_5
value: 39.427
- type: precision_at_1
value: 40.217999999999996
- type: precision_at_10
value: 11.376
- type: precision_at_100
value: 1.815
- type: precision_at_1000
value: 0.232
- type: precision_at_20
value: 6.775
- type: precision_at_3
value: 24.812
- type: precision_at_5
value: 18.185000000000002
- type: recall_at_1
value: 21.418
- type: recall_at_10
value: 49.854
- type: recall_at_100
value: 74.37700000000001
- type: recall_at_1000
value: 91.72200000000001
- type: recall_at_20
value: 57.857
- type: recall_at_3
value: 35.221999999999994
- type: recall_at_5
value: 41.404999999999994
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA (default)
type: mteb/hotpotqa
config: default
split: dev
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: main_score
value: 74.175
- type: map_at_1
value: 41.041
- type: map_at_10
value: 66.31099999999999
- type: map_at_100
value: 67.104
- type: map_at_1000
value: 67.161
- type: map_at_20
value: 66.782
- type: map_at_3
value: 62.925
- type: map_at_5
value: 65.134
- type: mrr_at_1
value: 82.08187993390858
- type: mrr_at_10
value: 87.22340242626639
- type: mrr_at_100
value: 87.36059128975714
- type: mrr_at_1000
value: 87.364851053774
- type: mrr_at_20
value: 87.31789909538858
- type: mrr_at_3
value: 86.40842053729882
- type: mrr_at_5
value: 86.97478734471574
- type: nauc_map_at_1000_diff1
value: 15.036034548647908
- type: nauc_map_at_1000_max
value: 25.75085303993812
- type: nauc_map_at_1000_std
value: 17.465887392707984
- type: nauc_map_at_100_diff1
value: 15.005143661829045
- type: nauc_map_at_100_max
value: 25.745125861339808
- type: nauc_map_at_100_std
value: 17.504053167240222
- type: nauc_map_at_10_diff1
value: 14.893496342739656
- type: nauc_map_at_10_max
value: 25.56792154122193
- type: nauc_map_at_10_std
value: 17.09812367943331
- type: nauc_map_at_1_diff1
value: 69.96236855623434
- type: nauc_map_at_1_max
value: 49.448851447506925
- type: nauc_map_at_1_std
value: 2.889004800052033
- type: nauc_map_at_20_diff1
value: 14.907169096850453
- type: nauc_map_at_20_max
value: 25.664564175861543
- type: nauc_map_at_20_std
value: 17.43387597817089
- type: nauc_map_at_3_diff1
value: 14.819429540966336
- type: nauc_map_at_3_max
value: 24.63784431578106
- type: nauc_map_at_3_std
value: 13.51698248320979
- type: nauc_map_at_5_diff1
value: 14.793999397375302
- type: nauc_map_at_5_max
value: 25.273543691930023
- type: nauc_map_at_5_std
value: 15.528475200820887
- type: nauc_mrr_at_1000_diff1
value: 69.09310778251964
- type: nauc_mrr_at_1000_max
value: 51.91323689768607
- type: nauc_mrr_at_1000_std
value: 5.0427302617514265
- type: nauc_mrr_at_100_diff1
value: 69.08774366986974
- type: nauc_mrr_at_100_max
value: 51.91649892721352
- type: nauc_mrr_at_100_std
value: 5.055693527150247
- type: nauc_mrr_at_10_diff1
value: 69.04162799026531
- type: nauc_mrr_at_10_max
value: 51.94707923303371
- type: nauc_mrr_at_10_std
value: 5.073028039967743
- type: nauc_mrr_at_1_diff1
value: 69.96236855623434
- type: nauc_mrr_at_1_max
value: 49.448851447506925
- type: nauc_mrr_at_1_std
value: 2.889004800052033
- type: nauc_mrr_at_20_diff1
value: 69.07119075448549
- type: nauc_mrr_at_20_max
value: 51.945891610724935
- type: nauc_mrr_at_20_std
value: 5.120139371969651
- type: nauc_mrr_at_3_diff1
value: 69.04728116249032
- type: nauc_mrr_at_3_max
value: 52.19716746949592
- type: nauc_mrr_at_3_std
value: 4.455467076969271
- type: nauc_mrr_at_5_diff1
value: 68.99508759550626
- type: nauc_mrr_at_5_max
value: 52.00068611382036
- type: nauc_mrr_at_5_std
value: 4.853875045723172
- type: nauc_ndcg_at_1000_diff1
value: 21.56019247861624
- type: nauc_ndcg_at_1000_max
value: 29.992075228189385
- type: nauc_ndcg_at_1000_std
value: 19.52149913625525
- type: nauc_ndcg_at_100_diff1
value: 20.57182767444212
- type: nauc_ndcg_at_100_max
value: 29.671296370713957
- type: nauc_ndcg_at_100_std
value: 20.459885577163732
- type: nauc_ndcg_at_10_diff1
value: 19.899707356785555
- type: nauc_ndcg_at_10_max
value: 28.92080969944076
- type: nauc_ndcg_at_10_std
value: 18.985520950121536
- type: nauc_ndcg_at_1_diff1
value: 69.96236855623434
- type: nauc_ndcg_at_1_max
value: 49.448851447506925
- type: nauc_ndcg_at_1_std
value: 2.889004800052033
- type: nauc_ndcg_at_20_diff1
value: 19.817147184816825
- type: nauc_ndcg_at_20_max
value: 29.10607440644831
- type: nauc_ndcg_at_20_std
value: 20.055386680932457
- type: nauc_ndcg_at_3_diff1
value: 20.563849560253576
- type: nauc_ndcg_at_3_max
value: 28.040961943107828
- type: nauc_ndcg_at_3_std
value: 13.092847825660659
- type: nauc_ndcg_at_5_diff1
value: 19.974303695690292
- type: nauc_ndcg_at_5_max
value: 28.543507472783176
- type: nauc_ndcg_at_5_std
value: 15.984138613333693
- type: nauc_precision_at_1000_diff1
value: -3.673616783114865
- type: nauc_precision_at_1000_max
value: 23.053922175840807
- type: nauc_precision_at_1000_std
value: 49.11991772792811
- type: nauc_precision_at_100_diff1
value: -1.216591429661993
- type: nauc_precision_at_100_max
value: 22.391264965682506
- type: nauc_precision_at_100_std
value: 42.72424514027106
- type: nauc_precision_at_10_diff1
value: 3.616065203597004
- type: nauc_precision_at_10_max
value: 21.983666160174664
- type: nauc_precision_at_10_std
value: 29.4059520720441
- type: nauc_precision_at_1_diff1
value: 69.96236855623434
- type: nauc_precision_at_1_max
value: 49.448851447506925
- type: nauc_precision_at_1_std
value: 2.889004800052033
- type: nauc_precision_at_20_diff1
value: 1.0139909003288345
- type: nauc_precision_at_20_max
value: 21.586196202262293
- type: nauc_precision_at_20_std
value: 34.67062542720318
- type: nauc_precision_at_3_diff1
value: 8.864530763513228
- type: nauc_precision_at_3_max
value: 22.75139236955371
- type: nauc_precision_at_3_std
value: 15.906499760044241
- type: nauc_precision_at_5_diff1
value: 6.115146064055562
- type: nauc_precision_at_5_max
value: 22.511552769119312
- type: nauc_precision_at_5_std
value: 21.404838493866283
- type: nauc_recall_at_1000_diff1
value: -3.673616783114743
- type: nauc_recall_at_1000_max
value: 23.053922175841222
- type: nauc_recall_at_1000_std
value: 49.11991772792807
- type: nauc_recall_at_100_diff1
value: -1.2165914296620488
- type: nauc_recall_at_100_max
value: 22.391264965682254
- type: nauc_recall_at_100_std
value: 42.72424514027085
- type: nauc_recall_at_10_diff1
value: 3.6160652035968592
- type: nauc_recall_at_10_max
value: 21.983666160174625
- type: nauc_recall_at_10_std
value: 29.405952072043952
- type: nauc_recall_at_1_diff1
value: 69.96236855623434
- type: nauc_recall_at_1_max
value: 49.448851447506925
- type: nauc_recall_at_1_std
value: 2.889004800052033
- type: nauc_recall_at_20_diff1
value: 1.0139909003285874
- type: nauc_recall_at_20_max
value: 21.586196202262176
- type: nauc_recall_at_20_std
value: 34.67062542720296
- type: nauc_recall_at_3_diff1
value: 8.864530763513235
- type: nauc_recall_at_3_max
value: 22.75139236955375
- type: nauc_recall_at_3_std
value: 15.906499760044227
- type: nauc_recall_at_5_diff1
value: 6.115146064055457
- type: nauc_recall_at_5_max
value: 22.51155276911919
- type: nauc_recall_at_5_std
value: 21.404838493866148
- type: ndcg_at_1
value: 82.082
- type: ndcg_at_10
value: 74.175
- type: ndcg_at_100
value: 76.818
- type: ndcg_at_1000
value: 77.86999999999999
- type: ndcg_at_20
value: 75.324
- type: ndcg_at_3
value: 69.505
- type: ndcg_at_5
value: 72.25399999999999
- type: precision_at_1
value: 82.082
- type: precision_at_10
value: 15.459999999999999
- type: precision_at_100
value: 1.7500000000000002
- type: precision_at_1000
value: 0.189
- type: precision_at_20
value: 8.099
- type: precision_at_3
value: 44.696999999999996
- type: precision_at_5
value: 28.992
- type: recall_at_1
value: 41.041
- type: recall_at_10
value: 77.29899999999999
- type: recall_at_100
value: 87.516
- type: recall_at_1000
value: 94.428
- type: recall_at_20
value: 80.99
- type: recall_at_3
value: 67.04599999999999
- type: recall_at_5
value: 72.48
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA (default)
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: main_score
value: 72.099
- type: map_at_1
value: 39.993
- type: map_at_10
value: 63.906
- type: map_at_100
value: 64.768
- type: map_at_1000
value: 64.825
- type: map_at_20
value: 64.434
- type: map_at_3
value: 60.428000000000004
- type: map_at_5
value: 62.65500000000001
- type: mrr_at_1
value: 79.9864956110736
- type: mrr_at_10
value: 85.597317342422
- type: mrr_at_100
value: 85.75538451417644
- type: mrr_at_1000
value: 85.76054586395492
- type: mrr_at_20
value: 85.70043434608681
- type: mrr_at_3
value: 84.7085302723385
- type: mrr_at_5
value: 85.3006977267612
- type: nauc_map_at_1000_diff1
value: 14.879240191611792
- type: nauc_map_at_1000_max
value: 17.352504404367227
- type: nauc_map_at_1000_std
value: 12.058834112176255
- type: nauc_map_at_100_diff1
value: 14.849169805835471
- type: nauc_map_at_100_max
value: 17.33232999123778
- type: nauc_map_at_100_std
value: 12.077792740295559
- type: nauc_map_at_10_diff1
value: 14.739688896718311
- type: nauc_map_at_10_max
value: 17.22236218684584
- type: nauc_map_at_10_std
value: 11.601003434736638
- type: nauc_map_at_1_diff1
value: 65.4983976623848
- type: nauc_map_at_1_max
value: 42.1548067294384
- type: nauc_map_at_1_std
value: -0.664907685382726
- type: nauc_map_at_20_diff1
value: 14.775964716898365
- type: nauc_map_at_20_max
value: 17.28407185806239
- type: nauc_map_at_20_std
value: 11.938629322344202
- type: nauc_map_at_3_diff1
value: 15.187257332075571
- type: nauc_map_at_3_max
value: 16.50469797840979
- type: nauc_map_at_3_std
value: 8.427502806629832
- type: nauc_map_at_5_diff1
value: 14.734331855452067
- type: nauc_map_at_5_max
value: 16.996649506768556
- type: nauc_map_at_5_std
value: 10.675417416580272
- type: nauc_mrr_at_1000_diff1
value: 64.77314045859816
- type: nauc_mrr_at_1000_max
value: 45.07640869774897
- type: nauc_mrr_at_1000_std
value: 1.1223707370777345
- type: nauc_mrr_at_100_diff1
value: 64.7736064468281
- type: nauc_mrr_at_100_max
value: 45.081668625794045
- type: nauc_mrr_at_100_std
value: 1.1301861065543417
- type: nauc_mrr_at_10_diff1
value: 64.7436125478473
- type: nauc_mrr_at_10_max
value: 45.17530304621016
- type: nauc_mrr_at_10_std
value: 1.1574487279881827
- type: nauc_mrr_at_1_diff1
value: 65.4983976623848
- type: nauc_mrr_at_1_max
value: 42.1548067294384
- type: nauc_mrr_at_1_std
value: -0.664907685382726
- type: nauc_mrr_at_20_diff1
value: 64.74897914048353
- type: nauc_mrr_at_20_max
value: 45.10718259984782
- type: nauc_mrr_at_20_std
value: 1.1414090900637834
- type: nauc_mrr_at_3_diff1
value: 64.66047914937114
- type: nauc_mrr_at_3_max
value: 45.240654624502135
- type: nauc_mrr_at_3_std
value: 0.6018871726391307
- type: nauc_mrr_at_5_diff1
value: 64.62465365659278
- type: nauc_mrr_at_5_max
value: 45.26782758375944
- type: nauc_mrr_at_5_std
value: 0.9729934670795005
- type: nauc_ndcg_at_1000_diff1
value: 20.735117506723284
- type: nauc_ndcg_at_1000_max
value: 21.973730279574237
- type: nauc_ndcg_at_1000_std
value: 14.435511069333268
- type: nauc_ndcg_at_100_diff1
value: 19.92424972281059
- type: nauc_ndcg_at_100_max
value: 21.46124911457025
- type: nauc_ndcg_at_100_std
value: 14.997278102074985
- type: nauc_ndcg_at_10_diff1
value: 19.482173354148614
- type: nauc_ndcg_at_10_max
value: 21.057780839586314
- type: nauc_ndcg_at_10_std
value: 13.183889323902614
- type: nauc_ndcg_at_1_diff1
value: 65.4983976623848
- type: nauc_ndcg_at_1_max
value: 42.1548067294384
- type: nauc_ndcg_at_1_std
value: -0.664907685382726
- type: nauc_ndcg_at_20_diff1
value: 19.492746981314262
- type: nauc_ndcg_at_20_max
value: 21.17506677843973
- type: nauc_ndcg_at_20_std
value: 14.226350927505344
- type: nauc_ndcg_at_3_diff1
value: 20.783583144063783
- type: nauc_ndcg_at_3_max
value: 20.412720533103773
- type: nauc_ndcg_at_3_std
value: 8.1573992813502
- type: nauc_ndcg_at_5_diff1
value: 19.678652006506965
- type: nauc_ndcg_at_5_max
value: 20.797671396298504
- type: nauc_ndcg_at_5_std
value: 11.325414041574488
- type: nauc_precision_at_1000_diff1
value: -10.450656807230294
- type: nauc_precision_at_1000_max
value: 13.984468023954232
- type: nauc_precision_at_1000_std
value: 47.62472479785625
- type: nauc_precision_at_100_diff1
value: -4.196473742286918
- type: nauc_precision_at_100_max
value: 12.121665049576427
- type: nauc_precision_at_100_std
value: 37.19008451156229
- type: nauc_precision_at_10_diff1
value: 3.061787271972301
- type: nauc_precision_at_10_max
value: 14.242591133970839
- type: nauc_precision_at_10_std
value: 22.61377683902424
- type: nauc_precision_at_1_diff1
value: 65.4983976623848
- type: nauc_precision_at_1_max
value: 42.1548067294384
- type: nauc_precision_at_1_std
value: -0.664907685382726
- type: nauc_precision_at_20_diff1
value: 0.6113817731563579
- type: nauc_precision_at_20_max
value: 13.40678529536442
- type: nauc_precision_at_20_std
value: 27.36583188393757
- type: nauc_precision_at_3_diff1
value: 9.006850832399767
- type: nauc_precision_at_3_max
value: 14.709279058037167
- type: nauc_precision_at_3_std
value: 11.083943561368818
- type: nauc_precision_at_5_diff1
value: 5.4640694491525785
- type: nauc_precision_at_5_max
value: 14.534695725481427
- type: nauc_precision_at_5_std
value: 17.261328831180144
- type: nauc_recall_at_1000_diff1
value: -10.45065680723015
- type: nauc_recall_at_1000_max
value: 13.984468023954296
- type: nauc_recall_at_1000_std
value: 47.624724797856494
- type: nauc_recall_at_100_diff1
value: -4.1964737422872425
- type: nauc_recall_at_100_max
value: 12.121665049576311
- type: nauc_recall_at_100_std
value: 37.19008451156221
- type: nauc_recall_at_10_diff1
value: 3.061787271972265
- type: nauc_recall_at_10_max
value: 14.242591133970848
- type: nauc_recall_at_10_std
value: 22.613776839024126
- type: nauc_recall_at_1_diff1
value: 65.4983976623848
- type: nauc_recall_at_1_max
value: 42.1548067294384
- type: nauc_recall_at_1_std
value: -0.664907685382726
- type: nauc_recall_at_20_diff1
value: 0.6113817731563769
- type: nauc_recall_at_20_max
value: 13.406785295364488
- type: nauc_recall_at_20_std
value: 27.365831883937688
- type: nauc_recall_at_3_diff1
value: 9.006850832399833
- type: nauc_recall_at_3_max
value: 14.709279058037176
- type: nauc_recall_at_3_std
value: 11.083943561368827
- type: nauc_recall_at_5_diff1
value: 5.464069449152512
- type: nauc_recall_at_5_max
value: 14.534695725481386
- type: nauc_recall_at_5_std
value: 17.261328831180222
- type: ndcg_at_1
value: 79.986
- type: ndcg_at_10
value: 72.099
- type: ndcg_at_100
value: 75.01100000000001
- type: ndcg_at_1000
value: 76.062
- type: ndcg_at_20
value: 73.38799999999999
- type: ndcg_at_3
value: 67.252
- type: ndcg_at_5
value: 70.03399999999999
- type: precision_at_1
value: 79.986
- type: precision_at_10
value: 15.084
- type: precision_at_100
value: 1.735
- type: precision_at_1000
value: 0.187
- type: precision_at_20
value: 7.957
- type: precision_at_3
value: 43.165
- type: precision_at_5
value: 28.097
- type: recall_at_1
value: 39.993
- type: recall_at_10
value: 75.422
- type: recall_at_100
value: 86.739
- type: recall_at_1000
value: 93.633
- type: recall_at_20
value: 79.57499999999999
- type: recall_at_3
value: 64.747
- type: recall_at_5
value: 70.243
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA (default)
type: mteb/hotpotqa
config: default
split: train
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: main_score
value: 74.675
- type: map_at_1
value: 41.071999999999996
- type: map_at_10
value: 66.92
- type: map_at_100
value: 67.723
- type: map_at_1000
value: 67.774
- type: map_at_20
value: 67.408
- type: map_at_3
value: 63.55499999999999
- type: map_at_5
value: 65.704
- type: mrr_at_1
value: 82.14470588235294
- type: mrr_at_10
value: 87.27400186741363
- type: mrr_at_100
value: 87.4140081509131
- type: mrr_at_1000
value: 87.41798363724548
- type: mrr_at_20
value: 87.36842561587122
- type: mrr_at_3
value: 86.50941176470587
- type: mrr_at_5
value: 87.00347058823529
- type: nauc_map_at_1000_diff1
value: 15.173631052534247
- type: nauc_map_at_1000_max
value: 26.068735945434245
- type: nauc_map_at_1000_std
value: 16.679793876109372
- type: nauc_map_at_100_diff1
value: 15.145622681798832
- type: nauc_map_at_100_max
value: 26.05989713486766
- type: nauc_map_at_100_std
value: 16.707010441285338
- type: nauc_map_at_10_diff1
value: 14.914638256007287
- type: nauc_map_at_10_max
value: 25.882443097244483
- type: nauc_map_at_10_std
value: 16.268808050707122
- type: nauc_map_at_1_diff1
value: 68.99857028038306
- type: nauc_map_at_1_max
value: 49.44823319018734
- type: nauc_map_at_1_std
value: 3.95480041595882
- type: nauc_map_at_20_diff1
value: 15.024824474002042
- type: nauc_map_at_20_max
value: 25.979156549532394
- type: nauc_map_at_20_std
value: 16.58495990193771
- type: nauc_map_at_3_diff1
value: 14.82922429946727
- type: nauc_map_at_3_max
value: 24.94644740438124
- type: nauc_map_at_3_std
value: 13.163454771602703
- type: nauc_map_at_5_diff1
value: 14.786835699591697
- type: nauc_map_at_5_max
value: 25.4952477897668
- type: nauc_map_at_5_std
value: 15.132801238151277
- type: nauc_mrr_at_1000_diff1
value: 68.01504171335921
- type: nauc_mrr_at_1000_max
value: 51.87765517862451
- type: nauc_mrr_at_1000_std
value: 6.86223778975253
- type: nauc_mrr_at_100_diff1
value: 68.01399006752484
- type: nauc_mrr_at_100_max
value: 51.88280954265237
- type: nauc_mrr_at_100_std
value: 6.873608557329335
- type: nauc_mrr_at_10_diff1
value: 67.99091321438537
- type: nauc_mrr_at_10_max
value: 51.969521221681624
- type: nauc_mrr_at_10_std
value: 6.931166148442037
- type: nauc_mrr_at_1_diff1
value: 68.99857028038306
- type: nauc_mrr_at_1_max
value: 49.44823319018734
- type: nauc_mrr_at_1_std
value: 3.95480041595882
- type: nauc_mrr_at_20_diff1
value: 68.01259861100158
- type: nauc_mrr_at_20_max
value: 51.92347187092682
- type: nauc_mrr_at_20_std
value: 6.916000236458142
- type: nauc_mrr_at_3_diff1
value: 67.78455056859933
- type: nauc_mrr_at_3_max
value: 51.871123397467976
- type: nauc_mrr_at_3_std
value: 6.43509345435153
- type: nauc_mrr_at_5_diff1
value: 67.90240615066607
- type: nauc_mrr_at_5_max
value: 52.0124881495014
- type: nauc_mrr_at_5_std
value: 6.803350773425626
- type: nauc_ndcg_at_1000_diff1
value: 21.69891416034566
- type: nauc_ndcg_at_1000_max
value: 30.461957789834806
- type: nauc_ndcg_at_1000_std
value: 19.18877840309474
- type: nauc_ndcg_at_100_diff1
value: 20.7970186296764
- type: nauc_ndcg_at_100_max
value: 30.15224012992887
- type: nauc_ndcg_at_100_std
value: 19.943169128048474
- type: nauc_ndcg_at_10_diff1
value: 19.84268006275167
- type: nauc_ndcg_at_10_max
value: 29.484945625149916
- type: nauc_ndcg_at_10_std
value: 18.258410671819046
- type: nauc_ndcg_at_1_diff1
value: 68.99857028038306
- type: nauc_ndcg_at_1_max
value: 49.44823319018734
- type: nauc_ndcg_at_1_std
value: 3.95480041595882
- type: nauc_ndcg_at_20_diff1
value: 20.04561491289907
- type: nauc_ndcg_at_20_max
value: 29.693953898939768
- type: nauc_ndcg_at_20_std
value: 19.236240432268765
- type: nauc_ndcg_at_3_diff1
value: 20.257610313097572
- type: nauc_ndcg_at_3_max
value: 28.337695262101402
- type: nauc_ndcg_at_3_std
value: 13.27057009577805
- type: nauc_ndcg_at_5_diff1
value: 19.813917540582832
- type: nauc_ndcg_at_5_max
value: 28.898934488412642
- type: nauc_ndcg_at_5_std
value: 16.037376675812613
- type: nauc_precision_at_1000_diff1
value: -2.7630728745432833
- type: nauc_precision_at_1000_max
value: 24.273957339049858
- type: nauc_precision_at_1000_std
value: 51.174814670874234
- type: nauc_precision_at_100_diff1
value: -0.3693685702558231
- type: nauc_precision_at_100_max
value: 23.540054418358388
- type: nauc_precision_at_100_std
value: 42.86307596340895
- type: nauc_precision_at_10_diff1
value: 3.839420261166957
- type: nauc_precision_at_10_max
value: 23.33220436834566
- type: nauc_precision_at_10_std
value: 27.497642027822938
- type: nauc_precision_at_1_diff1
value: 68.99857028038306
- type: nauc_precision_at_1_max
value: 49.44823319018734
- type: nauc_precision_at_1_std
value: 3.95480041595882
- type: nauc_precision_at_20_diff1
value: 2.149278982050884
- type: nauc_precision_at_20_max
value: 23.225228919773492
- type: nauc_precision_at_20_std
value: 32.583030590092996
- type: nauc_precision_at_3_diff1
value: 8.414263763743502
- type: nauc_precision_at_3_max
value: 23.266556039522364
- type: nauc_precision_at_3_std
value: 16.166789635561578
- type: nauc_precision_at_5_diff1
value: 6.100362067031837
- type: nauc_precision_at_5_max
value: 23.317092657737902
- type: nauc_precision_at_5_std
value: 21.439743834724506
- type: nauc_recall_at_1000_diff1
value: -2.763072874542946
- type: nauc_recall_at_1000_max
value: 24.273957339050046
- type: nauc_recall_at_1000_std
value: 51.17481467087451
- type: nauc_recall_at_100_diff1
value: -0.36936857025585346
- type: nauc_recall_at_100_max
value: 23.54005441835819
- type: nauc_recall_at_100_std
value: 42.86307596340905
- type: nauc_recall_at_10_diff1
value: 3.8394202611669916
- type: nauc_recall_at_10_max
value: 23.332204368345536
- type: nauc_recall_at_10_std
value: 27.497642027822835
- type: nauc_recall_at_1_diff1
value: 68.99857028038306
- type: nauc_recall_at_1_max
value: 49.44823319018734
- type: nauc_recall_at_1_std
value: 3.95480041595882
- type: nauc_recall_at_20_diff1
value: 2.1492789820506943
- type: nauc_recall_at_20_max
value: 23.22522891977323
- type: nauc_recall_at_20_std
value: 32.58303059009287
- type: nauc_recall_at_3_diff1
value: 8.414263763743387
- type: nauc_recall_at_3_max
value: 23.266556039522314
- type: nauc_recall_at_3_std
value: 16.166789635561496
- type: nauc_recall_at_5_diff1
value: 6.100362067031725
- type: nauc_recall_at_5_max
value: 23.317092657737877
- type: nauc_recall_at_5_std
value: 21.43974383472456
- type: ndcg_at_1
value: 82.145
- type: ndcg_at_10
value: 74.675
- type: ndcg_at_100
value: 77.35499999999999
- type: ndcg_at_1000
value: 78.31099999999999
- type: ndcg_at_20
value: 75.859
- type: ndcg_at_3
value: 70.056
- type: ndcg_at_5
value: 72.68900000000001
- type: precision_at_1
value: 82.145
- type: precision_at_10
value: 15.594
- type: precision_at_100
value: 1.7670000000000001
- type: precision_at_1000
value: 0.189
- type: precision_at_20
value: 8.177
- type: precision_at_3
value: 45.207
- type: precision_at_5
value: 29.199
- type: recall_at_1
value: 41.071999999999996
- type: recall_at_10
value: 77.968
- type: recall_at_100
value: 88.342
- type: recall_at_1000
value: 94.636
- type: recall_at_20
value: 81.768
- type: recall_at_3
value: 67.81099999999999
- type: recall_at_5
value: 72.99799999999999
- task:
type: Classification
dataset:
name: MTEB ImdbClassification (default)
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 91.6404
- type: ap
value: 88.06069266911989
- type: ap_weighted
value: 88.06069266911989
- type: f1
value: 91.63447508338969
- type: f1_weighted
value: 91.63447508338966
- type: main_score
value: 91.6404
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO (default)
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: main_score
value: 41.881
- type: map_at_1
value: 22.467000000000002
- type: map_at_10
value: 34.910999999999994
- type: map_at_100
value: 36.061
- type: map_at_1000
value: 36.108000000000004
- type: map_at_20
value: 35.624
- type: map_at_3
value: 30.941999999999997
- type: map_at_5
value: 33.263999999999996
- type: mrr_at_1
value: 23.08022922636103
- type: mrr_at_10
value: 35.50051735116205
- type: mrr_at_100
value: 36.59446189817308
- type: mrr_at_1000
value: 36.635988914583265
- type: mrr_at_20
value: 36.18253329805409
- type: mrr_at_3
value: 31.609360076408787
- type: mrr_at_5
value: 33.90305635148042
- type: nauc_map_at_1000_diff1
value: 35.55451924480011
- type: nauc_map_at_1000_max
value: 3.3973484820380984
- type: nauc_map_at_1000_std
value: -15.313643259934148
- type: nauc_map_at_100_diff1
value: 35.550804246990495
- type: nauc_map_at_100_max
value: 3.399807661520924
- type: nauc_map_at_100_std
value: -15.275642266336313
- type: nauc_map_at_10_diff1
value: 35.5782836837628
- type: nauc_map_at_10_max
value: 3.2528940284048993
- type: nauc_map_at_10_std
value: -15.969363396756405
- type: nauc_map_at_1_diff1
value: 38.48673772459152
- type: nauc_map_at_1_max
value: 3.058447908569679
- type: nauc_map_at_1_std
value: -14.963393022102997
- type: nauc_map_at_20_diff1
value: 35.51677537885897
- type: nauc_map_at_20_max
value: 3.3515639340449725
- type: nauc_map_at_20_std
value: -15.515763931018828
- type: nauc_map_at_3_diff1
value: 35.92227750150707
- type: nauc_map_at_3_max
value: 2.8055574560079064
- type: nauc_map_at_3_std
value: -16.880504853072285
- type: nauc_map_at_5_diff1
value: 35.55198639506613
- type: nauc_map_at_5_max
value: 2.9586919269467526
- type: nauc_map_at_5_std
value: -16.54701954101431
- type: nauc_mrr_at_1000_diff1
value: 35.431696643964436
- type: nauc_mrr_at_1000_max
value: 3.413854144953113
- type: nauc_mrr_at_1000_std
value: -15.142803785812797
- type: nauc_mrr_at_100_diff1
value: 35.426786864620986
- type: nauc_mrr_at_100_max
value: 3.4171950444493935
- type: nauc_mrr_at_100_std
value: -15.103678118737987
- type: nauc_mrr_at_10_diff1
value: 35.420624727408644
- type: nauc_mrr_at_10_max
value: 3.322578167282073
- type: nauc_mrr_at_10_std
value: -15.720250181753793
- type: nauc_mrr_at_1_diff1
value: 38.282779286642096
- type: nauc_mrr_at_1_max
value: 2.9904752802636327
- type: nauc_mrr_at_1_std
value: -15.138960622677649
- type: nauc_mrr_at_20_diff1
value: 35.38506376339875
- type: nauc_mrr_at_20_max
value: 3.394154792664457
- type: nauc_mrr_at_20_std
value: -15.296898873665988
- type: nauc_mrr_at_3_diff1
value: 35.7803854971965
- type: nauc_mrr_at_3_max
value: 2.7546109667567733
- type: nauc_mrr_at_3_std
value: -16.7760243616548
- type: nauc_mrr_at_5_diff1
value: 35.40507525930076
- type: nauc_mrr_at_5_max
value: 2.968967051093017
- type: nauc_mrr_at_5_std
value: -16.322983362913682
- type: nauc_ndcg_at_1000_diff1
value: 34.61165703664126
- type: nauc_ndcg_at_1000_max
value: 4.2633988477939315
- type: nauc_ndcg_at_1000_std
value: -13.07846493573677
- type: nauc_ndcg_at_100_diff1
value: 34.496098679488654
- type: nauc_ndcg_at_100_max
value: 4.35416012374578
- type: nauc_ndcg_at_100_std
value: -11.701820911709307
- type: nauc_ndcg_at_10_diff1
value: 34.510499084561594
- type: nauc_ndcg_at_10_max
value: 3.7195988744787765
- type: nauc_ndcg_at_10_std
value: -15.20204882975027
- type: nauc_ndcg_at_1_diff1
value: 38.282779286642096
- type: nauc_ndcg_at_1_max
value: 2.9904752802636327
- type: nauc_ndcg_at_1_std
value: -15.138960622677649
- type: nauc_ndcg_at_20_diff1
value: 34.28895557191255
- type: nauc_ndcg_at_20_max
value: 4.078634149451336
- type: nauc_ndcg_at_20_std
value: -13.495634934731186
- type: nauc_ndcg_at_3_diff1
value: 35.1825890111415
- type: nauc_ndcg_at_3_max
value: 2.6899697916041787
- type: nauc_ndcg_at_3_std
value: -17.3200148795559
- type: nauc_ndcg_at_5_diff1
value: 34.50553625093832
- type: nauc_ndcg_at_5_max
value: 2.9886919040366453
- type: nauc_ndcg_at_5_std
value: -16.682910398462134
- type: nauc_precision_at_1000_diff1
value: -5.237411685181927
- type: nauc_precision_at_1000_max
value: 12.33197477512628
- type: nauc_precision_at_1000_std
value: 14.624993194631372
- type: nauc_precision_at_100_diff1
value: 12.929312931181103
- type: nauc_precision_at_100_max
value: 11.682654496612322
- type: nauc_precision_at_100_std
value: 22.44127157890021
- type: nauc_precision_at_10_diff1
value: 28.260532585949555
- type: nauc_precision_at_10_max
value: 5.053591117400416
- type: nauc_precision_at_10_std
value: -11.685872129702071
- type: nauc_precision_at_1_diff1
value: 38.282779286642096
- type: nauc_precision_at_1_max
value: 2.9904752802636327
- type: nauc_precision_at_1_std
value: -15.138960622677649
- type: nauc_precision_at_20_diff1
value: 24.082555406455477
- type: nauc_precision_at_20_max
value: 7.220785769786994
- type: nauc_precision_at_20_std
value: -2.3466420348408876
- type: nauc_precision_at_3_diff1
value: 32.2680340644695
- type: nauc_precision_at_3_max
value: 2.195269827644658
- type: nauc_precision_at_3_std
value: -18.77340802819126
- type: nauc_precision_at_5_diff1
value: 30.01850058048678
- type: nauc_precision_at_5_max
value: 3.0452445562886243
- type: nauc_precision_at_5_std
value: -16.787002738887892
- type: nauc_recall_at_1000_diff1
value: 7.940551548703448
- type: nauc_recall_at_1000_max
value: 40.70493639275526
- type: nauc_recall_at_1000_std
value: 66.21346351313893
- type: nauc_recall_at_100_diff1
value: 26.395513427993123
- type: nauc_recall_at_100_max
value: 13.850278277490382
- type: nauc_recall_at_100_std
value: 31.262749365543495
- type: nauc_recall_at_10_diff1
value: 30.810510902525202
- type: nauc_recall_at_10_max
value: 5.342768575463394
- type: nauc_recall_at_10_std
value: -12.041087579697969
- type: nauc_recall_at_1_diff1
value: 38.48673772459152
- type: nauc_recall_at_1_max
value: 3.058447908569679
- type: nauc_recall_at_1_std
value: -14.963393022102997
- type: nauc_recall_at_20_diff1
value: 28.97941173115346
- type: nauc_recall_at_20_max
value: 7.387302922786026
- type: nauc_recall_at_20_std
value: -2.9701154270124537
- type: nauc_recall_at_3_diff1
value: 33.11138786633106
- type: nauc_recall_at_3_max
value: 2.3972198714504103
- type: nauc_recall_at_3_std
value: -18.444370167091602
- type: nauc_recall_at_5_diff1
value: 31.354710954894315
- type: nauc_recall_at_5_max
value: 3.0409753924683227
- type: nauc_recall_at_5_std
value: -17.003889653830544
- type: ndcg_at_1
value: 23.080000000000002
- type: ndcg_at_10
value: 41.881
- type: ndcg_at_100
value: 47.384
- type: ndcg_at_1000
value: 48.563
- type: ndcg_at_20
value: 44.394
- type: ndcg_at_3
value: 33.819
- type: ndcg_at_5
value: 37.951
- type: precision_at_1
value: 23.080000000000002
- type: precision_at_10
value: 6.619
- type: precision_at_100
value: 0.9369999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_20
value: 3.836
- type: precision_at_3
value: 14.350999999999999
- type: precision_at_5
value: 10.705
- type: recall_at_1
value: 22.467000000000002
- type: recall_at_10
value: 63.363
- type: recall_at_100
value: 88.73
- type: recall_at_1000
value: 97.726
- type: recall_at_20
value: 73.072
- type: recall_at_3
value: 41.464
- type: recall_at_5
value: 51.367
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO (default)
type: mteb/msmarco
config: default
split: test
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: main_score
value: 69.895
- type: map_at_1
value: 2.386
- type: map_at_10
value: 15.517
- type: map_at_100
value: 40.755
- type: map_at_1000
value: 48.650999999999996
- type: map_at_20
value: 23.866
- type: map_at_3
value: 6.593999999999999
- type: map_at_5
value: 9.733
- type: mrr_at_1
value: 90.69767441860465
- type: mrr_at_10
value: 94.26356589147287
- type: mrr_at_100
value: 94.26356589147287
- type: mrr_at_1000
value: 94.26356589147287
- type: mrr_at_20
value: 94.26356589147287
- type: mrr_at_3
value: 93.7984496124031
- type: mrr_at_5
value: 94.26356589147287
- type: nauc_map_at_1000_diff1
value: -17.43619926664674
- type: nauc_map_at_1000_max
value: 33.6129902421304
- type: nauc_map_at_1000_std
value: 46.534866437148445
- type: nauc_map_at_100_diff1
value: -3.4235524401190065
- type: nauc_map_at_100_max
value: 20.249118581525174
- type: nauc_map_at_100_std
value: 21.211678002623337
- type: nauc_map_at_10_diff1
value: 19.868551667849278
- type: nauc_map_at_10_max
value: -7.049880879841646
- type: nauc_map_at_10_std
value: -15.401797310529073
- type: nauc_map_at_1_diff1
value: 33.37422645090001
- type: nauc_map_at_1_max
value: -26.28998419171722
- type: nauc_map_at_1_std
value: -28.696833028930623
- type: nauc_map_at_20_diff1
value: 16.721532586974828
- type: nauc_map_at_20_max
value: 5.488066416387458
- type: nauc_map_at_20_std
value: -4.928338561410316
- type: nauc_map_at_3_diff1
value: 26.596230178724362
- type: nauc_map_at_3_max
value: -21.091821185844502
- type: nauc_map_at_3_std
value: -27.931616241692335
- type: nauc_map_at_5_diff1
value: 26.503416309451893
- type: nauc_map_at_5_max
value: -14.966431698877205
- type: nauc_map_at_5_std
value: -24.92953418435822
- type: nauc_mrr_at_1000_diff1
value: 37.25735469680115
- type: nauc_mrr_at_1000_max
value: 58.86319796354405
- type: nauc_mrr_at_1000_std
value: 91.72535591220716
- type: nauc_mrr_at_100_diff1
value: 37.25735469680115
- type: nauc_mrr_at_100_max
value: 58.86319796354405
- type: nauc_mrr_at_100_std
value: 91.72535591220716
- type: nauc_mrr_at_10_diff1
value: 37.25735469680115
- type: nauc_mrr_at_10_max
value: 58.86319796354405
- type: nauc_mrr_at_10_std
value: 91.72535591220716
- type: nauc_mrr_at_1_diff1
value: 34.671850640708776
- type: nauc_mrr_at_1_max
value: 51.21031217917046
- type: nauc_mrr_at_1_std
value: 89.79460562505535
- type: nauc_mrr_at_20_diff1
value: 37.25735469680115
- type: nauc_mrr_at_20_max
value: 58.86319796354405
- type: nauc_mrr_at_20_std
value: 91.72535591220716
- type: nauc_mrr_at_3_diff1
value: 38.60957957843779
- type: nauc_mrr_at_3_max
value: 60.97560743754536
- type: nauc_mrr_at_3_std
value: 92.34595421879162
- type: nauc_mrr_at_5_diff1
value: 37.25735469680115
- type: nauc_mrr_at_5_max
value: 58.86319796354405
- type: nauc_mrr_at_5_std
value: 91.72535591220716
- type: nauc_ndcg_at_1000_diff1
value: -16.225938948928025
- type: nauc_ndcg_at_1000_max
value: 55.89533466517005
- type: nauc_ndcg_at_1000_std
value: 55.40479015862836
- type: nauc_ndcg_at_100_diff1
value: -4.2505692685100485
- type: nauc_ndcg_at_100_max
value: 41.08810853075458
- type: nauc_ndcg_at_100_std
value: 51.72941963551418
- type: nauc_ndcg_at_10_diff1
value: 0.747121732650285
- type: nauc_ndcg_at_10_max
value: 39.232823971742405
- type: nauc_ndcg_at_10_std
value: 30.231095222965937
- type: nauc_ndcg_at_1_diff1
value: 50.70351721984524
- type: nauc_ndcg_at_1_max
value: 22.223151480238055
- type: nauc_ndcg_at_1_std
value: -1.6981049298345798
- type: nauc_ndcg_at_20_diff1
value: -5.964227909058437
- type: nauc_ndcg_at_20_max
value: 41.04081240965169
- type: nauc_ndcg_at_20_std
value: 38.124111046791164
- type: nauc_ndcg_at_3_diff1
value: 22.642365636430505
- type: nauc_ndcg_at_3_max
value: 36.65277646275841
- type: nauc_ndcg_at_3_std
value: 13.807981639504055
- type: nauc_ndcg_at_5_diff1
value: 13.228770870517279
- type: nauc_ndcg_at_5_max
value: 39.26501372019075
- type: nauc_ndcg_at_5_std
value: 18.117117619727306
- type: nauc_precision_at_1000_diff1
value: -34.34396146066081
- type: nauc_precision_at_1000_max
value: 25.35157697994365
- type: nauc_precision_at_1000_std
value: 52.692463751897535
- type: nauc_precision_at_100_diff1
value: -34.59882373262441
- type: nauc_precision_at_100_max
value: 30.38585569564875
- type: nauc_precision_at_100_std
value: 55.39043613247343
- type: nauc_precision_at_10_diff1
value: -34.84065479839732
- type: nauc_precision_at_10_max
value: 52.453330287918234
- type: nauc_precision_at_10_std
value: 65.94589958566101
- type: nauc_precision_at_1_diff1
value: 34.671850640708776
- type: nauc_precision_at_1_max
value: 51.21031217917046
- type: nauc_precision_at_1_std
value: 89.79460562505535
- type: nauc_precision_at_20_diff1
value: -34.405502025386006
- type: nauc_precision_at_20_max
value: 47.22465715601537
- type: nauc_precision_at_20_std
value: 60.3767447367751
- type: nauc_precision_at_3_diff1
value: -7.395035257468793
- type: nauc_precision_at_3_max
value: 62.69785319929072
- type: nauc_precision_at_3_std
value: 67.15720558808125
- type: nauc_precision_at_5_diff1
value: -19.659532277774154
- type: nauc_precision_at_5_max
value: 65.88294876723467
- type: nauc_precision_at_5_std
value: 65.5071257264847
- type: nauc_recall_at_1000_diff1
value: -32.13151178260713
- type: nauc_recall_at_1000_max
value: 49.599798672364024
- type: nauc_recall_at_1000_std
value: 53.416414623207245
- type: nauc_recall_at_100_diff1
value: 2.2830012629669367
- type: nauc_recall_at_100_max
value: 20.72697656877468
- type: nauc_recall_at_100_std
value: 15.090940745210233
- type: nauc_recall_at_10_diff1
value: 19.967019350754683
- type: nauc_recall_at_10_max
value: -10.459789162236417
- type: nauc_recall_at_10_std
value: -18.89446359106782
- type: nauc_recall_at_1_diff1
value: 33.37422645090001
- type: nauc_recall_at_1_max
value: -26.28998419171722
- type: nauc_recall_at_1_std
value: -28.696833028930623
- type: nauc_recall_at_20_diff1
value: 17.31230551930378
- type: nauc_recall_at_20_max
value: 3.20695402833611
- type: nauc_recall_at_20_std
value: -8.361923242107272
- type: nauc_recall_at_3_diff1
value: 26.874491125503315
- type: nauc_recall_at_3_max
value: -22.372781558886633
- type: nauc_recall_at_3_std
value: -30.08475617025354
- type: nauc_recall_at_5_diff1
value: 26.425992230237465
- type: nauc_recall_at_5_max
value: -17.053828088435072
- type: nauc_recall_at_5_std
value: -27.797398066593537
- type: ndcg_at_1
value: 74.806
- type: ndcg_at_10
value: 69.895
- type: ndcg_at_100
value: 64.736
- type: ndcg_at_1000
value: 71.563
- type: ndcg_at_20
value: 67.916
- type: ndcg_at_3
value: 72.72800000000001
- type: ndcg_at_5
value: 72.298
- type: precision_at_1
value: 90.69800000000001
- type: precision_at_10
value: 78.605
- type: precision_at_100
value: 38.256
- type: precision_at_1000
value: 6.914
- type: precision_at_20
value: 70.814
- type: precision_at_3
value: 86.047
- type: precision_at_5
value: 84.651
- type: recall_at_1
value: 2.386
- type: recall_at_10
value: 16.739
- type: recall_at_100
value: 52.782
- type: recall_at_1000
value: 77.948
- type: recall_at_20
value: 26.564
- type: recall_at_3
value: 6.789000000000001
- type: recall_at_5
value: 10.192
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO (default)
type: mteb/msmarco
config: default
split: train
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: main_score
value: 39.287
- type: map_at_1
value: 20.312
- type: map_at_10
value: 32.37
- type: map_at_100
value: 33.617999999999995
- type: map_at_1000
value: 33.667
- type: map_at_20
value: 33.141999999999996
- type: map_at_3
value: 28.419
- type: map_at_5
value: 30.677
- type: mrr_at_1
value: 20.88106112272065
- type: mrr_at_10
value: 32.88347561288713
- type: mrr_at_100
value: 34.079369127830056
- type: mrr_at_1000
value: 34.12286416716751
- type: mrr_at_20
value: 33.6284955497779
- type: mrr_at_3
value: 29.00577074622038
- type: mrr_at_5
value: 31.230424233024944
- type: nauc_map_at_1000_diff1
value: 34.02708673784959
- type: nauc_map_at_1000_max
value: -0.13774596315365284
- type: nauc_map_at_1000_std
value: -14.811669105922384
- type: nauc_map_at_100_diff1
value: 34.01930820880688
- type: nauc_map_at_100_max
value: -0.1347868034897845
- type: nauc_map_at_100_std
value: -14.772649407858864
- type: nauc_map_at_10_diff1
value: 33.96158417816836
- type: nauc_map_at_10_max
value: -0.3136135162999447
- type: nauc_map_at_10_std
value: -15.444045950700714
- type: nauc_map_at_1_diff1
value: 37.58205211162927
- type: nauc_map_at_1_max
value: -0.4822908400800713
- type: nauc_map_at_1_std
value: -15.470046110078922
- type: nauc_map_at_20_diff1
value: 33.98271018950918
- type: nauc_map_at_20_max
value: -0.21381571255200063
- type: nauc_map_at_20_std
value: -15.007378462657424
- type: nauc_map_at_3_diff1
value: 34.289832746358385
- type: nauc_map_at_3_max
value: -0.5002033864794293
- type: nauc_map_at_3_std
value: -16.202034438595152
- type: nauc_map_at_5_diff1
value: 34.00117071654728
- type: nauc_map_at_5_max
value: -0.45851793692290926
- type: nauc_map_at_5_std
value: -16.006184527594332
- type: nauc_mrr_at_1000_diff1
value: 33.984976772028844
- type: nauc_mrr_at_1000_max
value: -0.1529354207124242
- type: nauc_mrr_at_1000_std
value: -14.857766881343624
- type: nauc_mrr_at_100_diff1
value: 33.97715243824301
- type: nauc_mrr_at_100_max
value: -0.14868509934664242
- type: nauc_mrr_at_100_std
value: -14.819185948867908
- type: nauc_mrr_at_10_diff1
value: 33.9073996451901
- type: nauc_mrr_at_10_max
value: -0.3117834117567206
- type: nauc_mrr_at_10_std
value: -15.447133275926591
- type: nauc_mrr_at_1_diff1
value: 37.569117720003305
- type: nauc_mrr_at_1_max
value: -0.4402285765126485
- type: nauc_mrr_at_1_std
value: -15.610329778824116
- type: nauc_mrr_at_20_diff1
value: 33.93531912336384
- type: nauc_mrr_at_20_max
value: -0.21832751838659756
- type: nauc_mrr_at_20_std
value: -15.02826142229628
- type: nauc_mrr_at_3_diff1
value: 34.233342147328905
- type: nauc_mrr_at_3_max
value: -0.48966505992824105
- type: nauc_mrr_at_3_std
value: -16.24404416215297
- type: nauc_mrr_at_5_diff1
value: 33.936224279069286
- type: nauc_mrr_at_5_max
value: -0.4505343004388219
- type: nauc_mrr_at_5_std
value: -16.009358996271995
- type: nauc_ndcg_at_1000_diff1
value: 33.15636942116579
- type: nauc_ndcg_at_1000_max
value: 0.7421529032280344
- type: nauc_ndcg_at_1000_std
value: -12.573458200734184
- type: nauc_ndcg_at_100_diff1
value: 32.93005052778217
- type: nauc_ndcg_at_100_max
value: 0.9174493718835769
- type: nauc_ndcg_at_100_std
value: -11.171269358523292
- type: nauc_ndcg_at_10_diff1
value: 32.702296942276526
- type: nauc_ndcg_at_10_max
value: -0.09434815260748751
- type: nauc_ndcg_at_10_std
value: -14.584325513499733
- type: nauc_ndcg_at_1_diff1
value: 37.58713933791024
- type: nauc_ndcg_at_1_max
value: -0.4513949591167018
- type: nauc_ndcg_at_1_std
value: -15.609650942411912
- type: nauc_ndcg_at_20_diff1
value: 32.73010631865281
- type: nauc_ndcg_at_20_max
value: 0.28286295130856637
- type: nauc_ndcg_at_20_std
value: -12.91199724170651
- type: nauc_ndcg_at_3_diff1
value: 33.376023670753284
- type: nauc_ndcg_at_3_max
value: -0.5120382325774017
- type: nauc_ndcg_at_3_std
value: -16.37620688280034
- type: nauc_ndcg_at_5_diff1
value: 32.84453673758015
- type: nauc_ndcg_at_5_max
value: -0.4424535321329695
- type: nauc_ndcg_at_5_std
value: -15.986073718543075
- type: nauc_precision_at_1000_diff1
value: -2.05172207477867
- type: nauc_precision_at_1000_max
value: 11.420451556771813
- type: nauc_precision_at_1000_std
value: 13.013500055223338
- type: nauc_precision_at_100_diff1
value: 14.09591129638288
- type: nauc_precision_at_100_max
value: 10.301532890390265
- type: nauc_precision_at_100_std
value: 21.262673040909597
- type: nauc_precision_at_10_diff1
value: 26.861179153662277
- type: nauc_precision_at_10_max
value: 0.9328475133208457
- type: nauc_precision_at_10_std
value: -10.923443610237937
- type: nauc_precision_at_1_diff1
value: 37.58713933791024
- type: nauc_precision_at_1_max
value: -0.4513949591167018
- type: nauc_precision_at_1_std
value: -15.609650942411912
- type: nauc_precision_at_20_diff1
value: 24.328515569316732
- type: nauc_precision_at_20_max
value: 2.876122000952972
- type: nauc_precision_at_20_std
value: -2.33467622450143
- type: nauc_precision_at_3_diff1
value: 30.578927129819753
- type: nauc_precision_at_3_max
value: -0.42632757194706555
- type: nauc_precision_at_3_std
value: -16.788412743765186
- type: nauc_precision_at_5_diff1
value: 28.78020642240962
- type: nauc_precision_at_5_max
value: -0.2124025605546784
- type: nauc_precision_at_5_std
value: -15.67344805195943
- type: nauc_recall_at_1000_diff1
value: 14.944369342374936
- type: nauc_recall_at_1000_max
value: 46.83432685416887
- type: nauc_recall_at_1000_std
value: 69.28042589318784
- type: nauc_recall_at_100_diff1
value: 24.745870493773193
- type: nauc_recall_at_100_max
value: 13.558884583015281
- type: nauc_recall_at_100_std
value: 31.59926593228638
- type: nauc_recall_at_10_diff1
value: 28.564011002637212
- type: nauc_recall_at_10_max
value: 0.6794665651327308
- type: nauc_recall_at_10_std
value: -11.184026264546878
- type: nauc_recall_at_1_diff1
value: 37.58205211162927
- type: nauc_recall_at_1_max
value: -0.4822908400800713
- type: nauc_recall_at_1_std
value: -15.470046110078922
- type: nauc_recall_at_20_diff1
value: 27.72144102460605
- type: nauc_recall_at_20_max
value: 2.6132011820945884
- type: nauc_recall_at_20_std
value: -2.461755722295994
- type: nauc_recall_at_3_diff1
value: 30.958109485805675
- type: nauc_recall_at_3_max
value: -0.5576168684477084
- type: nauc_recall_at_3_std
value: -16.72079877980838
- type: nauc_recall_at_5_diff1
value: 29.588704776525393
- type: nauc_recall_at_5_max
value: -0.4055192968984154
- type: nauc_recall_at_5_std
value: -15.750524291373111
- type: ndcg_at_1
value: 20.877000000000002
- type: ndcg_at_10
value: 39.287
- type: ndcg_at_100
value: 45.316
- type: ndcg_at_1000
value: 46.52
- type: ndcg_at_20
value: 42.028999999999996
- type: ndcg_at_3
value: 31.233
- type: ndcg_at_5
value: 35.259
- type: precision_at_1
value: 20.877000000000002
- type: precision_at_10
value: 6.327000000000001
- type: precision_at_100
value: 0.9329999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_20
value: 3.7310000000000003
- type: precision_at_3
value: 13.379
- type: precision_at_5
value: 10.065
- type: recall_at_1
value: 20.312
- type: recall_at_10
value: 60.748999999999995
- type: recall_at_100
value: 88.682
- type: recall_at_1000
value: 97.867
- type: recall_at_20
value: 71.411
- type: recall_at_3
value: 38.800000000000004
- type: recall_at_5
value: 48.483
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.18741450068397
- type: f1
value: 93.0238595381845
- type: f1_weighted
value: 93.17789324951269
- type: main_score
value: 93.18741450068397
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: validation
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.05145413870247
- type: f1
value: 92.99732266407732
- type: f1_weighted
value: 93.02588804838254
- type: main_score
value: 93.05145413870247
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.33470132238944
- type: f1
value: 48.97680766700152
- type: f1_weighted
value: 69.63742750869805
- type: main_score
value: 67.33470132238944
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: validation
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.76733780760627
- type: f1
value: 48.841507782489195
- type: f1_weighted
value: 70.48620980474442
- type: main_score
value: 67.76733780760627
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 72.4310692669805
- type: f1
value: 70.33600790276014
- type: f1_weighted
value: 71.41893477886055
- type: main_score
value: 72.4310692669805
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: validation
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 73.4284308903099
- type: f1
value: 69.55956726857646
- type: f1_weighted
value: 72.15631882777167
- type: main_score
value: 73.4284308903099
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 76.78883658372561
- type: f1
value: 76.32136975183587
- type: f1_weighted
value: 76.595264995014
- type: main_score
value: 76.78883658372561
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: validation
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 77.03885882931628
- type: f1
value: 76.31184523539754
- type: f1_weighted
value: 76.88960030513515
- type: main_score
value: 77.03885882931628
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P (default)
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: main_score
value: 33.095522581541175
- type: v_measure
value: 33.095522581541175
- type: v_measure_std
value: 1.2733644358152183
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S (default)
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: main_score
value: 31.06849440193581
- type: v_measure
value: 31.06849440193581
- type: v_measure_std
value: 1.6047314419984595
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking (default)
type: mteb/mind_small
config: default
split: test
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
metrics:
- type: main_score
value: 31.29908728873086
- type: map
value: 31.29908728873086
- type: mrr
value: 32.39123019117521
- type: nAUC_map_diff1
value: 13.066848043751357
- type: nAUC_map_max
value: -21.421847351002597
- type: nAUC_map_std
value: -1.6918937775056007
- type: nAUC_mrr_diff1
value: 12.280185578076129
- type: nAUC_mrr_max
value: -16.139433517987133
- type: nAUC_mrr_std
value: -0.23907632474680363
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus (default)
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: main_score
value: 35.509
- type: map_at_1
value: 5.508
- type: map_at_10
value: 13.014999999999999
- type: map_at_100
value: 16.342000000000002
- type: map_at_1000
value: 17.797
- type: map_at_20
value: 14.456
- type: map_at_3
value: 9.363000000000001
- type: map_at_5
value: 11.064
- type: mrr_at_1
value: 45.51083591331269
- type: mrr_at_10
value: 54.5795862204531
- type: mrr_at_100
value: 55.07630032637193
- type: mrr_at_1000
value: 55.127586714933805
- type: mrr_at_20
value: 54.90781971103837
- type: mrr_at_3
value: 52.32198142414861
- type: mrr_at_5
value: 53.74613003095975
- type: nauc_map_at_1000_diff1
value: 24.501330360153915
- type: nauc_map_at_1000_max
value: 33.06045409371139
- type: nauc_map_at_1000_std
value: 19.95230416435316
- type: nauc_map_at_100_diff1
value: 25.806053286220532
- type: nauc_map_at_100_max
value: 31.933709402264153
- type: nauc_map_at_100_std
value: 15.842452059972015
- type: nauc_map_at_10_diff1
value: 29.890503647724593
- type: nauc_map_at_10_max
value: 25.92830734236527
- type: nauc_map_at_10_std
value: 3.6250471128132804
- type: nauc_map_at_1_diff1
value: 50.389087879996865
- type: nauc_map_at_1_max
value: 20.95986174718729
- type: nauc_map_at_1_std
value: -5.41570803338871
- type: nauc_map_at_20_diff1
value: 27.444531905945162
- type: nauc_map_at_20_max
value: 28.660078098188745
- type: nauc_map_at_20_std
value: 8.44788718797198
- type: nauc_map_at_3_diff1
value: 37.92764614538618
- type: nauc_map_at_3_max
value: 20.65857273684232
- type: nauc_map_at_3_std
value: -6.143600629025448
- type: nauc_map_at_5_diff1
value: 34.81301402702416
- type: nauc_map_at_5_max
value: 23.1193138262864
- type: nauc_map_at_5_std
value: -1.7057107446005662
- type: nauc_mrr_at_1000_diff1
value: 28.19968199215303
- type: nauc_mrr_at_1000_max
value: 46.82195714140159
- type: nauc_mrr_at_1000_std
value: 31.68998255433254
- type: nauc_mrr_at_100_diff1
value: 28.23773638204092
- type: nauc_mrr_at_100_max
value: 46.867704940087
- type: nauc_mrr_at_100_std
value: 31.72723785491753
- type: nauc_mrr_at_10_diff1
value: 28.235449464678208
- type: nauc_mrr_at_10_max
value: 46.757000930146255
- type: nauc_mrr_at_10_std
value: 31.38607887979623
- type: nauc_mrr_at_1_diff1
value: 30.395863641073404
- type: nauc_mrr_at_1_max
value: 39.14603700466295
- type: nauc_mrr_at_1_std
value: 22.81299502437437
- type: nauc_mrr_at_20_diff1
value: 28.26836846663423
- type: nauc_mrr_at_20_max
value: 46.71036119122077
- type: nauc_mrr_at_20_std
value: 31.717717995472462
- type: nauc_mrr_at_3_diff1
value: 27.677977469638744
- type: nauc_mrr_at_3_max
value: 46.281885617180905
- type: nauc_mrr_at_3_std
value: 29.971954604398306
- type: nauc_mrr_at_5_diff1
value: 28.002295498470435
- type: nauc_mrr_at_5_max
value: 46.24599903705559
- type: nauc_mrr_at_5_std
value: 30.64406412426167
- type: nauc_ndcg_at_1000_diff1
value: 23.52189648298028
- type: nauc_ndcg_at_1000_max
value: 48.84815340128893
- type: nauc_ndcg_at_1000_std
value: 38.191275632053504
- type: nauc_ndcg_at_100_diff1
value: 24.241636067072005
- type: nauc_ndcg_at_100_max
value: 44.044030318408616
- type: nauc_ndcg_at_100_std
value: 33.08892120061157
- type: nauc_ndcg_at_10_diff1
value: 19.38875035508853
- type: nauc_ndcg_at_10_max
value: 39.741010776186656
- type: nauc_ndcg_at_10_std
value: 31.608748238315908
- type: nauc_ndcg_at_1_diff1
value: 31.62521740234213
- type: nauc_ndcg_at_1_max
value: 37.777292135533955
- type: nauc_ndcg_at_1_std
value: 23.13720120267271
- type: nauc_ndcg_at_20_diff1
value: 21.084936669355745
- type: nauc_ndcg_at_20_max
value: 40.54909713178013
- type: nauc_ndcg_at_20_std
value: 32.197062113529576
- type: nauc_ndcg_at_3_diff1
value: 24.011090519352297
- type: nauc_ndcg_at_3_max
value: 41.390403583726155
- type: nauc_ndcg_at_3_std
value: 25.970801370304663
- type: nauc_ndcg_at_5_diff1
value: 21.665450103655758
- type: nauc_ndcg_at_5_max
value: 41.31692759970914
- type: nauc_ndcg_at_5_std
value: 28.27222783671603
- type: nauc_precision_at_1000_diff1
value: -12.949662625593387
- type: nauc_precision_at_1000_max
value: 9.813067200754695
- type: nauc_precision_at_1000_std
value: 40.89419595054123
- type: nauc_precision_at_100_diff1
value: -8.788626197601692
- type: nauc_precision_at_100_max
value: 24.009165542236552
- type: nauc_precision_at_100_std
value: 52.03060008921234
- type: nauc_precision_at_10_diff1
value: -0.8821033318569177
- type: nauc_precision_at_10_max
value: 36.63377864059687
- type: nauc_precision_at_10_std
value: 41.728479740505605
- type: nauc_precision_at_1_diff1
value: 30.395863641073404
- type: nauc_precision_at_1_max
value: 39.14603700466295
- type: nauc_precision_at_1_std
value: 22.81299502437437
- type: nauc_precision_at_20_diff1
value: -4.247170694530449
- type: nauc_precision_at_20_max
value: 34.1717839314797
- type: nauc_precision_at_20_std
value: 46.95915867262384
- type: nauc_precision_at_3_diff1
value: 14.08549588334381
- type: nauc_precision_at_3_max
value: 41.78305711419503
- type: nauc_precision_at_3_std
value: 29.937750126808016
- type: nauc_precision_at_5_diff1
value: 6.795675986788425
- type: nauc_precision_at_5_max
value: 41.1649623002547
- type: nauc_precision_at_5_std
value: 35.10676757427974
- type: nauc_recall_at_1000_diff1
value: 9.323813197352594
- type: nauc_recall_at_1000_max
value: 26.561710670307797
- type: nauc_recall_at_1000_std
value: 27.567182664914775
- type: nauc_recall_at_100_diff1
value: 19.40746686768132
- type: nauc_recall_at_100_max
value: 31.43872311644064
- type: nauc_recall_at_100_std
value: 22.723757402094883
- type: nauc_recall_at_10_diff1
value: 25.54082922198322
- type: nauc_recall_at_10_max
value: 22.018853511961083
- type: nauc_recall_at_10_std
value: 4.299087738075515
- type: nauc_recall_at_1_diff1
value: 50.389087879996865
- type: nauc_recall_at_1_max
value: 20.95986174718729
- type: nauc_recall_at_1_std
value: -5.41570803338871
- type: nauc_recall_at_20_diff1
value: 23.338815663373357
- type: nauc_recall_at_20_max
value: 23.350885868547458
- type: nauc_recall_at_20_std
value: 8.049814823974602
- type: nauc_recall_at_3_diff1
value: 33.134066342544145
- type: nauc_recall_at_3_max
value: 18.539807748844968
- type: nauc_recall_at_3_std
value: -5.923909673276145
- type: nauc_recall_at_5_diff1
value: 29.55575014531457
- type: nauc_recall_at_5_max
value: 19.067988699284534
- type: nauc_recall_at_5_std
value: -0.8407775036491218
- type: ndcg_at_1
value: 43.808
- type: ndcg_at_10
value: 35.509
- type: ndcg_at_100
value: 31.676
- type: ndcg_at_1000
value: 40.543
- type: ndcg_at_20
value: 32.988
- type: ndcg_at_3
value: 40.376
- type: ndcg_at_5
value: 38.41
- type: precision_at_1
value: 45.511
- type: precision_at_10
value: 26.811
- type: precision_at_100
value: 8.056000000000001
- type: precision_at_1000
value: 2.098
- type: precision_at_20
value: 19.505
- type: precision_at_3
value: 38.080000000000005
- type: precision_at_5
value: 33.437
- type: recall_at_1
value: 5.508
- type: recall_at_10
value: 17.319000000000003
- type: recall_at_100
value: 31.294
- type: recall_at_1000
value: 63.275999999999996
- type: recall_at_20
value: 21.431
- type: recall_at_3
value: 10.473
- type: recall_at_5
value: 13.507
- task:
type: Retrieval
dataset:
name: MTEB NQ (default)
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: main_score
value: 49.393
- type: map_at_1
value: 26.849
- type: map_at_10
value: 41.499
- type: map_at_100
value: 42.653
- type: map_at_1000
value: 42.684
- type: map_at_20
value: 42.243
- type: map_at_3
value: 36.882999999999996
- type: map_at_5
value: 39.555
- type: mrr_at_1
value: 30.446118192352262
- type: mrr_at_10
value: 44.00585581857308
- type: mrr_at_100
value: 44.90568334013942
- type: mrr_at_1000
value: 44.927318173677044
- type: mrr_at_20
value: 44.606647073916015
- type: mrr_at_3
value: 40.02510621861723
- type: mrr_at_5
value: 42.41067979915025
- type: nauc_map_at_1000_diff1
value: 31.651453391622358
- type: nauc_map_at_1000_max
value: 21.072274916232775
- type: nauc_map_at_1000_std
value: -4.166680453231419
- type: nauc_map_at_100_diff1
value: 31.64962254039646
- type: nauc_map_at_100_max
value: 21.098240383541132
- type: nauc_map_at_100_std
value: -4.131604331386804
- type: nauc_map_at_10_diff1
value: 31.766862600658698
- type: nauc_map_at_10_max
value: 21.010891364290753
- type: nauc_map_at_10_std
value: -4.708084775930258
- type: nauc_map_at_1_diff1
value: 33.32708116835102
- type: nauc_map_at_1_max
value: 16.533091238015647
- type: nauc_map_at_1_std
value: -5.473056333831887
- type: nauc_map_at_20_diff1
value: 31.651686475476943
- type: nauc_map_at_20_max
value: 21.118504172703588
- type: nauc_map_at_20_std
value: -4.266813330654076
- type: nauc_map_at_3_diff1
value: 31.00533525886582
- type: nauc_map_at_3_max
value: 19.147198618391343
- type: nauc_map_at_3_std
value: -5.813774751421132
- type: nauc_map_at_5_diff1
value: 31.35355732909234
- type: nauc_map_at_5_max
value: 20.164291644935563
- type: nauc_map_at_5_std
value: -5.484865685870952
- type: nauc_mrr_at_1000_diff1
value: 31.2911953834966
- type: nauc_mrr_at_1000_max
value: 20.98587622652489
- type: nauc_mrr_at_1000_std
value: -2.899993822246
- type: nauc_mrr_at_100_diff1
value: 31.291944529588978
- type: nauc_mrr_at_100_max
value: 21.005023374269047
- type: nauc_mrr_at_100_std
value: -2.873367683133069
- type: nauc_mrr_at_10_diff1
value: 31.34953406118716
- type: nauc_mrr_at_10_max
value: 21.090610712235293
- type: nauc_mrr_at_10_std
value: -3.0680385450871865
- type: nauc_mrr_at_1_diff1
value: 33.28395799775457
- type: nauc_mrr_at_1_max
value: 17.5779149107383
- type: nauc_mrr_at_1_std
value: -4.09778735624495
- type: nauc_mrr_at_20_diff1
value: 31.26673427403389
- type: nauc_mrr_at_20_max
value: 21.065827511367623
- type: nauc_mrr_at_20_std
value: -2.9368911795293955
- type: nauc_mrr_at_3_diff1
value: 30.940055172744103
- type: nauc_mrr_at_3_max
value: 19.798185385046573
- type: nauc_mrr_at_3_std
value: -4.064157829412906
- type: nauc_mrr_at_5_diff1
value: 31.02626997196326
- type: nauc_mrr_at_5_max
value: 20.619867130693375
- type: nauc_mrr_at_5_std
value: -3.621718540966316
- type: nauc_ndcg_at_1000_diff1
value: 31.215275891803834
- type: nauc_ndcg_at_1000_max
value: 23.038486093151146
- type: nauc_ndcg_at_1000_std
value: -1.703062891132552
- type: nauc_ndcg_at_100_diff1
value: 31.222739143287704
- type: nauc_ndcg_at_100_max
value: 23.73579897144862
- type: nauc_ndcg_at_100_std
value: -0.7280084629042225
- type: nauc_ndcg_at_10_diff1
value: 31.552059739538358
- type: nauc_ndcg_at_10_max
value: 23.663866950433963
- type: nauc_ndcg_at_10_std
value: -2.8203583845097455
- type: nauc_ndcg_at_1_diff1
value: 33.28395799775457
- type: nauc_ndcg_at_1_max
value: 17.5779149107383
- type: nauc_ndcg_at_1_std
value: -4.09778735624495
- type: nauc_ndcg_at_20_diff1
value: 31.115573156536808
- type: nauc_ndcg_at_20_max
value: 23.91556914404219
- type: nauc_ndcg_at_20_std
value: -1.5512740313611486
- type: nauc_ndcg_at_3_diff1
value: 30.182801126285153
- type: nauc_ndcg_at_3_max
value: 20.116992880379968
- type: nauc_ndcg_at_3_std
value: -5.270323067103204
- type: nauc_ndcg_at_5_diff1
value: 30.55855867246541
- type: nauc_ndcg_at_5_max
value: 21.854891798578514
- type: nauc_ndcg_at_5_std
value: -4.621054609051083
- type: nauc_precision_at_1000_diff1
value: -1.2677610563440622
- type: nauc_precision_at_1000_max
value: 10.306514334211938
- type: nauc_precision_at_1000_std
value: 13.52311200877095
- type: nauc_precision_at_100_diff1
value: 4.584881683104716
- type: nauc_precision_at_100_max
value: 19.05218376135677
- type: nauc_precision_at_100_std
value: 19.53317534358223
- type: nauc_precision_at_10_diff1
value: 20.668826405226238
- type: nauc_precision_at_10_max
value: 27.133486139333527
- type: nauc_precision_at_10_std
value: 6.507236892401218
- type: nauc_precision_at_1_diff1
value: 33.28395799775457
- type: nauc_precision_at_1_max
value: 17.5779149107383
- type: nauc_precision_at_1_std
value: -4.09778735624495
- type: nauc_precision_at_20_diff1
value: 14.220253768827485
- type: nauc_precision_at_20_max
value: 25.76903982782269
- type: nauc_precision_at_20_std
value: 13.332493634341546
- type: nauc_precision_at_3_diff1
value: 25.11865756124606
- type: nauc_precision_at_3_max
value: 22.659692088129788
- type: nauc_precision_at_3_std
value: -2.4562156265188646
- type: nauc_precision_at_5_diff1
value: 23.14757206487534
- type: nauc_precision_at_5_max
value: 24.626603388946382
- type: nauc_precision_at_5_std
value: -0.3684391495744226
- type: nauc_recall_at_1000_diff1
value: 18.253568539360202
- type: nauc_recall_at_1000_max
value: 69.97593556184775
- type: nauc_recall_at_1000_std
value: 67.03053194965779
- type: nauc_recall_at_100_diff1
value: 27.49214266284193
- type: nauc_recall_at_100_max
value: 52.8893274817214
- type: nauc_recall_at_100_std
value: 41.90088287480938
- type: nauc_recall_at_10_diff1
value: 30.033274268188283
- type: nauc_recall_at_10_max
value: 32.423763247028035
- type: nauc_recall_at_10_std
value: 2.83469868226926
- type: nauc_recall_at_1_diff1
value: 33.32708116835102
- type: nauc_recall_at_1_max
value: 16.533091238015647
- type: nauc_recall_at_1_std
value: -5.473056333831887
- type: nauc_recall_at_20_diff1
value: 27.482504671207547
- type: nauc_recall_at_20_max
value: 36.766903843277824
- type: nauc_recall_at_20_std
value: 11.285869127910257
- type: nauc_recall_at_3_diff1
value: 26.871897458882167
- type: nauc_recall_at_3_max
value: 21.40857537556137
- type: nauc_recall_at_3_std
value: -5.177974131117242
- type: nauc_recall_at_5_diff1
value: 27.05417164774647
- type: nauc_recall_at_5_max
value: 25.54051973421125
- type: nauc_recall_at_5_std
value: -3.6239708422249772
- type: ndcg_at_1
value: 30.446
- type: ndcg_at_10
value: 49.393
- type: ndcg_at_100
      value: 54.329
    - type: ndcg_at_1000
      value: 55.075
- type: ndcg_at_20
value: 51.837
- type: ndcg_at_3
value: 40.598
- type: ndcg_at_5
      value: 45.13
- type: precision_at_1
value: 30.446
- type: precision_at_10
value: 8.456
- type: precision_at_100
value: 1.122
- type: precision_at_1000
      value: 0.119
- type: precision_at_20
value: 4.807
- type: precision_at_3
value: 18.694
- type: precision_at_5
value: 13.83
- type: recall_at_1
value: 26.849
- type: recall_at_10
value: 71.08
- type: recall_at_100
value: 92.528
- type: recall_at_1000
value: 98.124
- type: recall_at_20
value: 80.164
- type: recall_at_3
value: 48.211
- type: recall_at_5
      value: 58.669
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval (default)
type: mteb/quora
config: default
split: dev
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: main_score
value: 88.414
- type: map_at_1
value: 70.451
- type: map_at_10
value: 84.743
- type: map_at_100
      value: 85.318
- type: map_at_1000
value: 85.334
- type: map_at_20
value: 85.139
- type: map_at_3
value: 81.852
- type: map_at_5
value: 83.692
- type: mrr_at_1
      value: 81.1
- type: mrr_at_10
value: 87.573
- type: mrr_at_100
value: 87.6829088327801
- type: mrr_at_1000
value: 87.68388325928169
- type: mrr_at_20
value: 87.6581424881448
- type: mrr_at_3
value: 86.69333333333334
- type: mrr_at_5
value: 87.29333333333334
- type: nauc_map_at_1000_diff1
value: 75.72104615748786
- type: nauc_map_at_1000_max
value: 31.609188396862343
- type: nauc_map_at_1000_std
value: -46.52088683693813
- type: nauc_map_at_100_diff1
value: 75.72964444793845
- type: nauc_map_at_100_max
value: 31.591912730669613
- type: nauc_map_at_100_std
value: -46.60975557872442
- type: nauc_map_at_10_diff1
value: 75.94308571775802
- type: nauc_map_at_10_max
value: 31.253563385398657
- type: nauc_map_at_10_std
value: -48.181322538551086
- type: nauc_map_at_1_diff1
value: 79.79532882332838
- type: nauc_map_at_1_max
value: 21.771809872442184
- type: nauc_map_at_1_std
value: -42.77169105840692
- type: nauc_map_at_20_diff1
value: 75.81259077387881
- type: nauc_map_at_20_max
value: 31.416579812982732
- type: nauc_map_at_20_std
value: -47.20581632849992
- type: nauc_map_at_3_diff1
value: 76.20949767229638
- type: nauc_map_at_3_max
value: 28.979538243065196
- type: nauc_map_at_3_std
value: -49.54877638998648
- type: nauc_map_at_5_diff1
value: 76.00775678250743
- type: nauc_map_at_5_max
value: 30.52865409003373
- type: nauc_map_at_5_std
value: -49.12438963420192
- type: nauc_mrr_at_1000_diff1
value: 76.53323969103015
- type: nauc_mrr_at_1000_max
value: 35.012128118744265
- type: nauc_mrr_at_1000_std
value: -42.21934663675809
- type: nauc_mrr_at_100_diff1
value: 76.53433148138932
- type: nauc_mrr_at_100_max
value: 35.01523464314866
- type: nauc_mrr_at_100_std
value: -42.218206899281256
- type: nauc_mrr_at_10_diff1
value: 76.55659780959033
- type: nauc_mrr_at_10_max
value: 35.12026307368733
- type: nauc_mrr_at_10_std
value: -42.332025538861515
- type: nauc_mrr_at_1_diff1
value: 77.4819303112777
- type: nauc_mrr_at_1_max
value: 33.87083528550898
- type: nauc_mrr_at_1_std
value: -40.70273058923244
- type: nauc_mrr_at_20_diff1
value: 76.54181877814159
- type: nauc_mrr_at_20_max
value: 35.029167402913025
- type: nauc_mrr_at_20_std
value: -42.26416201560607
- type: nauc_mrr_at_3_diff1
value: 76.13878281984827
- type: nauc_mrr_at_3_max
value: 34.95491780423885
- type: nauc_mrr_at_3_std
value: -42.5669408971239
- type: nauc_mrr_at_5_diff1
value: 76.36492389659307
- type: nauc_mrr_at_5_max
value: 35.26107283009812
- type: nauc_mrr_at_5_std
value: -42.42173398101322
- type: nauc_ndcg_at_1000_diff1
value: 75.72692836343464
- type: nauc_ndcg_at_1000_max
value: 33.247134568215095
- type: nauc_ndcg_at_1000_std
value: -44.439309524493986
- type: nauc_ndcg_at_100_diff1
value: 75.76218437396709
- type: nauc_ndcg_at_100_max
value: 33.213029188852076
- type: nauc_ndcg_at_100_std
value: -44.86123377474948
- type: nauc_ndcg_at_10_diff1
value: 76.0418708659554
- type: nauc_ndcg_at_10_max
value: 33.03535169276814
- type: nauc_ndcg_at_10_std
value: -47.78769977393773
- type: nauc_ndcg_at_1_diff1
value: 77.5187067567312
- type: nauc_ndcg_at_1_max
value: 33.752101025952825
- type: nauc_ndcg_at_1_std
value: -40.74843856168861
- type: nauc_ndcg_at_20_diff1
value: 75.93559598948329
- type: nauc_ndcg_at_20_max
value: 32.78328286896399
- type: nauc_ndcg_at_20_std
value: -46.52407523302919
- type: nauc_ndcg_at_3_diff1
value: 74.79008589166301
- type: nauc_ndcg_at_3_max
value: 31.808036331631794
- type: nauc_ndcg_at_3_std
value: -47.52652861987724
- type: nauc_ndcg_at_5_diff1
value: 75.43287367533794
- type: nauc_ndcg_at_5_max
value: 32.833203744407065
- type: nauc_ndcg_at_5_std
value: -48.200904530310275
- type: nauc_precision_at_1000_diff1
value: -43.672762408765834
- type: nauc_precision_at_1000_max
value: -2.214136208188294
- type: nauc_precision_at_1000_std
value: 40.29144301611288
- type: nauc_precision_at_100_diff1
value: -43.12206249775465
- type: nauc_precision_at_100_max
value: -1.6653962082743625
- type: nauc_precision_at_100_std
value: 37.5746847458721
- type: nauc_precision_at_10_diff1
value: -37.22201537627446
- type: nauc_precision_at_10_max
value: 2.313038479878318
- type: nauc_precision_at_10_std
value: 24.85067899903987
- type: nauc_precision_at_1_diff1
value: 77.5187067567312
- type: nauc_precision_at_1_max
value: 33.752101025952825
- type: nauc_precision_at_1_std
value: -40.74843856168861
- type: nauc_precision_at_20_diff1
value: -40.88285707082065
- type: nauc_precision_at_20_max
value: -0.4200418417967951
- type: nauc_precision_at_20_std
value: 31.468438530634213
- type: nauc_precision_at_3_diff1
value: -19.411745749449427
- type: nauc_precision_at_3_max
value: 10.67761072881965
- type: nauc_precision_at_3_std
value: 6.696045978354165
- type: nauc_precision_at_5_diff1
value: -30.962882716609258
- type: nauc_precision_at_5_max
value: 6.418634812410295
- type: nauc_precision_at_5_std
value: 17.387821173608316
- type: nauc_recall_at_1000_diff1
value: 94.69386728069973
- type: nauc_recall_at_1000_max
value: 84.28469880099311
- type: nauc_recall_at_1000_std
value: 27.77863150403283
- type: nauc_recall_at_100_diff1
value: 75.67916531615204
- type: nauc_recall_at_100_max
value: 38.167545891813184
- type: nauc_recall_at_100_std
value: -64.10332561926879
- type: nauc_recall_at_10_diff1
value: 74.06200450248475
- type: nauc_recall_at_10_max
value: 32.11315273973532
- type: nauc_recall_at_10_std
value: -66.65395786324505
- type: nauc_recall_at_1_diff1
value: 79.79532882332838
- type: nauc_recall_at_1_max
value: 21.771809872442184
- type: nauc_recall_at_1_std
value: -42.77169105840692
- type: nauc_recall_at_20_diff1
value: 75.33902902721827
- type: nauc_recall_at_20_max
value: 29.429302358270228
- type: nauc_recall_at_20_std
value: -71.19429079945306
- type: nauc_recall_at_3_diff1
value: 71.53824579534546
- type: nauc_recall_at_3_max
value: 27.06969939574833
- type: nauc_recall_at_3_std
value: -56.11546620113172
- type: nauc_recall_at_5_diff1
value: 71.03056598232821
- type: nauc_recall_at_5_max
value: 29.8357436692765
- type: nauc_recall_at_5_std
value: -60.42704500308669
- type: ndcg_at_1
value: 81.08
- type: ndcg_at_10
value: 88.414
- type: ndcg_at_100
value: 89.586
- type: ndcg_at_1000
      value: 89.686
- type: ndcg_at_20
value: 89.092
- type: ndcg_at_3
value: 85.697
- type: ndcg_at_5
value: 87.215
- type: precision_at_1
value: 81.08
- type: precision_at_10
      value: 13.344
- type: precision_at_100
value: 1.489
- type: precision_at_1000
value: 0.152
- type: precision_at_20
      value: 7.044
- type: precision_at_3
value: 37.627
- type: precision_at_5
value: 24.668
- type: recall_at_1
value: 70.451
- type: recall_at_10
value: 95.588
- type: recall_at_100
value: 99.53
- type: recall_at_1000
value: 99.985
- type: recall_at_20
value: 97.723
- type: recall_at_3
value: 87.775
- type: recall_at_5
value: 92.134
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval (default)
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: main_score
value: 88.78
- type: map_at_1
      value: 71.181
    - type: map_at_10
      value: 85.079
- type: map_at_100
value: 85.685
- type: map_at_1000
value: 85.702
- type: map_at_20
value: 85.476
- type: map_at_3
      value: 82.154
    - type: map_at_5
      value: 83.996
- type: mrr_at_1
value: 81.94
- type: mrr_at_10
value: 88.03901984126983
- type: mrr_at_100
value: 88.12521550900104
- type: mrr_at_1000
value: 88.12597480807705
- type: mrr_at_20
value: 88.10112262221783
- type: mrr_at_3
value: 87.14833333333334
- type: mrr_at_5
value: 87.75433333333332
- type: nauc_map_at_1000_diff1
value: 76.35234622746741
- type: nauc_map_at_1000_max
value: 34.98127835356811
- type: nauc_map_at_1000_std
value: -48.412573733834584
- type: nauc_map_at_100_diff1
value: 76.360780142656
- type: nauc_map_at_100_max
value: 34.95463862302956
- type: nauc_map_at_100_std
value: -48.465125662089584
- type: nauc_map_at_10_diff1
value: 76.52051460855479
- type: nauc_map_at_10_max
value: 34.52605693128301
- type: nauc_map_at_10_std
value: -50.187292245995415
- type: nauc_map_at_1_diff1
value: 79.82094021507255
- type: nauc_map_at_1_max
value: 26.775009943003003
- type: nauc_map_at_1_std
value: -43.61817396485825
- type: nauc_map_at_20_diff1
value: 76.43954832866126
- type: nauc_map_at_20_max
value: 34.90428946303489
- type: nauc_map_at_20_std
value: -49.12609036782338
- type: nauc_map_at_3_diff1
value: 76.94334321778128
- type: nauc_map_at_3_max
value: 32.622560188949166
- type: nauc_map_at_3_std
value: -52.188893755275046
- type: nauc_map_at_5_diff1
value: 76.82067204509046
- type: nauc_map_at_5_max
value: 33.69870062813589
- type: nauc_map_at_5_std
value: -51.82728889112768
- type: nauc_mrr_at_1000_diff1
value: 76.95820451520244
- type: nauc_mrr_at_1000_max
value: 37.55235025494403
- type: nauc_mrr_at_1000_std
value: -44.920257644843616
- type: nauc_mrr_at_100_diff1
value: 76.9582899978155
- type: nauc_mrr_at_100_max
value: 37.553463697088354
- type: nauc_mrr_at_100_std
value: -44.91865155369245
- type: nauc_mrr_at_10_diff1
value: 76.9448254715384
- type: nauc_mrr_at_10_max
value: 37.57592800822079
- type: nauc_mrr_at_10_std
value: -44.99589549975646
- type: nauc_mrr_at_1_diff1
value: 77.9564573352381
- type: nauc_mrr_at_1_max
value: 37.315703535659146
- type: nauc_mrr_at_1_std
value: -42.14814981557996
- type: nauc_mrr_at_20_diff1
value: 76.96048519906928
- type: nauc_mrr_at_20_max
value: 37.595921188645136
- type: nauc_mrr_at_20_std
value: -44.95453191997584
- type: nauc_mrr_at_3_diff1
value: 76.75010647148396
- type: nauc_mrr_at_3_max
value: 37.33157903176563
- type: nauc_mrr_at_3_std
value: -45.76965088590701
- type: nauc_mrr_at_5_diff1
value: 76.91724094778046
- type: nauc_mrr_at_5_max
value: 37.6961116255935
- type: nauc_mrr_at_5_std
value: -45.315111572131336
- type: nauc_ndcg_at_1000_diff1
value: 76.17915741213382
- type: nauc_ndcg_at_1000_max
value: 36.3994100974221
- type: nauc_ndcg_at_1000_std
value: -46.52196285482035
- type: nauc_ndcg_at_100_diff1
value: 76.20633890208957
- type: nauc_ndcg_at_100_max
value: 36.300160051636446
- type: nauc_ndcg_at_100_std
value: -46.64640352852657
- type: nauc_ndcg_at_10_diff1
value: 76.15630905915566
- type: nauc_ndcg_at_10_max
value: 35.60844948367385
- type: nauc_ndcg_at_10_std
value: -49.545760942997966
- type: nauc_ndcg_at_1_diff1
value: 77.9374936485659
- type: nauc_ndcg_at_1_max
value: 37.284222900318895
- type: nauc_ndcg_at_1_std
value: -42.11990958304383
- type: nauc_ndcg_at_20_diff1
value: 76.27986532220213
- type: nauc_ndcg_at_20_max
value: 36.21561434308417
- type: nauc_ndcg_at_20_std
value: -48.37507535345429
- type: nauc_ndcg_at_3_diff1
value: 75.75636270427239
- type: nauc_ndcg_at_3_max
value: 35.032989706721466
- type: nauc_ndcg_at_3_std
value: -50.238571322181016
- type: nauc_ndcg_at_5_diff1
value: 76.196363901235
- type: nauc_ndcg_at_5_max
value: 35.00688298015009
- type: nauc_ndcg_at_5_std
value: -50.80538017477186
- type: nauc_precision_at_1000_diff1
value: -43.87939421459698
- type: nauc_precision_at_1000_max
value: -6.616664050809342
- type: nauc_precision_at_1000_std
value: 40.21658774987439
- type: nauc_precision_at_100_diff1
value: -43.38845573529132
- type: nauc_precision_at_100_max
value: -6.459899067577796
- type: nauc_precision_at_100_std
value: 38.97937099541361
- type: nauc_precision_at_10_diff1
value: -38.512616169714
- type: nauc_precision_at_10_max
value: -2.0392553261530506
- type: nauc_precision_at_10_std
value: 27.846561415195296
- type: nauc_precision_at_1_diff1
value: 77.9374936485659
- type: nauc_precision_at_1_max
value: 37.284222900318895
- type: nauc_precision_at_1_std
value: -42.11990958304383
- type: nauc_precision_at_20_diff1
value: -41.20993785338744
- type: nauc_precision_at_20_max
value: -3.883205139317751
- type: nauc_precision_at_20_std
value: 33.746465008755195
- type: nauc_precision_at_3_diff1
value: -18.925243780820253
- type: nauc_precision_at_3_max
value: 7.636938829464144
- type: nauc_precision_at_3_std
value: 5.1931719934549125
- type: nauc_precision_at_5_diff1
value: -30.993143706159508
- type: nauc_precision_at_5_max
value: 1.6891759655648126
- type: nauc_precision_at_5_std
value: 17.209212799329844
- type: nauc_recall_at_1000_diff1
value: 47.456878747204605
- type: nauc_recall_at_1000_max
value: 23.807161723272337
- type: nauc_recall_at_1000_std
value: -32.766355457141614
- type: nauc_recall_at_100_diff1
value: 72.34674588264632
- type: nauc_recall_at_100_max
value: 33.99490510103659
- type: nauc_recall_at_100_std
value: -36.46420217881495
- type: nauc_recall_at_10_diff1
value: 71.56272183904842
- type: nauc_recall_at_10_max
value: 32.016038250938735
- type: nauc_recall_at_10_std
value: -66.96989454392227
- type: nauc_recall_at_1_diff1
value: 79.82094021507255
- type: nauc_recall_at_1_max
value: 26.775009943003003
- type: nauc_recall_at_1_std
value: -43.61817396485825
- type: nauc_recall_at_20_diff1
value: 73.24474644767976
- type: nauc_recall_at_20_max
value: 36.40934106083169
- type: nauc_recall_at_20_std
value: -65.70195006740305
- type: nauc_recall_at_3_diff1
value: 72.78023437495966
- type: nauc_recall_at_3_max
value: 29.643076465434625
- type: nauc_recall_at_3_std
value: -60.40805827565894
- type: nauc_recall_at_5_diff1
value: 72.04421253222534
- type: nauc_recall_at_5_max
value: 30.353847712973938
- type: nauc_recall_at_5_std
value: -64.58955239875104
- type: ndcg_at_1
value: 81.95
- type: ndcg_at_10
value: 88.78
- type: ndcg_at_100
      value: 89.909
- type: ndcg_at_1000
value: 90.011
- type: ndcg_at_20
value: 89.376
- type: ndcg_at_3
value: 86.027
- type: ndcg_at_5
value: 87.551
- type: precision_at_1
value: 81.95
- type: precision_at_10
      value: 13.454
- type: precision_at_100
value: 1.528
- type: precision_at_1000
value: 0.157
- type: precision_at_20
      value: 7.121
- type: precision_at_3
value: 37.6
- type: precision_at_5
value: 24.712
- type: recall_at_1
      value: 71.181
    - type: recall_at_10
      value: 95.751
- type: recall_at_100
value: 99.544
- type: recall_at_1000
      value: 99.989
- type: recall_at_20
value: 97.629
- type: recall_at_3
value: 87.801
- type: recall_at_5
      value: 92.141
- task:
type: Clustering
dataset:
name: MTEB RedditClustering (default)
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: main_score
value: 53.81557774708997
- type: v_measure
value: 53.81557774708997
- type: v_measure_std
value: 4.940724653585969
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P (default)
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: main_score
value: 63.059340693644785
- type: v_measure
value: 63.059340693644785
- type: v_measure_std
value: 11.97353413900713
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS (default)
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: main_score
      value: 20.441
- type: map_at_1
value: 4.508
- type: map_at_10
value: 12.141
- type: map_at_100
value: 14.316
- type: map_at_1000
value: 14.658
- type: map_at_20
value: 13.242
- type: map_at_3
value: 8.624
- type: map_at_5
value: 10.333
- type: mrr_at_1
value: 22.3
- type: mrr_at_10
value: 34.01488095238095
- type: mrr_at_100
value: 35.12947200540852
- type: mrr_at_1000
value: 35.18258200333811
- type: mrr_at_20
value: 34.72472937494222
- type: mrr_at_3
value: 30.51666666666667
- type: mrr_at_5
value: 32.471666666666664
- type: nauc_map_at_1000_diff1
value: 19.5295210919884
- type: nauc_map_at_1000_max
value: 35.776154650551575
- type: nauc_map_at_1000_std
value: 22.598989298641175
- type: nauc_map_at_100_diff1
value: 19.5575568280981
- type: nauc_map_at_100_max
value: 35.764999407927924
- type: nauc_map_at_100_std
value: 22.388928333977983
- type: nauc_map_at_10_diff1
value: 20.301784773079817
- type: nauc_map_at_10_max
value: 33.90741195696659
- type: nauc_map_at_10_std
value: 17.948501033988475
- type: nauc_map_at_1_diff1
value: 27.63795852560605
- type: nauc_map_at_1_max
value: 28.677707624382226
- type: nauc_map_at_1_std
value: 11.313097574744265
- type: nauc_map_at_20_diff1
value: 19.82097532850719
- type: nauc_map_at_20_max
value: 35.10681865871605
- type: nauc_map_at_20_std
value: 20.393280333290768
- type: nauc_map_at_3_diff1
value: 23.465265104601574
- type: nauc_map_at_3_max
value: 31.90516362728403
- type: nauc_map_at_3_std
value: 11.082550126615867
- type: nauc_map_at_5_diff1
value: 21.191560324800694
- type: nauc_map_at_5_max
value: 32.349846169015365
- type: nauc_map_at_5_std
value: 14.08086983687101
- type: nauc_mrr_at_1000_diff1
value: 22.43032918913008
- type: nauc_mrr_at_1000_max
value: 30.264097065600065
- type: nauc_mrr_at_1000_std
value: 17.16645379954286
- type: nauc_mrr_at_100_diff1
value: 22.431918077326362
- type: nauc_mrr_at_100_max
value: 30.278656848582674
- type: nauc_mrr_at_100_std
value: 17.205287864081686
- type: nauc_mrr_at_10_diff1
value: 22.399800621436718
- type: nauc_mrr_at_10_max
value: 30.17231346566611
- type: nauc_mrr_at_10_std
value: 17.12117352552421
- type: nauc_mrr_at_1_diff1
value: 27.338483005001855
- type: nauc_mrr_at_1_max
value: 28.516706043782
- type: nauc_mrr_at_1_std
value: 11.568218727642739
- type: nauc_mrr_at_20_diff1
value: 22.34527846970131
- type: nauc_mrr_at_20_max
value: 30.293795035108754
- type: nauc_mrr_at_20_std
value: 17.24818382223587
- type: nauc_mrr_at_3_diff1
value: 21.41347530740495
- type: nauc_mrr_at_3_max
value: 29.49578215215621
- type: nauc_mrr_at_3_std
value: 15.565310294336843
- type: nauc_mrr_at_5_diff1
value: 21.876219757110178
- type: nauc_mrr_at_5_max
value: 29.579506797264944
- type: nauc_mrr_at_5_std
value: 16.105358881703282
- type: nauc_ndcg_at_1000_diff1
value: 18.678582790233204
- type: nauc_ndcg_at_1000_max
value: 36.955188341480266
- type: nauc_ndcg_at_1000_std
value: 29.649172816911378
- type: nauc_ndcg_at_100_diff1
value: 18.52482522871902
- type: nauc_ndcg_at_100_max
value: 37.130183155861445
- type: nauc_ndcg_at_100_std
value: 29.52961271427892
- type: nauc_ndcg_at_10_diff1
value: 19.94175154982902
- type: nauc_ndcg_at_10_max
value: 33.9452753797045
- type: nauc_ndcg_at_10_std
value: 20.946470709263238
- type: nauc_ndcg_at_1_diff1
value: 27.338483005001855
- type: nauc_ndcg_at_1_max
value: 28.516706043782
- type: nauc_ndcg_at_1_std
value: 11.568218727642739
- type: nauc_ndcg_at_20_diff1
value: 19.293919910420897
- type: nauc_ndcg_at_20_max
value: 35.76522367701567
- type: nauc_ndcg_at_20_std
value: 24.376420891239388
- type: nauc_ndcg_at_3_diff1
value: 21.270066084825956
- type: nauc_ndcg_at_3_max
value: 31.326650210409184
- type: nauc_ndcg_at_3_std
value: 13.58976920494935
- type: nauc_ndcg_at_5_diff1
value: 20.02407289631927
- type: nauc_ndcg_at_5_max
value: 31.684362011594843
- type: nauc_ndcg_at_5_std
value: 16.249511700219692
- type: nauc_precision_at_1000_diff1
value: 6.234005186132995
- type: nauc_precision_at_1000_max
value: 29.34471048327642
- type: nauc_precision_at_1000_std
value: 39.64669468760326
- type: nauc_precision_at_100_diff1
value: 9.855106179885876
- type: nauc_precision_at_100_max
value: 34.132120560020866
- type: nauc_precision_at_100_std
value: 39.72164007339206
- type: nauc_precision_at_10_diff1
value: 15.90102642361928
- type: nauc_precision_at_10_max
value: 33.65916770662614
- type: nauc_precision_at_10_std
value: 25.390897965743548
- type: nauc_precision_at_1_diff1
value: 27.338483005001855
- type: nauc_precision_at_1_max
value: 28.516706043782
- type: nauc_precision_at_1_std
value: 11.568218727642739
- type: nauc_precision_at_20_diff1
value: 13.612513188408462
- type: nauc_precision_at_20_max
value: 35.277982280608725
- type: nauc_precision_at_20_std
value: 30.71930871320442
- type: nauc_precision_at_3_diff1
value: 18.807118007742872
- type: nauc_precision_at_3_max
value: 31.816570276319002
- type: nauc_precision_at_3_std
value: 14.449767808472858
- type: nauc_precision_at_5_diff1
value: 16.209744159726867
- type: nauc_precision_at_5_max
value: 31.151946588732997
- type: nauc_precision_at_5_std
value: 18.381161071520488
- type: nauc_recall_at_1000_diff1
value: 6.273647441536759
- type: nauc_recall_at_1000_max
value: 29.1162229253121
- type: nauc_recall_at_1000_std
value: 41.52051532378572
- type: nauc_recall_at_100_diff1
value: 10.065781985677573
- type: nauc_recall_at_100_max
value: 33.83167291115486
- type: nauc_recall_at_100_std
value: 40.006650979954934
- type: nauc_recall_at_10_diff1
value: 16.15411588223024
- type: nauc_recall_at_10_max
value: 33.49396867499272
- type: nauc_recall_at_10_std
value: 25.292996350892167
- type: nauc_recall_at_1_diff1
value: 27.63795852560605
- type: nauc_recall_at_1_max
value: 28.677707624382226
- type: nauc_recall_at_1_std
value: 11.313097574744265
- type: nauc_recall_at_20_diff1
value: 13.925034360460256
- type: nauc_recall_at_20_max
value: 34.99803447287975
- type: nauc_recall_at_20_std
value: 30.666854032413088
- type: nauc_recall_at_3_diff1
value: 18.998052423925802
- type: nauc_recall_at_3_max
value: 31.62628665469099
- type: nauc_recall_at_3_std
value: 14.239340647611009
- type: nauc_recall_at_5_diff1
value: 16.398224006899152
- type: nauc_recall_at_5_max
value: 30.935454145918744
- type: nauc_recall_at_5_std
value: 18.143468400300172
- type: ndcg_at_1
value: 22.3
- type: ndcg_at_10
      value: 20.441
    - type: ndcg_at_100
      value: 28.836
    - type: ndcg_at_1000
      value: 34.705
- type: ndcg_at_20
value: 23.426
- type: ndcg_at_3
value: 19.205
- type: ndcg_at_5
value: 16.739
- type: precision_at_1
value: 22.3
- type: precision_at_10
value: 10.79
- type: precision_at_100
      value: 2.296
- type: precision_at_1000
value: 0.371
- type: precision_at_20
value: 7.115
- type: precision_at_3
value: 18.2
- type: precision_at_5
value: 14.84
- type: recall_at_1
value: 4.508
- type: recall_at_10
value: 21.853
- type: recall_at_100
      value: 46.59
- type: recall_at_1000
value: 75.25
- type: recall_at_20
value: 28.853
- type: recall_at_3
value: 11.068
- type: recall_at_5
value: 15.033
- task:
type: STS
dataset:
name: MTEB SICK-R (default)
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cosine_pearson
value: 84.30788927270164
- type: cosine_spearman
value: 80.07976019050732
- type: euclidean_pearson
value: 81.27116301839057
- type: euclidean_spearman
value: 80.07976519070897
- type: main_score
value: 80.07976019050732
- type: manhattan_pearson
value: 81.39470840383359
- type: manhattan_spearman
value: 80.11309125271727
- type: pearson
value: 84.30788927270164
- type: spearman
value: 80.07976019050732
- task:
type: STS
dataset:
name: MTEB STS12 (default)
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cosine_pearson
value: 85.6494246715977
- type: cosine_spearman
value: 77.21364859343417
- type: euclidean_pearson
value: 82.16686843138514
- type: euclidean_spearman
value: 77.2132269119475
- type: main_score
value: 77.21364859343417
- type: manhattan_pearson
value: 82.17288769644415
- type: manhattan_spearman
value: 77.05682937722813
- type: pearson
value: 85.6494246715977
- type: spearman
value: 77.21364859343417
- task:
type: STS
dataset:
name: MTEB STS13 (default)
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cosine_pearson
value: 82.12583205748827
- type: cosine_spearman
value: 83.55306391445943
- type: euclidean_pearson
value: 82.81699442422787
- type: euclidean_spearman
value: 83.55306391445943
- type: main_score
value: 83.55306391445943
- type: manhattan_pearson
value: 82.70032676616033
- type: manhattan_spearman
value: 83.43696105973991
- type: pearson
value: 82.12583205748827
- type: spearman
value: 83.55306391445943
- task:
type: STS
dataset:
name: MTEB STS14 (default)
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cosine_pearson
value: 82.8072878574002
- type: cosine_spearman
value: 82.39917863256566
- type: euclidean_pearson
value: 82.34142760248918
- type: euclidean_spearman
value: 82.39918313271785
- type: main_score
value: 82.39917863256566
- type: manhattan_pearson
value: 82.35430476764317
- type: manhattan_spearman
value: 82.38775090940842
- type: pearson
value: 82.8072878574002
- type: spearman
value: 82.39917863256566
- task:
type: STS
dataset:
name: MTEB STS15 (default)
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cosine_pearson
value: 87.05873896779867
- type: cosine_spearman
value: 88.08920102010087
- type: euclidean_pearson
value: 87.43579028480816
- type: euclidean_spearman
value: 88.08920593843715
- type: main_score
value: 88.08920102010087
- type: manhattan_pearson
value: 87.50258824179726
- type: manhattan_spearman
value: 88.18943707030766
- type: pearson
value: 87.05873896779867
- type: spearman
value: 88.08920102010087
- task:
type: STS
dataset:
name: MTEB STS16 (default)
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cosine_pearson
value: 83.18613820567626
- type: cosine_spearman
value: 85.02812271380569
- type: euclidean_pearson
value: 84.0552020752535
- type: euclidean_spearman
value: 85.0281225608977
- type: main_score
value: 85.02812271380569
- type: manhattan_pearson
value: 83.79067016461165
- type: manhattan_spearman
value: 84.75880971236536
- type: pearson
value: 83.18613820567626
- type: spearman
value: 85.02812271380569
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 39.5876015348183
- type: cosine_spearman
value: 38.2945838490163
- type: euclidean_pearson
value: 39.79784346190561
- type: euclidean_spearman
value: 38.2945838490163
- type: main_score
value: 38.2945838490163
- type: manhattan_pearson
value: 39.977833809923645
- type: manhattan_spearman
value: 39.388422674752235
- type: pearson
value: 39.5876015348183
- type: spearman
value: 38.2945838490163
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 88.51383096083804
- type: cosine_spearman
value: 88.52537252266963
- type: euclidean_pearson
value: 89.1117050087703
- type: euclidean_spearman
value: 88.52537252266963
- type: main_score
value: 88.52537252266963
- type: manhattan_pearson
value: 89.31585295977288
- type: manhattan_spearman
value: 88.78380232395662
- type: pearson
value: 88.51383096083804
- type: spearman
value: 88.52537252266963
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 35.50474589697871
- type: cosine_spearman
value: 30.812689378913603
- type: euclidean_pearson
value: 36.25909794770876
- type: euclidean_spearman
value: 30.812689378913603
- type: main_score
value: 30.812689378913603
- type: manhattan_pearson
value: 36.26828913763471
- type: manhattan_spearman
value: 31.528781713909197
- type: pearson
value: 35.50474589697871
- type: spearman
value: 30.812689378913603
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 32.652743251558654
- type: cosine_spearman
value: 29.105392097318806
- type: euclidean_pearson
value: 32.74903065824115
- type: euclidean_spearman
value: 29.105392097318806
- type: main_score
value: 29.105392097318806
- type: manhattan_pearson
value: 33.540625008403524
- type: manhattan_spearman
value: 29.355480493447494
- type: pearson
value: 32.652743251558654
- type: spearman
value: 29.105392097318806
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 12.36849882661743
- type: cosine_spearman
value: 7.611138217911713
- type: euclidean_pearson
value: 12.77603971192848
- type: euclidean_spearman
value: 7.611138217911713
- type: main_score
value: 7.611138217911713
- type: manhattan_pearson
value: 11.619163669702509
- type: manhattan_spearman
value: 6.184520778812523
- type: pearson
value: 12.36849882661743
- type: spearman
value: 7.611138217911713
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 6.59159885046766
- type: cosine_spearman
value: 4.622204785531158
- type: euclidean_pearson
value: 6.593149700947697
- type: euclidean_spearman
value: 4.622204785531158
- type: main_score
value: 4.622204785531158
- type: manhattan_pearson
value: 5.566016374381194
- type: manhattan_spearman
value: 3.8796229563749285
- type: pearson
value: 6.59159885046766
- type: spearman
value: 4.622204785531158
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 30.35179238078445
- type: cosine_spearman
value: 29.625068788447894
- type: euclidean_pearson
value: 30.233806247338414
- type: euclidean_spearman
value: 29.625068788447894
- type: main_score
value: 29.625068788447894
- type: manhattan_pearson
value: 29.936866734034933
- type: manhattan_spearman
value: 28.57299479927884
- type: pearson
value: 30.35179238078445
- type: spearman
value: 29.625068788447894
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 30.30232199857813
- type: cosine_spearman
value: 27.219119781543988
- type: euclidean_pearson
value: 30.225835856043272
- type: euclidean_spearman
value: 27.219119781543988
- type: main_score
value: 27.219119781543988
- type: manhattan_pearson
value: 29.142315782629925
- type: manhattan_spearman
value: 25.901216206187065
- type: pearson
value: 30.30232199857813
- type: spearman
value: 27.219119781543988
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 43.22978323210299
- type: cosine_spearman
value: 47.01518443724799
- type: euclidean_pearson
value: 45.930506019807574
- type: euclidean_spearman
value: 47.01518443724799
- type: main_score
value: 47.01518443724799
- type: manhattan_pearson
value: 47.44811320365125
- type: manhattan_spearman
value: 47.73671354326406
- type: pearson
value: 43.22978323210299
- type: spearman
value: 47.01518443724799
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 65.93329041695125
- type: cosine_spearman
value: 65.95494400411647
- type: euclidean_pearson
value: 67.77439530118112
- type: euclidean_spearman
value: 65.95494400411647
- type: main_score
value: 65.95494400411647
- type: manhattan_pearson
value: 68.21709531505775
- type: manhattan_spearman
value: 66.39646560258034
- type: pearson
value: 65.93329041695125
- type: spearman
value: 65.95494400411647
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 46.929818903916804
- type: cosine_spearman
value: 44.44075531175433
- type: euclidean_pearson
value: 47.059078863657675
- type: euclidean_spearman
value: 44.44075531175433
- type: main_score
value: 44.44075531175433
- type: manhattan_pearson
value: 46.04521740640152
- type: manhattan_spearman
value: 44.576197773142866
- type: pearson
value: 46.929818903916804
- type: spearman
value: 44.44075531175433
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 56.86304728923609
- type: cosine_spearman
value: 63.05184294758852
- type: euclidean_pearson
value: 58.177828253582206
- type: euclidean_spearman
value: 63.05184294758852
- type: main_score
value: 63.05184294758852
- type: manhattan_pearson
value: 58.958715164135825
- type: manhattan_spearman
value: 63.755348809781395
- type: pearson
value: 56.86304728923609
- type: spearman
value: 63.05184294758852
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 45.6965322701485
- type: cosine_spearman
value: 49.44860243126726
- type: euclidean_pearson
value: 45.71922769223791
- type: euclidean_spearman
value: 49.44860243126726
- type: main_score
value: 49.44860243126726
- type: manhattan_pearson
value: 45.78318374788422
- type: manhattan_spearman
value: 49.521422718994984
- type: pearson
value: 45.6965322701485
- type: spearman
value: 49.44860243126726
- task:
type: STS
dataset:
name: MTEB STSBenchmark (default)
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cosine_pearson
value: 85.0014615648556
- type: cosine_spearman
value: 86.65686905435463
- type: euclidean_pearson
value: 86.1451324543907
- type: euclidean_spearman
value: 86.65685763157673
- type: main_score
value: 86.65686905435463
- type: manhattan_pearson
value: 86.0861598253851
- type: manhattan_spearman
value: 86.61047820278552
- type: pearson
value: 85.0014615648556
- type: spearman
value: 86.65686905435463
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR (default)
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: main_score
value: 85.89651013746061
- type: map
value: 85.89651013746061
- type: mrr
value: 95.99524567661823
- type: nAUC_map_diff1
value: -0.0410803569069903
- type: nAUC_map_max
value: 53.629827070614546
- type: nAUC_map_std
value: 67.22768282404712
- type: nAUC_mrr_diff1
value: 40.687943307829606
- type: nAUC_mrr_max
value: 85.09337269421229
- type: nAUC_mrr_std
value: 79.32454109714799
- task:
type: Retrieval
dataset:
name: MTEB SciFact (default)
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: main_score
value: 73.251
- type: map_at_1
      value: 59.428
- type: map_at_10
value: 68.959
- type: map_at_100
value: 69.484
- type: map_at_1000
value: 69.511
- type: map_at_20
      value: 69.302
- type: map_at_3
value: 66.083
- type: map_at_5
value: 67.683
- type: mrr_at_1
value: 62.33333333333333
- type: mrr_at_10
value: 69.96005291005292
- type: mrr_at_100
value: 70.37333611401961
- type: mrr_at_1000
value: 70.40109312065698
- type: mrr_at_20
value: 70.23336940836941
- type: mrr_at_3
value: 67.72222222222221
- type: mrr_at_5
value: 69.07222222222222
- type: nauc_map_at_1000_diff1
value: 70.13570749625822
- type: nauc_map_at_1000_max
value: 52.65674632844235
- type: nauc_map_at_1000_std
value: 10.132852161672638
- type: nauc_map_at_100_diff1
value: 70.13538095683808
- type: nauc_map_at_100_max
value: 52.6884935329599
- type: nauc_map_at_100_std
value: 10.14379381633777
- type: nauc_map_at_10_diff1
value: 70.36327918679356
- type: nauc_map_at_10_max
value: 52.954942802961504
- type: nauc_map_at_10_std
value: 10.02209736428799
- type: nauc_map_at_1_diff1
value: 73.49594650090174
- type: nauc_map_at_1_max
value: 45.82858518993797
- type: nauc_map_at_1_std
value: -0.40926083096917837
- type: nauc_map_at_20_diff1
value: 70.06837661473298
- type: nauc_map_at_20_max
value: 52.76020878212071
- type: nauc_map_at_20_std
value: 10.007600191187741
- type: nauc_map_at_3_diff1
value: 70.31817155099134
- type: nauc_map_at_3_max
value: 48.641969098932314
- type: nauc_map_at_3_std
value: 7.073005227799293
- type: nauc_map_at_5_diff1
value: 69.91715955939225
- type: nauc_map_at_5_max
value: 50.08437399280585
- type: nauc_map_at_5_std
value: 8.325142211837317
- type: nauc_mrr_at_1000_diff1
value: 70.89523822365368
- type: nauc_mrr_at_1000_max
value: 54.566931991503175
- type: nauc_mrr_at_1000_std
value: 13.068031822873314
- type: nauc_mrr_at_100_diff1
value: 70.89405457859479
- type: nauc_mrr_at_100_max
value: 54.5971940226233
- type: nauc_mrr_at_100_std
value: 13.075912428577213
- type: nauc_mrr_at_10_diff1
value: 71.03623671748012
- type: nauc_mrr_at_10_max
value: 54.95069512409659
- type: nauc_mrr_at_10_std
value: 13.363315852524071
- type: nauc_mrr_at_1_diff1
value: 73.44129679978948
- type: nauc_mrr_at_1_max
value: 53.11585866037946
- type: nauc_mrr_at_1_std
value: 8.225722407664433
- type: nauc_mrr_at_20_diff1
value: 70.83036037320228
- type: nauc_mrr_at_20_max
value: 54.65143963366521
- type: nauc_mrr_at_20_std
value: 13.136998458127792
- type: nauc_mrr_at_3_diff1
value: 71.153158657001
- type: nauc_mrr_at_3_max
value: 53.205584683267084
- type: nauc_mrr_at_3_std
value: 11.980916718299982
- type: nauc_mrr_at_5_diff1
value: 70.66319100789279
- type: nauc_mrr_at_5_max
value: 53.64747705626284
- type: nauc_mrr_at_5_std
value: 13.158118645048983
- type: nauc_ndcg_at_1000_diff1
value: 69.82895027282646
- type: nauc_ndcg_at_1000_max
value: 54.52290390511955
- type: nauc_ndcg_at_1000_std
value: 13.07433953652383
- type: nauc_ndcg_at_100_diff1
value: 69.66603141670811
- type: nauc_ndcg_at_100_max
value: 55.33932851097095
- type: nauc_ndcg_at_100_std
value: 13.531855815973309
- type: nauc_ndcg_at_10_diff1
value: 70.0672222067367
- type: nauc_ndcg_at_10_max
value: 56.609848753019875
- type: nauc_ndcg_at_10_std
value: 13.801015990958836
- type: nauc_ndcg_at_1_diff1
value: 73.44129679978948
- type: nauc_ndcg_at_1_max
value: 53.11585866037946
- type: nauc_ndcg_at_1_std
value: 8.225722407664433
- type: nauc_ndcg_at_20_diff1
value: 69.00029554032444
- type: nauc_ndcg_at_20_max
value: 55.77587708778143
- type: nauc_ndcg_at_20_std
value: 13.550535479714885
- type: nauc_ndcg_at_3_diff1
value: 69.35809356376409
- type: nauc_ndcg_at_3_max
value: 50.88753511744564
- type: nauc_ndcg_at_3_std
value: 10.376849196217488
- type: nauc_ndcg_at_5_diff1
value: 68.7557226824334
- type: nauc_ndcg_at_5_max
value: 51.64039917399674
- type: nauc_ndcg_at_5_std
value: 11.375060289968888
- type: nauc_precision_at_1000_diff1
value: -24.38184345232032
- type: nauc_precision_at_1000_max
value: 25.3113664869997
- type: nauc_precision_at_1000_std
value: 54.78151801755995
- type: nauc_precision_at_100_diff1
value: -10.585088484724002
- type: nauc_precision_at_100_max
value: 34.58061719734689
- type: nauc_precision_at_100_std
value: 51.08616923594603
- type: nauc_precision_at_10_diff1
value: 15.011059571897201
- type: nauc_precision_at_10_max
value: 53.380534154853976
- type: nauc_precision_at_10_std
value: 46.77574783177469
- type: nauc_precision_at_1_diff1
value: 73.44129679978948
- type: nauc_precision_at_1_max
value: 53.11585866037946
- type: nauc_precision_at_1_std
value: 8.225722407664433
- type: nauc_precision_at_20_diff1
value: 3.0375226465875627
- type: nauc_precision_at_20_max
value: 46.96352459445593
- type: nauc_precision_at_20_std
value: 47.8180732859282
- type: nauc_precision_at_3_diff1
value: 41.34820548287889
- type: nauc_precision_at_3_max
value: 48.02988164084151
- type: nauc_precision_at_3_std
value: 29.8325902163623
- type: nauc_precision_at_5_diff1
value: 28.292680467974172
- type: nauc_precision_at_5_max
value: 48.2395519867733
- type: nauc_precision_at_5_std
value: 36.77307484865436
- type: nauc_recall_at_1000_diff1
value: 86.92810457516407
- type: nauc_recall_at_1000_max
value: 12.278244631182748
- type: nauc_recall_at_1000_std
value: 86.92810457516407
- type: nauc_recall_at_100_diff1
value: 64.23902894491097
- type: nauc_recall_at_100_max
value: 78.57809790582883
- type: nauc_recall_at_100_std
value: 35.47085500866988
- type: nauc_recall_at_10_diff1
value: 67.63049621810325
- type: nauc_recall_at_10_max
value: 71.99088366956788
- type: nauc_recall_at_10_std
value: 25.694634100003483
- type: nauc_recall_at_1_diff1
value: 73.49594650090174
- type: nauc_recall_at_1_max
value: 45.82858518993797
- type: nauc_recall_at_1_std
value: -0.40926083096917837
- type: nauc_recall_at_20_diff1
value: 58.91757942656138
- type: nauc_recall_at_20_max
value: 70.52947052947034
- type: nauc_recall_at_20_std
value: 26.934442802706094
- type: nauc_recall_at_3_diff1
value: 66.06848071584821
- type: nauc_recall_at_3_max
value: 47.58371405487635
- type: nauc_recall_at_3_std
value: 10.115468870167247
- type: nauc_recall_at_5_diff1
value: 63.103805010550055
- type: nauc_recall_at_5_max
value: 51.85255345215588
- type: nauc_recall_at_5_std
value: 15.746356552783283
- type: ndcg_at_1
      value: 62.333
- type: ndcg_at_10
value: 73.251
- type: ndcg_at_100
value: 75.471
- type: ndcg_at_1000
value: 76.058
- type: ndcg_at_20
value: 74.344
- type: ndcg_at_3
value: 68.314
- type: ndcg_at_5
value: 70.733
- type: precision_at_1
      value: 62.333
- type: precision_at_10
value: 9.667
- type: precision_at_100
value: 1.083
- type: precision_at_1000
      value: 0.113
- type: precision_at_20
value: 5.067
- type: precision_at_3
      value: 26.333
- type: precision_at_5
value: 17.4
- type: recall_at_1
      value: 59.428
- type: recall_at_10
value: 85.422
- type: recall_at_100
value: 95.333
- type: recall_at_1000
value: 99.667
- type: recall_at_20
value: 89.533
- type: recall_at_3
      value: 72.061
- type: recall_at_5
value: 78.261
- task:
type: Retrieval
dataset:
name: MTEB SciFact (default)
type: mteb/scifact
config: default
split: train
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: main_score
value: 76.485
- type: map_at_1
value: 62.437
- type: map_at_10
value: 72.424
- type: map_at_100
value: 72.875
- type: map_at_1000
value: 72.894
- type: map_at_20
value: 72.699
- type: map_at_3
value: 69.929
- type: map_at_5
value: 71.41
- type: mrr_at_1
value: 65.389369592089
- type: mrr_at_10
value: 73.51874742480429
- type: mrr_at_100
value: 73.86149719584975
- type: mrr_at_1000
value: 73.87817655179401
- type: mrr_at_20
value: 73.69712246373606
- type: mrr_at_3
value: 72.10548001648127
- type: mrr_at_5
value: 72.83477544293366
- type: nauc_map_at_1000_diff1
value: 70.88987705012035
- type: nauc_map_at_1000_max
value: 46.77717767104366
- type: nauc_map_at_1000_std
value: 12.442652616829855
- type: nauc_map_at_100_diff1
value: 70.88170091037614
- type: nauc_map_at_100_max
value: 46.78841410305628
- type: nauc_map_at_100_std
value: 12.472496933804496
- type: nauc_map_at_10_diff1
value: 70.97977319347748
- type: nauc_map_at_10_max
value: 46.84404169739291
- type: nauc_map_at_10_std
value: 12.21031737700784
- type: nauc_map_at_1_diff1
value: 73.66858140737193
- type: nauc_map_at_1_max
value: 37.72209378868329
- type: nauc_map_at_1_std
value: 6.058877150429456
- type: nauc_map_at_20_diff1
value: 70.80507181778022
- type: nauc_map_at_20_max
value: 46.819460847144114
- type: nauc_map_at_20_std
value: 12.472103561158026
- type: nauc_map_at_3_diff1
value: 71.58242927801035
- type: nauc_map_at_3_max
value: 43.60594750641507
- type: nauc_map_at_3_std
value: 8.800606729897389
- type: nauc_map_at_5_diff1
value: 71.15709258217167
- type: nauc_map_at_5_max
value: 45.724847576243924
- type: nauc_map_at_5_std
value: 10.318911927965361
- type: nauc_mrr_at_1000_diff1
value: 71.04412576767227
- type: nauc_mrr_at_1000_max
value: 48.49663895894284
- type: nauc_mrr_at_1000_std
value: 15.61285718843641
- type: nauc_mrr_at_100_diff1
value: 71.03490861681165
- type: nauc_mrr_at_100_max
value: 48.50439063635205
- type: nauc_mrr_at_100_std
value: 15.640711217565325
- type: nauc_mrr_at_10_diff1
value: 71.03518493601177
- type: nauc_mrr_at_10_max
value: 48.61941531114441
- type: nauc_mrr_at_10_std
value: 15.706199493660014
- type: nauc_mrr_at_1_diff1
value: 73.60832695813949
- type: nauc_mrr_at_1_max
value: 44.578750878677084
- type: nauc_mrr_at_1_std
value: 11.890587471184963
- type: nauc_mrr_at_20_diff1
value: 70.97955641800446
- type: nauc_mrr_at_20_max
value: 48.58739923848605
- type: nauc_mrr_at_20_std
value: 15.664657516610061
- type: nauc_mrr_at_3_diff1
value: 71.27717189961885
- type: nauc_mrr_at_3_max
value: 48.6513411916282
- type: nauc_mrr_at_3_std
value: 15.310361808945489
- type: nauc_mrr_at_5_diff1
value: 71.20476915325136
- type: nauc_mrr_at_5_max
value: 48.441764267472145
- type: nauc_mrr_at_5_std
value: 15.365655626467209
- type: nauc_ndcg_at_1000_diff1
value: 70.08074385839639
- type: nauc_ndcg_at_1000_max
value: 48.6611446591934
- type: nauc_ndcg_at_1000_std
value: 15.430653624542611
- type: nauc_ndcg_at_100_diff1
value: 69.68550330145398
- type: nauc_ndcg_at_100_max
value: 48.88307581467983
- type: nauc_ndcg_at_100_std
value: 16.234260889596484
- type: nauc_ndcg_at_10_diff1
value: 69.84945527709709
- type: nauc_ndcg_at_10_max
value: 49.4084894308013
- type: nauc_ndcg_at_10_std
value: 15.695585904151896
- type: nauc_ndcg_at_1_diff1
value: 73.60832695813949
- type: nauc_ndcg_at_1_max
value: 44.578750878677084
- type: nauc_ndcg_at_1_std
value: 11.890587471184963
- type: nauc_ndcg_at_20_diff1
value: 69.33301377386857
- type: nauc_ndcg_at_20_max
value: 49.35761408825228
- type: nauc_ndcg_at_20_std
value: 16.3513166242212
- type: nauc_ndcg_at_3_diff1
value: 70.68084301272158
- type: nauc_ndcg_at_3_max
value: 46.19757134168426
- type: nauc_ndcg_at_3_std
value: 11.440557917517024
- type: nauc_ndcg_at_5_diff1
value: 70.22608752669124
- type: nauc_ndcg_at_5_max
value: 47.715928046644144
- type: nauc_ndcg_at_5_std
value: 12.927681872823138
- type: nauc_precision_at_1000_diff1
value: -30.771908331112545
- type: nauc_precision_at_1000_max
value: 21.856638608356842
- type: nauc_precision_at_1000_std
value: 35.14083908557948
- type: nauc_precision_at_100_diff1
value: -21.94474337380867
- type: nauc_precision_at_100_max
value: 27.749326522576958
- type: nauc_precision_at_100_std
value: 38.74941555088935
- type: nauc_precision_at_10_diff1
value: 4.1338368705214705
- type: nauc_precision_at_10_max
value: 40.35268389940857
- type: nauc_precision_at_10_std
value: 34.86150577835351
- type: nauc_precision_at_1_diff1
value: 73.60832695813949
- type: nauc_precision_at_1_max
value: 44.578750878677084
- type: nauc_precision_at_1_std
value: 11.890587471184963
- type: nauc_precision_at_20_diff1
value: -5.308128515414954
- type: nauc_precision_at_20_max
value: 34.927193769639246
- type: nauc_precision_at_20_std
value: 36.45210990375071
- type: nauc_precision_at_3_diff1
value: 38.801243070827766
- type: nauc_precision_at_3_max
value: 50.15215786362873
- type: nauc_precision_at_3_std
value: 22.902503553912712
- type: nauc_precision_at_5_diff1
value: 19.15608744207784
- type: nauc_precision_at_5_max
value: 43.451044323440705
- type: nauc_precision_at_5_std
value: 26.099168664730975
- type: nauc_recall_at_1000_diff1
value: 59.973000734681946
- type: nauc_recall_at_1000_max
value: 90.19473173884923
- type: nauc_recall_at_1000_std
value: 77.70012495920277
- type: nauc_recall_at_100_diff1
value: 49.520208174749534
- type: nauc_recall_at_100_max
value: 62.658917735912844
- type: nauc_recall_at_100_std
value: 57.59734216892526
- type: nauc_recall_at_10_diff1
value: 63.59382135270663
- type: nauc_recall_at_10_max
value: 57.66910071241303
- type: nauc_recall_at_10_std
value: 26.157501494756524
- type: nauc_recall_at_1_diff1
value: 73.66858140737193
- type: nauc_recall_at_1_max
value: 37.72209378868329
- type: nauc_recall_at_1_std
value: 6.058877150429456
- type: nauc_recall_at_20_diff1
value: 58.60290521588567
- type: nauc_recall_at_20_max
value: 59.907374912367516
- type: nauc_recall_at_20_std
value: 33.91543413688288
- type: nauc_recall_at_3_diff1
value: 68.02786476343704
- type: nauc_recall_at_3_max
value: 45.954227249575986
- type: nauc_recall_at_3_std
value: 10.821057815371045
- type: nauc_recall_at_5_diff1
value: 65.83241381992168
- type: nauc_recall_at_5_max
value: 50.03082691375689
- type: nauc_recall_at_5_std
value: 15.241601072308331
- type: ndcg_at_1
value: 65.389
- type: ndcg_at_10
value: 76.485
- type: ndcg_at_100
value: 78.47
- type: ndcg_at_1000
value: 78.875
- type: ndcg_at_20
value: 77.321
- type: ndcg_at_3
      value: 72.511
- type: ndcg_at_5
value: 74.48
- type: precision_at_1
value: 65.389
- type: precision_at_10
      value: 9.938
- type: precision_at_100
value: 1.098
- type: precision_at_1000
      value: 0.113
- type: precision_at_20
value: 5.167
- type: precision_at_3
value: 28.224
- type: precision_at_5
      value: 18.368
- type: recall_at_1
value: 62.437
- type: recall_at_10
      value: 87.501
- type: recall_at_100
value: 96.518
- type: recall_at_1000
value: 99.506
- type: recall_at_20
value: 90.606
- type: recall_at_3
      value: 77.353
- type: recall_at_5
value: 81.811
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions (default)
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cosine_accuracy
value: 99.86534653465347
- type: cosine_accuracy_threshold
value: 87.42109537124634
- type: cosine_ap
value: 96.62589824590614
- type: cosine_f1
value: 93.1726907630522
- type: cosine_f1_threshold
value: 87.26919889450073
- type: cosine_precision
value: 93.54838709677419
- type: cosine_recall
value: 92.80000000000001
- type: dot_accuracy
value: 99.86534653465347
- type: dot_accuracy_threshold
value: 174.84219074249268
- type: dot_ap
value: 96.62589584517633
- type: dot_f1
value: 93.1726907630522
- type: dot_f1_threshold
value: 174.53839778900146
- type: dot_precision
value: 93.54838709677419
- type: dot_recall
value: 92.80000000000001
- type: euclidean_accuracy
value: 99.86534653465347
- type: euclidean_accuracy_threshold
value: 70.93350887298584
- type: euclidean_ap
value: 96.62589584517633
- type: euclidean_f1
value: 93.1726907630522
- type: euclidean_f1_threshold
value: 71.36050462722778
- type: euclidean_precision
value: 93.54838709677419
- type: euclidean_recall
value: 92.80000000000001
- type: main_score
value: 96.62589824590614
- type: manhattan_accuracy
value: 99.86237623762376
- type: manhattan_accuracy_threshold
value: 1510.8413696289062
- type: manhattan_ap
value: 96.5990356446392
- type: manhattan_f1
value: 92.94057897409853
- type: manhattan_f1_threshold
value: 1530.6350708007812
- type: manhattan_precision
value: 94.42724458204334
- type: manhattan_recall
value: 91.5
- type: max_accuracy
value: 99.86534653465347
- type: max_ap
value: 96.62589824590614
- type: max_f1
value: 93.1726907630522
- type: max_precision
value: 94.42724458204334
- type: max_recall
value: 92.80000000000001
- type: similarity_accuracy
value: 99.86534653465347
- type: similarity_accuracy_threshold
value: 87.42109537124634
- type: similarity_ap
value: 96.62589824590614
- type: similarity_f1
value: 93.1726907630522
- type: similarity_f1_threshold
value: 87.26919889450073
- type: similarity_precision
value: 93.54838709677419
- type: similarity_recall
value: 92.80000000000001
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions (default)
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: validation
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cosine_accuracy
value: 99.84059405940594
- type: cosine_accuracy_threshold
value: 87.00189590454102
- type: cosine_ap
value: 96.73329740037518
- type: cosine_f1
value: 92.0493827160494
- type: cosine_f1_threshold
value: 86.32840514183044
- type: cosine_precision
value: 90.92682926829269
- type: cosine_recall
value: 93.2
- type: dot_accuracy
value: 99.84059405940594
- type: dot_accuracy_threshold
value: 174.00379180908203
- type: dot_ap
value: 96.73329740037518
- type: dot_f1
value: 92.0493827160494
- type: dot_f1_threshold
value: 172.65679836273193
- type: dot_precision
value: 90.92682926829269
- type: dot_recall
value: 93.2
- type: euclidean_accuracy
value: 99.84059405940594
- type: euclidean_accuracy_threshold
value: 72.1057653427124
- type: euclidean_ap
value: 96.73329740037518
- type: euclidean_f1
value: 92.0493827160494
- type: euclidean_f1_threshold
value: 73.95024299621582
- type: euclidean_precision
value: 90.92682926829269
- type: euclidean_recall
value: 93.2
- type: main_score
value: 96.75632821046904
- type: manhattan_accuracy
value: 99.84257425742574
- type: manhattan_accuracy_threshold
value: 1568.2302474975586
- type: manhattan_ap
value: 96.75632821046904
- type: manhattan_f1
value: 91.98229217904576
- type: manhattan_f1_threshold
value: 1615.3419494628906
- type: manhattan_precision
value: 90.51306873184899
- type: manhattan_recall
value: 93.5
- type: max_accuracy
value: 99.84257425742574
- type: max_ap
value: 96.75632821046904
- type: max_f1
value: 92.0493827160494
- type: max_precision
value: 90.92682926829269
- type: max_recall
value: 93.5
- type: similarity_accuracy
value: 99.84059405940594
- type: similarity_accuracy_threshold
value: 87.00189590454102
- type: similarity_ap
value: 96.73329740037518
- type: similarity_f1
value: 92.0493827160494
- type: similarity_f1_threshold
value: 86.32840514183044
- type: similarity_precision
value: 90.92682926829269
- type: similarity_recall
value: 93.2
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering (default)
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: main_score
value: 62.86417213636288
- type: v_measure
value: 62.86417213636288
- type: v_measure_std
value: 4.26517499407962
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P (default)
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: main_score
value: 34.49041102464247
- type: v_measure
value: 34.49041102464247
- type: v_measure_std
value: 1.5008009956624384
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions (default)
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: main_score
value: 53.56080676583018
- type: map
value: 53.56080676583018
- type: mrr
value: 54.574155419743654
- type: nAUC_map_diff1
value: 38.68676474201522
- type: nAUC_map_max
value: 13.538152012883021
- type: nAUC_map_std
value: 8.406474170060411
- type: nAUC_mrr_diff1
value: 38.27738033427405
- type: nAUC_mrr_max
value: 14.313023512543305
- type: nAUC_mrr_std
value: 9.482773181023104
- task:
type: Summarization
dataset:
name: MTEB SummEval (default)
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cosine_pearson
value: 30.78001838626019
- type: cosine_spearman
value: 30.06796987185709
- type: dot_pearson
value: 30.78001527942813
- type: dot_spearman
value: 30.063748343894524
- type: main_score
value: 30.06796987185709
- type: pearson
value: 30.78001838626019
- type: spearman
value: 30.06796987185709
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID (default)
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: main_score
value: 67.647
- type: map_at_1
value: 0.192
- type: map_at_10
value: 1.678
- type: map_at_100
value: 9.362
- type: map_at_1000
value: 23.655
- type: map_at_20
value: 2.9770000000000003
- type: map_at_3
value: 0.5559999999999999
- type: map_at_5
value: 0.91
- type: mrr_at_1
value: 74.0
- type: mrr_at_10
value: 84.07460317460317
- type: mrr_at_100
value: 84.07460317460317
- type: mrr_at_1000
value: 84.07460317460317
- type: mrr_at_20
value: 84.07460317460317
- type: mrr_at_3
value: 82.66666666666667
- type: mrr_at_5
value: 83.56666666666666
- type: nauc_map_at_1000_diff1
value: -1.1098863157023129
- type: nauc_map_at_1000_max
value: 48.42614641797866
- type: nauc_map_at_1000_std
value: 74.41245113222085
- type: nauc_map_at_100_diff1
value: 11.810610128543328
- type: nauc_map_at_100_max
value: 45.54012493749073
- type: nauc_map_at_100_std
value: 55.216633821552705
- type: nauc_map_at_10_diff1
value: 14.706640439003039
- type: nauc_map_at_10_max
value: 27.009359341915196
- type: nauc_map_at_10_std
value: 12.580248102741749
- type: nauc_map_at_1_diff1
value: 18.849085390343852
- type: nauc_map_at_1_max
value: 22.147807794097968
- type: nauc_map_at_1_std
value: 8.226943075643607
- type: nauc_map_at_20_diff1
value: 11.262073635376204
- type: nauc_map_at_20_max
value: 29.788086926929736
- type: nauc_map_at_20_std
value: 18.831720483995586
- type: nauc_map_at_3_diff1
value: 10.591785709689498
- type: nauc_map_at_3_max
value: 19.999731080136282
- type: nauc_map_at_3_std
value: 8.283426397720875
- type: nauc_map_at_5_diff1
value: 13.438781723934065
- type: nauc_map_at_5_max
value: 23.771513155136258
- type: nauc_map_at_5_std
value: 11.39274226111021
- type: nauc_mrr_at_1000_diff1
value: -7.700192607976083
- type: nauc_mrr_at_1000_max
value: 58.06999088899807
- type: nauc_mrr_at_1000_std
value: 55.498437292780146
- type: nauc_mrr_at_100_diff1
value: -7.700192607976083
- type: nauc_mrr_at_100_max
value: 58.06999088899807
- type: nauc_mrr_at_100_std
value: 55.498437292780146
- type: nauc_mrr_at_10_diff1
value: -7.700192607976083
- type: nauc_mrr_at_10_max
value: 58.06999088899807
- type: nauc_mrr_at_10_std
value: 55.498437292780146
- type: nauc_mrr_at_1_diff1
value: 5.8837841125663815
- type: nauc_mrr_at_1_max
value: 57.15502209967149
- type: nauc_mrr_at_1_std
value: 53.94752848627391
- type: nauc_mrr_at_20_diff1
value: -7.700192607976083
- type: nauc_mrr_at_20_max
value: 58.06999088899807
- type: nauc_mrr_at_20_std
value: 55.498437292780146
- type: nauc_mrr_at_3_diff1
value: -9.92939143073178
- type: nauc_mrr_at_3_max
value: 59.45162969291663
- type: nauc_mrr_at_3_std
value: 56.1706390124621
- type: nauc_mrr_at_5_diff1
value: -10.352998992145512
- type: nauc_mrr_at_5_max
value: 57.75340046811098
- type: nauc_mrr_at_5_std
value: 55.20409055698553
- type: nauc_ndcg_at_1000_diff1
value: 1.3886748577056562
- type: nauc_ndcg_at_1000_max
value: 45.41548784147603
- type: nauc_ndcg_at_1000_std
value: 69.30388009311415
- type: nauc_ndcg_at_100_diff1
value: 0.4312493032685509
- type: nauc_ndcg_at_100_max
value: 45.2714318986444
- type: nauc_ndcg_at_100_std
value: 73.00621559010233
- type: nauc_ndcg_at_10_diff1
value: -8.161869949696207
- type: nauc_ndcg_at_10_max
value: 44.11896916629053
- type: nauc_ndcg_at_10_std
value: 56.814234959093845
- type: nauc_ndcg_at_1_diff1
value: 6.616052060737538
- type: nauc_ndcg_at_1_max
value: 36.42502274158562
- type: nauc_ndcg_at_1_std
value: 50.542299349240814
- type: nauc_ndcg_at_20_diff1
value: -10.435311528666439
- type: nauc_ndcg_at_20_max
value: 40.17434792292328
- type: nauc_ndcg_at_20_std
value: 58.15224379743095
- type: nauc_ndcg_at_3_diff1
value: -8.681207709572954
- type: nauc_ndcg_at_3_max
value: 42.74075328162171
- type: nauc_ndcg_at_3_std
value: 52.252671273049266
- type: nauc_ndcg_at_5_diff1
value: -10.760752447599076
- type: nauc_ndcg_at_5_max
value: 45.30740412496431
- type: nauc_ndcg_at_5_std
value: 58.2580376338619
- type: nauc_precision_at_1000_diff1
value: -17.63226899483217
- type: nauc_precision_at_1000_max
value: 34.52005297813093
- type: nauc_precision_at_1000_std
value: 58.08147588022084
- type: nauc_precision_at_100_diff1
value: -1.6026030210260822
- type: nauc_precision_at_100_max
value: 46.64263585966634
- type: nauc_precision_at_100_std
value: 73.31086780620457
- type: nauc_precision_at_10_diff1
value: -7.9280637963804645
- type: nauc_precision_at_10_max
value: 49.556338319147216
- type: nauc_precision_at_10_std
value: 53.013180728865436
- type: nauc_precision_at_1_diff1
value: 5.8837841125663815
- type: nauc_precision_at_1_max
value: 57.15502209967149
- type: nauc_precision_at_1_std
value: 53.94752848627391
- type: nauc_precision_at_20_diff1
value: -12.831360657341229
- type: nauc_precision_at_20_max
value: 41.99544599867401
- type: nauc_precision_at_20_std
value: 55.9674807351615
- type: nauc_precision_at_3_diff1
value: -16.796733759190047
- type: nauc_precision_at_3_max
value: 46.054296733759244
- type: nauc_precision_at_3_std
value: 49.79209352778117
- type: nauc_precision_at_5_diff1
value: -15.871656413523668
- type: nauc_precision_at_5_max
value: 52.700458519160634
- type: nauc_precision_at_5_std
value: 57.1342030226753
- type: nauc_recall_at_1000_diff1
value: -0.8671141204967768
- type: nauc_recall_at_1000_max
value: 39.213878637976336
- type: nauc_recall_at_1000_std
value: 57.52498157883819
- type: nauc_recall_at_100_diff1
value: 13.71539607682519
- type: nauc_recall_at_100_max
value: 34.398894963723606
- type: nauc_recall_at_100_std
value: 38.15917686169161
- type: nauc_recall_at_10_diff1
value: 14.494212708704499
- type: nauc_recall_at_10_max
value: 19.727092277749353
- type: nauc_recall_at_10_std
value: 5.450799655125242
- type: nauc_recall_at_1_diff1
value: 18.849085390343852
- type: nauc_recall_at_1_max
value: 22.147807794097968
- type: nauc_recall_at_1_std
value: 8.226943075643607
- type: nauc_recall_at_20_diff1
value: 9.720808675841297
- type: nauc_recall_at_20_max
value: 19.01250825066701
- type: nauc_recall_at_20_std
value: 7.725806970920393
- type: nauc_recall_at_3_diff1
value: 7.127984664955145
- type: nauc_recall_at_3_max
value: 15.524675558616
- type: nauc_recall_at_3_std
value: 3.6362524696759704
- type: nauc_recall_at_5_diff1
value: 9.773840859655213
- type: nauc_recall_at_5_max
value: 17.976145684131815
- type: nauc_recall_at_5_std
value: 5.433479951527897
- type: ndcg_at_1
value: 68.0
- type: ndcg_at_10
value: 67.647
- type: ndcg_at_100
value: 52.568000000000005
- type: ndcg_at_1000
value: 48.936
- type: ndcg_at_20
value: 65.23
- type: ndcg_at_3
value: 68.073
- type: ndcg_at_5
value: 70.15400000000001
- type: precision_at_1
value: 74.0
- type: precision_at_10
value: 72.8
- type: precision_at_100
value: 54.32
- type: precision_at_1000
value: 21.788
- type: precision_at_20
value: 69.6
- type: precision_at_3
value: 73.333
- type: precision_at_5
value: 76.4
- type: recall_at_1
value: 0.192
- type: recall_at_10
value: 1.926
- type: recall_at_100
value: 12.967
- type: recall_at_1000
value: 46.414
- type: recall_at_20
value: 3.6020000000000003
- type: recall_at_3
value: 0.5950000000000001
- type: recall_at_5
value: 1.008
- task:
type: Retrieval
dataset:
name: MTEB Touche2020 (default)
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: main_score
value: 21.418
- type: map_at_1
value: 2.103
- type: map_at_10
value: 8.562
- type: map_at_100
value: 14.488999999999999
- type: map_at_1000
value: 15.966
- type: map_at_20
value: 10.862
- type: map_at_3
value: 4.922
- type: map_at_5
value: 5.771
- type: mrr_at_1
value: 26.53061224489796
- type: mrr_at_10
value: 41.18561710398445
- type: mrr_at_100
value: 42.23103797071357
- type: mrr_at_1000
value: 42.23103797071357
- type: mrr_at_20
value: 41.99007625028033
- type: mrr_at_3
value: 37.414965986394556
- type: mrr_at_5
value: 38.843537414965986
- type: nauc_map_at_1000_diff1
value: -19.351530359917472
- type: nauc_map_at_1000_max
value: -24.507017422899555
- type: nauc_map_at_1000_std
value: 4.42198895480301
- type: nauc_map_at_100_diff1
value: -18.850998775966502
- type: nauc_map_at_100_max
value: -24.362263151537178
- type: nauc_map_at_100_std
value: -0.5368896293104107
- type: nauc_map_at_10_diff1
value: -26.175172027837572
- type: nauc_map_at_10_max
value: -28.43026690361613
- type: nauc_map_at_10_std
value: -11.91118017447121
- type: nauc_map_at_1_diff1
value: -20.487774952653478
- type: nauc_map_at_1_max
value: -27.48651530838805
- type: nauc_map_at_1_std
value: -14.850754695409051
- type: nauc_map_at_20_diff1
value: -18.37972840319288
- type: nauc_map_at_20_max
value: -25.41263310281494
- type: nauc_map_at_20_std
value: -14.407088455977735
- type: nauc_map_at_3_diff1
value: -18.703214405275403
- type: nauc_map_at_3_max
value: -30.055160976096566
- type: nauc_map_at_3_std
value: -13.900503759042953
- type: nauc_map_at_5_diff1
value: -24.059397135404634
- type: nauc_map_at_5_max
value: -31.419338369603224
- type: nauc_map_at_5_std
value: -14.081937493608754
- type: nauc_mrr_at_1000_diff1
value: -19.253002712346046
- type: nauc_mrr_at_1000_max
value: -41.229418795229115
- type: nauc_mrr_at_1000_std
value: -8.194012889395678
- type: nauc_mrr_at_100_diff1
value: -19.253002712346046
- type: nauc_mrr_at_100_max
value: -41.229418795229115
- type: nauc_mrr_at_100_std
value: -8.194012889395678
- type: nauc_mrr_at_10_diff1
value: -20.181934921966565
- type: nauc_mrr_at_10_max
value: -41.52122062813685
- type: nauc_mrr_at_10_std
value: -8.517353777317007
- type: nauc_mrr_at_1_diff1
value: -14.93163259442193
- type: nauc_mrr_at_1_max
value: -28.271418042405944
- type: nauc_mrr_at_1_std
value: -8.483373359409944
- type: nauc_mrr_at_20_diff1
value: -18.71853810624142
- type: nauc_mrr_at_20_max
value: -41.643831727769445
- type: nauc_mrr_at_20_std
value: -7.994451407552443
- type: nauc_mrr_at_3_diff1
value: -20.10234227022684
- type: nauc_mrr_at_3_max
value: -41.481700697105964
- type: nauc_mrr_at_3_std
value: -10.1964348120824
- type: nauc_mrr_at_5_diff1
value: -22.378294273446333
- type: nauc_mrr_at_5_max
value: -42.399016719230765
- type: nauc_mrr_at_5_std
value: -9.64387750117273
- type: nauc_ndcg_at_1000_diff1
value: -20.46602931039575
- type: nauc_ndcg_at_1000_max
value: -29.51388332800775
- type: nauc_ndcg_at_1000_std
value: 31.97369133164661
- type: nauc_ndcg_at_100_diff1
value: -20.175041013877696
- type: nauc_ndcg_at_100_max
value: -36.286223372960904
- type: nauc_ndcg_at_100_std
value: 18.299490153297647
- type: nauc_ndcg_at_10_diff1
value: -24.02376235671449
- type: nauc_ndcg_at_10_max
value: -35.828837894026115
- type: nauc_ndcg_at_10_std
value: -4.55634576055288
- type: nauc_ndcg_at_1_diff1
value: -17.757931878350778
- type: nauc_ndcg_at_1_max
value: -28.882474074071734
- type: nauc_ndcg_at_1_std
value: -2.7961892265024058
- type: nauc_ndcg_at_20_diff1
value: -14.096289255526088
- type: nauc_ndcg_at_20_max
value: -34.41866397107521
- type: nauc_ndcg_at_20_std
value: -11.18648918839053
- type: nauc_ndcg_at_3_diff1
value: -17.842815168108977
- type: nauc_ndcg_at_3_max
value: -38.621085644925195
- type: nauc_ndcg_at_3_std
value: -3.8881155902480544
- type: nauc_ndcg_at_5_diff1
value: -25.284864428560216
- type: nauc_ndcg_at_5_max
value: -38.237375550106336
- type: nauc_ndcg_at_5_std
value: -4.911301062236913
- type: nauc_precision_at_1000_diff1
value: 8.807028730460438
- type: nauc_precision_at_1000_max
value: 37.1813389306755
- type: nauc_precision_at_1000_std
value: 50.105218819171206
- type: nauc_precision_at_100_diff1
value: -11.451048560927282
- type: nauc_precision_at_100_max
value: -18.494269088171183
- type: nauc_precision_at_100_std
value: 61.93415991533763
- type: nauc_precision_at_10_diff1
value: -20.970160893678482
- type: nauc_precision_at_10_max
value: -33.05509638276482
- type: nauc_precision_at_10_std
value: -2.7821722336643617
- type: nauc_precision_at_1_diff1
value: -14.93163259442193
- type: nauc_precision_at_1_max
value: -28.271418042405944
- type: nauc_precision_at_1_std
value: -8.483373359409944
- type: nauc_precision_at_20_diff1
value: -0.4353890707357106
- type: nauc_precision_at_20_max
value: -24.429655423454683
- type: nauc_precision_at_20_std
value: -3.4047901749399783
- type: nauc_precision_at_3_diff1
value: -12.694326928341601
- type: nauc_precision_at_3_max
value: -39.390569182573856
- type: nauc_precision_at_3_std
value: -8.824591692422153
- type: nauc_precision_at_5_diff1
value: -24.96294483100504
- type: nauc_precision_at_5_max
value: -39.857140509888495
- type: nauc_precision_at_5_std
value: -8.166429095904583
- type: nauc_recall_at_1000_diff1
value: -19.26364200094709
- type: nauc_recall_at_1000_max
value: -13.410785253002475
- type: nauc_recall_at_1000_std
value: 79.88850839052468
- type: nauc_recall_at_100_diff1
value: -24.001850273182704
- type: nauc_recall_at_100_max
value: -35.88316231220963
- type: nauc_recall_at_100_std
value: 31.815424604607674
- type: nauc_recall_at_10_diff1
value: -26.40817358742444
- type: nauc_recall_at_10_max
value: -33.727382714288446
- type: nauc_recall_at_10_std
value: -14.552547474689526
- type: nauc_recall_at_1_diff1
value: -20.487774952653478
- type: nauc_recall_at_1_max
value: -27.48651530838805
- type: nauc_recall_at_1_std
value: -14.850754695409051
- type: nauc_recall_at_20_diff1
value: -12.188115473749033
- type: nauc_recall_at_20_max
value: -32.11814820672923
- type: nauc_recall_at_20_std
value: -17.398182571029892
- type: nauc_recall_at_3_diff1
value: -17.529776818775066
- type: nauc_recall_at_3_max
value: -39.34912622762624
- type: nauc_recall_at_3_std
value: -17.868268060845814
- type: nauc_recall_at_5_diff1
value: -27.32652911017479
- type: nauc_recall_at_5_max
value: -39.898687035007576
- type: nauc_recall_at_5_std
value: -16.732887465142213
- type: ndcg_at_1
value: 23.469
- type: ndcg_at_10
value: 21.418
- type: ndcg_at_100
value: 34.251
- type: ndcg_at_1000
value: 45.371
- type: ndcg_at_20
value: 23.238
- type: ndcg_at_3
value: 23.886
- type: ndcg_at_5
value: 21.11
- type: precision_at_1
value: 26.531
- type: precision_at_10
value: 20.0
- type: precision_at_100
value: 7.388
- type: precision_at_1000
value: 1.488
- type: precision_at_20
value: 16.02
- type: precision_at_3
value: 26.531
- type: precision_at_5
value: 21.633
- type: recall_at_1
value: 2.103
- type: recall_at_10
value: 14.81
- type: recall_at_100
value: 46.622
- type: recall_at_1000
value: 80.69800000000001
- type: recall_at_20
value: 22.861
- type: recall_at_3
value: 6.399000000000001
- type: recall_at_5
value: 8.23
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification (default)
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 65.7666015625
- type: ap
value: 11.843884703213776
- type: ap_weighted
value: 11.843884703213776
- type: f1
value: 50.277795864693054
- type: f1_weighted
value: 73.18095534864581
- type: main_score
value: 65.7666015625
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification (default)
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 63.28522920203735
- type: f1
value: 63.3721509844546
- type: f1_weighted
value: 62.21321405962959
- type: main_score
value: 63.28522920203735
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering (default)
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: main_score
value: 48.905527607381
- type: v_measure
value: 48.905527607381
- type: v_measure_std
value: 1.6720470024352694
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015 (default)
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cosine_accuracy
value: 86.44572927221792
- type: cosine_accuracy_threshold
value: 87.59676218032837
- type: cosine_ap
value: 75.63030766992325
- type: cosine_f1
value: 69.44552655499287
- type: cosine_f1_threshold
value: 86.22946739196777
- type: cosine_precision
value: 68.37126054717464
- type: cosine_recall
value: 70.55408970976254
- type: dot_accuracy
value: 86.44572927221792
- type: dot_accuracy_threshold
value: 175.19352436065674
- type: dot_ap
value: 75.63032114657348
- type: dot_f1
value: 69.44552655499287
- type: dot_f1_threshold
value: 172.45893478393555
- type: dot_precision
value: 68.37126054717464
- type: dot_recall
value: 70.55408970976254
- type: euclidean_accuracy
value: 86.44572927221792
- type: euclidean_accuracy_threshold
value: 70.43645977973938
- type: euclidean_ap
value: 75.6303708231987
- type: euclidean_f1
value: 69.44552655499287
- type: euclidean_f1_threshold
value: 74.21733140945435
- type: euclidean_precision
value: 68.37126054717464
- type: euclidean_recall
value: 70.55408970976254
- type: main_score
value: 75.7377286185127
- type: manhattan_accuracy
value: 86.55897955534363
- type: manhattan_accuracy_threshold
value: 1556.027603149414
- type: manhattan_ap
value: 75.7377286185127
- type: manhattan_f1
value: 69.66236955187233
- type: manhattan_f1_threshold
value: 1654.3787002563477
- type: manhattan_precision
value: 65.1435132032147
- type: manhattan_recall
value: 74.85488126649076
- type: max_accuracy
value: 86.55897955534363
- type: max_ap
value: 75.7377286185127
- type: max_f1
value: 69.66236955187233
- type: max_precision
value: 68.37126054717464
- type: max_recall
value: 74.85488126649076
- type: similarity_accuracy
value: 86.44572927221792
- type: similarity_accuracy_threshold
value: 87.59676218032837
- type: similarity_ap
value: 75.63030766992325
- type: similarity_f1
value: 69.44552655499287
- type: similarity_f1_threshold
value: 86.22946739196777
- type: similarity_precision
value: 68.37126054717464
- type: similarity_recall
value: 70.55408970976254
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus (default)
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cosine_accuracy
value: 88.97038848139093
- type: cosine_accuracy_threshold
value: 83.80404710769653
- type: cosine_ap
value: 85.91804313814289
- type: cosine_f1
value: 78.21080602302922
- type: cosine_f1_threshold
value: 82.21379518508911
- type: cosine_precision
value: 75.10632265381344
- type: cosine_recall
value: 81.5829996920234
- type: dot_accuracy
value: 88.97038848139093
- type: dot_accuracy_threshold
value: 167.60810613632202
- type: dot_ap
value: 85.91804480684166
- type: dot_f1
value: 78.21080602302922
- type: dot_f1_threshold
value: 164.42759037017822
- type: dot_precision
value: 75.10632265381344
- type: dot_recall
value: 81.5829996920234
- type: euclidean_accuracy
value: 88.97038848139093
- type: euclidean_accuracy_threshold
value: 80.48837184906006
- type: euclidean_ap
value: 85.91804603491305
- type: euclidean_f1
value: 78.21080602302922
- type: euclidean_f1_threshold
value: 84.34738516807556
- type: euclidean_precision
value: 75.10632265381344
- type: euclidean_recall
value: 81.5829996920234
- type: main_score
value: 86.05361584367344
- type: manhattan_accuracy
value: 89.02472154305894
- type: manhattan_accuracy_threshold
value: 1732.803726196289
- type: manhattan_ap
value: 86.05361584367344
- type: manhattan_f1
value: 78.20484500404977
- type: manhattan_f1_threshold
value: 1832.084083557129
- type: manhattan_precision
value: 74.93297587131367
- type: manhattan_recall
value: 81.7754850631352
- type: max_accuracy
value: 89.02472154305894
- type: max_ap
value: 86.05361584367344
- type: max_f1
value: 78.21080602302922
- type: max_precision
value: 75.10632265381344
- type: max_recall
value: 81.7754850631352
- type: similarity_accuracy
value: 88.97038848139093
- type: similarity_accuracy_threshold
value: 83.80404710769653
- type: similarity_ap
value: 85.91804313814289
- type: similarity_f1
value: 78.21080602302922
- type: similarity_f1_threshold
value: 82.21379518508911
- type: similarity_precision
value: 75.10632265381344
- type: similarity_recall
value: 81.5829996920234
---
Add stuff later! | [
"SUMMARIZATION"
] | Non_BioNLP |
mspy/twitter-paraphrase-embeddings | mspy | sentence-similarity | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:13063",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,722,169,618,000 | 2024-07-28T12:29:07 | 5 | 0 | ---
base_model: sentence-transformers/all-mpnet-base-v2
datasets: []
language: []
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:13063
- loss:CosineSimilarityLoss
widget:
- source_sentence: I cant wait to leave Chicago
sentences:
- This is the shit Chicago needs to be recognized for not Keef
- is candice singing again tonight
- half time Chelsea were losing 10
- source_sentence: Andre miller best lobbing pg in the game
sentences:
- Am I the only one who dont get Amber alert
- Backstrom hurt in warmup Harding could start
- Andre miller is even slower in person
- source_sentence: Bayless couldve dunked that from the free throw
sentences:
- but what great finger roll by Bayless
- Wow Bayless has to make EspnSCTop with that end of 3rd
- i mean calum u didnt follow
- source_sentence: Backstrom Hurt in warmups Harding gets the start
sentences:
- Should I go to Nashville or Chicago for my 17th birthday
- I hate Chelsea possibly more than most
- Of course Backstrom would get injured during warmups
- source_sentence: Calum I love you plz follow me
sentences:
- CALUM PLEASE BE MY FIRST CELEBRITY TO FOLLOW ME
- Walking around downtown Chicago in a dress and listening to the new Iggy Pop
- I think Candice has what it takes to win American Idol AND Angie too
model-index:
- name: SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: Unknown
type: unknown
metrics:
- type: pearson_cosine
value: 0.6949485250178733
name: Pearson Cosine
- type: spearman_cosine
value: 0.6626359968437283
name: Spearman Cosine
- type: pearson_manhattan
value: 0.688092975176289
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6630998028133662
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6880277270034267
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6626358741747785
name: Spearman Euclidean
- type: pearson_dot
value: 0.694948520847878
name: Pearson Dot
- type: spearman_dot
value: 0.6626359082695851
name: Spearman Dot
- type: pearson_max
value: 0.6949485250178733
name: Pearson Max
- type: spearman_max
value: 0.6630998028133662
name: Spearman Max
---
# SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 84f2bcc00d77236f9e89c8a360a00fb1139bf47d -->
- **Maximum Sequence Length:** 384 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("mspy/twitter-paraphrase-embeddings")
# Run inference
sentences = [
'Calum I love you plz follow me',
'CALUM PLEASE BE MY FIRST CELEBRITY TO FOLLOW ME',
'Walking around downtown Chicago in a dress and listening to the new Iggy Pop',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6949 |
| **spearman_cosine** | **0.6626** |
| pearson_manhattan | 0.6881 |
| spearman_manhattan | 0.6631 |
| pearson_euclidean | 0.688 |
| spearman_euclidean | 0.6626 |
| pearson_dot | 0.6949 |
| spearman_dot | 0.6626 |
| pearson_max | 0.6949 |
| spearman_max | 0.6631 |
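For reference, these correlations can be reproduced with the same evaluator. The snippet below is a minimal sketch that scores the model on a hypothetical labeled dev split; the sentence pair and gold score are borrowed from the dataset samples shown later in this card:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# Hypothetical dev split: paired sentences with gold similarity labels in [0, 1].
sentences1 = ["EJ Manuel the 1st QB to go in this draft"]
sentences2 = ["Can believe EJ Manuel went as the 1st QB in the draft"]
gold_scores = [1.0]

model = SentenceTransformer("mspy/twitter-paraphrase-embeddings")
evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores)
results = evaluator(model)  # dict of Pearson/Spearman values per similarity function
print(results)
```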
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 13,063 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 7 tokens</li><li>mean: 11.16 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 12.31 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.33</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:------------------------------------------------------|:-------------------------------------------------------------------|:-----------------|
| <code>EJ Manuel the 1st QB to go in this draft</code> | <code>But my bro from the 757 EJ Manuel is the 1st QB gone</code> | <code>1.0</code> |
| <code>EJ Manuel the 1st QB to go in this draft</code> | <code>Can believe EJ Manuel went as the 1st QB in the draft</code> | <code>1.0</code> |
| <code>EJ Manuel the 1st QB to go in this draft</code> | <code>EJ MANUEL IS THE 1ST QB what</code> | <code>0.6</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
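For illustration, a minimal fine-tuning sketch with this loss is shown below. It mirrors the batch size, learning rate, and epoch count listed under Training Hyperparameters; the sample pair comes from the table above, and the warmup step count is illustrative (the actual run used `warmup_ratio=0.1`), so treat this as a sketch rather than the exact training script:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
train_examples = [
    InputExample(
        texts=["EJ Manuel the 1st QB to go in this draft",
               "EJ MANUEL IS THE 1ST QB what"],
        label=0.6,
    ),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)
train_loss = losses.CosineSimilarityLoss(model)  # MSE between cosine score and label

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    warmup_steps=100,  # illustrative; the reported run used warmup_ratio=0.1
    optimizer_params={"lr": 2e-5},
)
```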
### Evaluation Dataset
#### Unnamed Dataset
* Size: 4,727 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 7 tokens</li><li>mean: 10.04 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 12.22 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.33</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:---------------------------------------------------------------|:------------------------------------------------------------------|:-----------------|
| <code>A Walk to Remember is the definition of true love</code> | <code>A Walk to Remember is on and Im in town and Im upset</code> | <code>0.2</code> |
| <code>A Walk to Remember is the definition of true love</code> | <code>A Walk to Remember is the cutest thing</code> | <code>0.6</code> |
| <code>A Walk to Remember is the definition of true love</code> | <code>A walk to remember is on ABC family youre welcome</code> | <code>0.2</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `gradient_accumulation_steps`: 2
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | spearman_cosine |
|:------:|:----:|:-------------:|:------:|:---------------:|
| 0.1225 | 100 | - | 0.0729 | 0.6058 |
| 0.2449 | 200 | - | 0.0646 | 0.6340 |
| 0.3674 | 300 | - | 0.0627 | 0.6397 |
| 0.4899 | 400 | - | 0.0621 | 0.6472 |
| 0.6124 | 500 | 0.0627 | 0.0626 | 0.6496 |
| 0.7348 | 600 | - | 0.0621 | 0.6446 |
| 0.8573 | 700 | - | 0.0593 | 0.6695 |
| 0.9798 | 800 | - | 0.0636 | 0.6440 |
| 1.1023 | 900 | - | 0.0618 | 0.6525 |
| 1.2247 | 1000 | 0.0383 | 0.0604 | 0.6639 |
| 1.3472 | 1100 | - | 0.0608 | 0.6590 |
| 1.4697 | 1200 | - | 0.0620 | 0.6504 |
| 1.5922 | 1300 | - | 0.0617 | 0.6467 |
| 1.7146 | 1400 | - | 0.0615 | 0.6574 |
| 1.8371 | 1500 | 0.0293 | 0.0622 | 0.6536 |
| 1.9596 | 1600 | - | 0.0609 | 0.6599 |
| 2.0821 | 1700 | - | 0.0605 | 0.6658 |
| 2.2045 | 1800 | - | 0.0615 | 0.6588 |
| 2.3270 | 1900 | - | 0.0615 | 0.6575 |
| 2.4495 | 2000 | 0.0215 | 0.0614 | 0.6598 |
| 2.5720 | 2100 | - | 0.0603 | 0.6681 |
| 2.6944 | 2200 | - | 0.0606 | 0.6669 |
| 2.8169 | 2300 | - | 0.0605 | 0.6642 |
| 2.9394 | 2400 | - | 0.0606 | 0.6630 |
| 3.0618 | 2500 | 0.018 | 0.0611 | 0.6616 |
| 3.1843 | 2600 | - | 0.0611 | 0.6619 |
| 3.3068 | 2700 | - | 0.0611 | 0.6608 |
| 3.4293 | 2800 | - | 0.0608 | 0.6632 |
| 3.5517 | 2900 | - | 0.0608 | 0.6623 |
| 3.6742 | 3000 | 0.014 | 0.0615 | 0.6596 |
| 3.7967 | 3100 | - | 0.0612 | 0.6616 |
| 3.9192 | 3200 | - | 0.0610 | 0.6626 |
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.0.1
- Transformers: 4.43.3
- PyTorch: 2.4.0+cu121
- Accelerate: 0.33.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | Non_BioNLP |
aroot/eng-mya-wsample.43a | aroot | translation | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,688,616,372,000 | 2023-07-06T04:28:08 | 12 | 0 | ---
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: eng-mya-wsample.43a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-wsample.43a
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8306
- Bleu: 4.6779
## Model description
More information needed
## Intended uses & limitations
More information needed
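As a rough starting point while this card is incomplete, here is a minimal inference sketch. It assumes the fine-tune keeps the standard MBart-50 tokenizer and its `en_XX` / `my_MM` (English / Burmese) language codes; verify those assumptions before relying on it:
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("aroot/eng-mya-wsample.43a")
tokenizer = MBart50TokenizerFast.from_pretrained("aroot/eng-mya-wsample.43a")

tokenizer.src_lang = "en_XX"  # English source
inputs = tokenizer("How are you?", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["my_MM"],  # force Burmese output
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```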
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| [
"TRANSLATION"
] | Non_BioNLP |
cerebras/Cerebras-GPT-13B | cerebras | text-generation | [
"transformers",
"pytorch",
"gpt2",
"feature-extraction",
"causal-lm",
"text-generation",
"en",
"dataset:the_pile",
"arxiv:2304.03208",
"arxiv:2203.15556",
"arxiv:2101.00027",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | 1,679,345,154,000 | 2023-11-22T21:49:12 | 2,440 | 647 | ---
datasets:
- the_pile
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- pytorch
- causal-lm
inference: false
---
# Cerebras-GPT 13B
Check out our [Blog Post](https://www.cerebras.net/cerebras-gpt) and [arXiv paper](https://arxiv.org/abs/2304.03208)!
## Model Description
The Cerebras-GPT family is released to facilitate research into LLM scaling laws using open architectures and data sets, and to demonstrate the simplicity and scalability of training LLMs on the Cerebras software and hardware stack. All Cerebras-GPT models are available on Hugging Face.
The family includes 111M, 256M, 590M, 1.3B, 2.7B, 6.7B, and 13B models.
All models in the Cerebras-GPT family have been trained in accordance with [Chinchilla scaling laws](https://arxiv.org/abs/2203.15556) (20 tokens per model parameter) which is compute-optimal. For example, the 13B model is trained on roughly 13B × 20 ≈ 260B tokens, matching the 2.57E+11 tokens shown in the training table below.
These models were trained on the [Andromeda](https://www.cerebras.net/andromeda/) AI supercomputer comprised of 16 CS-2 wafer scale systems. Cerebras' [weight streaming technology](https://www.cerebras.net/blog/linear-scaling-made-possible-with-weight-streaming) simplifies the training of LLMs by disaggregating compute from model storage. This allowed for efficient scaling of training across nodes using simple data parallelism.
Cerebras systems for pre-training and fine tuning are available in the cloud via the [Cerebras Model Studio](https://www.cerebras.net/product-cloud/). Cerebras CS-2 compatible checkpoints are available in [Cerebras Model Zoo](https://github.com/Cerebras/modelzoo).
## Model Details
* Developed by: [Cerebras Systems](https://www.cerebras.net/)
* License: Apache 2.0
* Model type: Transformer-based Language Model
* Architecture: GPT-3 style architecture
* Data set: The Pile
* Tokenizer: Byte Pair Encoding
* Vocabulary Size: 50257
* Sequence Length: 2048
* Optimizer: AdamW, (β1, β2) = (0.9, 0.95), adam_eps = 1e-8 (1e-9 for larger models)
* Positional Encoding: Learned
* Language: English
* Learn more: Dense Scaling Laws Paper for training procedure, config files, and details on how to use.
**Contact**: To ask questions about Cerebras-GPT models, join the [Cerebras Discord](https://discord.gg/q6bZcMWJVu).
This is the standard parameterization version of Cerebras-GPT with **13B** parameters
Related models: [Cerebras-GPT Models](https://huggingface.co/models?sort=downloads&search=cerebras-gpt)
<br><br>
| Model | Parameters | Layers | d_model | Heads | d_head | d_ffn | LR | BS (seq) | BS (tokens) |
|---------------|------------|--------|---------|-------|--------|--------|----------|----------|----------------|
| Cerebras-GPT | 111M | 10 | 768 | 12 | 64 | 3072 | 6.0E-04 | 120 | 246K |
| Cerebras-GPT | 256M | 14 | 1088 | 17 | 64 | 4352 | 6.0E-04 | 264 | 541K |
| Cerebras-GPT | 590M | 18 | 1536 | 12 | 128 | 6144 | 2.0E-04 | 264 | 541K |
| Cerebras-GPT | 1.3B | 24 | 2048 | 16 | 128 | 8192 | 2.0E-04 | 528 | 1.08M |
| Cerebras-GPT | 2.7B | 32 | 2560 | 32 | 80 | 10240 | 2.0E-04 | 528 | 1.08M |
| Cerebras-GPT | 6.7B | 32 | 4096 | 32 | 128 | 16384 | 1.2E-04 | 1040 | 2.13M |
| Cerebras-GPT | 13B | 40 | 5120 | 40 | 128 | 20480 | 1.2E-04 | 720 → 1080 | 1.47M → 2.21M |
<br><br>
## Quickstart
This model can be easily loaded using the AutoModelForCausalLM functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-13B")
model = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-13B")
text = "Generative AI is "
```
And can be used with Hugging Face Pipelines
```python
from transformers import pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
generated_text = pipe(text, max_length=50, do_sample=False, no_repeat_ngram_size=2)[0]
print(generated_text['generated_text'])
```
or with `model.generate()`
```python
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5,
max_new_tokens=50, early_stopping=True,
no_repeat_ngram_size=2)
text_output = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(text_output[0])
```
<br><br>
## Training data
Cerebras-GPT is trained using [the Pile](https://pile.eleuther.ai) dataset from [EleutherAI](https://www.eleuther.ai). See the [Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed breakdown of data sources and methodology. The Pile was cleaned using the ftfy library to normalize the text, then filtered using scripts provided by Eleuther.
We tokenized the data using byte-pair encoding using the GPT-2 vocabulary. Our tokenized version of the Pile has 371B tokens. We include more details about the training dataset preprocessing in Appendix A.1 of our paper.
Recent works find significant duplicate data present in the Pile. Eleuther's Pythia applies a deduplication process to reduce replicated data, decreasing the Pile dataset size. Pythia was trained on both the standard dataset and deduplicated dataset to characterize the impact. Our models are trained on the standard Pile without deduplication, which may present an opportunity for further improvement with the deduplicated data set.
<br><br>
## Training procedure
We use the GPT-3 style model architecture. All of our layers use full attention as opposed to the GPT-3 style sparse banded attention. The model shapes were selected to either follow aspect ratio 80 or are the same shape as GPT-3 models. Learning rate warmed up for 375M tokens (1500 steps for 111M and 256M models) and 10x cosine decayed. No dropout was used and weight decay was set to 0.1. All models are trained with MSL of 2048.
All models were trained to Chinchilla point: 20 tokens per model parameter. Number of steps was chosen based on optimal batch size (varied by model) and fixed sequence length (2048). See Training Table, below, for details.
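As an illustration of this schedule (not the exact training code), linear warmup followed by a cosine decay to one tenth of the peak learning rate can be written as:
```python
import math

def lr_at_step(step: int, peak_lr: float, warmup_steps: int, total_steps: int) -> float:
    """Linear warmup, then cosine decay from peak_lr down to peak_lr / 10."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    min_lr = peak_lr / 10  # "10x cosine decayed"
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```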
<br>
Model Params | Sequence Length | Batch Size | Number of Steps | Tokens | Tokens per Parameter | Flops
------------ | -------------- | ---------- | --------------- | ------ | -------------------- | -----
111M | 2048 | 120 | 9037 | 2.22E+09 | 20 | 2.6E+18
256M | 2048 | 264 | 9468 | 5.12E+09 | 20 | 1.3E+19
590M | 2048 | 264 | 21836 | 1.18E+10 | 20 | 6.1E+19
1.3B | 2048 | 528 | 24334 | 2.63E+10 | 20 | 2.8E+20
2.7B | 2048 | 528 | 49041 | 5.30E+10 | 20 | 1.1E+21
6.7B | 2048 | 1040 | 62522 | 1.33E+11 | 20 | 6.3E+21
13B | 2048 | 720 | 174335 | 2.57E+11 | 20 | 2.3E+22
<br><br>
## Evaluations
We trained models from smallest to largest and fit a power law as we went along. The power law was helpful for extrapolating the validation loss of the next largest model we trained and provided confidence about whether the training run was going well.
We performed upstream (pre-training) evaluations of text prediction cross-entropy using the Pile validation and test splits. We performed downstream evaluations of text generation accuracy on standardized tasks using the [Eleuther lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). Results are compared against many publicly available large language models in Section 3 of the paper.
#### 0-shot Evaluation
| Model | Params | Training FLOPs | PILE test xent | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA | Downstream Average |
| ------- | ----- | -------------- | -------------- | ---------- | ----- | ----------- | ------- | ----- | ----- | ---------- | ------------------ |
| Cerebras-GPT | 111M | 2.6E+18 | 2.566 | 0.268 | 0.594 | 0.488 | 0.194 | 0.380 | 0.166 | 0.118 | 0.315 |
| Cerebras-GPT | 256M | 1.3E+19 | 2.299 | 0.274 | 0.613 | 0.511 | 0.293 | 0.410 | 0.170 | 0.158 | 0.347 |
| Cerebras-GPT | 590M | 6.1E+19 | 2.184 | 0.291 | 0.627 | 0.498 | 0.366 | 0.464 | 0.190 | 0.158 | 0.370 |
| Cerebras-GPT | 1.3B | 2.8E+20 | 1.996 | 0.325 | 0.664 | 0.521 | 0.462 | 0.508 | 0.224 | 0.166 | 0.410 |
| Cerebras-GPT | 2.7B | 1.1E+21 | 1.834 | 0.386 | 0.701 | 0.559 | 0.567 | 0.571 | 0.246 | 0.206 | 0.462 |
| Cerebras-GPT | 6.7B | 6.3E+21 | 1.704 | 0.447 | 0.739 | 0.602 | 0.636 | 0.643 | 0.282 | 0.238 | 0.512 |
| Cerebras-GPT | 13B | 2.3E+22 | 1.575 | 0.513 | 0.766 | 0.646 | 0.696 | 0.714 | 0.367 | 0.286 | 0.570 |
#### 5-shot Evaluation
| Model | Params | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA |
| -------- | ----- | ----------| ----- | ----------- | -------| ----- | ----- | ---------- |
| Cerebras-GPT | 111M | 0.267 | 0.588 | 0.475 | 0.158 | 0.356 | 0.166 | 0.136 |
| Cerebras-GPT | 256M | 0.278 | 0.606 | 0.522 | 0.225 | 0.422 | 0.183 | 0.164 |
| Cerebras-GPT | 590M | 0.291 | 0.634 | 0.479 | 0.281 | 0.475 | 0.206 | 0.152 |
| Cerebras-GPT | 1.3B | 0.326 | 0.668 | 0.536 | 0.395 | 0.529 | 0.241 | 0.174 |
| Cerebras-GPT | 2.7B | 0.382 | 0.697 | 0.543 | 0.487 | 0.590 | 0.267 | 0.224 |
| Cerebras-GPT | 6.7B | 0.444 | 0.736 | 0.590 | 0.591 | 0.667 | 0.314 | 0.270 |
| Cerebras-GPT | 13B | 0.514 | 0.768 | 0.674 | 0.655 | 0.743 | 0.398 | 0.318 |
<br><br>
## Uses and Limitations
### Intended Use
The primary intended use is to further research into large language models. These models can be used as a foundation model for NLP, applications, ethics, and alignment research. Our primary intended users are researchers who are working to improve LLMs and practitioners seeking reference implementations, training setups, hyperparameters, or pre-trained models. We release these models with a fully permissive Apache license for the community to use freely.
You may fine-tune and adapt Cerebras-GPT models for deployment via either Cerebras [Model Studio](https://www.cerebras.net/product-cloud/) or third-party libraries. Further safety-related testing and mitigations should be applied before using the Cerebras-GPT model family in production downstream applications.
Due to financial and compute budgets, Cerebras-GPT models were only trained and evaluated following the approaches described in the paper.
### Out of Scope Use
Cerebras-GPT models are trained on the Pile, with English language only, and are not suitable for machine translation tasks.
Cerebras-GPT models have not been tuned for human-facing dialog applications like chatbots and will not respond to prompts in a similar way to models that have received instruction tuning or reinforcement learning from human feedback (RLHF) like Flan-T5 or ChatGPT. Cerebras-GPT models can be tuned using those methods.
### Risk, Bias, Ethical Considerations
* **Data**: The Pile dataset has been thoroughly analyzed from various ethical standpoints such as toxicity analysis, gender bias, pejorative content, racially sensitive content etc. Please refer to Pile dataset references.
* **Human life**: The outputs from this model may or may not align with human values. The risk needs to be thoroughly investigated before deploying this model in a production environment where it can directly impact human life.
* **Risks and harms**: There can be distributional bias in the Pile dataset that can manifest in various forms in the downstream model deployment. There are other risks associated with large language models such as amplifying stereotypes, memorizing training data, or revealing private or secure information.
* **Mitigations**: Only mitigations in standard Pile dataset pre-processing were employed when pre-training Cerebras-GPT.
<br><br>
## Acknowledgements
We are thankful to all Cerebras engineers, past and present, who made this work possible. | [
"TRANSLATION"
] | Non_BioNLP |
gaudi/opus-mt-tr-en-ctranslate2 | gaudi | translation | [
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,721,175,425,000 | 2024-10-18T22:51:04 | 6 | 0 | ---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-tr-en)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`CPU benchmarks were executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-tr-en).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed on our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-tr-en --output_dir ./ctranslate2/opus-mt-tr-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-tr-en-ctranslate2
```
#### Take the Python code below and update the `model_dir` variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-tr-en-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-tr-en-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-tr-en) by Helsinki-NLP.
| [
"TRANSLATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-af-es | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"af",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T11:25:22 | 96 | 0 | ---
language:
- af
- es
license: apache-2.0
tags:
- translation
---
### afr-spa
* source group: Afrikaans
* target group: Spanish
* OPUS readme: [afr-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): spa
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.eval.txt)
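A minimal translation sketch using the Transformers MarianMT classes (the Afrikaans example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-af-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Afrikaans -> Spanish; the input sentence is only an illustration.
batch = tokenizer(["Die weer is vandag mooi."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```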
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.spa | 49.9 | 0.680 |
### System Info:
- hf_name: afr-spa
- source_languages: afr
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'es']
- src_constituents: {'afr'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.test.txt
- src_alpha3: afr
- tgt_alpha3: spa
- short_pair: af-es
- chrF2_score: 0.68
- bleu: 49.9
- brevity_penalty: 1.0
- ref_len: 2783.0
- src_name: Afrikaans
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: af
- tgt_alpha2: es
- prefer_old: False
- long_pair: afr-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
"TRANSLATION"
] | Non_BioNLP |
TheBloke/finance-LLM-GGUF | TheBloke | text-generation | [
"transformers",
"gguf",
"llama",
"finance",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:GAIR/lima",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"arxiv:2309.09530",
"base_model:AdaptLLM/finance-LLM",
"base_model:quantized:AdaptLLM/finance-LLM",
"license:other",
"region:us"
] | 1,703,453,335,000 | 2023-12-24T21:33:31 | 757 | 19 | ---
base_model: AdaptLLM/finance-LLM
datasets:
- Open-Orca/OpenOrca
- GAIR/lima
- WizardLM/WizardLM_evol_instruct_V2_196k
language:
- en
license: other
metrics:
- accuracy
model_name: Finance LLM
pipeline_tag: text-generation
tags:
- finance
inference: false
model_creator: AdaptLLM
model_type: llama
prompt_template: '[INST] <<SYS>>
{system_message}
<</SYS>>
{prompt} [/INST]
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Finance LLM - GGUF
- Model creator: [AdaptLLM](https://huggingface.co/AdaptLLM)
- Original model: [Finance LLM](https://huggingface.co/AdaptLLM/finance-LLM)
<!-- description start -->
## Description
This repo contains GGUF format model files for [AdaptLLM's Finance LLM](https://huggingface.co/AdaptLLM/finance-LLM).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/finance-LLM-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/finance-LLM-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/finance-LLM-GGUF)
* [AdaptLLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/AdaptLLM/finance-LLM)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Llama-2-Chat
```
[INST] <<SYS>>
{system_message}
<</SYS>>
{prompt} [/INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
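As a back-of-the-envelope check, the stated bits-per-weight can be reproduced from the block layout above. The sketch below assumes an fp16 super-block scale and min for GGML_TYPE_Q4_K, which matches the 4.5 bpw figure:

```python
# Rough bits-per-weight estimate for GGML_TYPE_Q4_K (fp16 super-block scale/min assumed).
blocks = 8                     # blocks per super-block
weights = blocks * 32          # 32 weights per block -> 256 weights

weight_bits = weights * 4              # 4-bit quantized weights
scale_min_bits = blocks * (6 + 6)      # 6-bit scale + 6-bit min per block
superblock_bits = 2 * 16               # fp16 super-block scale and min (assumption)

print((weight_bits + scale_min_bits + superblock_bits) / weights)  # 4.5
```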
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [finance-llm.Q2_K.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [finance-llm.Q3_K_S.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [finance-llm.Q3_K_M.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [finance-llm.Q3_K_L.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [finance-llm.Q4_0.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [finance-llm.Q4_K_S.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [finance-llm.Q4_K_M.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [finance-llm.Q5_0.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [finance-llm.Q5_K_S.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [finance-llm.Q5_K_M.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [finance-llm.Q6_K.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [finance-llm.Q8_0.gguf](https://huggingface.co/TheBloke/finance-LLM-GGUF/blob/main/finance-llm.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/finance-LLM-GGUF and below it, a specific filename to download, such as: finance-llm.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/finance-LLM-GGUF finance-llm.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/finance-LLM-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/finance-LLM-GGUF finance-llm.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m finance-llm.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\n{system_message}\n<</SYS>>\n{prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./finance-llm.Q4_K_M.gguf", # Download the model file first
n_ctx=2048, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n{prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./finance-llm.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: AdaptLLM's Finance LLM
# Adapt (Large) Language Models to Domains
This repo contains the domain-specific base model developed from **LLaMA-1-7B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).
We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.
### 🤗 We are currently working hard on developing models across different domains, scales and architectures! Please stay tuned! 🤗
**************************** **Updates** ****************************
* 12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/finance-LLM-13B) developed from LLaMA-1-13B.
* 12/8: Released our [chat models](https://huggingface.co/AdaptLLM/finance-chat) developed from LLaMA-2-Chat-7B.
* 9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [base models](https://huggingface.co/AdaptLLM/finance-LLM) developed from LLaMA-1-7B.
## Domain-Specific LLaMA-1
### LLaMA-1-7B
In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available on Hugging Face: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM). The performance of AdaptLLM compared to other domain-specific LLMs is shown below:
<p align='center'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700">
</p>
### LLaMA-1-13B
Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**, and the results are consistently positive too: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B).
## Domain-Specific LLaMA-2-Chat
Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension can perfectly fit the data format** by transforming the reading comprehension into a multi-turn conversation. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat).
For example, to chat with the finance model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("AdaptLLM/finance-chat")
tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/finance-chat", use_fast=False)
# Put your input here:
user_input = '''Use this fact to answer the question: Title of each class Trading Symbol(s) Name of each exchange on which registered
Common Stock, Par Value $.01 Per Share MMM New York Stock Exchange
MMM Chicago Stock Exchange, Inc.
1.500% Notes due 2026 MMM26 New York Stock Exchange
1.750% Notes due 2030 MMM30 New York Stock Exchange
1.500% Notes due 2031 MMM31 New York Stock Exchange
Which debt securities are registered to trade on a national securities exchange under 3M's name as of Q2 of 2023?'''
# We use the prompt template of LLaMA-2-Chat demo
prompt = f"<s>[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n{user_input} [/INST]"
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(model.device)
outputs = model.generate(input_ids=inputs, max_length=4096)[0]
answer_start = int(inputs.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}')
```
## Domain-Specific Tasks
To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).
**Note:** those filled-in instructions are specifically tailored for models before alignment and do NOT fit the specific data format required for chat models.
## Citation
If you find our work helpful, please cite us:
```bibtex
@article{adaptllm,
title = {Adapting Large Language Models via Reading Comprehension},
author = {Daixuan Cheng and Shaohan Huang and Furu Wei},
journal = {CoRR},
volume = {abs/2309.09530},
year = {2023}
}
```
<!-- original-model-card end -->
| [
"QUESTION_ANSWERING"
] | Non_BioNLP |
anismahmahi/G2_replace_Whata_repetition_with_noPropaganda_SetFit | anismahmahi | text-classification | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | 1,704,634,408,000 | 2024-01-07T13:33:55 | 3 | 0 | ---
base_model: sentence-transformers/paraphrase-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: Fox News, The Washington Post, NBC News, The Associated Press and the Los
Angeles Times are among the entities that have said they will file amicus briefs
on behalf of CNN.
- text: 'Tommy Robinson is in prison today because he violated a court order demanding
that he not film videos outside the trials of Muslim rape gangs.
'
- text: As I wrote during the presidential campaign, Trump has no idea of Washington
and no idea who to appoint who would support him rather than work against him.
- text: IN MAY 2013, the Washington Postโs Greg Miller reported that the head of the
CIAโs clandestine service was being shifted out of that position as a result of
โa management shake-upโ by then-Director John Brennan.
- text: Columbus police are investigating the shootings.
inference: false
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.602089552238806
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A OneVsRestClassifier instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a OneVsRestClassifier instance
- **Maximum Sequence Length:** 512 tokens
<!-- - **Number of Classes:** Unknown -->
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.6021 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("anismahmahi/G2_replace_Whata_repetition_with_noPropaganda_SetFit")
# Run inference
preds = model("Columbus police are investigating the shootings.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 1 | 23.1093 | 129 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 10
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
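For reference, here is a minimal re-training sketch mirroring the hyperparameters above; the two-example dataset is purely illustrative and should be replaced with your own few-shot data:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Illustrative few-shot dataset; replace with your own labeled examples.
train_ds = Dataset.from_dict({"text": ["great movie", "terrible movie"], "label": [1, 0]})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
args = TrainingArguments(batch_size=(16, 16), num_epochs=(2, 2), num_iterations=10, seed=42)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```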
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:--------:|:-------------:|:---------------:|
| 0.0002 | 1 | 0.3592 | - |
| 0.0121 | 50 | 0.2852 | - |
| 0.0243 | 100 | 0.2694 | - |
| 0.0364 | 150 | 0.2182 | - |
| 0.0486 | 200 | 0.2224 | - |
| 0.0607 | 250 | 0.2634 | - |
| 0.0729 | 300 | 0.2431 | - |
| 0.0850 | 350 | 0.2286 | - |
| 0.0971 | 400 | 0.197 | - |
| 0.1093 | 450 | 0.2466 | - |
| 0.1214 | 500 | 0.2374 | - |
| 0.1336 | 550 | 0.2134 | - |
| 0.1457 | 600 | 0.2092 | - |
| 0.1578 | 650 | 0.1987 | - |
| 0.1700 | 700 | 0.2288 | - |
| 0.1821 | 750 | 0.1562 | - |
| 0.1943 | 800 | 0.27 | - |
| 0.2064 | 850 | 0.1314 | - |
| 0.2186 | 900 | 0.2144 | - |
| 0.2307 | 950 | 0.184 | - |
| 0.2428 | 1000 | 0.2069 | - |
| 0.2550 | 1050 | 0.1773 | - |
| 0.2671 | 1100 | 0.0704 | - |
| 0.2793 | 1150 | 0.1139 | - |
| 0.2914 | 1200 | 0.2398 | - |
| 0.3035 | 1250 | 0.0672 | - |
| 0.3157 | 1300 | 0.1321 | - |
| 0.3278 | 1350 | 0.0803 | - |
| 0.3400 | 1400 | 0.0589 | - |
| 0.3521 | 1450 | 0.0428 | - |
| 0.3643 | 1500 | 0.0886 | - |
| 0.3764 | 1550 | 0.0839 | - |
| 0.3885 | 1600 | 0.1843 | - |
| 0.4007 | 1650 | 0.0375 | - |
| 0.4128 | 1700 | 0.114 | - |
| 0.4250 | 1750 | 0.1264 | - |
| 0.4371 | 1800 | 0.0585 | - |
| 0.4492 | 1850 | 0.0586 | - |
| 0.4614 | 1900 | 0.0805 | - |
| 0.4735 | 1950 | 0.0686 | - |
| 0.4857 | 2000 | 0.0684 | - |
| 0.4978 | 2050 | 0.0803 | - |
| 0.5100 | 2100 | 0.076 | - |
| 0.5221 | 2150 | 0.0888 | - |
| 0.5342 | 2200 | 0.1091 | - |
| 0.5464 | 2250 | 0.038 | - |
| 0.5585 | 2300 | 0.0674 | - |
| 0.5707 | 2350 | 0.0562 | - |
| 0.5828 | 2400 | 0.0603 | - |
| 0.5949 | 2450 | 0.0669 | - |
| 0.6071 | 2500 | 0.0829 | - |
| 0.6192 | 2550 | 0.1442 | - |
| 0.6314 | 2600 | 0.0914 | - |
| 0.6435 | 2650 | 0.0357 | - |
| 0.6557 | 2700 | 0.0546 | - |
| 0.6678 | 2750 | 0.0748 | - |
| 0.6799 | 2800 | 0.0149 | - |
| 0.6921 | 2850 | 0.1067 | - |
| 0.7042 | 2900 | 0.0054 | - |
| 0.7164 | 2950 | 0.0878 | - |
| 0.7285 | 3000 | 0.0385 | - |
| 0.7407 | 3050 | 0.036 | - |
| 0.7528 | 3100 | 0.0902 | - |
| 0.7649 | 3150 | 0.0734 | - |
| 0.7771 | 3200 | 0.0369 | - |
| 0.7892 | 3250 | 0.0031 | - |
| 0.8014 | 3300 | 0.0113 | - |
| 0.8135 | 3350 | 0.0862 | - |
| 0.8256 | 3400 | 0.0549 | - |
| 0.8378 | 3450 | 0.0104 | - |
| 0.8499 | 3500 | 0.0072 | - |
| 0.8621 | 3550 | 0.0546 | - |
| 0.8742 | 3600 | 0.0579 | - |
| 0.8864 | 3650 | 0.0789 | - |
| 0.8985 | 3700 | 0.0711 | - |
| 0.9106 | 3750 | 0.0361 | - |
| 0.9228 | 3800 | 0.0292 | - |
| 0.9349 | 3850 | 0.0121 | - |
| 0.9471 | 3900 | 0.0066 | - |
| 0.9592 | 3950 | 0.0091 | - |
| 0.9713 | 4000 | 0.0027 | - |
| 0.9835 | 4050 | 0.0891 | - |
| 0.9956 | 4100 | 0.0186 | - |
| **1.0** | **4118** | **-** | **0.2746** |
| 1.0078 | 4150 | 0.0246 | - |
| 1.0199 | 4200 | 0.0154 | - |
| 1.0321 | 4250 | 0.0056 | - |
| 1.0442 | 4300 | 0.0343 | - |
| 1.0563 | 4350 | 0.0375 | - |
| 1.0685 | 4400 | 0.0106 | - |
| 1.0806 | 4450 | 0.0025 | - |
| 1.0928 | 4500 | 0.0425 | - |
| 1.1049 | 4550 | 0.0019 | - |
| 1.1170 | 4600 | 0.0014 | - |
| 1.1292 | 4650 | 0.0883 | - |
| 1.1413 | 4700 | 0.0176 | - |
| 1.1535 | 4750 | 0.0204 | - |
| 1.1656 | 4800 | 0.0011 | - |
| 1.1778 | 4850 | 0.005 | - |
| 1.1899 | 4900 | 0.0238 | - |
| 1.2020 | 4950 | 0.0362 | - |
| 1.2142 | 5000 | 0.0219 | - |
| 1.2263 | 5050 | 0.0487 | - |
| 1.2385 | 5100 | 0.0609 | - |
| 1.2506 | 5150 | 0.0464 | - |
| 1.2627 | 5200 | 0.0033 | - |
| 1.2749 | 5250 | 0.0087 | - |
| 1.2870 | 5300 | 0.0101 | - |
| 1.2992 | 5350 | 0.0529 | - |
| 1.3113 | 5400 | 0.0243 | - |
| 1.3235 | 5450 | 0.001 | - |
| 1.3356 | 5500 | 0.0102 | - |
| 1.3477 | 5550 | 0.0047 | - |
| 1.3599 | 5600 | 0.0034 | - |
| 1.3720 | 5650 | 0.0118 | - |
| 1.3842 | 5700 | 0.0742 | - |
| 1.3963 | 5750 | 0.0538 | - |
| 1.4085 | 5800 | 0.0162 | - |
| 1.4206 | 5850 | 0.0079 | - |
| 1.4327 | 5900 | 0.0027 | - |
| 1.4449 | 5950 | 0.0035 | - |
| 1.4570 | 6000 | 0.0581 | - |
| 1.4692 | 6050 | 0.0813 | - |
| 1.4813 | 6100 | 0.0339 | - |
| 1.4934 | 6150 | 0.0312 | - |
| 1.5056 | 6200 | 0.0323 | - |
| 1.5177 | 6250 | 0.0521 | - |
| 1.5299 | 6300 | 0.0016 | - |
| 1.5420 | 6350 | 0.0009 | - |
| 1.5542 | 6400 | 0.0967 | - |
| 1.5663 | 6450 | 0.0009 | - |
| 1.5784 | 6500 | 0.031 | - |
| 1.5906 | 6550 | 0.0114 | - |
| 1.6027 | 6600 | 0.0599 | - |
| 1.6149 | 6650 | 0.0416 | - |
| 1.6270 | 6700 | 0.0047 | - |
| 1.6391 | 6750 | 0.0234 | - |
| 1.6513 | 6800 | 0.0609 | - |
| 1.6634 | 6850 | 0.022 | - |
| 1.6756 | 6900 | 0.0042 | - |
| 1.6877 | 6950 | 0.0336 | - |
| 1.6999 | 7000 | 0.0592 | - |
| 1.7120 | 7050 | 0.0536 | - |
| 1.7241 | 7100 | 0.1198 | - |
| 1.7363 | 7150 | 0.1035 | - |
| 1.7484 | 7200 | 0.0549 | - |
| 1.7606 | 7250 | 0.027 | - |
| 1.7727 | 7300 | 0.0251 | - |
| 1.7848 | 7350 | 0.0225 | - |
| 1.7970 | 7400 | 0.0027 | - |
| 1.8091 | 7450 | 0.0309 | - |
| 1.8213 | 7500 | 0.024 | - |
| 1.8334 | 7550 | 0.0355 | - |
| 1.8456 | 7600 | 0.0239 | - |
| 1.8577 | 7650 | 0.0377 | - |
| 1.8698 | 7700 | 0.012 | - |
| 1.8820 | 7750 | 0.0233 | - |
| 1.8941 | 7800 | 0.0184 | - |
| 1.9063 | 7850 | 0.0022 | - |
| 1.9184 | 7900 | 0.0043 | - |
| 1.9305 | 7950 | 0.014 | - |
| 1.9427 | 8000 | 0.0083 | - |
| 1.9548 | 8050 | 0.0084 | - |
| 1.9670 | 8100 | 0.0009 | - |
| 1.9791 | 8150 | 0.002 | - |
| 1.9913 | 8200 | 0.0002 | - |
| 2.0 | 8236 | - | 0.2768 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
MultiBertGunjanPatrick/multiberts-seed-4-100k | MultiBertGunjanPatrick | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-4",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2021-10-04T05:10:05 | 111 | 0 | ---
datasets:
- bookcorpus
- wikipedia
language: en
license: apache-2.0
tags:
- exbert
- multiberts
- multiberts-seed-4
---
# MultiBERTs Seed 4 Checkpoint 100k (uncased)
Intermediate checkpoint (100k steps) of the seed-4 MultiBERTs model: a BERT model pretrained on English using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, each model
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-100k')
model = BertModel.from_pretrained("multiberts-seed-4-100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
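A minimal sketch of that 80/10/10 rule on generic token ids; `-100` follows the usual convention for positions ignored by the MLM loss:

```python
import random

def mask_tokens(token_ids, vocab_size, mask_id, mask_prob=0.15):
    """BERT-style masking sketch: 80% [MASK], 10% random token, 10% unchanged."""
    inputs, labels = list(token_ids), [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if random.random() < mask_prob:
            labels[i] = tok                      # predict the original token here
            r = random.random()
            if r < 0.8:
                inputs[i] = mask_id              # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = random.randrange(vocab_size)  # 10%: random token
            # remaining 10%: leave the token as is
    return inputs, labels
```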
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| [
"QUESTION_ANSWERING"
] | Non_BioNLP |
DOSaAI/albanian-gpt2-large-120m-instruct-v0.1 | DOSaAI | text-generation | [
"transformers",
"text-generation",
"sq",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,711,913,253,000 | 2024-03-31T19:29:56 | 0 | 1 | ---
language:
- sq
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---
# Albanian GPT-2
## Model Description
This model is a fine-tuned version of the GPT-2 model by [OpenAI](https://openai.com/) for Albanian text generation tasks. GPT-2 is a state-of-the-art natural language processing model developed by OpenAI. It is a variant of the GPT (Generative Pre-trained Transformer) model, pre-trained on a large corpus of English text data. This fine-tuned version has been trained on a custom dataset of Albanian text data and can generate coherent and contextually relevant text in Albanian.
## Intended Use
The model is intended for text generation tasks in Albanian and English. It can be used for various natural language processing tasks such as text completion, text summarization, dialogue generation, and more. It is particularly useful for generating creative and contextually relevant text in both Albanian and English.
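A minimal generation sketch; the Albanian prompt ("Shqipëria është" = "Albania is") and generation settings are illustrative only:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="DOSaAI/albanian-gpt2-large-120m-instruct-v0.1")
print(generator("Shqipëria është", max_new_tokens=40)[0]["generated_text"])
```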
## Training Data
The model has been fine-tuned on a custom dataset consisting of Albanian text data. The dataset used for fine-tuning includes a diverse range of text sources in Albanian to ensure the model's proficiency in generating high-quality text across different domains.
## Limitations and Biases
As with any machine learning model, this model may exhibit biases present in the training data. Additionally, while the model performs well on a wide range of text generation tasks in Albanian and English, it may not always produce contextually appropriate or grammatically correct output. Users should review and evaluate the generated text to ensure it meets their quality standards.
## Acknowledgments
- This model is based on the GPT-2 architecture developed by OpenAI.
- The fine-tuning process for this model was facilitated by the Hugging Face Transformers library.
## Contact Information
For any questions, feedback, or inquiries related to the model, please contact the model developer:
- Name: DOSaAI
- Email: [email protected] | [
"SUMMARIZATION"
] | Non_BioNLP |
Lvxue/distilled-mt5-small-1-0.5 | Lvxue | text2text-generation | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"en",
"ro",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,660,269,997,000 | 2022-08-12T03:22:00 | 11 | 0 | ---
datasets:
- wmt16
language:
- en
- ro
license: apache-2.0
metrics:
- bleu
tags:
- generated_from_trainer
model-index:
- name: distilled-mt5-small-1-0.5
results:
- task:
type: translation
name: Translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- type: bleu
value: 5.3917
name: Bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-1-0.5
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8410
- Bleu: 5.3917
- Gen Len: 40.6103
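A minimal inference sketch; note that the exact task prefix (if any) depends on the fine-tuning setup, so the "translate ..." prefix below is an assumption:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Lvxue/distilled-mt5-small-1-0.5")
model = AutoModelForSeq2SeqLM.from_pretrained("Lvxue/distilled-mt5-small-1-0.5")

# Romanian -> English; the task prefix is assumed, not documented.
inputs = tokenizer("translate Romanian to English: Casa este albă.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```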
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| [
"TRANSLATION"
] | Non_BioNLP |
aroot/wsample.49 | aroot | translation | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,688,511,805,000 | 2023-07-05T00:41:23 | 8 | 0 | ---
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: wsample.49
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wsample.49
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2266
- Bleu: 3.0080
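A minimal inference sketch; the card does not state the language pair, so the language codes below are placeholders:

```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

tokenizer = MBart50TokenizerFast.from_pretrained("aroot/wsample.49", src_lang="en_XX")  # placeholder code
model = MBartForConditionalGeneration.from_pretrained("aroot/wsample.49")

inputs = tokenizer("Hello world.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])  # placeholder code
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```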
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
| [
"TRANSLATION"
] | Non_BioNLP |
ronaldseoh/long-t5-local-base | ronaldseoh | null | [
"pytorch",
"jax",
"longt5",
"en",
"arxiv:2112.07916",
"arxiv:1912.08777",
"arxiv:1910.10683",
"license:apache-2.0",
"region:us"
] | 1,726,798,138,000 | 2023-01-24T17:08:34 | 9 | 0 | ---
language: en
license: apache-2.0
---
# LongT5 (local attention, base-sized model)
LongT5 model pre-trained on English language. The model was introduced in the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/pdf/2112.07916.pdf) by Guo et al. and first released in [the LongT5 repository](https://github.com/google-research/longt5). The full model architecture and configuration can be found in the [Flaxformer repository](https://github.com/google/flaxformer), which uses another Google research project repository, [T5x](https://github.com/google-research/t5x).
Disclaimer: The team releasing LongT5 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
LongT5 model is an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting ([Pegasus-like generation pre-training](https://arxiv.org/pdf/1912.08777.pdf)). LongT5 model is an extension of [T5 model](https://arxiv.org/pdf/1910.10683.pdf), and it enables using one of the two different efficient attention mechanisms - (1) Local attention, or (2) Transient-Global attention. The usage of attention sparsity patterns allows the model to efficiently handle long input sequences.
LongT5 is particularly effective when fine-tuned for text generation (summarization, question answering) which requires handling long input sequences (up to 16,384 tokens).
## Intended uses & limitations
The model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=longt5) to look for fine-tuned versions on a task that interests you.
### How to use
```python
from transformers import AutoTokenizer, LongT5Model
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-local-base")
model = LongT5Model.from_pretrained("google/long-t5-local-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
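For generation-style use (the setting this checkpoint is meant to be fine-tuned for), a minimal sketch with the conditional-generation head; note the base checkpoint is only pre-trained, so outputs are not meaningful before fine-tuning:
```python
from transformers import AutoTokenizer, LongT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/long-t5-local-base")
model = LongT5ForConditionalGeneration.from_pretrained("google/long-t5-local-base")

# LongT5 accepts much longer inputs than vanilla T5 (up to 16,384 tokens).
long_document = "summarize: " + "Lorem ipsum dolor sit amet. " * 200  # placeholder long input
inputs = tokenizer(long_document, return_tensors="pt", truncation=True, max_length=4096)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```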
### BibTeX entry and citation info
```bibtex
@article{guo2021longt5,
title={LongT5: Efficient Text-To-Text Transformer for Long Sequences},
author={Guo, Mandy and Ainslie, Joshua and Uthus, David and Ontanon, Santiago and Ni, Jianmo and Sung, Yun-Hsuan and Yang, Yinfei},
journal={arXiv preprint arXiv:2112.07916},
year={2021}
}
``` | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
marbogusz/bert-multi-cased-squad_sv | marbogusz | question-answering | [
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2021-05-19T23:00:13 | 103 | 0 | ---
{}
---
Swedish BERT multilingual model trained on a machine-translated (MS neural translation) SQuAD 1.1 dataset.
| [
"TRANSLATION"
] | Non_BioNLP |
maastrichtlawtech/wizardlm-7b-v1.0-lleqa | maastrichtlawtech | text-generation | [
"peft",
"legal",
"text-generation",
"fr",
"dataset:maastrichtlawtech/lleqa",
"arxiv:2309.17050",
"license:apache-2.0",
"region:us"
] | 1,695,917,091,000 | 2023-10-03T09:44:44 | 4 | 3 | ---
datasets:
- maastrichtlawtech/lleqa
language:
- fr
library_name: peft
license: apache-2.0
metrics:
- rouge
- meteor
pipeline_tag: text-generation
tags:
- legal
inference: false
---
# wizardLM-7b-v1.0-lleqa
This is a [wizardlm-7b-v1.0](https://huggingface.co/WizardLM/WizardLM-7B-V1.0) model fine-tuned with [QLoRA](https://github.com/artidoro/qlora) for long-form legal question answering in **French**.
## Usage
The original card leaves this section as a placeholder. Below is a minimal sketch (an assumption, not the authors' script) of loading the QLoRA adapter on top of the base model with PEFT:
```python
# Hypothetical usage sketch: base WizardLM model + LLeQA LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "WizardLM/WizardLM-7B-V1.0"
adapter_id = "maastrichtlawtech/wizardlm-7b-v1.0-lleqa"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# Example French legal question (hypothetical prompt format).
question = "Quelles sont les conditions de validité d'un contrat ?"
inputs = tokenizer(question, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## Training
#### Data
We use the [Long-form Legal Question Answering (LLeQA)](https://huggingface.co/datasets/maastrichtlawtech/lleqa) dataset to fine-tune the model. LLeQA is a French native dataset for studying legal information retrieval and question answering. It consists of a knowledge corpus of 27,941 statutory articles collected from the Belgian legislation, and 1,868 legal questions posed by Belgian citizens and labeled by experienced jurists with a comprehensive answer rooted in relevant articles from the corpus.
#### Hyperparameters
We fine-tune the model through 4-bit QLoRA finetuning with an effective batch size of 8 for 10 epochs (i.e., 1.1K steps) using paged AdamW optimizer with default momentum parameters and constant learning rate schedule of 2e-4. We employ NormalFloat4 with double quantization for the base models and add LoRA adapters on all linear layers by setting r=16, alpha=32 while utilizing float16 as computation datatype. Additionally, we perform NTK-aware scaling of RoPE to extend the context window to 4096 tokens. Training takes around 7.5 hours to complete on a single Tesla V100 GPU with 32GBs of memory. More details can be found in this [paper](https://arxiv.org/abs/2309.17050) and [repository](https://github.com/maastrichtlawtech/lleqa).
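A minimal sketch of the quantization and adapter configuration described above; the stated values (NF4 with double quantization, float16 compute, r=16, alpha=32, adapters on all linear layers) come from this paragraph, everything else is an assumption:
```python
# Hypothetical reconstruction of the QLoRA setup described above.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NormalFloat4
    bnb_4bit_use_double_quant=True,        # double quantization
    bnb_4bit_compute_dtype=torch.float16,  # float16 computation datatype
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules="all-linear",  # adapters on all linear layers
    task_type="CAUSAL_LM",
)
```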
## Citation
```bibtex
@article{louis2023interpretable,
author = {Louis, Antoine and van Dijck, Gijs and Spanakis, Gerasimos},
title = {Interpretable Long-Form Legal Question Answering with Retrieval-Augmented Large Language Models},
journal = {CoRR},
volume = {abs/2309.17050},
year = {2023},
url = {https://arxiv.org/abs/2309.17050},
eprinttype = {arXiv},
eprint = {2309.17050},
}
```
| [
"QUESTION_ANSWERING"
] | Non_BioNLP |
tmnam20/mdeberta-v3-base-vsfc-1 | tmnam20 | text-classification | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,705,394,694,000 | 2024-01-16T08:47:32 | 4 | 0 | ---
base_model: microsoft/mdeberta-v3-base
datasets:
- tmnam20/VieGLUE
language:
- en
license: mit
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: mdeberta-v3-base-vsfc-1
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tmnam20/VieGLUE/VSFC
type: tmnam20/VieGLUE
config: vsfc
split: validation
args: vsfc
metrics:
- type: accuracy
value: 0.950726468730259
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-vsfc-1
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/VSFC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2229
- Accuracy: 0.9507
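A minimal inference sketch (not part of the auto-generated card; VSFC is a Vietnamese student-feedback classification task, and the label mapping comes from the checkpoint's config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="tmnam20/mdeberta-v3-base-vsfc-1")
# Example Vietnamese feedback sentence ("the lecturer teaches very well").
print(classifier("Giảng viên dạy rất hay"))
```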
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1695 | 1.4 | 500 | 0.2297 | 0.9425 |
| 0.1095 | 2.79 | 1000 | 0.2185 | 0.9482 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Triangle104/granite-3.2-2b-instruct-Q5_K_S-GGUF | Triangle104 | text-generation | [
"transformers",
"gguf",
"language",
"granite-3.2",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:ibm-granite/granite-3.2-2b-instruct",
"base_model:quantized:ibm-granite/granite-3.2-2b-instruct",
"license:apache-2.0",
"region:us",
"conversational"
] | 1,740,748,781,000 | 2025-02-28T13:21:09 | 18 | 0 | ---
base_model: ibm-granite/granite-3.2-2b-instruct
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- language
- granite-3.2
- llama-cpp
- gguf-my-repo
inference: false
---
# Triangle104/granite-3.2-2b-instruct-Q5_K_S-GGUF
This model was converted to GGUF format from [`ibm-granite/granite-3.2-2b-instruct`](https://huggingface.co/ibm-granite/granite-3.2-2b-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ibm-granite/granite-3.2-2b-instruct) for more details on the model.
---
**Model Summary**
Granite-3.2-2B-Instruct is a 2-billion-parameter, long-context AI model fine-tuned for thinking capabilities. Built on top of Granite-3.1-2B-Instruct, it has been trained using a mix of permissively licensed open-source datasets and internally generated synthetic data designed for reasoning tasks. The model allows controllability of its thinking capability, ensuring it is applied only when required.
- **Developers:** Granite Team, IBM
- **Website:** Granite Docs
- **Release Date:** February 26th, 2025
- **License:** Apache 2.0
**Supported Languages:** English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may finetune this Granite model for languages beyond these 12 languages.
**Intended Use:** This model is designed to handle general instruction-following tasks and can be integrated into AI assistants across various domains, including business applications.
**Capabilities**
- Thinking
- Summarization
- Text classification
- Text extraction
- Question-answering
- Retrieval Augmented Generation (RAG)
- Code related tasks
- Function-calling tasks
- Multilingual dialog use cases
- Long-context tasks including long document/meeting summarization, long document QA, etc.
**Generation:** This is a simple example of how to use the Granite-3.2-2B-Instruct model.
Install the following libraries:
```shell
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
```
Then, copy the snippet from the section that is relevant for your use case.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
import torch
model_path="ibm-granite/granite-3.2-2b-instruct"
device="cuda"
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map=device,
torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(
model_path
)
conv = [{"role": "user", "content":"You have 10 liters of a 30% acid solution. How many liters of a 70% acid solution must be added to achieve a 50% acid mixture?"}]
input_ids = tokenizer.apply_chat_template(conv, return_tensors="pt", thinking=True, return_dict=True, add_generation_prompt=True).to(device)
set_seed(42)
output = model.generate(
**input_ids,
max_new_tokens=8192,
)
prediction = tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True)
print(prediction)
```
**Example Outputs**

*thinking=True*

```
Here is my thought process:
1. First, let's understand the composition of the initial 10 liters of 30% acid solution:
- Acid content = 30% of 10 liters = 0.30 * 10 = 3 liters
2. We want to end up with a 50% acid solution. Let's denote the volume of the 70% acid solution to be added as 'x' liters.
3. The amount of acid in the final mixture (50% solution) should equal the initial acid content plus the acid from the added 70% solution.
- Total acid in final mixture = 0.50 * (10 + x)
4. The equation representing the total acid content is:
3 (from initial solution) + 0.70 * x (from added solution) = 0.50 * (10 + x)
5. Now, we solve for 'x':
3 + 0.70x = 5 + 0.50x
0.70x - 0.50x = 5 - 3
0.20x = 2
x = 2 / 0.20
x = 10 liters
6. Therefore, 10 liters of a 70% acid solution must be added to the initial 10 liters of 30% acid solution to achieve a 50% acid mixture.
Here is my response:
To solve this problem, we need to determine how much of a 70% acid solution must be added to a 10-liter solution that is already 30% acid to create a new mixture that is 50% acid. Here's how to calculate it:
1. **Initial Acid Content**:
- The initial 10-liter solution contains 3 liters of acid (since 30% of 10 liters).
2. **Desired Final Acid Content**:
- We want the final mixture to be 50% acid.
3. **Equation Setup**:
- Let 'x' be the volume (in liters) of the 70% acid solution to be added.
- The equation representing the total acid content in the final mixture (which is 10 liters from the initial solution plus 'x' liters of the added solution) should equal 50% of the total volume (10 + x).
- This gives us: 3 (acid from initial solution) + 0.70x (acid from added solution) = 0.50(10 + x)
4. **Solving the Equation**:
- First, simplify the right side of the equation: 0.50(10 + x) = 5 + 0.50x
- Now, the equation is: 3 + 0.70x = 5 + 0.50x
- Subtract 0.50x from both sides: 0.20x = 2
- Divide both sides by 0.20: x = 2 / 0.20
- x = 10 liters
5. **Conclusion**:
- To achieve a 50% acid mixture from the initial 30% solution, you need to add 10 liters of a 70% acid solution.
So, the final answer is that 10 liters of a 70% acid solution must be added to the initial 10 liters of 30% acid solution.
```

*thinking=False*

```
To solve this problem, we'll use the concept of volume and concentration. Let's denote the amount of 70% acid solution we need to add as "x" liters.
First, let's find out how much acid is in the initial 10-liter solution:
Acid in initial solution = 30% of 10 liters = 0.30 * 10 = 3 liters
Now, we want to end up with a 50% acid solution in a total volume of (10 + x) liters. Let's denote the final volume as V.
Final acid concentration = 50%
Final acid amount = 50% of V = 0.50 * V
We know the initial acid amount and the final acid amount, so we can set up an equation:
Initial acid amount + Acid from added solution = Final acid amount
3 liters + (70% of x) = 0.50 * (10 + x)
Now, let's solve for x:
0.70x + 3 = 0.50 * 10 + 0.50x
0.70x - 0.50x = 0.50 * 10 - 3
0.20x = 5 - 3
0.20x = 2
x = 2 / 0.20
x = 10 liters
So, you need to add 10 liters of a 70% acid solution to the initial 10-liter 30% acid solution to achieve a 50% acid mixture.
```

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/granite-3.2-2b-instruct-Q5_K_S-GGUF --hf-file granite-3.2-2b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/granite-3.2-2b-instruct-Q5_K_S-GGUF --hf-file granite-3.2-2b-instruct-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/granite-3.2-2b-instruct-Q5_K_S-GGUF --hf-file granite-3.2-2b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/granite-3.2-2b-instruct-Q5_K_S-GGUF --hf-file granite-3.2-2b-instruct-q5_k_s.gguf -c 2048
```
| [
"TEXT_CLASSIFICATION",
"SUMMARIZATION"
] | Non_BioNLP |
tcepi/sts_bertimbau | tcepi | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,729,690,604,000 | 2024-10-23T13:37:17 | 7 | 0 | ---
base_model: neuralmind/bert-base-portuguese-cased
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
---
# SentenceTransformer based on neuralmind/bert-base-portuguese-cased
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) <!-- at revision 94d69c95f98f7d5b2a8700c420230ae10def0baa -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tcepi/sts_bertimbau")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.8.10
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
proxectonos/Nos_MT-OpenNMT-es-gl | proxectonos | null | [
"gl",
"license:mit",
"region:us"
] | 1,676,539,658,000 | 2024-10-01T07:59:41 | 0 | 1 | ---
language:
- gl
license: mit
metrics:
- bleu (Gold1): 79.6
- bleu (Gold2): 43.3
- bleu (Flores): 21.8
- bleu (Test-suite): 74.3
---
**English text [here](https://huggingface.co/proxectonos/NOS-MT-OpenNMT-es-gl/blob/main/README_English.md)**
**Model Description**
Model built with OpenNMT-py 3.2 for the Spanish-Galician pair using a transformer architecture. The model was converted to the ctranslate2 format.
**How to Translate with this Model**
+ Install [Python 3.9](https://www.python.org/downloads/release/python-390/)
+ Install [ctranslate 3.2](https://github.com/OpenNMT/CTranslate2)
+ Translate an input_text using the NOS-MT-es-gl model with the following commands:
```bash
perl tokenizer.perl < input.txt > input.tok
```
```bash
subword_nmt.apply_bpe -c ./bpe/es.bpe < input.tok > input.bpe
```
```bash
python3 translate.py ./ct2-es-gl_12L input.bpe > output.txt
```
```bash
sed -i 's/@@ //g' output.txt
```
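The `translate.py` helper invoked above is not included in this card; a minimal sketch of what it might look like with the CTranslate2 Python API (argument handling is an assumption):
```python
# Hypothetical translate.py: reads BPE-tokenized lines and prints translations.
import sys

import ctranslate2

model_dir, input_path = sys.argv[1], sys.argv[2]
translator = ctranslate2.Translator(model_dir)

with open(input_path, encoding="utf-8") as f:
    for line in f:
        tokens = line.strip().split()
        result = translator.translate_batch([tokens])
        print(" ".join(result[0].hypotheses[0]))
```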
**Training**
For training, we used authentic and synthetic corpora from the [ProxectoNós](https://github.com/proxectonos/corpora) project. The former are corpora of translations made directly by human translators. It is important to note that, although these texts were produced by humans, they are not free of linguistic errors. The latter are corpora of Spanish-Portuguese translations, which we converted into Spanish-Galician via Portuguese-Galician machine translation with Opentrad/Apertium and transliteration for out-of-vocabulary words.
**Training procedure**
+ Tokenization of the datasets was done with the [linguakit](https://github.com/citiususc/Linguakit) tokenizer (tokenizer.pl), which was modified to avoid the line break per token of the original file.
+ The BPE vocabulary for the models was generated with the [learn_bpe.py](https://github.com/OpenNMT/OpenNMT-py/blob/master/tools/learn_bpe.py) script from OpenNMT.
+ Using the .yaml file in this repository, you can replicate the training process. You need to modify the paths in the .yaml file so that OpenNMT knows where to find the texts. After doing this, you can start the process as follows:
```bash
onmt_build_vocab -config bpe-es-gl_emb.yaml -n_sample 40000
onmt_train -config bpe-es-gl_emb.yaml
```
**Hyperparameters**
The parameters used for the development of the model can be consulted directly in the bpe-es-gl_emb.yaml file.
**Evaluation**
The BLEU evaluation of the models uses a mix of internally developed test sets (gold1, gold2, test-suite) together with other datasets available in Galician (Flores).
| GOLD 1 | GOLD 2 | FLORES | TEST-SUITE|
| ------------- |:-------------:| -------:|----------:|
| 79.5 | 43.5 | 21.4 | 73.4 |
**Model License**
MIT License
Copyright (c) 2023 Proxecto Nós
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
**Funding**
This research was funded by the project "Nós: Galician in the society and economy of artificial intelligence", resulting from an agreement between the Xunta de Galicia and the University of Santiago de Compostela, under grant ED431G2019/04 from the Consellería de Educación, Universidade e Formación Profesional of Galicia, and by the European Regional Development Fund (ERDF/FEDER program) and Reference Groups: ED431C 2020/21.
**Cite this work**
If you use this model in your work, please cite it as follows:
Daniel Bardanca Outeirinho, Pablo Gamallo Otero, Iria de-Dios-Flores, and José Ramom Pichel Campos. 2024.
Exploring the effects of vocabulary size in neural machine translation: Galician as a target language.
In Proceedings of the 16th International Conference on Computational Processing of Portuguese, pages 600–604,
Santiago de Compostela, Galiza. Association for Computational Linguistics.
| [
"TRANSLATION"
] | Non_BioNLP |
chunwoolee0/seqcls_mrpc_bert_base_uncased_model | chunwoolee0 | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,689,377,271,000 | 2023-07-14T23:32:36 | 8 | 0 | ---
datasets:
- glue
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: seqcls_mrpc_bert_base_uncased_model
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- type: accuracy
value: 0.8014705882352942
name: Accuracy
- type: f1
value: 0.8669950738916257
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# seqcls_mrpc_bert_base_uncased_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4621
- Accuracy: 0.8015
- F1: 0.8670
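Since MRPC is a sentence-pair task, inference takes two sentences at once. A minimal sketch (not part of the auto-generated card; recent `transformers` versions accept a `text`/`text_pair` dict):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="chunwoolee0/seqcls_mrpc_bert_base_uncased_model",
)
# MRPC asks whether the two sentences are paraphrases of each other.
print(classifier({
    "text": "The company said profits rose.",
    "text_pair": "Profits increased, the company reported.",
}))
```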
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 58 | 0.5442 | 0.7108 | 0.8228 |
| No log | 2.0 | 116 | 0.5079 | 0.7745 | 0.8558 |
| No log | 3.0 | 174 | 0.4621 | 0.8015 | 0.8670 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
pierreguillou/bert-large-cased-squad-v1.1-portuguese | pierreguillou | question-answering | [
"transformers",
"pytorch",
"tf",
"bert",
"question-answering",
"bert-large",
"pt",
"dataset:brWaC",
"dataset:squad",
"dataset:squad_v1_pt",
"license:mit",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2022-01-04T09:57:00 | 777 | 45 | ---
datasets:
- brWaC
- squad
- squad_v1_pt
language: pt
license: mit
metrics:
- squad
tags:
- question-answering
- bert
- bert-large
- pytorch
widget:
- text: Quando começou a pandemia de Covid-19 no mundo?
  context: A pandemia de COVID-19, também conhecida como pandemia de coronavírus,
    é uma pandemia em curso de COVID-19, uma doença respiratória causada pelo coronavírus
    da síndrome respiratória aguda grave 2 (SARS-CoV-2). O vírus tem origem zoonótica
    e o primeiro caso conhecido da doença remonta a dezembro de 2019 em Wuhan, na
    China.
- text: Onde foi descoberta a Covid-19?
  context: A pandemia de COVID-19, também conhecida como pandemia de coronavírus,
    é uma pandemia em curso de COVID-19, uma doença respiratória causada pelo coronavírus
    da síndrome respiratória aguda grave 2 (SARS-CoV-2). O vírus tem origem zoonótica
    e o primeiro caso conhecido da doença remonta a dezembro de 2019 em Wuhan, na
    China.
---
# Portuguese BERT large cased QA (Question Answering), finetuned on SQUAD v1.1

## Introduction
The model was trained on the SQuAD v1.1 dataset in Portuguese from the [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/).
The language model used is the [BERTimbau Large](https://huggingface.co/neuralmind/bert-large-portuguese-cased) (aka "bert-large-portuguese-cased") from [Neuralmind.ai](https://neuralmind.ai/): BERTimbau is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large.
## Information on the method used
All the information is in the blog post: [NLP | Como treinar um modelo de Question Answering em qualquer linguagem baseado no BERT large, melhorando o desempenho do modelo utilizando o BERT base? (estudo de caso em português)](https://medium.com/@pierre_guillou/nlp-como-treinar-um-modelo-de-question-answering-em-qualquer-linguagem-baseado-no-bert-large-1c899262dd96)
## Notebook in GitHub
[question_answering_BERT_large_cased_squad_v11_pt.ipynb](https://github.com/piegu/language-models/blob/master/question_answering_BERT_large_cased_squad_v11_pt.ipynb) ([nbviewer version](https://nbviewer.jupyter.org/github/piegu/language-models/blob/master/question_answering_BERT_large_cased_squad_v11_pt.ipynb))
## Performance
The results obtained are the following:
```
f1 = 84.43 (against 82.50 for the base model)
exact match = 72.68 (against 70.49 for the base model)
```
## How to use the model... with Pipeline
```python
import transformers
from transformers import pipeline
# source: https://pt.wikipedia.org/wiki/Pandemia_de_COVID-19
context = r"""
A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19,
uma doença respiratória causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2).
O vírus tem origem zoonótica e o primeiro caso conhecido da doença remonta a dezembro de 2019 em Wuhan, na China.
Em 20 de janeiro de 2020, a Organização Mundial da Saúde (OMS) classificou o surto
como Emergência de Saúde Pública de Âmbito Internacional e, em 11 de março de 2020, como pandemia.
Em 18 de junho de 2021, 177 349 274 casos foram confirmados em 192 países e territórios,
com 3 840 181 mortes atribuídas à doença, tornando-se uma das pandemias mais mortais da história.
Os sintomas de COVID-19 são altamente variáveis, variando de nenhum a doenças com risco de morte.
O vírus se espalha principalmente pelo ar quando as pessoas estão perto umas das outras.
Ele deixa uma pessoa infectada quando ela respira, tosse, espirra ou fala e entra em outra pessoa pela boca, nariz ou olhos.
Ele também pode se espalhar através de superfícies contaminadas.
As pessoas permanecem contagiosas por até duas semanas e podem espalhar o vírus mesmo se forem assintomáticas.
"""
model_name = 'pierreguillou/bert-large-cased-squad-v1.1-portuguese'
nlp = pipeline("question-answering", model=model_name)
question = "Quando começou a pandemia de Covid-19 no mundo?"
result = nlp(question=question, context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
# Answer: 'dezembro de 2019', score: 0.5087, start: 290, end: 306
```
## How to use the model... with the Auto classes
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("pierreguillou/bert-large-cased-squad-v1.1-portuguese")
model = AutoModelForQuestionAnswering.from_pretrained("pierreguillou/bert-large-cased-squad-v1.1-portuguese")
```
Or just clone the model repo:
```bash
git lfs install
git clone https://huggingface.co/pierreguillou/bert-large-cased-squad-v1.1-portuguese
# if you want to clone without large files – just their pointers
# prepend your git clone with the following env var:
GIT_LFS_SKIP_SMUDGE=1
```
## Limitations and bias
The training data used for this model comes from the Portuguese SQuAD dataset. It may contain a lot of unfiltered content, which is far from neutral, as well as biases.
## Author
Portuguese BERT large cased QA (Question Answering), finetuned on SQUAD v1.1 was trained and evaluated by [Pierre GUILLOU](https://www.linkedin.com/in/pierreguillou/) thanks to the Open Source code, platforms and advice of many organizations ([link to the list](https://medium.com/@pierre_guillou/nlp-como-treinar-um-modelo-de-question-answering-em-qualquer-linguagem-baseado-no-bert-large-1c899262dd96#c2f5)). In particular: [Hugging Face](https://huggingface.co/), [Neuralmind.ai](https://neuralmind.ai/), [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/) and [AI Lab](https://ailab.unb.br/).
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{pierreguillou2021bertlargecasedsquadv11portuguese,
title={Portuguese BERT large cased QA (Question Answering), finetuned on SQUAD v1.1},
author={Pierre Guillou},
year={2021}
}
``` | [
"NAMED_ENTITY_RECOGNITION",
"QUESTION_ANSWERING",
"TEXTUAL_ENTAILMENT"
] | TBD |
kunalr63/my_awesome_model | kunalr63 | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,681,650,033,000 | 2023-04-16T13:33:32 | 14 | 0 | ---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: my_awesome_model
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- type: accuracy
value: 0.92768
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1887
- Accuracy: 0.9277
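A minimal inference sketch (not part of the auto-generated card; unless the config renames them, the default LABEL_0/LABEL_1 outputs correspond to negative/positive for this IMDB fine-tune):
```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="kunalr63/my_awesome_model")
print(sentiment("This movie was an absolute delight from start to finish."))
```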
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2233 | 1.0 | 1563 | 0.1887 | 0.9277 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
gaudi/opus-mt-fr-ht-ctranslate2 | gaudi | translation | [
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,721,663,856,000 | 2024-10-19T04:26:33 | 9 | 0 | ---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-fr-ht)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-fr-ht).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed on our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-fr-ht --output_dir ./ctranslate2/opus-mt-fr-ht-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-fr-ht-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-fr-ht-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-fr-ht-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-fr-ht) by Helsinki-NLP.
| [
"TRANSLATION"
] | Non_BioNLP |
RichardErkhov/EmergentMethods_-_Phi-3-mini-128k-instruct-graph-4bits | RichardErkhov | null | [
"safetensors",
"phi3",
"custom_code",
"4-bit",
"bitsandbytes",
"region:us"
] | 1,737,190,117,000 | 2025-01-18T08:50:48 | 29 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Phi-3-mini-128k-instruct-graph - bnb 4bits
- Model creator: https://huggingface.co/EmergentMethods/
- Original model: https://huggingface.co/EmergentMethods/Phi-3-mini-128k-instruct-graph/
Original model description:
---
license: cc-by-nc-sa-4.0
datasets:
- EmergentMethods/AskNews-GraphER-v0
tags:
- entity-relationship-extraction
- news-analysis
language:
- en
pipeline_tag: text-generation
---
# Model Card for Phi-3-mini-128k-instruct-graph
This model is a fine-tuned version of Microsoft's Phi-3-mini-128k-instruct, specifically tailored for entity relationship extraction from general text data. It aims to match the quality and accuracy of GPT-4 in generating entity relationship graphs while offering improved efficiency for large-scale processing.
## Model Details
- **Developed by:** [Emergent Methods](https://emergentmethods.ai/)
- **Funded by:** [Emergent Methods](https://emergentmethods.ai/)
- **Shared by:** [Emergent Methods](https://emergentmethods.ai/)
- **Model type:** microsoft/phi-3-mini-128k-instruct (fine-tuned)
- **Language(s):** English
- **License:** Creative Commons Attribution Non Commercial Share Alike 4.0
- **Finetuned from model:** [microsoft/phi-3-mini-128k-instruct](https://huggingface.co/microsoft/phi-3-mini-128k-instruct)
For more information, see our blog post:
📰 [Blog](https://emergentmethods.medium.com/outperforming-claude-3-5-sonnet-with-phi-3-mini-4k-for-graph-entity-relationship-extraction-tasks-7c8f6c1ebd79) <br>
## Uses
This model is designed for generating structured JSON data representing entity relationships in general text data. It can be used for:
- Enhanced information retrieval in various text databases
- Exploration of temporal relationships and evolving narratives across different types of documents
- Advanced predictive modeling for trend analysis in diverse textual sources
The model is particularly useful for applications requiring high-throughput processing of large volumes of text data, such as content aggregation platforms, research databases, and comprehensive text analysis systems.
## Bias, Risks, and Limitations
Although the goal of the dataset is to reduce bias and improve diversity, it is still biased toward Western languages and countries. This limitation stems from the abilities of Llama 2, which was used for the translations and summary generation. Furthermore, any bias originating in Llama 2's training data will also be present in this dataset, since Llama 2 was used to summarize the open-web articles. Likewise, any biases present in Microsoft Phi-3 will be present in this dataset as well.
## Training Details
- **Training Data:** Over 7,000 stories and updates from AskNews, curated to avoid topic overlap
- **Training Procedure:** Fine-tuned using the Transformers library, SFTTrainer, PEFT, and QLoRA
## Evaluation Results
Compared to GPT-4o (ground truth), Claude Sonnet 3.5, and the base Phi-3 model:
| Metric | Phi-3 Fine-tuned | Claude Sonnet 3.5 | Phi-3 (base) |
|:--------|:------------------|:-------------------|:--------------|
| Nodes Similarity | 0.78 | 0.64 | 0.64 |
| Edges Similarity | 0.49 | 0.41 | 0.30 |
| JSON Consistency | 0.99 | 0.97 | 0.96 |
| JSON Similarity | 0.75 | 0.67 | 0.63 |
## Environmental Impact
- **Hardware Type:** 1x A100 SXM
- **Hours used:** 3 hours
- **Carbon Emitted:** 0.44 kg (According to Machine Learning Impact calculator)
## How to Get Started with the Model (Sample inference code)
These code snippets show how to quickly get started running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"EmergentMethods/Phi-3-mini-128k-instruct-graph",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("EmergentMethods/Phi-3-mini-128k-instruct-graph")
messages = [
{"role": "system", "content": """
A chat between a curious user and an artificial intelligence Assistant. The Assistant is an expert at identifying entities and relationships in text. The Assistant responds in JSON output only.
The User provides text in the format:
-------Text begin-------
<User provided text>
-------Text end-------
The Assistant follows the following steps before replying to the User:
1. **identify the most important entities** The Assistant identifies the most important entities in the text. These entities are listed in the JSON output under the key "nodes", they follow the structure of a list of dictionaries where each dict is:
"nodes":[{"id": <entity N>, "type": <type>, "detailed_type": <detailed type>}, ...]
where "type": <type> is a broad categorization of the entity. "detailed type": <detailed_type> is a very descriptive categorization of the entity.
2. **determine relationships** The Assistant uses the text between -------Text begin------- and -------Text end------- to determine the relationships between the entities identified in the "nodes" list defined above. These relationships are called "edges" and they follow the structure of:
"edges":[{"from": <entity 1>, "to": <entity 2>, "label": <relationship>}, ...]
The <entity N> must correspond to the "id" of an entity in the "nodes" list.
The Assistant never repeats the same node twice. The Assistant never repeats the same edge twice.
The Assistant responds to the User in JSON only, according to the following JSON schema:
{"type":"object","properties":{"nodes":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string"},"type":{"type":"string"},"detailed_type":{"type":"string"}},"required":["id","type","detailed_type"],"additionalProperties":false}},"edges":{"type":"array","items":{"type":"object","properties":{"from":{"type":"string"},"to":{"type":"string"},"label":{"type":"string"}},"required":["from","to","label"],"additionalProperties":false}}},"required":["nodes","edges"],"additionalProperties":false}
"""},
{"role": "user", "content": """
-------Text begin-------
OpenAI is an American artificial intelligence (AI) research organization founded in December 2015 and headquartered in San Francisco, California. Its mission is to develop "safe and beneficial" artificial general intelligence, which it defines as "highly autonomous systems that outperform humans at most economically valuable work".[4] As a leading organization in the ongoing AI boom,[5] OpenAI is known for the GPT family of large language models, the DALL-E series of text-to-image models, and a text-to-video model named Sora.[6][7] Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI.
-------Text end-------
"""}
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
# Output:
# {
# "nodes": [
# {
# "id": "OpenAI",
# "type": "organization",
# "detailed_type": "ai research organization"
# },
# {
# "id": "GPT family",
# "type": "technology",
# "detailed_type": "large language models"
# },
# {
# "id": "DALL-E series",
# "type": "technology",
# "detailed_type": "text-to-image models"
# },
# {
# "id": "Sora",
# "type": "technology",
# "detailed_type": "text-to-video model"
# },
# {
# "id": "ChatGPT",
# "type": "technology",
# "detailed_type": "generative ai"
# },
# {
# "id": "San Francisco",
# "type": "location",
# "detailed_type": "city"
# },
# {
# "id": "California",
# "type": "location",
# "detailed_type": "state"
# },
# {
# "id": "December 2015",
# "type": "date",
# "detailed_type": "foundation date"
# },
# {
# "id": "November 2022",
# "type": "date",
# "detailed_type": "release date"
# }
# ],
# "edges": [
# {
# "from": "OpenAI",
# "to": "San Francisco",
# "label": "headquartered in"
# },
# {
# "from": "San Francisco",
# "to": "California",
# "label": "located in"
# },
# {
# "from": "OpenAI",
# "to": "December 2015",
# "label": "founded in"
# },
# {
# "from": "OpenAI",
# "to": "GPT family",
# "label": "developed"
# },
# {
# "from": "OpenAI",
# "to": "DALL-E series",
# "label": "developed"
# },
# {
# "from": "OpenAI",
# "to": "Sora",
# "label": "developed"
# },
# {
# "from": "OpenAI",
# "to": "ChatGPT",
# "label": "released"
# },
# {
# "from": "ChatGPT",
# "to": "November 2022",
# "label": "released in"
# }
# ]
# }
```
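Since the model replies with a JSON graph, a natural next step is to load it into a graph library. A minimal sketch using `networkx`, assuming the schema from the system prompt and that `output` holds the pipeline result from the snippet above:
```python
# Hypothetical post-processing of the model's JSON output into a graph.
import json

import networkx as nx

graph_json = json.loads(output[0]["generated_text"])

g = nx.DiGraph()
for node in graph_json["nodes"]:
    g.add_node(node["id"], type=node["type"], detailed_type=node["detailed_type"])
for edge in graph_json["edges"]:
    g.add_edge(edge["from"], edge["to"], label=edge["label"])

print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```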
## Ethical Considerations
Users should be aware that this model is designed for entity relationship extraction from general text data and may not be suitable for other domains without further fine-tuning. The model's outputs should be reviewed and validated, especially when used in applications that may impact decision-making or public information dissemination.
| [
"TRANSLATION"
] | Non_BioNLP |
HusseinEid/bert-finetuned-ner | HusseinEid | token-classification | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"en",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,716,045,407,000 | 2024-05-18T15:35:40 | 9 | 0 | ---
base_model: bert-base-cased
datasets:
- conll2003
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-ner
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- type: precision
value: 0.9342824035755669
name: Precision
- type: recall
value: 0.9498485358465163
name: Recall
- type: f1
value: 0.9420011683217892
name: F1
- type: accuracy
value: 0.9861217401542356
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0636
- Precision: 0.9343
- Recall: 0.9498
- F1: 0.9420
- Accuracy: 0.9861
## Model description
This is a model for Named Entity Recognition (NER).
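A minimal inference sketch (assuming the standard CoNLL-2003 label set from this fine-tune):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="HusseinEid/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Hugging Face is based in New York City."))
```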
## Intended uses & limitations
Open source
## Training and evaluation data
The conll2003 dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0757 | 1.0 | 1756 | 0.0638 | 0.9215 | 0.9362 | 0.9288 | 0.9833 |
| 0.0352 | 2.0 | 3512 | 0.0667 | 0.9360 | 0.9482 | 0.9421 | 0.9858 |
| 0.0215 | 3.0 | 5268 | 0.0636 | 0.9343 | 0.9498 | 0.9420 | 0.9861 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 | [
"NAMED_ENTITY_RECOGNITION"
] | Non_BioNLP |
Tasm/autotrain-esdxq-2v2zh | Tasm | text-classification | [
"tensorboard",
"safetensors",
"bert",
"autotrain",
"text-classification",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"region:us"
] | 1,732,036,477,000 | 2024-11-19T17:26:01 | 5 | 0 | ---
base_model: google-bert/bert-base-multilingual-cased
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.0839352235198021
f1: 0.8888888888888888
precision: 1.0
recall: 0.8
auc: 0.8300000000000001
accuracy: 0.9846153846153847
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
ns0911/klue-roberta-base-klue-sts | ns0911 | sentence-similarity | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10501",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,736,728,078,000 | 2025-01-13T00:28:18 | 6 | 0 | ---
base_model: klue/roberta-base
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10501
- loss:CosineSimilarityLoss
widget:
- source_sentence: ์ฐจ๋ก 15๋ถ ๊ฑฐ๋ฆฌ์ ๋ฒ ๋ค์น์ ๋ก๋ง๊ด์ฅ์ ๋์ฐฉํ ์ ์์ต๋๋ค.
sentences:
- ๋ฒ ๋์ค ๋ก๋ง ๊ด์ฅ๊น์ง ์ฐจ๋ก 15๋ถ ๊ฑธ๋ฆฝ๋๋ค.
- ์ ๋ง ๋๋ฌด๋๋ ๊นจ๋ํ๊ณ ํธ์ํ ์์์๋๋ค.
- ์ฒ์์ ์ง์ฐพ๊ธฐ๊ฐ ์กฐ๊ธ ํ๋ค์์ง๋ง ์ง์ด ์๋ ๊ณจ๋ชฉ๊น์ง๋ ๊ฐ๊ธฐ ์ฌ์์!
- source_sentence: ๋ค์๋ฒ ๋๋ง ๋ฐฉ๋ฌธ์ ๊ผญ ๋ค์ ์ค๊ฒ ์ต๋๋ค.
sentences:
- ์ค๋ ๋ ์จ๋ ๋น๊ฐ ์ฌ๊น? ์๋๋ฉด ๋ง์๊น?
- ๋ช๋ช ๋ถ์ผ๊ฐ ์๋๋ผ ์ ์ฐ์๋ถ์ผ๊ฐ ์๊ธฐ ์ํฉ์๋๋ค.
- ๋ค์์ ๋๋ง์ ๊ฐ๋ฉด ๊ผญ ๋์์ฌ๊ฒ์.
- source_sentence: ๊ฒฝ๊ธฐ๋ด๋ฅ์๋ ์ผ๋ง๋ ๋น๊ฐ ์ฌ ์์ ์ด์ผ?
sentences:
- ํนํ ์ํ์ฅ ์๊ตฌ๋ ์ํ์ฅ ์ถ์๊ตฌ๊ฐ ํต์ผ๋์ด ์์ผ๋ฉฐ, ์ํ์ฅ ์ถ์๊ตฌ๋ง๋ค ์ ์๋๋ฐ ๋ฐ์ด ๊ฒ์ฌ๋ฅผ ์ค์ํ๊ณ ์์ต๋๋ค.
- ์ข์ ํ๋๊ฐ ๋ค์ด์ค๋ ๋๋ ๋ค๋ฅธ ๊ฒ ๋ง๊ณ ์ํํฉ์๋ค.
- ์ค๋ ์กฐ์น๋ค์ ์์๊ณต์ธ ๋ฑ์ด ๊ฐ์ฅ ๊ธด๊ธํ๊ฒ ์์ฒญํ๋ ๊ธ์ต ์ง์ ๋์ฑ๋ค์๋๋ค.
- source_sentence: ํ๊ต ์ฑ์ ํ๋ ๋ฉ์ผ๋ก ๋ฐ๋๊ฒ ์ฐํธ๋ณด๋ค ํธํ๊ณ ์ข์๊ฑฐ์ผ.
sentences:
- ํ๊ต ์ฑ์ ํ๋ฅผ ๊ธฐ์กด ์ฐํธ ๋์ ๋ฉ์ผ๋ก ๋ฐ์๋ณด์ธ์.
- ์ฒญ์ฐ๋ฆฌ ๋ด๊ณ ๋ ์ ํฌ 100์ฃผ๋์ ๊ธฐ๋ํฉ๋๋ค
- ์๋ชฌ ์ญ์์ ์ ๊ฑธ์ด์.
- source_sentence: ์ฃผ์ ๋์ฑ์ผ๋ก๋ ์ถ์ฐ์จ ์ ๊ณ ๋ฅผ ์ํ ์ฃผ๊ฑฐยท์ถ์ฐยท๋ณด์ก ๋ฑ ์ขํฉ์ ์ง์๊ณผ ๊ณ ๋ น์ ๊ณ์๊ณ ์ฉ ํ์ฑํ ๋ฐ ๊ณ ๋ น์ ๋ง์ถคํ ์ผ์๋ฆฌ ์ง์, ๋ธ์ธ๋ณต์ง ํ๋ ๋ฑ ๊ณ ๋ นํ๋์ ์ ์ฑ๋ธ๋ ฅ์ ์ง์์ด๋ค.
sentences:
- ์ด์ค โ40๋ ์ผ์๋ฆฌ ๋์ฑโ์ โฒ์ง์ํ๋ จยท๊ต์ก ๋ฐ ์๊ณ๋น ์ง์ โฒ๊ณ ์ฉ ์ง์ โฒ์ฐฝ์์ง์ โฒ์ฐ์ยท์ง์ญ ์ง์ ๋ฑ 40๋ ๋ง์ถคํ ์ง์ ๋ฐฉ์์ด ๋ด๊ธธ ์์ ์ด๋ค.
- ์ฌ๋์ด ๋ง์ด ๋ถ๋น๋๋ ์กฐ๊ธ ๋ถํธํจ์ ๊ฐ์ํด์ผ ํฉ๋๋ค.
- ์์น, ์์ ์ฒญ๊ฒฐ๋, ํธ์คํธ ๋ชจ๋ ์ต๊ณ ์์ด์.
model-index:
- name: SentenceTransformer based on klue/roberta-base
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: Unknown
type: unknown
metrics:
- type: pearson_cosine
value: 0.9618565828583842
name: Pearson Cosine
- type: spearman_cosine
value: 0.9203242816571715
name: Spearman Cosine
---
# SentenceTransformer based on klue/roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [klue/roberta-base](https://huggingface.co/klue/roberta-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [klue/roberta-base](https://huggingface.co/klue/roberta-base) <!-- at revision 02f94ba5e3fcb7e2a58a390b8639b0fac974a8da -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("ns0911/klue-roberta-base-klue-sts")
# Run inference
sentences = [
    '์ฃผ์ ๋์ฑ์ผ๋ก๋ ์ถ์ฐ์จ ์ ๊ณ ๋ฅผ ์ํ ์ฃผ๊ฑฐยท์ถ์ฐยท๋ณด์ก ๋ฑ ์ขํฉ์ ์ง์๊ณผ ๊ณ ๋ น์ ๊ณ์๊ณ ์ฉ ํ์ฑํ ๋ฐ ๊ณ ๋ น์ ๋ง์ถคํ ์ผ์๋ฆฌ ์ง์, ๋ธ์ธ๋ณต์ง ํ๋ ๋ฑ ๊ณ ๋ นํ๋์ ์ ์ฑ๋ธ๋ ฅ์ ์ง์์ด๋ค.',
    '์ด์ค โ40๋ ์ผ์๋ฆฌ ๋์ฑโ์ โฒ์ง์ํ๋ จยท๊ต์ก ๋ฐ ์๊ณ๋น ์ง์ โฒ๊ณ ์ฉ ์ง์ โฒ์ฐฝ์์ง์ โฒ์ฐ์ยท์ง์ญ ์ง์ ๋ฑ 40๋ ๋ง์ถคํ ์ง์ ๋ฐฉ์์ด ๋ด๊ธธ ์์ ์ด๋ค.',
'์ฌ๋์ด ๋ง์ด ๋ถ๋น๋๋ ์กฐ๊ธ ๋ถํธํจ์ ๊ฐ์ํด์ผ ํฉ๋๋ค.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.9619 |
| **spearman_cosine** | **0.9203** |
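For reference, a minimal sketch of how this metric could be reproduced with the evaluator; the sentence pairs and gold scores below are placeholders, not the actual evaluation split:

```python
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# Placeholder evaluation data: parallel sentence lists with gold
# similarity scores in [0, 1].
eval_sentences1 = ["first sentence of each pair"]
eval_sentences2 = ["second sentence of each pair"]
eval_scores = [0.7]

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=eval_sentences1,
    sentences2=eval_sentences2,
    scores=eval_scores,
)
results = evaluator(model)  # includes Pearson and Spearman cosine scores
```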
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 10,501 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 19.9 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 19.25 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.45</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:----------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------|
| <code>๊ทธ๋ฆฌ๊ณ ๋๋ง์ ๋ฒํ๊ฐ์ธ ์๋จผ์ ์ฆ๊ธฐ๊ธฐ์ ์์น๋ ๋๋ฌด ์ข์์ต๋๋ค.</code> | <code>๊ทธ๋ฆฌ๊ณ ๋๋ง ์๋ด ์ค์ฌ๊ฐ์ธ ์๋ชฌ์ ์ฆ๊ธฐ๊ธฐ์ ์ข์ ์ฅ์์์ต๋๋ค.</code> | <code>0.7</code> |
| <code>๋ ์ต๊ทผ 1์ฃผ์ผ ๊ฐ ํด์ธ ์ ์ํ์ง์๋ 140๋ช, ์ผํ๊ท 20๋ช์ผ๋ก ์ ์ฃผ ๋๋น ์ผํ๊ท 2๋ช๋์๋ค.</code> | <code>๊ฒ๋ค๊ฐ, ํด์ธ๋ก์ ํ์ธ๋ ์ ์์ ์๋ ์ ์ฃผ์ ๋นํด ํ๋ฃจ ํ๊ท 2๊ฐ ์ฆ๊ฐํ์ฌ 140๊ฐ ๊ทธ๋ฆฌ๊ณ ํ๋ฃจ ํ๊ท 20๊ฐ๊ฐ ๋์์ต๋๋ค.</code> | <code>0.58</code> |
| <code>ํ๊ตญ์ธ์ผ๋ก์ ์ ๊ฒฝ์ฐ์ด๋ ์ ์ ๊ฒ ์ต๋๋ค.</code> | <code>์ ํฌ ๋ฐฉ์ ๊ฒฝ์ฐ ๋ฐ๋ฅ์ด ์๊ทธ๋ ๊ฑฐ๋ฆฌ๋ ๋ถ์ค์ด ์กฐ๊ธ๋ ์ ๊ฒฝ์ฐ์ด๋ ๋ถ๋ถ์ด์์ต๋๋ค.</code> | <code>0.16</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
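As an illustration, a minimal fine-tuning sketch under these settings; the training pair below is a placeholder for the 10,501 (sentence_0, sentence_1, label) samples described above:

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Placeholder pair; labels are gold similarity scores in [0, 1].
train_examples = [InputExample(texts=["sentence A", "sentence B"], label=0.7)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

model = SentenceTransformer("klue/roberta-base")  # wrapped with mean pooling
train_loss = losses.CosineSimilarityLoss(model)   # MSE between cosine similarity and label
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=4)
```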
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 4
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | spearman_cosine |
|:------:|:----:|:-------------:|:---------------:|
| 0.7610 | 500 | 0.0277 | - |
| 1.0 | 657 | - | 0.9102 |
| 1.5221 | 1000 | 0.0082 | 0.9137 |
| 2.0 | 1314 | - | 0.9174 |
| 2.2831 | 1500 | 0.0052 | - |
| 3.0 | 1971 | - | 0.9199 |
| 3.0441 | 2000 | 0.0034 | 0.9192 |
| 3.8052 | 2500 | 0.0026 | - |
| 4.0 | 2628 | - | 0.9203 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | Non_BioNLP |
fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184 | fine-tuned | feature-extraction | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"custom_code",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,714,663,005,000 | 2024-05-02T15:17:00 | 6 | 0 | ---
{}
---
# fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184
## Model Description
fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184 is a fine-tuned version of jinaai/jina-embeddings-v2-base-en designed for a specific domain.
## Use Case
This model is designed to support various applications in natural language processing and understanding.
## Associated Dataset
The dataset for this model can be found [**here**](https://huggingface.co/datasets/fine-tuned/fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184).
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from transformers import AutoModel, AutoTokenizer
llm_name = "fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184"
tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModel.from_pretrained(llm_name, trust_remote_code=True)
tokens = tokenizer("Your text here", return_tensors="pt")
embedding = model(**tokens)
```
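The snippet above returns token-level hidden states. To obtain a single sentence embedding, one common approach (an assumption for illustration, not an official recipe for this checkpoint) is to mean-pool the last hidden state over the attention mask:

```python
# `embedding` and `tokens` come from the snippet above; the mean-pooling
# strategy here is an assumption for illustration.
last_hidden = embedding.last_hidden_state              # (batch, seq_len, dim)
mask = tokens["attention_mask"].unsqueeze(-1).float()  # (batch, seq_len, 1)
sentence_embedding = (last_hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)                        # (batch, dim)
```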
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
jeff-RQ/new-test-model | jeff-RQ | image-to-text | [
"transformers",
"pytorch",
"blip-2",
"visual-question-answering",
"vision",
"image-to-text",
"image-captioning",
"en",
"arxiv:2301.12597",
"license:mit",
"endpoints_compatible",
"region:us"
] | 1,688,482,327,000 | 2023-07-05T15:01:24 | 144 | 0 | ---
language: en
license: mit
pipeline_tag: image-to-text
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
duplicated_from: Salesforce/blip2-opt-2.7b
---
# BLIP-2, OPT-2.7b, pre-trained only
BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.
The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.
The goal for the model is simply to predict the next text token, given the query embeddings and the previous text.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model
## Direct Use and Downstream Use
You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.
## Bias, Risks, Limitations, and Ethical Considerations
BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card.
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
>
BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they're being deployed within.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
##### In half precision (`float16`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
##### In 8-bit precision (`int8`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details> | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
irusl/05newa1 | irusl | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"merges",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"base_model:finetune:NousResearch/Hermes-2-Pro-Llama-3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,721,034,106,000 | 2024-07-15T09:04:58 | 6 | 0 | ---
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
datasets:
- teknium/OpenHermes-2.5
language:
- en
license: apache-2.0
tags:
- Llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- merges
widget:
- example_title: Hermes 2 Pro Llama-3 Instruct Merge
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence,
here to teach and assist me.
- role: user
content: Write a short story about Goku discovering kirby has teamed up with Majin
Buu to destroy the world.
model-index:
- name: Hermes-2-Pro-Llama-3-Instruct-8B-Merge
results: []
---
# Hermes-2 Θ Llama-3 8B

## Model Description
Hermes-2 Θ (Theta) is the first experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard at [Arcee](https://www.arcee.ai/), the team behind MergeKit.
Hermes-2 Θ is a merged and then further RLHF'ed version of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model, forming a new model that combines the best of both worlds.
## Example Outputs
### Create New Mythos:

### Chat with a Meta-Cognitive Entity

### Ask for a structured JSON output:

# Prompt Format
Hermes 2 Θ uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
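For example, a minimal sketch (the `max_new_tokens` value is an illustrative assumption):

```python
# add_generation_prompt=True appends "<|im_start|>assistant\n" so the model
# completes an assistant turn rather than continuing the user message.
gen_input = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
)
output_ids = model.generate(gen_input, max_new_tokens=256)
```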
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions.
You should use the system role with this message, followed by a function signature JSON, as this example shows:
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
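As an illustration only (the official parsing utilities live in the repository linked above), a minimal way to extract the JSON payload could look like this:

```python
import json
import re

def parse_tool_calls(text: str) -> list[dict]:
    """Extract the JSON payloads from <tool_call>...</tool_call> spans."""
    pattern = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)
    return [json.loads(match) for match in pattern.findall(text)]

generated_text = '<tool_call>\n{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}\n</tool_call>'
print(parse_tool_calls(generated_text))
# [{'arguments': {'symbol': 'TSLA'}, 'name': 'get_stock_fundamentals'}]
```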
Once you parse the tool call, call the API to get the returned values, and pass them back in as a new role, `tool`, like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, it should follow the format of that JSON to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
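For instance, a hypothetical schema built from a pydantic model (the class and its fields are illustrative, not from the training setup):

```python
import json
from pydantic import BaseModel

class Character(BaseModel):
    name: str
    age: int
    abilities: list[str]

# model_json_schema() is the pydantic v2 API; use schema_json() on v1.
schema = json.dumps(Character.model_json_schema(), indent=2)
system_prompt = (
    "You are a helpful assistant that answers in JSON. "
    f"Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema>"
)
```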
# Benchmarks

## GPT4All:
```
|    Task     |Version| Metric |Value |   |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge|      0|acc     |0.5529|±  |0.0145|
|             |       |acc_norm|0.5870|±  |0.0144|
|arc_easy     |      0|acc     |0.8371|±  |0.0076|
|             |       |acc_norm|0.8144|±  |0.0080|
|boolq        |      1|acc     |0.8599|±  |0.0061|
|hellaswag    |      0|acc     |0.6133|±  |0.0049|
|             |       |acc_norm|0.7989|±  |0.0040|
|openbookqa   |      0|acc     |0.3940|±  |0.0219|
|             |       |acc_norm|0.4680|±  |0.0223|
|piqa         |      0|acc     |0.8063|±  |0.0092|
|             |       |acc_norm|0.8156|±  |0.0090|
|winogrande   |      0|acc     |0.7372|±  |0.0124|
```
Average: 72.59
## AGIEval:
```
|             Task             |Version| Metric |Value |   |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat              |      0|acc     |0.2441|±  |0.0270|
|                              |       |acc_norm|0.2441|±  |0.0270|
|agieval_logiqa_en             |      0|acc     |0.3687|±  |0.0189|
|                              |       |acc_norm|0.3840|±  |0.0191|
|agieval_lsat_ar               |      0|acc     |0.2304|±  |0.0278|
|                              |       |acc_norm|0.2174|±  |0.0273|
|agieval_lsat_lr               |      0|acc     |0.5471|±  |0.0221|
|                              |       |acc_norm|0.5373|±  |0.0221|
|agieval_lsat_rc               |      0|acc     |0.6617|±  |0.0289|
|                              |       |acc_norm|0.6357|±  |0.0294|
|agieval_sat_en                |      0|acc     |0.7670|±  |0.0295|
|                              |       |acc_norm|0.7379|±  |0.0307|
|agieval_sat_en_without_passage|      0|acc     |0.4417|±  |0.0347|
|                              |       |acc_norm|0.4223|±  |0.0345|
|agieval_sat_math              |      0|acc     |0.4000|±  |0.0331|
|                              |       |acc_norm|0.3455|±  |0.0321|
```
Average: 44.05
## BigBench:
```
|                      Task                      |Version|       Metric        |Value |   |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement                       |      0|multiple_choice_grade|0.6000|±  |0.0356|
|bigbench_date_understanding                     |      0|multiple_choice_grade|0.6585|±  |0.0247|
|bigbench_disambiguation_qa                      |      0|multiple_choice_grade|0.3178|±  |0.0290|
|bigbench_geometric_shapes                       |      0|multiple_choice_grade|0.2340|±  |0.0224|
|                                                |       |exact_str_match      |0.0000|±  |0.0000|
|bigbench_logical_deduction_five_objects         |      0|multiple_choice_grade|0.2980|±  |0.0205|
|bigbench_logical_deduction_seven_objects        |      0|multiple_choice_grade|0.2057|±  |0.0153|
|bigbench_logical_deduction_three_objects        |      0|multiple_choice_grade|0.5367|±  |0.0288|
|bigbench_movie_recommendation                   |      0|multiple_choice_grade|0.4040|±  |0.0220|
|bigbench_navigate                               |      0|multiple_choice_grade|0.4970|±  |0.0158|
|bigbench_reasoning_about_colored_objects        |      0|multiple_choice_grade|0.7075|±  |0.0102|
|bigbench_ruin_names                             |      0|multiple_choice_grade|0.4821|±  |0.0236|
|bigbench_salient_translation_error_detection    |      0|multiple_choice_grade|0.2295|±  |0.0133|
|bigbench_snarks                                 |      0|multiple_choice_grade|0.6906|±  |0.0345|
|bigbench_sports_understanding                   |      0|multiple_choice_grade|0.5375|±  |0.0159|
|bigbench_temporal_sequences                     |      0|multiple_choice_grade|0.6270|±  |0.0153|
|bigbench_tracking_shuffled_objects_five_objects |      0|multiple_choice_grade|0.2216|±  |0.0118|
|bigbench_tracking_shuffled_objects_seven_objects|      0|multiple_choice_grade|0.1594|±  |0.0088|
|bigbench_tracking_shuffled_objects_three_objects|      0|multiple_choice_grade|0.5367|±  |0.0288|
```
Average: 44.13
**IFEval**: 72.64
**MT_Bench**: Turn 1 - 8.3875, Turn 2 - 8.00625, Average - 8.196875
# Inference Code
Here is example code using HuggingFace Transformers to run inference with the model (note: in 4bit, it will require around 5GB of VRAM)
Note: To use function calling, you should see the github repo above.
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-8B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"NousResearch/Hermes-2-Theta-Llama-3-8B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF
# How to cite:
```bibtex
@misc{Hermes-2-Theta-Llama-3-8B,
url={https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B},
title={Hermes-2-Theta-Llama-3-8B},
author={"Teknium", Charles Goddard, "interstellarninja", "theemozilla", "karan4d", "huemin_art"}
}
``` | [
"TRANSLATION"
] | Non_BioNLP |
muhtasham/finetuned-mlm_mini | muhtasham | text-classification | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,670,031,216,000 | 2022-12-03T01:52:06 | 11 | 0 | ---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuned-mlm_mini
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- type: accuracy
value: 0.86176
name: Accuracy
- type: f1
value: 0.925747679614988
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-mlm_mini
This model is a fine-tuned version of [muhtasham/bert-mini-mlm-finetuned-emotion](https://huggingface.co/muhtasham/bert-mini-mlm-finetuned-emotion) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6338
- Accuracy: 0.8618
- F1: 0.9257
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3398 | 2.55 | 500 | 0.2366 | 0.9032 | 0.9491 |
| 0.2028 | 5.1 | 1000 | 0.2750 | 0.8956 | 0.9449 |
| 0.1382 | 7.65 | 1500 | 0.2798 | 0.9034 | 0.9492 |
| 0.0899 | 10.2 | 2000 | 0.4193 | 0.882 | 0.9373 |
| 0.0624 | 12.76 | 2500 | 0.5203 | 0.864 | 0.9270 |
| 0.0417 | 15.31 | 3000 | 0.6338 | 0.8618 | 0.9257 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
dascim/greekbart | dascim | fill-mask | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"summarization",
"bart",
"fill-mask",
"gr",
"arxiv:2304.00869",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,728,907,428,000 | 2024-10-15T07:49:37 | 20 | 0 | ---
language:
- gr
library_name: transformers
license: mit
pipeline_tag: fill-mask
tags:
- summarization
- bart
---
# GreekBART: The First Pretrained Greek Sequence-to-Sequence Model
## Introduction
GreekBART is a Greek sequence-to-sequence pretrained model based on [BART](https://huggingface.co/facebook/bart-large).
GreekBART is pretrained by learning to reconstruct a corrupted input sentence. A corpus of 76.9GB of Greek raw text is used to carry out the pretraining.
Unlike the already existing BERT-based Greek language model (GreekBERT), GreekBART is particularly well-suited for generative tasks (such as abstractive summarization), since not only its encoder but also its decoder is pretrained.
In addition to the base GreekBART, pretrained from scratch on the reconstruction task, we also finetune it on three tasks: `greekbart-news24-abstract`, which generates an abstract given a Greek news article; `greekbart-news24-title`, which generates a title given a Greek news article; and `greekbart-sentiment-classification`, finetuned on a binary sentiment classification task.
| Model | Architecture | #layers | #params |
| ------------- |:-------------:| :-----:|:-----:|
| [GreekBART](https://huggingface.co/dascim/greekbart) | BASE | 12 | 165M |
| [GreekBART Abstract](https://huggingface.co/dascim/greekbart-news24-abstract) | BASE | 12 | 165M |
| [GreekBART Title](https://huggingface.co/dascim/greekbart-news24-title) | BASE | 12 | 165M |
| [GreekBART Sentiment Classification](https://huggingface.co/dascim/greekbart-sentiment-classification) | BASE | 12 | 165M |
<br>
paper: https://arxiv.org/pdf/2304.00869 \
github: https://github.com/iakovosevdaimon/GreekBART
## Usage
### Mask Prediction
```python
from transformers import pipeline
greekbart_fill_mask = pipeline("fill-mask", model="dascim/greekbart", tokenizer="dascim/greekbart")
results = greekbart_fill_mask("ฮ ฯฯฯฯฮตฯฮฟฯฯฮฑ ฯฮทฯ ฮฮปฮปฮฌฮดฮฑฯ ฮตฮฏฮฝฮฑฮน ฮท <mask>")
results[0]
# {'score': 0.597200870513916, 'token': 7062, 'token_str': 'ฮฮธฮฎฮฝฮฑ', 'sequence': 'ฮ ฯฯฯฯฮตฯฮฟฯฯฮฑ ฯฮทฯ ฮฮปฮปฮฌฮดฮฑฯ ฮตฮฏฮฝฮฑฮน ฮท ฮฮธฮฎฮฝฮฑ'},
### Abstract Generation
```python
text_sentence = 'ฮฃฯฮทฮฝ ฮบฮฑฯฮฌฮธฮตฯฮท ฮฝฮฟฯฮทฮปฮตฯฯฯฮนฮฑฯ ฯฮฟฯ ฮฮฑฯฮฑฮผฮฑฮฝฮดฮฌฮฝฮตฮนฮฟฯ ฮฮฟฯฮฟฮบฮฟฮผฮตฮฏฮฟฯ ฮ ฮฌฯฯฮฑฯ ฮ ฮฑฮฝฮฑฮณฮนฯฯฮฑฯ ฮคฯฮตฮฝฯฮฟฯฯฮฟฯ, ฮท ฮฟฯฮฟฮฏฮฑ ฮตฯฮณฮฑฮถฯฯฮฑฮฝ ฯฯฮฑฮฝ ฮตฮนฯฮฎฯฮธฮท ฯฯฮนฯ 8 ฮฯฯฮนฮปฮฏฮฟฯ 2021 ฮท ฮคฮถฯฯฯฮถฮฏฮฝฮฑ, ฯฯฮฟฯฯฯฮทฯฮต ฮท ฮดฮนฮฑฮดฮนฮบฮฑฯฮฏฮฑ ฮตฮฝฯฯฮนฮฟฮฝ ฯฮฟฯ ฮฮฮ ฯฮฟฯ ฮดฮนฮบฮฌฮถฮตฮน ฯฮท ฮกฮฟฯฮปฮฑ ฮ ฮนฯฯฮนฯฮฏฮณฮบฮฟฯ. ฮ ฮฝฮฟฯฮทฮปฮตฯฯฯฮนฮฑ ฮบฮฑฯฮญฮธฮตฯฮต ฯฯฯ ฮบฮฑฯฮฌ ฯฮทฮฝ ฮตฮนฯฮฑฮณฯฮณฮฎ ฯฮฟฯ ฯฮฑฮนฮดฮนฮฟฯ "ฮผฮฟฯ ฮฑฮฝฮญฯฮตฯฮฑฮฝ ฯฯฯ ฮตฮฏฮฝฮฑฮน ฮญฮฝฮฑ ฯฮฑฮนฮดฮฌฮบฮน ฯฮฟฯ ฮญฯฮตฮน ฯฮฌฯฮตฮน ฯฮฑ ฮฑฮดฮตฮปฯฮฌฮบฮนฮฑ ฯฮฟฯ ฮบฮฑฮน ฯฯฮน ฮตฮฏฮฝฮฑฮน ฮนฮดฮนฮฑฮฏฯฮตฯฮท ฯฮตฯฮฏฯฯฯฯฮท" ฮบฮฑฮน ฮตฮพฮนฯฯฯฯฮทฯฮต ฯฮฑ ฮณฮตฮณฮฟฮฝฯฯฮฑ ฯฮทฯ ฮทฮผฮญฯฮฑฯ ฮตฮนฯฮฑฮณฯฮณฮฎฯ ฮบฮฑฮน ฯฮทฯ ฮตฯฮฟฮผฮญฮฝฮทฯ ฯฮฟฯ ฮท ฮฏฮดฮนฮฑ ฮตฮฏฯฮต ฮฒฮฌฯฮดฮนฮฑ ฯฯฮทฮฝ ฯฮฑฮนฮดฮนฮฑฯฯฮนฮบฮฎ ฮบฮปฮนฮฝฮนฮบฮฎ.'
from transformers import (
AutoTokenizer,
AutoModelForSeq2SeqLM
)
tokenizer = AutoTokenizer.from_pretrained("dascim/greekbart-news24-abstract")
model = AutoModelForSeq2SeqLM.from_pretrained("dascim/greekbart-news24-abstract")
input_ids = tokenizer.encode(text_sentence, add_special_tokens=True, return_tensors='pt')
model.eval()
predict = model.generate(input_ids, max_length=100)[0]
tokenizer.decode(predict, skip_special_tokens=True)
#'ฮ ฮฝฮฟฯฮทฮปฮตฯฯฯฮนฮฑ ฮบฮฑฯฮญฮธฮตฯฮต ฯฯฯ ฮบฮฑฯฮฌ ฯฮทฮฝ ฮตฮนฯฮฑฮณฯฮณฮฎ ฯฮฟฯ ฯฮฑฮนฮดฮนฮฟฯ "ฮผฮฟฯ ฮฑฮฝฮญฯฮตฯฮฑฮฝ ฯฯฯ ฮตฮฏฮฝฮฑฮน ฮญฮฝฮฑ ฯฮฑฮนฮดฮฌฮบฮน ฯฮฟฯ ฮญฯฮตฮน ฯฮฌฯฮตฮน ฯฮฑ ฮฑฮดฮตฮปฯฮฌฮบฮนฮฑ ฯฮฟฯ ฮบฮฑฮน ฯฯฮน ฮตฮฏฮฝฮฑฮน ฮนฮดฮนฮฑฮฏฯฮตฯฮท ฯฮตฯฮฏฯฯฯฯฮท".'
```
### Title Generation
```python
text_sentence = 'ฮฃฯฮทฮฝ ฮบฮฑฯฮฌฮธฮตฯฮท ฮฝฮฟฯฮทฮปฮตฯฯฯฮนฮฑฯ ฯฮฟฯ ฮฮฑฯฮฑฮผฮฑฮฝฮดฮฌฮฝฮตฮนฮฟฯ ฮฮฟฯฮฟฮบฮฟฮผฮตฮฏฮฟฯ ฮ ฮฌฯฯฮฑฯ ฮ ฮฑฮฝฮฑฮณฮนฯฯฮฑฯ ฮคฯฮตฮฝฯฮฟฯฯฮฟฯ, ฮท ฮฟฯฮฟฮฏฮฑ ฮตฯฮณฮฑฮถฯฯฮฑฮฝ ฯฯฮฑฮฝ ฮตฮนฯฮฎฯฮธฮท ฯฯฮนฯ 8 ฮฯฯฮนฮปฮฏฮฟฯ 2021 ฮท ฮคฮถฯฯฯฮถฮฏฮฝฮฑ, ฯฯฮฟฯฯฯฮทฯฮต ฮท ฮดฮนฮฑฮดฮนฮบฮฑฯฮฏฮฑ ฮตฮฝฯฯฮนฮฟฮฝ ฯฮฟฯ ฮฮฮ ฯฮฟฯ ฮดฮนฮบฮฌฮถฮตฮน ฯฮท ฮกฮฟฯฮปฮฑ ฮ ฮนฯฯฮนฯฮฏฮณฮบฮฟฯ. ฮ ฮฝฮฟฯฮทฮปฮตฯฯฯฮนฮฑ ฮบฮฑฯฮญฮธฮตฯฮต ฯฯฯ ฮบฮฑฯฮฌ ฯฮทฮฝ ฮตฮนฯฮฑฮณฯฮณฮฎ ฯฮฟฯ ฯฮฑฮนฮดฮนฮฟฯ "ฮผฮฟฯ ฮฑฮฝฮญฯฮตฯฮฑฮฝ ฯฯฯ ฮตฮฏฮฝฮฑฮน ฮญฮฝฮฑ ฯฮฑฮนฮดฮฌฮบฮน ฯฮฟฯ ฮญฯฮตฮน ฯฮฌฯฮตฮน ฯฮฑ ฮฑฮดฮตฮปฯฮฌฮบฮนฮฑ ฯฮฟฯ ฮบฮฑฮน ฯฯฮน ฮตฮฏฮฝฮฑฮน ฮนฮดฮนฮฑฮฏฯฮตฯฮท ฯฮตฯฮฏฯฯฯฯฮท" ฮบฮฑฮน ฮตฮพฮนฯฯฯฯฮทฯฮต ฯฮฑ ฮณฮตฮณฮฟฮฝฯฯฮฑ ฯฮทฯ ฮทฮผฮญฯฮฑฯ ฮตฮนฯฮฑฮณฯฮณฮฎฯ ฮบฮฑฮน ฯฮทฯ ฮตฯฮฟฮผฮญฮฝฮทฯ ฯฮฟฯ ฮท ฮฏฮดฮนฮฑ ฮตฮฏฯฮต ฮฒฮฌฯฮดฮนฮฑ ฯฯฮทฮฝ ฯฮฑฮนฮดฮนฮฑฯฯฮนฮบฮฎ ฮบฮปฮนฮฝฮนฮบฮฎ.'
from transformers import (
AutoTokenizer,
AutoModelForSeq2SeqLM
)
tokenizer = AutoTokenizer.from_pretrained("dascim/greekbart-news24-title")
model = AutoModelForSeq2SeqLM.from_pretrained("dascim/greekbart-news24-title")
input_ids = tokenizer.encode(text_sentence, add_special_tokens=True, return_tensors='pt')
model.eval()
predict = model.generate(input_ids, max_length=100)[0]
tokenizer.decode(predict, skip_special_tokens=True)
# 'ฮ ฮฌฯฯฮฑ: ฮฮฑฯฮฌฮธฮตฯฮท ฮฝฮฟฯฮทฮปฮตฯฯฯฮนฮฑฯ ฮณฮนฮฑ ฯฮทฮฝ ฮตฮนฯฮฑฮณฯฮณฮฎ ฯฮทฯ ฮคฮถฯฯฯฮถฮฏฮฝฮฑฯ ฯฯฮฟ ฮฝฮฟฯฮฟฮบฮฟฮผฮตฮฏฮฟ'
```
### Sentiment Prediction
```python
text_sentence = "ฮ ฮตฮปฮปฮทฮฝฮนฮบฯฯ ฯฮฟฮปฮนฯฮนฯฮผฯฯ ฮตฮฏฮฝฮฑฮน ฮญฮฝฮฑฯ ฮฑฯฯ ฯฮฟฯ
ฯ ฯฮนฮฟ ฯฮปฮฟฯฯฮนฮฟฯ
ฯ ฮบฮฑฮน ฮฑฮฝฮฑฮณฮฝฯฯฮนฯฮผฮญฮฝฮฟฯ
ฯ ฯฮฟฮปฮนฯฮนฯฮผฮฟฯฯ."
from transformers import (
AutoTokenizer,
AutoModelForSequenceClassification
)
tokenizer = AutoTokenizer.from_pretrained("dascim/greekbart-sentiment-classification")
model = AutoModelForSequenceClassification.from_pretrained("dascim/greekbart-sentiment-classification")
input_ids = tokenizer.encode(text_sentence, add_special_tokens=True, return_tensors='pt')
model.eval()
predict = model(input_ids)[0]
print("negative" if predict.argmax(dim=-1).item()==1 else "positive")
# positive
```
## Authors
GreekBART was trained and evaluated at École Polytechnique by Iakovos Evdaimon, Hadi Abdine, Christos Xypolopoulos, Stamatis Outsios, Michalis Vazirgiannis and Giorgos Stamou.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{evdaimon-etal-2024-greekbart,
title = "{G}reek{BART}: The First Pretrained {G}reek Sequence-to-Sequence Model",
author = "Evdaimon, Iakovos and
Abdine, Hadi and
Xypolopoulos, Christos and
Outsios, Stamatis and
Vazirgiannis, Michalis and
Stamou, Giorgos",
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.700",
pages = "7949--7962",
}
``` | [
"SUMMARIZATION"
] | Non_BioNLP |
Volavion/bert-base-multilingual-uncased-temperature-cls | Volavion | null | [
"safetensors",
"bert",
"en",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:mit",
"region:us"
] | 1,736,936,870,000 | 2025-01-15T11:01:31 | 18 | 1 | ---
base_model:
- google-bert/bert-base-multilingual-uncased
language:
- en
license: mit
---
# BERT-Based Classification Model for Optimal Temperature Selection
This model leverages a BERT-based classifier to analyze input prompts and identify the most suitable generation temperature, enhancing text generation quality and relevance, as described in our paper on temperature selection.
## Overview
The model classifies input text into six distinct abilities, providing a probability distribution for each:
- **Causal Reasoning**
- **Creativity**
- **In-Context Learning**
- **Instruction Following**
- **Machine Translation**
- **Summarization**
## Features
- **Pre-trained Model**: Uses the multilingual BERT model: `Volavion/bert-base-multilingual-uncased-Temperature-CLS`.
- **Tokenization**: Processes text inputs into numerical formats compatible with the model.
- **Classification Output**: Provides probabilities for each class, allowing precise evaluation of the prompt's capabilities.
## Installation
1. Clone the repository if necessary:
```bash
git clone https://huggingface.co/Volavion/bert-base-multilingual-uncased-temperature-cls
cd bert-base-multilingual-uncased-temperature-cls
```
2. Install the required Python libraries:
```bash
pip install transformers torch numpy
```
## Usage
1. Load the tokenizer and model:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_name = "Volavion/bert-base-multilingual-uncased-Temperature-CLS"
tokenizer = AutoTokenizer.from_pretrained(model_name, do_lower_case=True)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
2. Tokenize your input text:
```python
input_text = "Your input prompt here."
encoded_dict = tokenizer.encode_plus(
input_text,
add_special_tokens=True,
max_length=512,
    padding="max_length",
return_attention_mask=True,
return_tensors="pt"
)
```
3. Perform inference:
```python
import torch
import numpy as np

# Define the target device and move the model once before inference.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

input_ids = encoded_dict["input_ids"].to(device)
attention_mask = encoded_dict["attention_mask"].to(device)
model.eval()
with torch.no_grad():
outputs = model(input_ids, attention_mask=attention_mask)
logits = outputs.logits.cpu().numpy()
probabilities = np.exp(logits - np.max(logits, axis=1, keepdims=True))
probabilities /= np.sum(probabilities, axis=1, keepdims=True)
```
4. Map probabilities to abilities:
```python
ability_mapping = {0: "Causal Reasoning", 1: "Creativity", 2: "In-Context Learning",
3: "Instruction Following", 4: "Machine Translation", 5: "Summarization"}
for prob, ability in zip(probabilities[0], ability_mapping.values()):
print(f"{ability}: {prob*100:.2f}%")
```
## Example Output
```plaintext
Ability Classification Probabilities:
Causal Reasoning: 15.30%
Creativity: 20.45%
In-Context Learning: 18.22%
Instruction Following: 12.78%
Machine Translation: 21.09%
Summarization: 12.16%
```
## Device Compatibility
The model supports GPU acceleration for faster inference. It will automatically detect and utilize a GPU if available; otherwise, it defaults to CPU.
## Contributing
Contributions are welcome! Feel free to fork the repository, create a branch, and submit a pull request.
## License
This project is licensed under the [MIT License](LICENSE).
| [
"TRANSLATION",
"SUMMARIZATION"
] | Non_BioNLP |
r4ghu/distilbert-base-uncased-finetuned-clinc | r4ghu | text-classification | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,694,497,357,000 | 2023-09-13T01:19:35 | 12 | 0 | ---
base_model: distilbert-base-uncased
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- type: accuracy
value: 0.9196774193548387
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7651
- Accuracy: 0.9197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
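For reference, a sketch of `TrainingArguments` matching the values above; the output directory and anything not listed is an assumption:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-clinc",  # assumed path
    learning_rate=2e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```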
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2939 | 1.0 | 318 | 3.2756 | 0.7310 |
| 2.6165 | 2.0 | 636 | 1.8607 | 0.8365 |
| 1.5359 | 3.0 | 954 | 1.1427 | 0.8965 |
| 1.0074 | 4.0 | 1272 | 0.8470 | 0.9132 |
| 0.7906 | 5.0 | 1590 | 0.7651 | 0.9197 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | 1,722,618,843,000 | 2024-08-03T00:32:52 | 25 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Giraffe-13b-32k-v3 - GGUF
- Model creator: https://huggingface.co/abacusai/
- Original model: https://huggingface.co/abacusai/Giraffe-13b-32k-v3/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Giraffe-13b-32k-v3.Q2_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q2_K.gguf) | Q2_K | 4.52GB |
| [Giraffe-13b-32k-v3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.IQ3_XS.gguf) | IQ3_XS | 4.99GB |
| [Giraffe-13b-32k-v3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.IQ3_S.gguf) | IQ3_S | 5.27GB |
| [Giraffe-13b-32k-v3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q3_K_S.gguf) | Q3_K_S | 5.27GB |
| [Giraffe-13b-32k-v3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.IQ3_M.gguf) | IQ3_M | 5.57GB |
| [Giraffe-13b-32k-v3.Q3_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q3_K.gguf) | Q3_K | 5.9GB |
| [Giraffe-13b-32k-v3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q3_K_M.gguf) | Q3_K_M | 5.9GB |
| [Giraffe-13b-32k-v3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q3_K_L.gguf) | Q3_K_L | 6.45GB |
| [Giraffe-13b-32k-v3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.IQ4_XS.gguf) | IQ4_XS | 6.54GB |
| [Giraffe-13b-32k-v3.Q4_0.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q4_0.gguf) | Q4_0 | 6.86GB |
| [Giraffe-13b-32k-v3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.IQ4_NL.gguf) | IQ4_NL | 6.9GB |
| [Giraffe-13b-32k-v3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q4_K_S.gguf) | Q4_K_S | 6.91GB |
| [Giraffe-13b-32k-v3.Q4_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q4_K.gguf) | Q4_K | 7.33GB |
| [Giraffe-13b-32k-v3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q4_K_M.gguf) | Q4_K_M | 7.33GB |
| [Giraffe-13b-32k-v3.Q4_1.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q4_1.gguf) | Q4_1 | 7.61GB |
| [Giraffe-13b-32k-v3.Q5_0.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q5_0.gguf) | Q5_0 | 8.36GB |
| [Giraffe-13b-32k-v3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q5_K_S.gguf) | Q5_K_S | 8.36GB |
| [Giraffe-13b-32k-v3.Q5_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q5_K.gguf) | Q5_K | 8.6GB |
| [Giraffe-13b-32k-v3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q5_K_M.gguf) | Q5_K_M | 8.6GB |
| [Giraffe-13b-32k-v3.Q5_1.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q5_1.gguf) | Q5_1 | 9.1GB |
| [Giraffe-13b-32k-v3.Q6_K.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q6_K.gguf) | Q6_K | 9.95GB |
| [Giraffe-13b-32k-v3.Q8_0.gguf](https://huggingface.co/RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf/blob/main/Giraffe-13b-32k-v3.Q8_0.gguf) | Q8_0 | 12.88GB |
Original model description:
---
datasets:
- togethercomputer/RedPajama-Data-1T
- shahules786/orca-chat
- anon8231489123/ShareGPT_Vicuna_unfiltered
tags:
- llama2
license: llama2
---

# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Llama2 13B finetuned to process longer (32K) contexts using interpolation and then further instruct finetuned with ShareGPT and Orca-Chat.
## Model Details
### Model Description
This is the next version of the model trained and evaluated as part of the experiments described in the repo
[http://github.com/abacusai/Long-Context](https://github.com/abacusai/Long-Context). This version
was trained with a scaling factor of 8 and shows better reasoning and math abilities on state-of-the-art benchmarks.
- **Developed by:** [Abacus.AI](https://abacus.ai)
- **Model type:** Transformer based autoregressive causal language model
- **License:** Llama 2 Community License: https://github.com/facebookresearch/llama/blob/main/LICENSE
- **Finetuned from model:** Llama V2 13B
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [http://github.com/abacusai/Long-Context](https://github.com/abacusai/Long-Context)
### Direct Use
Since the model is instruction-finetuned, it can also be used directly for various prompted tasks. We have tested
it on open-book question answering, using the long context to supply search results.
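As an illustrative sketch only (the prompt layout, retrieval source, and generation settings are assumptions, not the authors' exact setup):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abacusai/Giraffe-13b-32k-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Retrieved passages fill the long (32K-token) context window.
search_results = "\n\n".join(["Passage 1: ...", "Passage 2: ..."])
question = "According to the passages, ...?"
prompt = f"{search_results}\n\nQuestion: {question}\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```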
## Bias, Risks, and Limitations
The model has not been evaluated for safety and is only intended for research and experiments.
| [
"QUESTION_ANSWERING"
] | Non_BioNLP |
gokuls/mobilebert_sa_GLUE_Experiment_data_aug_wnli_128 | gokuls | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,675,440,718,000 | 2023-02-03T16:40:16 | 129 | 0 | ---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: mobilebert_sa_GLUE_Experiment_data_aug_wnli_128
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- type: accuracy
value: 0.14084507042253522
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_data_aug_wnli_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9522
- Accuracy: 0.1408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6679 | 1.0 | 435 | 1.9522 | 0.1408 |
| 0.4943 | 2.0 | 870 | 3.6522 | 0.1127 |
| 0.4405 | 3.0 | 1305 | 4.5118 | 0.1268 |
| 0.4086 | 4.0 | 1740 | 6.0437 | 0.1127 |
| 0.3818 | 5.0 | 2175 | 9.5931 | 0.1127 |
| 0.3492 | 6.0 | 2610 | 6.9665 | 0.1408 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2402.17733",
"endpoints_compatible",
"region:us"
] | 1,715,422,053,000 | 2024-05-11T23:15:22 | 102 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TowerBase-7B-v0.1 - GGUF
- Model creator: https://huggingface.co/Unbabel/
- Original model: https://huggingface.co/Unbabel/TowerBase-7B-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TowerBase-7B-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q2_K.gguf) | Q2_K | 2.36GB |
| [TowerBase-7B-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [TowerBase-7B-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [TowerBase-7B-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [TowerBase-7B-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [TowerBase-7B-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q3_K.gguf) | Q3_K | 3.07GB |
| [TowerBase-7B-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [TowerBase-7B-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [TowerBase-7B-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [TowerBase-7B-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q4_0.gguf) | Q4_0 | 3.56GB |
| [TowerBase-7B-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [TowerBase-7B-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [TowerBase-7B-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q4_K.gguf) | Q4_K | 3.8GB |
| [TowerBase-7B-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [TowerBase-7B-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q4_1.gguf) | Q4_1 | 3.95GB |
| [TowerBase-7B-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q5_0.gguf) | Q5_0 | 4.33GB |
| [TowerBase-7B-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [TowerBase-7B-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q5_K.gguf) | Q5_K | 4.45GB |
| [TowerBase-7B-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [TowerBase-7B-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q5_1.gguf) | Q5_1 | 4.72GB |
| [TowerBase-7B-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf/blob/main/TowerBase-7B-v0.1.Q6_K.gguf) | Q6_K | 5.15GB |
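As a sketch, one of these files can be fetched and run locally with the `llama-cpp-python` bindings (any quantization level from the table can be substituted for the filename):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf",
    filename="TowerBase-7B-v0.1.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048)

# Few-shot translation prompt in the style of the original model card.
out = llm("English: My name is TowerBase.\nPortuguese:", max_tokens=20)
print(out["choices"][0]["text"])
```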
Original model description:
---
license: cc-by-nc-4.0
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
metrics:
- comet
pipeline_tag: translation
model-index:
- name: TowerBase-7B-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 51.02
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 77.68
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 43.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 37.29
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 13.12
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
---
# Model Card for TowerBase-7B-v0.1
## Model Details
### Model Description
TowerBase-7B is a language model that results from continuing the pretraining of Llama 2 on a mix of 20 billion tokens of monolingual data in ten different languages (English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian) and bilingual data. TowerBase-7B-v0.1 is the first model in the series.
The resulting model shows improved performance on the supported languages, while maintaining Llama 2's capabilities on English. It is particularly well-suited for fine-tuning on translation and related tasks: check out [TowerInstruct](https://huggingface.co/Unbabel/TowerInstruct-7B-v0.1).
We will release more details in the upcoming technical report.
- **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay
- **Model type:** A 7B parameter model built on top of Llama 2 by continuing pretraining on multilingual data.
- **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian
- **License:** CC-BY-NC-4.0, Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
## Intended uses & limitations
The model is intended for research purposes in the 10 languages it supports.
The model is able to perform well on translation and related tasks (e.g., APE, GEC) in a few-shot regime.
It can also be fine-tuned to perform these tasks in a zero-shot fashion (see [TowerInstruct](https://huggingface.co/Unbabel/TowerInstruct-7B-v0.1)), as well as on other multilingual tasks.
### Out-of-Scope Use
The model is not guaranteed to perform well for languages other than the 10 languages it supports.
## Bias, Risks, and Limitations
TowerBase-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "Unbabel/TowerBase-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "English: My name is TowerBase.\nPortuguese:"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Training Data
Filtered versions of [mc4](https://huggingface.co/datasets/mc4) and bilingual data from various sources (e.g., [OPUS](https://opus.nlpl.eu/)).
## Citation
```bibtex
@misc{tower_llm_2024,
title={Tower: An Open Multilingual Large Language Model for Translation-Related Tasks},
author={Duarte M. Alves and José Pombal and Nuno M. Guerreiro and Pedro H. Martins and João Alves and Amin Farajian and Ben Peters and Ricardo Rei and Patrick Fernandes and Sweta Agrawal and Pierre Colombo and José G. C. de Souza and André F. T. Martins},
year={2024},
eprint={2402.17733},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| [
"TRANSLATION"
] | Non_BioNLP |
naksu/distilbert-base-uncased-finetuned-sst2 | naksu | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,674,455,631,000 | 2023-01-23T18:15:34 | 114 | 0 | ---
datasets:
- glue
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: sst2
split: train
args: sst2
metrics:
- type: accuracy
value: 0.8623529411764705
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4654
- Accuracy: 0.8624
## Model description
More information needed
## Intended uses & limitations
More information needed
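No usage example is documented; as a sketch, the checkpoint should work with the standard `transformers` text-classification pipeline:
```python
from transformers import pipeline

# Sketch only: assumes the default exported labels (e.g. LABEL_0 / LABEL_1) for this checkpoint.
classifier = pipeline(
    "text-classification",
    model="naksu/distilbert-base-uncased-finetuned-sst2",
)
print(classifier("This movie was surprisingly good."))
```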
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 27 | 0.4003 | 0.8224 |
| No log | 2.0 | 54 | 0.3516 | 0.8459 |
| No log | 3.0 | 81 | 0.4151 | 0.8376 |
| No log | 4.0 | 108 | 0.4419 | 0.8635 |
| No log | 5.0 | 135 | 0.4654 | 0.8624 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-166315 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-166315",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,716,565,023,000 | 2024-05-24T15:37:35 | 9 | 0 | ---
datasets:
- fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-166315
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
custom
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-166315',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
KarelDO/lstm.CEBaB_confounding.observational.absa.5-class.seed_43 | KarelDO | text-classification | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:OpenTable",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,665,721,864,000 | 2022-10-14T04:32:12 | 20 | 0 | ---
datasets:
- OpenTable
language:
- en
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: lstm.CEBaB_confounding.observational.absa.5-class.seed_43
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: OpenTable OPENTABLE-ABSA
type: OpenTable
args: opentable-absa
metrics:
- type: accuracy
value: 0.6611587107303142
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lstm.CEBaB_confounding.observational.absa.5-class.seed_43
This model is a fine-tuned version of [lstm](https://huggingface.co/lstm) on the OpenTable OPENTABLE-ABSA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8687
- Accuracy: 0.6612
- Macro-f1: 0.6553
- Weighted-macro-f1: 0.6627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.5.2
- Tokenizers 0.12.1
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
mini1013/master_cate_top_bt5_4 | mini1013 | text-classification | [
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"model-index",
"region:us"
] | 1,735,482,532,000 | 2024-12-29T14:29:14 | 8 | 0 | ---
base_model: klue/roberta-base
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: '[์์ธ์ด๋] NEW ์ฑํฌ๋ก ์คํจ ๋๋์ธํธ ๋ฆฌํํ
ํ์ด๋ฐ์ด์
SPF30/PA++++ 30ml 130 ์คํ (#M)ํ>๋ฉ์ดํฌ์
>๋ฒ ์ด์ค๋ฉ์ดํฌ์
HMALL > ๋ทฐํฐ > ๋ฉ์ดํฌ์
> ๋ฒ ์ด์ค๋ฉ์ดํฌ์
'
- text: ์ด๋ฎค์ฆ ๋ฉํ ํฝ์ฑ ๋น๊ฑด ์ฟ ์
๋ฆฌํ (3์ข
ํ 1) 02 ๋๋ (#M)ํ>ํ์ฅํ/๋ฏธ์ฉ>๋ฒ ์ด์ค๋ฉ์ดํฌ์
>ํ์ด๋ฐ์ด์
>์ฟ ์
ํ Naverstore
> ํ์ฅํ/๋ฏธ์ฉ > ๋ฒ ์ด์ค๋ฉ์ดํฌ์
> ํ์ด๋ฐ์ด์
> ์ฟ ์
ํ
- text: ์์ค์์ ํ๋ก ํ
์ผ๋ฌ ํ์ด๋ฐ์ด์
๋น ๊ธ๋ก์ฐ 30ml MinSellAmount (#M)ํ์ฅํ/ํฅ์>๋ฒ ์ด์ค๋ฉ์ดํฌ์
>ํ์ด๋ฐ์ด์
Gmarket
> ๋ทฐํฐ > ํ์ฅํ/ํฅ์ > ๋ฒ ์ด์ค๋ฉ์ดํฌ์
> ํ์ด๋ฐ์ด์
- text: (ํ๋๋ฐฑํ์ ) ํฐ ํฌ๋ ๋ทฐํฐ ์
ฐ์ด๋ ์ค ์ผ๋ฃจ๋ฏธ๋ค์ดํธ ์ํํธ ๋๋์ธ์ค ํ์ด๋ฐ์ด์
SPF50/PA++++ 0.4 ๋ก์ฆ (#M)ํ์ฅํ/ํฅ์>๋ฒ ์ด์ค๋ฉ์ดํฌ์
>ํ์ด๋ฐ์ด์
Gmarket > ๋ทฐํฐ > ํ์ฅํ/ํฅ์ > ๋ฒ ์ด์ค๋ฉ์ดํฌ์
> ํ์ด๋ฐ์ด์
- text: '[์ ์๋ฌผ] ๋ง์คํฐํด๋์ค ๋๋์ธํธ ์ฟ ์
(๋ฆฌํํฌํจ)(+์ฝ๋ ํ
๋ฒ ์ด์ค5mlx3๊ฐ)(๊ฐ๋จ์ ) N1์์ด๋ณด๋ฆฌ (#M)11st>๋ฉ์ดํฌ์
>ํ์ด์ค๋ฉ์ดํฌ์
>ํ์ด๋ฐ์ด์
11st > ๋ทฐํฐ > ๋ฉ์ดํฌ์
> ํ์ด์ค๋ฉ์ดํฌ์
> ํ์ด๋ฐ์ด์
'
inference: true
model-index:
- name: SetFit with klue/roberta-base
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.9475307038057129
name: Accuracy
---
# SetFit with klue/roberta-base
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [klue/roberta-base](https://huggingface.co/klue/roberta-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [klue/roberta-base](https://huggingface.co/klue/roberta-base)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 5 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2 | <ul><li>'์์ค์์ ํ๋กํ
์ผ๋ฌ ๋น๊ธ๋ก์ฐ ์คํฑ ํ์ด๋ฐ์ด์
13g 23ํธ๋ฒ ์ด์ง (#M)ํ>ํ์ฅํ/๋ฏธ์ฉ>๋ฒ ์ด์ค๋ฉ์ดํฌ์
>ํ์ด๋ฐ์ด์
>์คํฑํ Naverstore > ํ์ฅํ/๋ฏธ์ฉ > ๋ฒ ์ด์ค๋ฉ์ดํฌ์
> ํ์ด๋ฐ์ด์
> ์คํฑํ'</li><li>'๊ทธ๋ผํ ์์ด์ปค๋ฒ ์คํฑ ํ์ด๋ฐ์ด์
23ํธ ๋ฒ ์ด์ง LotteOn > ๋ทฐํฐ > ๋ฉ์ดํฌ์
> ๋ฒ ์ด์ค๋ฉ์ดํฌ์
> ํ์ด๋ฐ์ด์
LotteOn > ๋ทฐํฐ > ๋ฉ์ดํฌ์
> ๋ฒ ์ด์ค๋ฉ์ดํฌ์
> ํ์ด๋ฐ์ด์
'</li><li>'๋ฐ๋น ๋ธ๋ผ์ด ์คํจ ํ์ด๋ฐ์ด์
์คํฑ-2.5 ์ ์๋ 9g (#M)ํ์ฅํ/๋ฏธ์ฉ>๋ฒ ์ด์ค๋ฉ์ดํฌ์
>ํ์ด๋ฐ์ด์
>ํฌ๋ฆผํ Naverstore > ํ์ฅํ/๋ฏธ์ฉ > ๋ฒ ์ด์ค๋ฉ์ดํฌ์
> ํ์ด๋ฐ์ด์
> ํฌ๋ฆผํ'</li></ul> |
| 1 | <ul><li>'์ ์๋ฌผ ์คํจ ์ธํ
ํค ์ฝ๋ ํ
๋ฒ ์ด์ค 40ml ๊ธ๋ก์ ๋ฒ ์ด์ค (#M)11st>๋ฉ์ดํฌ์
>ํ์ด์ค๋ฉ์ดํฌ์
>๋ฉ์ดํฌ์
๋ฒ ์ด์ค 11st > ๋ทฐํฐ > ๋ฉ์ดํฌ์
> ํ์ด์ค๋ฉ์ดํฌ์
> ๋ฉ์ดํฌ์
๋ฒ ์ด์ค'</li><li>'์์ด์คํ ํผํํธ ์ปค๋ฒ ๋ฉ์ดํฌ์
๋ฒ ์ด์ค 35ml 2ํธ ๋ผ์ดํธํผํ ร 3๊ฐ (#M)์ฟ ํก ํ>๋ทฐํฐ>๋ฉ์ดํฌ์
>๋ฒ ์ด์ค ๋ฉ์ดํฌ์
>๋ฒ ์ด์ค/ํ๋ผ์ด๋จธ Coupang > ๋ทฐํฐ > ๋ฉ์ดํฌ์
> ๋ฒ ์ด์ค ๋ฉ์ดํฌ์
> ๋ฒ ์ด์ค/ํ๋ผ์ด๋จธ'</li><li>'์์ด์คํ ํผํํธ ์ปค๋ฒ ๋ฒ ์ด์ค 35ml 2ํธ-ํผํ (#M)ํ>ํ์ฅํ/๋ฏธ์ฉ>๋ฒ ์ด์ค๋ฉ์ดํฌ์
>๋ฉ์ดํฌ์
๋ฒ ์ด์ค Naverstore > ํ์ฅํ/๋ฏธ์ฉ > ๋ฒ ์ด์ค๋ฉ์ดํฌ์
> ๋ฉ์ดํฌ์
๋ฒ ์ด์ค'</li></ul> |
| 0 | <ul><li>'ํค๋ผ ๊ธ๋ก์ฐ ๋์คํ
ํ์ด๋ฐ์ด์
17C1 ํํ ์์ด๋ณด๋ฆฌ LotteOn > ๋ทฐํฐ > ๋ฉ์ดํฌ์
> ๋ฒ ์ด์ค๋ฉ์ดํฌ์
> ๋ฒ ์ด์ค/ํ๋ผ์ด๋จธ LotteOn > ๋ทฐํฐ > ๋ฉ์ดํฌ์
> ๋ฒ ์ด์ค๋ฉ์ดํฌ์
> ๋ฒ ์ด์ค/ํ๋ผ์ด๋จธ'</li><li>'[์์คํฐ ๋ก๋] ๋๋ธ์จ์ด ํ์ด๋ฐ์ด์
30ml SPF 10/PA++ (+ํ๋ผ์ด๋จธ ์ ํ ) 1W0 ์ ํฌ์ฌ๋ฆฐ ํ>๊ธฐํ ์ธํธ;ํ>๋๋ธ์จ์ด;ํ>๋๋ธ ์จ์ด;ํ์ฅํ/๋ฏธ์ฉ>๋ฒ ์ด์ค๋ฉ์ดํฌ์
>ํ์ด๋ฐ์ด์
>๋ฆฌํด๋ํ;(#M)ํ>์ ์ฒด์ํ Naverstore > ๋ฒ ์ด์ค๋ฉ์ดํฌ์
> ํ์ด๋ฐ์ด์
'</li><li>'์์ค์์ ํ๋กํ
์ผ๋ฌ ํ์ด๋ฐ์ด์
๋น ๊ธ๋ก์ฐ 10ml 4ํธ ๋ฒ ์ด์ง ร 1๊ฐ (#M)์ฟ ํก ํ>๋ทฐํฐ>๋ฉ์ดํฌ์
>๋ฒ ์ด์ค ๋ฉ์ดํฌ์
>ํ์ด๋ฐ์ด์
Coupang > ๋ทฐํฐ > ๋ก๋์ต > ๋ฉ์ดํฌ์
> ๋ฒ ์ด์ค ๋ฉ์ดํฌ์
> ํ์ด๋ฐ์ด์
'</li></ul> |
| 4 | <ul><li>'์์ธ์ด๋ ์คํฌ์ธ ์ปค๋ฒ ํ์ด๋ฐ์ด์
20g S101 (#M)ํ>ํ์ฅํ/๋ฏธ์ฉ>๋ฒ ์ด์ค๋ฉ์ดํฌ์
>ํ์ด๋ฐ์ด์
>ํฌ๋ฆผํ Naverstore > ํ์ฅํ/๋ฏธ์ฉ > ๋ฒ ์ด์ค๋ฉ์ดํฌ์
> ํ์ด๋ฐ์ด์
> ํฌ๋ฆผํ'</li><li>'์์ธ์ด๋ ์คํฌ์ธ ์ปค๋ฒ ํ์ด๋ฐ์ด์
20g S100 ร 1๊ฐ Coupang > ๋ทฐํฐ > ๋ฉ์ดํฌ์
> ๋ฒ ์ด์ค ๋ฉ์ดํฌ์
> ํ์ด๋ฐ์ด์
;(#M)์ฟ ํก ํ>๋ทฐํฐ>๋ฉ์ดํฌ์
>๋ฒ ์ด์ค ๋ฉ์ดํฌ์
>ํ์ด๋ฐ์ด์
Coupang > ๋ทฐํฐ > ๋ฉ์ดํฌ์
> ๋ฒ ์ด์ค ๋ฉ์ดํฌ์
> ํ์ด๋ฐ์ด์
'</li><li>'์์ด์งํฌ์จ๋์ค ์ค๋ฆฌ์ง๋ ์ค์ด๋๋๋กญ ์ผ์ด์ค+๋ฆฌํ3๊ฐ (+์ปคํผ์ฟ ํฐ+ํผ20ml) ์ค์ด๋๋๋กญ(ํ์ดํธ)23ํธ์ผ์ด์ค+๋ฆฌํ3๊ฐ_ํผ20ml (#M)ํ์ฅํ/๋ฏธ์ฉ>๋ฒ ์ด์ค๋ฉ์ดํฌ์
>ํ์ด๋ฐ์ด์
>์ฟ ์
ํ AD > Naverstore > ํ์ฅํ/๋ฏธ์ฉ > ๋ฒ ์ด์ค๋ฉ์ดํฌ์
> ํ์ด๋ฐ์ด์
> ํฌ๋ฆผํ'</li></ul> |
| 3 | <ul><li>'๋งคํธ ๋ฒจ๋ฒณ ์คํจ ์ปดํฉํธ ์คํฐ์ง ๋จํ์์ LotteOn > ๋ทฐํฐ > ๋ทฐํฐ๊ธฐ๊ธฐ > ์ก์ธ์๋ฆฌ/์๋ชจํ LotteOn > ๋ทฐํฐ > ๋ทฐํฐ๊ธฐ๊ธฐ > ์ก์ธ์๋ฆฌ/์๋ชจํ'</li><li>'[BF์ ๋ฆฝ] ์ํฌ๋ฅด ๋ ๋ฝ ์ฟ ์
&๋ฆฌํ ์ธํธ(+์คํ ๋ฉ ๋ฏธ๋ฌ+5์ฒLPOINT) 20ํธ_15ํธ LOREAL > DepartmentLotteOn > ์
์๋ก๋ > Branded > ์
์๋ก๋ LOREAL > DepartmentLotteOn > ์
์๋ก๋ > Branded > ์
์๋ก๋'</li><li>'์ฝํผ LotteOn > ๋ทฐํฐ > ๋ทฐํฐ๊ธฐ๊ธฐ > ์ก์ธ์๋ฆฌ/์๋ชจํ LotteOn > ๋ทฐํฐ > ๋ทฐํฐ๊ธฐ๊ธฐ > ์ก์ธ์๋ฆฌ/์๋ชจํ'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9475 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_top_bt5_4")
# Run inference
preds = model("[์์ธ์ด๋] NEW ์ฑํฌ๋ก ์คํจ ๋๋์ธํธ ๋ฆฌํํ ํ์ด๋ฐ์ด์ SPF30/PA++++ 30ml 130 ์คํ (#M)ํ>๋ฉ์ดํฌ์ >๋ฒ ์ด์ค๋ฉ์ดํฌ์ HMALL > ๋ทฐํฐ > ๋ฉ์ดํฌ์ > ๋ฒ ์ด์ค๋ฉ์ดํฌ์")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 12 | 22.928 | 52 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 50 |
| 1 | 50 |
| 2 | 50 |
| 3 | 50 |
| 4 | 50 |
### Training Hyperparameters
- batch_size: (64, 64)
- num_epochs: (30, 30)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 100
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
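These settings map onto a SetFit training run roughly as sketched below; the few-shot data shown is illustrative, since the actual product-title training set is not published:
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Illustrative few-shot data; the real training set is not published.
train_dataset = Dataset.from_dict({
    "text": ["sample product title A", "sample product title B"],
    "label": [0, 1],
})

model = SetFitModel.from_pretrained("klue/roberta-base")
args = TrainingArguments(
    batch_size=(64, 64),
    num_epochs=(30, 30),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    num_iterations=100,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```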
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:-----:|:-------------:|:---------------:|
| 0.0026 | 1 | 0.521 | - |
| 0.1279 | 50 | 0.4636 | - |
| 0.2558 | 100 | 0.42 | - |
| 0.3836 | 150 | 0.292 | - |
| 0.5115 | 200 | 0.1539 | - |
| 0.6394 | 250 | 0.0626 | - |
| 0.7673 | 300 | 0.0343 | - |
| 0.8951 | 350 | 0.0071 | - |
| 1.0230 | 400 | 0.0023 | - |
| 1.1509 | 450 | 0.0005 | - |
| 1.2788 | 500 | 0.0006 | - |
| 1.4066 | 550 | 0.0003 | - |
| 1.5345 | 600 | 0.0002 | - |
| 1.6624 | 650 | 0.0001 | - |
| 1.7903 | 700 | 0.0002 | - |
| 1.9182 | 750 | 0.0006 | - |
| 2.0460 | 800 | 0.0002 | - |
| 2.1739 | 850 | 0.0001 | - |
| 2.3018 | 900 | 0.0 | - |
| 2.4297 | 950 | 0.0 | - |
| 2.5575 | 1000 | 0.0 | - |
| 2.6854 | 1050 | 0.0 | - |
| 2.8133 | 1100 | 0.0 | - |
| 2.9412 | 1150 | 0.0 | - |
| 3.0691 | 1200 | 0.0 | - |
| 3.1969 | 1250 | 0.0 | - |
| 3.3248 | 1300 | 0.0 | - |
| 3.4527 | 1350 | 0.0007 | - |
| 3.5806 | 1400 | 0.0005 | - |
| 3.7084 | 1450 | 0.0009 | - |
| 3.8363 | 1500 | 0.0008 | - |
| 3.9642 | 1550 | 0.0003 | - |
| 4.0921 | 1600 | 0.0002 | - |
| 4.2199 | 1650 | 0.0 | - |
| 4.3478 | 1700 | 0.0 | - |
| 4.4757 | 1750 | 0.0 | - |
| 4.6036 | 1800 | 0.0 | - |
| 4.7315 | 1850 | 0.0 | - |
| 4.8593 | 1900 | 0.0 | - |
| 4.9872 | 1950 | 0.0 | - |
| 5.1151 | 2000 | 0.0 | - |
| 5.2430 | 2050 | 0.0 | - |
| 5.3708 | 2100 | 0.0 | - |
| 5.4987 | 2150 | 0.0 | - |
| 5.6266 | 2200 | 0.0 | - |
| 5.7545 | 2250 | 0.0 | - |
| 5.8824 | 2300 | 0.0 | - |
| 6.0102 | 2350 | 0.0001 | - |
| 6.1381 | 2400 | 0.0006 | - |
| 6.2660 | 2450 | 0.0 | - |
| 6.3939 | 2500 | 0.0 | - |
| 6.5217 | 2550 | 0.0 | - |
| 6.6496 | 2600 | 0.0 | - |
| 6.7775 | 2650 | 0.0 | - |
| 6.9054 | 2700 | 0.0 | - |
| 7.0332 | 2750 | 0.0 | - |
| 7.1611 | 2800 | 0.0 | - |
| 7.2890 | 2850 | 0.0 | - |
| 7.4169 | 2900 | 0.0 | - |
| 7.5448 | 2950 | 0.0 | - |
| 7.6726 | 3000 | 0.0 | - |
| 7.8005 | 3050 | 0.0 | - |
| 7.9284 | 3100 | 0.0 | - |
| 8.0563 | 3150 | 0.0 | - |
| 8.1841 | 3200 | 0.0 | - |
| 8.3120 | 3250 | 0.0 | - |
| 8.4399 | 3300 | 0.0 | - |
| 8.5678 | 3350 | 0.0 | - |
| 8.6957 | 3400 | 0.0 | - |
| 8.8235 | 3450 | 0.0 | - |
| 8.9514 | 3500 | 0.0 | - |
| 9.0793 | 3550 | 0.0 | - |
| 9.2072 | 3600 | 0.0 | - |
| 9.3350 | 3650 | 0.0 | - |
| 9.4629 | 3700 | 0.0 | - |
| 9.5908 | 3750 | 0.0 | - |
| 9.7187 | 3800 | 0.0 | - |
| 9.8465 | 3850 | 0.0 | - |
| 9.9744 | 3900 | 0.0 | - |
| 10.1023 | 3950 | 0.0 | - |
| 10.2302 | 4000 | 0.0 | - |
| 10.3581 | 4050 | 0.0 | - |
| 10.4859 | 4100 | 0.0 | - |
| 10.6138 | 4150 | 0.0 | - |
| 10.7417 | 4200 | 0.0 | - |
| 10.8696 | 4250 | 0.0 | - |
| 10.9974 | 4300 | 0.0 | - |
| 11.1253 | 4350 | 0.0 | - |
| 11.2532 | 4400 | 0.0 | - |
| 11.3811 | 4450 | 0.0 | - |
| 11.5090 | 4500 | 0.0 | - |
| 11.6368 | 4550 | 0.0 | - |
| 11.7647 | 4600 | 0.0 | - |
| 11.8926 | 4650 | 0.0 | - |
| 12.0205 | 4700 | 0.0 | - |
| 12.1483 | 4750 | 0.0 | - |
| 12.2762 | 4800 | 0.0 | - |
| 12.4041 | 4850 | 0.0 | - |
| 12.5320 | 4900 | 0.0 | - |
| 12.6598 | 4950 | 0.0 | - |
| 12.7877 | 5000 | 0.0 | - |
| 12.9156 | 5050 | 0.0 | - |
| 13.0435 | 5100 | 0.0 | - |
| 13.1714 | 5150 | 0.0 | - |
| 13.2992 | 5200 | 0.0 | - |
| 13.4271 | 5250 | 0.0 | - |
| 13.5550 | 5300 | 0.0 | - |
| 13.6829 | 5350 | 0.0 | - |
| 13.8107 | 5400 | 0.0 | - |
| 13.9386 | 5450 | 0.0 | - |
| 14.0665 | 5500 | 0.0 | - |
| 14.1944 | 5550 | 0.0 | - |
| 14.3223 | 5600 | 0.0 | - |
| 14.4501 | 5650 | 0.0 | - |
| 14.5780 | 5700 | 0.0 | - |
| 14.7059 | 5750 | 0.0 | - |
| 14.8338 | 5800 | 0.0 | - |
| 14.9616 | 5850 | 0.0 | - |
| 15.0895 | 5900 | 0.0 | - |
| 15.2174 | 5950 | 0.0 | - |
| 15.3453 | 6000 | 0.0 | - |
| 15.4731 | 6050 | 0.0 | - |
| 15.6010 | 6100 | 0.0 | - |
| 15.7289 | 6150 | 0.0 | - |
| 15.8568 | 6200 | 0.0 | - |
| 15.9847 | 6250 | 0.0 | - |
| 16.1125 | 6300 | 0.0 | - |
| 16.2404 | 6350 | 0.0 | - |
| 16.3683 | 6400 | 0.0 | - |
| 16.4962 | 6450 | 0.0 | - |
| 16.6240 | 6500 | 0.0 | - |
| 16.7519 | 6550 | 0.0 | - |
| 16.8798 | 6600 | 0.0 | - |
| 17.0077 | 6650 | 0.0 | - |
| 17.1355 | 6700 | 0.0 | - |
| 17.2634 | 6750 | 0.0 | - |
| 17.3913 | 6800 | 0.0 | - |
| 17.5192 | 6850 | 0.0 | - |
| 17.6471 | 6900 | 0.0 | - |
| 17.7749 | 6950 | 0.0 | - |
| 17.9028 | 7000 | 0.0 | - |
| 18.0307 | 7050 | 0.0 | - |
| 18.1586 | 7100 | 0.0 | - |
| 18.2864 | 7150 | 0.0 | - |
| 18.4143 | 7200 | 0.0 | - |
| 18.5422 | 7250 | 0.0 | - |
| 18.6701 | 7300 | 0.0 | - |
| 18.7980 | 7350 | 0.0 | - |
| 18.9258 | 7400 | 0.0 | - |
| 19.0537 | 7450 | 0.0 | - |
| 19.1816 | 7500 | 0.0 | - |
| 19.3095 | 7550 | 0.0004 | - |
| 19.4373 | 7600 | 0.0028 | - |
| 19.5652 | 7650 | 0.0003 | - |
| 19.6931 | 7700 | 0.0002 | - |
| 19.8210 | 7750 | 0.0 | - |
| 19.9488 | 7800 | 0.0 | - |
| 20.0767 | 7850 | 0.0 | - |
| 20.2046 | 7900 | 0.0 | - |
| 20.3325 | 7950 | 0.0 | - |
| 20.4604 | 8000 | 0.0 | - |
| 20.5882 | 8050 | 0.0 | - |
| 20.7161 | 8100 | 0.0 | - |
| 20.8440 | 8150 | 0.0 | - |
| 20.9719 | 8200 | 0.0 | - |
| 21.0997 | 8250 | 0.0 | - |
| 21.2276 | 8300 | 0.0 | - |
| 21.3555 | 8350 | 0.0 | - |
| 21.4834 | 8400 | 0.0 | - |
| 21.6113 | 8450 | 0.0 | - |
| 21.7391 | 8500 | 0.0 | - |
| 21.8670 | 8550 | 0.0 | - |
| 21.9949 | 8600 | 0.0 | - |
| 22.1228 | 8650 | 0.0 | - |
| 22.2506 | 8700 | 0.0 | - |
| 22.3785 | 8750 | 0.0 | - |
| 22.5064 | 8800 | 0.0 | - |
| 22.6343 | 8850 | 0.0 | - |
| 22.7621 | 8900 | 0.0 | - |
| 22.8900 | 8950 | 0.0 | - |
| 23.0179 | 9000 | 0.0 | - |
| 23.1458 | 9050 | 0.0 | - |
| 23.2737 | 9100 | 0.0 | - |
| 23.4015 | 9150 | 0.0 | - |
| 23.5294 | 9200 | 0.0 | - |
| 23.6573 | 9250 | 0.0 | - |
| 23.7852 | 9300 | 0.0 | - |
| 23.9130 | 9350 | 0.0 | - |
| 24.0409 | 9400 | 0.0 | - |
| 24.1688 | 9450 | 0.0 | - |
| 24.2967 | 9500 | 0.0 | - |
| 24.4246 | 9550 | 0.0 | - |
| 24.5524 | 9600 | 0.0 | - |
| 24.6803 | 9650 | 0.0 | - |
| 24.8082 | 9700 | 0.0 | - |
| 24.9361 | 9750 | 0.0 | - |
| 25.0639 | 9800 | 0.0 | - |
| 25.1918 | 9850 | 0.0 | - |
| 25.3197 | 9900 | 0.0 | - |
| 25.4476 | 9950 | 0.0 | - |
| 25.5754 | 10000 | 0.0 | - |
| 25.7033 | 10050 | 0.0 | - |
| 25.8312 | 10100 | 0.0 | - |
| 25.9591 | 10150 | 0.0 | - |
| 26.0870 | 10200 | 0.0 | - |
| 26.2148 | 10250 | 0.0 | - |
| 26.3427 | 10300 | 0.0 | - |
| 26.4706 | 10350 | 0.0 | - |
| 26.5985 | 10400 | 0.0 | - |
| 26.7263 | 10450 | 0.0 | - |
| 26.8542 | 10500 | 0.0 | - |
| 26.9821 | 10550 | 0.0 | - |
| 27.1100 | 10600 | 0.0 | - |
| 27.2379 | 10650 | 0.0 | - |
| 27.3657 | 10700 | 0.0 | - |
| 27.4936 | 10750 | 0.0 | - |
| 27.6215 | 10800 | 0.0 | - |
| 27.7494 | 10850 | 0.0 | - |
| 27.8772 | 10900 | 0.0 | - |
| 28.0051 | 10950 | 0.0 | - |
| 28.1330 | 11000 | 0.0 | - |
| 28.2609 | 11050 | 0.0 | - |
| 28.3887 | 11100 | 0.0 | - |
| 28.5166 | 11150 | 0.0 | - |
| 28.6445 | 11200 | 0.0 | - |
| 28.7724 | 11250 | 0.0 | - |
| 28.9003 | 11300 | 0.0 | - |
| 29.0281 | 11350 | 0.0 | - |
| 29.1560 | 11400 | 0.0 | - |
| 29.2839 | 11450 | 0.0 | - |
| 29.4118 | 11500 | 0.0 | - |
| 29.5396 | 11550 | 0.0 | - |
| 29.6675 | 11600 | 0.0 | - |
| 29.7954 | 11650 | 0.0 | - |
| 29.9233 | 11700 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0
- Sentence Transformers: 3.3.1
- Transformers: 4.44.2
- PyTorch: 2.2.0a0+81ea7a4
- Datasets: 3.2.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
csocsci/mt5-base-multi-label-cs-iiib-02c | csocsci | text2text-generation | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"cs",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,695,389,385,000 | 2023-09-23T13:40:51 | 10 | 0 | ---
language:
- cs
license: mit
---
# Model Card for mt5-base-multi-label-cs-iiib-02c
<!-- Provide a quick summary of what the model is/does. -->
This model is fine-tuned for multi-label text classification of Supportive Interactions in Instant Messenger dialogs of Adolescents in Czech.
## Model Description
The model was fine-tuned on a dataset of Czech Instant Messenger dialogs of Adolescents. The classification is multi-label and the model outputs any combination of the tags: 'NO TAG', 'Informační podpora', 'Emocionální podpora', 'Začlenění do skupiny', 'Uznání', 'Nabídka pomoci', as a string joined with ', ' (ordered alphabetically). Each label indicates the presence of that category of Supportive Interactions: 'no tag', 'informational support', 'emotional support', 'social companionship', 'appraisal', 'instrumental support'. The inputs of the model are a target utterance and its bi-directional context; the label of the example is determined by the label of the target utterance.
- **Developed by:** Anonymous
- **Language(s):** multilingual
- **Finetuned from:** mt5-base
## Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/chi2024submission
- **Paper:** Stay tuned!
## Usage
Here is how to use this model to classify a context-window of a dialogue:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch
test_texts = ['Utterance2']
test_text_pairs = ['Utterance1;Utterance2;Utterance3']
checkpoint_path = "chi2024/mt5-base-multi-label-cs-iiib-02c"
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint_path)\
.to("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained(checkpoint_path)
def verbalize_input(text: str, text_pair: str) -> str:
return "Utterance: %s\nContext: %s" % (text, text_pair)
def predict_one(text, pair):
input_pair = verbalize_input(text, pair)
inputs = tokenizer(input_pair, return_tensors="pt", padding=True,
truncation=True, max_length=256).to(model.device)
outputs = model.generate(**inputs)
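# note: split(",")[0] below keeps only the first tag of the comma-joined multi-label output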
decoded = [text.split(",")[0].strip() for text in
tokenizer.batch_decode(outputs, skip_special_tokens=True)]
return decoded
dec = predict_one(test_texts[0], test_text_pairs[0])
print(dec)
``` | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
heegyu/TinyLlama-augesc-context-strategy | heegyu | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:thu-coai/augesc",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,709,309,966,000 | 2024-03-07T13:19:42 | 8 | 0 | ---
datasets:
- thu-coai/augesc
library_name: transformers
---
Test set performance
- Top 1 Accuracy: 0.4346
- Top 3 Accuracy: 0.7677
- Top 1 Macro F1: 0.2668
- Top 3 Macro F1: 0.5669
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
device="cuda:0"
model = "heegyu/TinyLlama-augesc-context-strategy"
tokenizer = AutoTokenizer.from_pretrained(model)
model = AutoModelForSequenceClassification.from_pretrained(model).eval().to(device)
example = """usr: Hi
sys[Question]: Hello, how are you today?
usr: I was scolded by my parents yesterday"""
inputs = tokenizer(example, return_tensors="pt").to(device)
logits = model(**inputs).logits.softmax(-1)
print(logits)
label = logits.argmax(-1).item()
ESCONV_STRATEGY = [
"Question",
"Restatement or Paraphrasing",
"Reflection of feelings",
"Self-disclosure",
"Affirmation and Reassurance",
"Providing Suggestions",
"Information",
"Others"
]
id2label = {i:k for i, k in enumerate(ESCONV_STRATEGY)}
print(id2label[label])
``` | [
"PARAPHRASING"
] | Non_BioNLP |
Bahasalab/BahasaGpt-chat | Bahasalab | null | [
"transformers",
"pytorch",
"tensorboard",
"license:cc-by-nc-3.0",
"endpoints_compatible",
"region:us"
] | 1,681,047,882,000 | 2023-04-11T07:23:12 | 18 | 2 | ---
license: cc-by-nc-3.0
---
# BahasaGPT-Chat
## Introduction
This document provides an overview of the BahasaGPT-Chat model, which is a fine-tuned model for a specific task in the Indonesian language. The model is based on the Bloomz-7B-mt architecture and is fine-tuned using a dataset of over 120,000 chat instructions.
## Model Details
**Model Name:** BahasaGPT-Chat
**Model Source:** Bloomz-7B-mt
**Dataset for Fine-Tuning:** Over 120k Indonesian instructions generated using the Alpaca method from the following sources:
- [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- [Baize-Chatbot](https://github.com/project-baize/baize-chatbot)
- Translated instructions from OA ([Anh/data at main · LAION-AI/Anh](https://github.com/LAION-AI/Anh))
## Fine-Tuning Process
The BahasaGPT-Chat model was fine-tuned using a dataset of over 120k Indonesian instructions, which were generated with the [Baize-Chatbot](https://github.com/project-baize/baize-chatbot) method and supplemented with the Alpaca and OA translation datasets. This combination of datasets allowed the model to be better adapted to the specific needs of Indonesian language tasks.
The fine-tuning process involved adjusting the model's weights and biases based on the input dataset. This was done iteratively to optimize the model's performance for the specific task in the Indonesian language.
## Known Limitations
Despite the successful fine-tuning, the BahasaGPT-Chat model still has some limitations:
**Hallucination:** The model sometimes generates outputs that may seem plausible but are not based on the input data. This may lead to incorrect or nonsensical responses in some cases.
**Bias:** The BahasaGPT-Chat model, like other AI language models, can exhibit various forms of bias due to the data it was trained on. This includes, but is not limited to, gender, racial, and cultural biases. As a result, the model may generate outputs that perpetuate stereotypes, exhibit unfair treatment, or show preference for specific groups or perspectives. Efforts have been made to mitigate these biases, but they may still be present in the model's responses.
## Conclusion
The BahasaGPT-Chat model is a fine-tuned language model for Indonesian language tasks, based on the Bloomz-7B-mt architecture. The model was trained on a dataset of over 120k Indonesian instructions generated with the [Baize-Chatbot](https://github.com/project-baize/baize-chatbot) method and supplemented with the Alpaca and OA translation datasets. Despite some limitations, such as occasional hallucination, the model provides a valuable tool for working with Indonesian language tasks.
## How to Run
For a Gradio demo: [Gradio Code](https://github.com/acul3/Bahasa_Chat)
For Colab usage (int8): [Colab](https://colab.research.google.com/drive/1yvhJENcd0NKuMZNipAJVP4eP-k7-ilXj?usp=sharing)
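As a rough local sketch (assuming the checkpoint loads with the standard causal LM classes and that 8-bit loading via `bitsandbytes` is available; the prompt format shown is an assumption, not an official template):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Bahasalab/BahasaGpt-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True)

prompt = "Pertanyaan: Apa ibu kota Indonesia?\nJawaban:"  # assumed prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```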
"TRANSLATION"
] | Non_BioNLP |