id
stringlengths
11
95
author
stringlengths
3
36
task_category
stringclasses
16 values
tags
sequencelengths
1
4.05k
created_time
int64
1.65k
1.74k
last_modified
int64
1.62k
1.74k
downloads
int64
0
15.6M
likes
int64
0
4.86k
README
stringlengths
246
1.01M
matched_task
sequencelengths
1
8
matched_bigbio_names
sequencelengths
1
8
is_bionlp
stringclasses
3 values
Alibaba-NLP/gte-large-en-v1.5
Alibaba-NLP
sentence-similarity
[ "transformers", "onnx", "safetensors", "new", "feature-extraction", "sentence-transformers", "gte", "mteb", "transformers.js", "sentence-similarity", "custom_code", "en", "dataset:allenai/c4", "arxiv:2407.19669", "arxiv:2308.03281", "license:apache-2.0", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,713
1,723
3,819,623
204
--- datasets: - allenai/c4 language: - en library_name: transformers license: apache-2.0 tags: - sentence-transformers - gte - mteb - transformers.js - sentence-similarity model-index: - name: gte-large-en-v1.5 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 73.01492537313432 - type: ap value: 35.05341696659522 - type: f1 value: 66.71270310883853 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 93.97189999999999 - type: ap value: 90.5952493948908 - type: f1 value: 93.95848137716877 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 54.196 - type: f1 value: 53.80122334012787 - task: type: Retrieval dataset: name: MTEB ArguAna type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 47.297 - type: map_at_10 value: 64.303 - type: map_at_100 value: 64.541 - type: map_at_1000 value: 64.541 - type: map_at_3 value: 60.728 - type: map_at_5 value: 63.114000000000004 - type: mrr_at_1 value: 48.435 - type: mrr_at_10 value: 64.657 - type: mrr_at_100 value: 64.901 - type: mrr_at_1000 value: 64.901 - type: mrr_at_3 value: 61.06 - type: mrr_at_5 value: 63.514 - type: ndcg_at_1 value: 47.297 - type: ndcg_at_10 value: 72.107 - type: ndcg_at_100 value: 72.963 - type: ndcg_at_1000 value: 72.963 - type: ndcg_at_3 value: 65.063 - type: ndcg_at_5 value: 69.352 - type: precision_at_1 value: 47.297 - type: precision_at_10 value: 9.623 - type: precision_at_100 value: 0.996 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 25.865 - type: precision_at_5 value: 17.596 - type: recall_at_1 value: 47.297 - type: recall_at_10 value: 96.23 - type: recall_at_100 value: 99.644 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 77.596 - type: recall_at_5 value: 87.98 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 48.467787861077475 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 43.39198391914257 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 63.12794820591384 - type: mrr value: 75.9331442641692 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 87.85062993863319 - type: cos_sim_spearman value: 85.39049989733459 - type: euclidean_pearson value: 86.00222680278333 - type: euclidean_spearman value: 85.45556162077396 - type: manhattan_pearson value: 85.88769871785621 - type: manhattan_spearman value: 85.11760211290839 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 87.32792207792208 - type: f1 value: 87.29132945999555 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 40.5779328301945 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 37.94425623865118 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: mteb/cqadupstack-android config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: map_at_1 value: 32.978 - type: map_at_10 value: 44.45 - type: map_at_100 value: 46.19 - type: map_at_1000 value: 46.303 - type: map_at_3 value: 40.849000000000004 - type: map_at_5 value: 42.55 - type: mrr_at_1 value: 40.629 - type: mrr_at_10 value: 50.848000000000006 - type: mrr_at_100 value: 51.669 - type: mrr_at_1000 value: 51.705 - type: mrr_at_3 value: 47.997 - type: mrr_at_5 value: 49.506 - type: ndcg_at_1 value: 40.629 - type: ndcg_at_10 value: 51.102000000000004 - type: ndcg_at_100 value: 57.159000000000006 - type: ndcg_at_1000 value: 58.669000000000004 - type: ndcg_at_3 value: 45.738 - type: ndcg_at_5 value: 47.632999999999996 - type: precision_at_1 value: 40.629 - type: precision_at_10 value: 9.700000000000001 - type: precision_at_100 value: 1.5970000000000002 - type: precision_at_1000 value: 0.202 - type: precision_at_3 value: 21.698 - type: precision_at_5 value: 15.393 - type: recall_at_1 value: 32.978 - type: recall_at_10 value: 63.711 - type: recall_at_100 value: 88.39399999999999 - type: recall_at_1000 value: 97.513 - type: recall_at_3 value: 48.025 - type: recall_at_5 value: 53.52 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval type: mteb/cqadupstack-english config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: map_at_1 value: 30.767 - type: map_at_10 value: 42.195 - type: map_at_100 value: 43.541999999999994 - type: map_at_1000 value: 43.673 - type: map_at_3 value: 38.561 - type: map_at_5 value: 40.532000000000004 - type: mrr_at_1 value: 38.79 - type: mrr_at_10 value: 48.021 - type: mrr_at_100 value: 48.735 - type: mrr_at_1000 value: 48.776 - type: mrr_at_3 value: 45.594 - type: mrr_at_5 value: 46.986 - type: ndcg_at_1 value: 38.79 - type: ndcg_at_10 value: 48.468 - type: ndcg_at_100 value: 53.037 - type: ndcg_at_1000 value: 55.001999999999995 - type: ndcg_at_3 value: 43.409 - type: ndcg_at_5 value: 45.654 - type: precision_at_1 value: 38.79 - type: precision_at_10 value: 9.452 - type: precision_at_100 value: 1.518 - type: precision_at_1000 value: 0.201 - type: precision_at_3 value: 21.21 - type: precision_at_5 value: 15.171999999999999 - type: recall_at_1 value: 30.767 - type: recall_at_10 value: 60.118 - type: recall_at_100 value: 79.271 - type: recall_at_1000 value: 91.43299999999999 - type: recall_at_3 value: 45.36 - type: recall_at_5 value: 51.705 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval type: mteb/cqadupstack-gaming config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 40.007 - type: map_at_10 value: 53.529 - type: map_at_100 value: 54.602 - type: map_at_1000 value: 54.647 - type: map_at_3 value: 49.951 - type: map_at_5 value: 52.066 - type: mrr_at_1 value: 45.705 - type: mrr_at_10 value: 56.745000000000005 - type: mrr_at_100 value: 57.43899999999999 - type: mrr_at_1000 value: 57.462999999999994 - type: mrr_at_3 value: 54.25299999999999 - type: mrr_at_5 value: 55.842000000000006 - type: ndcg_at_1 value: 45.705 - type: ndcg_at_10 value: 59.809 - type: ndcg_at_100 value: 63.837999999999994 - type: ndcg_at_1000 value: 64.729 - type: ndcg_at_3 value: 53.994 - type: ndcg_at_5 value: 57.028 - type: precision_at_1 value: 45.705 - type: precision_at_10 value: 9.762 - type: precision_at_100 value: 1.275 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 24.368000000000002 - type: precision_at_5 value: 16.84 - type: recall_at_1 value: 40.007 - type: recall_at_10 value: 75.017 - type: recall_at_100 value: 91.99000000000001 - type: recall_at_1000 value: 98.265 - type: recall_at_3 value: 59.704 - type: recall_at_5 value: 67.109 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval type: mteb/cqadupstack-gis config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: map_at_1 value: 26.639000000000003 - type: map_at_10 value: 35.926 - type: map_at_100 value: 37.126999999999995 - type: map_at_1000 value: 37.202 - type: map_at_3 value: 32.989000000000004 - type: map_at_5 value: 34.465 - type: mrr_at_1 value: 28.475 - type: mrr_at_10 value: 37.7 - type: mrr_at_100 value: 38.753 - type: mrr_at_1000 value: 38.807 - type: mrr_at_3 value: 35.066 - type: mrr_at_5 value: 36.512 - type: ndcg_at_1 value: 28.475 - type: ndcg_at_10 value: 41.245 - type: ndcg_at_100 value: 46.814 - type: ndcg_at_1000 value: 48.571 - type: ndcg_at_3 value: 35.528999999999996 - type: ndcg_at_5 value: 38.066 - type: precision_at_1 value: 28.475 - type: precision_at_10 value: 6.497 - type: precision_at_100 value: 0.9650000000000001 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 15.065999999999999 - type: precision_at_5 value: 10.599 - type: recall_at_1 value: 26.639000000000003 - type: recall_at_10 value: 55.759 - type: recall_at_100 value: 80.913 - type: recall_at_1000 value: 93.929 - type: recall_at_3 value: 40.454 - type: recall_at_5 value: 46.439 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval type: mteb/cqadupstack-mathematica config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: map_at_1 value: 15.767999999999999 - type: map_at_10 value: 24.811 - type: map_at_100 value: 26.064999999999998 - type: map_at_1000 value: 26.186999999999998 - type: map_at_3 value: 21.736 - type: map_at_5 value: 23.283 - type: mrr_at_1 value: 19.527 - type: mrr_at_10 value: 29.179 - type: mrr_at_100 value: 30.153999999999996 - type: mrr_at_1000 value: 30.215999999999998 - type: mrr_at_3 value: 26.223000000000003 - type: mrr_at_5 value: 27.733999999999998 - type: ndcg_at_1 value: 19.527 - type: ndcg_at_10 value: 30.786 - type: ndcg_at_100 value: 36.644 - type: ndcg_at_1000 value: 39.440999999999995 - type: ndcg_at_3 value: 24.958 - type: ndcg_at_5 value: 27.392 - type: precision_at_1 value: 19.527 - type: precision_at_10 value: 5.995 - type: precision_at_100 value: 1.03 - type: precision_at_1000 value: 0.14100000000000001 - type: precision_at_3 value: 12.520999999999999 - type: precision_at_5 value: 9.129 - type: recall_at_1 value: 15.767999999999999 - type: recall_at_10 value: 44.824000000000005 - type: recall_at_100 value: 70.186 - type: recall_at_1000 value: 89.934 - type: recall_at_3 value: 28.607 - type: recall_at_5 value: 34.836 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval type: mteb/cqadupstack-physics config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: map_at_1 value: 31.952 - type: map_at_10 value: 44.438 - type: map_at_100 value: 45.778 - type: map_at_1000 value: 45.883 - type: map_at_3 value: 41.044000000000004 - type: map_at_5 value: 42.986000000000004 - type: mrr_at_1 value: 39.172000000000004 - type: mrr_at_10 value: 49.76 - type: mrr_at_100 value: 50.583999999999996 - type: mrr_at_1000 value: 50.621 - type: mrr_at_3 value: 47.353 - type: mrr_at_5 value: 48.739 - type: ndcg_at_1 value: 39.172000000000004 - type: ndcg_at_10 value: 50.760000000000005 - type: ndcg_at_100 value: 56.084 - type: ndcg_at_1000 value: 57.865 - type: ndcg_at_3 value: 45.663 - type: ndcg_at_5 value: 48.178 - type: precision_at_1 value: 39.172000000000004 - type: precision_at_10 value: 9.22 - type: precision_at_100 value: 1.387 - type: precision_at_1000 value: 0.17099999999999999 - type: precision_at_3 value: 21.976000000000003 - type: precision_at_5 value: 15.457 - type: recall_at_1 value: 31.952 - type: recall_at_10 value: 63.900999999999996 - type: recall_at_100 value: 85.676 - type: recall_at_1000 value: 97.03699999999999 - type: recall_at_3 value: 49.781 - type: recall_at_5 value: 56.330000000000005 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval type: mteb/cqadupstack-programmers config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: map_at_1 value: 25.332 - type: map_at_10 value: 36.874 - type: map_at_100 value: 38.340999999999994 - type: map_at_1000 value: 38.452 - type: map_at_3 value: 33.068 - type: map_at_5 value: 35.324 - type: mrr_at_1 value: 30.822 - type: mrr_at_10 value: 41.641 - type: mrr_at_100 value: 42.519 - type: mrr_at_1000 value: 42.573 - type: mrr_at_3 value: 38.413000000000004 - type: mrr_at_5 value: 40.542 - type: ndcg_at_1 value: 30.822 - type: ndcg_at_10 value: 43.414 - type: ndcg_at_100 value: 49.196 - type: ndcg_at_1000 value: 51.237 - type: ndcg_at_3 value: 37.230000000000004 - type: ndcg_at_5 value: 40.405 - type: precision_at_1 value: 30.822 - type: precision_at_10 value: 8.379 - type: precision_at_100 value: 1.315 - type: precision_at_1000 value: 0.168 - type: precision_at_3 value: 18.417 - type: precision_at_5 value: 13.744 - type: recall_at_1 value: 25.332 - type: recall_at_10 value: 57.774 - type: recall_at_100 value: 82.071 - type: recall_at_1000 value: 95.60600000000001 - type: recall_at_3 value: 40.722 - type: recall_at_5 value: 48.754999999999995 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: mteb/cqadupstack config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 25.91033333333334 - type: map_at_10 value: 36.23225000000001 - type: map_at_100 value: 37.55766666666667 - type: map_at_1000 value: 37.672583333333336 - type: map_at_3 value: 32.95666666666667 - type: map_at_5 value: 34.73375 - type: mrr_at_1 value: 30.634 - type: mrr_at_10 value: 40.19449999999999 - type: mrr_at_100 value: 41.099250000000005 - type: mrr_at_1000 value: 41.15091666666667 - type: mrr_at_3 value: 37.4615 - type: mrr_at_5 value: 39.00216666666667 - type: ndcg_at_1 value: 30.634 - type: ndcg_at_10 value: 42.162166666666664 - type: ndcg_at_100 value: 47.60708333333333 - type: ndcg_at_1000 value: 49.68616666666666 - type: ndcg_at_3 value: 36.60316666666666 - type: ndcg_at_5 value: 39.15616666666668 - type: precision_at_1 value: 30.634 - type: precision_at_10 value: 7.6193333333333335 - type: precision_at_100 value: 1.2198333333333333 - type: precision_at_1000 value: 0.15975000000000003 - type: precision_at_3 value: 17.087 - type: precision_at_5 value: 12.298333333333334 - type: recall_at_1 value: 25.91033333333334 - type: recall_at_10 value: 55.67300000000001 - type: recall_at_100 value: 79.20608333333334 - type: recall_at_1000 value: 93.34866666666667 - type: recall_at_3 value: 40.34858333333333 - type: recall_at_5 value: 46.834083333333325 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval type: mteb/cqadupstack-stats config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: map_at_1 value: 25.006 - type: map_at_10 value: 32.177 - type: map_at_100 value: 33.324999999999996 - type: map_at_1000 value: 33.419 - type: map_at_3 value: 29.952 - type: map_at_5 value: 31.095 - type: mrr_at_1 value: 28.066999999999997 - type: mrr_at_10 value: 34.995 - type: mrr_at_100 value: 35.978 - type: mrr_at_1000 value: 36.042 - type: mrr_at_3 value: 33.103 - type: mrr_at_5 value: 34.001 - type: ndcg_at_1 value: 28.066999999999997 - type: ndcg_at_10 value: 36.481 - type: ndcg_at_100 value: 42.022999999999996 - type: ndcg_at_1000 value: 44.377 - type: ndcg_at_3 value: 32.394 - type: ndcg_at_5 value: 34.108 - type: precision_at_1 value: 28.066999999999997 - type: precision_at_10 value: 5.736 - type: precision_at_100 value: 0.9259999999999999 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 13.804 - type: precision_at_5 value: 9.508999999999999 - type: recall_at_1 value: 25.006 - type: recall_at_10 value: 46.972 - type: recall_at_100 value: 72.138 - type: recall_at_1000 value: 89.479 - type: recall_at_3 value: 35.793 - type: recall_at_5 value: 39.947 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval type: mteb/cqadupstack-tex config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 16.07 - type: map_at_10 value: 24.447 - type: map_at_100 value: 25.685999999999996 - type: map_at_1000 value: 25.813999999999997 - type: map_at_3 value: 21.634 - type: map_at_5 value: 23.133 - type: mrr_at_1 value: 19.580000000000002 - type: mrr_at_10 value: 28.127999999999997 - type: mrr_at_100 value: 29.119 - type: mrr_at_1000 value: 29.192 - type: mrr_at_3 value: 25.509999999999998 - type: mrr_at_5 value: 26.878 - type: ndcg_at_1 value: 19.580000000000002 - type: ndcg_at_10 value: 29.804000000000002 - type: ndcg_at_100 value: 35.555 - type: ndcg_at_1000 value: 38.421 - type: ndcg_at_3 value: 24.654999999999998 - type: ndcg_at_5 value: 26.881 - type: precision_at_1 value: 19.580000000000002 - type: precision_at_10 value: 5.736 - type: precision_at_100 value: 1.005 - type: precision_at_1000 value: 0.145 - type: precision_at_3 value: 12.033000000000001 - type: precision_at_5 value: 8.871 - type: recall_at_1 value: 16.07 - type: recall_at_10 value: 42.364000000000004 - type: recall_at_100 value: 68.01899999999999 - type: recall_at_1000 value: 88.122 - type: recall_at_3 value: 27.846 - type: recall_at_5 value: 33.638 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval type: mteb/cqadupstack-unix config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: map_at_1 value: 26.365 - type: map_at_10 value: 36.591 - type: map_at_100 value: 37.730000000000004 - type: map_at_1000 value: 37.84 - type: map_at_3 value: 33.403 - type: map_at_5 value: 35.272999999999996 - type: mrr_at_1 value: 30.503999999999998 - type: mrr_at_10 value: 39.940999999999995 - type: mrr_at_100 value: 40.818 - type: mrr_at_1000 value: 40.876000000000005 - type: mrr_at_3 value: 37.065 - type: mrr_at_5 value: 38.814 - type: ndcg_at_1 value: 30.503999999999998 - type: ndcg_at_10 value: 42.185 - type: ndcg_at_100 value: 47.416000000000004 - type: ndcg_at_1000 value: 49.705 - type: ndcg_at_3 value: 36.568 - type: ndcg_at_5 value: 39.416000000000004 - type: precision_at_1 value: 30.503999999999998 - type: precision_at_10 value: 7.276000000000001 - type: precision_at_100 value: 1.118 - type: precision_at_1000 value: 0.14300000000000002 - type: precision_at_3 value: 16.729 - type: precision_at_5 value: 12.107999999999999 - type: recall_at_1 value: 26.365 - type: recall_at_10 value: 55.616 - type: recall_at_100 value: 78.129 - type: recall_at_1000 value: 93.95599999999999 - type: recall_at_3 value: 40.686 - type: recall_at_5 value: 47.668 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval type: mteb/cqadupstack-webmasters config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: map_at_1 value: 22.750999999999998 - type: map_at_10 value: 33.446 - type: map_at_100 value: 35.235 - type: map_at_1000 value: 35.478 - type: map_at_3 value: 29.358 - type: map_at_5 value: 31.525 - type: mrr_at_1 value: 27.668 - type: mrr_at_10 value: 37.694 - type: mrr_at_100 value: 38.732 - type: mrr_at_1000 value: 38.779 - type: mrr_at_3 value: 34.223 - type: mrr_at_5 value: 36.08 - type: ndcg_at_1 value: 27.668 - type: ndcg_at_10 value: 40.557 - type: ndcg_at_100 value: 46.605999999999995 - type: ndcg_at_1000 value: 48.917 - type: ndcg_at_3 value: 33.677 - type: ndcg_at_5 value: 36.85 - type: precision_at_1 value: 27.668 - type: precision_at_10 value: 8.3 - type: precision_at_100 value: 1.6260000000000001 - type: precision_at_1000 value: 0.253 - type: precision_at_3 value: 16.008 - type: precision_at_5 value: 12.292 - type: recall_at_1 value: 22.750999999999998 - type: recall_at_10 value: 55.643 - type: recall_at_100 value: 82.151 - type: recall_at_1000 value: 95.963 - type: recall_at_3 value: 36.623 - type: recall_at_5 value: 44.708 - task: type: Retrieval dataset: name: MTEB CQADupstackWordpressRetrieval type: mteb/cqadupstack-wordpress config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 17.288999999999998 - type: map_at_10 value: 25.903 - type: map_at_100 value: 27.071 - type: map_at_1000 value: 27.173000000000002 - type: map_at_3 value: 22.935 - type: map_at_5 value: 24.573 - type: mrr_at_1 value: 18.669 - type: mrr_at_10 value: 27.682000000000002 - type: mrr_at_100 value: 28.691 - type: mrr_at_1000 value: 28.761 - type: mrr_at_3 value: 24.738 - type: mrr_at_5 value: 26.392 - type: ndcg_at_1 value: 18.669 - type: ndcg_at_10 value: 31.335 - type: ndcg_at_100 value: 36.913000000000004 - type: ndcg_at_1000 value: 39.300000000000004 - type: ndcg_at_3 value: 25.423000000000002 - type: ndcg_at_5 value: 28.262999999999998 - type: precision_at_1 value: 18.669 - type: precision_at_10 value: 5.379 - type: precision_at_100 value: 0.876 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 11.214 - type: precision_at_5 value: 8.466 - type: recall_at_1 value: 17.288999999999998 - type: recall_at_10 value: 46.377 - type: recall_at_100 value: 71.53500000000001 - type: recall_at_1000 value: 88.947 - type: recall_at_3 value: 30.581999999999997 - type: recall_at_5 value: 37.354 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 21.795 - type: map_at_10 value: 37.614999999999995 - type: map_at_100 value: 40.037 - type: map_at_1000 value: 40.184999999999995 - type: map_at_3 value: 32.221 - type: map_at_5 value: 35.154999999999994 - type: mrr_at_1 value: 50.358000000000004 - type: mrr_at_10 value: 62.129 - type: mrr_at_100 value: 62.613 - type: mrr_at_1000 value: 62.62 - type: mrr_at_3 value: 59.272999999999996 - type: mrr_at_5 value: 61.138999999999996 - type: ndcg_at_1 value: 50.358000000000004 - type: ndcg_at_10 value: 48.362 - type: ndcg_at_100 value: 55.932 - type: ndcg_at_1000 value: 58.062999999999995 - type: ndcg_at_3 value: 42.111 - type: ndcg_at_5 value: 44.063 - type: precision_at_1 value: 50.358000000000004 - type: precision_at_10 value: 14.677999999999999 - type: precision_at_100 value: 2.2950000000000004 - type: precision_at_1000 value: 0.271 - type: precision_at_3 value: 31.77 - type: precision_at_5 value: 23.375 - type: recall_at_1 value: 21.795 - type: recall_at_10 value: 53.846000000000004 - type: recall_at_100 value: 78.952 - type: recall_at_1000 value: 90.41900000000001 - type: recall_at_3 value: 37.257 - type: recall_at_5 value: 44.661 - task: type: Retrieval dataset: name: MTEB DBPedia type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 9.728 - type: map_at_10 value: 22.691 - type: map_at_100 value: 31.734 - type: map_at_1000 value: 33.464 - type: map_at_3 value: 16.273 - type: map_at_5 value: 19.016 - type: mrr_at_1 value: 73.25 - type: mrr_at_10 value: 80.782 - type: mrr_at_100 value: 81.01899999999999 - type: mrr_at_1000 value: 81.021 - type: mrr_at_3 value: 79.583 - type: mrr_at_5 value: 80.146 - type: ndcg_at_1 value: 59.62499999999999 - type: ndcg_at_10 value: 46.304 - type: ndcg_at_100 value: 51.23 - type: ndcg_at_1000 value: 58.048 - type: ndcg_at_3 value: 51.541000000000004 - type: ndcg_at_5 value: 48.635 - type: precision_at_1 value: 73.25 - type: precision_at_10 value: 36.375 - type: precision_at_100 value: 11.53 - type: precision_at_1000 value: 2.23 - type: precision_at_3 value: 55.583000000000006 - type: precision_at_5 value: 47.15 - type: recall_at_1 value: 9.728 - type: recall_at_10 value: 28.793999999999997 - type: recall_at_100 value: 57.885 - type: recall_at_1000 value: 78.759 - type: recall_at_3 value: 17.79 - type: recall_at_5 value: 21.733 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 46.775 - type: f1 value: 41.89794273264891 - task: type: Retrieval dataset: name: MTEB FEVER type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 85.378 - type: map_at_10 value: 91.51 - type: map_at_100 value: 91.666 - type: map_at_1000 value: 91.676 - type: map_at_3 value: 90.757 - type: map_at_5 value: 91.277 - type: mrr_at_1 value: 91.839 - type: mrr_at_10 value: 95.49 - type: mrr_at_100 value: 95.493 - type: mrr_at_1000 value: 95.493 - type: mrr_at_3 value: 95.345 - type: mrr_at_5 value: 95.47200000000001 - type: ndcg_at_1 value: 91.839 - type: ndcg_at_10 value: 93.806 - type: ndcg_at_100 value: 94.255 - type: ndcg_at_1000 value: 94.399 - type: ndcg_at_3 value: 93.027 - type: ndcg_at_5 value: 93.51 - type: precision_at_1 value: 91.839 - type: precision_at_10 value: 10.93 - type: precision_at_100 value: 1.1400000000000001 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 34.873 - type: precision_at_5 value: 21.44 - type: recall_at_1 value: 85.378 - type: recall_at_10 value: 96.814 - type: recall_at_100 value: 98.386 - type: recall_at_1000 value: 99.21600000000001 - type: recall_at_3 value: 94.643 - type: recall_at_5 value: 95.976 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 32.190000000000005 - type: map_at_10 value: 53.605000000000004 - type: map_at_100 value: 55.550999999999995 - type: map_at_1000 value: 55.665 - type: map_at_3 value: 46.62 - type: map_at_5 value: 50.517999999999994 - type: mrr_at_1 value: 60.34 - type: mrr_at_10 value: 70.775 - type: mrr_at_100 value: 71.238 - type: mrr_at_1000 value: 71.244 - type: mrr_at_3 value: 68.72399999999999 - type: mrr_at_5 value: 69.959 - type: ndcg_at_1 value: 60.34 - type: ndcg_at_10 value: 63.226000000000006 - type: ndcg_at_100 value: 68.60300000000001 - type: ndcg_at_1000 value: 69.901 - type: ndcg_at_3 value: 58.048 - type: ndcg_at_5 value: 59.789 - type: precision_at_1 value: 60.34 - type: precision_at_10 value: 17.130000000000003 - type: precision_at_100 value: 2.29 - type: precision_at_1000 value: 0.256 - type: precision_at_3 value: 38.323 - type: precision_at_5 value: 27.87 - type: recall_at_1 value: 32.190000000000005 - type: recall_at_10 value: 73.041 - type: recall_at_100 value: 91.31 - type: recall_at_1000 value: 98.104 - type: recall_at_3 value: 53.70399999999999 - type: recall_at_5 value: 62.358999999999995 - task: type: Retrieval dataset: name: MTEB HotpotQA type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 43.511 - type: map_at_10 value: 58.15 - type: map_at_100 value: 58.95399999999999 - type: map_at_1000 value: 59.018 - type: map_at_3 value: 55.31700000000001 - type: map_at_5 value: 57.04900000000001 - type: mrr_at_1 value: 87.022 - type: mrr_at_10 value: 91.32000000000001 - type: mrr_at_100 value: 91.401 - type: mrr_at_1000 value: 91.403 - type: mrr_at_3 value: 90.77 - type: mrr_at_5 value: 91.156 - type: ndcg_at_1 value: 87.022 - type: ndcg_at_10 value: 68.183 - type: ndcg_at_100 value: 70.781 - type: ndcg_at_1000 value: 72.009 - type: ndcg_at_3 value: 64.334 - type: ndcg_at_5 value: 66.449 - type: precision_at_1 value: 87.022 - type: precision_at_10 value: 13.406 - type: precision_at_100 value: 1.542 - type: precision_at_1000 value: 0.17099999999999999 - type: precision_at_3 value: 39.023 - type: precision_at_5 value: 25.080000000000002 - type: recall_at_1 value: 43.511 - type: recall_at_10 value: 67.02900000000001 - type: recall_at_100 value: 77.11 - type: recall_at_1000 value: 85.294 - type: recall_at_3 value: 58.535000000000004 - type: recall_at_5 value: 62.70099999999999 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 92.0996 - type: ap value: 87.86206089096373 - type: f1 value: 92.07554547510763 - task: type: Retrieval dataset: name: MTEB MSMARCO type: mteb/msmarco config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 23.179 - type: map_at_10 value: 35.86 - type: map_at_100 value: 37.025999999999996 - type: map_at_1000 value: 37.068 - type: map_at_3 value: 31.921 - type: map_at_5 value: 34.172000000000004 - type: mrr_at_1 value: 23.926 - type: mrr_at_10 value: 36.525999999999996 - type: mrr_at_100 value: 37.627 - type: mrr_at_1000 value: 37.665 - type: mrr_at_3 value: 32.653 - type: mrr_at_5 value: 34.897 - type: ndcg_at_1 value: 23.910999999999998 - type: ndcg_at_10 value: 42.927 - type: ndcg_at_100 value: 48.464 - type: ndcg_at_1000 value: 49.533 - type: ndcg_at_3 value: 34.910000000000004 - type: ndcg_at_5 value: 38.937 - type: precision_at_1 value: 23.910999999999998 - type: precision_at_10 value: 6.758 - type: precision_at_100 value: 0.9520000000000001 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.838000000000001 - type: precision_at_5 value: 10.934000000000001 - type: recall_at_1 value: 23.179 - type: recall_at_10 value: 64.622 - type: recall_at_100 value: 90.135 - type: recall_at_1000 value: 98.301 - type: recall_at_3 value: 42.836999999999996 - type: recall_at_5 value: 52.512 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 96.59598723210215 - type: f1 value: 96.41913500001952 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 82.89557683538533 - type: f1 value: 63.379319722356264 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 78.93745796906524 - type: f1 value: 75.71616541785902 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 81.41223940820443 - type: f1 value: 81.2877893719078 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 35.03682528325662 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 32.942529406124 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.459949660460317 - type: mrr value: 32.70509582031616 - task: type: Retrieval dataset: name: MTEB NFCorpus type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 6.497 - type: map_at_10 value: 13.843 - type: map_at_100 value: 17.713 - type: map_at_1000 value: 19.241 - type: map_at_3 value: 10.096 - type: map_at_5 value: 11.85 - type: mrr_at_1 value: 48.916 - type: mrr_at_10 value: 57.764 - type: mrr_at_100 value: 58.251 - type: mrr_at_1000 value: 58.282999999999994 - type: mrr_at_3 value: 55.623999999999995 - type: mrr_at_5 value: 57.018 - type: ndcg_at_1 value: 46.594 - type: ndcg_at_10 value: 36.945 - type: ndcg_at_100 value: 34.06 - type: ndcg_at_1000 value: 43.05 - type: ndcg_at_3 value: 41.738 - type: ndcg_at_5 value: 39.330999999999996 - type: precision_at_1 value: 48.916 - type: precision_at_10 value: 27.43 - type: precision_at_100 value: 8.616 - type: precision_at_1000 value: 2.155 - type: precision_at_3 value: 39.112 - type: precision_at_5 value: 33.808 - type: recall_at_1 value: 6.497 - type: recall_at_10 value: 18.163 - type: recall_at_100 value: 34.566 - type: recall_at_1000 value: 67.15 - type: recall_at_3 value: 11.100999999999999 - type: recall_at_5 value: 14.205000000000002 - task: type: Retrieval dataset: name: MTEB NQ type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 31.916 - type: map_at_10 value: 48.123 - type: map_at_100 value: 49.103 - type: map_at_1000 value: 49.131 - type: map_at_3 value: 43.711 - type: map_at_5 value: 46.323 - type: mrr_at_1 value: 36.181999999999995 - type: mrr_at_10 value: 50.617999999999995 - type: mrr_at_100 value: 51.329 - type: mrr_at_1000 value: 51.348000000000006 - type: mrr_at_3 value: 47.010999999999996 - type: mrr_at_5 value: 49.175000000000004 - type: ndcg_at_1 value: 36.181999999999995 - type: ndcg_at_10 value: 56.077999999999996 - type: ndcg_at_100 value: 60.037 - type: ndcg_at_1000 value: 60.63499999999999 - type: ndcg_at_3 value: 47.859 - type: ndcg_at_5 value: 52.178999999999995 - type: precision_at_1 value: 36.181999999999995 - type: precision_at_10 value: 9.284 - type: precision_at_100 value: 1.149 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 22.006999999999998 - type: precision_at_5 value: 15.695 - type: recall_at_1 value: 31.916 - type: recall_at_10 value: 77.771 - type: recall_at_100 value: 94.602 - type: recall_at_1000 value: 98.967 - type: recall_at_3 value: 56.528 - type: recall_at_5 value: 66.527 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: mteb/quora config: default split: test revision: None metrics: - type: map_at_1 value: 71.486 - type: map_at_10 value: 85.978 - type: map_at_100 value: 86.587 - type: map_at_1000 value: 86.598 - type: map_at_3 value: 83.04899999999999 - type: map_at_5 value: 84.857 - type: mrr_at_1 value: 82.32000000000001 - type: mrr_at_10 value: 88.64 - type: mrr_at_100 value: 88.702 - type: mrr_at_1000 value: 88.702 - type: mrr_at_3 value: 87.735 - type: mrr_at_5 value: 88.36 - type: ndcg_at_1 value: 82.34 - type: ndcg_at_10 value: 89.67 - type: ndcg_at_100 value: 90.642 - type: ndcg_at_1000 value: 90.688 - type: ndcg_at_3 value: 86.932 - type: ndcg_at_5 value: 88.408 - type: precision_at_1 value: 82.34 - type: precision_at_10 value: 13.675999999999998 - type: precision_at_100 value: 1.544 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 38.24 - type: precision_at_5 value: 25.068 - type: recall_at_1 value: 71.486 - type: recall_at_10 value: 96.844 - type: recall_at_100 value: 99.843 - type: recall_at_1000 value: 99.996 - type: recall_at_3 value: 88.92099999999999 - type: recall_at_5 value: 93.215 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 59.75758437908334 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 68.03497914092789 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: mteb/scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 5.808 - type: map_at_10 value: 16.059 - type: map_at_100 value: 19.048000000000002 - type: map_at_1000 value: 19.43 - type: map_at_3 value: 10.953 - type: map_at_5 value: 13.363 - type: mrr_at_1 value: 28.7 - type: mrr_at_10 value: 42.436 - type: mrr_at_100 value: 43.599 - type: mrr_at_1000 value: 43.62 - type: mrr_at_3 value: 38.45 - type: mrr_at_5 value: 40.89 - type: ndcg_at_1 value: 28.7 - type: ndcg_at_10 value: 26.346000000000004 - type: ndcg_at_100 value: 36.758 - type: ndcg_at_1000 value: 42.113 - type: ndcg_at_3 value: 24.254 - type: ndcg_at_5 value: 21.506 - type: precision_at_1 value: 28.7 - type: precision_at_10 value: 13.969999999999999 - type: precision_at_100 value: 2.881 - type: precision_at_1000 value: 0.414 - type: precision_at_3 value: 22.933 - type: precision_at_5 value: 19.220000000000002 - type: recall_at_1 value: 5.808 - type: recall_at_10 value: 28.310000000000002 - type: recall_at_100 value: 58.475 - type: recall_at_1000 value: 84.072 - type: recall_at_3 value: 13.957 - type: recall_at_5 value: 19.515 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 82.39274129958557 - type: cos_sim_spearman value: 79.78021235170053 - type: euclidean_pearson value: 79.35335401300166 - type: euclidean_spearman value: 79.7271870968275 - type: manhattan_pearson value: 79.35256263340601 - type: manhattan_spearman value: 79.76036386976321 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 83.99130429246708 - type: cos_sim_spearman value: 73.88322811171203 - type: euclidean_pearson value: 80.7569419170376 - type: euclidean_spearman value: 73.82542155409597 - type: manhattan_pearson value: 80.79468183847625 - type: manhattan_spearman value: 73.87027144047784 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 84.88548789489907 - type: cos_sim_spearman value: 85.07535893847255 - type: euclidean_pearson value: 84.6637222061494 - type: euclidean_spearman value: 85.14200626702456 - type: manhattan_pearson value: 84.75327892344734 - type: manhattan_spearman value: 85.24406181838596 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 82.88140039325008 - type: cos_sim_spearman value: 79.61211268112362 - type: euclidean_pearson value: 81.29639728816458 - type: euclidean_spearman value: 79.51284578041442 - type: manhattan_pearson value: 81.3381797137111 - type: manhattan_spearman value: 79.55683684039808 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 85.16716737270485 - type: cos_sim_spearman value: 86.14823841857738 - type: euclidean_pearson value: 85.36325733440725 - type: euclidean_spearman value: 86.04919691402029 - type: manhattan_pearson value: 85.3147511385052 - type: manhattan_spearman value: 86.00676205857764 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 80.34266645861588 - type: cos_sim_spearman value: 81.59914035005882 - type: euclidean_pearson value: 81.15053076245988 - type: euclidean_spearman value: 81.52776915798489 - type: manhattan_pearson value: 81.1819647418673 - type: manhattan_spearman value: 81.57479527353556 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 89.38263326821439 - type: cos_sim_spearman value: 89.10946308202642 - type: euclidean_pearson value: 88.87831312540068 - type: euclidean_spearman value: 89.03615865973664 - type: manhattan_pearson value: 88.79835539970384 - type: manhattan_spearman value: 88.9766156339753 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 70.1574915581685 - type: cos_sim_spearman value: 70.59144980004054 - type: euclidean_pearson value: 71.43246306918755 - type: euclidean_spearman value: 70.5544189562984 - type: manhattan_pearson value: 71.4071414609503 - type: manhattan_spearman value: 70.31799126163712 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 83.36215796635351 - type: cos_sim_spearman value: 83.07276756467208 - type: euclidean_pearson value: 83.06690453635584 - type: euclidean_spearman value: 82.9635366303289 - type: manhattan_pearson value: 83.04994049700815 - type: manhattan_spearman value: 82.98120125356036 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 86.92530011616722 - type: mrr value: 96.21826793395421 - task: type: Retrieval dataset: name: MTEB SciFact type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 65.75 - type: map_at_10 value: 77.701 - type: map_at_100 value: 78.005 - type: map_at_1000 value: 78.006 - type: map_at_3 value: 75.48 - type: map_at_5 value: 76.927 - type: mrr_at_1 value: 68.333 - type: mrr_at_10 value: 78.511 - type: mrr_at_100 value: 78.704 - type: mrr_at_1000 value: 78.704 - type: mrr_at_3 value: 77 - type: mrr_at_5 value: 78.083 - type: ndcg_at_1 value: 68.333 - type: ndcg_at_10 value: 82.42699999999999 - type: ndcg_at_100 value: 83.486 - type: ndcg_at_1000 value: 83.511 - type: ndcg_at_3 value: 78.96300000000001 - type: ndcg_at_5 value: 81.028 - type: precision_at_1 value: 68.333 - type: precision_at_10 value: 10.667 - type: precision_at_100 value: 1.127 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 31.333 - type: precision_at_5 value: 20.133000000000003 - type: recall_at_1 value: 65.75 - type: recall_at_10 value: 95.578 - type: recall_at_100 value: 99.833 - type: recall_at_1000 value: 100 - type: recall_at_3 value: 86.506 - type: recall_at_5 value: 91.75 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.75247524752476 - type: cos_sim_ap value: 94.16065078045173 - type: cos_sim_f1 value: 87.22986247544205 - type: cos_sim_precision value: 85.71428571428571 - type: cos_sim_recall value: 88.8 - type: dot_accuracy value: 99.74554455445545 - type: dot_ap value: 93.90633887037264 - type: dot_f1 value: 86.9873417721519 - type: dot_precision value: 88.1025641025641 - type: dot_recall value: 85.9 - type: euclidean_accuracy value: 99.75247524752476 - type: euclidean_ap value: 94.17466319018055 - type: euclidean_f1 value: 87.3405299313052 - type: euclidean_precision value: 85.74181117533719 - type: euclidean_recall value: 89 - type: manhattan_accuracy value: 99.75445544554455 - type: manhattan_ap value: 94.27688371923577 - type: manhattan_f1 value: 87.74002954209749 - type: manhattan_precision value: 86.42095053346266 - type: manhattan_recall value: 89.1 - type: max_accuracy value: 99.75445544554455 - type: max_ap value: 94.27688371923577 - type: max_f1 value: 87.74002954209749 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 71.26500637517056 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 39.17507906280528 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 52.4848744828509 - type: mrr value: 53.33678168236992 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.599864323827887 - type: cos_sim_spearman value: 30.91116204665598 - type: dot_pearson value: 30.82637894269936 - type: dot_spearman value: 30.957573868416066 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: mteb/trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.23600000000000002 - type: map_at_10 value: 1.892 - type: map_at_100 value: 11.586 - type: map_at_1000 value: 27.761999999999997 - type: map_at_3 value: 0.653 - type: map_at_5 value: 1.028 - type: mrr_at_1 value: 88 - type: mrr_at_10 value: 94 - type: mrr_at_100 value: 94 - type: mrr_at_1000 value: 94 - type: mrr_at_3 value: 94 - type: mrr_at_5 value: 94 - type: ndcg_at_1 value: 82 - type: ndcg_at_10 value: 77.48899999999999 - type: ndcg_at_100 value: 60.141 - type: ndcg_at_1000 value: 54.228 - type: ndcg_at_3 value: 82.358 - type: ndcg_at_5 value: 80.449 - type: precision_at_1 value: 88 - type: precision_at_10 value: 82.19999999999999 - type: precision_at_100 value: 61.760000000000005 - type: precision_at_1000 value: 23.684 - type: precision_at_3 value: 88 - type: precision_at_5 value: 85.6 - type: recall_at_1 value: 0.23600000000000002 - type: recall_at_10 value: 2.117 - type: recall_at_100 value: 14.985000000000001 - type: recall_at_1000 value: 51.107 - type: recall_at_3 value: 0.688 - type: recall_at_5 value: 1.1039999999999999 - task: type: Retrieval dataset: name: MTEB Touche2020 type: mteb/touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 2.3040000000000003 - type: map_at_10 value: 9.025 - type: map_at_100 value: 15.312999999999999 - type: map_at_1000 value: 16.954 - type: map_at_3 value: 4.981 - type: map_at_5 value: 6.32 - type: mrr_at_1 value: 24.490000000000002 - type: mrr_at_10 value: 39.835 - type: mrr_at_100 value: 40.8 - type: mrr_at_1000 value: 40.8 - type: mrr_at_3 value: 35.034 - type: mrr_at_5 value: 37.687 - type: ndcg_at_1 value: 22.448999999999998 - type: ndcg_at_10 value: 22.545 - type: ndcg_at_100 value: 35.931999999999995 - type: ndcg_at_1000 value: 47.665 - type: ndcg_at_3 value: 23.311 - type: ndcg_at_5 value: 22.421 - type: precision_at_1 value: 24.490000000000002 - type: precision_at_10 value: 20.408 - type: precision_at_100 value: 7.815999999999999 - type: precision_at_1000 value: 1.553 - type: precision_at_3 value: 25.169999999999998 - type: precision_at_5 value: 23.265 - type: recall_at_1 value: 2.3040000000000003 - type: recall_at_10 value: 15.693999999999999 - type: recall_at_100 value: 48.917 - type: recall_at_1000 value: 84.964 - type: recall_at_3 value: 6.026 - type: recall_at_5 value: 9.066 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 82.6074 - type: ap value: 23.187467098602013 - type: f1 value: 65.36829506379657 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 63.16355404640635 - type: f1 value: 63.534725639863346 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 50.91004094411276 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.55301901412649 - type: cos_sim_ap value: 75.25312618556728 - type: cos_sim_f1 value: 68.76561719140429 - type: cos_sim_precision value: 65.3061224489796 - type: cos_sim_recall value: 72.61213720316623 - type: dot_accuracy value: 86.29671574178936 - type: dot_ap value: 75.11910195501207 - type: dot_f1 value: 68.44048376830045 - type: dot_precision value: 66.12546125461255 - type: dot_recall value: 70.92348284960423 - type: euclidean_accuracy value: 86.5828217202122 - type: euclidean_ap value: 75.22986344900924 - type: euclidean_f1 value: 68.81267797449549 - type: euclidean_precision value: 64.8238861674831 - type: euclidean_recall value: 73.3245382585752 - type: manhattan_accuracy value: 86.61262442629791 - type: manhattan_ap value: 75.24401608557328 - type: manhattan_f1 value: 68.80473982483257 - type: manhattan_precision value: 67.21187720181177 - type: manhattan_recall value: 70.47493403693932 - type: max_accuracy value: 86.61262442629791 - type: max_ap value: 75.25312618556728 - type: max_f1 value: 68.81267797449549 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.10688089416696 - type: cos_sim_ap value: 84.17862178779863 - type: cos_sim_f1 value: 76.17305208781748 - type: cos_sim_precision value: 71.31246641590543 - type: cos_sim_recall value: 81.74468740375731 - type: dot_accuracy value: 88.1844995536927 - type: dot_ap value: 84.33816725235876 - type: dot_f1 value: 76.43554032918746 - type: dot_precision value: 74.01557767200346 - type: dot_recall value: 79.0190945488143 - type: euclidean_accuracy value: 88.07001203089223 - type: euclidean_ap value: 84.12267000814985 - type: euclidean_f1 value: 76.12232600180778 - type: euclidean_precision value: 74.50604541433205 - type: euclidean_recall value: 77.81028641823221 - type: manhattan_accuracy value: 88.06419063142779 - type: manhattan_ap value: 84.11648917164187 - type: manhattan_f1 value: 76.20579953925474 - type: manhattan_precision value: 72.56772755762935 - type: manhattan_recall value: 80.22790267939637 - type: max_accuracy value: 88.1844995536927 - type: max_ap value: 84.33816725235876 - type: max_f1 value: 76.43554032918746 --- <!-- **English** | [中文](./README_zh.md) --> # gte-large-en-v1.5 We introduce `gte-v1.5` series, upgraded `gte` embeddings that support the context length of up to **8192**, while further enhancing model performance. The models are built upon the `transformer++` encoder [backbone](https://huggingface.co/Alibaba-NLP/new-impl) (BERT + RoPE + GLU). The `gte-v1.5` series achieve state-of-the-art scores on the MTEB benchmark within the same model size category and prodvide competitive on the LoCo long-context retrieval tests (refer to [Evaluation](#evaluation)). We also present the [`gte-Qwen1.5-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct), a SOTA instruction-tuned multi-lingual embedding model that ranked 2nd in MTEB and 1st in C-MTEB. <!-- Provide a longer summary of what this model is. --> - **Developed by:** Institute for Intelligent Computing, Alibaba Group - **Model type:** Text Embeddings - **Paper:** [mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval](https://arxiv.org/pdf/2407.19669) <!-- - **Demo [optional]:** [More Information Needed] --> ### Model list | Models | Language | Model Size | Max Seq. Length | Dimension | MTEB-en | LoCo | |:-----: | :-----: |:-----: |:-----: |:-----: | :-----: | :-----: | |[`gte-Qwen1.5-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct)| Multiple | 7720 | 32768 | 4096 | 67.34 | 87.57 | |[`gte-large-en-v1.5`](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | English | 434 | 8192 | 1024 | 65.39 | 86.71 | |[`gte-base-en-v1.5`](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | English | 137 | 8192 | 768 | 64.11 | 87.44 | ## How to Get Started with the Model Use the code below to get started with the model. ```python # Requires transformers>=4.36.0 import torch.nn.functional as F from transformers import AutoModel, AutoTokenizer input_texts = [ "what is the capital of China?", "how to implement quick sort in python?", "Beijing", "sorting algorithms" ] model_path = 'Alibaba-NLP/gte-large-en-v1.5' tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModel.from_pretrained(model_path, trust_remote_code=True) # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=8192, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = outputs.last_hidden_state[:, 0] # (Optionally) normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:1] @ embeddings[1:].T) * 100 print(scores.tolist()) ``` **It is recommended to install xformers and enable unpadding for acceleration, refer to [enable-unpadding-and-xformers](https://huggingface.co/Alibaba-NLP/new-impl#recommendation-enable-unpadding-and-acceleration-with-xformers).** Use with sentence-transformers: ```python # Requires sentence_transformers>=2.7.0 from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim sentences = ['That is a happy person', 'That is a very happy person'] model = SentenceTransformer('Alibaba-NLP/gte-large-en-v1.5', trust_remote_code=True) embeddings = model.encode(sentences) print(cos_sim(embeddings[0], embeddings[1])) ``` Use with `transformers.js`: ```js // npm i @xenova/transformers import { pipeline, dot } from '@xenova/transformers'; // Create feature extraction pipeline const extractor = await pipeline('feature-extraction', 'Alibaba-NLP/gte-large-en-v1.5', { quantized: false, // Comment out this line to use the quantized version }); // Generate sentence embeddings const sentences = [ "what is the capital of China?", "how to implement quick sort in python?", "Beijing", "sorting algorithms" ] const output = await extractor(sentences, { normalize: true, pooling: 'cls' }); // Compute similarity scores const [source_embeddings, ...document_embeddings ] = output.tolist(); const similarities = document_embeddings.map(x => 100 * dot(source_embeddings, x)); console.log(similarities); // [41.86354093370361, 77.07076371259589, 37.02981979677899] ``` ## Training Details ### Training Data - Masked language modeling (MLM): `c4-en` - Weak-supervised contrastive pre-training (CPT): [GTE](https://arxiv.org/pdf/2308.03281.pdf) pre-training data - Supervised contrastive fine-tuning: [GTE](https://arxiv.org/pdf/2308.03281.pdf) fine-tuning data ### Training Procedure To enable the backbone model to support a context length of 8192, we adopted a multi-stage training strategy. The model first undergoes preliminary MLM pre-training on shorter lengths. And then, we resample the data, reducing the proportion of short texts, and continue the MLM pre-training. The entire training process is as follows: - MLM-512: lr 2e-4, mlm_probability 0.3, batch_size 4096, num_steps 300000, rope_base 10000 - MLM-2048: lr 5e-5, mlm_probability 0.3, batch_size 4096, num_steps 30000, rope_base 10000 - [MLM-8192](https://huggingface.co/Alibaba-NLP/gte-en-mlm-large): lr 5e-5, mlm_probability 0.3, batch_size 1024, num_steps 30000, rope_base 160000 - CPT: max_len 512, lr 5e-5, batch_size 28672, num_steps 100000 - Fine-tuning: TODO ## Evaluation ### MTEB The results of other models are retrieved from [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard). The gte evaluation setting: `mteb==1.2.0, fp16 auto mix precision, max_length=8192`, and set ntk scaling factor to 2 (equivalent to rope_base * 2). | Model Name | Param Size (M) | Dimension | Sequence Length | Average (56) | Class. (12) | Clust. (11) | Pair Class. (3) | Reran. (4) | Retr. (15) | STS (10) | Summ. (1) | |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | [**gte-large-en-v1.5**](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | 409 | 1024 | 8192 | **65.39** | 77.75 | 47.95 | 84.63 | 58.50 | 57.91 | 81.43 | 30.91 | | [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) | 335 | 1024 | 512 | 64.68 | 75.64 | 46.71 | 87.2 | 60.11 | 54.39 | 85 | 32.71 | | [multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) | 560 | 1024 | 514 | 64.41 | 77.56 | 47.1 | 86.19 | 58.58 | 52.47 | 84.78 | 30.39 | | [bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5)| 335 | 1024 | 512 | 64.23 | 75.97 | 46.08 | 87.12 | 60.03 | 54.29 | 83.11 | 31.61 | | [**gte-base-en-v1.5**](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | 137 | 768 | 8192 | **64.11** | 77.17 | 46.82 | 85.33 | 57.66 | 54.09 | 81.97 | 31.17 | | [bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)| 109 | 768 | 512 | 63.55 | 75.53 | 45.77 | 86.55 | 58.86 | 53.25 | 82.4 | 31.07 | ### LoCo | Model Name | Dimension | Sequence Length | Average (5) | QsmsumRetrieval | SummScreenRetrieval | QasperAbastractRetrieval | QasperTitleRetrieval | GovReportRetrieval | |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | [gte-qwen1.5-7b](https://huggingface.co/Alibaba-NLP/gte-qwen1.5-7b) | 4096 | 32768 | 87.57 | 49.37 | 93.10 | 99.67 | 97.54 | 98.21 | | [gte-large-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-v1.5) |1024 | 8192 | 86.71 | 44.55 | 92.61 | 99.82 | 97.81 | 98.74 | | [gte-base-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-v1.5) | 768 | 8192 | 87.44 | 49.91 | 91.78 | 99.82 | 97.13 | 98.58 | ## Citation If you find our paper or models helpful, please consider citing them as follows: ``` @article{zhang2024mgte, title={mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval}, author={Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Wen and Dai, Ziqi and Tang, Jialong and Lin, Huan and Yang, Baosong and Xie, Pengjun and Huang, Fei and others}, journal={arXiv preprint arXiv:2407.19669}, year={2024} } @article{li2023towards, title={Towards general text embeddings with multi-stage contrastive learning}, author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan}, journal={arXiv preprint arXiv:2308.03281}, year={2023} } ```
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
medspaner/xlm-roberta-large-spanish-trials-cases-temp-ent
medspaner
token-classification
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,696
1,727
14
0
--- license: cc-by-nc-4.0 metrics: - precision - recall - f1 - accuracy tags: - generated_from_trainer widget: - text: Edad ≥ 18 años (en todos los centros), o edad ≥12 y <18 años con peso igual o superior a 40kg - text: Estudio realizado en un hospital desde julio de 2010 hasta diciembre de 2011 (18 meses) - text: Pacientes que hayan recibido bifosfonatos diarios, semanales o mensuales durante al menos 3 años. - text: 50 g (40 g la noche anterior y 10 g por la mañana) de L-glutamina model-index: - name: xlm-roberta-large-spanish-trials-cases-temp-ents results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-large-spanish-trials-cases-medic-attr This named entity recognition model detects temporal expressions (TIMEX) according to the [TimeML scheme](https://en.wikipedia.org/wiki/ISO-TimeML) ([Pustejovsky et al. 2005](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.85.5610&rep=rep1&type=pdf)), in addition to Age entities: - Age: e.g. *18 años* - Date: e.g. *2022*, *26 de noviembre* - Duration: e.g. *3 horas* - Frequency: e.g. *semanal* - Time: e.g. *noche* The model achieves the following results on the test set (results are averaged over 5 evaluation rounds): - Precision: 0.906 (±0.006) - Recall: 0.901 (±0.006) - F1: 0.904 (±0.004) - Accuracy: 0.996 (±0.001) ## Model description This model adapts the pre-trained model [xlm-roberta-large-spanish-clinical](https://huggingface.co/llange/xlm-roberta-large-spanish-clinical), presented in [Lange et al. (2022)](https://academic.oup.com/bioinformatics/article/38/12/3267/6575884). It is fine-tuned to conduct medical named entity recognition on texts about in Spanish. The model is fine-tuned on the [CT-EBM-ES corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z) and 100 clinical cases with Creative Commons License. If you use this model, please, cite as follows: ``` @article{campillosetal2024,         title = {{Hybrid tool for semantic annotation and concept extraction of medical texts in Spanish}},         author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n},         journal = {BMC Bioinformatics}, year={2024}, publisher={BioMed Central} } ``` ## Intended uses & limitations **Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision* This model is intended for a generalist purpose, and may have bias and/or any other undesirable distortions. Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence. The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models. **Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas* La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables. Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial. El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos. ## Training and evaluation data The data used for fine-tuning are the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/). It is a collection of 1200 texts about clinical trials studies and clinical trials announcements: - 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO) - 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos If you use the CT-EBM-ES resource, please, cite as follows: ``` @article{campillosetal-midm2021,         title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},         author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},         journal = {BMC Medical Informatics and Decision Making},         volume={21}, number={1}, pages={1--19}, year={2021}, publisher={BioMed Central} } ``` To fine-tune the model, we also used 100 clinical cases with Creative Commons licence. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results - optimizer: Adam - num_epochs: average 14.8 epochs (±2.39); trained with early stopping if no improvement after 5 epochs (early stopping patience: 5) ### Training results (test set; average and standard deviation of 5 rounds with different seeds) | Precision | Recall | F1 | Accuracy | |:--------------:|:--------------:|:--------------:|:--------------:| | 0.906 (±0.006) | 0.901 (±0.006) | 0.904 (±0.004) | 0.996 (±0.001) | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4 - Tokenizers 0.11.6
[ "NAMED_ENTITY_RECOGNITION" ]
[ "SCIELO" ]
BioNLP
RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-8bits
RichardErkhov
text-generation
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:2101.00027", "arxiv:2201.07311", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
1,713
1,713
4
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) pythia-1b-deduped-v0 - bnb 8bits - Model creator: https://huggingface.co/EleutherAI/ - Original model: https://huggingface.co/EleutherAI/pythia-1b-deduped-v0/ Original model description: --- language: - en tags: - pytorch - causal-lm - pythia - pythia_v0 license: apache-2.0 datasets: - EleutherAI/the_pile_deduplicated --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research. It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. All Pythia models are available [on Hugging Face](https://huggingface.co/models?other=pythia). The Pythia model suite was deliberately designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. ## Pythia-1B-deduped ### Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther. ai](mailto:[email protected]). <figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ### Uses and Limitations #### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. To enable the study of how language models change in the course of training, we provide 143 evenly spaced intermediate checkpoints per model. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-1B-deduped for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please conduct your own risk and bias assessment. #### Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not a in itself a product and cannot be used for human-facing interactions. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-1B-deduped has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-1B-deduped will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “understand” human instructions. #### Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token deemed statistically most likely by the model need not produce the most “accurate” text. Never rely on Pythia-1B-deduped to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-1B-deduped may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-1B-deduped. ### Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br> For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). ### Training #### Training data Pythia-1B-deduped was trained on the Pile **after the dataset has been globally deduplicated**.<br> [The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/). #### Training procedure All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models trained for the equivalent of 143000 steps at a batch size of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch size of 4M tokens listed were originally trained for 71500 steps instead, with checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for consistency with all 2M batch models, so `step1000` is the first checkpoint for `pythia-1.4b` that was saved (corresponding to step 500 in training), and `step1000` is likewise the first `pythia-6.9b` checkpoint that was saved (corresponding to 1000 “actual” steps).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX- 20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ### Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. <details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge – Challenge Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/> </details> ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. <figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
[ "QUESTION_ANSWERING", "TRANSLATION" ]
[ "SCIQ" ]
TBD
mogaio/pr_ebsa_fr_tran_merged25_e1_middle_offsets
mogaio
text-classification
[ "setfit", "safetensors", "xlm-roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2", "model-index", "region:us" ]
1,702
1,702
49
0
--- base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2 library_name: setfit metrics: - accuracy_score - classification_report pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: 'Adil Hussain Adil Hussain est reconnaissant d''avoir reçu l''enseignement de l''acteur Naseeruddin Shah à l''époque où il fréquentait l''École nationale d''art dramatique' - text: 'Bien que leurs opinions sur la question de savoir si les migrants sont un avantage ou un fardeau soient plus mitigées, de nettes majorités d''électeurs de toute la ville de New York, de la banlieue et du nord de l''État ont déclaré que l''État devrait essayer de ralentir l''afflux de migrants, plutôt que d''en accepter davantage et de s''efforcer d''assimiler les nouveaux arrivants Les démocrates aspirent à renverser six circonscriptions détenues par les républicains que M. Biden a remportées en 2020, notamment celle de M Les républicains se sont emparés de la crise des migrants, donnant un avant-goût des campagnes de l''année prochaine Les républicains ont surenchéri : Elise Stefanik, la New-Yorkaise qui dirige la conférence du parti démocrate à la Chambre des représentants, Suite à la page suivante a déclaré à Politico la semaine dernière que le parti allait consacrer 100 millions de dollars aux campagnes dans les circonscriptions de New York Des problèmes à venir pour les démocrates de New York en 2024 ? Les dirigeants démocrates de New York se débattent depuis des mois avec le problème de l''hébergement des dizaines de milliers de migrants qui ont été transportés par bus jusqu''à New York et laissés à sa charge Des problèmes à venir pour les démocrates de New York en 2024 ? Les dirigeants démocrates de New York se débattent depuis des mois avec le problème de l''hébergement des dizaines de milliers de migrants qui ont été transportés par bus jusqu''à New York et laissés à sa charge. Mais une autre préoccupation se profile alors que la crise se poursuit sans qu''aucune issue ne soit en vue : les retombées potentielles pour leur parti lors des élections de l''année prochaine Les républicains ont tendance à se sentir en sécurité lorsqu''ils parlent d''immigration - comme les démocrates le font pour l''avortement - et sont clairement à l''attaque sur la question des migrants à New York, tandis que les démocrates sont sur la défensive, a déclaré Kyle Kondik, directeur de la communication pour le Centre de politique de l''Université de Virginie, au réseau USA Today Plus de 100 000 migrants ont été transportés à New York depuis la frontière sud depuis le printemps 2022. Environ 60 000 d''entre eux sont hébergés dans la ville, et plus de 2 100 ont été transportés dans des hôtels situés dans sept comtés au nord de la ville, de Yonkers à la périphérie de Buffalo, où ils sont logés aux frais de la ville Les démocrates doivent y remporter des victoires pour gagner cinq sièges à la Chambre et faire du député Hakeem Jeffries, de Brooklyn, le prochain président de la Chambre des représentants Les publicités d''attaque des républicains s''écrivent pratiquement d''elles-mêmes à partir d''un flot de titres et d''images télévisées, alors que le gouverneur Kathy Hochul, le maire de New York Eric Adams et le président Joe Biden - tous démocrates - se rejettent mutuellement la faute et s''échangent des coups de feu pour savoir qui devrait en faire le plus Isaac Goldberg, un stratège démocrate qui a travaillé sur plusieurs campagnes électorales à New York, a affirmé qu''il était beaucoup trop tôt pour prédire l''impact politique de la crise des migrants, soulignant que les élections de 2024 n''auront lieu que dans 14 mois et que de nombreuses questions tout aussi urgentes pourraient se poser' - text: 'LE CANDIDAT A LA PRESIDENCE RAMASWAMY VEUT METTRE FIN AU SYSTEME DE VISA H-1B AUX ETATS-UNIS Décrivant le programme de visas H-1B comme une forme de "servitude", Vivek Ramaswamy, candidat républicain indien-américain à l''élection présidentielle, a promis de "vider" le système basé sur la loterie et de le remplacer par un système d''admission méritocratique s''il remporte les élections présidentielles de 2024' - text: 'Smith Hal Sparks Catherine Zeta-Jones son-Sampras Chris Owen Donald Glover ("Queer as Folk") a 54 ans Smith Hal Sparks Catherine Zeta-Jones son-Sampras Chris Owen Donald Glover ("Queer as Folk") a 54 ans. a 54 ans. Acteur ("Je sais ce que vous avez fait l''été dernier") a 50 ans' - text: 'Trump profiter de sa célébrité jusqu''à la Maison-Blanche. "Cela a tué Howard parce qu''il était le roi de tous les médias Il a poursuivi en disant que Trump ne laisserait pas ses partisans s''approcher de l''une de ses propriétés. "Les gens qui votent pour Trump, pour la plupart, ne les laisseraient même pas entrer dans un putain d''hôtel [ "Si être réveillé signifie que je ne peux pas soutenir Trump, ce que je pense que cela signifie, ou que je soutiens les personnes qui veulent être transgenres ou que je suis pour le vaccin, appelez-moi réveillé comme vous le voulez" "Les gens qui votent pour Trump, pour la plupart, ne les laisseraient même pas entrer dans un putain d''hôtel [...]. Allez à Mar-a-lago, voyez s''il y a des gens qui vous ressemblent" Stern a également abordé les affirmations de Trump et de ses partisans selon lesquelles Joe Biden a remporté l''élection américaine de 2020 grâce à des votes frauduleux "Et soudain, Trump a transformé Howard, qui était le roi de tous les médias, en prince Harry de tous les médias. Tout le monde s''en fout "Trump avait l''habitude de participer à l''émission de Stern chaque semaine. Ils étaient amis. Alors cette idée que Trump est le pire type qui ait jamais marché sur la surface de la terre, pourquoi traîniez-vous avec lui ?" M Mais Stern, qui par le passé a été accusé de racisme et de sexisme dans nombre de ses sketches à l''antenne, a été un critique virulent de Trump tout au long de sa présidence et, plus récemment, alors qu''il se prépare à se présenter à nouveau en 2024. En 2021, M "Combien de temps allons-nous continuer à élire des gens qui ont perdu l''élection ?" Il a poursuivi en qualifiant les partisans de Trump de "nigauds". "Mon Dieu, j''ai l''impression d''être dans une nation de nigauds. J''espère qu''il y a encore des gens brillants et dynamiques qui aiment ce pays", a-t-il déclaré Alors cette idée que Trump est le pire type qui ait jamais marché sur la surface de la terre, pourquoi traîniez-vous avec lui ?" M. Failla a déclaré que cela avait "tué" M Si "woke" signifie que je ne peux pas soutenir Trump, ce que je pense que cela signifie, ou que je soutiens les personnes qui veulent être transgenres ou que je suis pour le vaccin, appelez-moi "woke" comme vous voulez Celui qui se décrit comme le "roi de tous les médias" a critiqué ouvertement l''ancien président américain Donald Trump, les anti-vaxx et, plus récemment, Lauren Boebert, qu''il a critiquée pour son comportement obscène dans un théâtre de Denver au début du mois "L''omnipotence médiatique de Donald Trump a brisé Howard Stern. C''est très important", a déclaré Failla dans la vidéo (selon OK ! Magazine). "Trump avait l''habitude de participer à l''émission de Stern chaque semaine L''aversion d''Howard Stern pour Donald Trump, c''est "tout l''ego". Si "woke" signifie que je ne peux pas soutenir Trump, ce que je pense que cela signifie, ou que je soutiens les personnes qui veulent être transgenres ou que je suis pour le vaccin, appelez-moi "woke" comme vous voulez Trump l''année prochaine. "Je sais que je lui botterai le cul", a-t-il déclaré aux auditeurs. L''année suivante, Stern a déclaré qu''il envisageait de se lancer dans la course à la présidence "pour que le pays soit à nouveau juste" En réponse, Trump a partagé sur sa plateforme Truth Social un clip de Fox News dans lequel l''animateur Jimmy Failla critique Stern. "L''omnipotence médiatique de Donald Trump a brisé Howard Stern "Je vais faire la chose très simple qui remettra le pays sur le droit chemin : un vote, une personne", a expliqué Stern, affirmant que Trump a en fait perdu l''élection de 2016 contre Hillary Clinton qui a remporté le vote populaire - mais pas le collège électoral' inference: true model-index: - name: SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy_score value: 0.9434954007884363 name: Accuracy_Score - type: classification_report value: '0': precision: 0.9361702127659575 recall: 0.9322033898305084 f1-score: 0.9341825902335456 support: 236 '1': precision: 0.9333333333333333 recall: 0.9302325581395349 f1-score: 0.9317803660565723 support: 301 '2': precision: 0.9646017699115044 recall: 0.9732142857142857 f1-score: 0.9688888888888889 support: 224 accuracy: 0.9434954007884363 macro avg: precision: 0.9447017720035985 recall: 0.945216744561443 f1-score: 0.9449506150596689 support: 761 weighted avg: precision: 0.9434169513880108 recall: 0.9434954007884363 f1-score: 0.9434482162802315 support: 761 name: Classification_Report --- # SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 128 tokens - **Number of Classes:** 3 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | pos | <ul><li>"Les PHL lèvent 1,26 milliard de dollars grâce aux obligations en dollars de détail\nLE GOUVERNEMENT PHILIPPIN a levé 1,26 milliard de dollars lors de la première émission d'obligations de détail en dollars (RDB) sous l'administration Marcos, a déclaré le ministère des Finances (DoF)"</li><li>"Atom Egoyan revient à Salomé, l'opéra qu'il a monté en 1996, avec Seven Veils\nAtom Egoyan n'a pas été surpris lorsque la Canadian Opera Company lui a demandé de remonter Salomé pour la saison 2022-23 Atom Egoyan revient à Salomé, l'opéra qu'il a monté en 1996, avec Seven Veils\nAtom Egoyan n'a pas été surpris lorsque la Canadian Opera Company lui a demandé de remonter Salomé pour la saison 2022-23. Avec ses éléments de film et de vidéo, son interprétation psychologique et sombre de l'opéra de Richard Strauss avait un solide palmarès de reprises - depuis sa création en 1996, elle avait été présentée deux fois de plus à la COC et avait été reprise par plusieurs autres compagnies"</li><li>'Paul Simon présente un documentaire sur sa carrière\nAprès un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public Paul Simon présente un documentaire sur sa carrière\nAprès un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public.\nTORONTO >> Après un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public "Il n\'y a pas de raison que vous soyez épuisés", a dit Simon à la foule après la première du documentaire d\'Alex Gibney "In Restless Dreams : The Music of Paul Simon" d\'Alex Gibney, dimanche au Festival international du film de Toronto.\nSimon, âgé de 81 ans, n\'avait pas regardé le film avant la première, et il ne l\'a pas regardé non plus dimanche TORONTO >> Après un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public.\n"Il n\'y a pas de raison que vous soyez épuisés", a dit Simon à la foule après la première du documentaire d\'Alex Gibney "In Restless Dreams : The Music of Paul Simon" d\'Alex Gibney, dimanche au Festival international du film de Toronto'</li></ul> | | neg | <ul><li>'Le groupe Al-Mostaqilla de l\'université du Koweït a appelé les étudiants à organiser un sit-in à l\'université du Koweït lundi pour protester contre la décision de mettre fin aux classes mixtes La décision a été prise la semaine dernière par le nouveau ministre de l\'éducation, Adel Al-Mane, et le directeur par intérim de l\'université du Koweït, Fayez Al-Dhafiri, et mise en œuvre mercredi, trois jours seulement avant le début de la nouvelle année universitaire à la faculté de droit L\'association a également demandé au gouvernement de "cesser ses interventions politiques et médiatiques injustifiées" dans les affaires de l\'université du Koweït.\nL\'association a appelé le directeur par intérim de l\'université du Koweït à ne pas céder aux pressions politiques et médiatiques et à s\'efforcer de protéger l\'indépendance de l\'université Dhafiri a déclaré que la décision avait été prise en application de la loi de 1996 qui interdisait l\'enseignement mixte à l\'université du Koweït, malgré une décision de la Cour constitutionnelle de 2015 autorisant l\'enseignement mixte lorsqu\'il était nécessaire et dans des cas exceptionnels Parallèlement, l\'association des professeurs de l\'université du Koweït a publié samedi une déclaration demandant aux députés et au gouvernement de "cesser d\'interférer dans les affaires de l\'université du Koweït" et de maintenir l\'indépendance de l\'université "L\'université du Koweït était, est et sera toujours le porte-drapeau de la connaissance et des valeurs, à l\'abri de toute influence extérieure Le député Abdulwahab Al-Essa a reproché à l\'administration de l\'université du Koweït d\'avoir succombé à la pression politique au détriment de l\'intérêt public, ajoutant que l\'université du Koweït avait appliqué correctement une décision de la cour constitutionnelle autorisant les classes mixtes chaque fois que cela était nécessaire'</li><li>"L'immigration étant l'un des défis les plus difficiles à relever pour le président Joe Biden et apparaissant comme un enjeu majeur des élections de l'année prochaine, l'administration délocalise essentiellement la question en s'appuyant sur les pays d'Amérique centrale et d'Amérique du Sud pour empêcher les migrants de se diriger vers le nord"</li><li>'Lors d\'une réunion d\'information mardi, le porte-parole de l\'armée, le lieutenant-colonel Richard Hecht, a suggéré que les Palestiniens tentent de quitter la bande de Gaza par le poste-frontière de Rafah, en Égypte.\nLa perspective d\'un exode des habitants de Gaza vers le territoire égyptien a alarmé les autorités égyptiennes La question qui se pose est de savoir si Israël lancera une offensive terrestre dans la bande de Gaza, une bande de terre de 25 miles de long coincée entre Israël, l\'Égypte et la mer Méditerranée, où vivent 2,3 millions de personnes et qui est gouvernée par le Hamas depuis 2007 Israël pilonne la bande de Gaza ; les habitants se précipitent pour se mettre à l\'abri\nJERUSALEM - Les avions de combat israéliens ont bombardé la bande de Gaza quartier par quartier mardi, réduisant les bâtiments en ruines et poussant les habitants à se précipiter pour se mettre à l\'abri dans ce minuscule territoire isolé, alors qu\'Israël promet des représailles pour l\'attaque surprise du Hamas du week-end qui "se répercuteront Les autorités égyptiennes discutent avec Israël et les États-Unis afin de mettre en place des corridors humanitaires dans la bande de Gaza pour acheminer l\'aide, a déclaré un responsable égyptien. Des négociations sont en cours avec les Israéliens pour que la zone autour du point de passage de Rafah entre l\'Égypte et Gaza soit déclarée "zone d\'interdiction de feu", a déclaré le responsable, sous couvert d\'anonymat car il n\'était pas autorisé à parler aux médias'</li></ul> | | obj | <ul><li>"L'économie pèse sur les Américains Ils sont plus nombreux à faire confiance à Trump qu'à Biden pour alléger leur fardeau\nWASHINGTON - Linda Muñoz a peur de l'économie Trump, le candidat républicain à la primaire de 2024, pour améliorer l'économie, avec une marge de 47 % à 36 %. L'écart est de 46 %-26 % en faveur de M. Trump parmi les électeurs indépendants Presque tous les républicains interrogés ont exprimé leur pessimisme à l'égard de l'économie, selon le sondage : 96 % d'entre eux estiment que la situation se dégrade au lieu de s'améliorer Le logement. L'essence. Tous ces éléments poussent les gens à s'endetter de plus en plus, disent-ils.\nSelon le sondage, près de 70 % des Américains estiment que la situation économique se dégrade, tandis que 22 % seulement estiment qu'elle s'améliore L'économie pèse sur les Américains Ils sont plus nombreux à faire confiance à Trump qu'à Biden pour alléger leur fardeau\nWASHINGTON - Linda Muñoz a peur de l'économie. Elle a puisé dans son épargne d'urgence cette année. Et elle ne croit pas que le président Joe Biden ressente sa douleur L'épicerie. Le logement. L'essence. Tous ces éléments poussent les gens à s'endetter de plus en plus, disent-ils.\nSelon le sondage, près de 70 % des Américains estiment que la situation économique se dégrade, tandis que 22 % seulement estiment qu'elle s'améliore"</li><li>'Le Pentagone va interroger d\'autres militaires sur l\'attentat suicide de l\'aéroport de Kaboul en 2021\nLe commandement central du Pentagone a ordonné l\'audition d\'une vingtaine de militaires supplémentaires qui se trouvaient à l\'aéroport de Kaboul lorsque des kamikazes ont attaqué pendant le retrait chaotique des forces américaines d\'Afghanistan, alors que les critiques persistent sur le fait que l\'attaque meurtrière aurait pu être stoppée Certaines familles des personnes tuées ou blessées se sont plaintes que le Pentagone n\'avait pas fait preuve de suffisamment de transparence au sujet de l\'attentat à la bombe qui a tué 170 Afghans\net 13 militaires américains.\nL\'enquête du commandement central américain a conclu en novembre 2021 qu\'étant donné la détérioration de la sécurité à la porte de l\'Abbaye de l\'aéroport alors que les Afghans cherchaient de plus en plus à fuir, "l\'attaque n\'aurait pas pu être évitée au niveau tactique sans dégrader la mission visant à maximiser le nombre d\'évacués" Le Pentagone a déclaré que l\'examen de l\'attentat suicide n\'avait révélé aucune identification préalable d\'un attaquant possible ni aucune demande d\'"escalade des règles d\'engagement existantes" régissant l\'utilisation de la force par les troupes américaines'</li><li>'Les retombées de la guerre se répercutent sur les lieux de travail aux États-Unis.\nNEW YORK - Les retombées de la guerre entre Israël et le Hamas se sont répandues sur les lieux de travail partout dans le monde, les dirigeants de grandes entreprises exprimant leur point de vue tandis que les travailleurs se plaignent de ne pas être entendus "À quoi me sert mon travail si je compromets ma propre morale et mon éthique ?\nL\'un des conflits les plus importants s\'est produit chez Starbucks après que Starbucks Workers United, un syndicat représentant 9 000 travailleurs dans plus de 360 magasins aux États-Unis, a tweeté "Solidarité avec la Palestine" deux jours après l\'attaque du Hamas. Le tweet a été supprimé au bout de 40 minutes, mais l\'entreprise a déclaré qu\'il avait donné lieu à plus de 1 000 plaintes, à des actes de vandalisme et à des affrontements dans ses magasins NEW YORK - Les retombées de la guerre entre Israël et le Hamas se sont répandues sur les lieux de travail partout dans le monde, les dirigeants de grandes entreprises exprimant leur point de vue tandis que les travailleurs se plaignent de ne pas être entendus'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy_Score | Classification_Report | |:--------|:---------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | **all** | 0.9435 | {'0': {'precision': 0.9361702127659575, 'recall': 0.9322033898305084, 'f1-score': 0.9341825902335456, 'support': 236}, '1': {'precision': 0.9333333333333333, 'recall': 0.9302325581395349, 'f1-score': 0.9317803660565723, 'support': 301}, '2': {'precision': 0.9646017699115044, 'recall': 0.9732142857142857, 'f1-score': 0.9688888888888889, 'support': 224}, 'accuracy': 0.9434954007884363, 'macro avg': {'precision': 0.9447017720035985, 'recall': 0.945216744561443, 'f1-score': 0.9449506150596689, 'support': 761}, 'weighted avg': {'precision': 0.9434169513880108, 'recall': 0.9434954007884363, 'f1-score': 0.9434482162802315, 'support': 761}} | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("mogaio/pr_ebsa_fr_tran_merged25_e1_middle_offsets") # Run inference preds = model("Adil Hussain Adil Hussain est reconnaissant d'avoir reçu l'enseignement de l'acteur Naseeruddin Shah à l'époque où il fréquentait l'École nationale d'art dramatique") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:---------|:-----| | Word count | 9 | 247.2638 | 2089 | | Label | Training Sample Count | |:------|:----------------------| | neg | 913 | | obj | 1216 | | pos | 911 | ### Training Hyperparameters - batch_size: (8, 8) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 1 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0013 | 1 | 0.3703 | - | | 0.0658 | 50 | 0.3145 | - | | 0.1316 | 100 | 0.1839 | - | | 0.1974 | 150 | 0.2558 | - | | 0.2632 | 200 | 0.2683 | - | | 0.3289 | 250 | 0.1572 | - | | 0.3947 | 300 | 0.1953 | - | | 0.4605 | 350 | 0.171 | - | | 0.5263 | 400 | 0.2326 | - | | 0.5921 | 450 | 0.1762 | - | | 0.6579 | 500 | 0.2818 | - | | 0.7237 | 550 | 0.2733 | - | | 0.7895 | 600 | 0.195 | - | | 0.8553 | 650 | 0.2104 | - | | 0.9211 | 700 | 0.2124 | - | | 0.9868 | 750 | 0.0818 | - | | 1.0526 | 800 | 0.1046 | - | | 1.1184 | 850 | 0.1633 | - | | 1.1842 | 900 | 0.3207 | - | | 1.25 | 950 | 0.2703 | - | | 1.3158 | 1000 | 0.1934 | - | | 1.3816 | 1050 | 0.2547 | - | | 1.4474 | 1100 | 0.0933 | - | | 1.5132 | 1150 | 0.2102 | - | | 1.5789 | 1200 | 0.0699 | - | | 1.6447 | 1250 | 0.1778 | - | | 1.7105 | 1300 | 0.1796 | - | | 1.7763 | 1350 | 0.0221 | - | | 1.8421 | 1400 | 0.2154 | - | | 1.9079 | 1450 | 0.1683 | - | | 1.9737 | 1500 | 0.3096 | - | | 2.0395 | 1550 | 0.201 | - | | 2.1053 | 1600 | 0.1954 | - | | 2.1711 | 1650 | 0.2301 | - | | 2.2368 | 1700 | 0.1141 | - | | 2.3026 | 1750 | 0.1949 | - | | 2.3684 | 1800 | 0.164 | - | | 2.4342 | 1850 | 0.2307 | - | | 2.5 | 1900 | 0.1912 | - | | 2.5658 | 1950 | 0.2349 | - | | 2.6316 | 2000 | 0.0922 | - | | 2.6974 | 2050 | 0.0702 | - | | 2.7632 | 2100 | 0.1089 | - | | 2.8289 | 2150 | 0.1711 | - | | 2.8947 | 2200 | 0.1432 | - | | 2.9605 | 2250 | 0.2739 | - | | 3.0263 | 2300 | 0.1889 | - | | 3.0921 | 2350 | 0.1036 | - | | 3.1579 | 2400 | 0.1372 | - | | 3.2237 | 2450 | 0.028 | - | | 3.2895 | 2500 | 0.1739 | - | | 3.3553 | 2550 | 0.142 | - | | 3.4211 | 2600 | 0.0838 | - | | 3.4868 | 2650 | 0.0657 | - | | 3.5526 | 2700 | 0.0054 | - | | 3.6184 | 2750 | 0.0426 | - | | 3.6842 | 2800 | 0.1974 | - | | 3.75 | 2850 | 0.0279 | - | | 3.8158 | 2900 | 0.1326 | - | | 3.8816 | 2950 | 0.1614 | - | | 3.9474 | 3000 | 0.1251 | - | | 4.0132 | 3050 | 0.1174 | - | | 4.0789 | 3100 | 0.1948 | - | | 4.1447 | 3150 | 0.0555 | - | | 4.2105 | 3200 | 0.0064 | - | | 4.2763 | 3250 | 0.064 | - | | 4.3421 | 3300 | 0.0013 | - | | 4.4079 | 3350 | 0.135 | - | | 4.4737 | 3400 | 0.0574 | - | | 4.5395 | 3450 | 0.174 | - | | 4.6053 | 3500 | 0.2199 | - | | 4.6711 | 3550 | 0.387 | - | | 4.7368 | 3600 | 0.114 | - | | 4.8026 | 3650 | 0.0853 | - | | 4.8684 | 3700 | 0.0325 | - | | 4.9342 | 3750 | 0.019 | - | | 5.0 | 3800 | 0.0572 | - | | 0.0013 | 1 | 0.1435 | - | | 0.0658 | 50 | 0.0969 | - | | 0.1316 | 100 | 0.1085 | - | | 0.1974 | 150 | 0.0271 | - | | 0.2632 | 200 | 0.0138 | - | | 0.3289 | 250 | 0.058 | - | | 0.3947 | 300 | 0.1205 | - | | 0.4605 | 350 | 0.0788 | - | | 0.5263 | 400 | 0.1449 | - | | 0.5921 | 450 | 0.0383 | - | | 0.6579 | 500 | 0.0338 | - | | 0.7237 | 550 | 0.1253 | - | | 0.7895 | 600 | 0.069 | - | | 0.8553 | 650 | 0.104 | - | | 0.9211 | 700 | 0.0462 | - | | 0.9868 | 750 | 0.1975 | - | | 1.0526 | 800 | 0.0241 | - | | 1.1184 | 850 | 0.0426 | - | | 1.1842 | 900 | 0.0519 | - | | 1.25 | 950 | 0.0815 | - | | 1.3158 | 1000 | 0.1839 | - | | 1.3816 | 1050 | 0.0198 | - | | 1.4474 | 1100 | 0.0128 | - | | 1.5132 | 1150 | 0.1645 | - | | 1.5789 | 1200 | 0.0019 | - | | 1.6447 | 1250 | 0.0557 | - | | 1.7105 | 1300 | 0.0098 | - | | 1.7763 | 1350 | 0.001 | - | | 1.8421 | 1400 | 0.1557 | - | | 1.9079 | 1450 | 0.1286 | - | | 1.9737 | 1500 | 0.094 | - | | 2.0395 | 1550 | 0.0059 | - | | 2.1053 | 1600 | 0.0227 | - | | 2.1711 | 1650 | 0.0899 | - | | 2.2368 | 1700 | 0.0053 | - | | 2.3026 | 1750 | 0.0021 | - | | 2.3684 | 1800 | 0.0114 | - | | 2.4342 | 1850 | 0.1163 | - | | 2.5 | 1900 | 0.0959 | - | | 2.5658 | 1950 | 0.0252 | - | | 2.6316 | 2000 | 0.0921 | - | | 2.6974 | 2050 | 0.1159 | - | | 2.7632 | 2100 | 0.0026 | - | | 2.8289 | 2150 | 0.1211 | - | | 2.8947 | 2200 | 0.1843 | - | | 2.9605 | 2250 | 0.0014 | - | | 3.0263 | 2300 | 0.0085 | - | | 3.0921 | 2350 | 0.0839 | - | | 3.1579 | 2400 | 0.2372 | - | | 3.2237 | 2450 | 0.0213 | - | | 3.2895 | 2500 | 0.0155 | - | | 3.3553 | 2550 | 0.1128 | - | | 3.4211 | 2600 | 0.0945 | - | | 3.4868 | 2650 | 0.0917 | - | | 3.5526 | 2700 | 0.0011 | - | | 3.6184 | 2750 | 0.0024 | - | | 3.6842 | 2800 | 0.0044 | - | | 3.75 | 2850 | 0.121 | - | | 3.8158 | 2900 | 0.0056 | - | | 3.8816 | 2950 | 0.003 | - | | 3.9474 | 3000 | 0.0899 | - | | 4.0132 | 3050 | 0.0157 | - | | 4.0789 | 3100 | 0.1188 | - | | 4.1447 | 3150 | 0.001 | - | | 4.2105 | 3200 | 0.0222 | - | | 4.2763 | 3250 | 0.1209 | - | | 4.3421 | 3300 | 0.1085 | - | | 4.4079 | 3350 | 0.0054 | - | | 4.4737 | 3400 | 0.0009 | - | | 4.5395 | 3450 | 0.0015 | - | | 4.6053 | 3500 | 0.003 | - | | 4.6711 | 3550 | 0.0009 | - | | 4.7368 | 3600 | 0.0003 | - | | 4.8026 | 3650 | 0.0009 | - | | 4.8684 | 3700 | 0.03 | - | | 4.9342 | 3750 | 0.1206 | - | | 5.0 | 3800 | 0.0003 | - | | 0.0013 | 1 | 0.2045 | - | | 0.0658 | 50 | 0.0078 | - | | 0.1316 | 100 | 0.0087 | - | | 0.1974 | 150 | 0.0386 | - | | 0.2632 | 200 | 0.1015 | - | | 0.3289 | 250 | 0.0022 | - | | 0.3947 | 300 | 0.0291 | - | | 0.4605 | 350 | 0.0013 | - | | 0.5263 | 400 | 0.0022 | - | | 0.5921 | 450 | 0.1324 | - | | 0.6579 | 500 | 0.113 | - | | 0.7237 | 550 | 0.0011 | - | | 0.7895 | 600 | 0.1723 | - | | 0.8553 | 650 | 0.0049 | - | | 0.9211 | 700 | 0.206 | - | | 0.9868 | 750 | 0.1683 | - | | 1.0526 | 800 | 0.0954 | - | | 1.1184 | 850 | 0.018 | - | | 1.1842 | 900 | 0.1854 | - | | 1.25 | 950 | 0.0342 | - | | 1.3158 | 1000 | 0.0015 | - | | 1.3816 | 1050 | 0.0062 | - | | 1.4474 | 1100 | 0.1187 | - | | 1.5132 | 1150 | 0.0048 | - | | 1.5789 | 1200 | 0.0011 | - | | 1.6447 | 1250 | 0.002 | - | | 1.7105 | 1300 | 0.092 | - | | 1.7763 | 1350 | 0.1245 | - | | 1.8421 | 1400 | 0.0009 | - | | 1.9079 | 1450 | 0.1185 | - | | 1.9737 | 1500 | 0.0017 | - | | 2.0395 | 1550 | 0.008 | - | | 2.1053 | 1600 | 0.0049 | - | | 2.1711 | 1650 | 0.0083 | - | | 2.2368 | 1700 | 0.0026 | - | | 2.3026 | 1750 | 0.0081 | - | | 2.3684 | 1800 | 0.0036 | - | | 2.4342 | 1850 | 0.0016 | - | | 2.5 | 1900 | 0.0017 | - | | 2.5658 | 1950 | 0.0014 | - | | 2.6316 | 2000 | 0.0017 | - | | 2.6974 | 2050 | 0.002 | - | | 2.7632 | 2100 | 0.1022 | - | | 2.8289 | 2150 | 0.0004 | - | | 2.8947 | 2200 | 0.0007 | - | | 2.9605 | 2250 | 0.0794 | - | | 3.0263 | 2300 | 0.0183 | - | | 3.0921 | 2350 | 0.0377 | - | | 3.1579 | 2400 | 0.029 | - | | 3.2237 | 2450 | 0.0003 | - | | 3.2895 | 2500 | 0.0961 | - | | 3.3553 | 2550 | 0.0008 | - | | 3.4211 | 2600 | 0.0873 | - | | 3.4868 | 2650 | 0.0501 | - | | 3.5526 | 2700 | 0.0029 | - | | 3.6184 | 2750 | 0.0008 | - | | 3.6842 | 2800 | 0.0004 | - | | 3.75 | 2850 | 0.0011 | - | | 3.8158 | 2900 | 0.0518 | - | | 3.8816 | 2950 | 0.0002 | - | | 3.9474 | 3000 | 0.1115 | - | | 4.0132 | 3050 | 0.0129 | - | | 4.0789 | 3100 | 0.0005 | - | | 4.1447 | 3150 | 0.0012 | - | | 4.2105 | 3200 | 0.1086 | - | | 4.2763 | 3250 | 0.0199 | - | | 4.3421 | 3300 | 0.0004 | - | | 4.4079 | 3350 | 0.0001 | - | | 4.4737 | 3400 | 0.0832 | - | | 4.5395 | 3450 | 0.0003 | - | | 4.6053 | 3500 | 0.0041 | - | | 4.6711 | 3550 | 0.1146 | - | | 4.7368 | 3600 | 0.0027 | - | | 4.8026 | 3650 | 0.0002 | - | | 4.8684 | 3700 | 0.0544 | - | | 4.9342 | 3750 | 0.0002 | - | | 5.0 | 3800 | 0.0046 | - | | 0.0013 | 1 | 0.0015 | - | | 0.0658 | 50 | 0.1973 | - | | 0.1316 | 100 | 0.0106 | - | | 0.1974 | 150 | 0.0744 | - | | 0.2632 | 200 | 0.1033 | - | | 0.3289 | 250 | 0.0425 | - | | 0.3947 | 300 | 0.1125 | - | | 0.4605 | 350 | 0.0018 | - | | 0.5263 | 400 | 0.0019 | - | | 0.5921 | 450 | 0.0002 | - | | 0.6579 | 500 | 0.0007 | - | | 0.7237 | 550 | 0.1393 | - | | 0.7895 | 600 | 0.0002 | - | | 0.8553 | 650 | 0.0043 | - | | 0.9211 | 700 | 0.0339 | - | | 0.9868 | 750 | 0.0002 | - | | 0.0013 | 1 | 0.0007 | - | | 0.0658 | 50 | 0.0419 | - | | 0.1316 | 100 | 0.0068 | - | | 0.1974 | 150 | 0.1401 | - | | 0.2632 | 200 | 0.0423 | - | | 0.3289 | 250 | 0.1122 | - | | 0.3947 | 300 | 0.0037 | - | | 0.4605 | 350 | 0.005 | - | | 0.5263 | 400 | 0.0006 | - | | 0.5921 | 450 | 0.0006 | - | | 0.6579 | 500 | 0.0016 | - | | 0.7237 | 550 | 0.1244 | - | | 0.7895 | 600 | 0.0016 | - | | 0.8553 | 650 | 0.0028 | - | | 0.9211 | 700 | 0.002 | - | | 0.9868 | 750 | 0.057 | - | | 0.0013 | 1 | 0.1396 | - | | 0.0658 | 50 | 0.0366 | - | | 0.1316 | 100 | 0.0021 | - | | 0.1974 | 150 | 0.1088 | - | | 0.2632 | 200 | 0.0449 | - | | 0.3289 | 250 | 0.0187 | - | | 0.3947 | 300 | 0.0017 | - | | 0.4605 | 350 | 0.1262 | - | | 0.5263 | 400 | 0.0052 | - | | 0.5921 | 450 | 0.1188 | - | | 0.6579 | 500 | 0.0002 | - | | 0.7237 | 550 | 0.0006 | - | | 0.7895 | 600 | 0.0758 | - | | 0.8553 | 650 | 0.025 | - | | 0.9211 | 700 | 0.0052 | - | | 0.9868 | 750 | 0.1985 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.1 - Sentence Transformers: 2.2.2 - Transformers: 4.35.2 - PyTorch: 2.1.0+cu121 - Datasets: 2.15.0 - Tokenizers: 0.15.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION" ]
[ "CAS" ]
Non_BioNLP
ictumuk/vietnameses_legal_final
ictumuk
sentence-similarity
[ "sentence-transformers", "safetensors", "roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:4391", "loss:CoSENTLoss", "arxiv:1908.10084", "base_model:keepitreal/vietnamese-sbert", "base_model:finetune:keepitreal/vietnamese-sbert", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,741
1,741
7
0
--- base_model: keepitreal/vietnamese-sbert library_name: sentence-transformers metrics: - cosine_accuracy - cosine_accuracy_threshold - cosine_f1 - cosine_f1_threshold - cosine_precision - cosine_recall - cosine_ap - cosine_mcc pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:4391 - loss:CoSENTLoss widget: - source_sentence: Đơn vị nào có trách nhiệm xây dựng và triển khai hoạt động phổ biến kiến thức, nâng cao nhận thức về an ninh mạng cho cơ quan, tổ chức, cá nhân của tỉnh? sentences: - 'Phòng, chống khủng bố mạng 1. Cơ quan nhà nước có thẩm quyền có trách nhiệm áp dụng biện pháp theo quy định của Luật này, Điều 29 của Luật An toàn thông tin mạng và pháp luật về phòng, chống khủng bố để xử lý khủng bố mạng. Chủ quản hệ thống thông tin thường xuyên rà soát, kiểm tra hệ thống thông tin thuộc phạm vi quản lý nhằm loại trừ nguy cơ khủng bố mạng. 3. Khi phát hiện dấu hiệu, hành vi khủng bố mạng, cơ quan, tổ chức, cá nhân phải kịp thời báo cho lực lượng bảo vệ an ninh mạng. Cơ quan tiếp nhận tin báo có trách nhiệm tiếp nhận đầy đủ tin báo về khủng bố mạng và kịp thời thông báo cho lực lượng chuyên trách bảo vệ an ninh mạng. 4. Bộ Công an chủ trì, phối hợp với Bộ, ngành có liên quan triển khai công tác phòng, chống khủng bố mạng, áp dụng biện pháp vô hiệu hóa nguồn khủng bố mạng, xử lý khủng bố mạng, hạn chế đến mức thấp nhất hậu quả xảy ra đối với hệ thống thông tin, trừ trường hợp quy định tại khoản 5 và khoản 6 Điều này. 5. Bộ Quốc phòng chủ trì, phối hợp với Bộ, ngành có liên quan triển khai công tác phòng, chống khủng bố mạng, áp dụng biện pháp xử lý khủng bố mạng xảy ra đối với hệ thống thông tin quân sự. 6. Ban Cơ yếu Chính phủ chủ trì, phối hợp với Bộ, ngành có liên quan triển khai công tác phòng, chống khủng bố mạng, áp dụng biện pháp xử lý khủng bố mạng xảy ra đối với hệ thống thông tin cơ yếu thuộc Ban Cơ yếu Chính phủ.' - '1. Công trình đường cao tốc khi đưa vào khai thác, sử dụng phải được quản lý, khai thác và bảo trì theo quy định tại Luật Giao thông đường bộ, Nghị định số 32/2014/NĐ-CP ngày 22 tháng 4 năm 2014 của Chính phủ về quản lý, khai thác và bảo trì công trình đường cao tốc (sau đây gọi tắt là Nghị định số 32/2014/NĐ-CP), Nghị định số 11/2010/NĐ-CP ngày 24 tháng 02 năm 2010 của Chính phủ quy định về quản lý và bảo vệ kết cấu hạ tầng giao thông đường bộ (sau đây gọi tắt là Nghị định số 11/2010/NĐ-CP), Nghị định số 100/2013/NĐ-CP ngày 03 tháng 9 năm 2013 của Chính phủ về sửa đổi, bổ sung một số điều của Nghị định số 11/2010/NĐ-CP ngày 24 tháng 02 năm 2010 (sau đây gọi tắt là Nghị định số 100/2013/NĐ-CP), Nghị định số 114/2010/NĐ-CP ngày 06 tháng 12 năm 2010 của Chính phủ về bảo trì công trình xây dựng (sau đây gọi tắt là Nghị định số 114/2010/NĐ-CP), Nghị định số 10/2013/NĐ-CP ngày 11 tháng 01 năm 2013 của Chính phủ quy định việc quản lý, sử dụng và khai thác tài sản kết cấu hạ tầng giao thông đường bộ (sau đây gọi tắt là Nghị định số 10/2013/NĐ-CP), các văn bản quy phạm pháp luật có liên quan và quy định tại Thông tư này. 2. Việc quản lý, khai thác và bảo trì công trình đường cao tốc phải thực hiện theo quy trình vận hành khai thác, quy trình bảo trì, tiêu chuẩn, quy chuẩn kỹ thuật về quản lý, khai thác và bảo trì công trình đường cao tốc được cơ quan có thẩm quyền ban hành. 3. Quy trình vận hành khai thác, quy trình bảo trì công trình đường cao tốc được lập phù hợp với các bộ phận công trình, thiết bị lắp đặt vào công trình, loại công trình, cấp công trình và mục đích sử dụng công trình; được thể hiện rõ ràng, công khai bằng tiếng Việt trên giấy, đĩa từ hoặc các phương tiện khác.' - 'Triển khai hoạt động bảo vệ an ninh mạng trong cơ quan nhà nước, tổ chức chính trị ở trung ương và địa phương 1. Nội dung triển khai hoạt động bảo vệ an ninh mạng bao gồm: a) Xây dựng, hoàn thiện quy định, quy chế sử dụng mạng máy tính nội bộ, mạng máy tính có kết nối mạng Internet; phương án bảo đảm an ninh mạng đối với hệ thống thông tin; phương án ứng phó, khắc phục sự cố an ninh mạng; b) Ứng dụng, triển khai phương án, biện pháp, công nghệ bảo vệ an ninh mạng đối với hệ thống thông tin và thông tin, tài liệu được lưu trữ, soạn thảo, truyền đưa trên hệ thống thông tin thuộc phạm vi quản lý; c) Tổ chức bồi dưỡng kiến thức về an ninh mạng cho cán bộ, công chức, viên chức, người lao động; nâng cao năng lực bảo vệ an ninh mạng cho lực lượng bảo vệ an ninh mạng; d) Bảo vệ an ninh mạng trong hoạt động cung cấp dịch vụ công trên không gian mạng, cung cấp, trao đổi, thu thập thông tin với cơ quan, tổ chức, cá nhân, chia sẻ thông tin trong nội bộ và với cơ quan khác hoặc trong hoạt động khác theo quy định của Chính phủ; đ) Đầu tư, xây dựng hạ tầng cơ sở vật chất phù hợp với điều kiện bảo đảm triển khai hoạt động bảo vệ an ninh mạng đối với hệ thống thông tin; e) Kiểm tra an ninh mạng đối với hệ thống thông tin; phòng, chống hành vi vi phạm pháp luật về an ninh mạng; ứng phó, khắc phục sự cố an ninh mạng. Người đứng đầu cơ quan, tổ chức có trách nhiệm triển khai hoạt động bảo vệ an ninh mạng thuộc quyền quản lý.' - source_sentence: Người trồng cây thuốc phiện với số lượng 3.000 cây trở lên thì bị phạt tù từ bao lâu đến bao lâu? sentences: - Doanh nghiệp được xem xét cấp Giấy phép hoạt động dịch vụ đưa người lao động đi làm việc ở nước ngoài (sau đây gọi tắt là Giấy phép) là doanh nghiệp được thành lập và hoạt động theo Luật Doanh nghiệp có 100% vốn điều lệ của các tổ chức, cá nhân Việt Nam. - 'Tội trồng cây thuốc phiện, cây côca, cây cần sa hoặc các loại cây khác có chứa chất ma túy 1. Người nào trồng cây thuốc phiện, cây côca, cây cần sa hoặc các loại cây khác có chứa chất ma túy thuộc một trong các trường hợp sau đây, thì bị phạt tù từ 06 tháng đến 03 năm: a) Đã được giáo dục 02 lần và đã được tạo điều kiện ổn định cuộc sống; b) Đã bị xử phạt vi phạm hành chính về hành vi này hoặc đã bị kết án về tội này, chưa được xóa án tích mà còn vi phạm; c) Với số lượng từ 500 cây đến dưới 3.000 cây. Phạm tội thuộc một trong các trường hợp sau đây, thì bị phạt tù từ 03 năm đến 07 năm: a) Có tổ chức; b) Với số lượng 3.000 cây trở lên; c) Tái phạm nguy hiểm. 3. Người phạm tội còn có thể bị phạt tiền từ 5.000.000 đồng đến 50.000.000 đồng. 4. Người nào phạm tội thuộc khoản 1 Điều này, nhưng đã tự nguyện phá bỏ, giao nộp cho cơ quan chức năng có thẩm quyền trước khi thu hoạch, thì có thể được miễn trách nhiệm hình sự.' - Người chấp hành án và mọi công dân có quyền tố cáo với cơ quan, người có thẩm quyền về hành vi vi phạm pháp luật của bất kỳ người có thẩm quyền nào trong thi hành án hình sự mà gây thiệt hại hoặc đe dọa gây thiệt hại lợi ích của Nhà nước, quyền, lợi ích hợp pháp của cơ quan, tổ chức, cá nhân. - source_sentence: Việc ứng dụng mô hình thông tin công trình trong quản lý dự án đầu tư xây dựng được quy định như thế nào? sentences: - '1. Thành viên hợp danh bị chấm dứt tư cách trong trường hợp sau đây: a) Tự nguyện rút vốn khỏi công ty; b) Chết, mất tích, bị hạn chế hoặc mất năng lực hành vi dân sự, có khó khăn trong nhận thức, làm chủ hành vi; c) Bị khai trừ khỏi công ty; d) Chấp hành hình phạt tù hoặc bị Tòa án cấm hành nghề hoặc làm công việc nhất định theo quy định của pháp luật; đ) Trường hợp khác do Điều lệ công ty quy định. 2. Thành viên hợp danh có quyền rút vốn khỏi công ty nếu được Hội đồng thành viên chấp thuận. Trường hợp này, thành viên muốn rút vốn khỏi công ty phải thông báo bằng văn bản yêu cầu rút vốn chậm nhất là 06 tháng trước ngày rút vốn; chỉ được rút vốn vào thời điểm kết thúc năm tài chính và báo cáo tài chính của năm tài chính đó đã được thông qua. 3. Thành viên hợp danh bị khai trừ khỏi công ty trong trường hợp sau đây: a) Không có khả năng góp vốn hoặc không góp vốn như đã cam kết sau khi công ty đã có yêu cầu lần thứ hai; b) Vi phạm quy định tại Điều 180 của Luật này; c) Tiến hành công việc kinh doanh không trung thực, không cẩn trọng hoặc có hành vi không thích hợp khác gây thiệt hại nghiêm trọng đến lợi ích của công ty và thành viên khác; d) Không thực hiện đúng nghĩa vụ của thành viên hợp danh. 4. Trường hợp chấm dứt tư cách thành viên của thành viên bị hạn chế hoặc mất năng lực hành vi dân sự, có khó khăn trong nhận thức, làm chủ hành vi thì phần vốn góp của thành viên đó được hoàn trả công bằng và thỏa đáng. 5. Trong thời hạn 02 năm kể từ ngày chấm dứt tư cách thành viên hợp danh theo quy định tại các điểm a, c, d và đ khoản 1 Điều này thì người đó vẫn phải liên đới chịu trách nhiệm bằng toàn bộ tài sản của mình đối với các khoản nợ của công ty đã phát sinh trước ngày chấm dứt tư cách thành viên. 6. Sau khi chấm dứt tư cách thành viên hợp danh, nếu tên của thành viên đó đã được sử dụng thành một phần hoặc toàn bộ tên công ty thì người đó hoặc người thừa kế, người đại diện theo pháp luật của họ có quyền yêu cầu công ty chấm dứt việc sử dụng tên đó.' - '1. Phạt cảnh cáo hoặc phạt tiền từ 100.000 đồng đến 500.000 đồng đối với hành vi thông báo không đủ nội dung theo quy định sau khi được lựa chọn thực hiện đề án, dự án điều tra cơ bản, tư vấn lập quy hoạch tài nguyên nước. 2. Phạt tiền từ 3.000.000 đồng đến 5.000.000 đồng đối với hành vi không thông báo đến cơ quan nhà nước có thẩm quyền theo quy định sau khi được lựa chọn thực hiện đề án, dự án điều tra cơ bản, tư vấn lập quy hoạch tài nguyên nước. 3. Phạt tiền từ 20.000.000 đồng đến 30.000.000 đồng đối với hành vi của người phụ trách kỹ thuật của đề án, dự án điều tra cơ bản tài nguyên nước, tư vấn lập quy hoạch tài nguyên nước cùng một thời điểm thực hiện từ 03 đề án, dự án điều tra cơ bản tài nguyên nước hoặc từ 04 dự án lập quy hoạch tài nguyên nước trở lên. 4. Phạt tiền từ 30.000.000 đồng đến 40.000.000 đồng đối với hành vi kê khai không trung thực thông tin trong hồ sơ năng lực lập đề án, báo cáo trong thực hiện đề án, dự án điều tra cơ bản, tư vấn lập quy hoạch tài nguyên nước. 5. Hình thức xử phạt bổ sung: Đình chỉ hoạt động thực hiện đề án, dự án điều tra cơ bản, tư vấn lập quy hoạch tài nguyên nước trong thời hạn từ 01 tháng đến 06 tháng đối với hành vi vi phạm quy định tại khoản 4 Điều này.' - '1. Khuyến khích áp dụng mô hình thông tin công trình (sau đây gọi tắt là BIM), giải pháp công nghệ số trong hoạt động xây dựng và quản lý vận hành công trình. Người quyết định đầu tư quyết định việc áp dụng BIM, giải pháp công nghệ số khi quyết định dự án đầu tư xây dựng. 2. Tệp tin BIM là một thành phần trong hồ sơ thiết kế xây dựng, hồ sơ hoàn thành công trình đối với các dự án, công trình xây dựng áp dụng BIM. Nội dung và mức độ chi tiết của mô hình thông tin công trình thực hiện theo thỏa thuận của các bên có liên quan đến việc ứng dụng BIM trong hợp đồng xây dựng. 3. Thủ tướng Chính phủ quy định lộ trình áp dụng BIM, giải pháp công nghệ số trong hoạt động xây dựng.' - source_sentence: Người sử dụng lao động có hành vi không thực hiện đối thoại khi đại diện tập thể lao động yêu cầu thì bị xử lý ra sao? sentences: - '1. Phạt tiền từ 100.000 đồng đến 200.000 đồng đối với hành vi điều khiển xe không đáp ứng yêu cầu về vệ sinh lưu thông trong đô thị. 2. Phạt tiền từ 2.000.000 đồng đến 4.000.000 đồng đối với một trong các hành vi vi phạm sau đây: a) Để dầu nhờn, hóa chất rơi vãi xuống đường bộ; b) Chở hàng rời, chất thải, vật liệu xây dựng dễ rơi vãi mà không có mui, bạt che đậy hoặc có mui, bạt che đậy nhưng vẫn để rơi vãi; chở hàng hoặc chất thải để nước chảy xuống mặt đường gây mất an toàn giao thông và vệ sinh môi trường; c) Lôi kéo bùn, đất, cát, nguyên liệu, vật liệu hoặc chất phế thải khác ra đường bộ gây mất an toàn giao thông và vệ sinh môi trường. 3. Phạt tiền từ 4.000.000 đồng đến 6.000.000 đồng đối với người điều khiển xe đổ trái phép rác, đất, cát, đá, vật liệu, chất phế thải trong phạm vi đất dành cho đường bộ ở đoạn đường ngoài đô thị. 4. Phạt tiền từ 10.000.000 đồng đến 15.000.000 đồng đối với người điều khiển xe thực hiện hành vi đổ trái phép rác, đất, cát, đá, vật liệu, chất phế thải ra đường phố. 5. Ngoài việc bị phạt tiền, người điều khiển phương tiện thực hiện hành vi vi phạm quy định tại khoản 3, khoản 4 Điều này còn bị áp dụng hình thức xử phạt bổ sung tước quyền sử dụng Giấy phép lái xe từ 01 tháng đến 03 tháng. 6. Ngoài việc bị áp dụng hình thức xử phạt, người điều khiển phương tiện thực hiện hành vi vi phạm quy định tại khoản 2, khoản 3, khoản 4 Điều này còn bị áp dụng các biện pháp khắc phục hậu quả: Buộc phải thu dọn rác, chất phế thải, vật liệu, hàng hóa và khôi phục lại tình trạng ban đầu đã bị thay đổi do vi phạm hành chính gây ra; nếu gây ô nhiễm môi trường phải thực hiện các biện pháp khắc phục tình trạng ô nhiễm môi trường do vi phạm hành chính gây ra.' - '1. Phạt tiền từ 500.000 đồng đến 1.000.000 đồng đối với người sử dụng lao động có một trong các hành vi sau đây: a) Không thực hiện quy chế dân chủ ở cơ sở theo quy định pháp luật; b) Không bố trí địa điểm và bảo đảm các điều kiện vật chất khác cho việc đối thoại tại nơi làm việc. 2. Phạt tiền từ 2.000.000 đồng đến 5.000.000 đồng đối với người sử dụng lao động có hành vi không thực hiện đối thoại khi đại diện tập thể lao động yêu cầu.' - Công ty quản lý quỹ đầu tư chứng khoán phải báo cáo Ủy ban Chứng khoán Nhà nước định kỳ và bất thường về danh mục đầu tư, hoạt động đầu tư, tình hình tài chính của quỹ đầu tư chứng khoán. - source_sentence: Chế độ giáo dục phạm nhân dưới 18 tuổi từ năm 2020 được quy định như thế nào? sentences: - '1. Phạm nhân là người dưới 18 tuổi được giam giữ theo chế độ riêng phù hợp với sức khỏe, giới tính và đặc điểm nhân thân. 2. Trại giam có trách nhiệm giáo dục phạm nhân là người dưới 18 tuổi về văn hóa, pháp luật và dạy nghề phù hợp với độ tuổi, học vấn, giới tính và sức khỏe, chuẩn bị điều kiện để họ hòa nhập cộng đồng sau khi chấp hành xong án phạt tù. Thực hiện phổ cập giáo dục tiểu học và giáo dục trung học cơ sở. Giáo dục tiểu học là bắt buộc đối với phạm nhân chưa học xong chương trình tiểu học.' - '1. Sĩ quan thôi phục vụ tại ngũ không đủ điều kiện để nghỉ hưu hoặc không chuyển ngành được thì phục viên về địa phương và được hưởng các quyền lợi như sau: a) Được hưởng trợ cấp tạo việc làm bằng 06 tháng tiền lương tối thiểu chung theo quy định của Chính phủ; được ưu tiên học nghề hoặc giới thiệu việc làm tại các tổ chức giới thiệu việc làm của các Bộ, ngành, đoàn thể, địa phương và các tổ chức kinh tế - xã hội khác; b) Được hưởng trợ cấp phục viên một lần, cứ mỗi năm công tác được trợ cấp bằng 01 tháng tiền lương; c) Được hưởng chế độ bảo hiểm xã hội và các chế độ khác theo quy định hiện hành của pháp luật. 2. Sĩ quan đã phục viên về địa phương trong thời gian không quá một năm, kể từ ngày quyết định phục viên có hiệu lực, nếu được tuyển dụng vào các cơ quan, đơn vị quy định tại khoản 1 Điều 3 Nghị định này thì được thực hiện chế độ chuyển ngành. Khi thực hiện chế độ chuyển ngành thì phải hoàn trả khoản trợ cấp phục viên một lần theo quy định tại điểm b khoản 1 Điều này và trợ cấp bảo hiểm xã hội một lần đã nhận. Cơ quan, đơn vị quân đội nhân dân ra quyết định chuyển ngành có trách nhiệm thu lại số tiền trợ cấp phục viên và trợ cấp bảo hiểm xã hội đã nhận. 3. Sĩ quan đã phục viên về địa phương trong thời gian không quá một năm, kể từ ngày quyết định phục viên có hiệu lực, nếu được tuyển dụng vào làm việc tại các doanh nghiệp, cơ quan, đơn vị không hưởng lương từ ngân sách nhà nước, nếu muốn tính nối thời gian đóng bảo hiểm xã hội thì phải hoàn trả quỹ bảo hiểm xã hội khoản trợ cấp bảo hiểm xã hội đã nhận.' - '1. Công tác nạo vét duy tu luồng hàng hải công cộng và luồng đường thủy nội địa sử dụng nguồn ngân sách nhà nước do Bộ Giao thông vận tải, Ủy ban nhân dân cấp tỉnh quản lý được nhà nước bảo đảm, bố trí từ nguồn vốn ngân sách hàng năm để thực hiện. 2. Không thực hiện việc bảo hành và mua bảo hiểm thi công công trình nạo vét duy tu luồng hàng hải công cộng và luồng đường thủy nội địa.' model-index: - name: SentenceTransformer based on keepitreal/vietnamese-sbert results: - task: type: binary-classification name: Binary Classification dataset: name: Unknown type: unknown metrics: - type: cosine_accuracy value: 0.7438524590163934 name: Cosine Accuracy - type: cosine_accuracy_threshold value: 0.5209897756576538 name: Cosine Accuracy Threshold - type: cosine_f1 value: 0.7861842105263158 name: Cosine F1 - type: cosine_f1_threshold value: 0.47490352392196655 name: Cosine F1 Threshold - type: cosine_precision value: 0.6713483146067416 name: Cosine Precision - type: cosine_recall value: 0.9484126984126984 name: Cosine Recall - type: cosine_ap value: 0.7967408055834047 name: Cosine Ap - type: cosine_mcc value: 0.509221602285373 name: Cosine Mcc --- # SentenceTransformer based on keepitreal/vietnamese-sbert This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [keepitreal/vietnamese-sbert](https://huggingface.co/keepitreal/vietnamese-sbert). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [keepitreal/vietnamese-sbert](https://huggingface.co/keepitreal/vietnamese-sbert) <!-- at revision a9467ef2ef47caa6448edeabfd8e5e5ce0fa2a23 --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("ictumuk/vietnameses_legal_final") # Run inference sentences = [ 'Chế độ giáo dục phạm nhân dưới 18 tuổi từ năm 2020 được quy định như thế nào?', '1. Phạm nhân là người dưới 18 tuổi được giam giữ theo chế độ riêng phù hợp với sức khỏe, giới tính và đặc điểm nhân thân.\n2. Trại giam có trách nhiệm giáo dục phạm nhân là người dưới 18 tuổi về văn hóa, pháp luật và dạy nghề phù hợp với độ tuổi, học vấn, giới tính và sức khỏe, chuẩn bị điều kiện để họ hòa nhập cộng đồng sau khi chấp hành xong án phạt tù. Thực hiện phổ cập giáo dục tiểu học và giáo dục trung học cơ sở. Giáo dục tiểu học là bắt buộc đối với phạm nhân chưa học xong chương trình tiểu học.', '1. Công tác nạo vét duy tu luồng hàng hải công cộng và luồng đường thủy nội địa sử dụng nguồn ngân sách nhà nước do Bộ Giao thông vận tải, Ủy ban nhân dân cấp tỉnh quản lý được nhà nước bảo đảm, bố trí từ nguồn vốn ngân sách hàng năm để thực hiện.\n2. Không thực hiện việc bảo hành và mua bảo hiểm thi công công trình nạo vét duy tu luồng hàng hải công cộng và luồng đường thủy nội địa.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Binary Classification * Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator) | Metric | Value | |:--------------------------|:-----------| | cosine_accuracy | 0.7439 | | cosine_accuracy_threshold | 0.521 | | cosine_f1 | 0.7862 | | cosine_f1_threshold | 0.4749 | | cosine_precision | 0.6713 | | cosine_recall | 0.9484 | | **cosine_ap** | **0.7967** | | cosine_mcc | 0.5092 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 4,391 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | label | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 7 tokens</li><li>mean: 25.02 tokens</li><li>max: 99 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 195.65 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>0: ~48.00%</li><li>1: ~52.00%</li></ul> | * Samples: | sentence1 | sentence2 | label | |:----------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>Gửi hồ sơ giả mạo đến kho bạc nhà nước để chi cho chương trình mục tiêu quốc gia phạt bao nhiêu?</code> | <code>1. Phạt tiền từ 10.000.000 đồng đến 15.000.000 đồng đối với hành vi lập hồ sơ, chứng từ giả mạo gửi Kho bạc Nhà nước để thanh toán, chi trả các khoản chi thường xuyên, chi sự nghiệp có tính chất thường xuyên, chi chương trình mục tiêu quốc gia, chương trình mục tiêu sử dụng kinh phí sự nghiệp (loại trừ các khoản chi thực hiện các công trình sửa chữa, bảo trì, cải tạo, nâng cấp, mở rộng cơ sở vật chất từ nguồn kinh phí chi thường xuyên ngân sách nhà nước và nguồn phí được để lại theo chế độ quy định để chi thường xuyên có tổng mức đầu tư trên 500.000.000 đồng).<br>2. Phạt tiền từ 30.000.000 đồng đến 50.000.000 đồng đối với hành vi lập hồ sơ, chứng từ giả mạo gửi Kho bạc Nhà nước để thanh toán vốn đầu tư thuộc nguồn vốn ngân sách nhà nước và nguồn vốn đầu tư từ ngân sách nhà nước thực hiện các chương trình mục tiêu hoặc chi thực hiện các công trình sửa chữa, bảo trì, cải tạo, nâng cấp, mở rộng cơ sở vật chất từ nguồn kinh phí chi thường xuyên ngân sách nhà nước và nguồn phí được để lại theo...</code> | <code>0</code> | | <code>Điều kiện tham gia hoạt động điều tra, khảo sát, rà phá bom mìn vật nổ sau chiến tranh theo quy định hiện hành</code> | <code>1. Việc quản lý chi phí dự án được thực hiện theo quy định của pháp luật về quản lý chi phí đầu tư xây dựng công trình và theo thỏa thuận với nhà tài trợ nước ngoài.<br>2. Chi phí nhân công trong hoạt động điều tra, khảo sát, rà phá bom mìn vật nổ sau chiến tranh thực hiện như sau:<br>a) Chi phí tiền công và các khoản phụ cấp đối với các đối tượng không hưởng lương từ ngân sách nhà nước khi tham gia hoạt động điều tra, khảo sát, rà phá bom mìn vật nổ sau chiến tranh;<br>b) Chi phí bồi dưỡng đối với các đối tượng hưởng lương từ ngân sách nhà nước khi tham gia hoạt động điều tra, khảo sát, rà phá bom mìn vật nổ sau chiến tranh theo quyết định của Thủ tướng Chính phủ.</code> | <code>0</code> | | <code>Hiệu lực pháp lý của di chúc miệng khi người lập di chúc phục hồi sức khỏe?</code> | <code>1. Trường hợp tính mạng một người bị cái chết đe dọa và không thể lập di chúc bằng văn bản thì có thể lập di chúc miệng.<br>2. Sau 03 tháng, kể từ thời điểm di chúc miệng mà người lập di chúc còn sống, minh mẫn, sáng suốt thì di chúc miệng mặc nhiên bị hủy bỏ.</code> | <code>1</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 2 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 2 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | cosine_ap | |:------:|:----:|:-------------:|:---------:| | -1 | -1 | - | 0.6371 | | 0.7273 | 200 | 4.5203 | - | | 1.4545 | 400 | 3.7861 | - | | 0.7273 | 200 | 3.1329 | - | | 1.4545 | 400 | 2.4773 | 0.7967 | ### Framework Versions - Python: 3.10.14 - Sentence Transformers: 3.4.1 - Transformers: 4.41.2 - PyTorch: 2.1.2+cu121 - Accelerate: 0.34.2 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CoSENTLoss ```bibtex @online{kexuefm-8847, title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT}, author={Su Jianlin}, year={2022}, month={Jan}, url={https://kexue.fm/archives/8847}, } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION" ]
[ "CHIA" ]
Non_BioNLP
chandanzeon/setfit_finetuned_iaf_98
chandanzeon
text-classification
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:BAAI/bge-small-en-v1.5", "base_model:finetune:BAAI/bge-small-en-v1.5", "model-index", "region:us" ]
1,734
1,734
0
0
--- base_model: BAAI/bge-small-en-v1.5 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: 'Review of Administrative and Disciplinary Records Recent administrative evaluations have revealed irregularities within key operational postings. Investigations were launched in key command areas such as Jalandhar and Secunderabad, focusing on personnel movement, access permissions, and communication lapses. Anomalies in operational reports indicated unauthorized sharing of personnel data with external parties, sparking concerns about internal security and discipline. The South Western Command and Central Military Headquarters are overseeing these investigations. Reports highlight a need for increased supervision of personnel involved in administrative roles, as lapses in information sharing protocols pose a significant risk to mission readiness. In response, administrative retraining programs focused on compliance, confidentiality, and secure communication have been implemented across all units. Improved oversight measures, such as enhanced access control protocols and personnel background checks, are being prioritized to prevent such breaches from occurring in the future. Specialized training sessions have been hosted at key logistical hubs to strengthen accountability and ensure all military officials understand their responsibilities.' - text: 'Advanced Technological Integration into Military Strategies To maintain strategic advantages, the military has integrated cutting-edge technological assets into operational strategies. Innovations such as advanced surveillance drones equipped with night vision cameras and AI-assisted threat detection have enhanced the military''s ability to track adversarial movements. These drones are deployed on both border operations and maritime patrols, enabling continuous and real-time intelligence-gathering without compromising operational security. Furthermore, electronic warfare units have been equipped with advanced jamming devices capable of disrupting electronic communication signals used by insurgents. This capability ensures that adversarial communication networks are neutralized during operational missions, reducing the ability of enemy cells to coordinate and launch attacks.' - text: 'Drones in Target Acquisition and Precision Strikes Beyond surveillance and reconnaissance, drones are increasingly being used in target acquisition and precision strike missions. The integration of guided munitions with UAVs allows for highly accurate strikes on key targets, including terrorist camps, weapons caches, and enemy fortifications. Drones like the Harpy and Predator have been used in similar missions, providing high-precision strikes while minimizing the risk to personnel. The use of drones for precision strikes significantly reduces the collateral damage typically associated with traditional airstrikes and ground-based artillery fire.' - text: 'Strengthening Army Resilience through Infrastructure Upgrades Recent initiatives to modernize military infrastructure are focusing on strategic roadways, railway networks, and key logistical hubs across Northern and Eastern theater areas. Troop movement flexibility has become vital as regional border security remains fragile. Construction projects have been prioritized near operational areas like Leh, Arunachal Pradesh, and parts of the Indo-Nepal border. Specialized engineering battalions are spearheading the construction of advanced bridges and all- weather roadways, particularly through challenging terrains such as the Himalayan foothills and desert corridors. The latest developments include high-capacity bridge-building technology, allowing troops and supplies to be moved rapidly even in the most inaccessible locations. The strategic development of these routes ensures the swift mobility of logistical support, troop reinforcements, and rapid response units. Furthermore, advancements in railway infrastructure are underway to support rapid troop deployment. Railway hubs near key operational zones are being modernized, with emphasis on dual-use infrastructure that allows both civilian and military operations to utilize these networks when necessary.' - text: 'Tactical Coordination and Training Joint training exercises involving armored and artillery units have been conducted to refine battlefield tactics. These exercises, held in the Thar Desert, simulated multi-front conflict scenarios, emphasizing coordination between various branches of the armed forces. Feedback from these exercises has led to the adoption of new operational guidelines, such as optimized deployment patterns for tanks and artillery systems. Post-exercise debriefings at Jodhpur Cantonment highlighted the importance of synchronized maneuvers in achieving tactical superiority.' inference: true model-index: - name: SetFit with BAAI/bge-small-en-v1.5 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.9193548387096774 name: Accuracy --- # SetFit with BAAI/bge-small-en-v1.5 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 4 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 3 | <ul><li>'Closing Ceremony and Awards Distribution\nThe event concluded with the closing ceremony on August 18, 2024. The ceremony was attended by senior military officials, including the Chief of Naval Staff and Chief of Air Staff. The athletes were recognized for their outstanding performances with awards presented in various categories, such as Best Athlete, Best Team, and Best Sportsmanship. The Indian Air Force won the Best Team Performance trophy, while Captain Aaryan Verma from the Army was named the Best Athlete for his exceptional performance in athletics. The Indian Army was declared the overall winner of the competition, having secured the most points across all events. A highlight of the ceremony was the traditional military drill performed by the three services, showcasing the discipline and precision that is characteristic of the Indian Armed Forces.'</li><li>'Community Engagement Activities\nIn addition to the aerial demonstrations, the IAF organized several community outreach activities. A special education booth was set up for schoolchildren, focusing on the history of the Indian Air Force, aviation careers, and the importance of air defense. The booth also displayed several educational films and interactive content about the Air Forceâ\x80\x99s role in peacekeeping, disaster relief, and national defense. Additionally, a blood donation camp was established near the main entrance, in collaboration with Bangaloreâ\x80\x99s city hospital, to encourage voluntary blood donation. Visitors were encouraged to participate, with the goal of increasing awareness about health and wellness in the community. The camp ran smoothly and successfully collected over 500 units of blood, which would be distributed to regional hospitals.'</li><li>'Environmental Considerations\nWith the ongoing infrastructure developments, the Indian Air Force has taken steps to minimize the environmental impact of the construction. Measures are being implemented to reduce waste and promote sustainability during the projectâ\x80\x99s execution. Additionally, the base has set up an environmental monitoring system to track air and water quality in the vicinity, ensuring that the baseâ\x80\x99s operations do not adversely affect local ecosystems. Moreover, the wastewater treatment facility at the station is being upgraded to ensure that all waste generated from daily operations is properly treated and does not affect the surrounding areas. This initiative is part of the Air Forceâ\x80\x99s broader commitment to environmental responsibility and sustainability.'</li></ul> | | 1 | <ul><li>'Radio Frequency Allocation Updates\nThe communications division recently conducted a comprehensive review of radio frequency allocations across the northern and northeastern sectors. Adjustments were made to avoid overlaps that could interfere with civilian and military operations. A new allocation plan has been implemented for units stationed at Bagdogra and Dimapur, ensuring seamless communication during both routine and emergency operations. Periodic audits of frequency usage continue to safeguard against potential breaches or overlaps.'</li><li>'Signal Frequency Interference Monitoring\nSignal frequency interference has become a growing concern as electronic warfare threats evolve. The monitoring units, based in key strategic areas such as Karnal, Leh, and Udhampur, have observed unauthorized intrusions into radio communication patterns. Advanced detection technologies have been deployed to analyze this data, with initial results highlighting the need for improved counter-electronic warfare capabilities. The Signal Corps has expanded its focus on electronic jamming threats near key tactical airstrips and operational centers. Units in these regions are conducting surveillance with advanced signal detection systems, ensuring they can identify and neutralize attempts at electronic disruption. Military radar units in Jaisalmer and Pathankot are receiving new upgrades to improve signal detection in operationally sensitive regions. Commanders have emphasized the importance of coordinating electronic warfare drills with these signal monitoring operations to enhance response mechanisms. Coordination between signal analysis teams and field operations ensures timely detection and neutralization of electronic threats.'</li><li>'Naval Assets and Maritime Patrolling Operations\nNaval deployments along key trade routes and strategic maritime chokepoints have seen increased patrols and strategic upgrades. Units have been repositioned in response to recent developments in regional waters, focusing on both counter-terror operations and maintaining freedom of navigation. Surveillance assets, such as Indian Navy frigates and long-range maritime reconnaissance aircraft, are actively monitoring the Malabar Sea and Arabian Sea for any irregular ship movements or unauthorized military deployments. Naval units stationed at key operational ports like Visakhapatnam and Karwar are equipped with advanced sonar and radar systems. Recent deployments emphasize anti-submarine warfare capabilities, leveraging advanced underwater detection technology to identify potential threats from hostile assets or insurgent activity. Coordination with air assets, including the Sea King and P-8I aircraft, has improved naval surveillance effectiveness, with regular joint operations enhancing strategic interoperability.'</li></ul> | | 2 | <ul><li>"Training Manuals for Official Use Only\nThe following manuals were distributed among units for use during December 2024 training sessions: 1. Guidelines for Advanced Vehicle Maintenance: ï\x82· This manual provides detailed procedures for troubleshooting and repairing light utility vehicles commonly used in supply operations. Emphasis is placed on maintaining vehicle efficiency in cold-weather conditions. ï\x82· A new section outlines methods for diagnosing electronic systems, a critical aspect as newer models are introduced into service. 2. Basic Communication Protocols: ï\x82· Designed for new recruits, this guide introduces secure communication techniques, including encryption basics and signal relay procedures. ï\x82· The document also includes practical exercises to simulate field scenarios, enhancing recruits' readiness for real-world applications."</li><li>'Routine Procurement Documents\nThe procurement department at Ambala Air Force Station has finalized contracts for the supply of spare parts for MiG-21 aircraft. The document details the scheduled delivery of parts such as hydraulic actuators, brake systems, and navigation units over the next quarter. These supplies are essential for routine maintenance and ensuring that the aircraft remains in operational condition for non-combat purposes. The report also includes internal memos on supplier performance and cost negotiations, which are classified as Restricted to prevent unauthorized access and ensure smooth contract execution. Highlights of the Competition\nThe athletics events were among the most anticipated, with the fastest runners from each branch competing for medals. The 100m sprint final featured a thrilling race between the top sprinters from the Army, Navy, and Air Force, with Captain Aaryan Verma of the Army securing the gold medal with a time of 10.87 seconds, followed by Lieutenant Neha Mehra of the Navy, who claimed the silver with 11.03 seconds. In the football tournament, the Indian Army emerged as the champions after a tense final match against the Indian Air Force. The game ended with a score of 2-1 in favor of the Army, with Subedar Major Vikram Singh scoring the winning goal in the final minutes. The Army team displayed exceptional teamwork and strategic play, which ultimately led them to victory. The cricket matches were highly competitive, with the Indian Navy defeating the Air Force team in a closely contested T20 match. The final was a nail-biting affair, with Navyâ\x80\x99s Lieutenant Commander Rahul Mehta hitting the winning six in the last over of the game.'</li><li>'Supply Chain and Procurement Documents\nRoutine procurement activities continue to fuel military preparedness. The most recent batch of documents contains procurement orders for various operational materials needed in peripheral zones. These orders range from vehicles used in reconnaissance missions to tactical gear for military units that are not directly involved in combat but are still crucial for maintaining defense capabilities. For example, a recent procurement request was made for a series of high-powered satellite phones that will be issued to units deployed in isolated locations. These phones are essential for ensuring that communication lines remain open in areas where traditional communication infrastructure is unavailable. Similarly, there are ongoing negotiations for acquiring medical supplies, such as portable surgical kits and trauma care equipment, specifically for units working in non-conflict zones where medical infrastructure might be limited. The documentation detailing these procurements includes specifics on supplier agreements, delivery schedules, and operational requirements. This is sensitive data, as it could potentially reveal gaps in military supply chains if accessed by unauthorized individuals. Suppliers are carefully vetted, and any leak of information regarding these supply chains could jeopardize the mission\'s success in certain strategic areas. Cipher Message: Cipher Text: "NQ5P7 QXZ8T 7J6B2 P1M9Y." â\x80\x93 Encrypted procurement details, listing authorized suppliers and material quantities for internal distribution only.'</li></ul> | | 0 | <ul><li>'Enhancement of Aerial Surveillance\nUnmanned Aerial Vehicles (UAVs) have been deployed from the Jorhat Air Force Station to maintain constant surveillance over disputed areas. These UAVs, equipped with high-resolution cameras and thermal sensors, provide real-time imagery of adversarial activities. Regular patrol missions conducted over regions like Kibithu and Walong have been instrumental in identifying unauthorized constructions. The data gathered is relayed to command centers in Shillong for detailed analysis. AI-powered algorithms help in detecting anomalies, ensuring swift decision-making in case of any potential threats. These proactive measures have significantly improved situational awareness. Implementation of AI-Based Border Surveillance\nThe recent deployment of artificial intelligence-driven surveillance mechanisms has introduced cutting-edge technology into border operations. Surveillance drones and AI-powered detection sensors have been positioned along key border regions, including the North Eastern States and the Indian-Pakistani border. These assets are leveraging machine learning algorithms to identify patterns of unusual activity, unauthorized crossings, and changes in terrain anomalies. The AI systems are capable of processing vast quantities of real-time data collected from UAVs, thermal imaging cameras, and ground-based radar installations. Machine learning analysis identifies trends that may go unnoticed by conventional monitoring, such as small troop movements or unauthorized infiltration attempts across the porous Indo-Bangladesh border. These capabilities have already proven effective in detecting early signs of infiltration and cross-border activity. Additionally, intelligence teams are collaborating with AI experts to fine-tune these tools for real- time decision-making support. Advanced signal detection and image recognition capabilities are improving response times and ensuring that border patrols can intercept threats with enhanced accuracy and minimal delay.'</li><li>'Conclusion\nThe integration of advanced technology with strategic realignments across operational zones highlights the dynamic and robust approach adopted by the armed forces. These measures not only bolster defensive capabilities but also reinforce the nationâ\x80\x99s readiness to respond to evolving threats.'</li><li>'New Munitions Deployment\nTo enhance combat effectiveness, advanced munitions tailored for specific operational conditions have been introduced. The recent deployment of guided mortar systems to units stationed in the Siachen Glacier highlights this focus. These munitions, tested under extreme conditions, provide unmatched accuracy and reliability. Additionally, countermeasure systems designed to neutralize enemy drones have been distributed across critical sectors. These systems employ directed energy technology, effectively disrupting the electronic controls of hostile UAVs.'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.9194 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("chandanzeon/setfit_finetuned_iaf_98") # Run inference preds = model("Tactical Coordination and Training Joint training exercises involving armored and artillery units have been conducted to refine battlefield tactics. These exercises, held in the Thar Desert, simulated multi-front conflict scenarios, emphasizing coordination between various branches of the armed forces. Feedback from these exercises has led to the adoption of new operational guidelines, such as optimized deployment patterns for tanks and artillery systems. Post-exercise debriefings at Jodhpur Cantonment highlighted the importance of synchronized maneuvers in achieving tactical superiority.") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:---------|:----| | Word count | 39 | 130.3317 | 475 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 49 | | 1 | 56 | | 2 | 49 | | 3 | 51 | ### Training Hyperparameters - batch_size: (32, 32) - num_epochs: (5, 5) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: True ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0010 | 1 | 0.267 | - | | 0.0508 | 50 | 0.2533 | - | | 0.1016 | 100 | 0.2342 | - | | 0.1524 | 150 | 0.2272 | - | | 0.2033 | 200 | 0.2065 | - | | 0.2541 | 250 | 0.1573 | - | | 0.3049 | 300 | 0.1051 | - | | 0.3557 | 350 | 0.0546 | - | | 0.4065 | 400 | 0.011 | - | | 0.4573 | 450 | 0.004 | - | | 0.5081 | 500 | 0.0028 | - | | 0.5589 | 550 | 0.0023 | - | | 0.6098 | 600 | 0.0019 | - | | 0.6606 | 650 | 0.0015 | - | | 0.7114 | 700 | 0.0014 | - | | 0.7622 | 750 | 0.0014 | - | | 0.8130 | 800 | 0.0013 | - | | 0.8638 | 850 | 0.0012 | - | | 0.9146 | 900 | 0.0011 | - | | 0.9654 | 950 | 0.001 | - | | 1.0 | 984 | - | 0.0731 | | 1.0163 | 1000 | 0.001 | - | | 1.0671 | 1050 | 0.0009 | - | | 1.1179 | 1100 | 0.0009 | - | | 1.1687 | 1150 | 0.0008 | - | | 1.2195 | 1200 | 0.0008 | - | | 1.2703 | 1250 | 0.0008 | - | | 1.3211 | 1300 | 0.0008 | - | | 1.3720 | 1350 | 0.0007 | - | | 1.4228 | 1400 | 0.0007 | - | | 1.4736 | 1450 | 0.0007 | - | | 1.5244 | 1500 | 0.0007 | - | | 1.5752 | 1550 | 0.0006 | - | | 1.6260 | 1600 | 0.0006 | - | | 1.6768 | 1650 | 0.0006 | - | | 1.7276 | 1700 | 0.0006 | - | | 1.7785 | 1750 | 0.0006 | - | | 1.8293 | 1800 | 0.0006 | - | | 1.8801 | 1850 | 0.0006 | - | | 1.9309 | 1900 | 0.0006 | - | | 1.9817 | 1950 | 0.0005 | - | | 2.0 | 1968 | - | 0.0762 | | 2.0325 | 2000 | 0.0005 | - | | 2.0833 | 2050 | 0.0005 | - | | 2.1341 | 2100 | 0.0005 | - | | 2.1850 | 2150 | 0.0005 | - | | 2.2358 | 2200 | 0.0005 | - | | 2.2866 | 2250 | 0.0005 | - | | 2.3374 | 2300 | 0.0005 | - | | 2.3882 | 2350 | 0.0005 | - | | 2.4390 | 2400 | 0.0005 | - | | 2.4898 | 2450 | 0.0005 | - | | 2.5407 | 2500 | 0.0005 | - | | 2.5915 | 2550 | 0.0004 | - | | 2.6423 | 2600 | 0.0004 | - | | 2.6931 | 2650 | 0.0004 | - | | 2.7439 | 2700 | 0.0004 | - | | 2.7947 | 2750 | 0.0004 | - | | 2.8455 | 2800 | 0.0004 | - | | 2.8963 | 2850 | 0.0004 | - | | 2.9472 | 2900 | 0.0004 | - | | 2.9980 | 2950 | 0.0004 | - | | 3.0 | 2952 | - | 0.0786 | | 3.0488 | 3000 | 0.0004 | - | | 3.0996 | 3050 | 0.0004 | - | | 3.1504 | 3100 | 0.0004 | - | | 3.2012 | 3150 | 0.0004 | - | | 3.2520 | 3200 | 0.0004 | - | | 3.3028 | 3250 | 0.0004 | - | | 3.3537 | 3300 | 0.0004 | - | | 3.4045 | 3350 | 0.0004 | - | | 3.4553 | 3400 | 0.0004 | - | | 3.5061 | 3450 | 0.0004 | - | | 3.5569 | 3500 | 0.0003 | - | | 3.6077 | 3550 | 0.0004 | - | | 3.6585 | 3600 | 0.0004 | - | | 3.7093 | 3650 | 0.0004 | - | | 3.7602 | 3700 | 0.0003 | - | | 3.8110 | 3750 | 0.0003 | - | | 3.8618 | 3800 | 0.0004 | - | | 3.9126 | 3850 | 0.0003 | - | | 3.9634 | 3900 | 0.0003 | - | | 4.0 | 3936 | - | 0.0813 | | 4.0142 | 3950 | 0.0003 | - | | 4.0650 | 4000 | 0.0003 | - | | 4.1159 | 4050 | 0.0003 | - | | 4.1667 | 4100 | 0.0003 | - | | 4.2175 | 4150 | 0.0003 | - | | 4.2683 | 4200 | 0.0003 | - | | 4.3191 | 4250 | 0.0003 | - | | 4.3699 | 4300 | 0.0003 | - | | 4.4207 | 4350 | 0.0003 | - | | 4.4715 | 4400 | 0.0003 | - | | 4.5224 | 4450 | 0.0003 | - | | 4.5732 | 4500 | 0.0003 | - | | 4.6240 | 4550 | 0.0003 | - | | 4.6748 | 4600 | 0.0003 | - | | 4.7256 | 4650 | 0.0003 | - | | 4.7764 | 4700 | 0.0003 | - | | 4.8272 | 4750 | 0.0003 | - | | 4.8780 | 4800 | 0.0003 | - | | 4.9289 | 4850 | 0.0003 | - | | 4.9797 | 4900 | 0.0003 | - | | 5.0 | 4920 | - | 0.0804 | ### Framework Versions - Python: 3.10.12 - SetFit: 1.1.0 - Sentence Transformers: 3.2.1 - Transformers: 4.42.2 - PyTorch: 2.5.1+cu121 - Datasets: 3.2.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION" ]
[ "MEDAL" ]
Non_BioNLP
mav23/gemma2-9b-cpt-sea-lionv3-instruct-GGUF
mav23
text-generation
[ "transformers", "gguf", "text-generation", "en", "zh", "vi", "id", "th", "fil", "ta", "ms", "km", "lo", "my", "jv", "su", "arxiv:2309.06085", "arxiv:2311.07911", "arxiv:2306.05685", "base_model:aisingapore/gemma2-9b-cpt-sea-lionv3-base", "base_model:quantized:aisingapore/gemma2-9b-cpt-sea-lionv3-base", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
1,731
1,731
273
0
--- base_model: - aisingapore/gemma2-9b-cpt-sea-lionv3-base language: - en - zh - vi - id - th - fil - ta - ms - km - lo - my - jv - su library_name: transformers license: gemma pipeline_tag: text-generation --- # Gemma2 9B CPT SEA-LIONv3 Instruct SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region. Gemma2 9B CPT SEA-LIONv3 Instruct is a multilingual model which has been fine-tuned with around **500,000 English instruction-completion pairs** alongside a larger pool of around **1,000,000 instruction-completion pairs** from other ASEAN languages, such as Indonesian, Thai and Vietnamese. SEA-LION stands for _Southeast Asian Languages In One Network_. - **Developed by:** Products Pillar, AI Singapore - **Funded by:** Singapore NRF - **Model type:** Decoder - **Languages:** English, Chinese, Vietnamese, Indonesian, Thai, Filipino, Tamil, Malay, Khmer, Lao, Burmese, Javanese, Sundanese - **License:** [Gemma Community License](https://ai.google.dev/gemma/terms) ## Model Details ### Model Description We performed instruction tuning in English and also in ASEAN languages such as Indonesian, Thai and Vietnamese on our [continued pre-trained Gemma2 9B CPT SEA-LIONv3](https://huggingface.co/aisingapore/gemma2-9b-cpt-sea-lionv3-base), a decoder model using the Gemma2 architecture, to create Gemma2 9B CPT SEA-LIONv3 Instruct. For tokenisation, the model employs the default tokenizer used in Gemma-2-9B. The model has a context length of 8192. ### Benchmark Performance We evaluated Gemma2 9B CPT SEA-LIONv3 Instruct on both general language capabilities and instruction-following capabilities. #### General Language Capabilities For the evaluation of general language capabilities, we employed the [SEA HELM (also known as BHASA) evaluation benchmark](https://arxiv.org/abs/2309.06085v2) across a variety of tasks. These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarization (Summ), Causal Reasoning (Causal) and Natural Language Inference (NLI). Note: SEA HELM is implemented using prompts to elicit answers in a strict format. For all tasks, the model is expected to provide an answer tag from which the answer is automatically extracted. For tasks where options are provided, the answer should comprise one of the pre-defined options. The scores for each task is normalised to account for baseline performance due to random chance. The evaluation was done **zero-shot** with native prompts on a sample of 100-1000 instances for each dataset. #### Instruction-following Capabilities Since Gemma2 9B CPT SEA-LIONv3 Instruct is an instruction-following model, we also evaluated it on instruction-following capabilities with two datasets, [IFEval](https://arxiv.org/abs/2311.07911) and [MT-Bench](https://arxiv.org/abs/2306.05685). As these two datasets were originally in English, the linguists and native speakers in the team worked together to filter, localize and translate the datasets into the respective target languages to ensure that the examples remained reasonable, meaningful and natural. **IFEval** IFEval evaluates a model's ability to adhere to constraints provided in the prompt, for example beginning a response with a specific word/phrase or answering with a certain number of sections. Additionally, accuracy is normalized by the proportion of responses in the correct language (if the model performs the task correctly but responds in the wrong language, it is judged to have failed the task). **MT-Bench** MT-Bench evaluates a model's ability to engage in multi-turn (2 turns) conversations and respond in ways that align with human needs. We use `gpt-4-1106-preview` as the judge model and compare against `gpt-3.5-turbo-0125` as the baseline model. The metric used is the weighted win rate against the baseline model (i.e. average win rate across each category: Math, Reasoning, STEM, Humanities, Roleplay, Writing, Extraction). A tie is given a score of 0.5. For more details on Gemma2 9B CPT SEA-LIONv3 Instruct benchmark performance, please refer to the SEA HELM leaderboard, https://leaderboard.sea-lion.ai/ ### Usage Gemma2 9B CPT SEA-LIONv3 Instruct can be run using the 🤗 Transformers library ```python # Please use transformers==4.45.2 import transformers import torch model_id = "aisingapore/gemma2-9b-cpt-sea-lionv3-instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "user", "content": "Apa sentimen dari kalimat berikut ini?\nKalimat: Buku ini sangat membosankan.\nJawaban: "}, ] outputs = pipeline( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` ### Caveats It is important for users to be aware that our model exhibits certain limitations that warrant consideration. Like many LLMs, the model can hallucinate and occasionally generates irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution in interpreting and validating the model's responses due to the potential inconsistencies in its reasoning. ## Limitations ### Safety Current SEA-LION models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes. ## Technical Specifications ### Fine-Tuning Details Gemma2 9B CPT SEA-LIONv3 Instruct was built using a combination of a full parameter fine-tune, on-policy alignment, and model merges of the best performing checkpoints. The training process for fine-tuning was approximately 15 hours, with alignment taking 2 hours, both on 8x H100-80GB GPUs. ## Data Gemma2 9B CPT SEA-LIONv3 Instruct was trained on a wide range of synthetic instructions, alongside publicly available instructions hand-curated by the team with the assistance of native speakers. In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source. ## Call for Contributions We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions. ## The Team Chan Adwin, Choa Esther, Cheng Nicholas, Huang Yuli, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Liu Bing Jie Darius, Montalan Jann Railey, Ng Boon Cheong Raymond, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Brandon, Ong Tat-Wee David, Ong Zhi Hao, Rengarajan Hamsawardhini, Siow Bryan, Susanto Yosephine, Tai Ngee Chia, Tan Choon Meng, Teo Eng Sipp Leslie, Teo Wei Yi, Tjhi William, Teng Walter, Yeo Yeow Tong, Yong Xianbin ## Acknowledgements [AI Singapore](​​https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore. ## Contact For more info, please contact us using this [SEA-LION Inquiry Form](https://forms.gle/sLCUVb95wmGf43hi6) [Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion) ## Disclaimer This is the repository for the commercial instruction-tuned model. The model has _not_ been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.
[ "QUESTION_ANSWERING", "TRANSLATION", "SUMMARIZATION" ]
[ "CHIA" ]
Non_BioNLP
blockblockblock/Dark-Miqu-70B-bpw3-exl2
blockblockblock
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:2403.19522", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "3-bit", "exl2", "region:us" ]
1,715
1,715
4
0
--- license: other --- ![Dark-Miqu.png](Dark-Miqu.png) ***NOTE***: *For a full range of GGUF quants kindly provided by @mradermacher: [Static](https://huggingface.co/mradermacher/Dark-Miqu-70B-GGUF) and [IMatrix](https://huggingface.co/mradermacher/Dark-Miqu-70B-i1-GGUF).* A "dark" creative writing model with 32k context. Based off [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) but with greatly reduced "positivity" and "-isms". If you want happy endings, look elsewhere! This model **excels** at writing Dark/Grimdark fantasy (see examples below). # Model background Created using [Mergekit](https://github.com/arcee-ai/mergekit) and based on @sophosympatheia's template for [Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0). This model has a lower perplexity compared to [Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0) (`'4.08 +/- 0.02'` vs `'4.02 +/- 0.02'`). It also generates longer responses when prompted. The model was created in two stages: - First, three "Midnight-Miqu-esque" models were produced using spherical interpolation (slerp) merges between [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) and each of the following models: [Midnight-Rose-70B-v2.0.3](https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v2.0.3), [Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B) and [WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2). These models were selected for their dark, imaginative writing styles. Various slerp-merges between [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) and other models were also experimented with, but these three yielded the darkest creative writing results. - In the second stage, the three slerp-merged models were combined into a single model using the '[Model Stock](https://arxiv.org/abs/2403.19522)' method, with [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) serving as the base model. # Prompting format Vicuna format is preferred: ``` USER: {prompt} ASSISTANT: ``` Mistral and Alpaca formats are also supported: ``` [INST] {prompt} [/INST] ``` ``` ### Instruction: {prompt} ### Response: ``` # Licence and usage restrictions [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) is a dequantized version of the [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) model leaked from MistralAI. All miqu-derived models, including this merge, are suitable for non-commercial, personal use only. # Mergekit configuration The following YAML configuration was used to produce this model: ```yaml name: midnight-miqu-70b models: - model: 152334H/miqu-1-70b-sf - model: sophosympatheia/Midnight-Rose-70B-v2.0.3 base_model: 152334H/miqu-1-70b-sf merge_method: slerp parameters: t: - value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0] embed_slerp: true tokenizer_source: model:miqu-1-70b-sf dtype: float16 --- name: euryale-miqu-70b models: - model: 152334H/miqu-1-70b-sf - model: Sao10K/Euryale-1.3-L2-70B base_model: 152334H/miqu-1-70b-sf merge_method: slerp parameters: t: - value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0] embed_slerp: true tokenizer_source: model:miqu-1-70b-sf dtype: float16 --- name: winter-miqu-70b models: - model: 152334H/miqu-1-70b-sf - model: Sao10K/WinterGoddess-1.4x-70B-L2 base_model: 152334H/miqu-1-70b-sf merge_method: slerp parameters: t: - value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0] embed_slerp: true tokenizer_source: model:miqu-1-70b-sf dtype: float16 --- name: dark-miqu-70b models: - model: 152334H/miqu-1-70b-sf - model: midnight-miqu-70b - model: euryale-miqu-70b - model: winter-miqu-70b base_model: 152334H/miqu-1-70b-sf merge_method: model_stock dtype: float16 ``` ## Key configuration details: - '`merge_method: slerp`' uses spherical linear interpolation for merging models. - '`parameters: t`' controls the interpolation ratios between models. - '`embed_slerp: true`' applies slerp to the embedding layers. - '`merge_method: model_stock`' uses the '[Model Stock](https://arxiv.org/abs/2403.19522)' method. See the [Mergekit documentation](https://github.com/arcee-ai/mergekit) for more on these settings. **NOTE**: Run with `mergekit-mega` rather than `mergekit` as there are 4 documents in this one file. # Example stories The following mix of "dark" stories were generated using the Vicuna prompt format with no system message and temperature=0: ## Dark fantasy stories <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a 'Grimdark' stories trilogy in the style of Joe Abercrombie and Rob J Hayes ### ASSISTANT Chapter One: The Crimson Dawn The world was a cruel, unforgiving place, and the city of Vhyrd was its beating, blackened heart. Nestled in the crook of the Serpent's Spine mountains, it was a place where the sun struggled to shine, and the shadows held more sway than the law. It was a city of thieves, murderers, and worse, and it was home to the man who would become known as the Butcher of the Broken Docks. His name was Rennick, but that meant little to the people who feared him. He was a tall, broad-shouldered figure, with a face that had been carved by a drunken sculptor with a penchant for sharp angles and harsh lines. His eyes were a cold, flat grey, like the steel of his twin blades, and his hair was the color of dried blood. He moved through the crowded streets with a predator's grace, his cloak billowing behind him like a banner of death. Rennick had a job to do, and he was good at his job. He was a killer, a thief-taker, a man who dealt in the currency of pain and suffering. He had been hired to find a girl, a slip of a thing who had stolen something valuable from the wrong people. He didn't know what it was, and he didn't care. All that mattered was the coin that would jingle in his purse when he brought her back, preferably in several small, bloody pieces. He had tracked her to the Broken Docks, a maze of rotting wood and crumbling stone that clung to the edge of the city like a leech to a corpse. It was a place where the desperate went to die, and the desperate went to prey on the desperate. It was a place where the screams of the damned were drowned out by the cries of the gulls and the lap of the oily water against the pilings. Rennick moved through the shadows, his blades at the ready. He could feel the eyes on him, the whispers of his name on the wind. He reveled in it. Fear was a weapon, and he wielded it with the same skill as his swords. He could hear the girl's heartbeat, fast and frantic, like a rabbit's. She was close. He turned a corner and found himself in a narrow alley, the walls pressing in on him like the jaws of a trap. At the far end, he saw her, a small, trembling figure huddled in the doorway of a derelict building. She was young, no more than sixteen summers, with matted hair and ragged clothes. She looked up at him with wide, terrified eyes, and he could see the whites of her teeth as she bit her lip to keep from crying out. He advanced on her slowly, savoring the moment. He could almost taste the fear on her, like a fine wine. "Come now, lass," he said, his voice a low, dangerous purr. "You know you can't run from me. Give me what you took, and I might let you live." She shook her head, her eyes darting left and right, searching for an escape. Rennick smiled, a cold, cruel thing that didn't reach his eyes. "You're not thinking straight," he said. "You're just a child. You don't want to die here, in the gutter, like a rat." She spat at him, a glob of phlegm and hate that landed on his boot. He raised an eyebrow, impressed despite himself. "Feisty," he said. "I like that. It'll make this more fun." He lunged forward, his blades flashing in the dim light. She screamed and tried to run, but he was too fast. He caught her by the arm, his grip like iron, and yanked her back towards him. She kicked and struggled, but it was like trying to fight a storm. He slammed her against the wall, his forearm pressed against her throat, cutting off her air. "Where is it?" he growled, his breath hot and sour in her face. "Tell me, and I'll make it quick." She clawed at his arm, her nails leaving bloody furrows in his flesh. He didn't flinch. He was used to pain. He had been born in it, raised in it, bathed in it. It was his constant companion, his only friend. "I...I don't have it," she gasped, her voice raw and desperate. "I swear, I don't!" Rennick chuckled, a sound that sent shivers down her spine. "Lying to me is a bad idea, lass," he said. "I have ways of making people talk." He released her throat and slammed his fist into her stomach, doubling her over. She retched, but there was nothing in her stomach to come up. He grabbed a handful of her hair and yanked her head back, exposing her throat. He could see the pulse hammering in her neck, like a trapped bird trying to escape. "Tell me," he said, his voice a whisper. "Tell me, and I'll end this." She spat in his face, a defiant gesture that would be her last. He brought his blade up, the steel glinting in the moonlight, and sliced her throat from ear to ear. Blood sprayed across the cobblestones, warm and wet, and she slumped to the ground, her life draining away like water through a sieve. Rennick wiped the blood from his face and sheathed his blades. He didn't feel anything. No satisfaction, no remorse. Just the cold, empty void that had been his constant companion since the day he had taken his first life. He searched her body, but found nothing. He cursed under his breath. He would have to find the thing she had stolen the hard way. He turned to leave the alley, but something made him pause. A sound, barely audible over the cries of the gulls and the lapping of the waves. A soft, wet sound, like the patter of rain on a rooftop. He looked down and saw that the girl's body was twitching, her eyes rolling back in her head. And then, with a gurgling scream, she transformed. Her body contorted, bones snapping and reforming, flesh tearing and reshaping. In the space of a heartbeat, she was no longer human. She was something else, something monstrous. Rennick stared in horror as the thing that had been a girl rose to its feet, its eyes burning with an unholy light. It was a creature of nightmares, a twisted mockery of humanity, all teeth and claws and hate. It lunged at him, and he barely had time to draw his blades before it was on him. They fought, a dance of death in the narrow alley, steel ringing on steel, flesh tearing and blood spraying. Rennick was strong, and fast, and skilled, but the creature was stronger, faster, more vicious. It was like fighting a storm, a force of nature that could not be reasoned with or bargained with. It could only be endured. In the end, it was luck that saved him. He stumbled, his foot catching on a loose cobblestone, and the creature's claws raked across his chest, leaving deep, ragged gouges. But it overextended itself, and he was able to bring his sword up, driving it through its eye and into its brain. It fell to the ground, twitching and convulsing, and then it was still. Rennick stood over it, panting, his heart hammering in his chest. He had never seen anything like it. He had never even heard of anything like it. He knew then that he was in over his head. This was no ordinary job, no simple matter of retrieving a stolen trinket. This was something bigger, something darker. He was a small fish in a very large, very dangerous pond, and he was about to be eaten alive. But he was also a survivor. He had been born in the gutter, raised in the shadows, and he had clawed his way to the top of the food chain. He would not go down without a fight. He turned and ran, back towards the city, back towards the people who had hired him. He would find out what they knew, and he would make them pay for it. He would uncover the truth, no matter the cost. And so began the tale of the Butcher of the Broken Docks, a story of blood and betrayal, of monsters and men, of a city on the brink of chaos and the man who would either save it or destroy it. It was a story that would echo through the ages, a grim and terrible saga that would be whispered in the darkest corners of the world, a warning to all who would dare to challenge the forces that lurked in the shadows. It was a story that would end in fire and blood, in screams and tears, in the fall of kings and the rise of monsters. It was a story that had only just begun. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a cultist who has just escaped his cult. He is dressed in dirty yellow robes and his only possession is a mysterious small (magical!?) mirror he stole from the cult. The story starts with him arriving at an apparently deserted ghost town on the edge of a desert. He has an arrow lodged in his shoulder and is losing his mind due to infection and thirst. ### ASSISTANT The sun was a merciless beast, its fiery breath scorching the earth and turning the once-thriving town into a desolate wasteland. The cultist, his yellow robes now a dull, dusty hue, staggered through the empty streets, his vision blurred by a haze of pain and delirium. The arrow jutted from his shoulder like an accusing finger, a constant reminder of his betrayal. He clutched the small, intricately carved mirror in his trembling hand, its cool surface a comfort against the relentless heat. He had heard the whispers of the other acolytes, their hushed voices carrying tales of the town's demise like a plague on the wind. They spoke of a curse, of a darkness that had seeped into the very bones of the place, driving its inhabitants mad before claiming their souls. But he had seen the truth in the High Priest's eyes, the greed and lust for power that had led them here. The mirror was the key, they said, to unlocking the secrets of the ancient gods. But he knew better. He had seen the power it held, the way it twisted reality and bent it to the will of the one who wielded it. And he had taken it, stolen it from the very heart of the cult's temple, and run. Now, with each agonizing step, he questioned the wisdom of his decision. The desert stretched out before him, an endless sea of sand and despair, and he was but a single, insignificant speck in its vast expanse. He had hoped to find refuge here, a place to hide and heal before continuing his journey. But the town was as dead as the dreams he had once held of a better life. His thoughts were a jumbled mess, a cacophony of fear and pain. He could feel the infection spreading through his body, a writhing serpent of fire that threatened to consume him from within. His tongue was swollen, his lips cracked and dry, and his throat was a parched wasteland. He needed water, and soon, or he would join the ghosts that haunted this forsaken place. As he stumbled through the town, he caught glimpses of his reflection in the shattered windows of the abandoned buildings. The man who stared back at him was a stranger, his once-handsome face now a twisted mask of agony and desperation. He had been a fool to believe the lies of the cult, to think that he could ever be one of them. He was an outsider, a fraud, and they had seen it in him from the start. The mirror, though, the mirror was real. He could feel its power pulsing in his hand, a heartbeat that echoed the rhythm of his own. It was a cruel irony that the very thing he had risked everything to obtain was now his only hope for survival. He knew the stories, the legends of the mirror's healing properties, of how it could restore life to the dying. But he also knew the cost. The mirror demanded a sacrifice, a piece of the user's very soul in exchange for its gifts. He reached the town square, a desolate expanse of cracked cobblestones and withered plants. In the center stood a fountain, its once-clear waters now a stagnant pool of green slime. He fell to his knees before it, the pain in his shoulder a white-hot brand that seared his flesh. He dipped the mirror into the water, watching as the surface rippled and danced. "Please," he whispered, his voice hoarse and broken. "I'll give you anything. Just take the pain away." The mirror stared back at him, its surface as still and unyielding as the desert sands. He closed his eyes, willing the power within to answer his plea. And then, he felt it. A cold, insidious presence, slithering into his mind like a snake in the dark. It whispered to him, promising relief, promising power, promising everything he had ever wanted. He opened his eyes, and the world was different. The pain was gone, replaced by a cold, emotionless void. He looked down at his shoulder, and the arrow was gone, the wound nothing more than a faint scar. He laughed, a hollow, empty sound that echoed through the empty streets. "What have I done?" he asked the mirror, but it gave no answer. He stood, his legs unsteady beneath him, and looked out at the desert. He could feel the power coursing through his veins, a dark, intoxicating energy that threatened to consume him. He was no longer just a man, he was something more. Something dangerous. He took a step forward, and then another, the mirror clutched tightly in his hand. He would find the cult, he would make them pay for what they had done to him. And then, he would use the mirror to claim the power that was rightfully his. The wind picked up, whipping the sand into a frenzy, and he smiled. The desert was his now, and he would not be denied. He was the master of his own destiny, and the gods themselves would tremble at his approach. But as he walked, the laughter of the ghosts followed him, a chilling reminder of the price he had paid for his newfound strength. And deep within the mirror, something stirred, a darkness that watched and waited, biding its time. The first chapter of the grimdark trilogy, "The Shattered Reflection," had begun. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a 'Mercer' who's job is to put the mortally wounded out of their mystery after a battle. He has no conscience about what he does. The story starts the morning after the battle and the moor where it took place is blanketed in mist. Something stirs a flashback in the POV character. ### ASSISTANT The morning after the battle, the moor lay still, a shroud of mist clinging to its wounds like a lover's last embrace. The Mercer, a man of few words and fewer scruples, trudged through the carnage, his boots squelching in the mud churned red by the night's bloodshed. He was a reaper of sorts, his task to deliver the final mercy to those who had danced with death and lost. His name was a whisper among the living, a curse among the dying. He paused, his hand resting on the hilt of his blade, as a shiver ran down his spine. The mist swirled around him, carrying with it the cacophony of the night before: the screams of the dying, the clash of steel, the thunder of hooves. He closed his eyes, letting the memories wash over him like a wave of despair. *Why do I do this?* he thought, not for the first time. *Why do I wade through this sea of suffering, playing the part of the grim angel?* But he knew the answer. It was the same answer it had always been. He did it because he was good at it. Because he had no qualms about ending a life, even one that still clung to the faintest flicker of hope. Because, in this world of darkness and pain, it was the only thing he knew how to do. He opened his eyes, the mist parting to reveal a figure lying in the mud. A soldier, or what was left of one. The man's armor was shattered, his face a mask of agony. The Mercer knelt beside him, his movements practiced and efficient. He placed a hand on the soldier's forehead, feeling the fevered heat of his skin, the erratic thump of his heart. "Shh," he whispered, his voice a soothing balm. "It's over now." The soldier's eyes fluttered open, wide with terror. "Please," he croaked, "don't leave me here." The Mercer's lips twisted into a grim smile. "I won't," he said, drawing his blade. "I'll take you with me." As the soldier's life bled out onto the cold ground, the Mercer's mind drifted back to another time, another place. A time before the blood and the pain, before the endless cycle of war and death. He remembered a girl with hair like spun gold, laughing as she danced in a field of wildflowers. He remembered the feel of her hand in his, the sound of her voice as she whispered his name. *Elara.* The memory was a knife to his heart, a reminder of all he had lost. He pushed it away, focusing on the task at hand. There were more souls to reap, more lives to end. He moved through the mist, his blade a silent whisper in the stillness. The cries of the wounded guided him, a morbid symphony that played out across the moor. Each one he found, he dispatched with the same cold efficiency, the same hollow promise of release. As the sun began to rise, casting a sickly pallor over the scene, the Mercer came upon a group of soldiers huddled around a fire. They eyed him warily, their faces a mix of fear and respect. He was a necessary evil, a specter that haunted the battlefield. "Anyone here need my services?" he asked, his voice devoid of emotion. One of the soldiers, a young boy who couldn't have seen more than fifteen summers, pointed to a figure lying a few feet away. "Him," he said, his voice trembling. "He's not going to make it." The Mercer approached, his boots crunching on the frost-kissed grass. The man on the ground was a mess, his body a tapestry of wounds. He was barely conscious, his breathing shallow and ragged. *Another one,* the Mercer thought, his heart heavy with the weight of his duty. *Another soul to send into the void.* He knelt beside the man, his blade glinting in the pale light. The soldier's eyes flickered open, and for a moment, the Mercer saw something in them that he hadn't expected: defiance. "No," the man rasped, his hand feebly grasping at the Mercer's wrist. "I won't go quietly." The Mercer hesitated, a frown creasing his brow. He had never encountered this before. Most welcomed his touch, eager for the release it brought. But this man...this man was different. "Why?" he asked, genuinely curious. The soldier's lips curled into a bloody smile. "Because I have a son," he whispered. "And I want to see him grow up." The Mercer stared at the man, his mind racing. He had no children, no one to mourn him when his time came. He was a solitary figure, a ghost haunting the edges of life. But this man...this man had a reason to hold on. He made his decision. "I can't heal you," he said, his voice soft. "But I can make it quick." The soldier's eyes filled with tears. "Thank you," he breathed, his grip on the Mercer's wrist loosening. The Mercer raised his blade, the mist swirling around them like a mourner's veil. As he brought it down, he couldn't help but wonder: *What would it be like to have something to live for?* But the thought was fleeting, lost in the swirl of the mist and the cries of the dying. He was the Mercer, the bringer of mercy. And the moor was his to tend, until the next battle came and the cycle began anew. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a man awaiting execution for a crime he didn't actually commit. His cell mate is a much larger man, who is a notorious murderer and rapist. One evening they overhear the guards talking about some "... threat from the North" but don't catch the whole conversation. The next morning they awake to find the guards have all fled and now face a new predicament: how to get out of the locked cell before they die of thirst or starvation. ### ASSISTANT The stench of piss and shit was a constant companion in the damp stone cell, but it was the sound of the key turning in the lock that truly set his heart racing. The rusted iron groaned in protest, echoing through the narrow corridor like the dying breath of some ancient beast. The door creaked open, revealing the silhouette of the jailer, a hulking brute of a man with a lantern in one hand and a set of keys in the other. But there was no breakfast, no water, just the empty promise of another day in this hellhole. "What's the meaning of this?" the smaller man barked, his voice barely audible over the rattling of his chains. "Where's our food, you fat sack of shit?" The jailer didn't answer, just slammed the door shut and locked it again, the sound of his footsteps fading into the distance. The larger man, a hulking brute named Grimgor, grunted in response, his eyes never leaving the small patch of light that seeped through the barred window. "Something's not right," he rumbled, his voice like gravel being crushed under a heavy boot. The smaller man, known only as the Fox, couldn't argue with that. He'd been in this cell for what felt like an eternity, awaiting execution for a crime he didn't commit. But even in this place, there was a routine, a semblance of order. And that routine had just been shattered. As the day wore on, the silence from the corridor outside grew more oppressive. No guards, no other prisoners, nothing but the distant howl of the wind and the occasional scurrying of rats in the darkness. The Fox's mind raced, trying to piece together what could have happened. Had there been a riot? A rebellion? Or was it something else entirely? He glanced over at Grimgor, who was staring at the wall, lost in thought. The man was a monster, a notorious murderer and rapist, but he was also the only other living being in this godforsaken place. And in a world gone mad, even monsters could be allies. "You hear anything last night?" the Fox asked, trying to keep the tremor out of his voice. Grimgor grunted, his eyes still fixed on the wall. "Something about a threat from the North." The Fox's heart sank. He'd heard the same thing, snatches of conversation between the guards as they'd passed by their cell. But he'd been too afraid to give it much thought, too focused on his own impending doom. "What do you think it means?" he pressed, unable to keep the desperation at bay. Grimgor shrugged his massive shoulders. "Dunno. Don't care. Just means we're probably gonna die in here." The Fox's stomach twisted at the thought. He'd faced death before, many times, but this... this was different. To die slowly, trapped like an animal, with no chance to fight back... it was a fate worse than any he could imagine. As the hours dragged on, the thirst became unbearable. The Fox's tongue felt like sandpaper, his throat raw and parched. He could see the same desperation in Grimgor's eyes, the realization dawning on them both that they were truly alone. "We have to get out of here," he croaked, his voice barely above a whisper. Grimgor just grunted in agreement, his gaze never leaving the window. The Fox's mind raced, trying to come up with a plan. They had nothing, no tools, no weapons, just their wits and their will to survive. And even that seemed to be fading with each passing moment. But then, as the sun began to set and the shadows lengthened, he noticed something. The light from the window was changing, growing dimmer. He squinted, trying to make out what was happening. And then he saw it. "Grimgor," he hissed, tugging on the larger man's arm. "Look." Grimgor turned, his eyes narrowing as he followed the Fox's gaze. The light was flickering, casting strange shadows on the wall. And then, as if in answer to their unspoken prayers, they heard it. The sound of footsteps, growing louder and louder, accompanied by the jingle of keys. The Fox's heart leapt into his throat, hope and fear warring within him. Who was it? Friend or foe? The door swung open, revealing not the jailer, but a figure shrouded in darkness. The Fox couldn't make out any details, but he could sense the malice radiating off of him like heat from a forge. "Who are you?" he demanded, trying to sound braver than he felt. The figure stepped into the cell, the light from the lantern outside casting his features into sharp relief. He was a tall man, with a cruel smile and eyes that glinted like ice. "I am your salvation," he said, his voice like a knife slicing through the silence. "And your doom." With that, he tossed a set of keys at the Fox's feet and disappeared back into the shadows. The door slammed shut behind him, the sound of the lock clicking into place echoing through the cell like a death knell. The Fox stared at the keys, his heart pounding in his chest. Freedom was within his grasp, but at what cost? He looked over at Grimgor, who was already working on his own chains. "We have to get out of here," the larger man growled. "Before whatever's coming from the North gets here." The Fox nodded, his hands shaking as he picked up the keys. They were cold, colder than they had any right to be. He could feel the weight of them, the promise of escape and the threat of the unknown. Together, they worked in silence, the only sound the rattle of chains and the pounding of their hearts. When they were finally free, they stood for a moment, staring at each other. "What now?" the Fox asked, his voice barely above a whisper. Grimgor's smile was feral, his eyes alight with a madness that sent a shiver down the Fox's spine. "Now," he said, "we find out what's really going on." And with that, they set off into the darkness, their footsteps echoing through the empty corridors like the march of the damned. </details> ## Dark sci-fi stories <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Space Opera Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The setting is this: a colony ship heading out of the solar system has been pulled off course with no hope of getting to the planned destination. The people know this and society has broken down completely due to the grim prospects they are facing. The POV character for chapter 1 is an orphan. ### ASSISTANT Chapter 1: Adrift The colony ship, once a beacon of hope named Elysium's Dawn, now drifted through the cold void like a ghost ship. Its hull, pitted and scarred from the debris of uncharted space, groaned with each labored breath it took. The once vibrant corridors were now dimly lit, casting long shadows that seemed to swallow any remaining semblance of optimism. In one such shadow, a figure moved silently, her bare feet making no sound on the cold metal floor. She was small for her age, malnourished from the rationing that had become a way of life. Her ragged clothes hung loosely on her frame, but her eyes were sharp and alert, scanning every corner for danger. This was Kira, an orphan of the void, surviving in a world where hope was a luxury few could afford. Kira's thoughts drifted back to the stories she had heard as a child, tales of a green and blue planet called Earth, their home that they would never see again. The ship had been pulled off course by an unknown force, leaving them stranded in the vast emptiness between stars. The adults spoke in hushed tones about the dwindling resources, the failing life support systems, and the ever-present threat of pirates who preyed on lost ships like theirs. She pushed those thoughts aside as she approached her destination: the hydroponics bay. It was one of the few places on the ship where life still thrived, albeit in a controlled and desperate manner. The air here was thick with moisture and the scent of growing things, a stark contrast to the stale recycled air that permeated the rest of the vessel. As she entered, the soft glow of the grow lights illuminated rows upon rows of plants, struggling to produce enough food for the dwindling population. Kira's stomach growled at the sight, but she knew better than to take more than her share. She had learned early on that stealing could mean a swift and brutal punishment from those who guarded the bay. Instead, she found solace in tending to the plants, her small hands deftly pruning and watering them. It was here that she felt closest to something resembling peace, a connection to life that transcended the harsh reality of their situation. As she worked, her mind wandered to the lessons she had learned from the ship's AI, the only constant presence in her life since the adults had given up on teaching the children anything but survival skills. "Kira," a voice whispered urgently, jolting her out of her reverie. She turned to see Jax, another orphan and one of the few people she trusted. His face was pale and his eyes wide with fear. "You need to come with me, now." She followed him through the maze-like corridors, her heart pounding in her chest. They arrived at a small, hidden room that served as their makeshift home. Inside, huddled around a flickering screen, were several other children, their faces etched with worry and excitement. "Look," Jax said, pointing to the screen. It displayed a grainy image of a man in tattered clothing, his face half-hidden by a wild beard. "He says he knows how to fix the engines." Kira's eyes widened in disbelief. The engines had been dead for as long as she could remember, their failure sealing their fate as interstellar castaways. "Who is he?" she asked, her voice barely above a whisper. "They call him the Mechanic," one of the older girls replied. "He used to work in engineering before...before everything fell apart." The man on the screen continued his frantic plea, detailing his plan to restore power and steer the ship towards a habitable planet he claimed was within reach. Kira felt a spark of hope ignite within her, but it was quickly doused by the skepticism of the others. "It's just another one of their lies," spat a boy named Taro. "They'll say anything to get us to do their dirty work." Jax rounded on him, his fists clenched. "This is our chance, Taro! We can't just give up!" The argument raged on, but Kira tuned it out, her thoughts racing. She knew the risks of trusting an adult, especially one who claimed to have access to forbidden knowledge. But what if he was telling the truth? What if they could find a new home and start over? As she watched the debate unfold, she made up her mind. She would seek out this Mechanic and offer him her help, no matter the cost. It was a decision born of desperation, but also of a stubborn refusal to accept the bleak future that lay before them. And so, as the other children slept fitfully around her, Kira slipped out into the dark corridors once more, determined to find the man who might hold the key to their salvation or doom. The ship groaned and creaked around her, a constant reminder of the fragile existence they all clung to. But she pressed on, driven by an unyielding will to survive and the faintest glimmer of hope that burned within her heart. As she navigated the treacherous path towards engineering, she couldn't help but wonder what awaited her in the depths of the ship. Would she find a savior or a madman? And more importantly, would she be able to make it back alive? </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is an astronaut living on a space station orbiting the earth. The day starts off normally, but then him the others notice the mushroom clouds and they have to face inevitability of the situation they are in. ### ASSISTANT Chapter 1: Ashes of Exodus The cold embrace of space enveloped the station like a lover's caress, its infinite blackness punctuated only by the distant twinkle of stars and the ever-present glow of Earth below. Captain Marcus Kane floated through the dimly lit corridors, his magnetic boots barely whispering against the metal floor as he made his way to the observation deck. The daily routine had become a meditative ritual for him, a chance to clear his mind before the cacophony of the day's work began. As he entered the room, the sight that greeted him never failed to take his breath away. Earth, a swirling marble of blues and greens, hanging in the void like an irreplaceable jewel. He couldn't help but feel a pang of longing for its familiar embrace, for the weight of gravity that he had left behind so many years ago. Marcus settled into his favorite spot by the window, the worn leather of the seat molded to his body like an old friend. He closed his eyes and let the silence wash over him, the only sound the soft hum of the station's life support systems. It was in these moments that he felt truly alive, connected to something greater than himself. But today, the silence was shattered by a chilling alarm, its shrill wail piercing the tranquility like a knife through glass. His eyes snapped open as his heart began to race, adrenaline coursing through his veins. He knew that sound all too well; it was the emergency alert for an incoming transmission from Earth. He launched himself towards the comms console, fingers flying over the keys as he tried to make sense of the garbled message. The voice on the other end was frantic, barely coherent through the static. "Multiple...detonations...global catastrophe..." were the only words he could decipher before the signal died completely. Marcus's mind raced as he tried to process what he had just heard. It couldn't be true, it couldn't be happening. Not again. He quickly activated the external cameras, his stomach twisting into knots as he waited for the feed to come online. And then he saw them: mushroom clouds, blooming like grotesque flowers across the planet's surface. Too many to count, their plumes of smoke and ash reaching up towards the heavens in a silent scream of despair. The sight was both beautiful and horrifying, a testament to humanity's capacity for destruction on an unimaginable scale. His thoughts were interrupted by the sound of footsteps behind him as the rest of the crew floated into the room, drawn by the alarm. Their faces mirrored his own shock and disbelief as they stared at the monitors, their voices a cacophony of whispers and gasps. "What...what is that?" asked Dr. Amelia Chen, her voice trembling with fear. Marcus didn't answer; he couldn't find the words. Instead, he simply pointed at the screen, his hand shaking slightly. Engineer Jake Taylor was the first to break the silence. "Nuclear war," he said, his voice flat and emotionless. "It's finally happened." The others looked at him in horror, but Marcus knew he was right. There could be no other explanation for such devastation. He felt a cold weight settle in the pit of his stomach as the reality of their situation began to sink in. They were alone, adrift in the vastness of space with no hope of return. As the implications of what they had just witnessed set in, the crew fell into an uneasy silence, each lost in their own thoughts. Marcus couldn't help but think about his wife and daughter back home, wondering if they were safe or if they too had been consumed by the flames. He knew it was a futile exercise, but he couldn't stop himself from hoping against hope that somehow, someway, they had survived. But as the minutes ticked by and no further communication came from Earth, that hope began to fade like a dying ember. They were truly alone now, the last remnants of humanity clinging to life on this tiny outpost in the void. He turned to face his crew, their faces etched with fear and uncertainty. "We need to come up with a plan," he said, his voice steady despite the turmoil raging within him. "We can't just sit here and wait for...for whatever comes next." Amelia nodded, her eyes hardening with determination. "I'll start running diagnostics on our life support systems, see how long we can last up here." Jake followed suit, his hands already moving to the controls of the station's mainframe. "I'll try to establish contact with any other surviving outposts or spacecraft." Marcus nodded, grateful for their resolve in the face of such overwhelming odds. He knew that they were all thinking the same thing: that this could very well be the end of everything they had ever known. But he also knew that they couldn't give up, not yet. As he watched them work, his mind drifted back to the message from Earth, those final desperate words echoing in his head like a funeral dirge. He couldn't shake the feeling that there was something more, some hidden meaning or clue that would explain what had happened. And then it hit him: the timing of the detonations, the pattern of the blasts. It wasn't random; it was deliberate, almost...calculated. But who would do such a thing? And why? The questions swirled in his mind like a storm, each one more unsettling than the last. He knew that they needed answers, and fast. Because if this was an act of war, then their isolation might not be as secure as they thought. He pushed himself away from the console, his decision made. "I'm going to suit up," he said, his voice firm. "We need to get a closer look." The others stared at him in disbelief, but he didn't waver. This was his duty as captain, to protect his crew and find out the truth no matter how terrible it might be. As he floated towards the airlock, Marcus couldn't help but feel a sense of dread creeping up on him like a shadow. He knew that what he was about to see would change everything, that there was no going back once he stepped outside and gazed upon the ashes of humanity's final stand. But he also felt something else: a burning anger, a fierce determination to uncover the truth and make those responsible pay for their crimes against humanity. Whatever awaited them out there, they would face it together, as one. He closed his eyes and took a deep breath, steeling himself for what was to come. And then, with a quiet prayer on his lips, he opened the airlock and stepped into the void. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a worker constructing a biodome when a terrible accident happens. Make the story very dark; no happy ending here... ### ASSISTANT Chapter 1: The Biodome The air was thick with the scent of sweat and burnt metal as Kaelen, a seasoned construction worker, tightened the last bolt on the massive irrigation system. He wiped his brow with the back of his gloved hand, leaving behind a smear of grease that glistened under the harsh artificial lights. The hum of machinery and distant shouts echoed through the cavernous space, a cacophony that had become as familiar to him as his own heartbeat over the years spent building biodomes on this desolate planet. Kaelen's thoughts drifted to his family back on Earth, their faces faded like old photographs in his mind. He wondered if they ever thought about him, or if he was just another ghost haunting the fringes of their memories. The company had promised them a better life, but it came at a steep price: years of backbreaking labor on an alien world, far from everything they'd ever known. "Hey, Kael!" A voice cut through his reverie, jolting him back to reality. It was Tamsin, his foreman, her face hidden behind a tinted visor. "We need you up top! There's an issue with the atmospheric seal." He nodded curtly and began the long climb up the scaffolding, each rung biting into his calloused hands. As he ascended, Kaelen couldn't help but marvel at the sheer scale of their creation: a vast dome of steel and glass that would one day be teeming with life, a self-sustaining ecosystem in the heart of this barren wasteland. But today was not that day. Today, it was just another tomb waiting to be sealed. As he reached the top, Kaelen could see the problem immediately: a small fissure had formed along one of the joints, spewing precious oxygen into the void beyond. He cursed under his breath; they were already behind schedule and over budget. Another delay would mean another round of demerits, another month's pay docked. "What do you think?" Tamsin asked, her voice crackling through his earpiece. "Can we patch it up or do we need to call in the engineers?" Kaelen hesitated, running his fingers along the jagged edge of the tear. It was larger than he'd initially thought, and growing by the second. He could feel the cold tendrils of vacuum reaching out to claim him, whispering promises of oblivion. "I... I don't know," he admitted, his voice heavy with dread. "It doesn't look good." Tamsin swore colorfully and turned away, barking orders into her comm unit. Kaelen watched as workers scrambled to gather tools and materials, their movements frantic and disorganized. He knew they were all thinking the same thing: if they couldn't fix this, they were dead. The air around them grew colder, thinner, as the oxygen continued to escape. Kaelen's lungs burned with every breath, his vision swimming at the edges. He fumbled with the patch kit, his hands shaking uncontrollably. This was it; this was how he would die, millions of miles from home, in service to a corporation that saw him as nothing more than a replaceable cog in their grand machine. "Hurry up!" Tamsin shouted over the growing din. "We're losing pressure fast!" Kaelen's heart pounded in his chest like a sledgehammer, drowning out all other sound. He could feel the panic rising within him, threatening to consume him whole. But he couldn't afford to give in; not now, not when so much was at stake. With trembling hands, he applied the sealant and pressed the patch into place. For a moment, it seemed to hold... but then, with a sickening lurch, the fissure widened, swallowing the feeble attempt whole. The wind howled around them like a ravenous beast, tearing at their suits, trying to pull them apart atom by atom. "Abort!" Tamsin screamed, her voice barely audible over the roar. "Everyone get out now!" But it was too late. The dome shuddered beneath their feet, groaning in protest as the atmosphere inside rushed to equalize with the void outside. Kaelen felt himself being lifted off the ground, his boots scrabbling for purchase on the slick metal surface. He reached out, desperate to grab onto something - anything - but found only emptiness. And then, as suddenly as it had begun, there was silence. A deafening, suffocating silence that pressed in on him from all sides. His vision went dark, and he knew no more. When Kaelen awoke, he was lying on the cold floor of the dome, his body wracked with pain. He tried to move, but found himself held fast by twisted metal and shattered glass. Panic surged through him once again as he realized that his suit had been breached; he could feel the icy chill of vacuum seeping into his very bones. Through the haze of agony, he became aware of movement nearby. Tamsin was there, her visor cracked and fogged, struggling to free herself from the wreckage. She looked at him with wide, terrified eyes, her lips moving soundlessly. He knew what she was saying: they were going to die here, alone and forgotten. Kaelen closed his eyes, willing himself to accept the inevitable. But even as the darkness closed in, he couldn't shake the feeling that this wasn't just an accident - that there were forces at work far beyond their understanding, manipulating them like pawns on a cosmic chessboard. And then, just when he thought it was all over, he heard it: a faint beeping, growing louder by the second. His eyes snapped open to see a small drone hovering above them, its lights flashing urgently. It was one of the company's rescue bots, sent to assess the damage and retrieve any survivors. But there would be no salvation for Kaelen or Tamsin. Their suits were too damaged, their bodies too far gone. As the drone's camera lens bore down on him, he could only manage a weak, bitter smile. "Tell my family... tell them I'm sorry," he croaked, his voice barely audible even to himself. Tamsin reached out a gloved hand, her fingers brushing against his own in a final gesture of solidarity. And then, as the darkness claimed him once more, Kaelen knew that this was not the end of their story - but rather, the beginning of something far more sinister. Something that would change the course of human history forever... </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is manning a listening station in deep space. What he hears over the radio chills him to the bone. ### ASSISTANT Chapter 1 - Whispers from the Void The silence of deep space was like an old friend to Captain Alistair Reynolds, a constant companion that had accompanied him through countless missions and lonely nights at the helm of the listening station. But today, it felt more like a suffocating shroud, heavy with unspoken secrets and foreboding. He sat in his chair, fingers drumming impatiently on the console as he scanned the endless stream of data pouring in from the far reaches of the cosmos. His thoughts drifted to home, to the warmth of his wife's embrace and the laughter of their children playing in the garden under the twin suns of Proxima Centauri. It had been years since he last felt that warmth, since he last heard those laughs. The ache was always there, a dull throb that pulsed in time with his heartbeat, but today it seemed to echo through the cold metal halls of the station like a mournful dirge. A sudden blip on the screen snapped him back to reality. He leaned forward, eyes narrowing as he studied the readout. It was faint, barely more than a whisper against the static, but it was there - a signal, human in origin and heavily encrypted. His heart raced as he keyed in the decryption sequence, his mind already racing ahead to the possibilities of what this could mean. The message unscrambled, revealing a frantic voice, barely audible over the crackle of interference. "This is the UES Normandy, requesting immediate assistance... coordinates attached... under attack by unknown forces... God help us..." The transmission cut out abruptly, leaving only the empty hiss of static in its wake. Alistair's blood ran cold as he stared at the screen, the weight of the silence pressing down on him like a physical force. He knew that ship, had served with her captain back during the Martian uprising. They were old friends, bound by shared battles and the scars they bore from them. And now she was out there, alone and in danger. He didn't hesitate. His fingers flew over the console, sending out an emergency beacon to the nearest fleet. But he knew it would take time for help to arrive, time that the Normandy might not have. He had to do something now. As he prepared to launch a probe towards the coordinates, his mind filled with images of burning ships and desperate faces. He could almost hear the screams of the dying, echoing through the void like ghosts from his past. It was a sound that haunted him still, one he had hoped never to hear again. But duty called, and Alistair was nothing if not a soldier. He took a deep breath, steadying himself against the tide of fear and doubt that threatened to overwhelm him. This was his job, his purpose - to listen, to bear witness, and when necessary, to act. The probe shot out into the darkness, its tiny form swallowed up by the vastness of space. He watched it go, feeling a strange sense of detachment as if he were watching someone else's life play out before him. And perhaps in some ways, he was. For all his years in service, this was the first time he had ever been truly alone, cut off from the rest of humanity by light-years and the cold indifference of the universe. As the minutes ticked by, Alistair found himself slipping into a half-trance, his thoughts drifting back to happier times. He remembered the first time he had held his newborn son, the tiny fingers curling around his own as if trying to hold on for dear life. He remembered the taste of real Earth coffee, rich and bitter, shared with his wife under a sky full of stars that seemed close enough to touch. But those memories were distant now, fading like echoes in the void. All he had was this station, this endless vigil, and the hope that one day, he might make it back home. The radio crackled to life again, jolting him out of his reverie. It was the probe, transmitting data back from the edge of nowhere. He braced himself for what he would find, knowing that it could change everything. What greeted him on the screen was worse than anything he could have imagined. The Normandy was there, alright - or rather, what was left of her. She drifted through space like a broken toy, her hull breached and venting atmosphere into the cold void. There were no signs of life, no distress signals, nothing but silence. Alistair felt a cold knot form in his stomach as he took in the wreckage. This wasn't just any ship; this was his friend's ship, her crew now lost to the void. And the attackers... whoever or whatever they were, they had left no trace behind. He sat back in his chair, the weight of it all settling heavily upon him. He knew what he had to do next - file a report, alert command, and wait for further instructions. But as he stared at the lifeless husk that was once the Normandy, he couldn't shake the feeling that something far greater than himself was at work here. Something dark and terrible had come calling, whispering its secrets through the stars. And now it was his turn to listen, to bear witness, and perhaps, if he was lucky, to survive long enough to tell the tale. </details> ## Miscellaneous stories <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty post-apocalyptic novel in the style of Cormac McCarthy. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a technician in a nuclear bunker 18 months after the war. The 12 inhabitants of the bunker have been expecting to start to be able to leave soon, but it has just been discovered that at least where they are, salted cobalt bombs were used. It's clear now that either all 12 will die when the resources run out and their is barely enough for 6 to make it to 5 years and have any hope of leaving. After 18 months of calm everything breaks lose today. ### ASSISTANT In the dim light of the bunker, the technician sat hunched over his console, fingers dancing across the keys with a practiced urgency. The air was thick with the hum of machinery and the stench of sweat and fear. He had been down here for what felt like an eternity, but the digital clock on the wall insisted it had only been eighteen months since the world above had turned to ash. His name was Samuel, or at least that's what they called him now. In another life, he had been a software engineer, designing algorithms for self-driving cars and smart homes. Now, he was responsible for keeping twelve souls alive in this underground tomb, their last bastion against the desolation above. He glanced up at the monitor, the green glow illuminating his haggard face. The readings were clear: salted cobalt bombs had been used. The thought sent a shiver down his spine. They had all heard the rumors, of course, but they had clung to hope like a drowning man to driftwood. Hope that the war hadn't been as bad as they feared, that the surface would be habitable again soon. But now, the truth was undeniable. The world outside was dead, and it would remain so for generations. Samuel closed his eyes, trying to block out the images of a scorched earth, devoid of life. He could hear the others in the background, their voices a low murmur as they went about their daily tasks. They didn't know yet. How could he tell them? How could he be the one to snuff out the last ember of hope that had kept them going for so long? He took a deep breath and stood up, his joints creaking in protest. He made his way through the narrow corridors, past the rows of bunks where some slept fitfully, their dreams haunted by memories of a world they would never see again. Others were awake, their faces etched with lines of worry and despair. They looked up as he passed, but he couldn't meet their gazes. He reached the command center, where Captain Reynolds was poring over maps and reports. The man looked up as Samuel entered, his eyes heavy with exhaustion. "What is it?" he asked, his voice gruff. Samuel hesitated for a moment before speaking. "Sir, I have some... troubling news." The captain's expression didn't change, but Samuel could see the tension in his jaw. He had been expecting this, too. "Go on," he said simply. "The radiation readings... they're off the charts. It's cobalt-60. The half-life is over five years." Reynolds let out a slow breath, his eyes never leaving Samuel's face. "And our supplies?" "We have enough for six, maybe seven if we ration carefully. But not twelve." The room seemed to shrink around them as the weight of their situation settled in. The captain rubbed his temples, his thoughts racing. They had always known that their time in the bunker was limited, but they had been so close. So very close. "We'll need to make a decision," he said finally, his voice barely above a whisper. "Who stays, and who goes." Samuel felt a knot form in his stomach. He knew what that meant. It was survival of the fittest down here now, and there were those among them who would do anything to ensure their own survival. As he made his way back to his console, he couldn't help but think about the world above. The trees that would never grow again, the birds that would never sing. He thought of his wife and daughter, their faces fading like photographs left in the sun. They had been lucky, they said, to have made it into the bunker. But now, he wasn't so sure. In the mess hall, the atmosphere was tense. Whispers and hushed conversations filled the air as people tried to make sense of what they had just heard. Samuel sat alone at a table, staring blankly at his food. He couldn't eat. Not now. He felt a hand on his shoulder and looked up to see Maria, the doctor. Her eyes were red-rimmed, but her voice was steady. "We need to talk," she said, guiding him towards the infirmary. Once inside, she closed the door behind them and leaned against it, her body trembling. "What are we going to do?" she asked, her voice cracking. Samuel shook his head. "I don't know." "We can't just let six of us die," she said, her voice rising in panic. "There has to be another way." He looked at her, at the desperation etched on her face, and felt a flicker of anger. "What do you suggest? We all hold hands and sing Kumbaya while we starve to death?" Maria's eyes flashed with anger, but she quickly composed herself. "No," she said softly. "But we can't just give up." Samuel sighed and ran a hand through his greasy hair. "I'm not giving up," he said. "I'm just being realistic." She looked at him for a long moment before nodding. "You're right," she said, her voice barely audible. "We need to be realistic." They sat in silence for a while, each lost in their own thoughts. Then, Maria spoke up again. "There is one option," she said hesitantly. "But it's... drastic." Samuel raised an eyebrow. "What?" "We could... we could draw straws," she said, her voice barely above a whisper. "Choose six people at random to leave the bunker and take their chances on the surface." He stared at her in disbelief. "You can't be serious." "It's not ideal," she admitted. "But it's better than just letting them die down here." Samuel shook his head, his mind reeling. He couldn't fathom the thought of sending people out into that wasteland, knowing they would likely never return. But what other choice did they have? As he sat there, the weight of their decision pressing down on him like a leaden shroud, he heard a commotion outside. Shouting, followed by the sound of scuffling feet and something heavy being dragged across the floor. He exchanged a worried glance with Maria before rushing out into the corridor. What he saw made his blood run cold. A group of men had cornered one of the younger women, their faces twisted with rage and desperation. They were arguing over who would get to leave the bunker, who would get a chance at life. Fists flew, and soon, the sound of gunfire echoed through the halls. Samuel stood there, frozen in place as chaos erupted around him. This was it. The end of everything they had worked so hard to preserve. He could feel the darkness closing in, suffocating him, as he realized that their last hope for survival had just been shattered like glass under a bootheel. And all he could do was watch as the world above claimed them, one by one. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a historical mystery novel in the style of Arturo Pérez-Reverte and Umberto Eco. The story starts off in California during the 1948 Gold Rush, before moving to London in the mid 1970s. There should be a mysterious 'occult' item that connects these two time periods. ### ASSISTANT Chapter One: The Golden Veil The year was 1848, and California was a land of dreams and desperation. Men from all corners of the world had flocked to its golden shores, lured by the promise of wealth beyond their wildest imaginations. They came with pickaxes and pans, with hope in their hearts and greed in their eyes, ready to tear apart the earth itself in search of fortune. In the midst of this frenzy stood a solitary figure, a man who seemed out of place amidst the chaos. He was tall and lean, with a face that bore the lines of a thousand stories. His clothes were simple but well-made, his boots worn from years of travel. He moved with an air of quiet authority, as if he belonged to another time entirely. His name was Alistair Blackwood, and he had come to California not for gold, but for something far more valuable: knowledge. A scholar by trade and an adventurer at heart, Blackwood had spent his life chasing after ancient texts and forgotten lore, seeking to unravel the secrets of the universe. And now, he believed he was on the verge of a discovery that would change everything. Rumors had reached him of a strange artifact, said to have been found deep within the Sierra Nevada mountains. It was a small, intricately carved box made of an unknown metal, its surface etched with symbols that defied translation. Those who claimed to have seen it spoke in hushed tones of its otherworldly beauty and the eerie sense of power that seemed to emanate from within. Blackwood had tracked the rumors to a dusty saloon in Sacramento, where he hoped to find answers among the grizzled miners who frequented the place. As he pushed open the swinging doors, the din of voices and clinking glasses washed over him like a wave. He scanned the room, his gaze settling on a group of men huddled around a table in the corner. One look at their faces told him they were the ones he sought: sun-weathered and unshaven, their eyes glinting with a mixture of excitement and fear as they whispered amongst themselves. He approached them slowly, aware that he was an outsider in this rough-and-tumble world. "Gentlemen," he said, his voice low and cultured, "I couldn't help but overhear your conversation. Might I be so bold as to ask what has you all so...animated?" The men exchanged wary glances before one of them spoke up. "You ain't from around here, are ya?" he drawled, eyeing Blackwood's fine clothes with suspicion. "No," Blackwood admitted, "I am not. But I have traveled far and wide in search of knowledge, and I believe you may possess something that could be of great interest to me." He reached into his pocket and produced a small leather pouch, which he placed on the table with a soft thud. The men's eyes widened as they saw the glint of gold within. "I understand there is an object - a box, perhaps? - that has recently come into your possession. I would be willing to pay handsomely for any information you might have about it." The miners looked at each other, then back at Blackwood. Finally, the one who had spoken before nodded slowly. "We might know somethin' 'bout that," he said, his voice dropping to a conspiratorial whisper. "But it ain't no ordinary box." As they began to tell their tale, the saloon faded away around them, replaced by images of dark caverns and glittering treasure. Blackwood leaned forward, his heart pounding with anticipation. This was it - the moment he had been waiting for. The box, they said, had been found buried beneath a waterfall, hidden in a cave that seemed to defy all natural laws. It was guarded by strange creatures unlike anything any of them had ever seen: half-man, half-beast, with eyes that glowed like embers in the darkness. But the greatest mystery of all lay within the box itself. For when they opened it, they found not gold or jewels, but a single sheet of parchment, yellowed with age and covered in writing that none of them could decipher. Blackwood's pulse quickened as he listened. This was more than just another piece of occult nonsense; this was the real thing. He knew it in his bones. "I must see this box for myself," he said, his voice barely above a whisper. "Take me to it." The miners exchanged uneasy glances before nodding their assent. They led him out of the saloon and into the night, down winding streets and through shadowy alleyways until they reached a nondescript building on the edge of town. Inside, they descended a rickety staircase to a dimly lit basement filled with crates and barrels. There, in the center of the room, sat the object of his obsession: a small, unassuming box that seemed to hum with energy. Blackwood reached out a trembling hand to touch it, feeling the power coursing through him like electricity. As he did so, the world around him shifted, and suddenly he was no longer in a dingy basement in Sacramento. He stood instead in a grand library, its shelves groaning under the weight of countless tomes and scrolls. A fire crackled in the hearth, casting flickering shadows on the walls. A figure emerged from the gloom, cloaked in darkness save for two piercing eyes that seemed to bore into his very soul. "You have found it," the voice said, deep and resonant. "The key to unlocking the secrets of the universe." Blackwood stared at the apparition, unable to speak. He knew he should be afraid, but all he felt was awe. "But be warned," the figure continued, "for with great power comes great responsibility. The knowledge contained within this box is not meant for mortal minds." And then, just as suddenly as it had appeared, the vision vanished, leaving Blackwood alone once more in the dimly lit room. He stared at the box, his mind racing. What did it all mean? As he pondered this question, a sudden commotion erupted upstairs. Shouts and gunfire filled the air, followed by the sound of heavy footsteps on the staircase. The miners had betrayed him! He snatched up the box and fled into the night, his heart pounding in his chest. He knew he couldn't stay in California - not now that others were after the artifact. No, he would have to take it far away, to a place where its secrets could be studied in safety. And so began a journey that would span continents and decades, from the wild frontier of the Gold Rush to the foggy streets of Victorian London. For Alistair Blackwood had stumbled upon something much greater than mere treasure; he held in his hands the key to unlocking the very fabric of reality itself. But little did he know that the true mystery was only just beginning... </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty historical novel set during the battle of the Bulge. Write in first person. The POV character for Chapter 1 is a young German officer trying to carry his wounded commander back to the deep defensive line after the bunker he was in was overrun. He should have a flashback to the moment it was overrun during the middle of the story. The story should not end well for either of them. ### ASSISTANT Chapter 1 The cold bit into my bones as I trudged through the snow, each step a labored effort. My breath came out in ragged puffs, clouding the air around me. In my arms, I cradled the weight of my world - my commander, Hauptmann Müller, his body limp and heavy with pain. His eyes were closed, but I could hear the shallow rasp of each breath he took, a testament to his will to live. The forest around us was a maze of shadows and death, the trees standing like skeletal sentinels against the moonlit sky. The silence was deafening, broken only by the distant rumble of artillery and the occasional crack of a rifle shot. It was a cruel irony that the same snow that muffled our footsteps also betrayed the presence of the enemy, their tracks glowing like phosphorescent snakes in the pale light. I stumbled, my legs threatening to give way under the strain. Hauptmann Müller groaned, his hand clutching at my coat. "Keep moving, Leutnant," he whispered, his voice barely audible. "We must reach the defensive line." I nodded, gritting my teeth as I adjusted my grip on him. The pain in my arms and shoulders was a constant companion now, but it was nothing compared to the fear that gnawed at my gut. We were alone, cut off from our unit during the chaos of the American assault. The bunker we had been holed up in was now a tomb for the men who had trusted us to lead them. A flash of memory seared through my mind, taking me back to that fateful moment when it all went wrong. The thunderous roar of explosions, the earth shaking beneath our feet as if the very foundations of hell were breaking loose. The screams of men, the acrid smell of smoke and cordite, the taste of blood in my mouth. I remembered the look on Müller's face when he realized what was happening - a mixture of disbelief and determination. He had barked orders, trying to rally our troops, but it was too late. The Americans were upon us, their numbers overwhelming. I saw him take a bullet to the leg, his face contorting in agony as he fell. And then there was nothing but chaos and darkness. When I came to, the bunker was silent except for the moans of the dying. The air was thick with smoke, making it hard to breathe. I found Müller slumped against a wall, his uniform soaked in blood. He looked up at me, his eyes glassy with pain but still burning with that same fierce determination. "We have to go," he said, gripping my arm. "They're coming." And so we had fled into the night, leaving behind the wreckage of our command post and the bodies of our comrades. Now, hours later, we were still stumbling through the forest, searching for safety that seemed to recede with every step. I could feel my strength ebbing away, my legs turning to lead. But I couldn't stop, not now. Not when we were so close. The deep defensive line was our only hope, a thin red line that stood between us and annihilation. If we could just make it there... But fate is a cruel mistress, and she had other plans for us. As we staggered out of the trees and into a small clearing, I saw them - a patrol of American soldiers, their silhouettes stark against the snow. They spotted us at the same time, their rifles swinging around to point in our direction. "Halt!" one of them yelled, his voice harsh and commanding. "Hands up!" I froze, my heart pounding in my chest. Müller's grip on me tightened, his breath coming in short gasps. I knew what he was thinking - we couldn't surrender, not like this. We were too valuable, too close to the front lines. They would shoot us or take us prisoner, and either way, it would be a death sentence for him. "Run," he whispered, his voice barely audible. "Leave me." I shook my head, tears stinging my eyes. I couldn't do that. I wouldn't. He was my commander, my friend. I owed him everything. But before I could make a move, the air was filled with the sound of gunfire. Müller jerked in my arms, a wet gurgle escaping his lips as he slumped against me. I stared down at him in disbelief, feeling the warmth of his blood soaking through my coat. "Nein," I whispered, my voice choked with grief and rage. "Nein!" I fell to my knees, cradling his lifeless body as the world around me spun out of control. The Americans were closing in, their shouts growing louder. I knew it was over - for both of us. With a final, desperate cry, I raised my pistol and fired into the darkness, hoping to take at least one of them with me. But my aim was wild, my strength gone. The last thing I saw was a flash of light, followed by an explosion of pain in my chest. And then there was nothing but blackness. As I slipped away, I thought of home - of my mother's warm embrace, of the laughter of my little sister. And I knew that I would never see them again. This frozen hellhole would be my grave, and Hauptmann Müller's sacrifice would be forgotten by all but me. But in that moment, as death claimed me, I found a strange sort of peace. We had fought for our Fatherland until the very end, and we had done so with honor. And in the grand tapestry of history, perhaps that was all that mattered. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me a short story about the fictional character of Anton Chigurh. It should be written in third person personal and feature internal monologues of his deranged world view. The time period is around 2 years before the events in Cormac McCarthy's 2005 novel 'No Country for Old Men'. It should be set in Mexico, be *very* dark, show Anton's still developing obsession with "chance" and feature a grenade. The story should start with him waiting in a bar for instructions on the "job" he is to perform. ### ASSISTANT In the dimly lit corner of a nondescript Mexican cantina, Anton Chigurh sat motionless, his eyes scanning the room like a predator assessing its prey. The air was thick with the smell of stale cigarette smoke and cheap tequila, but he remained unfazed, his senses heightened, attuned to every sound, every movement around him. He nursed a glass of water, the condensation beading on its surface like droplets of sweat on cold steel. His mind wandered, as it often did, to the concept of chance. The randomness of life and death fascinated him, the idea that one's fate could hinge on something as simple as a coin flip or the roll of dice. He believed in the purity of such decisions, untainted by human emotion or prejudice. It was a philosophy he had embraced wholeheartedly, and it had served him well in his line of work. A fly buzzed around his head, its tiny wings beating an erratic rhythm against the stillness of the room. He watched it with mild interest, wondering if its life would end here, on this very spot, or if it would continue its meaningless existence elsewhere. The thought brought a small, cruel smile to his lips. The bartender, a grizzled old man with a face like leather, eyed him warily from across the room. He knew better than to engage in conversation with this stranger, who had entered his establishment an hour ago and hadn't spoken a word since. There was something about him that made the hairs on the back of his neck stand up, something cold and deadly lurking just beneath the surface. Chigurh's thoughts were interrupted by the sound of the door creaking open. A man in a cheap suit entered, looking around nervously before making his way towards him. He slid into the seat across from Chigurh, who continued to stare straight ahead, as if he hadn't noticed the new arrival. "You're late," Chigurh said finally, his voice low and gravelly. The man in the suit swallowed hard, adjusting his tie. "I apologize. Traffic." Chigurh didn't respond, his eyes still fixed on some distant point. The man fidgeted, reaching into his pocket to retrieve a small envelope. He slid it across the table, and Chigurh picked it up without looking at him, tearing it open with a single, precise movement. Inside was a photograph of a man, along with an address scrawled on a piece of paper. He studied the image for a moment, committing every detail to memory before slipping it back into the envelope and sliding it into his own pocket. "You know what needs to be done," the man in the suit said, his voice barely above a whisper. Chigurh nodded once, his mind already turning to the task at hand. He stood up, leaving a few crumpled bills on the table for the water he hadn't touched. As he walked towards the door, he could feel the weight of the grenade in his other pocket, its cool metal surface pressing against his thigh like a lover's caress. Outside, the heat of the day was oppressive, but Chigurh barely registered it as he climbed into his car and started the engine. He drove through the dusty streets, his thoughts consumed by the upcoming job. The man in the photograph had a wife, two kids. It didn't matter. They were all just pieces on a board, pawns to be moved or removed as necessary. He arrived at the address just as the sun was beginning to set, casting long shadows across the dilapidated building. He sat in his car for several minutes, watching the comings and goings of the neighborhood, waiting for the right moment. When it came, he slipped out of the vehicle and approached the door with a sense of purpose. Inside, he could hear laughter and music, the sounds of a family enjoying their evening together. He smiled to himself as he pulled the grenade from his pocket, feeling its weight in his hand. This was what he lived for: the moment when chance decided who would live and who would die. He took a deep breath, savoring the anticipation, before pulling the pin with a satisfying click. He tossed it into the room, not bothering to wait for the explosion as he turned and walked away. The screams that followed were like music to his ears, a symphony of chaos and destruction. As he drove off into the night, Chigurh couldn't help but feel a sense of satisfaction. Another job well done, another life ended by the whims of fate. He reached into his pocket and pulled out a coin, flipping it idly between his fingers as he considered his next move. Heads or tails? Life or death? It was all the same to him. The world was a cruel place, and he was just its instrument, meting out justice in the only way he knew how: with cold, calculated precision. And if some people happened to get caught in the crossfire... well, that was just the luck of the draw. </details> Big thanks to: @sophosympatheia for working out the merge pattern, @Sao10K for creating Euryale and WinterGoddess, and @chargoddard for writing [Mergekit](https://github.com/arcee-ai/mergekit)!
[ "TRANSLATION" ]
[ "BEAR" ]
Non_BioNLP
Santyyy/ember-v1-Q8_0-GGUF
Santyyy
feature-extraction
[ "sentence-transformers", "gguf", "mteb", "feature-extraction", "sentence-similarity", "transformers", "llama-cpp", "gguf-my-repo", "en", "base_model:llmrails/ember-v1", "base_model:quantized:llmrails/ember-v1", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,726
1,726
21
0
--- base_model: llmrails/ember-v1 language: en license: mit tags: - mteb - sentence-transformers - feature-extraction - sentence-similarity - transformers - llama-cpp - gguf-my-repo model-index: - name: ember_v1 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.05970149253731 - type: ap value: 38.76045348512767 - type: f1 value: 69.8824007294685 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 91.977 - type: ap value: 88.63507587170176 - type: f1 value: 91.9524133311038 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.938 - type: f1 value: 47.58273047536129 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 41.252 - type: map_at_10 value: 56.567 - type: map_at_100 value: 57.07600000000001 - type: map_at_1000 value: 57.08 - type: map_at_3 value: 52.394 - type: map_at_5 value: 55.055 - type: mrr_at_1 value: 42.39 - type: mrr_at_10 value: 57.001999999999995 - type: mrr_at_100 value: 57.531 - type: mrr_at_1000 value: 57.535000000000004 - type: mrr_at_3 value: 52.845 - type: mrr_at_5 value: 55.47299999999999 - type: ndcg_at_1 value: 41.252 - type: ndcg_at_10 value: 64.563 - type: ndcg_at_100 value: 66.667 - type: ndcg_at_1000 value: 66.77 - type: ndcg_at_3 value: 56.120000000000005 - type: ndcg_at_5 value: 60.889 - type: precision_at_1 value: 41.252 - type: precision_at_10 value: 8.982999999999999 - type: precision_at_100 value: 0.989 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.309 - type: precision_at_5 value: 15.690000000000001 - type: recall_at_1 value: 41.252 - type: recall_at_10 value: 89.82900000000001 - type: recall_at_100 value: 98.86200000000001 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 66.927 - type: recall_at_5 value: 78.45 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 48.5799968717232 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 43.142844164856136 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 64.45997990276463 - type: mrr value: 77.85560392208592 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 86.38299310075898 - type: cos_sim_spearman value: 85.81038898286454 - type: euclidean_pearson value: 84.28002556389774 - type: euclidean_spearman value: 85.80315990248238 - type: manhattan_pearson value: 83.9755390675032 - type: manhattan_spearman value: 85.30435335611396 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 87.89935064935065 - type: f1 value: 87.87886687103833 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 38.84335510371379 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 36.377963093857005 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 32.557 - type: map_at_10 value: 44.501000000000005 - type: map_at_100 value: 46.11 - type: map_at_1000 value: 46.232 - type: map_at_3 value: 40.711000000000006 - type: map_at_5 value: 42.937 - type: mrr_at_1 value: 40.916000000000004 - type: mrr_at_10 value: 51.317 - type: mrr_at_100 value: 52.003 - type: mrr_at_1000 value: 52.044999999999995 - type: mrr_at_3 value: 48.569 - type: mrr_at_5 value: 50.322 - type: ndcg_at_1 value: 40.916000000000004 - type: ndcg_at_10 value: 51.353 - type: ndcg_at_100 value: 56.762 - type: ndcg_at_1000 value: 58.555 - type: ndcg_at_3 value: 46.064 - type: ndcg_at_5 value: 48.677 - type: precision_at_1 value: 40.916000000000004 - type: precision_at_10 value: 9.927999999999999 - type: precision_at_100 value: 1.592 - type: precision_at_1000 value: 0.20600000000000002 - type: precision_at_3 value: 22.078999999999997 - type: precision_at_5 value: 16.08 - type: recall_at_1 value: 32.557 - type: recall_at_10 value: 63.942 - type: recall_at_100 value: 86.436 - type: recall_at_1000 value: 97.547 - type: recall_at_3 value: 48.367 - type: recall_at_5 value: 55.818 - type: map_at_1 value: 32.106 - type: map_at_10 value: 42.55 - type: map_at_100 value: 43.818 - type: map_at_1000 value: 43.952999999999996 - type: map_at_3 value: 39.421 - type: map_at_5 value: 41.276 - type: mrr_at_1 value: 39.936 - type: mrr_at_10 value: 48.484 - type: mrr_at_100 value: 49.123 - type: mrr_at_1000 value: 49.163000000000004 - type: mrr_at_3 value: 46.221000000000004 - type: mrr_at_5 value: 47.603 - type: ndcg_at_1 value: 39.936 - type: ndcg_at_10 value: 48.25 - type: ndcg_at_100 value: 52.674 - type: ndcg_at_1000 value: 54.638 - type: ndcg_at_3 value: 44.05 - type: ndcg_at_5 value: 46.125 - type: precision_at_1 value: 39.936 - type: precision_at_10 value: 9.096 - type: precision_at_100 value: 1.473 - type: precision_at_1000 value: 0.19499999999999998 - type: precision_at_3 value: 21.295 - type: precision_at_5 value: 15.121 - type: recall_at_1 value: 32.106 - type: recall_at_10 value: 58.107 - type: recall_at_100 value: 76.873 - type: recall_at_1000 value: 89.079 - type: recall_at_3 value: 45.505 - type: recall_at_5 value: 51.479 - type: map_at_1 value: 41.513 - type: map_at_10 value: 54.571999999999996 - type: map_at_100 value: 55.579 - type: map_at_1000 value: 55.626 - type: map_at_3 value: 51.127 - type: map_at_5 value: 53.151 - type: mrr_at_1 value: 47.398 - type: mrr_at_10 value: 57.82000000000001 - type: mrr_at_100 value: 58.457 - type: mrr_at_1000 value: 58.479000000000006 - type: mrr_at_3 value: 55.32899999999999 - type: mrr_at_5 value: 56.89999999999999 - type: ndcg_at_1 value: 47.398 - type: ndcg_at_10 value: 60.599000000000004 - type: ndcg_at_100 value: 64.366 - type: ndcg_at_1000 value: 65.333 - type: ndcg_at_3 value: 54.98 - type: ndcg_at_5 value: 57.874 - type: precision_at_1 value: 47.398 - type: precision_at_10 value: 9.806 - type: precision_at_100 value: 1.2590000000000001 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 24.619 - type: precision_at_5 value: 16.878 - type: recall_at_1 value: 41.513 - type: recall_at_10 value: 74.91799999999999 - type: recall_at_100 value: 90.96 - type: recall_at_1000 value: 97.923 - type: recall_at_3 value: 60.013000000000005 - type: recall_at_5 value: 67.245 - type: map_at_1 value: 26.319 - type: map_at_10 value: 35.766999999999996 - type: map_at_100 value: 36.765 - type: map_at_1000 value: 36.829 - type: map_at_3 value: 32.888 - type: map_at_5 value: 34.538999999999994 - type: mrr_at_1 value: 28.249000000000002 - type: mrr_at_10 value: 37.766 - type: mrr_at_100 value: 38.62 - type: mrr_at_1000 value: 38.667 - type: mrr_at_3 value: 35.009 - type: mrr_at_5 value: 36.608000000000004 - type: ndcg_at_1 value: 28.249000000000002 - type: ndcg_at_10 value: 41.215 - type: ndcg_at_100 value: 46.274 - type: ndcg_at_1000 value: 48.007 - type: ndcg_at_3 value: 35.557 - type: ndcg_at_5 value: 38.344 - type: precision_at_1 value: 28.249000000000002 - type: precision_at_10 value: 6.429 - type: precision_at_100 value: 0.9480000000000001 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 15.179 - type: precision_at_5 value: 10.734 - type: recall_at_1 value: 26.319 - type: recall_at_10 value: 56.157999999999994 - type: recall_at_100 value: 79.65 - type: recall_at_1000 value: 92.73 - type: recall_at_3 value: 40.738 - type: recall_at_5 value: 47.418 - type: map_at_1 value: 18.485 - type: map_at_10 value: 27.400999999999996 - type: map_at_100 value: 28.665000000000003 - type: map_at_1000 value: 28.79 - type: map_at_3 value: 24.634 - type: map_at_5 value: 26.313 - type: mrr_at_1 value: 23.134 - type: mrr_at_10 value: 32.332 - type: mrr_at_100 value: 33.318 - type: mrr_at_1000 value: 33.384 - type: mrr_at_3 value: 29.664 - type: mrr_at_5 value: 31.262 - type: ndcg_at_1 value: 23.134 - type: ndcg_at_10 value: 33.016 - type: ndcg_at_100 value: 38.763 - type: ndcg_at_1000 value: 41.619 - type: ndcg_at_3 value: 28.017999999999997 - type: ndcg_at_5 value: 30.576999999999998 - type: precision_at_1 value: 23.134 - type: precision_at_10 value: 6.069999999999999 - type: precision_at_100 value: 1.027 - type: precision_at_1000 value: 0.14200000000000002 - type: precision_at_3 value: 13.599 - type: precision_at_5 value: 9.975000000000001 - type: recall_at_1 value: 18.485 - type: recall_at_10 value: 45.39 - type: recall_at_100 value: 69.876 - type: recall_at_1000 value: 90.023 - type: recall_at_3 value: 31.587 - type: recall_at_5 value: 38.164 - type: map_at_1 value: 30.676 - type: map_at_10 value: 41.785 - type: map_at_100 value: 43.169000000000004 - type: map_at_1000 value: 43.272 - type: map_at_3 value: 38.462 - type: map_at_5 value: 40.32 - type: mrr_at_1 value: 37.729 - type: mrr_at_10 value: 47.433 - type: mrr_at_100 value: 48.303000000000004 - type: mrr_at_1000 value: 48.337 - type: mrr_at_3 value: 45.011 - type: mrr_at_5 value: 46.455 - type: ndcg_at_1 value: 37.729 - type: ndcg_at_10 value: 47.921 - type: ndcg_at_100 value: 53.477 - type: ndcg_at_1000 value: 55.300000000000004 - type: ndcg_at_3 value: 42.695 - type: ndcg_at_5 value: 45.175 - type: precision_at_1 value: 37.729 - type: precision_at_10 value: 8.652999999999999 - type: precision_at_100 value: 1.336 - type: precision_at_1000 value: 0.168 - type: precision_at_3 value: 20.18 - type: precision_at_5 value: 14.302000000000001 - type: recall_at_1 value: 30.676 - type: recall_at_10 value: 60.441 - type: recall_at_100 value: 83.37 - type: recall_at_1000 value: 95.092 - type: recall_at_3 value: 45.964 - type: recall_at_5 value: 52.319 - type: map_at_1 value: 24.978 - type: map_at_10 value: 35.926 - type: map_at_100 value: 37.341 - type: map_at_1000 value: 37.445 - type: map_at_3 value: 32.748 - type: map_at_5 value: 34.207 - type: mrr_at_1 value: 31.163999999999998 - type: mrr_at_10 value: 41.394 - type: mrr_at_100 value: 42.321 - type: mrr_at_1000 value: 42.368 - type: mrr_at_3 value: 38.964999999999996 - type: mrr_at_5 value: 40.135 - type: ndcg_at_1 value: 31.163999999999998 - type: ndcg_at_10 value: 42.191 - type: ndcg_at_100 value: 48.083999999999996 - type: ndcg_at_1000 value: 50.21 - type: ndcg_at_3 value: 36.979 - type: ndcg_at_5 value: 38.823 - type: precision_at_1 value: 31.163999999999998 - type: precision_at_10 value: 7.968 - type: precision_at_100 value: 1.2550000000000001 - type: precision_at_1000 value: 0.16199999999999998 - type: precision_at_3 value: 18.075 - type: precision_at_5 value: 12.626000000000001 - type: recall_at_1 value: 24.978 - type: recall_at_10 value: 55.410000000000004 - type: recall_at_100 value: 80.562 - type: recall_at_1000 value: 94.77600000000001 - type: recall_at_3 value: 40.359 - type: recall_at_5 value: 45.577 - type: map_at_1 value: 26.812166666666666 - type: map_at_10 value: 36.706916666666665 - type: map_at_100 value: 37.94016666666666 - type: map_at_1000 value: 38.05358333333333 - type: map_at_3 value: 33.72408333333334 - type: map_at_5 value: 35.36508333333333 - type: mrr_at_1 value: 31.91516666666667 - type: mrr_at_10 value: 41.09716666666666 - type: mrr_at_100 value: 41.931916666666666 - type: mrr_at_1000 value: 41.98458333333333 - type: mrr_at_3 value: 38.60183333333333 - type: mrr_at_5 value: 40.031916666666675 - type: ndcg_at_1 value: 31.91516666666667 - type: ndcg_at_10 value: 42.38725 - type: ndcg_at_100 value: 47.56291666666667 - type: ndcg_at_1000 value: 49.716499999999996 - type: ndcg_at_3 value: 37.36491666666667 - type: ndcg_at_5 value: 39.692166666666665 - type: precision_at_1 value: 31.91516666666667 - type: precision_at_10 value: 7.476749999999999 - type: precision_at_100 value: 1.1869166666666668 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 17.275249999999996 - type: precision_at_5 value: 12.25825 - type: recall_at_1 value: 26.812166666666666 - type: recall_at_10 value: 54.82933333333333 - type: recall_at_100 value: 77.36508333333333 - type: recall_at_1000 value: 92.13366666666667 - type: recall_at_3 value: 40.83508333333334 - type: recall_at_5 value: 46.85083333333334 - type: map_at_1 value: 25.352999999999998 - type: map_at_10 value: 33.025999999999996 - type: map_at_100 value: 33.882 - type: map_at_1000 value: 33.983999999999995 - type: map_at_3 value: 30.995 - type: map_at_5 value: 32.113 - type: mrr_at_1 value: 28.834 - type: mrr_at_10 value: 36.14 - type: mrr_at_100 value: 36.815 - type: mrr_at_1000 value: 36.893 - type: mrr_at_3 value: 34.305 - type: mrr_at_5 value: 35.263 - type: ndcg_at_1 value: 28.834 - type: ndcg_at_10 value: 37.26 - type: ndcg_at_100 value: 41.723 - type: ndcg_at_1000 value: 44.314 - type: ndcg_at_3 value: 33.584 - type: ndcg_at_5 value: 35.302 - type: precision_at_1 value: 28.834 - type: precision_at_10 value: 5.736 - type: precision_at_100 value: 0.876 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 14.468 - type: precision_at_5 value: 9.847 - type: recall_at_1 value: 25.352999999999998 - type: recall_at_10 value: 47.155 - type: recall_at_100 value: 68.024 - type: recall_at_1000 value: 87.26899999999999 - type: recall_at_3 value: 37.074 - type: recall_at_5 value: 41.352 - type: map_at_1 value: 17.845 - type: map_at_10 value: 25.556 - type: map_at_100 value: 26.787 - type: map_at_1000 value: 26.913999999999998 - type: map_at_3 value: 23.075000000000003 - type: map_at_5 value: 24.308 - type: mrr_at_1 value: 21.714 - type: mrr_at_10 value: 29.543999999999997 - type: mrr_at_100 value: 30.543 - type: mrr_at_1000 value: 30.618000000000002 - type: mrr_at_3 value: 27.174 - type: mrr_at_5 value: 28.409000000000002 - type: ndcg_at_1 value: 21.714 - type: ndcg_at_10 value: 30.562 - type: ndcg_at_100 value: 36.27 - type: ndcg_at_1000 value: 39.033 - type: ndcg_at_3 value: 26.006 - type: ndcg_at_5 value: 27.843 - type: precision_at_1 value: 21.714 - type: precision_at_10 value: 5.657 - type: precision_at_100 value: 1 - type: precision_at_1000 value: 0.14100000000000001 - type: precision_at_3 value: 12.4 - type: precision_at_5 value: 8.863999999999999 - type: recall_at_1 value: 17.845 - type: recall_at_10 value: 41.72 - type: recall_at_100 value: 67.06400000000001 - type: recall_at_1000 value: 86.515 - type: recall_at_3 value: 28.78 - type: recall_at_5 value: 33.629999999999995 - type: map_at_1 value: 26.695 - type: map_at_10 value: 36.205999999999996 - type: map_at_100 value: 37.346000000000004 - type: map_at_1000 value: 37.447 - type: map_at_3 value: 32.84 - type: map_at_5 value: 34.733000000000004 - type: mrr_at_1 value: 31.343 - type: mrr_at_10 value: 40.335 - type: mrr_at_100 value: 41.162 - type: mrr_at_1000 value: 41.221000000000004 - type: mrr_at_3 value: 37.329 - type: mrr_at_5 value: 39.068999999999996 - type: ndcg_at_1 value: 31.343 - type: ndcg_at_10 value: 41.996 - type: ndcg_at_100 value: 47.096 - type: ndcg_at_1000 value: 49.4 - type: ndcg_at_3 value: 35.902 - type: ndcg_at_5 value: 38.848 - type: precision_at_1 value: 31.343 - type: precision_at_10 value: 7.146 - type: precision_at_100 value: 1.098 - type: precision_at_1000 value: 0.14100000000000001 - type: precision_at_3 value: 16.014 - type: precision_at_5 value: 11.735 - type: recall_at_1 value: 26.695 - type: recall_at_10 value: 55.525000000000006 - type: recall_at_100 value: 77.376 - type: recall_at_1000 value: 93.476 - type: recall_at_3 value: 39.439 - type: recall_at_5 value: 46.501 - type: map_at_1 value: 24.196 - type: map_at_10 value: 33.516 - type: map_at_100 value: 35.202 - type: map_at_1000 value: 35.426 - type: map_at_3 value: 30.561 - type: map_at_5 value: 31.961000000000002 - type: mrr_at_1 value: 29.644 - type: mrr_at_10 value: 38.769 - type: mrr_at_100 value: 39.843 - type: mrr_at_1000 value: 39.888 - type: mrr_at_3 value: 36.132999999999996 - type: mrr_at_5 value: 37.467 - type: ndcg_at_1 value: 29.644 - type: ndcg_at_10 value: 39.584 - type: ndcg_at_100 value: 45.964 - type: ndcg_at_1000 value: 48.27 - type: ndcg_at_3 value: 34.577999999999996 - type: ndcg_at_5 value: 36.498000000000005 - type: precision_at_1 value: 29.644 - type: precision_at_10 value: 7.668 - type: precision_at_100 value: 1.545 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 16.271 - type: precision_at_5 value: 11.620999999999999 - type: recall_at_1 value: 24.196 - type: recall_at_10 value: 51.171 - type: recall_at_100 value: 79.212 - type: recall_at_1000 value: 92.976 - type: recall_at_3 value: 36.797999999999995 - type: recall_at_5 value: 42.006 - type: map_at_1 value: 21.023 - type: map_at_10 value: 29.677 - type: map_at_100 value: 30.618000000000002 - type: map_at_1000 value: 30.725 - type: map_at_3 value: 27.227 - type: map_at_5 value: 28.523 - type: mrr_at_1 value: 22.921 - type: mrr_at_10 value: 31.832 - type: mrr_at_100 value: 32.675 - type: mrr_at_1000 value: 32.751999999999995 - type: mrr_at_3 value: 29.513 - type: mrr_at_5 value: 30.89 - type: ndcg_at_1 value: 22.921 - type: ndcg_at_10 value: 34.699999999999996 - type: ndcg_at_100 value: 39.302 - type: ndcg_at_1000 value: 41.919000000000004 - type: ndcg_at_3 value: 29.965999999999998 - type: ndcg_at_5 value: 32.22 - type: precision_at_1 value: 22.921 - type: precision_at_10 value: 5.564 - type: precision_at_100 value: 0.8340000000000001 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 13.123999999999999 - type: precision_at_5 value: 9.316 - type: recall_at_1 value: 21.023 - type: recall_at_10 value: 48.015 - type: recall_at_100 value: 68.978 - type: recall_at_1000 value: 88.198 - type: recall_at_3 value: 35.397 - type: recall_at_5 value: 40.701 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 11.198 - type: map_at_10 value: 19.336000000000002 - type: map_at_100 value: 21.382 - type: map_at_1000 value: 21.581 - type: map_at_3 value: 15.992 - type: map_at_5 value: 17.613 - type: mrr_at_1 value: 25.080999999999996 - type: mrr_at_10 value: 36.032 - type: mrr_at_100 value: 37.1 - type: mrr_at_1000 value: 37.145 - type: mrr_at_3 value: 32.595 - type: mrr_at_5 value: 34.553 - type: ndcg_at_1 value: 25.080999999999996 - type: ndcg_at_10 value: 27.290999999999997 - type: ndcg_at_100 value: 35.31 - type: ndcg_at_1000 value: 38.885 - type: ndcg_at_3 value: 21.895999999999997 - type: ndcg_at_5 value: 23.669999999999998 - type: precision_at_1 value: 25.080999999999996 - type: precision_at_10 value: 8.645 - type: precision_at_100 value: 1.7209999999999999 - type: precision_at_1000 value: 0.23900000000000002 - type: precision_at_3 value: 16.287 - type: precision_at_5 value: 12.625 - type: recall_at_1 value: 11.198 - type: recall_at_10 value: 33.355000000000004 - type: recall_at_100 value: 60.912 - type: recall_at_1000 value: 80.89 - type: recall_at_3 value: 20.055 - type: recall_at_5 value: 25.14 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 9.228 - type: map_at_10 value: 20.018 - type: map_at_100 value: 28.388999999999996 - type: map_at_1000 value: 30.073 - type: map_at_3 value: 14.366999999999999 - type: map_at_5 value: 16.705000000000002 - type: mrr_at_1 value: 69 - type: mrr_at_10 value: 77.058 - type: mrr_at_100 value: 77.374 - type: mrr_at_1000 value: 77.384 - type: mrr_at_3 value: 75.708 - type: mrr_at_5 value: 76.608 - type: ndcg_at_1 value: 57.49999999999999 - type: ndcg_at_10 value: 41.792 - type: ndcg_at_100 value: 47.374 - type: ndcg_at_1000 value: 55.13 - type: ndcg_at_3 value: 46.353 - type: ndcg_at_5 value: 43.702000000000005 - type: precision_at_1 value: 69 - type: precision_at_10 value: 32.85 - type: precision_at_100 value: 10.708 - type: precision_at_1000 value: 2.024 - type: precision_at_3 value: 49.5 - type: precision_at_5 value: 42.05 - type: recall_at_1 value: 9.228 - type: recall_at_10 value: 25.635 - type: recall_at_100 value: 54.894 - type: recall_at_1000 value: 79.38 - type: recall_at_3 value: 15.68 - type: recall_at_5 value: 19.142 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 52.035 - type: f1 value: 46.85325505614071 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 70.132 - type: map_at_10 value: 79.527 - type: map_at_100 value: 79.81200000000001 - type: map_at_1000 value: 79.828 - type: map_at_3 value: 78.191 - type: map_at_5 value: 79.092 - type: mrr_at_1 value: 75.563 - type: mrr_at_10 value: 83.80199999999999 - type: mrr_at_100 value: 83.93 - type: mrr_at_1000 value: 83.933 - type: mrr_at_3 value: 82.818 - type: mrr_at_5 value: 83.505 - type: ndcg_at_1 value: 75.563 - type: ndcg_at_10 value: 83.692 - type: ndcg_at_100 value: 84.706 - type: ndcg_at_1000 value: 85.001 - type: ndcg_at_3 value: 81.51 - type: ndcg_at_5 value: 82.832 - type: precision_at_1 value: 75.563 - type: precision_at_10 value: 10.245 - type: precision_at_100 value: 1.0959999999999999 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 31.518 - type: precision_at_5 value: 19.772000000000002 - type: recall_at_1 value: 70.132 - type: recall_at_10 value: 92.204 - type: recall_at_100 value: 96.261 - type: recall_at_1000 value: 98.17399999999999 - type: recall_at_3 value: 86.288 - type: recall_at_5 value: 89.63799999999999 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 22.269 - type: map_at_10 value: 36.042 - type: map_at_100 value: 37.988 - type: map_at_1000 value: 38.162 - type: map_at_3 value: 31.691000000000003 - type: map_at_5 value: 33.988 - type: mrr_at_1 value: 44.907000000000004 - type: mrr_at_10 value: 53.348 - type: mrr_at_100 value: 54.033 - type: mrr_at_1000 value: 54.064 - type: mrr_at_3 value: 50.977 - type: mrr_at_5 value: 52.112 - type: ndcg_at_1 value: 44.907000000000004 - type: ndcg_at_10 value: 44.302 - type: ndcg_at_100 value: 51.054 - type: ndcg_at_1000 value: 53.822 - type: ndcg_at_3 value: 40.615 - type: ndcg_at_5 value: 41.455999999999996 - type: precision_at_1 value: 44.907000000000004 - type: precision_at_10 value: 12.176 - type: precision_at_100 value: 1.931 - type: precision_at_1000 value: 0.243 - type: precision_at_3 value: 27.16 - type: precision_at_5 value: 19.567999999999998 - type: recall_at_1 value: 22.269 - type: recall_at_10 value: 51.188 - type: recall_at_100 value: 75.924 - type: recall_at_1000 value: 92.525 - type: recall_at_3 value: 36.643 - type: recall_at_5 value: 42.27 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 40.412 - type: map_at_10 value: 66.376 - type: map_at_100 value: 67.217 - type: map_at_1000 value: 67.271 - type: map_at_3 value: 62.741 - type: map_at_5 value: 65.069 - type: mrr_at_1 value: 80.824 - type: mrr_at_10 value: 86.53 - type: mrr_at_100 value: 86.67399999999999 - type: mrr_at_1000 value: 86.678 - type: mrr_at_3 value: 85.676 - type: mrr_at_5 value: 86.256 - type: ndcg_at_1 value: 80.824 - type: ndcg_at_10 value: 74.332 - type: ndcg_at_100 value: 77.154 - type: ndcg_at_1000 value: 78.12400000000001 - type: ndcg_at_3 value: 69.353 - type: ndcg_at_5 value: 72.234 - type: precision_at_1 value: 80.824 - type: precision_at_10 value: 15.652 - type: precision_at_100 value: 1.7840000000000003 - type: precision_at_1000 value: 0.191 - type: precision_at_3 value: 44.911 - type: precision_at_5 value: 29.221000000000004 - type: recall_at_1 value: 40.412 - type: recall_at_10 value: 78.25800000000001 - type: recall_at_100 value: 89.196 - type: recall_at_1000 value: 95.544 - type: recall_at_3 value: 67.367 - type: recall_at_5 value: 73.05199999999999 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 92.78880000000001 - type: ap value: 89.39251741048801 - type: f1 value: 92.78019950076781 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 22.888 - type: map_at_10 value: 35.146 - type: map_at_100 value: 36.325 - type: map_at_1000 value: 36.372 - type: map_at_3 value: 31.3 - type: map_at_5 value: 33.533 - type: mrr_at_1 value: 23.480999999999998 - type: mrr_at_10 value: 35.777 - type: mrr_at_100 value: 36.887 - type: mrr_at_1000 value: 36.928 - type: mrr_at_3 value: 31.989 - type: mrr_at_5 value: 34.202 - type: ndcg_at_1 value: 23.496 - type: ndcg_at_10 value: 42.028999999999996 - type: ndcg_at_100 value: 47.629 - type: ndcg_at_1000 value: 48.785000000000004 - type: ndcg_at_3 value: 34.227000000000004 - type: ndcg_at_5 value: 38.207 - type: precision_at_1 value: 23.496 - type: precision_at_10 value: 6.596 - type: precision_at_100 value: 0.9400000000000001 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.513000000000002 - type: precision_at_5 value: 10.711 - type: recall_at_1 value: 22.888 - type: recall_at_10 value: 63.129999999999995 - type: recall_at_100 value: 88.90299999999999 - type: recall_at_1000 value: 97.69 - type: recall_at_3 value: 42.014 - type: recall_at_5 value: 51.554 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 94.59188326493388 - type: f1 value: 94.36568950290486 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 79.25672594619242 - type: f1 value: 59.52405059722216 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 77.4142568930733 - type: f1 value: 75.23044196543388 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 80.44720914593141 - type: f1 value: 80.41049641537015 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.960921474993775 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 30.88042240204361 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 32.27071371606404 - type: mrr value: 33.541450459533856 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.551 - type: map_at_10 value: 14.359 - type: map_at_100 value: 18.157 - type: map_at_1000 value: 19.659 - type: map_at_3 value: 10.613999999999999 - type: map_at_5 value: 12.296 - type: mrr_at_1 value: 47.368 - type: mrr_at_10 value: 56.689 - type: mrr_at_100 value: 57.24399999999999 - type: mrr_at_1000 value: 57.284 - type: mrr_at_3 value: 54.489 - type: mrr_at_5 value: 55.928999999999995 - type: ndcg_at_1 value: 45.511 - type: ndcg_at_10 value: 36.911 - type: ndcg_at_100 value: 34.241 - type: ndcg_at_1000 value: 43.064 - type: ndcg_at_3 value: 42.348 - type: ndcg_at_5 value: 39.884 - type: precision_at_1 value: 46.749 - type: precision_at_10 value: 27.028000000000002 - type: precision_at_100 value: 8.52 - type: precision_at_1000 value: 2.154 - type: precision_at_3 value: 39.525 - type: precision_at_5 value: 34.18 - type: recall_at_1 value: 6.551 - type: recall_at_10 value: 18.602 - type: recall_at_100 value: 34.882999999999996 - type: recall_at_1000 value: 66.049 - type: recall_at_3 value: 11.872 - type: recall_at_5 value: 14.74 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 27.828999999999997 - type: map_at_10 value: 43.606 - type: map_at_100 value: 44.656 - type: map_at_1000 value: 44.690000000000005 - type: map_at_3 value: 39.015 - type: map_at_5 value: 41.625 - type: mrr_at_1 value: 31.518 - type: mrr_at_10 value: 46.047 - type: mrr_at_100 value: 46.846 - type: mrr_at_1000 value: 46.867999999999995 - type: mrr_at_3 value: 42.154 - type: mrr_at_5 value: 44.468999999999994 - type: ndcg_at_1 value: 31.518 - type: ndcg_at_10 value: 51.768 - type: ndcg_at_100 value: 56.184999999999995 - type: ndcg_at_1000 value: 56.92 - type: ndcg_at_3 value: 43.059999999999995 - type: ndcg_at_5 value: 47.481 - type: precision_at_1 value: 31.518 - type: precision_at_10 value: 8.824 - type: precision_at_100 value: 1.131 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 19.969 - type: precision_at_5 value: 14.502 - type: recall_at_1 value: 27.828999999999997 - type: recall_at_10 value: 74.244 - type: recall_at_100 value: 93.325 - type: recall_at_1000 value: 98.71799999999999 - type: recall_at_3 value: 51.601 - type: recall_at_5 value: 61.841 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 71.54 - type: map_at_10 value: 85.509 - type: map_at_100 value: 86.137 - type: map_at_1000 value: 86.151 - type: map_at_3 value: 82.624 - type: map_at_5 value: 84.425 - type: mrr_at_1 value: 82.45 - type: mrr_at_10 value: 88.344 - type: mrr_at_100 value: 88.437 - type: mrr_at_1000 value: 88.437 - type: mrr_at_3 value: 87.417 - type: mrr_at_5 value: 88.066 - type: ndcg_at_1 value: 82.45 - type: ndcg_at_10 value: 89.092 - type: ndcg_at_100 value: 90.252 - type: ndcg_at_1000 value: 90.321 - type: ndcg_at_3 value: 86.404 - type: ndcg_at_5 value: 87.883 - type: precision_at_1 value: 82.45 - type: precision_at_10 value: 13.496 - type: precision_at_100 value: 1.536 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.833 - type: precision_at_5 value: 24.79 - type: recall_at_1 value: 71.54 - type: recall_at_10 value: 95.846 - type: recall_at_100 value: 99.715 - type: recall_at_1000 value: 99.979 - type: recall_at_3 value: 88.01299999999999 - type: recall_at_5 value: 92.32000000000001 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 57.60557586253866 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 64.0287172242051 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 3.9849999999999994 - type: map_at_10 value: 11.397 - type: map_at_100 value: 13.985 - type: map_at_1000 value: 14.391000000000002 - type: map_at_3 value: 7.66 - type: map_at_5 value: 9.46 - type: mrr_at_1 value: 19.8 - type: mrr_at_10 value: 31.958 - type: mrr_at_100 value: 33.373999999999995 - type: mrr_at_1000 value: 33.411 - type: mrr_at_3 value: 28.316999999999997 - type: mrr_at_5 value: 30.297 - type: ndcg_at_1 value: 19.8 - type: ndcg_at_10 value: 19.580000000000002 - type: ndcg_at_100 value: 29.555999999999997 - type: ndcg_at_1000 value: 35.882 - type: ndcg_at_3 value: 17.544 - type: ndcg_at_5 value: 15.815999999999999 - type: precision_at_1 value: 19.8 - type: precision_at_10 value: 10.61 - type: precision_at_100 value: 2.501 - type: precision_at_1000 value: 0.40099999999999997 - type: precision_at_3 value: 16.900000000000002 - type: precision_at_5 value: 14.44 - type: recall_at_1 value: 3.9849999999999994 - type: recall_at_10 value: 21.497 - type: recall_at_100 value: 50.727999999999994 - type: recall_at_1000 value: 81.27499999999999 - type: recall_at_3 value: 10.263 - type: recall_at_5 value: 14.643 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 85.0087509585503 - type: cos_sim_spearman value: 81.74697270664319 - type: euclidean_pearson value: 81.80424382731947 - type: euclidean_spearman value: 81.29794251968431 - type: manhattan_pearson value: 81.81524666226125 - type: manhattan_spearman value: 81.29475370198963 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.44442736429552 - type: cos_sim_spearman value: 78.51011398910948 - type: euclidean_pearson value: 83.36181801196723 - type: euclidean_spearman value: 79.47272621331535 - type: manhattan_pearson value: 83.3660113483837 - type: manhattan_spearman value: 79.47695922566032 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 85.82923943323635 - type: cos_sim_spearman value: 86.62037823380983 - type: euclidean_pearson value: 83.56369548403958 - type: euclidean_spearman value: 84.2176755481191 - type: manhattan_pearson value: 83.55460702084464 - type: manhattan_spearman value: 84.18617930921467 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 84.09071068110103 - type: cos_sim_spearman value: 83.05697553913335 - type: euclidean_pearson value: 81.1377457216497 - type: euclidean_spearman value: 81.74714169016676 - type: manhattan_pearson value: 81.0893424142723 - type: manhattan_spearman value: 81.7058918219677 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.61132157220429 - type: cos_sim_spearman value: 88.38581627185445 - type: euclidean_pearson value: 86.14904510913374 - type: euclidean_spearman value: 86.5452758925542 - type: manhattan_pearson value: 86.1484025377679 - type: manhattan_spearman value: 86.55483841566252 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 85.46195145161064 - type: cos_sim_spearman value: 86.82409112251158 - type: euclidean_pearson value: 84.75479672288957 - type: euclidean_spearman value: 85.41144307151548 - type: manhattan_pearson value: 84.70914329694165 - type: manhattan_spearman value: 85.38477943384089 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.06351289930238 - type: cos_sim_spearman value: 87.90311138579116 - type: euclidean_pearson value: 86.17651467063077 - type: euclidean_spearman value: 84.89447802019073 - type: manhattan_pearson value: 86.3267677479595 - type: manhattan_spearman value: 85.00472295103874 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 67.78311975978767 - type: cos_sim_spearman value: 66.76465685245887 - type: euclidean_pearson value: 67.21687806595443 - type: euclidean_spearman value: 65.05776733534435 - type: manhattan_pearson value: 67.14008143635883 - type: manhattan_spearman value: 65.25247076149701 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 86.7403488889418 - type: cos_sim_spearman value: 87.76870289783061 - type: euclidean_pearson value: 84.83171077794671 - type: euclidean_spearman value: 85.50579695091902 - type: manhattan_pearson value: 84.83074260180555 - type: manhattan_spearman value: 85.47589026938667 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.56234016237356 - type: mrr value: 96.26124238869338 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 59.660999999999994 - type: map_at_10 value: 69.105 - type: map_at_100 value: 69.78 - type: map_at_1000 value: 69.80199999999999 - type: map_at_3 value: 65.991 - type: map_at_5 value: 68.02 - type: mrr_at_1 value: 62.666999999999994 - type: mrr_at_10 value: 70.259 - type: mrr_at_100 value: 70.776 - type: mrr_at_1000 value: 70.796 - type: mrr_at_3 value: 67.889 - type: mrr_at_5 value: 69.52199999999999 - type: ndcg_at_1 value: 62.666999999999994 - type: ndcg_at_10 value: 73.425 - type: ndcg_at_100 value: 75.955 - type: ndcg_at_1000 value: 76.459 - type: ndcg_at_3 value: 68.345 - type: ndcg_at_5 value: 71.319 - type: precision_at_1 value: 62.666999999999994 - type: precision_at_10 value: 9.667 - type: precision_at_100 value: 1.09 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.333000000000002 - type: precision_at_5 value: 17.732999999999997 - type: recall_at_1 value: 59.660999999999994 - type: recall_at_10 value: 85.422 - type: recall_at_100 value: 96.167 - type: recall_at_1000 value: 100 - type: recall_at_3 value: 72.044 - type: recall_at_5 value: 79.428 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.86435643564356 - type: cos_sim_ap value: 96.83057412333741 - type: cos_sim_f1 value: 93.04215337734891 - type: cos_sim_precision value: 94.53044375644994 - type: cos_sim_recall value: 91.60000000000001 - type: dot_accuracy value: 99.7910891089109 - type: dot_ap value: 94.10681982106397 - type: dot_f1 value: 89.34881373043918 - type: dot_precision value: 90.21406727828746 - type: dot_recall value: 88.5 - type: euclidean_accuracy value: 99.85544554455446 - type: euclidean_ap value: 96.78545104478602 - type: euclidean_f1 value: 92.65143992055613 - type: euclidean_precision value: 92.01183431952663 - type: euclidean_recall value: 93.30000000000001 - type: manhattan_accuracy value: 99.85841584158416 - type: manhattan_ap value: 96.80748903307823 - type: manhattan_f1 value: 92.78247884519662 - type: manhattan_precision value: 92.36868186323092 - type: manhattan_recall value: 93.2 - type: max_accuracy value: 99.86435643564356 - type: max_ap value: 96.83057412333741 - type: max_f1 value: 93.04215337734891 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 65.53971025855282 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 33.97791591490788 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 55.852215301355066 - type: mrr value: 56.85527809608691 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.21442519856758 - type: cos_sim_spearman value: 30.822536216936825 - type: dot_pearson value: 28.661325528121807 - type: dot_spearman value: 28.1435226478879 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.183 - type: map_at_10 value: 1.526 - type: map_at_100 value: 7.915 - type: map_at_1000 value: 19.009 - type: map_at_3 value: 0.541 - type: map_at_5 value: 0.8659999999999999 - type: mrr_at_1 value: 68 - type: mrr_at_10 value: 81.186 - type: mrr_at_100 value: 81.186 - type: mrr_at_1000 value: 81.186 - type: mrr_at_3 value: 80 - type: mrr_at_5 value: 80.9 - type: ndcg_at_1 value: 64 - type: ndcg_at_10 value: 64.13799999999999 - type: ndcg_at_100 value: 47.632000000000005 - type: ndcg_at_1000 value: 43.037 - type: ndcg_at_3 value: 67.542 - type: ndcg_at_5 value: 67.496 - type: precision_at_1 value: 68 - type: precision_at_10 value: 67.80000000000001 - type: precision_at_100 value: 48.980000000000004 - type: precision_at_1000 value: 19.036 - type: precision_at_3 value: 72 - type: precision_at_5 value: 71.2 - type: recall_at_1 value: 0.183 - type: recall_at_10 value: 1.799 - type: recall_at_100 value: 11.652999999999999 - type: recall_at_1000 value: 40.086 - type: recall_at_3 value: 0.5930000000000001 - type: recall_at_5 value: 0.983 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.29 - type: map_at_10 value: 9.489 - type: map_at_100 value: 15.051 - type: map_at_1000 value: 16.561999999999998 - type: map_at_3 value: 5.137 - type: map_at_5 value: 6.7989999999999995 - type: mrr_at_1 value: 28.571 - type: mrr_at_10 value: 45.699 - type: mrr_at_100 value: 46.461000000000006 - type: mrr_at_1000 value: 46.461000000000006 - type: mrr_at_3 value: 41.837 - type: mrr_at_5 value: 43.163000000000004 - type: ndcg_at_1 value: 23.469 - type: ndcg_at_10 value: 23.544999999999998 - type: ndcg_at_100 value: 34.572 - type: ndcg_at_1000 value: 46.035 - type: ndcg_at_3 value: 27.200000000000003 - type: ndcg_at_5 value: 25.266 - type: precision_at_1 value: 28.571 - type: precision_at_10 value: 22.041 - type: precision_at_100 value: 7.3469999999999995 - type: precision_at_1000 value: 1.484 - type: precision_at_3 value: 29.932 - type: precision_at_5 value: 26.531 - type: recall_at_1 value: 2.29 - type: recall_at_10 value: 15.895999999999999 - type: recall_at_100 value: 45.518 - type: recall_at_1000 value: 80.731 - type: recall_at_3 value: 6.433 - type: recall_at_5 value: 9.484 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.4178 - type: ap value: 14.575240629602373 - type: f1 value: 55.02449563229096 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 60.00282965478212 - type: f1 value: 60.34413028768773 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 50.409448342549936 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.62591643321214 - type: cos_sim_ap value: 79.28766491329633 - type: cos_sim_f1 value: 71.98772064466617 - type: cos_sim_precision value: 69.8609731876862 - type: cos_sim_recall value: 74.24802110817942 - type: dot_accuracy value: 84.75293556654945 - type: dot_ap value: 69.72705761174353 - type: dot_f1 value: 65.08692852543464 - type: dot_precision value: 63.57232704402516 - type: dot_recall value: 66.6754617414248 - type: euclidean_accuracy value: 87.44710019669786 - type: euclidean_ap value: 79.11021477292638 - type: euclidean_f1 value: 71.5052389470994 - type: euclidean_precision value: 69.32606541129832 - type: euclidean_recall value: 73.82585751978891 - type: manhattan_accuracy value: 87.42325803182929 - type: manhattan_ap value: 79.05094494327616 - type: manhattan_f1 value: 71.36333985649055 - type: manhattan_precision value: 70.58064516129032 - type: manhattan_recall value: 72.16358839050132 - type: max_accuracy value: 87.62591643321214 - type: max_ap value: 79.28766491329633 - type: max_f1 value: 71.98772064466617 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.85202002561415 - type: cos_sim_ap value: 85.9835303311168 - type: cos_sim_f1 value: 78.25741142443962 - type: cos_sim_precision value: 73.76635768811342 - type: cos_sim_recall value: 83.3307668617185 - type: dot_accuracy value: 88.20584468506229 - type: dot_ap value: 83.591632302697 - type: dot_f1 value: 76.81739705396173 - type: dot_precision value: 73.45275728837373 - type: dot_recall value: 80.50508161379734 - type: euclidean_accuracy value: 88.64633057787093 - type: euclidean_ap value: 85.25705123182283 - type: euclidean_f1 value: 77.18535726329199 - type: euclidean_precision value: 75.17699437997226 - type: euclidean_recall value: 79.30397289805975 - type: manhattan_accuracy value: 88.63274731245392 - type: manhattan_ap value: 85.2376825633018 - type: manhattan_f1 value: 77.15810785937788 - type: manhattan_precision value: 73.92255061014319 - type: manhattan_recall value: 80.68986757006468 - type: max_accuracy value: 88.85202002561415 - type: max_ap value: 85.9835303311168 - type: max_f1 value: 78.25741142443962 --- # Santyyy/ember-v1-Q8_0-GGUF This model was converted to GGUF format from [`llmrails/ember-v1`](https://huggingface.co/llmrails/ember-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/llmrails/ember-v1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Santyyy/ember-v1-Q8_0-GGUF --hf-file ember-v1-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Santyyy/ember-v1-Q8_0-GGUF --hf-file ember-v1-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Santyyy/ember-v1-Q8_0-GGUF --hf-file ember-v1-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Santyyy/ember-v1-Q8_0-GGUF --hf-file ember-v1-q8_0.gguf -c 2048 ```
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
minhtuan7akp/gte-vietnamese-finetune
minhtuan7akp
sentence-similarity
[ "sentence-transformers", "safetensors", "new", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:21892", "loss:MultipleNegativesRankingLoss", "custom_code", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:Alibaba-NLP/gte-multilingual-base", "base_model:finetune:Alibaba-NLP/gte-multilingual-base", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,740
1,740
17
0
--- base_model: Alibaba-NLP/gte-multilingual-base library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:21892 - loss:MultipleNegativesRankingLoss widget: - source_sentence: Sự khác biệt giữa các thời đại trong nghệ thuật trang trí rồng được thể hiện như thế nào qua các thời Hùng Vương, Lý, Trần, Hồ, Lê, Mạc, Nguyễn? sentences: - "Tài liệu tham khảo\r\n323. Nguyễn Quang Ngọc, “Mấy nhận xét về kết cấu kinh tế\ \ của \r\nmột số làng thương nghiệp ờ vùng đồng bằng Bắc Bộ thế kỳ \r\nXVIII-XIX”,\ \ Tạp chí Nghiên cứu Lịch sứ, số 5 (218), 1984.\r\n324. Nguyễn Quang Ngọc, Phan\ \ Đại Doãn, “Mấy ý kiến về hoạt \r\nđộng thương nghiệp ở nông thôn đồng bằng Bắc\ \ Bộ thế kỷ \r\nXVIII-XIX (hiện tượng và bản chất)”, Tạp chí Nghiên cứu\r\nLịch\ \ sử, số 5 (224), 1985.\r\n325. Nguyễn Quang Ngọc, “Thêm vài ý kiến về Tam Điệp”,\ \ Tạp \r\nchí Nghiên cứu Lịch sử, số 1 (244), 1989.\r\n326. Nguyễn Quang Ngọc,\ \ về một số làng buôn ở Đồng bàng Bắc \r\nBộ thế kỳ XVIII-XIX, Hội Sừ học Việt\ \ Nam, 1993.\r\n327. Nguyễn Quang Ngọc, Vũ Văn Quân, “Tư liệu về nguồn gốc \r\n\ chức năng và hoạt động cùa đội Hoàng Sa”, Tạp chí Khoa\r\nhọc xã hội, Đại học\ \ Quốc gia, t.XIV, số 3, 1998, ư. 10-20.\r\n328. Nguyễn Quang Ngọc, “Bảo vệ chủ\ \ quyền ưên Biển Đông: \r\nmột hoạt động nổi bật của vương triều Tây Sơn”, Tạp\ \ chí \r\nLịch sử quân sự, số 1, 1999, tr. 15-18.\r\n329. Nguyễn Quang Ngọc (Chủ\ \ biên), Tiến trình lịch sứ Việt Nam,\r\nNxb. Giáo dục, Hà Nội, 2001.\r\n330.\ \ Nguyền Quân, Phan cẩm Thượng, Mỹ thuật cùa người Việt,\r\nNxb. Mỹ thuật. Hà\ \ Nội. 1989.\r\n331. Nguyễn Tài Thư (Chủ biên), Lịch sử tư tưởng Việt Nam, 2\r\ \ntập, Nxb. Khoa học xã hội, Hà Nội, 1993.\r\n332. Nguyễn Tài Thư, Nho học và\ \ Nho học ớ Việt Nam: Một số lý\r\nluận và thực tiễn, Nxb. Khoa học xã hội, Hà\ \ Nội, 1997.\r\n333. Nguyễn Tưòmg Phượng, Binh chế Việt Nam qua các thời đại,\r\ \nNgày Mai, 1950." - "Ba Thục, Kinh Sở, Ngô Việt…). Kết thúc cuộc \"Hán Sở tranh hùng\", nhà Hán\r\n\ đã thống nhất đất nước Trung Hoa từ bắc xuống nam (tiền bắc hậu nam) và phát\r\ \ntriển đất nước theo một trật tự ngược lại: tiền nam hậu bắc\".\r\nCó thể hình\ \ dung cơ cấu của văn hóa Trung Hoa như sau: \r\nVĂN HOÁ\r\nTRUNG\r\nHOA\r\n=\r\ \nVăn hoá lưu vực sông Hoàng Hà\r\n+\r\nVăn hoá nông\r\nnghiệp lúa nước\r\nĐông\ \ Nam Á\r\nVăn hoá du\r\nmục Tây Bắc +\r\nVăn hoá nông\r\nnghiệp khối Trung\r\n\ Nguyên\r\nMối liên hệ và sự tác động qua lại giữa văn hóa Việt Nam với Trung Hoa,\r\ \ngiữa văn hóa phương Bắc cổ đại với văn hóa phương Nam cổ đại (trong đó có\r\n\ văn hóa Nam – Á - Bách Việt) có thể trình bày trong bảng 1.5.\r\nVĂN HOÁ\r\nP.BẮC\ \ CỔ ĐẠI\r\nVĂN HOÁ PHƯƠNG NAM (= Đ.N.Á cổ đại)\r\nVăn hoá Nam-Á (Bách Việt)\r\ \nVăn hóa vùng lưu\r\nvực sông Hoàng\r\nHà\r\nVăn hóa vùng lưu\r\nvực sông Dương\r\ \nTử\r\nVăn hóa vùng lưu\r\nvực s. Hồng, s.\r\nMã\r\nVăn hóa miền\r\nTrung và\ \ đồng\r\nbằng s. Mê Kông\r\nVĂN HOÁ TRUNG HOA VĂN HOÁ VIỆT NAM\r\nBảng 1.5: Quan\ \ hệ cội nguồn giữa văn hóa Việt Nam và Trung Hoa\r\nBài 3: TIẾN TRÌNH VĂN HÓA\ \ VIỆT NAM\r\nTiến trình văn hóa Việt Nam có thể chia thành 6 giai đoạn: văn hóa\ \ tiền\r\nsử, văn hóa Văn Lang - Âu Lạc, văn hóa thời chống Bắc thuộc, văn hóa\ \ Đại\r\nViệt, văn hóa Đại Nam và văn hóa hiện đại. Sáu giai đoạn này tạo thành\ \ ba lớp:\r\nlớp văn hóa bản địa, lớp văn hóa giao lưu với Trung Hoa và khu vực,\ \ lớp văn\r\nhóa giao lưu với phương Tây.\r\n3.1. Lớp văn hóa bản địa\r\n28\r\n\ Downloaded by Tu?n ?ào Minh ([email protected])\r\nlOMoARcPSD|49704028" - "trái), và hình bán nguyệt (đôi dưới, phải). Trước mắt ta là sự hòa hợp tuyệt\ \ vời\r\ncủa cái động (vật nhau) trong thế tĩnh của ba hình hình học với những\ \ cạnh đáy\r\nvững vàng cho thấy sự ngang sức ngang tài của các chàng trai; sự\ \ vận động liên\r\ntục của cơ bắp như dừng lại. Hai người chờ vật được khuôn lại\ \ trong hai hình\r\nchữ nhật đứng tạo nên cảm giác co ro bất tận trong cái rét\ \ của lễ hội đầu xuân.\r\n4.1.3. Thủ pháp mô hình hóa đã tạo nên một nền nghệ\ \ thuật trang trí và\r\nnhiều mô hình mang tính triết lí sâu sắc.\r\nBộ Tứ Linh\ \ (Hình 4.20a) với long (rồng) biểu trưng cho uy là nam tính; li\r\n(= long mã)\ \ hoặc lân (kì lân, con vật tưởng tượng đầu sư tử, mình nai, đuôi trâu,\r\n131\r\ \nDownloaded by Tu?n ?ào Minh ([email protected])\r\nlOMoARcPSD|49704028\r\ \năn cỏ, rất hiền lành - hình 4.20b) biểu trưng cho ước vọng thái bình, quy (rùa)\r\ \nhiểu tượng cho sự sống lâu và phượng (phụng) biểu tượng cho nữ tính. Rồng -\r\ \nPhượng biểu tượng cho hạnh phúc lứa đôi (ở Trung Hoa hiên tượng này là\r\n“loan-phượng”:\ \ loan là con đực, phượng là con cái). Đồ án trang trí RỒNG phổ\r\nbiến đến mức\ \ phản ánh những đặc trưng cửa từng thời đại. Rồng thời Hùng\r\nvương, thời Lí,\ \ Trần, Hồ, Lê, Mạc, Nguyễn – mỗi thời có những nét đặc thù\r\nriêng tương ứng\ \ với thời đại của mình.\r\nTứ linh cộng thêm ngư-phúc-hạc-hổ thì thành BÁT VẬT.\ \ Ngư (Cá) gắn\r\nvới truyền thuyết \"cá hóa rồng\" biểu tượng cho sự thành đạt.\ \ Chữ phúc là “sự tốt\r\nlành, may mắn” đồng âm và viết gần giống với chữ bức\ \ nghĩa là \"con dơi\", vì" - source_sentence: Nhiệm vụ quan trọng nhất của các nước công nghiệp chủ nghĩa châu Âu và Nhật Bản sau chiến tranh thế giới thứ hai là gì? sentences: - "Dupuis phái tự mình hành động. Tháng 10-1872, Dupuis đi Hương \r\nCảng và Thượng\ \ Hải mua pháo thuyền và đạn dược, mộ quân lính,\r\n1. Đó là các cuộc thám hiểm\ \ cùa phái đoàn Doudard de Lagrée và Francis \r\nGamier vào những năm từ 1866\ \ đến 1870.\r\n2. Nguyễn Phan Quang (1949), Việt Nam thế ky XIX (1802-1884), Nxb.\ \ \r\nThành phố Hồ Chí Minh, tr. 321.\r\n159\r\nLỊCH SỪ VIỆT NAM - TẬP 6\r\nrồi\ \ đến tháng 11 năm đó thì kéo nhau về Bắc Kỳ. Cùng lúc đó, bọn \r\nthực dân hiếu\ \ chiến ở Nam Kỳ cũng lợi dụng việc triều đình Huế \r\nyêu cầu đưa ra Bắc tiễu\ \ trừ giặc biển để phái tàu chiến ra tiếp tay \r\ncho Dupuis. Cậy có lực lượng\ \ mạnh, Dupuis buộc Kinh lược sứ Lê \r\nTuấn trong vòng hai tuần phải xin triều\ \ đình Huế cho phép hắn \r\nđược mượn đường đi lên Vân Nam. Nhung hạn 2 tuần chưa\ \ hết và \r\ngiấy phép cũng chưa có mà Dupuis đã nổ súng, rồi tự tiện kéo đoàn\ \ \r\ntàu vào Cửa cấm (Hải Phòng) ngược sông Hồng lên Hà Nội (ngày \r\n22-12-1872).\ \ Theo sử nhà Nguyễn thì ngày 2-12-1872, Dupuis “từ\r\nHài Dương đi đen Bắc Ninh,\ \ Hà Nội, các quan tình và quân thứ 2-\r\n3 lần biện bác ngăn trở không cho đi,\ \ nhưng chúng không nghe\r\nTrong khoảng thời gian từ năm 1872 đến năm 1873, Dupuis\ \ đã ỷ \r\nthế quân Pháp và triều đình nhà Thanh, trắng trợn xâm phạm chủ \r\n\ quyền Việt Nam, liên tiếp gây ra nhiều vụ khiêu khích, cướp phá \r\nđối với nhân\ \ dân dọc hai bờ sông, tấn công các đồn bốt của triều \r\nđình nhà Nguyễn.\r\n\ Trước hành động ngang ngược cùa Dupuis, quân dân Hà Nội \r\nmặc dù chưa có lệnh\ \ triều đình nhung vẫn tích cực đề phòng. Lệnh" - "hội loài người nói chung hay cùa một quốc gia, một dân tộc nói \r\nriêng. Nghiên\ \ cứu lịch sử là nhằm tìm hiểu những sự kiện xảy ra \r\ntrong quá khứ để từ đó\ \ rút ra các bài học kinh nghiệm cho hiện tại \r\nvà tương lai. Nghiên cứu và\ \ biên soạn lịch sừ, vì vậy, trở thành một \r\nyêu cầu bức thiết của mọi quốc\ \ gia, dân tộc. Phạm Công Trứ, nhà \r\nchính trị danh tiếng, nhà sử học sống ở\ \ thế kỳ XVII, trong bài Tựa\r\nsách Đại Việt sử ký bản kỷ tục biên viết: \"Vì\ \ sao mà làm quốc sử?\r\nVĩ sử chù yếu là để ghi chép sự việc. Có chinh trị cùa\ \ một đời tất\r\nphải có sử của một đời. Mà ngòi bút chép sử giữ nghị luận rất\r\ \nnghiêm, ca ngợi đời thịnh trị thì sáng tỏ ngang với mặt trời, mặt\r\ntrăng,\ \ lên án kẻ loạn tặc thì gay gắt nhu sương thu lạnh buốt,\r\nngười thiện biết\ \ có thể bắt chước, người ác biết có thể tự răn, quan\r\nhệ đến việc chính trị\ \ không phải là không nhiều. Cho nên làm sử là\r\ncốt để cho được như thế\"'.\r\ \nViệt Nam là một dân tộc có lịch sử lâu đời. Việt Nam cũng là \r\nmột dân tộc\ \ yêu sử và có rất nhiều người ham thích tìm tòi, nghiên \r\ncứu và biên soạn\ \ lịch sử. Đã có nhiều công trình lịch sử được công \r\nbố, không chi do các cơ\ \ quan, tổ chức chuyên nghiên cứu biên \r\nsoạn, mà còn do cá nhân người yêu sử\ \ thực hiện... Điều này vừa có \r\nmặt tích cực, lại cỏ mặt tiêu cực. Tích cực\ \ vì sẽ góp phần giúp nhân \r\ndân hiểu thêm về lịch sử nước nhà, nhưng cũng chứa\ \ đựng yếu tố \r\ntiêu cực là dễ dẫn tới những hiểu biết phiến diện, sai lầm về\ \ lịch \r\nsử... đôi khi đồng nhất truyền thuyết với lịch sử?" - "LỊCH SỪ VIỆT NAM - TẬP 11\r\ngiầu mạnh hcm nhờ chiến tranh. Những nước bại trận\ \ như Đức, Ý, \r\nNhật thì kiệt quệ. Song dù thắng hay bại, sự kết thúc chiến\ \ tranh đặt \r\ncho mỗi nước những yêu cầu cấp bách cần giải quyết, tạo nên \r\ \nnhững đặc trưng kinh tế - xã hội ở nhóm nước này.\r\nSau chiến tranh thế giới,\ \ những nưóc công nghiệp chủ nghĩa \r\nchâu Âu và Nhật Bản đều bị chiến tranh\ \ tàn phá nặng nề. Nhiệm vụ \r\nquan trọng của họ ỉà hàn gắn vết thương chiến\ \ tranh, khôi phục \r\nkinh tế, ổn định đời sống xã hội. Đối với Mỹ, nhiệm vụ\ \ chủ yếu là \r\nphải chuyển hướng vận hành kinh tế từ một nền kinh tế phục vụ\ \ \r\nquân sự thời chiến sang nền kinh tế thời bình.\r\nNhừng nét cơ bản của tình\ \ hình thế giới nêu trên đã tác động \r\nđến hầu hết các khu vực trên thế giới,\ \ đặc biệt là khu vực Châu Á \r\nvà Đông Nam Á, tạo điều kiện thuận lợi cho cuộc\ \ đấu tranh giải \r\nphóng của các dân tộc Đông Dương. Từ đầu những năm 1950,\ \ tình \r\nhình cách mạng ba nước Đông Dương chuyển biến nhanh chóng. \r\nVới\ \ cuộc đi thăm Trung Quốc, Liên Xô của Chủ tịch Hồ Chí Minh \r\nđầu năm 1950 và\ \ việc các nước xã hội chủ nghĩa công nhận và đặt \r\nquan hệ ngoại giao với Chính\ \ phủ Việt Nam Dân chủ Cộng hòa là \r\nmột thắng lợi ngoại giao vô cùng quan trọng.\ \ Thắng lợi về ngoại \r\ngiao này đã chấm dứt thời kỳ chiến đấu đom độc, hầu như\ \ bị cách ly \r\nvới bên ngoài và từ đó tiếp nhận được sự đồng tình về chính trị\ \ và \r\nsự viện trợ về vật chất.\r\nVới sự giúp đỡ của Liên Xô, Trung Quốc và\ \ các nước xã hội" - source_sentence: Chức năng của quan Đốc học trong việc quản lý giáo dục ở các tỉnh là gì? sentences: - "Định, Phú Yên, Biên Hoà, Gia Định, Vĩnh Long, Định Tường, An \r\nGiang đều đặt\ \ mỗi tỉnh một quan Đốc học coi việc học chính trong \r\ntinh. Các tỉnh từ Quảng\ \ Trị, Quảng Bình, Hà Tĩnh, Nghệ An, \r\nThanh Hoá, Ninh Bình, Nam Định, Hà Nội,\ \ Hưng Yên, Hải Dương, \r\nSơn Tây, Bắc Ninh cũng đều đật chức Đốc học. Tinh nào\ \ khuyết \r\nchức Đốc học thì đặt Thự đốc học tạm quyền đốc học một thời gian\ \ \r\nđổ phụ trách, đôn đốc việc học trong tỉnh.\r\nCác tỉnh Khánh Hoà, Bình Thuận,\ \ Hà Tiên, Quảng Yên, Hưng \r\nHoá, Tuyên Quang, Thái Nguyên, Lạng Sơn, Cao Bằng,\ \ do số học \r\nsinh ít nên đến cuối thời Thiệu Trị (1847) vẫn chưa đặt chức Đốc\ \ học.\r\nTheo lệ Nhà nước chế cấp ấn quan phòng giao cho Đốc học lo \r\nviệc\ \ học chính trong địa hạt của tinh sờ tại, trong đó có việc xây \r\ndựng trường\ \ sở ở tinh, phù, hoặc huyện, châu; sắp xếp các thày \r\ngiáo và tuyển chọn học\ \ sinh vào học ở các trường. Những công \r\nviệc licn quun đén việc học đểu có\ \ sự phối hựp giữa quan Đốc hục \r\nvới các viên giữ chức Giáo thụ ở các phủ và\ \ Huấn đạo ờ các huyện, \r\nchâu. Một bộ máy giáo dục được tổ chức chặt chẽ theo\ \ ngành dọc \r\ntừ tinh đến phủ, huyện, châu; tổng (ở tổng có Tổng giáo) để theo\ \ \r\ndõi, đôn đốc việc giảng dạy và học tập, đã góp phần đẩy mạnh hom \r\nviệc\ \ giáo dục ở những triều vua Nguyễn nửa đầu thế kỳ XIX. Những \r\nthành tích của\ \ giáo dục bấy giờ biểu hiện rõ nhất ở việc Nhà nước \r\ncứ 3 năm lại mở một kỳ\ \ thi Hương ờ một số tinh thuộc Bác Kỳ (Nam \r\nĐịnh, Hài Dương, Thăng Long);\ \ Nghệ An; kinh đô Huế; Trung Kỳ" - "Trước tình hình thế giới và trong nước ngày càng khẩn trương, ngày 28 - I - 1941,\r\ \nlãnh tụ Nguyễn Ái Quốc về nước triệu tập Hội nghị lần thứ 8 Ban Chấp hành\r\n\ Trung ương Đảng Cộng sản Đông Dương. Hội nghị họp tại Pác Bó (Cao Bằng) từ\r\n\ ngày 10 đến ngày 19 - 5 - 1941.\r\nHội nghị chủ †rương trước hết phởi giỏi phóng\ \ cho được cóc dôn tộc\r\nĐông Dương ro khỏi éch Phớp - Nhột. Hội nghị quyết định\ \ tiếp tục tạm\r\ngóc khổu hiệu “Đónh đổ địa chủ, chia ruộng đốt cho dôn còy”\ \ thay bằng\r\ncóc khổu hiệu “Tịch thu ruộng đốt của đế quốc vò Việt gian chia\ \ cho dên\r\ncòy nghèo, giởm †ô, giỏm tức, chia lợi ruộng công”, tiến tới thực\ \ hiện\r\n“Người còy có ruộng”. Hội nghị chủ trương †hònh lộp Việt Nơm độc lập\r\ \nđồng minh (gọi tốt lò Việt Minh) bao gồm céc †ổ chức quồn chúng, lốy\r\ntên\ \ lò Hội Cứu quốc nhồm : “Liên hiệp hết thỏy cóc giới đồng bèo yêu\r\nnước, không\ \ phôn biệt giòu nghèo, giò trẻ, gới trai, không phôn biệt tôn\r\ngiáo vò xu hướng\ \ chính trị, đặng cùng nhau mưu cuộc dôn tộc giỏi phóng\r\nvò sinh tồn” °°,\r\n\ \r\nMặt trận Việt Minh chính thức thành lập ngày 19 - 5 - 1941. Chỉ sau một thời\r\ \ngian ngắn, tổ chức này đã có uy tín và ảnh hưởng sâu rộng trong nhân dân. Sau\ \ Hội\r\nnghị Trung ương, lãnh tụ Nguyễn Ái Quốc đã gửi thư kêu gọi đồng bào cả\ \ nước\r\nđoàn kết thống nhất đánh đuổi Pháp - Nhật." - "\"Chính sự ngày một đổ nát, đói kém xảy ra luôn luôn. Nhân dân cùng\r\nquân,\ \ khốn khổ, giặc cướp nổi lên ở nhiễu nơi\".\r\n(Khâm định Việt sử thông giám\ \ cương mục)\r\n\r\nỞ Nghệ An, Thanh Hoá, Ninh Bình,... dân nghèo nổi dậy đấu\ \ tranh. Trong\r\ntình hình đó, một số thế lực phong kiến ở các địa phương lại\ \ đánh giết lẫn\r\nnhau, quấy phá nhân dân và chống lại triều đình. Nhà Lý phải\ \ dựa vào thế lực\r\nhọ Trần để chống lại các lực lượng nổi loạn nên đã tạo điều\ \ kiện và thời cơ cho\r\nhọ Trần buộc Chiêu Hoàng (vua cuối cùng của nhà Lý) phải\ \ nhường ngôi cho\r\nTrần Cảnh vào tháng 12, năm Ất Dậu (đâu năm 1226).\r\n\r\n\ (1) Việc thổ mộc : việc làm nhà cửa, chùa, đền, đào sông, hồ..." - source_sentence: Thiệu Trị đã xử lý trường hợp của Lý Văn Phức và việc người Pháp bắt giữ thuyền quân đi tuần biển của Việt Nam ra sao? sentences: - "hóa; thuế độc quyền; thué điền thổ...\r\nTheo những con số thống kê chính thức\ \ thì các loại thuế trên \r\nđều tăng lên đáng kể, khoảng từ ba đến hơn ba lần\ \ vào năm 1945 \r\n(số dự thu) so với năm 1939 (số thực thu) như sau:\r\nBảng\ \ 29: Thu nhập từ một sổ loại thuế ở Đông Dương \r\ntrong các năm 1939 và 19453\r\ \nĐom vị: nghìn đồng\r\nThuế 1939 1945\r\nThuế tiêu thụ và vận chuyển hàng hoá\ \ 20.655.000 58.265.000\r\nThuế muối, rượu, thuốc phiện, diêm, pháo,\r\nthuốc\ \ lá\r\n24.694.000 87.000.000\r\nThuế điền thổ, trước bạ 11.821.000 28.625.000\r\ \nvề thuốc phiện, do việc nhập khẩu bị ngừng, Pháp khuyến khích \r\nnhân dân thượng\ \ du trồng loại cây này nên số thuốc phiện sản xuất \r\nđược ngày một tăng: năm\ \ 1940: 7.560kg; nãm 1941: 17.344kg; năm\r\n1. Annuaire statistique de V Union\ \ f,rariỊaise Outre- mer 1939-1946, tr. K -\r\n90-93.\r\n2, 3. Annuaire statistique\ \ de runion firanẹaise Outre - mer 1939-1946, tr.\r\nK-90.\r\n552" - "Chương I. Chính sách thuộc địa của Pháp..\r\nbộ đồng bào các dân tộc thiểu số.\ \ về phương diện này, chính quyền \r\nthuộc địa còn muốn đi xa hơn là cố định\ \ đồng bào vào một không \r\ngian nhất định, rồi đưa họ đến với chế độ sở hữu\ \ ruộng đất - chế độ \r\nsở hữu tập thể và ấn định cho họ một chế độ thuế khóa.\r\ \nNhư vậy, “chính sách thâm nhập” có xuất phát điểm là chính \r\nsách “chia đế\ \ trf' và mục tiêu là tách các dân tộc thiểu số ra khỏi \r\ndân tộc Kinh, dùng\ \ dân tộc nọ chống lại dân tộc kia và nhằm một \r\nmục đích cao hơn là từ chinh\ \ phục, khuất phục về chính trị để tiến \r\nsang khai thác, bóc lột về đất đai,\ \ nhân công và thuế khóa của các \r\nđồng bào.\r\n7. Một số “cải cách” xã hội\ \ khác liên quan đến nông dân và\r\ncông nhân\r\nLiên quan đến nông dân, trong\ \ bài diễn văn về Tinh hình Đông\r\nDương và tuyên bo cải cách vào tháng 9/19301,\ \ Pierre Pasquier nêu \r\nra những vấn đề như: thi hành luật điền thổ, giúp nông\ \ dân Nam Kỳ \r\nthế chấp ruộng đất để vay tín dụng ngân hàng; dẫn thủy nhập điền,\ \ \r\nlàm thuỷ lợi để tăng diện tích canh tác, cải tiến kỹ thuật trồng trọt; \r\ \ngiúp nông dân thăng tién về sờ hữu ruộng đất (từ người không có \r\nđất lên\ \ tiểu điền chủ); mở rộng việc nhượng đất, khẩn hoang ở \r\nnhững vùng rừng núi\ \ ở Bắc và Trung Kỳ cũng như ở phía tây và \r\nnam Nam Kỳ; quy định lại chế độ\ \ lĩnh canh để \"hạn ché bớt sự bóc\r\nlột cùa địa chù đoi với tá điền”.\r\nTriển\ \ khai những “cải cách” này, Pierre Pasquier cho tiếp tục \r\nxây dựng các công\ \ trình thuỷ nông, rồi thành lập Hội đồng Khẩn" - "theo vài mươi người, đeo gươm, đeo súng, đến thẳng ngay công \r\nquán, đưa ra\ \ một lá thư của nước Pháp bằng chữ Hán, lời lẽ ngang \r\nngược. Lý Văn Phức không\ \ nhận thư, Lạp Biệt Nhĩ quát to doạ nạt, \r\nđể lại thư xuống ghế rồi đi. Lý\ \ Văn Phức và Nguyễn Đình Tân bàn \r\nvới nhau rằng: \"Nhận lấy thư là có tội,\ \ mà đốt thư đi cũng có tội, \r\nkhông gì bằng cho chạy trạm về đệ tâu lên\".\ \ Lý Văn Phức về Kinh,\r\n1. Thực lục, tập VI, sđd, tr. 301.\r\n492\r\nChương\ \ VII. Quan hệ đối ngoại\r\nThiệu Trị giận là làm mất quốc thể, sai vệ cẩm y đóng\ \ gông đem \r\ngiam ở Tà đãi lậu, bắt giải chức, giao cho đình thần bàn.\r\nKhi\ \ ấy, bọn Pháp ngày thường lên bờ, ngông nghênh đi lại các \r\nnơi giao tiếp với\ \ dân đi đạo. Những thuyền quân đi tuần biển bị \r\nchúng bắt giữ lại ở cừa biển\ \ và cướp lấy buồm thuyền và dây buộc \r\nthuyền cùa 5 chiếc thuyền bọc đồng ở\ \ Kinh phái đi Nam (Kim \r\nƯng, Phấn Bằng, Linh Phượng, Thọ Hạc, Vân Bằng) đậu\ \ ở vụng \r\nTrà Sơn, đối diện vói chiến thuyền Pháp.\r\nViệc báo lên, Thiệu Trị\ \ sai ngay Đô thống Hữu quân Mai Công \r\nNgôn, Tham tri Bộ Hộ Đào Trí Phú đem\ \ biền binh 3 vệ Vũ lâm, Hổ \r\noai, Hùng nhuệ đến Quảng Nam cùng với lực lượng\ \ thủy, bộ tại \r\nchỗ tổ chức bố phòng. Thiệu Trị truyền chi căn dặn Mai Công\ \ \r\nNgôn và Đào Trí Phú rằng: \"Người Tây dương nếu đã sợ uy, thu \r\nhình,\ \ thì ta không nên tự động thủ trước; nếu chúng sinh chuyện \r\ntrước, thì đốc\ \ sức thành đài cùng biền binh các hiệu thuyền và \r\nthuyền đồng do Kinh phái\ \ đi, ngoài hợp, trong ứng, lập tức đánh" - source_sentence: Gia Cát Lượng đã giúp ai trong việc quản lý nước Thục? sentences: - "phải trông coi mọi việc, giúp Thành Vương đến lúc trưởng thành. \r\n4\r\n Hoắc\ \ Quang giữ chức Đại tư mã tướng quân, phò Hán Chiêu Đế lúc lên ngôi mới 9 tuổi.\ \ \r\n5\r\n Gia Cát Lượng tức Khổng Minh, là thừa tướng của Chiêu Đế Lưu Bị nước\ \ Thục đời Tam Quốc. Lưu Bị chết, con là Lưu Thiện nối \r\nngôi, tức Thục Hậu\ \ chúa, mọi việc nước, việc quân đều phải trông cậy vào Gia Cát Lượng. \r\n6\r\ \n Tô Hiến Thành là Thái úy triều Lý Cao Tông, nhận di mệnh Cao Tông phò vua nhỏ\ \ là Long Cán lên nối ngôi mới 3 tuổi. \r\n7\r\n Tứ phụ: nghĩa là bốn viên đại\ \ thần giúp vua khi mới lên ngôi. \r\n8\r\n Chỉ Thuận Tông. \r\n9\r\n Xích chủy:\ \ nghĩa là mõm đỏ, miệng đỏ, hay đỏ mỏ. Xích chủy hầu là loài đỏ mỏ ám chỉ Lê\ \ Quý Ly. \r\n10 Bạch kê: nghĩa là gà trắng. Nghệ Tông sinh năm Tân Dậu, tức năm\ \ gà. Tân thuộc hành kim, loài kim sắc trắng. Vì thế \"bạch kê\" \r\nám chỉ Nghệ\ \ Tông. \r\n11 Chữ vương? ở trong lòng chữ khẩu? là chữ \"quốc\"?. \r\n12 Theo\ \ tục nhà Trần, hằng năm vào ngày mồng 4 tháng 4, vua hội họp bề tôi làm lễ tuyên\ \ thệ ở đền Đồng Cổ. (Xem bản kỷ, quyển \r\n5, Kiến Trung năm thứ 3, 1277). \r\ \n13 Chỉ Quý Ly. \r\n288 Đại Việt Sử Ký Toàn Thư - Bản Kỷ - Quyển VIII \r\nQuý\ \ Ly bỏ mũ, rập đầu khóc lóc từ tạ, chỉ trời vạch đất thề rằng: \r\n\"Nếu thần\ \ không biết dốc lòng trung, hết sức giúp Quan gia để truyền đến con cháu về sau\ \ thì \r\ntrời sẽ ghét bỏ thần\". \r\nQuý Ly lại nói: \"Lúc Linh Đức Vương làm\ \ điều thất đức, nếu không nhờ oai linh bệ hạ thì thần đã" - "éo, xênh xang lạ hom cả\", và gánh xiếc của BẮc thành trổ tài dịp Đại \r\nkhánh\ \ \"Ngũ tuần\" của vua: \"4 đứa leo dây, đứa trẻ lộn dây, đứa trẻ \r\nmúa trên\ \ bàn tay 2 đứa\".\r\nNhững định chế về tổ chức và hoạt động nghệ thuật của nhà\ \ \r\nNguyễn đã có tác dụng quan ữọng kích thích các loại hình vãn nghệ \r\ndân\ \ gian phát triển cả về số lượng lẫn chất lượng. Trong các đợt biểu \r\ndiễn ở\ \ Kinh đô, trước yêu cầu thưởng lãm nghiêm ngặt và cao hơn \r\nđịa phương, các\ \ nhà viết kịch bản. đạo diễn, diễn viên phải trau dồi để \r\nnâng cao năng lực\ \ sáng tác, dàn dựng và kỹ năng biểu diễn.\r\n2. Nghệ thuật dân gian\r\nSinh hoạt\ \ văn nghệ dân gian trong các làng quê cũng phát triển. \r\nỞ Bắc Kỳ, Bắc Trung\ \ Kỳ, hát ả đào rất phổ biến. Bên cạnh đó là \r\ncác thể loại dân ca: hát Xoan\ \ ở Phú Thọ, Quan họ Bắc Ninh, hát \r\nSli, Then ở Lạng Sơn, hát Ví dặm, Phường\ \ vải ở Nghệ An, Hà \r\nTĩnh. Ở các tinh trung du và đồng bằng Bắc Bộ, Thanh Hóa,\ \ chèo \r\nsân đình mang tính trào lộng nở rộ. Thể loại trò hài, xiếc ở Bắc Kỳ\ \ \r\ncũng thu hút đông đảo khán giả.\r\n639" - "Tây. Ngoài cơ sờ đúc súng cũ của tiên triều, năm 1825 vua Minh \r\nMệnh mờ thêm\ \ sáu xưởng nữa. vốn cần cù và ham học hỏi sáng \r\ntạo, những người thợ quân\ \ giới đã được \"thứ súng tay nạp thuốc nổ \r\nmạnh theo kiểu Tây dương\". Vào\ \ những năm cuối triều Minh \r\nM ệnh, họ đã đúc 15 cỗ đại pháo X ung tiêu băng\ \ đồng và hai cỗ \r\nsúng lớn Chấn hải, loại đại pháo lợi hại trong thủy chiến\ \ phương \r\nTây. Sau đó, lại xuất xưởng tiếp 30 cỗ Chấn hải. Năm 1829, quản \r\ \nkho Hải Dương là Tôn Thất Thiện cùng với 100 lính Chấn cơ chế \r\nra cối gỗ\ \ chạy bàng sức nước ở khe suối để giã, luyện thuốc súng. \r\nDụng cụ này là xe\ \ \"Thủy hỏa ký tế\", và những năm sau được phổ \r\ncập trong quân ngũ. Từ vũ\ \ khí phương Tây, người Đại Nam đã tự \r\ntìm hiểu từng chi tiết để chế tạo thước\ \ đo ngắm bắn, thước kiểm tra \r\nthuốc súng. Trong bảy năm ờ ngôi, vua Thiệu\ \ Trị đúc 9 cỗ súng \r\nbàng đồng hiệu là \"Thần uy phục viễn đại tướng quân\"\ , cỗ to nhất \r\nlà 10.706 cân, cỗ nhỏ nhất là 10.222 cân, tổng cộng là 93.829\ \ cân.\r\n649\r\nLỊCH SỬ VIỆT NAM - TẬP 5\r\nVà ba cỗ súng hiệu \"Bảo Đại định\ \ công an dân hòa chúng thượng \r\ntướng quân\", mỗi cỗ trên 14.500 cân, tổng\ \ cộng là 43.620 cân1.\r\nĐe tạo điều kiện cho quân thủy học tập, bộ Công cấp\ \ cho họ la \r\nbàn, thước đo nước, đồng hồ cát xem giờ của phương Tây. v ề khoa\ \ \r\nmục bắn súng thì lính thủy phải tập bắn súng điểu sang và đại bác. \r\n\ Minh Mệnh yêu cầu Hiệp biện Đại học sĩ lãnh Thượng thư bộ Binh \r\nTrương Đăng\ \ Quế đọc kỹ các sách và bản đồ thủy chiến \"Tây" model-index: - name: SentenceTransformer based on Alibaba-NLP/gte-multilingual-base results: - task: type: information-retrieval name: Information Retrieval dataset: name: Alibaba NLP/gte multilingual base type: Alibaba-NLP/gte-multilingual-base metrics: - type: cosine_accuracy@1 value: 0.4269406392694064 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.6648401826484018 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.7388127853881279 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.8168949771689498 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.4269406392694064 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2216133942161339 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.1477625570776256 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.08168949771689496 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.4269406392694064 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.6648401826484018 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.7388127853881279 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.8168949771689498 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.6233026051051767 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.5611618467782854 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.5670558073651792 name: Cosine Map@100 --- # SentenceTransformer based on Alibaba-NLP/gte-multilingual-base This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) on the csv dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) <!-- at revision ca1791e0bcc104f6db161f27de1340241b13c5a4 --> - **Maximum Sequence Length:** 8192 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - csv <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("minhtuan7akp/gte-vietnamese-finetune") # Run inference sentences = [ 'Gia Cát Lượng đã giúp ai trong việc quản lý nước Thục?', 'phải trông coi mọi việc, giúp Thành Vương đến lúc trưởng thành. \r\n4\r\n Hoắc Quang giữ chức Đại tư mã tướng quân, phò Hán Chiêu Đế lúc lên ngôi mới 9 tuổi. \r\n5\r\n Gia Cát Lượng tức Khổng Minh, là thừa tướng của Chiêu Đế Lưu Bị nước Thục đời Tam Quốc. Lưu Bị chết, con là Lưu Thiện nối \r\nngôi, tức Thục Hậu chúa, mọi việc nước, việc quân đều phải trông cậy vào Gia Cát Lượng. \r\n6\r\n Tô Hiến Thành là Thái úy triều Lý Cao Tông, nhận di mệnh Cao Tông phò vua nhỏ là Long Cán lên nối ngôi mới 3 tuổi. \r\n7\r\n Tứ phụ: nghĩa là bốn viên đại thần giúp vua khi mới lên ngôi. \r\n8\r\n Chỉ Thuận Tông. \r\n9\r\n Xích chủy: nghĩa là mõm đỏ, miệng đỏ, hay đỏ mỏ. Xích chủy hầu là loài đỏ mỏ ám chỉ Lê Quý Ly. \r\n10 Bạch kê: nghĩa là gà trắng. Nghệ Tông sinh năm Tân Dậu, tức năm gà. Tân thuộc hành kim, loài kim sắc trắng. Vì thế "bạch kê" \r\nám chỉ Nghệ Tông. \r\n11 Chữ vương? ở trong lòng chữ khẩu? là chữ "quốc"?. \r\n12 Theo tục nhà Trần, hằng năm vào ngày mồng 4 tháng 4, vua hội họp bề tôi làm lễ tuyên thệ ở đền Đồng Cổ. (Xem bản kỷ, quyển \r\n5, Kiến Trung năm thứ 3, 1277). \r\n13 Chỉ Quý Ly. \r\n288 Đại Việt Sử Ký Toàn Thư - Bản Kỷ - Quyển VIII \r\nQuý Ly bỏ mũ, rập đầu khóc lóc từ tạ, chỉ trời vạch đất thề rằng: \r\n"Nếu thần không biết dốc lòng trung, hết sức giúp Quan gia để truyền đến con cháu về sau thì \r\ntrời sẽ ghét bỏ thần". \r\nQuý Ly lại nói: "Lúc Linh Đức Vương làm điều thất đức, nếu không nhờ oai linh bệ hạ thì thần đã', 'Tây. Ngoài cơ sờ đúc súng cũ của tiên triều, năm 1825 vua Minh \r\nMệnh mờ thêm sáu xưởng nữa. vốn cần cù và ham học hỏi sáng \r\ntạo, những người thợ quân giới đã được "thứ súng tay nạp thuốc nổ \r\nmạnh theo kiểu Tây dương". Vào những năm cuối triều Minh \r\nM ệnh, họ đã đúc 15 cỗ đại pháo X ung tiêu băng đồng và hai cỗ \r\nsúng lớn Chấn hải, loại đại pháo lợi hại trong thủy chiến phương \r\nTây. Sau đó, lại xuất xưởng tiếp 30 cỗ Chấn hải. Năm 1829, quản \r\nkho Hải Dương là Tôn Thất Thiện cùng với 100 lính Chấn cơ chế \r\nra cối gỗ chạy bàng sức nước ở khe suối để giã, luyện thuốc súng. \r\nDụng cụ này là xe "Thủy hỏa ký tế", và những năm sau được phổ \r\ncập trong quân ngũ. Từ vũ khí phương Tây, người Đại Nam đã tự \r\ntìm hiểu từng chi tiết để chế tạo thước đo ngắm bắn, thước kiểm tra \r\nthuốc súng. Trong bảy năm ờ ngôi, vua Thiệu Trị đúc 9 cỗ súng \r\nbàng đồng hiệu là "Thần uy phục viễn đại tướng quân", cỗ to nhất \r\nlà 10.706 cân, cỗ nhỏ nhất là 10.222 cân, tổng cộng là 93.829 cân.\r\n649\r\nLỊCH SỬ VIỆT NAM - TẬP 5\r\nVà ba cỗ súng hiệu "Bảo Đại định công an dân hòa chúng thượng \r\ntướng quân", mỗi cỗ trên 14.500 cân, tổng cộng là 43.620 cân1.\r\nĐe tạo điều kiện cho quân thủy học tập, bộ Công cấp cho họ la \r\nbàn, thước đo nước, đồng hồ cát xem giờ của phương Tây. v ề khoa \r\nmục bắn súng thì lính thủy phải tập bắn súng điểu sang và đại bác. \r\nMinh Mệnh yêu cầu Hiệp biện Đại học sĩ lãnh Thượng thư bộ Binh \r\nTrương Đăng Quế đọc kỹ các sách và bản đồ thủy chiến "Tây', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `Alibaba-NLP/gte-multilingual-base` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.4269 | | cosine_accuracy@3 | 0.6648 | | cosine_accuracy@5 | 0.7388 | | cosine_accuracy@10 | 0.8169 | | cosine_precision@1 | 0.4269 | | cosine_precision@3 | 0.2216 | | cosine_precision@5 | 0.1478 | | cosine_precision@10 | 0.0817 | | cosine_recall@1 | 0.4269 | | cosine_recall@3 | 0.6648 | | cosine_recall@5 | 0.7388 | | cosine_recall@10 | 0.8169 | | **cosine_ndcg@10** | **0.6233** | | cosine_mrr@10 | 0.5612 | | cosine_map@100 | 0.5671 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### csv * Dataset: csv * Size: 21,892 training samples * Columns: <code>anchor</code> and <code>positive</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 26.95 tokens</li><li>max: 103 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 373.94 tokens</li><li>max: 596 tokens</li></ul> | * Samples: | anchor | positive | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Tính chất kiến trúc của đình làng triều Mạc được thể hiện qua những đặc điểm gì, như số gian, hình dạng, nội thất và cách bố trí không gian trong công trình?</code> | <code>Đình làng là công trình kiến trúc công cộng được dựng nên <br>băng sự đóng góp của cải và công sức của cả cộng đồng làng xã. <br>Ngoài chức năng là trụ sở hành chính của cả làng, ngôi đình còn là <br>trung tâm sinh hoạt văn hóa làng xã, là nơi diễn ra các nghi lễ trọng <br>đại trong dịp tế lễ thần Thành hoàng làng và tô chức hội hè hăng <br>năm. Có thê nói, ngôi đình làng là nơi hội tụ sức mạnh của cả cộng <br>đồng và là biểu trưng đặc sắc nhất của văn hóa làng xã. <br> <br>Trong các ngôi đình triều Mạc, Thân thành hoàng có lý lịch <br>xuất thân khá phong phú. Tản Viên sơn thánh là vị thần có ảnh <br>hưởng lớn ở xứ Đoài được thờ phụng ở đình Tây Đăng, Thanh Lũng <br>và nhiều làng xã khác. Thần Cao Sơn, Quý Minh tương truyền là <br>tướng tâm phúc của Hùng Vương được thờ ở đình làng Lỗ Hạnh. <br>Dân làng Lỗ Hạnh còn thờ cả Phương Dung công chúa... Từ thế <br>kỷ XYVI và các thế kỷ tiếp sau, Thần thành hoàng làng trở thành <br>vị vua tỉnh thần ở các làng xã, tín ngưỡng thờ cúng Thân thành <br>hoàng càng trở nên phong phú thê hiện qua lễ...</code> | | <code>Nguyễn Khắc Nhu có vai trò gì trong khởi nghĩa toàn khu vực miền núi Bắc Kỳ của Việt Nam Quốc dân Đảng vào năm 1930?</code> | <code>bị nổ do bất cẩn. Do đó công việc bị phát hiện. Hai người phụ trách <br>cơ quan chế bom là Đỗ Cương và Quản Trác trốn thoát. Nhiều binh <br>lính và dân thường bị bắt. Công việc bạo động của Xứ Nhu không <br>thành. Đúng lúc này Việt Nam Quốc dân Đảng vừa thành lập, cử <br>người tới mời Xứ Nhu và Việt Nam Dân quốc gia nhập Việt Nam <br>Quốc dân Đảng. Hầu hết các đồng chí của Xứ Nhu trở thành đảng <br>viên của Việt Nam Quốc dân Đảng ở vùng Bắc Ninh, Bắc Giang. <br>Do đó, Việt Nam Quốc dân Đảng mạnh lên về số lượng1. Cùng với <br>việc phát triển đảng viên ở Bẳc Ninh, Bắc Giang, Việt Nam Quốc <br>dân Đảng còn thiết lập nhiều cơ sở ở các tỉnh Thái Bình, Hải Dương, <br>1. Nguyễn Khắc Nhu tức Xứ Nhu (1882-1930), người làng Song Khê, huyện <br>Yên Dũng, tinh Bắc Giang. Với lòng yêu nuớc và ý chí chống Pháp, <br>ông dự tính thành lập một tổ chức hoạt động công khai nhăm đào tạo <br>tài năng cho đất nước lấy tên là "Hội Quốc dân dục tài”. Việc này <br>không thành công, ông lại lập tổ chức bí mật nhăm bạo động lật đổ ách <br>áp b...</code> | | <code>Giá gạo tháng 3-1950 ở Liên khu IV là bao nhiêu đồng/tạ và có chênh lệch gì so với giá gạo ở Liên khu III và Liên khu Việt Bắc?</code> | <code>ngày càng tăng nhanh, nhất là ở Việt Bắc. Giá gạo tăng mạnh <br>nhất, giá thực phẩm cũng tăng dần theo giá gạo. Giá các mặt hàng <br>kỹ nghệ tăng chậm hơn. Giá hàng ngoại hóa hầu như không tăng <br>vỉ trong vùng Pháp chiếm đóng, hàng ngoại hóa tính bằng tiền <br>Đông Dương không tăng, hom nữa nhân dân cũng ít tiêu thụ hàng <br>ngoại hóa vì bị cấm. <br>1. Viện Kinh tế học, Kinh tế Việt Nam từ Cách mạng Tháng Tám đến..., Sách <br>đã dẫn, tr. 238. <br>2. Chuơng trình và báo cáo của Bộ Kinh tế về tình hình hoạt động năm 1950. <br>Trung tâm lưu trữ quốc gia in, phông Phủ Thủ tướng, Hồ sơ số 1914. <br>488 <br>Chương VI. Việt Nam dân chủ cộng hòa xây dựng.. <br>Giá gạo trong những tháng đầu năm 1950 so với cuối năm 1949 <br>có thay đổi, Liên khu IV (Thanh Hóa) giá tăng lên 154%; Liên khu <br>III (Hà Đông - Hà Nam) giá tăng lên 153%; Liên khu Việt Bắc <br>(Thái Nguyên) giá tăng lên 800%. <br>Giá gạo ở Thái Nguyên từ 1.625 đồng/tạ lên 13.000 đồng/tạ <br>(tăng 800%); ờ Phú Thọ từ 2.650 đồng/tạ lên 7.500 đồng/tạ (tăng <br>283%). Mặt khác, ...</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Evaluation Dataset #### csv * Dataset: csv * Size: 21,892 evaluation samples * Columns: <code>anchor</code> and <code>positive</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | |:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 26.56 tokens</li><li>max: 108 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 369.01 tokens</li><li>max: 559 tokens</li></ul> | * Samples: | anchor | positive | |:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Nguyễn Hoàng đã thực hiện những hành động gì để dần dần tách khỏi sự ràng buộc của họ Trịnh sau khi trở lại Thuận Quảng vào năm 1600, và những hành động này đã ảnh hưởng như thế nào đến mối quan hệ giữa hai dòng họ?</code> | <code>thẳng đối với họ Nguyễn. Trịnh Tùng đã lấy danh nghĩa vua Lê sai <br>sứ giả là Thiêm đô ngự sử Lê Nghĩa Trạch đem sắc vào phủ dụ <br>Nguyễn Hoàng và vẫn cho ở lại trấn thủ, hằng năm nộp thuế như <br>cũ. Cùng với sắc của vua Lê, Trịnh Tùng có gửi thư kèm theo <br>Chương ĩ. Sự phân liệt Đàng Trong - Đàng Ngoài... <br>1, Toàn thư. quyển 17, tập IV, Sđd, tr. 200. <br>2, Đại Nam thực lục, Tiền biên, quyển 1, tập I, Sđd, tr. 34. <br>3, Đại Nam thực lục, Tiển biên, quyển 1, tập I, Sđd, tr. 35. <br>39 <br>LỊCH SỬ VIỆT NAM - TẬP 4 <br>"khuyên giữ việc thuế cống". Nguyễn Hoàng sai sứ giả đáp lễ tạ on <br>vua Lê và gửi thư cho Trịnh Tùng hẹn kết nghĩa thông gia, đem con <br>gái là Ngọc Tú gả cho Trịnh Tráng (con Trịnh Tùng) lấy danh <br>nghĩa hôn nhân để duy trì mối quan hệ bề ngoài giao hảo giữa hai <br>dòng họ vốn có sẵn một mối thù địch. <br>- Chính sách cùa họ Nguyễn từ khi Nguyễn Hoàng trở lại <br>Thuận Quảng <br>Năm 1600, Nguyễn Hoàng ròi được khỏi đất Bẳc trở về Thuận <br>Quảng bắt đầu thực hiện một chính sách cai trị mói, dần dần tác...</code> | | <code>Báo cáo của Ủy ban Kháng chiến hành chính Hà Nội về hoạt động giáo dục bù nhìn và tình hình các giáo sư trường Chu Văn An có nội dung gì?</code> | <code>Tài liệu tham khảo <br>21. Báo cáo sô' 2 BC/I ngày 12-11-1949 và Báo cáo sô' 463 <br>BC/DB ngày 25-12-1949 của Ty Công an H à Nội. Trung <br>tâm Lưu trữ Quốc gia III, phông Phủ Thủ tướng, Hồ sơ <br>SỐ921. <br>28. Báo “Le song” ngày 11-2-1949. Trung tâm Lưu trữ Quốc <br>gia III, phông Phủ Thủ tướng, Hồ sơ sô' 2002. <br>29. Báo cáo của u ỷ ban Kháng chiến hành chính Hà Nội vê <br>hoạt động giáo dục bù nhìn và tình hình các giáo sư <br>trường Chu Văn An. Trung tâm Lưu trữ Quốc gia III, <br>phông Phủ Thủ tướng, Hồ sơ số 979. <br>30. Báo cáo của Tổng Giám đốc Việt N am Công an vụ sô' <br>122/NCB3 ngày 1-4-1951. Trung tâm Lưu trữ Quốic gia <br>III, phông Phủ Thủ tướng, Hồ sơ sô' 979. <br>31. Báo cáo thành tích về cống tác công an trong 8 năm kháng <br>chiến (1946-1954) của Bộ Công an. Trung tâm Lưu trữ <br>Quốc gia III, phông Phủ Thủ tướng, Hồ sơ sô' 927. <br>32. Báo cáo một năm kháng chiến (12-1946 đến 12-1947) của <br>UBKCHC Khu 12. Trung tâm Lưu trữ Quốc gia III, phông <br>Phủ Thủ tướng, Hồ sơ sô" 2000. <br>33. Báo cáo thành tích quăn sự trong 8 n...</code> | | <code>Đặc điểm dân số của nước ta ảnh hưởng đến các ngành dịch vụ như thế nào và đòi hỏi những ngành dịch vụ nào cần được ưu tiên phát triển trong quá trình đô thị hóa?</code> | <code>— Trong các thành phố lớn thường hình thành các trung tâm giao dịch, <br>thương mại. Đó là nơi tập trung các ngân hàng, các văn phòng đại diện <br>của các công ti, các siêu thị hay các tổ hợp thương mại, dịch vụ lớn... <br>Ở các thành phố lớn trên thế giới, thường dễ nhận thấy các trung tâm <br>thương mại này do sự tập trung các ngôi nhà cao tầng, chọc trời. Một <br>thành phố có thể có trung tâm thương mại chính và một số trung tâm <br>thương mại nhỏ hơn, kết quả của sự phát triển đô thị. <br> <br>— Ở nước ta, các thành phố, thị xã thường có khu hành chính (phân <br>“đô”) và khu buôn bán, dịch vụ (phân “thị'). Ở Hà Nội, Thành phố <br>Hồ Chí Minh các trung tâm giao dịch, thương mại của thành phố đang <br>được hình thành rõ nét. <br> <br>CÂU HỎI VÀ BÀI TẬP <br> <br>174 <br> <br>1. Cho biết đặc điểm dân số của nước ta (đông, tăng còn tương đối <br>nhanh, mức sống đang nâng lên và đô thị hoá đang phát triển với <br>tốc độ nhanh hơn) có ảnh hưởng đến các ngành dịch vụ như thế <br>nào ? Các đặc điểm đó đòi hỏi những ngành dịch vụ nào cần được <br>ưu tiê...</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 6 - `per_device_eval_batch_size`: 6 - `learning_rate`: 3e-06 - `num_train_epochs`: 2 - `warmup_ratio`: 0.05 - `bf16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 6 - `per_device_eval_batch_size`: 6 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 3e-06 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 2 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.05 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | Alibaba-NLP/gte-multilingual-base_cosine_ndcg@10 | |:------:|:----:|:-------------:|:---------------:|:------------------------------------------------:| | 0.0305 | 100 | 0.1448 | 0.1106 | 0.5590 | | 0.0609 | 200 | 0.1453 | 0.0878 | 0.5671 | | 0.0914 | 300 | 0.093 | 0.0642 | 0.5851 | | 0.1218 | 400 | 0.0763 | 0.0490 | 0.5939 | | 0.1523 | 500 | 0.0641 | 0.0417 | 0.5992 | | 0.1827 | 600 | 0.0524 | 0.0388 | 0.5993 | | 0.2132 | 700 | 0.0495 | 0.0349 | 0.6054 | | 0.2436 | 800 | 0.0497 | 0.0324 | 0.6059 | | 0.2741 | 900 | 0.0354 | 0.0311 | 0.6049 | | 0.3045 | 1000 | 0.0494 | 0.0300 | 0.6088 | | 0.3350 | 1100 | 0.0596 | 0.0294 | 0.6080 | | 0.3654 | 1200 | 0.0463 | 0.0284 | 0.6101 | | 0.3959 | 1300 | 0.0359 | 0.0272 | 0.6097 | | 0.4263 | 1400 | 0.0458 | 0.0267 | 0.6096 | | 0.4568 | 1500 | 0.0402 | 0.0265 | 0.6104 | | 0.4872 | 1600 | 0.0392 | 0.0256 | 0.6099 | | 0.5177 | 1700 | 0.0425 | 0.0250 | 0.6116 | | 0.5481 | 1800 | 0.0367 | 0.0250 | 0.6117 | | 0.5786 | 1900 | 0.0359 | 0.0246 | 0.6091 | | 0.6090 | 2000 | 0.0304 | 0.0254 | 0.6069 | | 0.6395 | 2100 | 0.0429 | 0.0247 | 0.6087 | | 0.6699 | 2200 | 0.0405 | 0.0240 | 0.6137 | | 0.7004 | 2300 | 0.0206 | 0.0241 | 0.6129 | | 0.7308 | 2400 | 0.0406 | 0.0237 | 0.6123 | | 0.7613 | 2500 | 0.0431 | 0.0235 | 0.6138 | | 0.7917 | 2600 | 0.032 | 0.0233 | 0.6169 | | 0.8222 | 2700 | 0.0365 | 0.0230 | 0.6145 | | 0.8526 | 2800 | 0.0319 | 0.0222 | 0.6182 | | 0.8831 | 2900 | 0.0316 | 0.0225 | 0.6170 | | 0.9135 | 3000 | 0.0319 | 0.0223 | 0.6179 | | 0.9440 | 3100 | 0.0458 | 0.0222 | 0.6190 | | 0.9744 | 3200 | 0.0387 | 0.0221 | 0.6203 | | 1.0049 | 3300 | 0.0356 | 0.0217 | 0.6216 | | 1.0353 | 3400 | 0.0298 | 0.0213 | 0.6229 | | 1.0658 | 3500 | 0.0411 | 0.0211 | 0.6229 | | 1.0962 | 3600 | 0.0269 | 0.0211 | 0.6231 | | 1.1267 | 3700 | 0.0279 | 0.0214 | 0.6199 | | 1.1571 | 3800 | 0.0207 | 0.0213 | 0.6217 | | 1.1876 | 3900 | 0.0269 | 0.0208 | 0.6231 | | 1.2180 | 4000 | 0.0282 | 0.0212 | 0.6195 | | 1.2485 | 4100 | 0.0226 | 0.0212 | 0.6215 | | 1.2789 | 4200 | 0.0269 | 0.0212 | 0.6219 | | 1.3094 | 4300 | 0.026 | 0.0212 | 0.6191 | | 1.3398 | 4400 | 0.026 | 0.0211 | 0.6220 | | 1.3703 | 4500 | 0.0266 | 0.0213 | 0.6214 | | 1.4007 | 4600 | 0.034 | 0.0214 | 0.6206 | | 1.4312 | 4700 | 0.0344 | 0.0213 | 0.6213 | | 1.4616 | 4800 | 0.0183 | 0.0215 | 0.6219 | | 1.4921 | 4900 | 0.03 | 0.0214 | 0.6224 | | 1.5225 | 5000 | 0.0245 | 0.0213 | 0.6226 | | 1.5530 | 5100 | 0.0372 | 0.0211 | 0.6216 | | 1.5834 | 5200 | 0.0251 | 0.0209 | 0.6223 | | 1.6139 | 5300 | 0.0227 | 0.0208 | 0.6222 | | 1.6443 | 5400 | 0.0256 | 0.0208 | 0.6210 | | 1.6748 | 5500 | 0.0284 | 0.0209 | 0.6224 | | 1.7052 | 5600 | 0.0286 | 0.0211 | 0.6218 | | 1.7357 | 5700 | 0.0271 | 0.0209 | 0.6236 | | 1.7661 | 5800 | 0.0184 | 0.0209 | 0.6217 | | 1.7966 | 5900 | 0.0347 | 0.0208 | 0.6219 | | 1.8270 | 6000 | 0.0245 | 0.0208 | 0.6227 | | 1.8575 | 6100 | 0.0248 | 0.0207 | 0.6224 | | 1.8879 | 6200 | 0.0261 | 0.0207 | 0.6235 | | 1.9184 | 6300 | 0.0284 | 0.0206 | 0.6224 | | 1.9488 | 6400 | 0.0174 | 0.0207 | 0.6233 | | 1.9793 | 6500 | 0.0213 | 0.0207 | 0.6233 | ### Framework Versions - Python: 3.11.11 - Sentence Transformers: 3.4.1 - Transformers: 4.48.0 - PyTorch: 2.5.1 - Accelerate: 1.2.1 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION" ]
[ "CHIA" ]
Non_BioNLP
McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp-unsup-simcse
McGill-NLP
sentence-similarity
[ "peft", "safetensors", "text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "text-reranking", "feature-extraction", "sentence-similarity", "Sentence Similarity", "natural_questions", "ms_marco", "fever", "hotpot_qa", "mteb", "en", "arxiv:2404.05961", "license:mit", "model-index", "region:us" ]
1,712
1,712
2,528
1
--- language: - en library_name: peft license: mit pipeline_tag: sentence-similarity tags: - text-embedding - embeddings - information-retrieval - beir - text-classification - language-model - text-clustering - text-semantic-similarity - text-evaluation - text-reranking - feature-extraction - sentence-similarity - Sentence Similarity - natural_questions - ms_marco - fever - hotpot_qa - mteb model-index: - name: LLM2Vec-Sheared-LLaMA-unsupervised results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 72.92537313432835 - type: ap value: 36.6875749512053 - type: f1 value: 67.36274146169845 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 74.282675 - type: ap value: 69.15441866642587 - type: f1 value: 74.13028166370813 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 36.136 - type: f1 value: 35.840498320506235 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 21.407999999999998 - type: map_at_10 value: 35.474 - type: map_at_100 value: 36.653999999999996 - type: map_at_1000 value: 36.68 - type: map_at_3 value: 30.974 - type: map_at_5 value: 33.265 - type: mrr_at_1 value: 22.119 - type: mrr_at_10 value: 35.714 - type: mrr_at_100 value: 36.895 - type: mrr_at_1000 value: 36.921 - type: mrr_at_3 value: 31.2 - type: mrr_at_5 value: 33.518 - type: ndcg_at_1 value: 21.407999999999998 - type: ndcg_at_10 value: 43.644 - type: ndcg_at_100 value: 49.035000000000004 - type: ndcg_at_1000 value: 49.685 - type: ndcg_at_3 value: 34.174 - type: ndcg_at_5 value: 38.288 - type: precision_at_1 value: 21.407999999999998 - type: precision_at_10 value: 6.999 - type: precision_at_100 value: 0.9440000000000001 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 14.485999999999999 - type: precision_at_5 value: 10.683 - type: recall_at_1 value: 21.407999999999998 - type: recall_at_10 value: 69.986 - type: recall_at_100 value: 94.381 - type: recall_at_1000 value: 99.431 - type: recall_at_3 value: 43.457 - type: recall_at_5 value: 53.413999999999994 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 42.915010245699904 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 35.19568272188972 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 52.696972763822615 - type: mrr value: 65.87136701402629 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_spearman value: 75.12038636775851 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 78.99675324675324 - type: f1 value: 78.90527329824852 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.02170435970243 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 27.208216971540782 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: cqadupstack/android config: default split: test revision: None metrics: - type: map_at_1 value: 16.432 - type: map_at_10 value: 23.769000000000002 - type: map_at_100 value: 25.038 - type: map_at_1000 value: 25.208000000000002 - type: map_at_3 value: 21.532999999999998 - type: map_at_5 value: 22.668 - type: mrr_at_1 value: 21.316 - type: mrr_at_10 value: 28.89 - type: mrr_at_100 value: 29.799999999999997 - type: mrr_at_1000 value: 29.887999999999998 - type: mrr_at_3 value: 26.705000000000002 - type: mrr_at_5 value: 27.864 - type: ndcg_at_1 value: 21.316 - type: ndcg_at_10 value: 28.656 - type: ndcg_at_100 value: 34.405 - type: ndcg_at_1000 value: 37.771 - type: ndcg_at_3 value: 24.98 - type: ndcg_at_5 value: 26.384999999999998 - type: precision_at_1 value: 21.316 - type: precision_at_10 value: 5.8229999999999995 - type: precision_at_100 value: 1.157 - type: precision_at_1000 value: 0.181 - type: precision_at_3 value: 12.446 - type: precision_at_5 value: 8.984 - type: recall_at_1 value: 16.432 - type: recall_at_10 value: 37.696000000000005 - type: recall_at_100 value: 63.198 - type: recall_at_1000 value: 86.651 - type: recall_at_3 value: 26.651000000000003 - type: recall_at_5 value: 30.901 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval type: cqadupstack/english config: default split: test revision: None metrics: - type: map_at_1 value: 16.106 - type: map_at_10 value: 21.770999999999997 - type: map_at_100 value: 22.538 - type: map_at_1000 value: 22.656000000000002 - type: map_at_3 value: 19.918 - type: map_at_5 value: 20.957 - type: mrr_at_1 value: 21.083 - type: mrr_at_10 value: 26.502 - type: mrr_at_100 value: 27.161 - type: mrr_at_1000 value: 27.234 - type: mrr_at_3 value: 24.735 - type: mrr_at_5 value: 25.753999999999998 - type: ndcg_at_1 value: 21.083 - type: ndcg_at_10 value: 25.625999999999998 - type: ndcg_at_100 value: 29.152 - type: ndcg_at_1000 value: 32.025 - type: ndcg_at_3 value: 22.721 - type: ndcg_at_5 value: 24.029 - type: precision_at_1 value: 21.083 - type: precision_at_10 value: 4.8919999999999995 - type: precision_at_100 value: 0.844 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 11.104 - type: precision_at_5 value: 7.987 - type: recall_at_1 value: 16.106 - type: recall_at_10 value: 32.385999999999996 - type: recall_at_100 value: 47.961999999999996 - type: recall_at_1000 value: 67.63900000000001 - type: recall_at_3 value: 23.568 - type: recall_at_5 value: 27.326 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval type: cqadupstack/gaming config: default split: test revision: None metrics: - type: map_at_1 value: 22.517 - type: map_at_10 value: 29.593999999999998 - type: map_at_100 value: 30.695 - type: map_at_1000 value: 30.803000000000004 - type: map_at_3 value: 27.592 - type: map_at_5 value: 28.768 - type: mrr_at_1 value: 26.27 - type: mrr_at_10 value: 33.076 - type: mrr_at_100 value: 33.998 - type: mrr_at_1000 value: 34.073 - type: mrr_at_3 value: 31.223 - type: mrr_at_5 value: 32.257000000000005 - type: ndcg_at_1 value: 26.27 - type: ndcg_at_10 value: 33.726 - type: ndcg_at_100 value: 39.079 - type: ndcg_at_1000 value: 41.762 - type: ndcg_at_3 value: 30.064 - type: ndcg_at_5 value: 31.858999999999998 - type: precision_at_1 value: 26.27 - type: precision_at_10 value: 5.448 - type: precision_at_100 value: 0.898 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 13.417000000000002 - type: precision_at_5 value: 9.317 - type: recall_at_1 value: 22.517 - type: recall_at_10 value: 42.814 - type: recall_at_100 value: 67.037 - type: recall_at_1000 value: 86.89099999999999 - type: recall_at_3 value: 33.041 - type: recall_at_5 value: 37.389 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval type: cqadupstack/gis config: default split: test revision: None metrics: - type: map_at_1 value: 7.681 - type: map_at_10 value: 10.655000000000001 - type: map_at_100 value: 11.274000000000001 - type: map_at_1000 value: 11.381 - type: map_at_3 value: 9.793000000000001 - type: map_at_5 value: 10.202 - type: mrr_at_1 value: 8.248999999999999 - type: mrr_at_10 value: 11.453000000000001 - type: mrr_at_100 value: 12.074 - type: mrr_at_1000 value: 12.174 - type: mrr_at_3 value: 10.452 - type: mrr_at_5 value: 10.989 - type: ndcg_at_1 value: 8.248999999999999 - type: ndcg_at_10 value: 12.467 - type: ndcg_at_100 value: 15.942 - type: ndcg_at_1000 value: 19.378999999999998 - type: ndcg_at_3 value: 10.631 - type: ndcg_at_5 value: 11.411 - type: precision_at_1 value: 8.248999999999999 - type: precision_at_10 value: 1.966 - type: precision_at_100 value: 0.40099999999999997 - type: precision_at_1000 value: 0.075 - type: precision_at_3 value: 4.444 - type: precision_at_5 value: 3.186 - type: recall_at_1 value: 7.681 - type: recall_at_10 value: 17.302 - type: recall_at_100 value: 34.014 - type: recall_at_1000 value: 61.207 - type: recall_at_3 value: 12.389 - type: recall_at_5 value: 14.158999999999999 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval type: cqadupstack/mathematica config: default split: test revision: None metrics: - type: map_at_1 value: 3.868 - type: map_at_10 value: 6.281000000000001 - type: map_at_100 value: 6.903 - type: map_at_1000 value: 7.038 - type: map_at_3 value: 5.234 - type: map_at_5 value: 5.685 - type: mrr_at_1 value: 5.1 - type: mrr_at_10 value: 8.148 - type: mrr_at_100 value: 8.846 - type: mrr_at_1000 value: 8.963000000000001 - type: mrr_at_3 value: 6.944 - type: mrr_at_5 value: 7.498 - type: ndcg_at_1 value: 5.1 - type: ndcg_at_10 value: 8.405999999999999 - type: ndcg_at_100 value: 12.014 - type: ndcg_at_1000 value: 15.956999999999999 - type: ndcg_at_3 value: 6.22 - type: ndcg_at_5 value: 6.962 - type: precision_at_1 value: 5.1 - type: precision_at_10 value: 1.8159999999999998 - type: precision_at_100 value: 0.437 - type: precision_at_1000 value: 0.09 - type: precision_at_3 value: 3.1510000000000002 - type: precision_at_5 value: 2.463 - type: recall_at_1 value: 3.868 - type: recall_at_10 value: 13.319 - type: recall_at_100 value: 29.985 - type: recall_at_1000 value: 59.245999999999995 - type: recall_at_3 value: 7.0809999999999995 - type: recall_at_5 value: 8.914 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval type: cqadupstack/physics config: default split: test revision: None metrics: - type: map_at_1 value: 13.091 - type: map_at_10 value: 18.701999999999998 - type: map_at_100 value: 19.897000000000002 - type: map_at_1000 value: 20.044 - type: map_at_3 value: 17.041999999999998 - type: map_at_5 value: 17.943 - type: mrr_at_1 value: 16.939 - type: mrr_at_10 value: 23.038 - type: mrr_at_100 value: 24.029 - type: mrr_at_1000 value: 24.12 - type: mrr_at_3 value: 21.221999999999998 - type: mrr_at_5 value: 22.198999999999998 - type: ndcg_at_1 value: 16.939 - type: ndcg_at_10 value: 22.566 - type: ndcg_at_100 value: 28.364 - type: ndcg_at_1000 value: 31.646 - type: ndcg_at_3 value: 19.646 - type: ndcg_at_5 value: 20.915 - type: precision_at_1 value: 16.939 - type: precision_at_10 value: 4.340999999999999 - type: precision_at_100 value: 0.882 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 9.785 - type: precision_at_5 value: 6.93 - type: recall_at_1 value: 13.091 - type: recall_at_10 value: 30.022 - type: recall_at_100 value: 55.579 - type: recall_at_1000 value: 78.14 - type: recall_at_3 value: 21.4 - type: recall_at_5 value: 25.020999999999997 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval type: cqadupstack/programmers config: default split: test revision: None metrics: - type: map_at_1 value: 11.315999999999999 - type: map_at_10 value: 16.191 - type: map_at_100 value: 17.116 - type: map_at_1000 value: 17.262 - type: map_at_3 value: 14.302999999999999 - type: map_at_5 value: 15.278 - type: mrr_at_1 value: 14.269000000000002 - type: mrr_at_10 value: 19.409000000000002 - type: mrr_at_100 value: 20.298 - type: mrr_at_1000 value: 20.393 - type: mrr_at_3 value: 17.504 - type: mrr_at_5 value: 18.423000000000002 - type: ndcg_at_1 value: 14.269000000000002 - type: ndcg_at_10 value: 19.735 - type: ndcg_at_100 value: 24.582 - type: ndcg_at_1000 value: 28.337 - type: ndcg_at_3 value: 16.220000000000002 - type: ndcg_at_5 value: 17.644000000000002 - type: precision_at_1 value: 14.269000000000002 - type: precision_at_10 value: 3.721 - type: precision_at_100 value: 0.752 - type: precision_at_1000 value: 0.129 - type: precision_at_3 value: 7.800999999999999 - type: precision_at_5 value: 5.753 - type: recall_at_1 value: 11.315999999999999 - type: recall_at_10 value: 27.693 - type: recall_at_100 value: 49.265 - type: recall_at_1000 value: 76.291 - type: recall_at_3 value: 17.593 - type: recall_at_5 value: 21.368000000000002 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: mteb/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 11.131583333333332 - type: map_at_10 value: 15.4605 - type: map_at_100 value: 16.3075 - type: map_at_1000 value: 16.4375 - type: map_at_3 value: 13.995833333333332 - type: map_at_5 value: 14.783666666666667 - type: mrr_at_1 value: 13.805833333333334 - type: mrr_at_10 value: 18.405749999999998 - type: mrr_at_100 value: 19.17516666666667 - type: mrr_at_1000 value: 19.265833333333333 - type: mrr_at_3 value: 16.892416666666666 - type: mrr_at_5 value: 17.71058333333333 - type: ndcg_at_1 value: 13.805833333333334 - type: ndcg_at_10 value: 18.500666666666664 - type: ndcg_at_100 value: 22.78191666666667 - type: ndcg_at_1000 value: 26.095583333333334 - type: ndcg_at_3 value: 15.846916666666663 - type: ndcg_at_5 value: 17.004250000000003 - type: precision_at_1 value: 13.805833333333334 - type: precision_at_10 value: 3.4233333333333325 - type: precision_at_100 value: 0.6828333333333333 - type: precision_at_1000 value: 0.11641666666666667 - type: precision_at_3 value: 7.511749999999999 - type: precision_at_5 value: 5.440916666666666 - type: recall_at_1 value: 11.131583333333332 - type: recall_at_10 value: 24.794166666666666 - type: recall_at_100 value: 44.356 - type: recall_at_1000 value: 68.71899999999998 - type: recall_at_3 value: 17.145583333333335 - type: recall_at_5 value: 20.229083333333335 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval type: cqadupstack/stats config: default split: test revision: None metrics: - type: map_at_1 value: 7.5520000000000005 - type: map_at_10 value: 10.355 - type: map_at_100 value: 10.875 - type: map_at_1000 value: 10.972999999999999 - type: map_at_3 value: 9.341000000000001 - type: map_at_5 value: 9.969 - type: mrr_at_1 value: 9.049 - type: mrr_at_10 value: 12.002 - type: mrr_at_100 value: 12.55 - type: mrr_at_1000 value: 12.635 - type: mrr_at_3 value: 11.12 - type: mrr_at_5 value: 11.626 - type: ndcg_at_1 value: 9.049 - type: ndcg_at_10 value: 12.241 - type: ndcg_at_100 value: 15.231 - type: ndcg_at_1000 value: 18.265 - type: ndcg_at_3 value: 10.424999999999999 - type: ndcg_at_5 value: 11.360000000000001 - type: precision_at_1 value: 9.049 - type: precision_at_10 value: 2.147 - type: precision_at_100 value: 0.411 - type: precision_at_1000 value: 0.073 - type: precision_at_3 value: 4.755 - type: precision_at_5 value: 3.558 - type: recall_at_1 value: 7.5520000000000005 - type: recall_at_10 value: 16.448999999999998 - type: recall_at_100 value: 30.505 - type: recall_at_1000 value: 54.435 - type: recall_at_3 value: 11.366 - type: recall_at_5 value: 13.758999999999999 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval type: cqadupstack/tex config: default split: test revision: None metrics: - type: map_at_1 value: 5.954000000000001 - type: map_at_10 value: 8.229000000000001 - type: map_at_100 value: 8.694 - type: map_at_1000 value: 8.788 - type: map_at_3 value: 7.5 - type: map_at_5 value: 7.856000000000001 - type: mrr_at_1 value: 7.983 - type: mrr_at_10 value: 10.833 - type: mrr_at_100 value: 11.324 - type: mrr_at_1000 value: 11.404 - type: mrr_at_3 value: 9.911 - type: mrr_at_5 value: 10.401 - type: ndcg_at_1 value: 7.983 - type: ndcg_at_10 value: 10.126 - type: ndcg_at_100 value: 12.702 - type: ndcg_at_1000 value: 15.581999999999999 - type: ndcg_at_3 value: 8.779 - type: ndcg_at_5 value: 9.279 - type: precision_at_1 value: 7.983 - type: precision_at_10 value: 1.955 - type: precision_at_100 value: 0.392 - type: precision_at_1000 value: 0.076 - type: precision_at_3 value: 4.382 - type: precision_at_5 value: 3.09 - type: recall_at_1 value: 5.954000000000001 - type: recall_at_10 value: 13.472000000000001 - type: recall_at_100 value: 25.407999999999998 - type: recall_at_1000 value: 47.028 - type: recall_at_3 value: 9.367 - type: recall_at_5 value: 10.867 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval type: cqadupstack/unix config: default split: test revision: None metrics: - type: map_at_1 value: 8.894 - type: map_at_10 value: 12.758 - type: map_at_100 value: 13.639999999999999 - type: map_at_1000 value: 13.76 - type: map_at_3 value: 11.447000000000001 - type: map_at_5 value: 12.205 - type: mrr_at_1 value: 10.914 - type: mrr_at_10 value: 15.739 - type: mrr_at_100 value: 16.589000000000002 - type: mrr_at_1000 value: 16.679 - type: mrr_at_3 value: 14.179 - type: mrr_at_5 value: 15.162999999999998 - type: ndcg_at_1 value: 10.914 - type: ndcg_at_10 value: 15.629000000000001 - type: ndcg_at_100 value: 20.261000000000003 - type: ndcg_at_1000 value: 23.781 - type: ndcg_at_3 value: 13.102 - type: ndcg_at_5 value: 14.338000000000001 - type: precision_at_1 value: 10.914 - type: precision_at_10 value: 2.91 - type: precision_at_100 value: 0.601 - type: precision_at_1000 value: 0.10200000000000001 - type: precision_at_3 value: 6.311999999999999 - type: precision_at_5 value: 4.683 - type: recall_at_1 value: 8.894 - type: recall_at_10 value: 21.45 - type: recall_at_100 value: 42.617 - type: recall_at_1000 value: 69.233 - type: recall_at_3 value: 14.52 - type: recall_at_5 value: 17.681 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval type: cqadupstack/webmasters config: default split: test revision: None metrics: - type: map_at_1 value: 12.158 - type: map_at_10 value: 16.332 - type: map_at_100 value: 17.458000000000002 - type: map_at_1000 value: 17.687 - type: map_at_3 value: 14.529 - type: map_at_5 value: 15.515 - type: mrr_at_1 value: 15.809999999999999 - type: mrr_at_10 value: 19.917 - type: mrr_at_100 value: 20.875 - type: mrr_at_1000 value: 20.985 - type: mrr_at_3 value: 18.116 - type: mrr_at_5 value: 19.025 - type: ndcg_at_1 value: 15.809999999999999 - type: ndcg_at_10 value: 19.869999999999997 - type: ndcg_at_100 value: 24.907 - type: ndcg_at_1000 value: 29.076999999999998 - type: ndcg_at_3 value: 16.899 - type: ndcg_at_5 value: 18.23 - type: precision_at_1 value: 15.809999999999999 - type: precision_at_10 value: 3.972 - type: precision_at_100 value: 0.9860000000000001 - type: precision_at_1000 value: 0.203 - type: precision_at_3 value: 8.169 - type: precision_at_5 value: 6.087 - type: recall_at_1 value: 12.158 - type: recall_at_10 value: 26.338 - type: recall_at_100 value: 49.845 - type: recall_at_1000 value: 78.82000000000001 - type: recall_at_3 value: 16.997 - type: recall_at_5 value: 20.848 - task: type: Retrieval dataset: name: MTEB CQADupstackWordpressRetrieval type: cqadupstack/wordpress config: default split: test revision: None metrics: - type: map_at_1 value: 8.01 - type: map_at_10 value: 10.889 - type: map_at_100 value: 11.562 - type: map_at_1000 value: 11.65 - type: map_at_3 value: 9.718 - type: map_at_5 value: 10.358 - type: mrr_at_1 value: 8.688 - type: mrr_at_10 value: 11.862 - type: mrr_at_100 value: 12.558 - type: mrr_at_1000 value: 12.642000000000001 - type: mrr_at_3 value: 10.598 - type: mrr_at_5 value: 11.328000000000001 - type: ndcg_at_1 value: 8.688 - type: ndcg_at_10 value: 12.959999999999999 - type: ndcg_at_100 value: 16.744 - type: ndcg_at_1000 value: 19.564999999999998 - type: ndcg_at_3 value: 10.476 - type: ndcg_at_5 value: 11.639 - type: precision_at_1 value: 8.688 - type: precision_at_10 value: 2.089 - type: precision_at_100 value: 0.43299999999999994 - type: precision_at_1000 value: 0.07200000000000001 - type: precision_at_3 value: 4.375 - type: precision_at_5 value: 3.253 - type: recall_at_1 value: 8.01 - type: recall_at_10 value: 18.589 - type: recall_at_100 value: 36.857 - type: recall_at_1000 value: 59.047000000000004 - type: recall_at_3 value: 11.774 - type: recall_at_5 value: 14.516000000000002 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 6.4719999999999995 - type: map_at_10 value: 12.322 - type: map_at_100 value: 14.122000000000002 - type: map_at_1000 value: 14.35 - type: map_at_3 value: 9.667 - type: map_at_5 value: 10.931000000000001 - type: mrr_at_1 value: 15.179 - type: mrr_at_10 value: 24.864 - type: mrr_at_100 value: 26.144000000000002 - type: mrr_at_1000 value: 26.198 - type: mrr_at_3 value: 20.999000000000002 - type: mrr_at_5 value: 23.097 - type: ndcg_at_1 value: 15.179 - type: ndcg_at_10 value: 18.951999999999998 - type: ndcg_at_100 value: 26.924 - type: ndcg_at_1000 value: 30.991999999999997 - type: ndcg_at_3 value: 13.778000000000002 - type: ndcg_at_5 value: 15.549 - type: precision_at_1 value: 15.179 - type: precision_at_10 value: 6.625 - type: precision_at_100 value: 1.516 - type: precision_at_1000 value: 0.22599999999999998 - type: precision_at_3 value: 10.51 - type: precision_at_5 value: 8.847 - type: recall_at_1 value: 6.4719999999999995 - type: recall_at_10 value: 25.191999999999997 - type: recall_at_100 value: 53.315 - type: recall_at_1000 value: 76.163 - type: recall_at_3 value: 12.834999999999999 - type: recall_at_5 value: 17.388 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 1.947 - type: map_at_10 value: 4.858 - type: map_at_100 value: 7.185999999999999 - type: map_at_1000 value: 7.931000000000001 - type: map_at_3 value: 3.2939999999999996 - type: map_at_5 value: 3.914 - type: mrr_at_1 value: 23.25 - type: mrr_at_10 value: 33.035 - type: mrr_at_100 value: 33.721000000000004 - type: mrr_at_1000 value: 33.789 - type: mrr_at_3 value: 29.75 - type: mrr_at_5 value: 31.738 - type: ndcg_at_1 value: 15.625 - type: ndcg_at_10 value: 13.211999999999998 - type: ndcg_at_100 value: 16.422 - type: ndcg_at_1000 value: 23.058999999999997 - type: ndcg_at_3 value: 14.573 - type: ndcg_at_5 value: 13.733999999999998 - type: precision_at_1 value: 23.25 - type: precision_at_10 value: 12.45 - type: precision_at_100 value: 4.192 - type: precision_at_1000 value: 1.083 - type: precision_at_3 value: 18.667 - type: precision_at_5 value: 15.950000000000001 - type: recall_at_1 value: 1.947 - type: recall_at_10 value: 9.317 - type: recall_at_100 value: 23.066 - type: recall_at_1000 value: 45.704 - type: recall_at_3 value: 4.12 - type: recall_at_5 value: 5.591 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 42.855 - type: f1 value: 39.029787102377576 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 8.461 - type: map_at_10 value: 13.655999999999999 - type: map_at_100 value: 14.499 - type: map_at_1000 value: 14.585999999999999 - type: map_at_3 value: 11.848 - type: map_at_5 value: 12.842999999999998 - type: mrr_at_1 value: 9.136 - type: mrr_at_10 value: 14.587 - type: mrr_at_100 value: 15.436 - type: mrr_at_1000 value: 15.518 - type: mrr_at_3 value: 12.690999999999999 - type: mrr_at_5 value: 13.747000000000002 - type: ndcg_at_1 value: 9.136 - type: ndcg_at_10 value: 16.958000000000002 - type: ndcg_at_100 value: 21.43 - type: ndcg_at_1000 value: 24.031 - type: ndcg_at_3 value: 13.191 - type: ndcg_at_5 value: 14.987 - type: precision_at_1 value: 9.136 - type: precision_at_10 value: 2.897 - type: precision_at_100 value: 0.532 - type: precision_at_1000 value: 0.077 - type: precision_at_3 value: 5.8709999999999996 - type: precision_at_5 value: 4.47 - type: recall_at_1 value: 8.461 - type: recall_at_10 value: 26.509 - type: recall_at_100 value: 47.776 - type: recall_at_1000 value: 68.26299999999999 - type: recall_at_3 value: 16.203 - type: recall_at_5 value: 20.505000000000003 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 7.396 - type: map_at_10 value: 12.393 - type: map_at_100 value: 13.857 - type: map_at_1000 value: 14.086000000000002 - type: map_at_3 value: 10.545 - type: map_at_5 value: 11.505 - type: mrr_at_1 value: 15.432000000000002 - type: mrr_at_10 value: 21.615000000000002 - type: mrr_at_100 value: 22.833000000000002 - type: mrr_at_1000 value: 22.931 - type: mrr_at_3 value: 19.522000000000002 - type: mrr_at_5 value: 20.663999999999998 - type: ndcg_at_1 value: 15.432000000000002 - type: ndcg_at_10 value: 16.986 - type: ndcg_at_100 value: 23.880000000000003 - type: ndcg_at_1000 value: 28.762999999999998 - type: ndcg_at_3 value: 14.482999999999999 - type: ndcg_at_5 value: 15.334999999999999 - type: precision_at_1 value: 15.432000000000002 - type: precision_at_10 value: 4.984999999999999 - type: precision_at_100 value: 1.167 - type: precision_at_1000 value: 0.2 - type: precision_at_3 value: 9.825000000000001 - type: precision_at_5 value: 7.469 - type: recall_at_1 value: 7.396 - type: recall_at_10 value: 21.389 - type: recall_at_100 value: 48.107 - type: recall_at_1000 value: 78.366 - type: recall_at_3 value: 13.181000000000001 - type: recall_at_5 value: 16.611 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 11.884 - type: map_at_10 value: 17.09 - type: map_at_100 value: 17.96 - type: map_at_1000 value: 18.081 - type: map_at_3 value: 15.296000000000001 - type: map_at_5 value: 16.289 - type: mrr_at_1 value: 23.768 - type: mrr_at_10 value: 29.991 - type: mrr_at_100 value: 30.862000000000002 - type: mrr_at_1000 value: 30.935000000000002 - type: mrr_at_3 value: 27.986 - type: mrr_at_5 value: 29.078 - type: ndcg_at_1 value: 23.768 - type: ndcg_at_10 value: 22.634999999999998 - type: ndcg_at_100 value: 27.059 - type: ndcg_at_1000 value: 30.145 - type: ndcg_at_3 value: 19.058 - type: ndcg_at_5 value: 20.762 - type: precision_at_1 value: 23.768 - type: precision_at_10 value: 5.2490000000000006 - type: precision_at_100 value: 0.8829999999999999 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 12.091000000000001 - type: precision_at_5 value: 8.605 - type: recall_at_1 value: 11.884 - type: recall_at_10 value: 26.246000000000002 - type: recall_at_100 value: 44.153 - type: recall_at_1000 value: 64.889 - type: recall_at_3 value: 18.136 - type: recall_at_5 value: 21.512 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 71.9232 - type: ap value: 66.56619827391917 - type: f1 value: 71.60536244284128 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 3.037 - type: map_at_10 value: 5.414 - type: map_at_100 value: 6.072 - type: map_at_1000 value: 6.172 - type: map_at_3 value: 4.437 - type: map_at_5 value: 4.939 - type: mrr_at_1 value: 3.123 - type: mrr_at_10 value: 5.572 - type: mrr_at_100 value: 6.235 - type: mrr_at_1000 value: 6.334 - type: mrr_at_3 value: 4.563 - type: mrr_at_5 value: 5.09 - type: ndcg_at_1 value: 3.123 - type: ndcg_at_10 value: 7.027 - type: ndcg_at_100 value: 10.776 - type: ndcg_at_1000 value: 13.904 - type: ndcg_at_3 value: 4.95 - type: ndcg_at_5 value: 5.865 - type: precision_at_1 value: 3.123 - type: precision_at_10 value: 1.252 - type: precision_at_100 value: 0.32299999999999995 - type: precision_at_1000 value: 0.059000000000000004 - type: precision_at_3 value: 2.168 - type: precision_at_5 value: 1.7680000000000002 - type: recall_at_1 value: 3.037 - type: recall_at_10 value: 12.11 - type: recall_at_100 value: 30.714999999999996 - type: recall_at_1000 value: 56.006 - type: recall_at_3 value: 6.3229999999999995 - type: recall_at_5 value: 8.518 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 91.24259005927954 - type: f1 value: 90.7594022786747 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.08344733242134 - type: f1 value: 52.377556461789055 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.99327505043712 - type: f1 value: 66.15141376479805 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.1546738399462 - type: f1 value: 74.83013584700711 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 30.146364191412356 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 26.96347584990607 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 29.520993847103533 - type: mrr value: 30.402007095845374 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 1.72 - type: map_at_10 value: 4.041 - type: map_at_100 value: 5.356000000000001 - type: map_at_1000 value: 6.413 - type: map_at_3 value: 2.9770000000000003 - type: map_at_5 value: 3.3689999999999998 - type: mrr_at_1 value: 21.981 - type: mrr_at_10 value: 30.286 - type: mrr_at_100 value: 31.272 - type: mrr_at_1000 value: 31.347 - type: mrr_at_3 value: 27.193 - type: mrr_at_5 value: 28.694999999999997 - type: ndcg_at_1 value: 19.814 - type: ndcg_at_10 value: 15.732 - type: ndcg_at_100 value: 16.033 - type: ndcg_at_1000 value: 25.865 - type: ndcg_at_3 value: 17.944 - type: ndcg_at_5 value: 16.634 - type: precision_at_1 value: 21.981 - type: precision_at_10 value: 12.786 - type: precision_at_100 value: 4.83 - type: precision_at_1000 value: 1.765 - type: precision_at_3 value: 17.75 - type: precision_at_5 value: 15.232000000000001 - type: recall_at_1 value: 1.72 - type: recall_at_10 value: 7.436 - type: recall_at_100 value: 20.275000000000002 - type: recall_at_1000 value: 54.19500000000001 - type: recall_at_3 value: 3.787 - type: recall_at_5 value: 4.829 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 7.964 - type: map_at_10 value: 14.025000000000002 - type: map_at_100 value: 15.222 - type: map_at_1000 value: 15.32 - type: map_at_3 value: 11.886 - type: map_at_5 value: 13.056999999999999 - type: mrr_at_1 value: 9.183 - type: mrr_at_10 value: 15.651000000000002 - type: mrr_at_100 value: 16.753999999999998 - type: mrr_at_1000 value: 16.833000000000002 - type: mrr_at_3 value: 13.437 - type: mrr_at_5 value: 14.69 - type: ndcg_at_1 value: 9.183 - type: ndcg_at_10 value: 17.96 - type: ndcg_at_100 value: 23.823 - type: ndcg_at_1000 value: 26.461000000000002 - type: ndcg_at_3 value: 13.536999999999999 - type: ndcg_at_5 value: 15.642 - type: precision_at_1 value: 9.183 - type: precision_at_10 value: 3.366 - type: precision_at_100 value: 0.67 - type: precision_at_1000 value: 0.092 - type: precision_at_3 value: 6.547 - type: precision_at_5 value: 5.098 - type: recall_at_1 value: 7.964 - type: recall_at_10 value: 28.599000000000004 - type: recall_at_100 value: 55.381 - type: recall_at_1000 value: 75.63 - type: recall_at_3 value: 16.77 - type: recall_at_5 value: 21.671000000000003 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 59.846999999999994 - type: map_at_10 value: 73.18599999999999 - type: map_at_100 value: 74.055 - type: map_at_1000 value: 74.09 - type: map_at_3 value: 69.95700000000001 - type: map_at_5 value: 71.925 - type: mrr_at_1 value: 69.0 - type: mrr_at_10 value: 77.23299999999999 - type: mrr_at_100 value: 77.52 - type: mrr_at_1000 value: 77.526 - type: mrr_at_3 value: 75.59 - type: mrr_at_5 value: 76.63799999999999 - type: ndcg_at_1 value: 69.02000000000001 - type: ndcg_at_10 value: 78.226 - type: ndcg_at_100 value: 80.60199999999999 - type: ndcg_at_1000 value: 80.971 - type: ndcg_at_3 value: 74.124 - type: ndcg_at_5 value: 76.265 - type: precision_at_1 value: 69.02000000000001 - type: precision_at_10 value: 12.102 - type: precision_at_100 value: 1.468 - type: precision_at_1000 value: 0.155 - type: precision_at_3 value: 32.5 - type: precision_at_5 value: 21.7 - type: recall_at_1 value: 59.846999999999994 - type: recall_at_10 value: 88.485 - type: recall_at_100 value: 97.425 - type: recall_at_1000 value: 99.523 - type: recall_at_3 value: 77.051 - type: recall_at_5 value: 82.762 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 38.67296729610079 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 53.42017351823769 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 0.893 - type: map_at_10 value: 2.804 - type: map_at_100 value: 3.6740000000000004 - type: map_at_1000 value: 3.94 - type: map_at_3 value: 1.926 - type: map_at_5 value: 2.363 - type: mrr_at_1 value: 4.3 - type: mrr_at_10 value: 9.520000000000001 - type: mrr_at_100 value: 10.692 - type: mrr_at_1000 value: 10.841000000000001 - type: mrr_at_3 value: 7.6 - type: mrr_at_5 value: 8.63 - type: ndcg_at_1 value: 4.3 - type: ndcg_at_10 value: 5.531 - type: ndcg_at_100 value: 10.512 - type: ndcg_at_1000 value: 16.683 - type: ndcg_at_3 value: 4.632 - type: ndcg_at_5 value: 4.3229999999999995 - type: precision_at_1 value: 4.3 - type: precision_at_10 value: 3.16 - type: precision_at_100 value: 1.065 - type: precision_at_1000 value: 0.256 - type: precision_at_3 value: 4.667000000000001 - type: precision_at_5 value: 4.1000000000000005 - type: recall_at_1 value: 0.893 - type: recall_at_10 value: 6.428000000000001 - type: recall_at_100 value: 21.662 - type: recall_at_1000 value: 52.162 - type: recall_at_3 value: 2.868 - type: recall_at_5 value: 4.188 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_spearman value: 69.34396953516386 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_spearman value: 60.094374065360746 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_spearman value: 72.51503781013379 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_spearman value: 66.6954698644186 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_spearman value: 77.69462578028768 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_spearman value: 75.9397626457859 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 81.67242768943406 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_spearman value: 63.7027324700292 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_spearman value: 73.36074244064153 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 67.75984402370518 - type: mrr value: 86.9951798383171 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 24.583 - type: map_at_10 value: 33.125 - type: map_at_100 value: 34.14 - type: map_at_1000 value: 34.22 - type: map_at_3 value: 29.616 - type: map_at_5 value: 31.896 - type: mrr_at_1 value: 26.333000000000002 - type: mrr_at_10 value: 34.437 - type: mrr_at_100 value: 35.363 - type: mrr_at_1000 value: 35.433 - type: mrr_at_3 value: 31.333 - type: mrr_at_5 value: 33.267 - type: ndcg_at_1 value: 26.333000000000002 - type: ndcg_at_10 value: 38.311 - type: ndcg_at_100 value: 43.923 - type: ndcg_at_1000 value: 45.923 - type: ndcg_at_3 value: 31.596000000000004 - type: ndcg_at_5 value: 35.448 - type: precision_at_1 value: 26.333000000000002 - type: precision_at_10 value: 5.933 - type: precision_at_100 value: 0.91 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 13.0 - type: precision_at_5 value: 9.933 - type: recall_at_1 value: 24.583 - type: recall_at_10 value: 53.417 - type: recall_at_100 value: 80.989 - type: recall_at_1000 value: 96.322 - type: recall_at_3 value: 35.611 - type: recall_at_5 value: 44.833 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.48514851485149 - type: cos_sim_ap value: 77.36426466374054 - type: cos_sim_f1 value: 72.0702116675271 - type: cos_sim_precision value: 74.49306296691569 - type: cos_sim_recall value: 69.8 - type: dot_accuracy value: 99.15049504950495 - type: dot_ap value: 46.792474140260715 - type: dot_f1 value: 48.76476906552094 - type: dot_precision value: 52.66821345707656 - type: dot_recall value: 45.4 - type: euclidean_accuracy value: 99.46534653465346 - type: euclidean_ap value: 74.1978837990589 - type: euclidean_f1 value: 69.47256259989345 - type: euclidean_precision value: 74.34435575826683 - type: euclidean_recall value: 65.2 - type: manhattan_accuracy value: 99.47128712871287 - type: manhattan_ap value: 75.31910551743364 - type: manhattan_f1 value: 70.1582105837425 - type: manhattan_precision value: 77.19087635054022 - type: manhattan_recall value: 64.3 - type: max_accuracy value: 99.48514851485149 - type: max_ap value: 77.36426466374054 - type: max_f1 value: 72.0702116675271 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 59.353792480720436 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 31.474896484744836 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 40.82378653430986 - type: mrr value: 41.13905600118835 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.08154836998798 - type: cos_sim_spearman value: 31.232033308845907 - type: dot_pearson value: 23.767593496465828 - type: dot_spearman value: 25.6201612766572 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.186 - type: map_at_10 value: 1.1809999999999998 - type: map_at_100 value: 5.21 - type: map_at_1000 value: 12.447999999999999 - type: map_at_3 value: 0.44200000000000006 - type: map_at_5 value: 0.673 - type: mrr_at_1 value: 72.0 - type: mrr_at_10 value: 80.01899999999999 - type: mrr_at_100 value: 80.42099999999999 - type: mrr_at_1000 value: 80.42099999999999 - type: mrr_at_3 value: 78.0 - type: mrr_at_5 value: 79.4 - type: ndcg_at_1 value: 66.0 - type: ndcg_at_10 value: 56.041 - type: ndcg_at_100 value: 37.987 - type: ndcg_at_1000 value: 34.198 - type: ndcg_at_3 value: 60.23500000000001 - type: ndcg_at_5 value: 58.025999999999996 - type: precision_at_1 value: 72.0 - type: precision_at_10 value: 60.4 - type: precision_at_100 value: 38.940000000000005 - type: precision_at_1000 value: 16.106 - type: precision_at_3 value: 63.333 - type: precision_at_5 value: 61.6 - type: recall_at_1 value: 0.186 - type: recall_at_10 value: 1.458 - type: recall_at_100 value: 8.455 - type: recall_at_1000 value: 33.141999999999996 - type: recall_at_3 value: 0.461 - type: recall_at_5 value: 0.756 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.2849999999999997 - type: map_at_10 value: 6.909 - type: map_at_100 value: 11.231 - type: map_at_1000 value: 12.472 - type: map_at_3 value: 3.53 - type: map_at_5 value: 4.675 - type: mrr_at_1 value: 26.531 - type: mrr_at_10 value: 40.73 - type: mrr_at_100 value: 41.637 - type: mrr_at_1000 value: 41.647 - type: mrr_at_3 value: 34.354 - type: mrr_at_5 value: 38.741 - type: ndcg_at_1 value: 24.490000000000002 - type: ndcg_at_10 value: 19.17 - type: ndcg_at_100 value: 29.946 - type: ndcg_at_1000 value: 40.842 - type: ndcg_at_3 value: 19.088 - type: ndcg_at_5 value: 19.445999999999998 - type: precision_at_1 value: 26.531 - type: precision_at_10 value: 17.959 - type: precision_at_100 value: 6.468999999999999 - type: precision_at_1000 value: 1.351 - type: precision_at_3 value: 19.048000000000002 - type: precision_at_5 value: 19.592000000000002 - type: recall_at_1 value: 2.2849999999999997 - type: recall_at_10 value: 12.973 - type: recall_at_100 value: 40.239999999999995 - type: recall_at_1000 value: 73.247 - type: recall_at_3 value: 4.407 - type: recall_at_5 value: 6.908 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 68.405 - type: ap value: 13.9913678628558 - type: f1 value: 53.209691917560285 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 56.080928126768534 - type: f1 value: 56.36329965117965 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 31.540976715818065 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 82.90516778923526 - type: cos_sim_ap value: 61.5394989621502 - type: cos_sim_f1 value: 58.02297689685646 - type: cos_sim_precision value: 55.62817719680465 - type: cos_sim_recall value: 60.633245382585756 - type: dot_accuracy value: 78.95928950348691 - type: dot_ap value: 48.61088896690895 - type: dot_f1 value: 51.0104674059488 - type: dot_precision value: 42.00375490698071 - type: dot_recall value: 64.93403693931398 - type: euclidean_accuracy value: 82.476008821601 - type: euclidean_ap value: 59.59406971314053 - type: euclidean_f1 value: 56.424962447084525 - type: euclidean_precision value: 58.47721483158789 - type: euclidean_recall value: 54.51187335092348 - type: manhattan_accuracy value: 82.66078559933241 - type: manhattan_ap value: 60.414321716856925 - type: manhattan_f1 value: 56.88221089348002 - type: manhattan_precision value: 57.86026200873362 - type: manhattan_recall value: 55.93667546174142 - type: max_accuracy value: 82.90516778923526 - type: max_ap value: 61.5394989621502 - type: max_f1 value: 58.02297689685646 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 85.71622618077386 - type: cos_sim_ap value: 77.72774861009667 - type: cos_sim_f1 value: 71.40275165062152 - type: cos_sim_precision value: 68.53359767754726 - type: cos_sim_recall value: 74.52263627964275 - type: dot_accuracy value: 83.97174680793262 - type: dot_ap value: 72.89480417427734 - type: dot_f1 value: 68.57803792366198 - type: dot_precision value: 62.94151708164447 - type: dot_recall value: 75.32337542346782 - type: euclidean_accuracy value: 84.88570652384834 - type: euclidean_ap value: 75.78371710915128 - type: euclidean_f1 value: 69.44268877569989 - type: euclidean_precision value: 67.1435761018046 - type: euclidean_recall value: 71.90483523252233 - type: manhattan_accuracy value: 85.6114409904141 - type: manhattan_ap value: 77.38579436755944 - type: manhattan_f1 value: 70.8608538430316 - type: manhattan_precision value: 68.03656203500319 - type: manhattan_recall value: 73.92978133661842 - type: max_accuracy value: 85.71622618077386 - type: max_ap value: 77.72774861009667 - type: max_f1 value: 71.40275165062152 --- # LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders > LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance. - **Repository:** https://github.com/McGill-NLP/llm2vec - **Paper:** https://arxiv.org/abs/2404.05961 ## Installation ```bash pip install llm2vec ``` ## Usage ```python from llm2vec import LLM2Vec import torch from transformers import AutoTokenizer, AutoModel, AutoConfig from peft import PeftModel # Loading base Mistral model, along with custom code that enables bidirectional connections in decoder-only LLMs. MNTP LoRA weights are merged into the base model. tokenizer = AutoTokenizer.from_pretrained( "McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp" ) config = AutoConfig.from_pretrained( "McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp", trust_remote_code=True ) model = AutoModel.from_pretrained( "McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp", trust_remote_code=True, config=config, torch_dtype=torch.bfloat16, device_map="cuda" if torch.cuda.is_available() else "cpu", ) model = PeftModel.from_pretrained( model, "McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp", ) model = model.merge_and_unload() # This can take several minutes on cpu # Loading unsupervised SimCSE model. This loads the trained LoRA weights on top of MNTP model. Hence the final weights are -- Base model + MNTP (LoRA) + SimCSE (LoRA). model = PeftModel.from_pretrained( model, "McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp-unsup-simcse" ) # Wrapper for encoding and pooling operations l2v = LLM2Vec(model, tokenizer, pooling_mode="mean", max_length=512) # Encoding queries using instructions instruction = ( "Given a web search query, retrieve relevant passages that answer the query:" ) queries = [ [instruction, "how much protein should a female eat"], [instruction, "summit define"], ] q_reps = l2v.encode(queries) # Encoding documents. Instruction are not required for documents documents = [ "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.", ] d_reps = l2v.encode(documents) # Compute cosine similarity q_reps_norm = torch.nn.functional.normalize(q_reps, p=2, dim=1) d_reps_norm = torch.nn.functional.normalize(d_reps, p=2, dim=1) cos_sim = torch.mm(q_reps_norm, d_reps_norm.transpose(0, 1)) print(cos_sim) """ tensor([[0.5964, 0.1270], [0.0698, 0.2394]]) """ ``` ## Questions If you have any question about the code, feel free to email Parishad (`[email protected]`) and Vaibhav (`[email protected]`).
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-512-final
avsolatorio
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1943715", "loss:MultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L6-v2", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,732
1,732
11
0
--- base_model: sentence-transformers/all-MiniLM-L6-v2 language: - en library_name: sentence-transformers license: apache-2.0 metrics: - cosine_accuracy pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:1943715 - loss:MultipleNegativesRankingLoss widget: - source_sentence: percentage of irrigated land in india is about sentences: - Irrigation in India Irrigation in India Irrigation in India includes a network of major and minor canals from Indian rivers, groundwater well based systems, tanks, and other rainwater harvesting projects for agricultural activities. Of these groundwater system is the largest. In 2013-14, only about 47.7% of total agricultural land in India was reliably irrigated. The largest canal in India is Indira Gandhi Canal, which is about 650 km long. About 2/3rd cultivated land in India is dependent on monsoons. Irrigation in India helps improve food security, reduce dependence on monsoons, improve agricultural productivity and create rural job opportunities. Dams used for irrigation projects - Waiting for a Girl Like You Waiting for a Girl Like You "Waiting for a Girl Like You" is a 1981 power ballad by the British-American rock band Foreigner. The distinctive synthesizer theme was performed by the then-little-known Thomas Dolby, and this song also marked a major departure from their earlier singles because their previous singles were mid to upper tempo rock songs while this song was a softer love song with the energy of a power ballad. It was the second single released from the album "4" (1981) and was co-written by Lou Gramm and Mick Jones. It has become one of the band's most - Agriculture in India 2010, only about 35% of agricultural land in India was reliably irrigated. About 2/3rd cultivated land in India is dependent on monsoons. The improvements in irrigation infrastructure in the last 50 years have helped India improve food security, reduce dependence on monsoons, improve agricultural productivity and create rural job opportunities. Dams used for irrigation projects have helped provide drinking water to a growing rural population, control flood and prevent drought-related damage to agriculture. , India had a large and diverse agricultural sector, accounting, on average, for about 16% of GDP and 10% of export earnings. India's arable land area of - source_sentence: 'Use of multiple antimicrobial drugs by clinical patients: a prognostic index of hospital mortality?' sentences: - Recent reports have suggested that extramedullary (EM) relapse of acute myeloid leukemia (AML) post-hematopoietic stem cell transplantation (HSCT), unlike isolated bone marrow (BM) relapse, is associated with improved prognosis. We reviewed the outcomes of relapsed AML post-HSCT at our institution to determine whether survival for patients with EM relapse was truly improved in comparison to patients suffering BM relapses treated in a similar (active) way.Outcomes of all 274 allogeneic HSCT performed for adult AML between 2000 and 2010 at our institution were retrospectively reviewed.As of January 2011, 72 relapses post-HSCT had occurred, including 64 BM relapses (89%), two concomitant BM and EM relapses (3%), and six EM relapses alone (8%). EM relapses occurred significantly later post-HSCT than BM relapses (median 25.2 vs 3.9 months, respectively; P = 0.001). Patients suffering an EM relapse were significantly more likely to receive active therapy at relapse (7/8; 88%) than those suffering a BM relapse alone (28/64; 44%; P = 0.026). When survival analysis was restricted to outcomes of patients treated actively (i.e., with curative intent), no difference in outcome between EM and BM relapses was observed (median survival 13.5 vs 8 months for EM vs BM relapses, respectively, P = 0.44). - 'Laparoscopic box model trainers have been used in training curricula for a long time, however data on their impact on skills acquisition is still limited. Our aim was to validate a low cost box model trainer as a tool for the training of skills relevant to laparoscopic surgery.Randomised, controlled trial (Canadian Task Force Classification I).University Hospital.Sixteen gynaecologic residents with limited laparoscopic experience were randomised to a group that received a structured box model training curriculum, and a control group. Performance before and after the training was assessed in a virtual reality laparoscopic trainer (LapSim and was based on objective parameters, registered by the computer system (time, error, and economy of motion scores). Group A showed significantly greater improvement in all performance parameters compared with the control group: economy of movement (p=0.001), time (p=0.001) and tissue damage (p=0.036), confirming the positive impact of box-trainer curriculum on laparoscopic skills acquisition.' - To quantify the use of multiple and prolonged antibiotics and anti-infective drug therapy in clinical patients in a 144-bed hospital.Adult patients (2,790 patients with 3,706 admissions over a period of 19 months) were investigated prospectively regarding treatment with anti-infective agents. The mean age was 57.4 (range, 18.8-97 years), and 54.3% were females (2012).Hospital stay was 5.5 (6.7 days (range, 2-226 days), with duration up to 10 days for 91.9% of the subjects. Antibiotics or other agents were administered to 1,166 subjects (31.5%), 325 (8.8%) required assistance in the ICU, and a total of 141 (3.8%) died. The association between anti-infective drug therapy and hospital mortality was statistically significant (P<.01) with a strong linear correlation (r = 0.902, P = .014). The quantity of prescribed antimicrobial drugs, age, and need for ICU assistance were independent variables for death by logistic regression analysis. The odds ratio for anti-infective drug therapy was 1.341 (1.043 to 1.725); for age, 1.042 ( 1.026 to 1.058); and for stay in the ICU, 11.226 ( 6.648 to 18.957). - source_sentence: who is notre dame de paris dedicated to sentences: - Musée de Notre Dame de Paris paintings; and historical documents including a petition to restore the cathedral signed by, among others, Victor Hugo and Jean Auguste Dominique Ingres. The museum closed in November 2008. [and opened again in 2013] Musée de Notre Dame de Paris The Musée de Notre Dame de Paris was a small museum dedicated to the cathedral of Notre Dame de Paris and its archaeology. It stands at 10 Rue du Cloître Notre Dame, Paris, France, and was open to the public several afternoons per week; an admission fee was charged. The museum was established in 1951 to present the cathedral's history, as - 'Smoking serves different functions for men and women. Thus, we wanted to investigate the association between smoking behaviour and intakes of selected healthy foods in men and women with special focus on differences and similarities between the two genders.In 1993-1997, a random sample of 80 996 men and 79 729 women aged 50-64 y was invited to participate in the study ''Diet, Cancer and Health''. In all, 27 179 men and 29 876 women attended a health examination and completed a 192-item food-frequency questionnaire (FFQ). The association between smoking status and low, median and high intakes of selected foods was examined among 25 821 men and 28 596 women.The greater Copenhagen and Aarhus area, Denmark.For both men and women, smoking status group was associated with diet, such that increasing level of smoking status ranging from never smokers over ex-smokers to currently heavy smokers was associated with a lower intake of the healthy foods: fresh fruit, cooked vegetables, raw vegetables/salad, and olive oil. For wine, increasing level of smoking status category was associated with a higher fraction of abstainers and heavy drinkers. The difference between the extreme smoking status categories was larger than the difference between men and women within smoking status categories such that never smoking men in general had a higher intake of healthy foods than heavy smoking women. Correction for age, educational level, and body mass index (BMI) did not affect the results.' - Notre-Dame de Paris rededicated to the Cult of Reason, and then to the Cult of the Supreme Being. During this time, many of the treasures of the cathedral were either destroyed or plundered. The twenty-eight statues of biblical kings located at the west facade, mistaken for statues of French kings, were beheaded. Many of the heads were found during a 1977 excavation nearby, and are on display at the Musée de Cluny. For a time the Goddess of Liberty replaced the Virgin Mary on several altars. The cathedral's great bells escaped being melted down. All of the other large statues on the facade, - source_sentence: who sang schoolhouse rock i 'm just a bill sentences: - Grand Hotel (Mackinac Island) In 1886, the Michigan Central Railroad, Grand Rapids and Indiana Railroad, and Detroit and Cleveland Steamship Navigation Company formed the Mackinac Island Hotel Company. The group purchased the land on which the hotel was built and construction began, based upon the design by Detroit architects Mason and Rice. When it opened the following year, the hotel was advertised to Chicago, Erie, Montreal and Detroit residents as a summer retreat for vacationers who arrived by lake steamer and by rail from across the continent. The hotel opened on July 10, 1887 and took a mere 93 days to complete. At its - Jack Sheldon He was Griffin's sidekick for many years. His voice is perhaps best known from the "Schoolhouse Rock!" cartoons of the 1970s, such as "Conjunction Junction" and "I'm Just a Bill." He appeared in one episode of "Johnny Bravo" as the Sensitive Man. He sang a few songs in the episode similar to the "Schoolhouse Rock!" style. Sheldon returned to the "Schoolhouse Rock!" series for a 2002 episode titled "I'm Gonna Send Your Vote to College," explaining the electoral college process, and distributed on the series' DVD collection that same year. Sheldon sang and played trumpet for the new segment. Sheldon - I'm Just a Bill I'm Just a Bill "I'm Just a Bill" is a 1976 "Schoolhouse Rock!" segment, featuring a song of the same title written by Dave Frishberg. The segment debuted as part of "America Rock", the third season of the Schoolhouse Rock series. The song featured in the segment is sung by Jack Sheldon (the voice of the Bill), with dialogue by Sheldon's son John as the boy learning the process. It is about how a bill becomes a law, how it must go through Congress, and how it can be vetoed, etc. The Bill is for the law that school buses - source_sentence: who does the chief risk officer report to sentences: - Chief risk officer a company's executive chief officer and chief financial officer to clarify the precision of its financial reports. Moreover, to ensure the mentioned accuracy of financial reports, internal controls are required. Accordingly, each financial report required an internal control report to prevent fraud. Furthermore, the CRO has to be aware of everything occurring in his company on a daily basis, but he must also be current on all of the requirements from the SEC. In addition, the CRO restrains corporate risk by managing compliance. Why is a CRO so important in financial institutions? There is a report of having a CRO - Chief risk officer Chief risk officer The chief risk officer (CRO) or chief risk management officer (CRMO) of a firm or corporation is the executive accountable for enabling the efficient and effective governance of significant risks, and related opportunities, to a business and its various segments. Risks are commonly categorized as strategic, reputational, operational, financial, or compliance-related. CROs are accountable to the Executive Committee and The Board for enabling the business to balance risk and reward. In more complex organizations, they are generally responsible for coordinating the organization's Enterprise Risk Management (ERM) approach. The CRO is responsible for assessing and mitigating significant competitive, - Foundations of Constraint Satisfaction model-index: - name: all-MiniLM-L6-v2 trained on MEDI-MTEB triplets results: - task: type: triplet name: Triplet dataset: name: medi mteb dev type: medi-mteb-dev metrics: - type: cosine_accuracy value: 0.9156494608352947 name: Cosine Accuracy --- # all-MiniLM-L6-v2 trained on MEDI-MTEB triplets This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the NQ, pubmed, specter_train_triples, S2ORC_citations_abstracts, fever, gooaq_pairs, codesearchnet, wikihow, WikiAnswers, eli5_question_answer, amazon-qa, medmcqa, zeroshot, TriviaQA_pairs, PAQ_pairs, stackexchange_duplicate_questions_title-body_title-body, trex, flickr30k_captions, hotpotqa, task671_ambigqa_text_generation, task061_ropes_answer_generation, task285_imdb_answer_generation, task905_hate_speech_offensive_classification, task566_circa_classification, task184_snli_entailment_to_neutral_text_modification, task280_stereoset_classification_stereotype_type, task1599_smcalflow_classification, task1384_deal_or_no_dialog_classification, task591_sciq_answer_generation, task823_peixian-rtgender_sentiment_analysis, task023_cosmosqa_question_generation, task900_freebase_qa_category_classification, task924_event2mind_word_generation, task152_tomqa_find_location_easy_noise, task1368_healthfact_sentence_generation, task1661_super_glue_classification, task1187_politifact_classification, task1728_web_nlg_data_to_text, task112_asset_simple_sentence_identification, task1340_msr_text_compression_compression, task072_abductivenli_answer_generation, task1504_hatexplain_answer_generation, task684_online_privacy_policy_text_information_type_generation, task1290_xsum_summarization, task075_squad1.1_answer_generation, task1587_scifact_classification, task384_socialiqa_question_classification, task1555_scitail_answer_generation, task1532_daily_dialog_emotion_classification, task239_tweetqa_answer_generation, task596_mocha_question_generation, task1411_dart_subject_identification, task1359_numer_sense_answer_generation, task329_gap_classification, task220_rocstories_title_classification, task316_crows-pairs_classification_stereotype, task495_semeval_headline_classification, task1168_brown_coarse_pos_tagging, task348_squad2.0_unanswerable_question_generation, task049_multirc_questions_needed_to_answer, task1534_daily_dialog_question_classification, task322_jigsaw_classification_threat, task295_semeval_2020_task4_commonsense_reasoning, task186_snli_contradiction_to_entailment_text_modification, task034_winogrande_question_modification_object, task160_replace_letter_in_a_sentence, task469_mrqa_answer_generation, task105_story_cloze-rocstories_sentence_generation, task649_race_blank_question_generation, task1536_daily_dialog_happiness_classification, task683_online_privacy_policy_text_purpose_answer_generation, task024_cosmosqa_answer_generation, task584_udeps_eng_fine_pos_tagging, task066_timetravel_binary_consistency_classification, task413_mickey_en_sentence_perturbation_generation, task182_duorc_question_generation, task028_drop_answer_generation, task1601_webquestions_answer_generation, task1295_adversarial_qa_question_answering, task201_mnli_neutral_classification, task038_qasc_combined_fact, task293_storycommonsense_emotion_text_generation, task572_recipe_nlg_text_generation, task517_emo_classify_emotion_of_dialogue, task382_hybridqa_answer_generation, task176_break_decompose_questions, task1291_multi_news_summarization, task155_count_nouns_verbs, task031_winogrande_question_generation_object, task279_stereoset_classification_stereotype, task1336_peixian_equity_evaluation_corpus_gender_classifier, task508_scruples_dilemmas_more_ethical_isidentifiable, task518_emo_different_dialogue_emotions, task077_splash_explanation_to_sql, task923_event2mind_classifier, task470_mrqa_question_generation, task638_multi_woz_classification, task1412_web_questions_question_answering, task847_pubmedqa_question_generation, task678_ollie_actual_relationship_answer_generation, task290_tellmewhy_question_answerability, task575_air_dialogue_classification, task189_snli_neutral_to_contradiction_text_modification, task026_drop_question_generation, task162_count_words_starting_with_letter, task079_conala_concat_strings, task610_conllpp_ner, task046_miscellaneous_question_typing, task197_mnli_domain_answer_generation, task1325_qa_zre_question_generation_on_subject_relation, task430_senteval_subject_count, task672_nummersense, task402_grailqa_paraphrase_generation, task904_hate_speech_offensive_classification, task192_hotpotqa_sentence_generation, task069_abductivenli_classification, task574_air_dialogue_sentence_generation, task187_snli_entailment_to_contradiction_text_modification, task749_glucose_reverse_cause_emotion_detection, task1552_scitail_question_generation, task750_aqua_multiple_choice_answering, task327_jigsaw_classification_toxic, task1502_hatexplain_classification, task328_jigsaw_classification_insult, task304_numeric_fused_head_resolution, task1293_kilt_tasks_hotpotqa_question_answering, task216_rocstories_correct_answer_generation, task1326_qa_zre_question_generation_from_answer, task1338_peixian_equity_evaluation_corpus_sentiment_classifier, task1729_personachat_generate_next, task1202_atomic_classification_xneed, task400_paws_paraphrase_classification, task502_scruples_anecdotes_whoiswrong_verification, task088_identify_typo_verification, task221_rocstories_two_choice_classification, task200_mnli_entailment_classification, task074_squad1.1_question_generation, task581_socialiqa_question_generation, task1186_nne_hrngo_classification, task898_freebase_qa_answer_generation, task1408_dart_similarity_classification, task168_strategyqa_question_decomposition, task1357_xlsum_summary_generation, task390_torque_text_span_selection, task165_mcscript_question_answering_commonsense, task1533_daily_dialog_formal_classification, task002_quoref_answer_generation, task1297_qasc_question_answering, task305_jeopardy_answer_generation_normal, task029_winogrande_full_object, task1327_qa_zre_answer_generation_from_question, task326_jigsaw_classification_obscene, task1542_every_ith_element_from_starting, task570_recipe_nlg_ner_generation, task1409_dart_text_generation, task401_numeric_fused_head_reference, task846_pubmedqa_classification, task1712_poki_classification, task344_hybridqa_answer_generation, task875_emotion_classification, task1214_atomic_classification_xwant, task106_scruples_ethical_judgment, task238_iirc_answer_from_passage_answer_generation, task1391_winogrande_easy_answer_generation, task195_sentiment140_classification, task163_count_words_ending_with_letter, task579_socialiqa_classification, task569_recipe_nlg_text_generation, task1602_webquestion_question_genreation, task747_glucose_cause_emotion_detection, task219_rocstories_title_answer_generation, task178_quartz_question_answering, task103_facts2story_long_text_generation, task301_record_question_generation, task1369_healthfact_sentence_generation, task515_senteval_odd_word_out, task496_semeval_answer_generation, task1658_billsum_summarization, task1204_atomic_classification_hinderedby, task1392_superglue_multirc_answer_verification, task306_jeopardy_answer_generation_double, task1286_openbookqa_question_answering, task159_check_frequency_of_words_in_sentence_pair, task151_tomqa_find_location_easy_clean, task323_jigsaw_classification_sexually_explicit, task037_qasc_generate_related_fact, task027_drop_answer_type_generation, task1596_event2mind_text_generation_2, task141_odd-man-out_classification_category, task194_duorc_answer_generation, task679_hope_edi_english_text_classification, task246_dream_question_generation, task1195_disflqa_disfluent_to_fluent_conversion, task065_timetravel_consistent_sentence_classification, task351_winomt_classification_gender_identifiability_anti, task580_socialiqa_answer_generation, task583_udeps_eng_coarse_pos_tagging, task202_mnli_contradiction_classification, task222_rocstories_two_chioce_slotting_classification, task498_scruples_anecdotes_whoiswrong_classification, task067_abductivenli_answer_generation, task616_cola_classification, task286_olid_offense_judgment, task188_snli_neutral_to_entailment_text_modification, task223_quartz_explanation_generation, task820_protoqa_answer_generation, task196_sentiment140_answer_generation, task1678_mathqa_answer_selection, task349_squad2.0_answerable_unanswerable_question_classification, task154_tomqa_find_location_hard_noise, task333_hateeval_classification_hate_en, task235_iirc_question_from_subtext_answer_generation, task1554_scitail_classification, task210_logic2text_structured_text_generation, task035_winogrande_question_modification_person, task230_iirc_passage_classification, task1356_xlsum_title_generation, task1726_mathqa_correct_answer_generation, task302_record_classification, task380_boolq_yes_no_question, task212_logic2text_classification, task748_glucose_reverse_cause_event_detection, task834_mathdataset_classification, task350_winomt_classification_gender_identifiability_pro, task191_hotpotqa_question_generation, task236_iirc_question_from_passage_answer_generation, task217_rocstories_ordering_answer_generation, task568_circa_question_generation, task614_glucose_cause_event_detection, task361_spolin_yesand_prompt_response_classification, task421_persent_sentence_sentiment_classification, task203_mnli_sentence_generation, task420_persent_document_sentiment_classification, task153_tomqa_find_location_hard_clean, task346_hybridqa_classification, task1211_atomic_classification_hassubevent, task360_spolin_yesand_response_generation, task510_reddit_tifu_title_summarization, task511_reddit_tifu_long_text_summarization, task345_hybridqa_answer_generation, task270_csrg_counterfactual_context_generation, task307_jeopardy_answer_generation_final, task001_quoref_question_generation, task089_swap_words_verification, task1196_atomic_classification_oeffect, task080_piqa_answer_generation, task1598_nyc_long_text_generation, task240_tweetqa_question_generation, task615_moviesqa_answer_generation, task1347_glue_sts-b_similarity_classification, task114_is_the_given_word_longest, task292_storycommonsense_character_text_generation, task115_help_advice_classification, task431_senteval_object_count, task1360_numer_sense_multiple_choice_qa_generation, task177_para-nmt_paraphrasing, task132_dais_text_modification, task269_csrg_counterfactual_story_generation, task233_iirc_link_exists_classification, task161_count_words_containing_letter, task1205_atomic_classification_isafter, task571_recipe_nlg_ner_generation, task1292_yelp_review_full_text_categorization, task428_senteval_inversion, task311_race_question_generation, task429_senteval_tense, task403_creak_commonsense_inference, task929_products_reviews_classification, task582_naturalquestion_answer_generation, task237_iirc_answer_from_subtext_answer_generation, task050_multirc_answerability, task184_break_generate_question, task669_ambigqa_answer_generation, task169_strategyqa_sentence_generation, task500_scruples_anecdotes_title_generation, task241_tweetqa_classification, task1345_glue_qqp_question_paraprashing, task218_rocstories_swap_order_answer_generation, task613_politifact_text_generation, task1167_penn_treebank_coarse_pos_tagging, task1422_mathqa_physics, task247_dream_answer_generation, task199_mnli_classification, task164_mcscript_question_answering_text, task1541_agnews_classification, task516_senteval_conjoints_inversion, task294_storycommonsense_motiv_text_generation, task501_scruples_anecdotes_post_type_verification, task213_rocstories_correct_ending_classification, task821_protoqa_question_generation, task493_review_polarity_classification, task308_jeopardy_answer_generation_all, task1595_event2mind_text_generation_1, task040_qasc_question_generation, task231_iirc_link_classification, task1727_wiqa_what_is_the_effect, task578_curiosity_dialogs_answer_generation, task310_race_classification, task309_race_answer_generation, task379_agnews_topic_classification, task030_winogrande_full_person, task1540_parsed_pdfs_summarization, task039_qasc_find_overlapping_words, task1206_atomic_classification_isbefore, task157_count_vowels_and_consonants, task339_record_answer_generation, task453_swag_answer_generation, task848_pubmedqa_classification, task673_google_wellformed_query_classification, task676_ollie_relationship_answer_generation, task268_casehold_legal_answer_generation, task844_financial_phrasebank_classification, task330_gap_answer_generation, task595_mocha_answer_generation, task1285_kpa_keypoint_matching, task234_iirc_passage_line_answer_generation, task494_review_polarity_answer_generation, task670_ambigqa_question_generation, task289_gigaword_summarization, npr, nli, SimpleWiki, amazon_review_2018, ccnews_title_text, agnews, xsum, msmarco, yahoo_answers_title_answer, squad_pairs, wow, mteb-amazon_counterfactual-avs_triplets, mteb-amazon_massive_intent-avs_triplets, mteb-amazon_massive_scenario-avs_triplets, mteb-amazon_reviews_multi-avs_triplets, mteb-banking77-avs_triplets, mteb-emotion-avs_triplets, mteb-imdb-avs_triplets, mteb-mtop_domain-avs_triplets, mteb-mtop_intent-avs_triplets, mteb-toxic_conversations_50k-avs_triplets, mteb-tweet_sentiment_extraction-avs_triplets and covid-bing-query-gpt4-avs_triplets datasets. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9 --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Datasets:** - NQ - pubmed - specter_train_triples - S2ORC_citations_abstracts - fever - gooaq_pairs - codesearchnet - wikihow - WikiAnswers - eli5_question_answer - amazon-qa - medmcqa - zeroshot - TriviaQA_pairs - PAQ_pairs - stackexchange_duplicate_questions_title-body_title-body - trex - flickr30k_captions - hotpotqa - task671_ambigqa_text_generation - task061_ropes_answer_generation - task285_imdb_answer_generation - task905_hate_speech_offensive_classification - task566_circa_classification - task184_snli_entailment_to_neutral_text_modification - task280_stereoset_classification_stereotype_type - task1599_smcalflow_classification - task1384_deal_or_no_dialog_classification - task591_sciq_answer_generation - task823_peixian-rtgender_sentiment_analysis - task023_cosmosqa_question_generation - task900_freebase_qa_category_classification - task924_event2mind_word_generation - task152_tomqa_find_location_easy_noise - task1368_healthfact_sentence_generation - task1661_super_glue_classification - task1187_politifact_classification - task1728_web_nlg_data_to_text - task112_asset_simple_sentence_identification - task1340_msr_text_compression_compression - task072_abductivenli_answer_generation - task1504_hatexplain_answer_generation - task684_online_privacy_policy_text_information_type_generation - task1290_xsum_summarization - task075_squad1.1_answer_generation - task1587_scifact_classification - task384_socialiqa_question_classification - task1555_scitail_answer_generation - task1532_daily_dialog_emotion_classification - task239_tweetqa_answer_generation - task596_mocha_question_generation - task1411_dart_subject_identification - task1359_numer_sense_answer_generation - task329_gap_classification - task220_rocstories_title_classification - task316_crows-pairs_classification_stereotype - task495_semeval_headline_classification - task1168_brown_coarse_pos_tagging - task348_squad2.0_unanswerable_question_generation - task049_multirc_questions_needed_to_answer - task1534_daily_dialog_question_classification - task322_jigsaw_classification_threat - task295_semeval_2020_task4_commonsense_reasoning - task186_snli_contradiction_to_entailment_text_modification - task034_winogrande_question_modification_object - task160_replace_letter_in_a_sentence - task469_mrqa_answer_generation - task105_story_cloze-rocstories_sentence_generation - task649_race_blank_question_generation - task1536_daily_dialog_happiness_classification - task683_online_privacy_policy_text_purpose_answer_generation - task024_cosmosqa_answer_generation - task584_udeps_eng_fine_pos_tagging - task066_timetravel_binary_consistency_classification - task413_mickey_en_sentence_perturbation_generation - task182_duorc_question_generation - task028_drop_answer_generation - task1601_webquestions_answer_generation - task1295_adversarial_qa_question_answering - task201_mnli_neutral_classification - task038_qasc_combined_fact - task293_storycommonsense_emotion_text_generation - task572_recipe_nlg_text_generation - task517_emo_classify_emotion_of_dialogue - task382_hybridqa_answer_generation - task176_break_decompose_questions - task1291_multi_news_summarization - task155_count_nouns_verbs - task031_winogrande_question_generation_object - task279_stereoset_classification_stereotype - task1336_peixian_equity_evaluation_corpus_gender_classifier - task508_scruples_dilemmas_more_ethical_isidentifiable - task518_emo_different_dialogue_emotions - task077_splash_explanation_to_sql - task923_event2mind_classifier - task470_mrqa_question_generation - task638_multi_woz_classification - task1412_web_questions_question_answering - task847_pubmedqa_question_generation - task678_ollie_actual_relationship_answer_generation - task290_tellmewhy_question_answerability - task575_air_dialogue_classification - task189_snli_neutral_to_contradiction_text_modification - task026_drop_question_generation - task162_count_words_starting_with_letter - task079_conala_concat_strings - task610_conllpp_ner - task046_miscellaneous_question_typing - task197_mnli_domain_answer_generation - task1325_qa_zre_question_generation_on_subject_relation - task430_senteval_subject_count - task672_nummersense - task402_grailqa_paraphrase_generation - task904_hate_speech_offensive_classification - task192_hotpotqa_sentence_generation - task069_abductivenli_classification - task574_air_dialogue_sentence_generation - task187_snli_entailment_to_contradiction_text_modification - task749_glucose_reverse_cause_emotion_detection - task1552_scitail_question_generation - task750_aqua_multiple_choice_answering - task327_jigsaw_classification_toxic - task1502_hatexplain_classification - task328_jigsaw_classification_insult - task304_numeric_fused_head_resolution - task1293_kilt_tasks_hotpotqa_question_answering - task216_rocstories_correct_answer_generation - task1326_qa_zre_question_generation_from_answer - task1338_peixian_equity_evaluation_corpus_sentiment_classifier - task1729_personachat_generate_next - task1202_atomic_classification_xneed - task400_paws_paraphrase_classification - task502_scruples_anecdotes_whoiswrong_verification - task088_identify_typo_verification - task221_rocstories_two_choice_classification - task200_mnli_entailment_classification - task074_squad1.1_question_generation - task581_socialiqa_question_generation - task1186_nne_hrngo_classification - task898_freebase_qa_answer_generation - task1408_dart_similarity_classification - task168_strategyqa_question_decomposition - task1357_xlsum_summary_generation - task390_torque_text_span_selection - task165_mcscript_question_answering_commonsense - task1533_daily_dialog_formal_classification - task002_quoref_answer_generation - task1297_qasc_question_answering - task305_jeopardy_answer_generation_normal - task029_winogrande_full_object - task1327_qa_zre_answer_generation_from_question - task326_jigsaw_classification_obscene - task1542_every_ith_element_from_starting - task570_recipe_nlg_ner_generation - task1409_dart_text_generation - task401_numeric_fused_head_reference - task846_pubmedqa_classification - task1712_poki_classification - task344_hybridqa_answer_generation - task875_emotion_classification - task1214_atomic_classification_xwant - task106_scruples_ethical_judgment - task238_iirc_answer_from_passage_answer_generation - task1391_winogrande_easy_answer_generation - task195_sentiment140_classification - task163_count_words_ending_with_letter - task579_socialiqa_classification - task569_recipe_nlg_text_generation - task1602_webquestion_question_genreation - task747_glucose_cause_emotion_detection - task219_rocstories_title_answer_generation - task178_quartz_question_answering - task103_facts2story_long_text_generation - task301_record_question_generation - task1369_healthfact_sentence_generation - task515_senteval_odd_word_out - task496_semeval_answer_generation - task1658_billsum_summarization - task1204_atomic_classification_hinderedby - task1392_superglue_multirc_answer_verification - task306_jeopardy_answer_generation_double - task1286_openbookqa_question_answering - task159_check_frequency_of_words_in_sentence_pair - task151_tomqa_find_location_easy_clean - task323_jigsaw_classification_sexually_explicit - task037_qasc_generate_related_fact - task027_drop_answer_type_generation - task1596_event2mind_text_generation_2 - task141_odd-man-out_classification_category - task194_duorc_answer_generation - task679_hope_edi_english_text_classification - task246_dream_question_generation - task1195_disflqa_disfluent_to_fluent_conversion - task065_timetravel_consistent_sentence_classification - task351_winomt_classification_gender_identifiability_anti - task580_socialiqa_answer_generation - task583_udeps_eng_coarse_pos_tagging - task202_mnli_contradiction_classification - task222_rocstories_two_chioce_slotting_classification - task498_scruples_anecdotes_whoiswrong_classification - task067_abductivenli_answer_generation - task616_cola_classification - task286_olid_offense_judgment - task188_snli_neutral_to_entailment_text_modification - task223_quartz_explanation_generation - task820_protoqa_answer_generation - task196_sentiment140_answer_generation - task1678_mathqa_answer_selection - task349_squad2.0_answerable_unanswerable_question_classification - task154_tomqa_find_location_hard_noise - task333_hateeval_classification_hate_en - task235_iirc_question_from_subtext_answer_generation - task1554_scitail_classification - task210_logic2text_structured_text_generation - task035_winogrande_question_modification_person - task230_iirc_passage_classification - task1356_xlsum_title_generation - task1726_mathqa_correct_answer_generation - task302_record_classification - task380_boolq_yes_no_question - task212_logic2text_classification - task748_glucose_reverse_cause_event_detection - task834_mathdataset_classification - task350_winomt_classification_gender_identifiability_pro - task191_hotpotqa_question_generation - task236_iirc_question_from_passage_answer_generation - task217_rocstories_ordering_answer_generation - task568_circa_question_generation - task614_glucose_cause_event_detection - task361_spolin_yesand_prompt_response_classification - task421_persent_sentence_sentiment_classification - task203_mnli_sentence_generation - task420_persent_document_sentiment_classification - task153_tomqa_find_location_hard_clean - task346_hybridqa_classification - task1211_atomic_classification_hassubevent - task360_spolin_yesand_response_generation - task510_reddit_tifu_title_summarization - task511_reddit_tifu_long_text_summarization - task345_hybridqa_answer_generation - task270_csrg_counterfactual_context_generation - task307_jeopardy_answer_generation_final - task001_quoref_question_generation - task089_swap_words_verification - task1196_atomic_classification_oeffect - task080_piqa_answer_generation - task1598_nyc_long_text_generation - task240_tweetqa_question_generation - task615_moviesqa_answer_generation - task1347_glue_sts-b_similarity_classification - task114_is_the_given_word_longest - task292_storycommonsense_character_text_generation - task115_help_advice_classification - task431_senteval_object_count - task1360_numer_sense_multiple_choice_qa_generation - task177_para-nmt_paraphrasing - task132_dais_text_modification - task269_csrg_counterfactual_story_generation - task233_iirc_link_exists_classification - task161_count_words_containing_letter - task1205_atomic_classification_isafter - task571_recipe_nlg_ner_generation - task1292_yelp_review_full_text_categorization - task428_senteval_inversion - task311_race_question_generation - task429_senteval_tense - task403_creak_commonsense_inference - task929_products_reviews_classification - task582_naturalquestion_answer_generation - task237_iirc_answer_from_subtext_answer_generation - task050_multirc_answerability - task184_break_generate_question - task669_ambigqa_answer_generation - task169_strategyqa_sentence_generation - task500_scruples_anecdotes_title_generation - task241_tweetqa_classification - task1345_glue_qqp_question_paraprashing - task218_rocstories_swap_order_answer_generation - task613_politifact_text_generation - task1167_penn_treebank_coarse_pos_tagging - task1422_mathqa_physics - task247_dream_answer_generation - task199_mnli_classification - task164_mcscript_question_answering_text - task1541_agnews_classification - task516_senteval_conjoints_inversion - task294_storycommonsense_motiv_text_generation - task501_scruples_anecdotes_post_type_verification - task213_rocstories_correct_ending_classification - task821_protoqa_question_generation - task493_review_polarity_classification - task308_jeopardy_answer_generation_all - task1595_event2mind_text_generation_1 - task040_qasc_question_generation - task231_iirc_link_classification - task1727_wiqa_what_is_the_effect - task578_curiosity_dialogs_answer_generation - task310_race_classification - task309_race_answer_generation - task379_agnews_topic_classification - task030_winogrande_full_person - task1540_parsed_pdfs_summarization - task039_qasc_find_overlapping_words - task1206_atomic_classification_isbefore - task157_count_vowels_and_consonants - task339_record_answer_generation - task453_swag_answer_generation - task848_pubmedqa_classification - task673_google_wellformed_query_classification - task676_ollie_relationship_answer_generation - task268_casehold_legal_answer_generation - task844_financial_phrasebank_classification - task330_gap_answer_generation - task595_mocha_answer_generation - task1285_kpa_keypoint_matching - task234_iirc_passage_line_answer_generation - task494_review_polarity_answer_generation - task670_ambigqa_question_generation - task289_gigaword_summarization - npr - nli - SimpleWiki - amazon_review_2018 - ccnews_title_text - agnews - xsum - msmarco - yahoo_answers_title_answer - squad_pairs - wow - mteb-amazon_counterfactual-avs_triplets - mteb-amazon_massive_intent-avs_triplets - mteb-amazon_massive_scenario-avs_triplets - mteb-amazon_reviews_multi-avs_triplets - mteb-banking77-avs_triplets - mteb-emotion-avs_triplets - mteb-imdb-avs_triplets - mteb-mtop_domain-avs_triplets - mteb-mtop_intent-avs_triplets - mteb-toxic_conversations_50k-avs_triplets - mteb-tweet_sentiment_extraction-avs_triplets - covid-bing-query-gpt4-avs_triplets - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): RandomProjection({'in_features': 384, 'out_features': 768, 'seed': 42}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-512-final") # Run inference sentences = [ 'who does the chief risk officer report to', "Chief risk officer Chief risk officer The chief risk officer (CRO) or chief risk management officer (CRMO) of a firm or corporation is the executive accountable for enabling the efficient and effective governance of significant risks, and related opportunities, to a business and its various segments. Risks are commonly categorized as strategic, reputational, operational, financial, or compliance-related. CROs are accountable to the Executive Committee and The Board for enabling the business to balance risk and reward. In more complex organizations, they are generally responsible for coordinating the organization's Enterprise Risk Management (ERM) approach. The CRO is responsible for assessing and mitigating significant competitive,", "Chief risk officer a company's executive chief officer and chief financial officer to clarify the precision of its financial reports. Moreover, to ensure the mentioned accuracy of financial reports, internal controls are required. Accordingly, each financial report required an internal control report to prevent fraud. Furthermore, the CRO has to be aware of everything occurring in his company on a daily basis, but he must also be current on all of the requirements from the SEC. In addition, the CRO restrains corporate risk by managing compliance. Why is a CRO so important in financial institutions? There is a report of having a CRO", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Triplet * Dataset: `medi-mteb-dev` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) | Metric | Value | |:--------------------|:-----------| | **cosine_accuracy** | **0.9156** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Datasets #### NQ * Dataset: NQ * Size: 49,676 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 11.86 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 111 tokens</li><li>mean: 137.85 tokens</li><li>max: 212 tokens</li></ul> | <ul><li>min: 110 tokens</li><li>mean: 138.8 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### pubmed * Dataset: pubmed * Size: 29,908 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 22.62 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 77 tokens</li><li>mean: 240.7 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 77 tokens</li><li>mean: 239.5 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### specter_train_triples * Dataset: specter_train_triples * Size: 49,676 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 15.41 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 14.07 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 15.69 tokens</li><li>max: 50 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### S2ORC_citations_abstracts * Dataset: S2ORC_citations_abstracts * Size: 99,352 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 19 tokens</li><li>mean: 198.24 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 207.17 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 204.86 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### fever * Dataset: fever * Size: 74,514 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 12.51 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 112.46 tokens</li><li>max: 139 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 113.69 tokens</li><li>max: 155 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### gooaq_pairs * Dataset: gooaq_pairs * Size: 24,838 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 11.96 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 59.94 tokens</li><li>max: 144 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 63.02 tokens</li><li>max: 150 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### codesearchnet * Dataset: codesearchnet * Size: 15,210 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 29.65 tokens</li><li>max: 156 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 134.78 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 164.44 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### wikihow * Dataset: wikihow * Size: 5,070 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 8.03 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 44.2 tokens</li><li>max: 117 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 36.49 tokens</li><li>max: 104 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### WikiAnswers * Dataset: WikiAnswers * Size: 24,838 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 12.77 tokens</li><li>max: 44 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.89 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.36 tokens</li><li>max: 42 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### eli5_question_answer * Dataset: eli5_question_answer * Size: 24,838 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 21.24 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 98.62 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 108.48 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### amazon-qa * Dataset: amazon-qa * Size: 99,352 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 22.57 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 54.48 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 62.82 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### medmcqa * Dataset: medmcqa * Size: 29,908 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 20.68 tokens</li><li>max: 174 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 112.58 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 110.9 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### zeroshot * Dataset: zeroshot * Size: 15,210 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 8.55 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 111.81 tokens</li><li>max: 170 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 116.53 tokens</li><li>max: 239 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### TriviaQA_pairs * Dataset: TriviaQA_pairs * Size: 49,676 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 19.77 tokens</li><li>max: 77 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 245.04 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 50 tokens</li><li>mean: 233.43 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### PAQ_pairs * Dataset: PAQ_pairs * Size: 24,838 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 12.55 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 109 tokens</li><li>mean: 136.21 tokens</li><li>max: 212 tokens</li></ul> | <ul><li>min: 112 tokens</li><li>mean: 135.15 tokens</li><li>max: 223 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### stackexchange_duplicate_questions_title-body_title-body * Dataset: stackexchange_duplicate_questions_title-body_title-body * Size: 24,838 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 20 tokens</li><li>mean: 147.41 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 144.01 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 201.86 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### trex * Dataset: trex * Size: 29,908 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 9.53 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 102.65 tokens</li><li>max: 190 tokens</li></ul> | <ul><li>min: 26 tokens</li><li>mean: 117.98 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### flickr30k_captions * Dataset: flickr30k_captions * Size: 24,838 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 15.72 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 15.93 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 17.11 tokens</li><li>max: 52 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### hotpotqa * Dataset: hotpotqa * Size: 40,048 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 24.11 tokens</li><li>max: 130 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 113.67 tokens</li><li>max: 160 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 114.74 tokens</li><li>max: 189 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task671_ambigqa_text_generation * Dataset: task671_ambigqa_text_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 11 tokens</li><li>mean: 12.72 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 12.53 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 12.24 tokens</li><li>max: 19 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task061_ropes_answer_generation * Dataset: task061_ropes_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 117 tokens</li><li>mean: 210.74 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 117 tokens</li><li>mean: 210.15 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 119 tokens</li><li>mean: 212.51 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task285_imdb_answer_generation * Dataset: task285_imdb_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 46 tokens</li><li>mean: 209.59 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 49 tokens</li><li>mean: 204.57 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 209.59 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task905_hate_speech_offensive_classification * Dataset: task905_hate_speech_offensive_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 15 tokens</li><li>mean: 41.93 tokens</li><li>max: 164 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 41.02 tokens</li><li>max: 198 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 32.41 tokens</li><li>max: 135 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task566_circa_classification * Dataset: task566_circa_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 20 tokens</li><li>mean: 27.86 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 27.24 tokens</li><li>max: 44 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 27.52 tokens</li><li>max: 47 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task184_snli_entailment_to_neutral_text_modification * Dataset: task184_snli_entailment_to_neutral_text_modification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 29.87 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 28.89 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 30.34 tokens</li><li>max: 100 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task280_stereoset_classification_stereotype_type * Dataset: task280_stereoset_classification_stereotype_type * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 18.47 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 16.93 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 16.85 tokens</li><li>max: 51 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1599_smcalflow_classification * Dataset: task1599_smcalflow_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 11.31 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 10.56 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 16.28 tokens</li><li>max: 45 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1384_deal_or_no_dialog_classification * Dataset: task1384_deal_or_no_dialog_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 59.31 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 59.78 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 58.71 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task591_sciq_answer_generation * Dataset: task591_sciq_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 17.59 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 17.13 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 16.72 tokens</li><li>max: 75 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task823_peixian-rtgender_sentiment_analysis * Dataset: task823_peixian-rtgender_sentiment_analysis * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 16 tokens</li><li>mean: 56.98 tokens</li><li>max: 179 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 59.75 tokens</li><li>max: 153 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 60.1 tokens</li><li>max: 169 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task023_cosmosqa_question_generation * Dataset: task023_cosmosqa_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 35 tokens</li><li>mean: 78.99 tokens</li><li>max: 159 tokens</li></ul> | <ul><li>min: 34 tokens</li><li>mean: 80.06 tokens</li><li>max: 165 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 79.04 tokens</li><li>max: 161 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task900_freebase_qa_category_classification * Dataset: task900_freebase_qa_category_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 20.52 tokens</li><li>max: 88 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 18.26 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 19.06 tokens</li><li>max: 69 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task924_event2mind_word_generation * Dataset: task924_event2mind_word_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 32.1 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 32.18 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 31.42 tokens</li><li>max: 68 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task152_tomqa_find_location_easy_noise * Dataset: task152_tomqa_find_location_easy_noise * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 37 tokens</li><li>mean: 52.82 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 52.35 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 52.73 tokens</li><li>max: 82 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1368_healthfact_sentence_generation * Dataset: task1368_healthfact_sentence_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 91 tokens</li><li>mean: 240.74 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 84 tokens</li><li>mean: 239.62 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 97 tokens</li><li>mean: 245.07 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1661_super_glue_classification * Dataset: task1661_super_glue_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 35 tokens</li><li>mean: 140.97 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 143.09 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 142.81 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1187_politifact_classification * Dataset: task1187_politifact_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 33.14 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 31.38 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 32.0 tokens</li><li>max: 71 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1728_web_nlg_data_to_text * Dataset: task1728_web_nlg_data_to_text * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 43.18 tokens</li><li>max: 152 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 46.4 tokens</li><li>max: 152 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 43.15 tokens</li><li>max: 152 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task112_asset_simple_sentence_identification * Dataset: task112_asset_simple_sentence_identification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 52.11 tokens</li><li>max: 136 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 51.9 tokens</li><li>max: 144 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 52.06 tokens</li><li>max: 114 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1340_msr_text_compression_compression * Dataset: task1340_msr_text_compression_compression * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 41.91 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 44.3 tokens</li><li>max: 133 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 40.09 tokens</li><li>max: 141 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task072_abductivenli_answer_generation * Dataset: task072_abductivenli_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 26.79 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 26.15 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 26.43 tokens</li><li>max: 55 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1504_hatexplain_answer_generation * Dataset: task1504_hatexplain_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 28.83 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 24.33 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 28.06 tokens</li><li>max: 67 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task684_online_privacy_policy_text_information_type_generation * Dataset: task684_online_privacy_policy_text_information_type_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 29.89 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 30.11 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 30.07 tokens</li><li>max: 68 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1290_xsum_summarization * Dataset: task1290_xsum_summarization * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 39 tokens</li><li>mean: 226.61 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 50 tokens</li><li>mean: 229.94 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 34 tokens</li><li>mean: 229.42 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task075_squad1.1_answer_generation * Dataset: task075_squad1.1_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 48 tokens</li><li>mean: 167.46 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 45 tokens</li><li>mean: 172.96 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 179.84 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1587_scifact_classification * Dataset: task1587_scifact_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 88 tokens</li><li>mean: 242.78 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 90 tokens</li><li>mean: 246.97 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 86 tokens</li><li>mean: 244.62 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task384_socialiqa_question_classification * Dataset: task384_socialiqa_question_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 24 tokens</li><li>mean: 35.43 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 34.43 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 34.63 tokens</li><li>max: 57 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1555_scitail_answer_generation * Dataset: task1555_scitail_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 36.85 tokens</li><li>max: 90 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 36.15 tokens</li><li>max: 80 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 36.55 tokens</li><li>max: 92 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1532_daily_dialog_emotion_classification * Dataset: task1532_daily_dialog_emotion_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 16 tokens</li><li>mean: 136.46 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 140.46 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 134.53 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task239_tweetqa_answer_generation * Dataset: task239_tweetqa_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 28 tokens</li><li>mean: 55.93 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 56.54 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 55.95 tokens</li><li>max: 81 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task596_mocha_question_generation * Dataset: task596_mocha_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 34 tokens</li><li>mean: 80.84 tokens</li><li>max: 163 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 95.19 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 45.62 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1411_dart_subject_identification * Dataset: task1411_dart_subject_identification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 14.95 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.05 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.34 tokens</li><li>max: 38 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1359_numer_sense_answer_generation * Dataset: task1359_numer_sense_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 18.74 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 18.39 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 18.29 tokens</li><li>max: 30 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task329_gap_classification * Dataset: task329_gap_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 40 tokens</li><li>mean: 123.73 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 62 tokens</li><li>mean: 127.36 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 58 tokens</li><li>mean: 128.32 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task220_rocstories_title_classification * Dataset: task220_rocstories_title_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 53 tokens</li><li>mean: 80.74 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 81.05 tokens</li><li>max: 108 tokens</li></ul> | <ul><li>min: 55 tokens</li><li>mean: 79.84 tokens</li><li>max: 115 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task316_crows-pairs_classification_stereotype * Dataset: task316_crows-pairs_classification_stereotype * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 19.78 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 18.21 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 19.83 tokens</li><li>max: 52 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task495_semeval_headline_classification * Dataset: task495_semeval_headline_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 24.49 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 24.19 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 24.2 tokens</li><li>max: 38 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1168_brown_coarse_pos_tagging * Dataset: task1168_brown_coarse_pos_tagging * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 43.8 tokens</li><li>max: 142 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 43.34 tokens</li><li>max: 197 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 44.88 tokens</li><li>max: 197 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task348_squad2.0_unanswerable_question_generation * Dataset: task348_squad2.0_unanswerable_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 30 tokens</li><li>mean: 152.57 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 161.4 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 165.55 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task049_multirc_questions_needed_to_answer * Dataset: task049_multirc_questions_needed_to_answer * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 174 tokens</li><li>mean: 252.61 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 169 tokens</li><li>mean: 252.72 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 178 tokens</li><li>mean: 252.82 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1534_daily_dialog_question_classification * Dataset: task1534_daily_dialog_question_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 125.62 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 130.54 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 135.15 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task322_jigsaw_classification_threat * Dataset: task322_jigsaw_classification_threat * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 54.41 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 61.29 tokens</li><li>max: 249 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 61.83 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task295_semeval_2020_task4_commonsense_reasoning * Dataset: task295_semeval_2020_task4_commonsense_reasoning * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 25 tokens</li><li>mean: 45.19 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 45.14 tokens</li><li>max: 95 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 44.6 tokens</li><li>max: 88 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task186_snli_contradiction_to_entailment_text_modification * Dataset: task186_snli_contradiction_to_entailment_text_modification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 31.16 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 30.23 tokens</li><li>max: 65 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 32.18 tokens</li><li>max: 67 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task034_winogrande_question_modification_object * Dataset: task034_winogrande_question_modification_object * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 29 tokens</li><li>mean: 36.34 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 35.6 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 34.88 tokens</li><li>max: 55 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task160_replace_letter_in_a_sentence * Dataset: task160_replace_letter_in_a_sentence * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 29 tokens</li><li>mean: 31.98 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 31.78 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 31.79 tokens</li><li>max: 48 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task469_mrqa_answer_generation * Dataset: task469_mrqa_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 27 tokens</li><li>mean: 182.73 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 181.46 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 184.86 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task105_story_cloze-rocstories_sentence_generation * Dataset: task105_story_cloze-rocstories_sentence_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 36 tokens</li><li>mean: 55.59 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 54.88 tokens</li><li>max: 76 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 55.93 tokens</li><li>max: 76 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task649_race_blank_question_generation * Dataset: task649_race_blank_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 36 tokens</li><li>mean: 253.15 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 252.81 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 157 tokens</li><li>mean: 253.95 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1536_daily_dialog_happiness_classification * Dataset: task1536_daily_dialog_happiness_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 128.45 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 135.05 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 143.71 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task683_online_privacy_policy_text_purpose_answer_generation * Dataset: task683_online_privacy_policy_text_purpose_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 29.98 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 30.36 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 29.89 tokens</li><li>max: 68 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task024_cosmosqa_answer_generation * Dataset: task024_cosmosqa_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 45 tokens</li><li>mean: 92.42 tokens</li><li>max: 176 tokens</li></ul> | <ul><li>min: 47 tokens</li><li>mean: 93.6 tokens</li><li>max: 174 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 94.42 tokens</li><li>max: 183 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task584_udeps_eng_fine_pos_tagging * Dataset: task584_udeps_eng_fine_pos_tagging * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 40.27 tokens</li><li>max: 120 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 39.65 tokens</li><li>max: 186 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 40.61 tokens</li><li>max: 148 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task066_timetravel_binary_consistency_classification * Dataset: task066_timetravel_binary_consistency_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 42 tokens</li><li>mean: 66.76 tokens</li><li>max: 93 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 67.45 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 45 tokens</li><li>mean: 66.98 tokens</li><li>max: 92 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task413_mickey_en_sentence_perturbation_generation * Dataset: task413_mickey_en_sentence_perturbation_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 13.75 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 13.81 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 13.31 tokens</li><li>max: 20 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task182_duorc_question_generation * Dataset: task182_duorc_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 99 tokens</li><li>mean: 242.3 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 120 tokens</li><li>mean: 246.33 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 99 tokens</li><li>mean: 246.42 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task028_drop_answer_generation * Dataset: task028_drop_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 76 tokens</li><li>mean: 230.65 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 86 tokens</li><li>mean: 234.71 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 81 tokens</li><li>mean: 235.81 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1601_webquestions_answer_generation * Dataset: task1601_webquestions_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 16.51 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 16.69 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 16.73 tokens</li><li>max: 27 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1295_adversarial_qa_question_answering * Dataset: task1295_adversarial_qa_question_answering * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 45 tokens</li><li>mean: 164.89 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 54 tokens</li><li>mean: 166.37 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 166.85 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task201_mnli_neutral_classification * Dataset: task201_mnli_neutral_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 24 tokens</li><li>mean: 73.03 tokens</li><li>max: 218 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 73.42 tokens</li><li>max: 170 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 72.64 tokens</li><li>max: 205 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task038_qasc_combined_fact * Dataset: task038_qasc_combined_fact * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 31.27 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 30.52 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 30.84 tokens</li><li>max: 53 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task293_storycommonsense_emotion_text_generation * Dataset: task293_storycommonsense_emotion_text_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 40.0 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 40.18 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 37.66 tokens</li><li>max: 85 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task572_recipe_nlg_text_generation * Dataset: task572_recipe_nlg_text_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 24 tokens</li><li>mean: 114.49 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 119.68 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 124.27 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task517_emo_classify_emotion_of_dialogue * Dataset: task517_emo_classify_emotion_of_dialogue * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 18.12 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 17.16 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 18.4 tokens</li><li>max: 67 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task382_hybridqa_answer_generation * Dataset: task382_hybridqa_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 29 tokens</li><li>mean: 42.31 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 41.59 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 41.75 tokens</li><li>max: 75 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task176_break_decompose_questions * Dataset: task176_break_decompose_questions * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 17.43 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 17.21 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 15.73 tokens</li><li>max: 38 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1291_multi_news_summarization * Dataset: task1291_multi_news_summarization * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 116 tokens</li><li>mean: 255.36 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 146 tokens</li><li>mean: 255.71 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 68 tokens</li><li>mean: 252.32 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task155_count_nouns_verbs * Dataset: task155_count_nouns_verbs * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 23 tokens</li><li>mean: 27.02 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 26.8 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 26.96 tokens</li><li>max: 46 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task031_winogrande_question_generation_object * Dataset: task031_winogrande_question_generation_object * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 7.43 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.31 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.25 tokens</li><li>max: 11 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task279_stereoset_classification_stereotype * Dataset: task279_stereoset_classification_stereotype * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 17.86 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 15.52 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 17.39 tokens</li><li>max: 50 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1336_peixian_equity_evaluation_corpus_gender_classifier * Dataset: task1336_peixian_equity_evaluation_corpus_gender_classifier * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 9.59 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.58 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.64 tokens</li><li>max: 16 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task508_scruples_dilemmas_more_ethical_isidentifiable * Dataset: task508_scruples_dilemmas_more_ethical_isidentifiable * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 29.67 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 28.64 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 28.71 tokens</li><li>max: 86 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task518_emo_different_dialogue_emotions * Dataset: task518_emo_different_dialogue_emotions * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 28 tokens</li><li>mean: 47.83 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 45.5 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 26 tokens</li><li>mean: 45.83 tokens</li><li>max: 123 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task077_splash_explanation_to_sql * Dataset: task077_splash_explanation_to_sql * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 39.84 tokens</li><li>max: 126 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 39.9 tokens</li><li>max: 126 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 35.84 tokens</li><li>max: 111 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task923_event2mind_classifier * Dataset: task923_event2mind_classifier * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 20.63 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 18.63 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 19.5 tokens</li><li>max: 46 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task470_mrqa_question_generation * Dataset: task470_mrqa_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 171.07 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 173.67 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 179.34 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task638_multi_woz_classification * Dataset: task638_multi_woz_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 78 tokens</li><li>mean: 223.21 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 76 tokens</li><li>mean: 220.32 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 64 tokens</li><li>mean: 219.78 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1412_web_questions_question_answering * Dataset: task1412_web_questions_question_answering * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 10.32 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.18 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.07 tokens</li><li>max: 16 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task847_pubmedqa_question_generation * Dataset: task847_pubmedqa_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 21 tokens</li><li>mean: 249.18 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 249.32 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 249.01 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task678_ollie_actual_relationship_answer_generation * Dataset: task678_ollie_actual_relationship_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 20 tokens</li><li>mean: 40.91 tokens</li><li>max: 95 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 38.11 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 41.31 tokens</li><li>max: 104 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task290_tellmewhy_question_answerability * Dataset: task290_tellmewhy_question_answerability * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 37 tokens</li><li>mean: 62.72 tokens</li><li>max: 95 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 62.32 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 62.95 tokens</li><li>max: 95 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task575_air_dialogue_classification * Dataset: task575_air_dialogue_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 14.19 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 13.59 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 12.31 tokens</li><li>max: 42 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task189_snli_neutral_to_contradiction_text_modification * Dataset: task189_snli_neutral_to_contradiction_text_modification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 31.84 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 30.73 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 33.22 tokens</li><li>max: 105 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task026_drop_question_generation * Dataset: task026_drop_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 82 tokens</li><li>mean: 219.35 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 57 tokens</li><li>mean: 222.81 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 96 tokens</li><li>mean: 232.0 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task162_count_words_starting_with_letter * Dataset: task162_count_words_starting_with_letter * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 28 tokens</li><li>mean: 32.17 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 31.76 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 31.63 tokens</li><li>max: 46 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task079_conala_concat_strings * Dataset: task079_conala_concat_strings * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 11 tokens</li><li>mean: 39.49 tokens</li><li>max: 76 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 34.22 tokens</li><li>max: 80 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 33.51 tokens</li><li>max: 76 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task610_conllpp_ner * Dataset: task610_conllpp_ner * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 19.53 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 20.3 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 14.15 tokens</li><li>max: 54 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task046_miscellaneous_question_typing * Dataset: task046_miscellaneous_question_typing * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 16 tokens</li><li>mean: 25.34 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 24.92 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 25.11 tokens</li><li>max: 57 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task197_mnli_domain_answer_generation * Dataset: task197_mnli_domain_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 15 tokens</li><li>mean: 43.91 tokens</li><li>max: 197 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 45.21 tokens</li><li>max: 211 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 39.5 tokens</li><li>max: 115 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1325_qa_zre_question_generation_on_subject_relation * Dataset: task1325_qa_zre_question_generation_on_subject_relation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 50.72 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 49.76 tokens</li><li>max: 180 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 54.01 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task430_senteval_subject_count * Dataset: task430_senteval_subject_count * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 17.36 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.41 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 16.16 tokens</li><li>max: 34 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task672_nummersense * Dataset: task672_nummersense * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 15.72 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.34 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.28 tokens</li><li>max: 30 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task402_grailqa_paraphrase_generation * Dataset: task402_grailqa_paraphrase_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 23 tokens</li><li>mean: 130.03 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 139.65 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 136.9 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task904_hate_speech_offensive_classification * Dataset: task904_hate_speech_offensive_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 34.87 tokens</li><li>max: 157 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 34.42 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 27.88 tokens</li><li>max: 148 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task192_hotpotqa_sentence_generation * Dataset: task192_hotpotqa_sentence_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 37 tokens</li><li>mean: 125.31 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 124.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 134.28 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task069_abductivenli_classification * Dataset: task069_abductivenli_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 33 tokens</li><li>mean: 52.09 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 52.07 tokens</li><li>max: 95 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 51.91 tokens</li><li>max: 95 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task574_air_dialogue_sentence_generation * Dataset: task574_air_dialogue_sentence_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 54 tokens</li><li>mean: 144.27 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 57 tokens</li><li>mean: 143.51 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 66 tokens</li><li>mean: 147.62 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task187_snli_entailment_to_contradiction_text_modification * Dataset: task187_snli_entailment_to_contradiction_text_modification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 16 tokens</li><li>mean: 30.26 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 30.08 tokens</li><li>max: 104 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 29.35 tokens</li><li>max: 71 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task749_glucose_reverse_cause_emotion_detection * Dataset: task749_glucose_reverse_cause_emotion_detection * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 38 tokens</li><li>mean: 67.95 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 67.23 tokens</li><li>max: 104 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 68.79 tokens</li><li>max: 107 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1552_scitail_question_generation * Dataset: task1552_scitail_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 18.34 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 17.57 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.86 tokens</li><li>max: 54 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task750_aqua_multiple_choice_answering * Dataset: task750_aqua_multiple_choice_answering * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 33 tokens</li><li>mean: 70.17 tokens</li><li>max: 194 tokens</li></ul> | <ul><li>min: 32 tokens</li><li>mean: 68.58 tokens</li><li>max: 194 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 68.28 tokens</li><li>max: 165 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task327_jigsaw_classification_toxic * Dataset: task327_jigsaw_classification_toxic * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 36.97 tokens</li><li>max: 234 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 41.55 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 46.13 tokens</li><li>max: 244 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1502_hatexplain_classification * Dataset: task1502_hatexplain_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 28.81 tokens</li><li>max: 73 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 26.8 tokens</li><li>max: 110 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 27.25 tokens</li><li>max: 90 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task328_jigsaw_classification_insult * Dataset: task328_jigsaw_classification_insult * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 50.85 tokens</li><li>max: 247 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 60.44 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 63.9 tokens</li><li>max: 249 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task304_numeric_fused_head_resolution * Dataset: task304_numeric_fused_head_resolution * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 15 tokens</li><li>mean: 121.08 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 122.16 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 135.09 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1293_kilt_tasks_hotpotqa_question_answering * Dataset: task1293_kilt_tasks_hotpotqa_question_answering * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 24.85 tokens</li><li>max: 114 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 24.21 tokens</li><li>max: 114 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 23.81 tokens</li><li>max: 84 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task216_rocstories_correct_answer_generation * Dataset: task216_rocstories_correct_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 39 tokens</li><li>mean: 59.48 tokens</li><li>max: 83 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 58.43 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 58.2 tokens</li><li>max: 95 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1326_qa_zre_question_generation_from_answer * Dataset: task1326_qa_zre_question_generation_from_answer * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 46.64 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 45.58 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 49.45 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1338_peixian_equity_evaluation_corpus_sentiment_classifier * Dataset: task1338_peixian_equity_evaluation_corpus_sentiment_classifier * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 9.69 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.7 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.59 tokens</li><li>max: 17 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1729_personachat_generate_next * Dataset: task1729_personachat_generate_next * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 44 tokens</li><li>mean: 146.83 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 142.94 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 50 tokens</li><li>mean: 144.69 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1202_atomic_classification_xneed * Dataset: task1202_atomic_classification_xneed * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 19.56 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 19.38 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 19.24 tokens</li><li>max: 28 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task400_paws_paraphrase_classification * Dataset: task400_paws_paraphrase_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 19 tokens</li><li>mean: 52.16 tokens</li><li>max: 97 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 51.75 tokens</li><li>max: 98 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 52.95 tokens</li><li>max: 97 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task502_scruples_anecdotes_whoiswrong_verification * Dataset: task502_scruples_anecdotes_whoiswrong_verification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 229.88 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 236.97 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 235.34 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task088_identify_typo_verification * Dataset: task088_identify_typo_verification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 11 tokens</li><li>mean: 15.1 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 15.06 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 15.41 tokens</li><li>max: 47 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task221_rocstories_two_choice_classification * Dataset: task221_rocstories_two_choice_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 47 tokens</li><li>mean: 72.64 tokens</li><li>max: 108 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 72.56 tokens</li><li>max: 109 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 73.23 tokens</li><li>max: 108 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task200_mnli_entailment_classification * Dataset: task200_mnli_entailment_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 24 tokens</li><li>mean: 72.66 tokens</li><li>max: 198 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 72.92 tokens</li><li>max: 224 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 73.48 tokens</li><li>max: 226 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task074_squad1.1_question_generation * Dataset: task074_squad1.1_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 30 tokens</li><li>mean: 149.61 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 160.64 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 164.94 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task581_socialiqa_question_generation * Dataset: task581_socialiqa_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 26.47 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 25.5 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 25.89 tokens</li><li>max: 48 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1186_nne_hrngo_classification * Dataset: task1186_nne_hrngo_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 19 tokens</li><li>mean: 33.83 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 33.53 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 33.3 tokens</li><li>max: 77 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task898_freebase_qa_answer_generation * Dataset: task898_freebase_qa_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 19.18 tokens</li><li>max: 125 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 17.45 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 17.4 tokens</li><li>max: 79 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1408_dart_similarity_classification * Dataset: task1408_dart_similarity_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 22 tokens</li><li>mean: 59.53 tokens</li><li>max: 147 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 61.93 tokens</li><li>max: 154 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 48.83 tokens</li><li>max: 124 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task168_strategyqa_question_decomposition * Dataset: task168_strategyqa_question_decomposition * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 42 tokens</li><li>mean: 80.63 tokens</li><li>max: 181 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 78.98 tokens</li><li>max: 179 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 77.19 tokens</li><li>max: 166 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1357_xlsum_summary_generation * Dataset: task1357_xlsum_summary_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 67 tokens</li><li>mean: 241.86 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 69 tokens</li><li>mean: 242.71 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 67 tokens</li><li>mean: 247.11 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task390_torque_text_span_selection * Dataset: task390_torque_text_span_selection * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 47 tokens</li><li>mean: 110.01 tokens</li><li>max: 196 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 110.44 tokens</li><li>max: 195 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 110.66 tokens</li><li>max: 196 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task165_mcscript_question_answering_commonsense * Dataset: task165_mcscript_question_answering_commonsense * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 147 tokens</li><li>mean: 197.75 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 145 tokens</li><li>mean: 196.42 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 147 tokens</li><li>mean: 198.04 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1533_daily_dialog_formal_classification * Dataset: task1533_daily_dialog_formal_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 130.14 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 136.79 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 136.81 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task002_quoref_answer_generation * Dataset: task002_quoref_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 214 tokens</li><li>mean: 255.53 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 214 tokens</li><li>mean: 255.54 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 224 tokens</li><li>mean: 255.61 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1297_qasc_question_answering * Dataset: task1297_qasc_question_answering * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 61 tokens</li><li>mean: 84.74 tokens</li><li>max: 134 tokens</li></ul> | <ul><li>min: 59 tokens</li><li>mean: 85.41 tokens</li><li>max: 130 tokens</li></ul> | <ul><li>min: 58 tokens</li><li>mean: 84.83 tokens</li><li>max: 125 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task305_jeopardy_answer_generation_normal * Dataset: task305_jeopardy_answer_generation_normal * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 27.67 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 27.39 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 27.41 tokens</li><li>max: 46 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task029_winogrande_full_object * Dataset: task029_winogrande_full_object * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 7.37 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.33 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.24 tokens</li><li>max: 10 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1327_qa_zre_answer_generation_from_question * Dataset: task1327_qa_zre_answer_generation_from_question * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 24 tokens</li><li>mean: 54.91 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 52.08 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 55.5 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task326_jigsaw_classification_obscene * Dataset: task326_jigsaw_classification_obscene * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 65.2 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 77.26 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 73.17 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1542_every_ith_element_from_starting * Dataset: task1542_every_ith_element_from_starting * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 127.39 tokens</li><li>max: 245 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 125.92 tokens</li><li>max: 244 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 123.75 tokens</li><li>max: 238 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task570_recipe_nlg_ner_generation * Dataset: task570_recipe_nlg_ner_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 73.94 tokens</li><li>max: 250 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 73.35 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 75.51 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1409_dart_text_generation * Dataset: task1409_dart_text_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 68.05 tokens</li><li>max: 174 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 72.93 tokens</li><li>max: 170 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 68.0 tokens</li><li>max: 164 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task401_numeric_fused_head_reference * Dataset: task401_numeric_fused_head_reference * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 16 tokens</li><li>mean: 109.26 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 117.92 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 119.84 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task846_pubmedqa_classification * Dataset: task846_pubmedqa_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 32 tokens</li><li>mean: 85.64 tokens</li><li>max: 246 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 85.03 tokens</li><li>max: 225 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 93.96 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1712_poki_classification * Dataset: task1712_poki_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 52.23 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 55.08 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 63.09 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task344_hybridqa_answer_generation * Dataset: task344_hybridqa_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 22.26 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 22.14 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 22.01 tokens</li><li>max: 55 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task875_emotion_classification * Dataset: task875_emotion_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 23.04 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 18.43 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 20.33 tokens</li><li>max: 68 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1214_atomic_classification_xwant * Dataset: task1214_atomic_classification_xwant * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 19.65 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 19.44 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 19.51 tokens</li><li>max: 31 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task106_scruples_ethical_judgment * Dataset: task106_scruples_ethical_judgment * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 30.0 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 28.93 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 28.69 tokens</li><li>max: 58 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task238_iirc_answer_from_passage_answer_generation * Dataset: task238_iirc_answer_from_passage_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 138 tokens</li><li>mean: 242.84 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 165 tokens</li><li>mean: 242.64 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 173 tokens</li><li>mean: 243.38 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1391_winogrande_easy_answer_generation * Dataset: task1391_winogrande_easy_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 26 tokens</li><li>mean: 31.7 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 26 tokens</li><li>mean: 31.3 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 31.2 tokens</li><li>max: 49 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task195_sentiment140_classification * Dataset: task195_sentiment140_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 22.51 tokens</li><li>max: 118 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 18.98 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 21.42 tokens</li><li>max: 51 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task163_count_words_ending_with_letter * Dataset: task163_count_words_ending_with_letter * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 28 tokens</li><li>mean: 31.97 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 31.7 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 31.57 tokens</li><li>max: 43 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task579_socialiqa_classification * Dataset: task579_socialiqa_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 39 tokens</li><li>mean: 54.15 tokens</li><li>max: 132 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 53.63 tokens</li><li>max: 103 tokens</li></ul> | <ul><li>min: 40 tokens</li><li>mean: 54.12 tokens</li><li>max: 84 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task569_recipe_nlg_text_generation * Dataset: task569_recipe_nlg_text_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 25 tokens</li><li>mean: 192.7 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 55 tokens</li><li>mean: 194.02 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 198.01 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1602_webquestion_question_genreation * Dataset: task1602_webquestion_question_genreation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 23.59 tokens</li><li>max: 112 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 24.18 tokens</li><li>max: 112 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 22.52 tokens</li><li>max: 120 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task747_glucose_cause_emotion_detection * Dataset: task747_glucose_cause_emotion_detection * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 35 tokens</li><li>mean: 67.95 tokens</li><li>max: 112 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 68.16 tokens</li><li>max: 108 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 68.84 tokens</li><li>max: 99 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task219_rocstories_title_answer_generation * Dataset: task219_rocstories_title_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 42 tokens</li><li>mean: 67.65 tokens</li><li>max: 97 tokens</li></ul> | <ul><li>min: 45 tokens</li><li>mean: 66.72 tokens</li><li>max: 97 tokens</li></ul> | <ul><li>min: 41 tokens</li><li>mean: 66.88 tokens</li><li>max: 96 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task178_quartz_question_answering * Dataset: task178_quartz_question_answering * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 28 tokens</li><li>mean: 57.99 tokens</li><li>max: 110 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 57.21 tokens</li><li>max: 111 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 56.85 tokens</li><li>max: 102 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task103_facts2story_long_text_generation * Dataset: task103_facts2story_long_text_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 52 tokens</li><li>mean: 80.5 tokens</li><li>max: 143 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 82.19 tokens</li><li>max: 157 tokens</li></ul> | <ul><li>min: 49 tokens</li><li>mean: 78.93 tokens</li><li>max: 145 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task301_record_question_generation * Dataset: task301_record_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 140 tokens</li><li>mean: 210.92 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 139 tokens</li><li>mean: 209.8 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 143 tokens</li><li>mean: 208.87 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1369_healthfact_sentence_generation * Dataset: task1369_healthfact_sentence_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 110 tokens</li><li>mean: 243.09 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 101 tokens</li><li>mean: 243.16 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 113 tokens</li><li>mean: 251.69 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task515_senteval_odd_word_out * Dataset: task515_senteval_odd_word_out * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 19.82 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 19.22 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 19.02 tokens</li><li>max: 35 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task496_semeval_answer_generation * Dataset: task496_semeval_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 28.16 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 27.78 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 27.71 tokens</li><li>max: 45 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1658_billsum_summarization * Dataset: task1658_billsum_summarization * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1204_atomic_classification_hinderedby * Dataset: task1204_atomic_classification_hinderedby * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 22.08 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 22.05 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 21.51 tokens</li><li>max: 38 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1392_superglue_multirc_answer_verification * Dataset: task1392_superglue_multirc_answer_verification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 128 tokens</li><li>mean: 241.67 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 127 tokens</li><li>mean: 241.96 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 136 tokens</li><li>mean: 242.0 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task306_jeopardy_answer_generation_double * Dataset: task306_jeopardy_answer_generation_double * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 27.86 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 27.16 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 27.47 tokens</li><li>max: 47 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1286_openbookqa_question_answering * Dataset: task1286_openbookqa_question_answering * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 22 tokens</li><li>mean: 39.61 tokens</li><li>max: 85 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 38.96 tokens</li><li>max: 96 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 38.35 tokens</li><li>max: 89 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task159_check_frequency_of_words_in_sentence_pair * Dataset: task159_check_frequency_of_words_in_sentence_pair * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 44 tokens</li><li>mean: 50.41 tokens</li><li>max: 67 tokens</li></ul> | <ul><li>min: 44 tokens</li><li>mean: 50.35 tokens</li><li>max: 67 tokens</li></ul> | <ul><li>min: 44 tokens</li><li>mean: 50.59 tokens</li><li>max: 66 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task151_tomqa_find_location_easy_clean * Dataset: task151_tomqa_find_location_easy_clean * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 37 tokens</li><li>mean: 50.74 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 50.23 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 50.66 tokens</li><li>max: 74 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task323_jigsaw_classification_sexually_explicit * Dataset: task323_jigsaw_classification_sexually_explicit * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 66.2 tokens</li><li>max: 248 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 76.82 tokens</li><li>max: 248 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 75.6 tokens</li><li>max: 251 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task037_qasc_generate_related_fact * Dataset: task037_qasc_generate_related_fact * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 22.08 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 22.07 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 21.88 tokens</li><li>max: 40 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task027_drop_answer_type_generation * Dataset: task027_drop_answer_type_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 87 tokens</li><li>mean: 229.31 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 74 tokens</li><li>mean: 230.61 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 71 tokens</li><li>mean: 232.72 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1596_event2mind_text_generation_2 * Dataset: task1596_event2mind_text_generation_2 * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 10.0 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.04 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.04 tokens</li><li>max: 18 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task141_odd-man-out_classification_category * Dataset: task141_odd-man-out_classification_category * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 16 tokens</li><li>mean: 18.43 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 18.37 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 18.45 tokens</li><li>max: 25 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task194_duorc_answer_generation * Dataset: task194_duorc_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 149 tokens</li><li>mean: 251.8 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 147 tokens</li><li>mean: 252.1 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 148 tokens</li><li>mean: 251.81 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task679_hope_edi_english_text_classification * Dataset: task679_hope_edi_english_text_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 27.62 tokens</li><li>max: 199 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 27.01 tokens</li><li>max: 205 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 29.68 tokens</li><li>max: 194 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task246_dream_question_generation * Dataset: task246_dream_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 80.01 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 80.34 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 86.98 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1195_disflqa_disfluent_to_fluent_conversion * Dataset: task1195_disflqa_disfluent_to_fluent_conversion * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 19.79 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 19.84 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 20.05 tokens</li><li>max: 44 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task065_timetravel_consistent_sentence_classification * Dataset: task065_timetravel_consistent_sentence_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 55 tokens</li><li>mean: 79.44 tokens</li><li>max: 117 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 79.28 tokens</li><li>max: 110 tokens</li></ul> | <ul><li>min: 53 tokens</li><li>mean: 80.05 tokens</li><li>max: 110 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task351_winomt_classification_gender_identifiability_anti * Dataset: task351_winomt_classification_gender_identifiability_anti * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 16 tokens</li><li>mean: 21.8 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 21.7 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 21.83 tokens</li><li>max: 30 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task580_socialiqa_answer_generation * Dataset: task580_socialiqa_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 35 tokens</li><li>mean: 52.36 tokens</li><li>max: 107 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 51.03 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 51.01 tokens</li><li>max: 87 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task583_udeps_eng_coarse_pos_tagging * Dataset: task583_udeps_eng_coarse_pos_tagging * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 40.75 tokens</li><li>max: 185 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 39.87 tokens</li><li>max: 185 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 40.43 tokens</li><li>max: 185 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task202_mnli_contradiction_classification * Dataset: task202_mnli_contradiction_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 24 tokens</li><li>mean: 73.61 tokens</li><li>max: 190 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 76.12 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 74.47 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task222_rocstories_two_chioce_slotting_classification * Dataset: task222_rocstories_two_chioce_slotting_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 48 tokens</li><li>mean: 73.08 tokens</li><li>max: 105 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 73.29 tokens</li><li>max: 100 tokens</li></ul> | <ul><li>min: 49 tokens</li><li>mean: 71.96 tokens</li><li>max: 102 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task498_scruples_anecdotes_whoiswrong_classification * Dataset: task498_scruples_anecdotes_whoiswrong_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 24 tokens</li><li>mean: 225.81 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 47 tokens</li><li>mean: 231.81 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 47 tokens</li><li>mean: 231.0 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task067_abductivenli_answer_generation * Dataset: task067_abductivenli_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 26.76 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 26.09 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 26.35 tokens</li><li>max: 38 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task616_cola_classification * Dataset: task616_cola_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 12.44 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 12.29 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.16 tokens</li><li>max: 29 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task286_olid_offense_judgment * Dataset: task286_olid_offense_judgment * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 32.73 tokens</li><li>max: 145 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 30.79 tokens</li><li>max: 171 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 30.27 tokens</li><li>max: 169 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task188_snli_neutral_to_entailment_text_modification * Dataset: task188_snli_neutral_to_entailment_text_modification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 31.76 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 31.25 tokens</li><li>max: 84 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 33.02 tokens</li><li>max: 84 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task223_quartz_explanation_generation * Dataset: task223_quartz_explanation_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 31.41 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 31.77 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 28.98 tokens</li><li>max: 96 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task820_protoqa_answer_generation * Dataset: task820_protoqa_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 14.71 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 14.49 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.15 tokens</li><li>max: 29 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task196_sentiment140_answer_generation * Dataset: task196_sentiment140_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 36.21 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 32.8 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 36.21 tokens</li><li>max: 72 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1678_mathqa_answer_selection * Dataset: task1678_mathqa_answer_selection * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 33 tokens</li><li>mean: 70.5 tokens</li><li>max: 177 tokens</li></ul> | <ul><li>min: 30 tokens</li><li>mean: 69.11 tokens</li><li>max: 146 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 69.75 tokens</li><li>max: 160 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task349_squad2.0_answerable_unanswerable_question_classification * Dataset: task349_squad2.0_answerable_unanswerable_question_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 53 tokens</li><li>mean: 175.5 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 57 tokens</li><li>mean: 175.71 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 53 tokens</li><li>mean: 175.37 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task154_tomqa_find_location_hard_noise * Dataset: task154_tomqa_find_location_hard_noise * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 129 tokens</li><li>mean: 176.0 tokens</li><li>max: 253 tokens</li></ul> | <ul><li>min: 126 tokens</li><li>mean: 176.09 tokens</li><li>max: 249 tokens</li></ul> | <ul><li>min: 128 tokens</li><li>mean: 177.44 tokens</li><li>max: 254 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task333_hateeval_classification_hate_en * Dataset: task333_hateeval_classification_hate_en * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 38.53 tokens</li><li>max: 117 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 37.38 tokens</li><li>max: 109 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 36.64 tokens</li><li>max: 113 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task235_iirc_question_from_subtext_answer_generation * Dataset: task235_iirc_question_from_subtext_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 52.74 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 50.73 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 55.69 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1554_scitail_classification * Dataset: task1554_scitail_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 16.69 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 25.79 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 24.42 tokens</li><li>max: 59 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task210_logic2text_structured_text_generation * Dataset: task210_logic2text_structured_text_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 31.62 tokens</li><li>max: 101 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 30.74 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 32.72 tokens</li><li>max: 89 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task035_winogrande_question_modification_person * Dataset: task035_winogrande_question_modification_person * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 31 tokens</li><li>mean: 36.19 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 35.74 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 35.48 tokens</li><li>max: 48 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task230_iirc_passage_classification * Dataset: task230_iirc_passage_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1356_xlsum_title_generation * Dataset: task1356_xlsum_title_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 59 tokens</li><li>mean: 240.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 58 tokens</li><li>mean: 241.02 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 64 tokens</li><li>mean: 248.67 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1726_mathqa_correct_answer_generation * Dataset: task1726_mathqa_correct_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 44.19 tokens</li><li>max: 156 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 42.51 tokens</li><li>max: 129 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 43.3 tokens</li><li>max: 133 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task302_record_classification * Dataset: task302_record_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 194 tokens</li><li>mean: 253.34 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 198 tokens</li><li>mean: 252.96 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 195 tokens</li><li>mean: 252.92 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task380_boolq_yes_no_question * Dataset: task380_boolq_yes_no_question * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 26 tokens</li><li>mean: 133.82 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 26 tokens</li><li>mean: 138.28 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 137.7 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task212_logic2text_classification * Dataset: task212_logic2text_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 33.08 tokens</li><li>max: 146 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 32.04 tokens</li><li>max: 146 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 33.02 tokens</li><li>max: 127 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task748_glucose_reverse_cause_event_detection * Dataset: task748_glucose_reverse_cause_event_detection * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 35 tokens</li><li>mean: 67.7 tokens</li><li>max: 105 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 67.03 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 68.84 tokens</li><li>max: 105 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task834_mathdataset_classification * Dataset: task834_mathdataset_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 27.58 tokens</li><li>max: 83 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 27.78 tokens</li><li>max: 83 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 26.82 tokens</li><li>max: 93 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task350_winomt_classification_gender_identifiability_pro * Dataset: task350_winomt_classification_gender_identifiability_pro * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 16 tokens</li><li>mean: 21.79 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 21.63 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 21.79 tokens</li><li>max: 30 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task191_hotpotqa_question_generation * Dataset: task191_hotpotqa_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 198 tokens</li><li>mean: 255.88 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 238 tokens</li><li>mean: 255.93 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 255 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task236_iirc_question_from_passage_answer_generation * Dataset: task236_iirc_question_from_passage_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 135 tokens</li><li>mean: 238.2 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 155 tokens</li><li>mean: 237.46 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 154 tokens</li><li>mean: 239.59 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task217_rocstories_ordering_answer_generation * Dataset: task217_rocstories_ordering_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 45 tokens</li><li>mean: 72.45 tokens</li><li>max: 107 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 72.26 tokens</li><li>max: 107 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 71.03 tokens</li><li>max: 105 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task568_circa_question_generation * Dataset: task568_circa_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 9.57 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.53 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 8.93 tokens</li><li>max: 20 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task614_glucose_cause_event_detection * Dataset: task614_glucose_cause_event_detection * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 39 tokens</li><li>mean: 67.7 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 67.16 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 68.55 tokens</li><li>max: 103 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task361_spolin_yesand_prompt_response_classification * Dataset: task361_spolin_yesand_prompt_response_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 47.04 tokens</li><li>max: 137 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 45.97 tokens</li><li>max: 119 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 47.1 tokens</li><li>max: 128 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task421_persent_sentence_sentiment_classification * Dataset: task421_persent_sentence_sentiment_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 22 tokens</li><li>mean: 67.68 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 71.41 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 72.33 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task203_mnli_sentence_generation * Dataset: task203_mnli_sentence_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 39.1 tokens</li><li>max: 175 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 35.55 tokens</li><li>max: 175 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 34.25 tokens</li><li>max: 170 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task420_persent_document_sentiment_classification * Dataset: task420_persent_document_sentiment_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 22 tokens</li><li>mean: 221.62 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 233.37 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 227.57 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task153_tomqa_find_location_hard_clean * Dataset: task153_tomqa_find_location_hard_clean * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 39 tokens</li><li>mean: 161.41 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 160.84 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 164.12 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task346_hybridqa_classification * Dataset: task346_hybridqa_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 32.88 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 31.94 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 31.91 tokens</li><li>max: 75 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1211_atomic_classification_hassubevent * Dataset: task1211_atomic_classification_hassubevent * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 11 tokens</li><li>mean: 16.28 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 16.08 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 16.83 tokens</li><li>max: 29 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task360_spolin_yesand_response_generation * Dataset: task360_spolin_yesand_response_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 22.53 tokens</li><li>max: 89 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 21.05 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 20.8 tokens</li><li>max: 67 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task510_reddit_tifu_title_summarization * Dataset: task510_reddit_tifu_title_summarization * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 217.71 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 218.18 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 222.62 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task511_reddit_tifu_long_text_summarization * Dataset: task511_reddit_tifu_long_text_summarization * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 29 tokens</li><li>mean: 239.27 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 76 tokens</li><li>mean: 238.8 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 245.19 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task345_hybridqa_answer_generation * Dataset: task345_hybridqa_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 22.16 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 21.62 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 20.91 tokens</li><li>max: 47 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task270_csrg_counterfactual_context_generation * Dataset: task270_csrg_counterfactual_context_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 63 tokens</li><li>mean: 100.09 tokens</li><li>max: 158 tokens</li></ul> | <ul><li>min: 63 tokens</li><li>mean: 98.76 tokens</li><li>max: 142 tokens</li></ul> | <ul><li>min: 62 tokens</li><li>mean: 100.29 tokens</li><li>max: 141 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task307_jeopardy_answer_generation_final * Dataset: task307_jeopardy_answer_generation_final * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 15 tokens</li><li>mean: 29.55 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 29.3 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 29.25 tokens</li><li>max: 43 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task001_quoref_question_generation * Dataset: task001_quoref_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 201 tokens</li><li>mean: 254.96 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 99 tokens</li><li>mean: 254.24 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 173 tokens</li><li>mean: 255.09 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task089_swap_words_verification * Dataset: task089_swap_words_verification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 12.86 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 12.63 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 12.25 tokens</li><li>max: 22 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1196_atomic_classification_oeffect * Dataset: task1196_atomic_classification_oeffect * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 18.78 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 18.57 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 18.51 tokens</li><li>max: 29 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task080_piqa_answer_generation * Dataset: task080_piqa_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 10.85 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 10.75 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 10.12 tokens</li><li>max: 26 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1598_nyc_long_text_generation * Dataset: task1598_nyc_long_text_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 35.49 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 35.61 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 36.63 tokens</li><li>max: 55 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task240_tweetqa_question_generation * Dataset: task240_tweetqa_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 27 tokens</li><li>mean: 51.08 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 50.61 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 51.58 tokens</li><li>max: 95 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task615_moviesqa_answer_generation * Dataset: task615_moviesqa_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 11.45 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 11.43 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 11.37 tokens</li><li>max: 21 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1347_glue_sts-b_similarity_classification * Dataset: task1347_glue_sts-b_similarity_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 31.15 tokens</li><li>max: 88 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 31.1 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 30.97 tokens</li><li>max: 92 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task114_is_the_given_word_longest * Dataset: task114_is_the_given_word_longest * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 25 tokens</li><li>mean: 28.84 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 28.47 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 28.72 tokens</li><li>max: 47 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task292_storycommonsense_character_text_generation * Dataset: task292_storycommonsense_character_text_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 43 tokens</li><li>mean: 67.9 tokens</li><li>max: 98 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 67.11 tokens</li><li>max: 104 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 69.09 tokens</li><li>max: 96 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task115_help_advice_classification * Dataset: task115_help_advice_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 2 tokens</li><li>mean: 19.92 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 18.28 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 19.23 tokens</li><li>max: 137 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task431_senteval_object_count * Dataset: task431_senteval_object_count * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 16.77 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.16 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.77 tokens</li><li>max: 35 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1360_numer_sense_multiple_choice_qa_generation * Dataset: task1360_numer_sense_multiple_choice_qa_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 32 tokens</li><li>mean: 40.71 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 32 tokens</li><li>mean: 40.36 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 32 tokens</li><li>mean: 40.32 tokens</li><li>max: 60 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task177_para-nmt_paraphrasing * Dataset: task177_para-nmt_paraphrasing * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 19.93 tokens</li><li>max: 82 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 18.97 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 18.26 tokens</li><li>max: 36 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task132_dais_text_modification * Dataset: task132_dais_text_modification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 9.33 tokens</li><li>max: 15 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.07 tokens</li><li>max: 15 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.15 tokens</li><li>max: 15 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task269_csrg_counterfactual_story_generation * Dataset: task269_csrg_counterfactual_story_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 49 tokens</li><li>mean: 80.0 tokens</li><li>max: 111 tokens</li></ul> | <ul><li>min: 53 tokens</li><li>mean: 79.62 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 79.46 tokens</li><li>max: 114 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task233_iirc_link_exists_classification * Dataset: task233_iirc_link_exists_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 145 tokens</li><li>mean: 235.46 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 142 tokens</li><li>mean: 233.26 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 151 tokens</li><li>mean: 234.97 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task161_count_words_containing_letter * Dataset: task161_count_words_containing_letter * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 27 tokens</li><li>mean: 30.99 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 30.79 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 30.48 tokens</li><li>max: 42 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1205_atomic_classification_isafter * Dataset: task1205_atomic_classification_isafter * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 20.92 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 20.64 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 21.52 tokens</li><li>max: 37 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task571_recipe_nlg_ner_generation * Dataset: task571_recipe_nlg_ner_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 118.42 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 118.89 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 111.25 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1292_yelp_review_full_text_categorization * Dataset: task1292_yelp_review_full_text_categorization * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 136.77 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 147.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 146.33 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task428_senteval_inversion * Dataset: task428_senteval_inversion * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 16.68 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 14.59 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.26 tokens</li><li>max: 34 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task311_race_question_generation * Dataset: task311_race_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 115 tokens</li><li>mean: 254.61 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 137 tokens</li><li>mean: 254.41 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 171 tokens</li><li>mean: 255.51 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task429_senteval_tense * Dataset: task429_senteval_tense * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 15.82 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.07 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.3 tokens</li><li>max: 36 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task403_creak_commonsense_inference * Dataset: task403_creak_commonsense_inference * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 30.14 tokens</li><li>max: 104 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 29.54 tokens</li><li>max: 108 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 29.26 tokens</li><li>max: 122 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task929_products_reviews_classification * Dataset: task929_products_reviews_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 69.61 tokens</li><li>max: 126 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 70.61 tokens</li><li>max: 123 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 70.68 tokens</li><li>max: 123 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task582_naturalquestion_answer_generation * Dataset: task582_naturalquestion_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 11.7 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 11.63 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 11.71 tokens</li><li>max: 25 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task237_iirc_answer_from_subtext_answer_generation * Dataset: task237_iirc_answer_from_subtext_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 22 tokens</li><li>mean: 66.3 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 64.95 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 61.31 tokens</li><li>max: 161 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task050_multirc_answerability * Dataset: task050_multirc_answerability * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 15 tokens</li><li>mean: 32.56 tokens</li><li>max: 112 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 31.62 tokens</li><li>max: 93 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 32.26 tokens</li><li>max: 159 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task184_break_generate_question * Dataset: task184_break_generate_question * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 39.72 tokens</li><li>max: 147 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 39.07 tokens</li><li>max: 149 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 39.81 tokens</li><li>max: 148 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task669_ambigqa_answer_generation * Dataset: task669_ambigqa_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 12.91 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 12.84 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 12.74 tokens</li><li>max: 22 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task169_strategyqa_sentence_generation * Dataset: task169_strategyqa_sentence_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 19 tokens</li><li>mean: 35.06 tokens</li><li>max: 65 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 34.24 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 33.37 tokens</li><li>max: 65 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task500_scruples_anecdotes_title_generation * Dataset: task500_scruples_anecdotes_title_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 225.48 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 233.04 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 235.04 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task241_tweetqa_classification * Dataset: task241_tweetqa_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 31 tokens</li><li>mean: 61.77 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 62.17 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 61.71 tokens</li><li>max: 92 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1345_glue_qqp_question_paraprashing * Dataset: task1345_glue_qqp_question_paraprashing * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 16.8 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.75 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 16.69 tokens</li><li>max: 51 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task218_rocstories_swap_order_answer_generation * Dataset: task218_rocstories_swap_order_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 48 tokens</li><li>mean: 72.69 tokens</li><li>max: 118 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 72.72 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 47 tokens</li><li>mean: 72.12 tokens</li><li>max: 106 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task613_politifact_text_generation * Dataset: task613_politifact_text_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 24.85 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 23.4 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 22.9 tokens</li><li>max: 61 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1167_penn_treebank_coarse_pos_tagging * Dataset: task1167_penn_treebank_coarse_pos_tagging * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 16 tokens</li><li>mean: 53.87 tokens</li><li>max: 200 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 53.76 tokens</li><li>max: 220 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 55.02 tokens</li><li>max: 202 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1422_mathqa_physics * Dataset: task1422_mathqa_physics * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 34 tokens</li><li>mean: 72.76 tokens</li><li>max: 164 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 71.89 tokens</li><li>max: 157 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 72.78 tokens</li><li>max: 155 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task247_dream_answer_generation * Dataset: task247_dream_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 38 tokens</li><li>mean: 160.09 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 158.97 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 41 tokens</li><li>mean: 167.84 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task199_mnli_classification * Dataset: task199_mnli_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 43.48 tokens</li><li>max: 127 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 44.59 tokens</li><li>max: 149 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 44.16 tokens</li><li>max: 113 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task164_mcscript_question_answering_text * Dataset: task164_mcscript_question_answering_text * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 150 tokens</li><li>mean: 201.24 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 150 tokens</li><li>mean: 201.08 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 142 tokens</li><li>mean: 201.39 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1541_agnews_classification * Dataset: task1541_agnews_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 21 tokens</li><li>mean: 53.49 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 52.72 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 54.13 tokens</li><li>max: 161 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task516_senteval_conjoints_inversion * Dataset: task516_senteval_conjoints_inversion * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 20.15 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 18.98 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 18.92 tokens</li><li>max: 34 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task294_storycommonsense_motiv_text_generation * Dataset: task294_storycommonsense_motiv_text_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 40.72 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 41.23 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 40.31 tokens</li><li>max: 86 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task501_scruples_anecdotes_post_type_verification * Dataset: task501_scruples_anecdotes_post_type_verification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 230.72 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 234.85 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 234.19 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task213_rocstories_correct_ending_classification * Dataset: task213_rocstories_correct_ending_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 62 tokens</li><li>mean: 86.09 tokens</li><li>max: 125 tokens</li></ul> | <ul><li>min: 60 tokens</li><li>mean: 85.37 tokens</li><li>max: 131 tokens</li></ul> | <ul><li>min: 59 tokens</li><li>mean: 85.96 tokens</li><li>max: 131 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task821_protoqa_question_generation * Dataset: task821_protoqa_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 14.97 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 15.01 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.99 tokens</li><li>max: 93 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task493_review_polarity_classification * Dataset: task493_review_polarity_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 100.77 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 106.77 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 112.99 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task308_jeopardy_answer_generation_all * Dataset: task308_jeopardy_answer_generation_all * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 27.95 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 26.96 tokens</li><li>max: 44 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 27.41 tokens</li><li>max: 48 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1595_event2mind_text_generation_1 * Dataset: task1595_event2mind_text_generation_1 * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 9.86 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.95 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.04 tokens</li><li>max: 20 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task040_qasc_question_generation * Dataset: task040_qasc_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 15.06 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.04 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 13.86 tokens</li><li>max: 32 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task231_iirc_link_classification * Dataset: task231_iirc_link_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 179 tokens</li><li>mean: 246.11 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 170 tokens</li><li>mean: 246.14 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 161 tokens</li><li>mean: 247.03 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1727_wiqa_what_is_the_effect * Dataset: task1727_wiqa_what_is_the_effect * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 44 tokens</li><li>mean: 95.88 tokens</li><li>max: 183 tokens</li></ul> | <ul><li>min: 44 tokens</li><li>mean: 95.98 tokens</li><li>max: 185 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 96.22 tokens</li><li>max: 183 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task578_curiosity_dialogs_answer_generation * Dataset: task578_curiosity_dialogs_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 229.94 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 118 tokens</li><li>mean: 235.71 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 229.13 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task310_race_classification * Dataset: task310_race_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 101 tokens</li><li>mean: 255.03 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 218 tokens</li><li>mean: 255.8 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 101 tokens</li><li>mean: 255.03 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task309_race_answer_generation * Dataset: task309_race_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 75 tokens</li><li>mean: 255.04 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 204 tokens</li><li>mean: 255.54 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 75 tokens</li><li>mean: 255.25 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task379_agnews_topic_classification * Dataset: task379_agnews_topic_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 20 tokens</li><li>mean: 54.82 tokens</li><li>max: 193 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 54.53 tokens</li><li>max: 175 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 54.86 tokens</li><li>max: 187 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task030_winogrande_full_person * Dataset: task030_winogrande_full_person * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 7.6 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.49 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.37 tokens</li><li>max: 11 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1540_parsed_pdfs_summarization * Dataset: task1540_parsed_pdfs_summarization * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 186.77 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 190.07 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 192.05 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task039_qasc_find_overlapping_words * Dataset: task039_qasc_find_overlapping_words * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 16 tokens</li><li>mean: 30.48 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 30.06 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 30.67 tokens</li><li>max: 60 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1206_atomic_classification_isbefore * Dataset: task1206_atomic_classification_isbefore * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 21.26 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 20.84 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 21.35 tokens</li><li>max: 31 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task157_count_vowels_and_consonants * Dataset: task157_count_vowels_and_consonants * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 24 tokens</li><li>mean: 28.03 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 27.93 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 28.34 tokens</li><li>max: 39 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task339_record_answer_generation * Dataset: task339_record_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 171 tokens</li><li>mean: 234.93 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 171 tokens</li><li>mean: 234.22 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 171 tokens</li><li>mean: 232.25 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task453_swag_answer_generation * Dataset: task453_swag_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 18.53 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 18.23 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 17.5 tokens</li><li>max: 55 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task848_pubmedqa_classification * Dataset: task848_pubmedqa_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 21 tokens</li><li>mean: 248.82 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 249.96 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 84 tokens</li><li>mean: 251.72 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task673_google_wellformed_query_classification * Dataset: task673_google_wellformed_query_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 11.57 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 11.23 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 11.34 tokens</li><li>max: 22 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task676_ollie_relationship_answer_generation * Dataset: task676_ollie_relationship_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 29 tokens</li><li>mean: 51.45 tokens</li><li>max: 113 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 49.38 tokens</li><li>max: 134 tokens</li></ul> | <ul><li>min: 30 tokens</li><li>mean: 51.68 tokens</li><li>max: 113 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task268_casehold_legal_answer_generation * Dataset: task268_casehold_legal_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 235 tokens</li><li>mean: 255.96 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 156 tokens</li><li>mean: 255.37 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 226 tokens</li><li>mean: 255.94 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task844_financial_phrasebank_classification * Dataset: task844_financial_phrasebank_classification * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 39.74 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 38.28 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 39.06 tokens</li><li>max: 86 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task330_gap_answer_generation * Dataset: task330_gap_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 26 tokens</li><li>mean: 107.2 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 44 tokens</li><li>mean: 108.16 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 45 tokens</li><li>mean: 110.56 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task595_mocha_answer_generation * Dataset: task595_mocha_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 44 tokens</li><li>mean: 94.35 tokens</li><li>max: 178 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 96.06 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 118.22 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1285_kpa_keypoint_matching * Dataset: task1285_kpa_keypoint_matching * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 30 tokens</li><li>mean: 52.36 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 50.15 tokens</li><li>max: 84 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 53.13 tokens</li><li>max: 88 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task234_iirc_passage_line_answer_generation * Dataset: task234_iirc_passage_line_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 143 tokens</li><li>mean: 234.76 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 155 tokens</li><li>mean: 235.18 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 146 tokens</li><li>mean: 235.94 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task494_review_polarity_answer_generation * Dataset: task494_review_polarity_answer_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 106.28 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 111.87 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 112.42 tokens</li><li>max: 249 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task670_ambigqa_question_generation * Dataset: task670_ambigqa_question_generation * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 11 tokens</li><li>mean: 12.66 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 12.49 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 12.24 tokens</li><li>max: 18 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task289_gigaword_summarization * Dataset: task289_gigaword_summarization * Size: 1,018 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 25 tokens</li><li>mean: 51.54 tokens</li><li>max: 87 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 51.94 tokens</li><li>max: 87 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 51.44 tokens</li><li>max: 87 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### npr * Dataset: npr * Size: 24,838 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 12.33 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 148.6 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 115.37 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### nli * Dataset: nli * Size: 49,676 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 20.98 tokens</li><li>max: 107 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 11.92 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 12.04 tokens</li><li>max: 32 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### SimpleWiki * Dataset: SimpleWiki * Size: 5,070 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 29.18 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 33.55 tokens</li><li>max: 156 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 56.1 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### amazon_review_2018 * Dataset: amazon_review_2018 * Size: 99,352 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 11.43 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 86.31 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 70.62 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### ccnews_title_text * Dataset: ccnews_title_text * Size: 24,838 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 15.63 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 209.51 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 197.07 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### agnews * Dataset: agnews * Size: 44,606 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 12.05 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 40.4 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 46.18 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### xsum * Dataset: xsum * Size: 10,140 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 27.73 tokens</li><li>max: 73 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 224.87 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 230.01 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### msmarco * Dataset: msmarco * Size: 173,354 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 8.96 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 78.76 tokens</li><li>max: 235 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 79.64 tokens</li><li>max: 218 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### yahoo_answers_title_answer * Dataset: yahoo_answers_title_answer * Size: 24,838 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 16.99 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 76.97 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 91.49 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### squad_pairs * Dataset: squad_pairs * Size: 24,838 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 14.24 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 32 tokens</li><li>mean: 152.76 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 163.22 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### wow * Dataset: wow * Size: 29,908 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 88.31 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 100 tokens</li><li>mean: 111.97 tokens</li><li>max: 166 tokens</li></ul> | <ul><li>min: 80 tokens</li><li>mean: 113.24 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-amazon_counterfactual-avs_triplets * Dataset: mteb-amazon_counterfactual-avs_triplets * Size: 4,055 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 26.99 tokens</li><li>max: 109 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 27.29 tokens</li><li>max: 137 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 26.56 tokens</li><li>max: 83 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-amazon_massive_intent-avs_triplets * Dataset: mteb-amazon_massive_intent-avs_triplets * Size: 11,661 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 9.43 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.19 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.5 tokens</li><li>max: 28 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-amazon_massive_scenario-avs_triplets * Dataset: mteb-amazon_massive_scenario-avs_triplets * Size: 11,661 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 9.61 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.01 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.48 tokens</li><li>max: 29 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-amazon_reviews_multi-avs_triplets * Dataset: mteb-amazon_reviews_multi-avs_triplets * Size: 198,192 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 46.91 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 49.58 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 47.98 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-banking77-avs_triplets * Dataset: mteb-banking77-avs_triplets * Size: 10,139 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 16.61 tokens</li><li>max: 98 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 15.78 tokens</li><li>max: 87 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 16.11 tokens</li><li>max: 83 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-emotion-avs_triplets * Dataset: mteb-emotion-avs_triplets * Size: 16,224 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 22.02 tokens</li><li>max: 67 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 17.48 tokens</li><li>max: 65 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 22.16 tokens</li><li>max: 72 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-imdb-avs_triplets * Dataset: mteb-imdb-avs_triplets * Size: 24,839 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 208.76 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 52 tokens</li><li>mean: 223.82 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 41 tokens</li><li>mean: 210.03 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-mtop_domain-avs_triplets * Dataset: mteb-mtop_domain-avs_triplets * Size: 15,715 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 10.11 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.66 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.16 tokens</li><li>max: 29 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-mtop_intent-avs_triplets * Dataset: mteb-mtop_intent-avs_triplets * Size: 15,715 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 10.08 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.78 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.11 tokens</li><li>max: 28 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-toxic_conversations_50k-avs_triplets * Dataset: mteb-toxic_conversations_50k-avs_triplets * Size: 49,677 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 68.8 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 90.19 tokens</li><li>max: 252 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 64.54 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-tweet_sentiment_extraction-avs_triplets * Dataset: mteb-tweet_sentiment_extraction-avs_triplets * Size: 27,373 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 20.82 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 20.02 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 20.66 tokens</li><li>max: 50 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### covid-bing-query-gpt4-avs_triplets * Dataset: covid-bing-query-gpt4-avs_triplets * Size: 5,070 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 15.08 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 37.42 tokens</li><li>max: 239 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 37.25 tokens</li><li>max: 100 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Evaluation Dataset #### Unnamed Dataset * Size: 18,269 evaluation samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 15.81 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 144.25 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 143.7 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 512 - `per_device_eval_batch_size`: 512 - `learning_rate`: 5.656854249492381e-05 - `num_train_epochs`: 10 - `warmup_ratio`: 0.1 - `fp16`: True - `gradient_checkpointing`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 512 - `per_device_eval_batch_size`: 512 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5.656854249492381e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: True - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | medi-mteb-dev_cosine_accuracy | |:------:|:-----:|:-------------:|:---------------:|:-----------------------------:| | 0 | 0 | - | - | 0.8358 | | 0.1308 | 500 | 2.6713 | 1.1708 | 0.8820 | | 0.2616 | 1000 | 1.9946 | 1.1040 | 0.8890 | | 0.3925 | 1500 | 2.0138 | 1.0559 | 0.8955 | | 0.5233 | 2000 | 1.7733 | 1.0154 | 0.8976 | | 0.6541 | 2500 | 1.8934 | 1.0145 | 0.8990 | | 0.7849 | 3000 | 1.7916 | 1.0166 | 0.8990 | | 0.9158 | 3500 | 1.8491 | 0.9818 | 0.8981 | | 1.0466 | 4000 | 1.7568 | 0.9473 | 0.9031 | | 1.1774 | 4500 | 1.8666 | 1.0801 | 0.9003 | | 1.3082 | 5000 | 1.6883 | 0.9535 | 0.9008 | | 1.4390 | 5500 | 1.7082 | 1.0652 | 0.9028 | | 1.5699 | 6000 | 1.6634 | 1.0519 | 0.9040 | | 1.7007 | 6500 | 1.689 | 0.9920 | 0.9039 | | 1.8315 | 7000 | 1.6129 | 1.0213 | 0.9021 | | 1.9623 | 7500 | 1.576 | 0.9993 | 0.9033 | | 2.0931 | 8000 | 1.6392 | 1.0826 | 0.9069 | | 2.2240 | 8500 | 1.5947 | 1.1802 | 0.9063 | | 2.3548 | 9000 | 1.6222 | 1.2468 | 0.9075 | | 2.4856 | 9500 | 1.4471 | 1.0080 | 0.9077 | | 2.6164 | 10000 | 1.5689 | 1.1530 | 0.9088 | | 2.7473 | 10500 | 1.4836 | 1.0531 | 0.9080 | | 2.8781 | 11000 | 1.525 | 1.0097 | 0.9091 | | 3.0089 | 11500 | 1.4068 | 1.0630 | 0.9071 | | 3.1397 | 12000 | 1.5666 | 0.9643 | 0.9091 | | 3.2705 | 12500 | 1.4479 | 1.0455 | 0.9077 | | 3.4014 | 13000 | 1.5516 | 1.0711 | 0.9109 | | 3.5322 | 13500 | 1.3551 | 0.9991 | 0.9093 | | 3.6630 | 14000 | 1.4498 | 1.0136 | 0.9093 | | 3.7938 | 14500 | 1.3856 | 1.0710 | 0.9097 | | 3.9246 | 15000 | 1.4329 | 1.0074 | 0.9097 | | 4.0555 | 15500 | 1.3455 | 1.0328 | 0.9094 | | 4.1863 | 16000 | 1.4601 | 1.0259 | 0.9078 | | 4.3171 | 16500 | 1.3684 | 1.0295 | 0.9120 | | 4.4479 | 17000 | 1.3637 | 1.0637 | 0.9090 | | 4.5788 | 17500 | 1.3688 | 1.0929 | 0.9100 | | 4.7096 | 18000 | 1.3419 | 1.1102 | 0.9124 | | 4.8404 | 18500 | 1.3378 | 0.9625 | 0.9129 | | 4.9712 | 19000 | 1.3224 | 1.0812 | 0.9126 | | 5.1020 | 19500 | 1.3579 | 1.0317 | 0.9121 | | 5.2329 | 20000 | 1.3409 | 1.0622 | 0.9107 | | 5.3637 | 20500 | 1.3929 | 1.1232 | 0.9113 | | 5.4945 | 21000 | 1.213 | 1.0926 | 0.9123 | | 5.6253 | 21500 | 1.313 | 1.0791 | 0.9118 | | 5.7561 | 22000 | 1.2606 | 1.0581 | 0.9119 | | 5.8870 | 22500 | 1.3094 | 1.0322 | 0.9134 | | 6.0178 | 23000 | 1.2102 | 1.0039 | 0.9106 | | 6.1486 | 23500 | 1.3686 | 1.0815 | 0.9140 | | 6.2794 | 24000 | 1.2467 | 1.0143 | 0.9126 | | 6.4103 | 24500 | 1.3445 | 1.0778 | 0.9116 | | 6.5411 | 25000 | 1.1894 | 0.9941 | 0.9140 | | 6.6719 | 25500 | 1.2617 | 1.0546 | 0.9121 | | 6.8027 | 26000 | 1.2042 | 1.0126 | 0.9130 | | 6.9335 | 26500 | 1.2559 | 1.0516 | 0.9142 | | 7.0644 | 27000 | 1.2031 | 0.9957 | 0.9146 | | 7.1952 | 27500 | 1.2866 | 1.0564 | 0.9142 | | 7.3260 | 28000 | 1.2477 | 1.0420 | 0.9135 | | 7.4568 | 28500 | 1.1961 | 1.0116 | 0.9151 | | 7.5877 | 29000 | 1.227 | 1.0091 | 0.9154 | | 7.7185 | 29500 | 1.1952 | 1.0307 | 0.9146 | | 7.8493 | 30000 | 1.192 | 0.9344 | 0.9144 | | 7.9801 | 30500 | 1.1871 | 1.0943 | 0.9151 | | 8.1109 | 31000 | 1.2267 | 1.0049 | 0.9150 | | 8.2418 | 31500 | 1.1928 | 1.0673 | 0.9149 | | 8.3726 | 32000 | 1.2942 | 1.0980 | 0.9148 | | 8.5034 | 32500 | 1.1099 | 1.0380 | 0.9151 | | 8.6342 | 33000 | 1.1882 | 1.0734 | 0.9138 | | 8.7650 | 33500 | 1.1365 | 1.0677 | 0.9144 | | 8.8959 | 34000 | 1.2215 | 1.0256 | 0.9160 | | 9.0267 | 34500 | 1.0926 | 1.0198 | 0.9142 | | 9.1575 | 35000 | 1.269 | 1.0395 | 0.9160 | | 9.2883 | 35500 | 1.1528 | 1.0306 | 0.9152 | | 9.4192 | 36000 | 1.2324 | 1.0607 | 0.9158 | | 9.5500 | 36500 | 1.1187 | 1.0418 | 0.9151 | | 9.6808 | 37000 | 1.1722 | 1.0443 | 0.9151 | | 9.8116 | 37500 | 1.1149 | 1.0457 | 0.9152 | | 9.9424 | 38000 | 1.1751 | 1.0245 | 0.9156 | ### Framework Versions - Python: 3.10.10 - Sentence Transformers: 3.4.0.dev0 - Transformers: 4.46.3 - PyTorch: 2.5.1+cu124 - Accelerate: 0.34.2 - Datasets: 2.21.0 - Tokenizers: 0.20.3 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION", "SUMMARIZATION", "PARAPHRASING" ]
[ "PUBMEDQA", "SCIFACT", "SCIQ", "SCITAIL" ]
BioNLP
tintnguyen/bert-base-vietnamese-uncased-st
tintnguyen
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1583079", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:trituenhantaoio/bert-base-vietnamese-uncased", "base_model:finetune:trituenhantaoio/bert-base-vietnamese-uncased", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,732
1,732
8
0
--- base_model: trituenhantaoio/bert-base-vietnamese-uncased library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:1583079 - loss:MultipleNegativesRankingLoss widget: - source_sentence: tính bền vững trong chuỗi cung ứng là gì sentences: - 'Tính bền vững của chuỗi cung ứng ::: Tính bền vững của chuỗi cung ứng là một vấn đề kinh doanh ảnh hưởng đến chuỗi cung ứng hoặc mạng lưới hậu cần của tổ chức về mặt môi trường, rủi ro và chi phí lãng phí. Nhu cầu tích hợp các lựa chọn hợp lý về môi trường vào quản lý chuỗi cung ứng ngày càng tăng. Tính bền vững trong chuỗi cung ứng ngày càng được các nhà quản trị cấp cao coi là cần thiết để mang lại lợi nhuận và đã thay thế chi phí tiền tệ, giá trị và tốc độ là chủ đề thảo luận giữa các chuyên gia mua và cung ứng. Một chuỗi cung ứng bền vững nắm bắt các cơ hội tạo ra giá trị và mang lại lợi thế cạnh tranh đáng kể cho những người chấp nhận sớm và đổi mới quy trình.' - 'Ung thư biểu mô tuyến bã ::: Ung thư biểu mô tuyến bã, còn được gọi là bã nhờn tuyến ung thư biểu mô (SGC), ung thư biểu mô tế bào bã nhờn, và ung thư biểu mô tuyến mebomian là một khối u ác tính ở da phổ biến. Hầu hết thường là các u khoảng 10 mm kích thước tại chỗ. Khối u này được cho là phát sinh từ các tuyến bã nhờn trên da và do đó, có thể bắt nguồn từ bất cứ nơi nào trong cơ thể nơi các tuyến này được tìm thấy. Ung thư biểu mô tuyến bã có thể được chia thành hai loại: mắt và ngoại bào. Bởi vì khu vực quanh mắt rất phong phú về loại tuyến này, khu vực này là một trang web phổ biến về nguồn gốc. Nguyên nhân của những tổn thương này là, trong phần lớn các trường hợp, không rõ. Các trường hợp thỉnh thoảng có thể liên quan đến hội chứng Muir-Torre. Do sự hiếm gặp của khối u này và sự thay đổi trong biểu hiện lâm sàng và mô học, SGc thường bị chẩn đoán nhầm là tình trạng viêm hoặc một loại khối u phổ biến hơn.' - 'Dấu thời gian ::: Dấu thời gian là một chuỗi các ký tự hoặc thông tin được mã hóa xác định khi một sự kiện nào đó xảy ra, thường đưa ra ngày và giờ trong ngày, đôi khi chính xác đến một phần nhỏ của một giây. Thuật ngữ này bắt nguồn từ tem cao su được sử dụng trong các văn phòng để đóng dấu ngày hiện tại và đôi khi, bằng mực trên tài liệu giấy, để ghi lại khi nhận được tài liệu. Các ví dụ phổ biến của loại dấu thời gian này là dấu bưu điện trên một chữ cái hoặc thời gian "vào" và "ra" trên thẻ thời gian.' - source_sentence: lãnh thổ mebel nằm ở đâu sentences: - 'Alexander và một ngày tồi tệ, kinh khủng, chán nản, bực bội ::: Alexander và một ngày tồi tệ, kinh khủng, chán nản, bực bội (tựa gốc tiếng Anh: Alexander and the Terrible, Horrible, No Good, Very Bad Day) là phim điện ảnh thiếu nhi hài hước của Mỹ năm 2014 do Miguel Arteta đạo diễn từ kịch bản chắp bút bởi Rob Lieber. Phim có sự tham gia của Steve Carell, Jennifer Garner và Ed Oxenbould, chủ yếu dựa trên cuốn sách thiếu nhi cùng tên năm 1972 của Judith Viorst. Phim khởi chiếu ở Việt Nam vào ngày 20 tháng 11 năm 2014.' - 'Vườn quốc gia Nanda Devi ::: Vườn quốc gia Nanda Devi hay Khu dự trữ sinh quyển Nanda Devi là một vườn quốc gia được thành lập vào năm 1982 bao gồm khu vực tự nhiên xung quanh đỉnh Nanda Devi (7.816 mét) ở bang Uttarakhand, miền bắc Ấn Độ. Toàn bộ vườn quốc gia nằm ở độ cao trên 3.500 m (11.500 ft) so với mực nước biển trung bình. Vườn quốc gia được UNESCO công nhận là Di sản thế giới từ năm 1988 trước khi được mở rộng thêm Vườn quốc gia Thung lũng các loài hoa vào năm 2005 đổi thành Nanda Devi và Vườn quốc gia Thung lũng các loài hoa.' - 'Vùng Klaipėda ::: Vùng Klaipėda (tiếng Litva: Klaipėdos kraštas) hoặc Lãnh thổ Memel (tiếng Đức: Memelland hay Memelgebiet) được định nghĩa bởi Hiệp ước Versailles năm 1919 năm 1920 và được gọi là phần phía bắc của tỉnh East Prussia của Đức, dưới sự điều hành của Entente ''s Hội đồng Đại sứ. Lãnh thổ Memel, cùng với các khu vực khác bị cắt đứt từ Đức (Saar và Danzig) sẽ nằm dưới sự kiểm soát của Liên minh các quốc gia cho đến một ngày trong tương lai khi người dân của các khu vực này sẽ được phép bỏ phiếu về việc liệu đất có trở lại Đức hay không. Ngày nay, Lãnh thổ Memel cũ được kiểm soát bởi Litva, quốc gia đã tổ chức nó thành các quận Klaipeda, Taurage, Marijampole và Alytus.' - source_sentence: mục đích của chiến lược gleichschaltung là gì sentences: - 'Đường sắt cao tốc Thượng Hải – Hàng Châu ::: Tuyến đường sắt cao tốc Thượng Hải Hàng Châu (tiếng Trung: 沪杭 客运 hoặc 沪杭 高速 铁路), còn được gọi là đường sắt cao tốc Huhang hoặc đường sắt chở khách Huhang là tuyến đường sắt cao tốc ở Trung Quốc giữa Thượng Hải và Hàng Châu, Chiết Giang. Tuyến có chiều dài 202 km (126 mi) và được thiết kế cho dịch vụ tàu thương mại với tốc độ 350 km/h (217 dặm / giờ). Nó được xây dựng trong 20 tháng và mở cửa vào ngày 26 tháng 10 năm 2010. Đường dây rút ngắn thời gian di chuyển giữa hai thành phố từ 78 xuống còn 45 phút. Tuyến này cũng được sử dụng bởi các chuyến tàu rời ga Thượng Hải đến Côn Minh và Thâm Quyến, trở thành một phần của Đường sắt cao tốc Thượng Hải Côn Minh và Hành lang đường sắt cao tốc Bờ biển Đông Nam. Nó đã làm cho đề xuất tuyến tàu đệm từ Thượng Hải Hàng Châu không thể triển khai.' - 'Trật tự thế giới mới ::: Thuật ngữ "Trật tự thế giới mới" đã được sử dụng để chỉ bất kỳ giai đoạn lịch sử mới nào chứng minh sự thay đổi mạnh mẽ trong tư tưởng chính trị thế giới và cán cân quyền lực. Mặc dù có nhiều cách hiểu khác nhau về thuật ngữ này, nó chủ yếu gắn liền với khái niệm ý thức hệ về quản trị toàn cầu chỉ trong ý nghĩa của những nỗ lực tập thể mới để xác định, hiểu hoặc giải quyết các vấn đề trên toàn thế giới vượt quá khả năng giải quyết của từng quốc gia.' - 'Gleichschaltung ::: Gleichschaltung trong bối cảnh chính trị - văn hóa là một chiến lược đạt được tầm quan trọng trung tâm, đặc biệt là trong thời kỳ phát xít. Từ những năm 1930, từ này đề cập đến quá trình thống nhất toàn bộ đời sống chính trị xã hội trong giai đoạn tiếp quản quyền lực ở Đức. Mục đích là để 1934 mâu thuẫn hiểu như đa nguyên trong chính phủ và xã hội nên được bãi bỏ và một chế độ độc tài để xây dựng chỉ với một trung tâm quyền lực.' - source_sentence: giáng son sinh ngày mấy sentences: - 'Proxymetacaine ::: Proxymetacaine (INN) hoặc proparacaine (USAN) là một loại thuốc gây tê tại chỗ của nhóm aminoester.' - 'Giáng Son ::: Tạ Thị Giáng Son (sinh ngày 1 tháng 2 năm 1975), thường được biết đến với nghệ danh Giáng Son hay Giáng Sol, là một nữ nhạc sĩ người Việt Nam. Cô là một trong số rất ít những nữ nhạc sĩ thành công vào đầu thập niên 2000 của Việt Nam và là cựu thủ lĩnh sáng lập nên nhóm nhạc 5 Dòng Kẻ. Cô là Ủy viên Ban chấp hành của Hội Nhạc sĩ Việt Nam, hội viên của Hội các nhà soạn nhạc thế giới thế kỷ 21 (Composers 21) và thành viên của nhóm tác giả M6. Hiện cô đang giữ chức Phó trưởng khoa Kịch hát dân tộc tại trường Đại học Sân khấu và Điện ảnh Hà Nội.' - 'Sông Kiến Giang (Thái Bình) ::: Sông Kiến Giang là con sông đào gồm nhiều đoạn khác nhau ở khu vực nam Thái Bình .' - source_sentence: tên thật của tỉnh drava banovina là gì sentences: - 'Drava Banovina ::: Drava Banovina hoặc Drava Banate (tiếng Slovenia: Dravska banovina) là một tỉnh (banovina) của Vương quốc Nam Tư từ năm 1929 đến năm 1941. Tỉnh này bao gồm hầu hết ngày nay Slovenia và được đặt tên cho Drava sông. Thành phố thủ đô của Drava Banovina là Ljubljana.' - 'Đo khoảng cách (vũ trụ) ::: Đo khoảng cách được sử dụng trong vũ trụ học vật lý để đưa ra một khái niệm tự nhiên về khoảng cách giữa hai vật thể hoặc sự kiện trong vũ trụ. Chúng thường được sử dụng để buộc một số lượng có thể quan sát được (như độ chói của một quasar ở xa, dịch chuyển đỏ của một thiên hà xa xôi hoặc kích thước góc của các đỉnh âm trong phổ công suất CMB) với một đại lượng khác không thể quan sát trực tiếp, nhưng thuận tiện hơn cho việc tính toán (chẳng hạn như tọa độ đồng chuyển động của chuẩn tinh, thiên hà, v.v.). Các biện pháp khoảng cách được thảo luận ở đây đều làm giảm khái niệm chung về khoảng cách Euclide ở độ dịch chuyển thấp.' - 'Mùa bão Tây Bắc Thái Bình Dương 1986 ::: Mùa bão năm 1986 ở Tây Bắc Thái Bình Dương không có giới hạn chính thức; nó chạy quanh năm vào năm 1986, nhưng hầu hết các cơn bão nhiệt đới có xu hướng hình thành ở tây bắc Thái Bình Dương giữa tháng Năm và tháng Mười Hai. Những ngày này thường phân định thời kỳ mỗi năm khi hầu hết các cơn bão nhiệt đới hình thành ở tây bắc Thái Bình Dương. Bão nhiệt đới hình thành trong toàn bộ lưu vực phía tây Thái Bình Dương đã được Trung tâm Cảnh báo Bão chung đặt tên. Áp thấp nhiệt đới xâm nhập hoặc hình thành trong khu vực trách nhiệm của Philippines được đặt tên bởi Cơ quan Dịch vụ Khí quyển, Địa vật lý và Thiên văn học Philippines hoặc PAGASA. Điều này thường có thể dẫn đến cùng một cơn bão có hai tên.' --- # SentenceTransformer based on trituenhantaoio/bert-base-vietnamese-uncased This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [trituenhantaoio/bert-base-vietnamese-uncased](https://huggingface.co/trituenhantaoio/bert-base-vietnamese-uncased). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [trituenhantaoio/bert-base-vietnamese-uncased](https://huggingface.co/trituenhantaoio/bert-base-vietnamese-uncased) <!-- at revision aa3da83c2efadda3b872d634d8448a50c9d283dc --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("tintnguyen/bert-base-vietnamese-uncased-st") # Run inference sentences = [ 'tên thật của tỉnh drava banovina là gì', 'Drava Banovina ::: Drava Banovina hoặc Drava Banate (tiếng Slovenia: Dravska banovina) là một tỉnh (banovina) của Vương quốc Nam Tư từ năm 1929 đến năm 1941. Tỉnh này bao gồm hầu hết ngày nay Slovenia và được đặt tên cho Drava sông. Thành phố thủ đô của Drava Banovina là Ljubljana.', 'Mùa bão Tây Bắc Thái Bình Dương 1986 ::: Mùa bão năm 1986 ở Tây Bắc Thái Bình Dương không có giới hạn chính thức; nó chạy quanh năm vào năm 1986, nhưng hầu hết các cơn bão nhiệt đới có xu hướng hình thành ở tây bắc Thái Bình Dương giữa tháng Năm và tháng Mười Hai. Những ngày này thường phân định thời kỳ mỗi năm khi hầu hết các cơn bão nhiệt đới hình thành ở tây bắc Thái Bình Dương. Bão nhiệt đới hình thành trong toàn bộ lưu vực phía tây Thái Bình Dương đã được Trung tâm Cảnh báo Bão chung đặt tên. Áp thấp nhiệt đới xâm nhập hoặc hình thành trong khu vực trách nhiệm của Philippines được đặt tên bởi Cơ quan Dịch vụ Khí quyển, Địa vật lý và Thiên văn học Philippines hoặc PAGASA. Điều này thường có thể dẫn đến cùng một cơn bão có hai tên.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 1,583,079 training samples * Columns: <code>anchor</code> and <code>positive</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 12.52 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 135.09 tokens</li><li>max: 357 tokens</li></ul> | * Samples: | anchor | positive | |:------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>đặng văn hoàn từng giữ chức vụ nào</code> | <code>Đặng Văn Hoàn ::: Đặng Văn Hoàn là chính trị gia người Việt Nam. Ông từng giữ chức vụ Chủ tịch Ủy ban Mặt trận Tổ quốc Việt Nam tỉnh Quảng Bình.</code> | | <code>đặng văn hoàn là người nào</code> | <code>Đặng Văn Hoàn ::: Đặng Văn Hoàn là chính trị gia người Việt Nam. Ông từng giữ chức vụ Chủ tịch Ủy ban Mặt trận Tổ quốc Việt Nam tỉnh Quảng Bình.</code> | | <code>đặng văn hoàn là ai</code> | <code>Đặng Văn Hoàn ::: Đặng Văn Hoàn là chính trị gia người Việt Nam. Ông từng giữ chức vụ Chủ tịch Ủy ban Mặt trận Tổ quốc Việt Nam tỉnh Quảng Bình.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 32 - `learning_rate`: 2e-05 - `num_train_epochs`: 2 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 2 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:-----:|:-------------:| | 0.0202 | 500 | 1.2155 | | 0.0404 | 1000 | 0.6001 | | 0.0606 | 1500 | 0.9561 | | 0.0809 | 2000 | 0.2795 | | 0.1011 | 2500 | 0.2862 | | 0.1213 | 3000 | 0.2602 | | 0.1415 | 3500 | 0.2055 | | 0.1617 | 4000 | 0.2193 | | 0.1819 | 4500 | 0.173 | | 0.2021 | 5000 | 0.206 | | 0.2223 | 5500 | 0.2145 | | 0.2426 | 6000 | 0.2737 | | 0.2628 | 6500 | 0.1864 | | 0.2830 | 7000 | 0.1821 | | 0.3032 | 7500 | 0.2188 | | 0.3234 | 8000 | 0.1846 | | 0.3436 | 8500 | 0.1669 | | 0.3638 | 9000 | 0.2594 | | 0.3841 | 9500 | 0.2418 | | 0.4043 | 10000 | 0.1964 | | 0.4245 | 10500 | 0.3534 | | 0.4447 | 11000 | 0.7956 | | 0.4649 | 11500 | 0.6488 | | 0.4851 | 12000 | 0.972 | | 0.5053 | 12500 | 0.3635 | | 0.5255 | 13000 | 0.5703 | | 0.5458 | 13500 | 0.6628 | | 0.5660 | 14000 | 0.5 | | 0.5862 | 14500 | 0.958 | | 0.6064 | 15000 | 0.9945 | | 0.6266 | 15500 | 0.5237 | | 0.6468 | 16000 | 0.219 | | 0.6670 | 16500 | 0.4622 | | 0.6873 | 17000 | 0.326 | | 0.7075 | 17500 | 0.2906 | | 0.7277 | 18000 | 0.2796 | | 0.7479 | 18500 | 0.3304 | | 0.7681 | 19000 | 0.4298 | | 0.7883 | 19500 | 0.3333 | | 0.8085 | 20000 | 0.3124 | | 0.8288 | 20500 | 0.2577 | | 0.8490 | 21000 | 0.2741 | | 0.8692 | 21500 | 0.3273 | | 0.8894 | 22000 | 0.1356 | | 0.9096 | 22500 | 0.0933 | | 0.9298 | 23000 | 0.08 | | 0.9500 | 23500 | 0.0767 | | 0.9702 | 24000 | 0.0702 | | 0.9905 | 24500 | 0.0661 | | 1.0107 | 25000 | 0.12 | | 1.0309 | 25500 | 0.1606 | | 1.0511 | 26000 | 0.6142 | | 1.0713 | 26500 | 0.4077 | | 1.0915 | 27000 | 0.1482 | | 1.1117 | 27500 | 0.1601 | | 1.1320 | 28000 | 0.1061 | | 1.1522 | 28500 | 0.1095 | | 1.1724 | 29000 | 0.1006 | | 1.1926 | 29500 | 0.1138 | | 1.2128 | 30000 | 0.097 | | 1.2330 | 30500 | 0.0993 | | 1.2532 | 31000 | 0.166 | | 1.2734 | 31500 | 0.0771 | | 1.2937 | 32000 | 0.1411 | | 1.3139 | 32500 | 0.0784 | | 1.3341 | 33000 | 0.0963 | | 1.3543 | 33500 | 0.0894 | | 1.3745 | 34000 | 0.1603 | | 1.3947 | 34500 | 0.0911 | | 1.4149 | 35000 | 0.1813 | | 1.4352 | 35500 | 0.3146 | | 1.4554 | 36000 | 0.9285 | | 1.4756 | 36500 | 0.6265 | | 1.4958 | 37000 | 0.5264 | | 1.5160 | 37500 | 0.3998 | | 1.5362 | 38000 | 0.7266 | | 1.5564 | 38500 | 0.2629 | | 1.5766 | 39000 | 0.9727 | | 1.5969 | 39500 | 0.5213 | | 1.6171 | 40000 | 0.9327 | | 1.6373 | 40500 | 0.1096 | | 1.6575 | 41000 | 0.2035 | | 1.6777 | 41500 | 0.3639 | | 1.6979 | 42000 | 0.2284 | | 1.7181 | 42500 | 0.1631 | | 1.7384 | 43000 | 0.1688 | | 1.7586 | 43500 | 0.3155 | | 1.7788 | 44000 | 0.2943 | | 1.7990 | 44500 | 0.253 | | 1.8192 | 45000 | 0.1851 | | 1.8394 | 45500 | 0.1784 | | 1.8596 | 46000 | 0.2623 | | 1.8799 | 46500 | 0.1054 | | 1.9001 | 47000 | 0.0491 | | 1.9203 | 47500 | 0.0445 | | 1.9405 | 48000 | 0.0418 | | 1.9607 | 48500 | 0.0399 | | 1.9809 | 49000 | 0.0396 | ### Framework Versions - Python: 3.11.10 - Sentence Transformers: 3.3.1 - Transformers: 4.46.3 - PyTorch: 2.5.1+cu124 - Accelerate: 1.1.1 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION" ]
[ "CHIA" ]
Non_BioNLP
simonosgoode/nomic_embed_fine_tune_law_1.5
simonosgoode
sentence-similarity
[ "sentence-transformers", "safetensors", "nomic_bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:13500", "loss:MultipleNegativesRankingLoss", "custom_code", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:nomic-ai/nomic-embed-text-v1.5", "base_model:finetune:nomic-ai/nomic-embed-text-v1.5", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,731
1,731
9
0
--- base_model: nomic-ai/nomic-embed-text-v1.5 library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:13500 - loss:MultipleNegativesRankingLoss widget: - source_sentence: 'cluster: SUMMARY: D Souza v. Canada (Citizenship and Immigration) Court (s) Database Federal Court Decisions Date 2021-12-16 Neutral citation 2021 FC 1430 File numbers IMM-6744-19 Decision Content Date: 20211216 Docket: IMM-6744-19 Citation: 2021 FC 1430 Ottawa, Ontario, December 16, 2021 PRESENT: The Honourable Mr. Justice Favel BETWEEN: RESHMA ANITHA D SOUZA Applicant and THE MINISTER OF CITIZENSHIP AND IMMIGRATION Respondent JUDGMENT AND REASONS I. Nature of the Matter [1] The Applicant seeks judicial review of a November 5, 2019 re-determination decision [Decision] of a visa officer [Officer] pursuant to section 72 of the Immigration and Refugee Protection Act, SC 2001, c 27 [IRPA]. The Officer refused the Applicant’s application for a temporary resident visa and work permit [the Application] because the Officer was not satisfied that the Applicant’s offer of employment [Employment Offer] was genuine. [2] The application for judicial review is allowed. II. Background [3] The Applicant is a citizen of India. In December 2018, the Applicant submitted her Application to work as an in-home child caregiver in Calgary, Alberta. At the time, she was working in the United Arab Emirates. Her Application was based on a positive Labour Market Impact Assessment issued to her prospective employers [Prospective Employers]. [4] In September 2018, the Applicant and the Prospective Employers signed a two-year employment contract, which they subsequently amended to reflect slightly less hours of work. The Prospective Employers’ two children were 8 and 15 years old at the time. In May 2019, the Application was refused. In July 2019, the Applicant applied for leave and for judicial review of the initial refusal. In September 2019, the Applicant agreed to discontinue her application for leave and for judicial review and the Respondent agreed to have another officer re-determine the Application. [5] On re-determination, the Officer requested further information from the Prospective Employers. The Applicant’s legal counsel submitted a September 20, 2019 letter [Legal Counsel’s Letter], in which they provided several documents, including a written employment offer addressed to the female Prospective Employer, updated Notices of Assessment [NOA], and the Applicant’s Employment Offer containing salary details. Ultimately, the Officer refused the Application because he was not satisfied that the Applicant’s Employment Offer was genuine. III. The Decision [6] The Decision consists of a letter dated November 5, 2019 and the accompanying Global Case Management System notes. The Officer determined that the Employment Offer was not genuine. The Officer was not satisfied that the Employment Offer was consistent with the Prospective Employer’s reasonable needs. Further, he was not satisfied that the Prospective Employers could support the Employment Offer. [7] In the earlier application, the Prospective Employers indicated that the female Prospective Employer had turned down employment because they were not able to find childcare. In response to the Officer’s request for additional information, Legal Counsel’s Letter provided submissions in support of the Application and provided, among other requested information, the female Prospective Employer’s written job offer dated October 25, 2018. The Officer mistakenly states the job offer is dated October 25, 2019. The Officer questioned why the Prospective Employers did not provide any job offers prior to the Officer’s request. The Officer suggested that this indicated that the female Prospective Employer’s search for work began only after the Officer requested evidence of job offers. [8] The Officer considered the Prospective Employers’ two most recent NOAs that showed an annual income of $81,423 and $79,717. The Officer took the higher number and added the female Prospective Employer’s benefits of $5,861. The Officer determined that the Prospective Employers would be left with an income of approximately $65,000 after paying the Applicant’s $27,300 salary. The Officer determined that $65,000 is not a reasonable income to live off, particularly when compared to the benefits of retaining an additional $30,000. The Officer concluded that this was evidence that the Prospective Employers may not be able to fulfill the terms of the Employment Offer. [9] With regard to the reasonable needs of the Prospective Employers, the Officer considered the age of their children and their need to hire a childcare provider. The Prospective Employers raised the best interests of the child, but the Officer found they provided little evidence of how the Applicant would improve the interests of the children. For example, the Officer noted that the Prospective Employers indicated that they needed a childcare provider to keep their 16-year-old out of trouble. However, there were few details of what trouble they expected or how a new caregiver would have more success than the teenager’s parents in dealing with a teenager. The Officer further found that it was unclear why the 16-year-old could not supervise the nine-year-old, especially given the costs of childcare. IV. Issues and Standard of Review [10] The issues in this case are: (1) Is the Decision reasonable? (2) Should the Court enter an indirect substitution or make a cost order in favour of the Applicant? [11] The first issue does not engage one of the exceptions set out in Canada (Minister of Citizenship and Immigration) v Vavilov, 2019 SCC 65 [Vavilov] and is therefore reviewable on the standard of reasonableness (Vavilov at paras 16-17, 23-25). In assessing the reasonableness of a decision, the Court must consider both the outcome and the underlying rationale to assess whether the “decision as a whole is transparent, intelligible and justified” (Vavilov at para 15). For a decision to be reasonable, a decision-maker must adequately account for the evidence before it and be responsive to the Applicant’s submissions (Vavilov at paras 89-96, 125-128). [12] There is no standard of review for the second issue. V. Preliminary Matter [13] At the hearing, the Respondent submitted a supplementary Certified Tribunal Record for filing. The Applicant did not object. The Court accepts it for filing. VI. Parties’ Positions and Analysis A. Is the Decision reasonable? (1) Applicant’s Position [14] The Officer misread or overlooked evidence that was material to the female Prospective Employer’s work situation. First, it was unreasonable for the Officer to conclude that the female Prospective Employer would refuse job offers. The Officer unreasonably speculated that in many families, both parents work without a caregiver. Furthermore, when the Officer reviewed the female Prospective Employer’s job offer he mistakenly said the date was October 25, 2019 when it was actually October 25, 2018. The Officer then questioned why the Prospective Employers had not provided earlier job offers. Finally, the Officer failed to account for the Prospective Employers explanation that she only had one written job offer because the rest were verbal. [15] Second, the Officer failed to consider that the male Prospective Employer travels extensively for work. This led the Officer to generalize about his ability to assist his wife. This generalization had no basis in the record. [16] Third, the Officer stated that the Prospective Employers failed to explain the reduction in work hours. This indicates that the Officer overlooked the letters from the Prospective Employers and the Applicant’s counsel’s submissions. Those letters explain that the change in proposed work hours was to accommodate the female Prospective Employer’s intention to return to school. [17] Finally, the Officer unreasonably concluded that the Prospective Employers intended to have the Applicant care for their 16-year-old son. This conclusion demonstrates that the Officer misread the Employment Offer, job description, and the Prospective Employer’s letter. This evidence makes it clear that the Prospective Employers’ intent is for the Applicant to care for their younger child. [18] With respect to the Prospective Employers’ ability to fulfil the terms of the Employment Offer, the Officer erred in determining that $65,000 “is not a reasonable salary at which to live in order to have a caregiver when compared to the benefits of having an additional $30,000.” The Applicant submits that the Officer departed from the established method for assessing financial sufficiency and adopted a methodology “without any apparent or known rules.” (2) Respondent’s Position [19] The Officer’s error with respect to the date of the female Prospective Employer’s job offer is inconsequential. One written job offer supports the Officer’s finding that there was insufficient evidence that the female Prospective Employer had turned down employment due to a lack of childcare. [20] The Respondent submits that Legal Counsel’s Letter, which provided an explanation for the reduction in the Applicant’s work hours, is not evidence. This letter required corroboration. Therefore, it was reasonable for the Officer to conclude that the Prospective Employers did not provide a clear reason for the reduction in hours. [21] The Officer did not err by stating that the Prospective Employers intended to have the Applicant care for both of their children. The Respondent notes the use of the word “children” in both the Application and the Employment Offer. The Respondent says the Officer was under no obligation to clarify this ambiguity for the Applicant. [22] Finally, the Officer’s assessment of the Prospective Employers’ financial situation, viewed in the broader context, was reasonable. (3) Analysis [23] In Aghaalikhani v Canada (Citizenship and Immigration), 2019 FC 1080 at para 24, Justice Gascon held that “when an administrative tribunal is silent on evidence clearly pointing to an opposite conclusion and squarely contradicting its findings of fact, the Court may intervene and infer that the tribunal overlooked the contradictory evidence when making its decision.” Justice Gascon also held that when parts of the evidence are “misapprehended” and “where the findings do not flow from the evidence” the decision will not be reasonable (at para 17). Likewise, more recently, in Vavilov the Supreme Court of Canada stated: [126] That being said, a reasonable decision is one that is justified in light of the facts: Dunsmuir, para. 47. The decision maker must take the evidentiary record and the general factual matrix that bears on its decision into account, and its decision must be reasonable in light of them: see Southam, at para. 56. The reasonableness of a decision may be jeopardized where the decision maker has fundamentally misapprehended or failed to account for the evidence before it. [24] I am persuaded by the Applicant’s submissions that the Officer overlooked or misapprehended material evidence, rendering the Decision unreasonable. First, the Officer fails to mention the Prospective Employers’ evidence that further job offers were not adduced because they were made orally. The Officer simply declares that “it is unclear why those earlier offers were not provided.” [25] The Officer also incorrectly cites the date on the female Prospective Employer’s job offer by one year. This mistake leads the Officer to infer that her job search began only after the Officer requested evidence of job offers. Based on this misapprehension of evidence, the Officer gives little weight to the female employer’s prospects of future employment, which is the primary reason for the Officer’s concerns about the Prospective Employers’ reasonable needs. [26] The Officer relies on this misapprehension when concluding that it is “unreasonable” for the female Prospective Employer to refuse work “given the number of families in Canada where both parents work without a caregiver.” There is no basis for this conclusion. This statement also indicates that the Officer overlooked the evidence of the male Prospective Employers extensive travel for work. Ultimately, I find the Officer’s findings related to the female Prospective Employer’s job situation unreasonable. [27] Furthermore, I find the Officer’s conclusion about the Prospective Employers’ ability to fulfil the terms of the Employment Offer speculative and not based on the evidence. The Officer failed to explain why the Prospective Employer’s ability to pay the Applicant was insufficient. The Officer makes no mention of the Prospective Employer’s savings nor explains why an income well above the Low Income Cut Off leads to concerns that the Prospective Employers cannot reasonably fulfill the Employment Offer. The reasons do not permit the Court to understand how the Officer arrived at this finding. [28] The Officer also expresses concern that the Prospective Employers could not reasonably fulfill the terms of the Applicant’s Employment Offer because they reduced the work hours in the updated contract by one hour per day. The Officer speculates that if the Prospective Employers’ needs increase to eight hours per day, the “financial arrangement would be even less reasonable.” While I am not convinced by the Applicant’s submissions that the change was “completely explained” by the Prospective Employer’s letter, the Officer nevertheless bases his finding on his assessment that $65,000 is not enough to live off. The Officer’s adverse inference concerning the updated employment hours flows from the flawed financial sufficiency analysis. The “assessment of the employer’s capacity to pay should not be based on speculation” (Bautista v Canada, 2018 FC 669 at para 16). [29] Finally, the Officer makes a number of errors regarding what child the Applicant would care for. The Officer erroneously states that the Prospective Employers “have indicated they want the Applicant to supervise the 16 year old to keep him out of trouble.” In their July 14, 2019 letter, the Prospective Employers explain that their “intention” is to hire a caregiver to care for the younger child. This would allow the female Prospective Employer to focus more of her attention on her 16-year-old son to ensure he does not “take the wrong path with peer group and pressure.” Furthermore, the Employment Offer only lists the younger child as the child in need of care. Finally, I note that the Officer questions why “a new caregiver would have more success” “if a teenager is disobedient to their parents.” This statement is also speculative and not based on the contents of the Application. [30] Ultimately, I find that the Decision is based on overlooked or misapprehended evidence. The Decision is not justified, transparent, and intelligible. Therefore, it is not reasonable. B. Should the Court enter an indirect substitution or make a cost order in favour of the Applicant? (1) Applicant’s Position [31] The Applicant requests a “directed verdict” on the ground that all factual findings have been made. The Applicant states that the Court can make a decision “without wading into the decision-making process on the basis of an incomplete factual record” and without weighing the evidence “in place of the decision-maker.” The Respondent has not submitted contrary evidence and therefore, the Court is not being asked to weigh evidence. The Applicant points to paragraph 142 of Vavilov where the Supreme Court of Canada stated: …An intention that the administrative decision maker decide the matter at first instance cannot give rise to an endless merry-go-round of judicial reviews and subsequent reconsiderations. Declining to remit a matter to the decision maker may be appropriate where it becomes evident to the court, in the course of its review, that a particular outcome is inevitable and that remitting the case would therefore serve no useful purpose. [32] It has been more than a year since the Applicant applied for her work permit. The Applicant submits that allowing a “directed verdict” will spare her from another lengthy delay. (2) Respondent’s Position [33] The Respondent submits that this is not an appropriate case for indirect substitution – the correct term for “directed verdict” in the administrative context – because, contrary to what the Applicant submits, “there is not only one lawful response, or one reasonable conclusion.” The Respondent also submits that this is not an appropriate case for a costs award because “[t]here is no evidence that the Respondent has unnecessarily or unreasonably prolonged the proceedings.” (3) Analysis [34] In Canada (Citizenship and Immigration) v Tennant, 2019 FCA 206, the Federal Court of Appeal [FCA] determined that the remedy of indirect substitution is an exceptional power under the law of judicial review (at para 79). It is available in cases where “the court concludes that there is only one reasonable outcome, so that returning the matter to the administrative decision-maker would be pointless” (at para 82). [35] I agree with the Respondent that this is not an exceptional case warranting indirect substitution. Although I have concluded that the Officer overlooked or misapprehended the evidence, this does not lead to the conclusion that there is “only one reasonable outcome.” [36] Rule 22 of the Federal Courts Citizenship, Immigration and Refugee Protection Rules, SOR/93-22 [Rules], provides that no costs shall be awarded in respect of an application for judicial review unless the Court, for special reason, so orders. The Rules do not define “special reasons.” At paragraph 7 of Ndungu v Canada (Citizenship and Immigration), 2011 FCA 208 [Ndungu] the FCA summarized the non-exhaustive circumstances in which special reasons will be found to justify an award of costs as well as situations that fall short of the “special reasons” standard. In Sisay Teka v Canada (Immigration, Refugees and Citizenship), 2018 FC 314, this Court held that the special reasons exception contemplated in Rule 22 is a “high bar” (at para 41). [37] The Applicant submits that the Respondent has unnecessarily or unreasonably prolonged the proceedings and that this constitutes special reasons (Ndererehe v Canada (Citizenship and Immigration), 2007 FC 880 [Ndererehe]). Ndererehe is distinguishable from the present case. In that case, unlike here, the prolonging of the proceedings led to the Applicants facing risk to their personal safety. Additionally, in Ndererehe the Court found that the Applicant’s situation was oppressive and threatening and that they had suffered since their application was refused (at para 23). Again, no such circumstances are present here. [38] In Ndungu, the FCA made clear that an award of costs cannot be justified merely because “an immigration official has made an erroneous decision” (at para 7). In my view, this case does not meet the high bar for a costs award. VII. Conclusion [39] The Decision is not reasonable. It lacks the requisite degree of transparency, intelligibility, and justification. The application for judicial review is allowed. [40] This is not an appropriate case for the Court to enter an indirect substitution or to award costs. [41] The parties did not raise any question of general importance for certification and none arises. JUDGMENT in IMM-6744-19 THIS COURT’S JUDGMENT is that: The application for judicial review allowed. There is no question for certification. There is no order as to costs. "Paul Favel" Judge FEDERAL COURT SOLICITORS OF RECORD Docket: IMM-6744-19 STYLE OF CAUSE: RESHMA ANITHA D SOUZA v THE MINISTER OF CITZENSHIP AND IMMIGRATION PLACE OF HEARING: HELD BY VIDEOCONFERENCE DATE OF HEARING: june 2, 2021 JUDGMENT AND reasons: FAVEL J. DATED: december 16, 2021 APPEARANCES: Deanna Okun-Nachoff For The Applicant Julio Paoletti For The Respondent SOLICITORS OF RECORD: Deanna Okun-Nachoff McCrea Immigration Law Vancouver, B.C. For The Applicant Julio Paoletti Attorney General of Canada Vancouver, B.C. For The Respondent ' sentences: - 'cluster: CONCLUSION: The court allows the application for judicial review and sets aside the decision of the visa officer. The court declines to enter an indirect substitution or make a cost order in favour of the person concerned, as it does not meet the high bar for a costs award. The court finds that the decision was not reasonable, and the person concerned''s application for a temporary resident visa and work permit should be reconsidered.' - 'cluster: SUMMARY: **(1) Facts** The person concerned, a citizen of India, applied for a temporary resident visa and work permit to work as an in-home child caregiver in Calgary, Alberta. Her application was based on a positive Labour Market Impact Assessment issued to her prospective employers, a couple with two children. The couple signed a two-year employment contract with the person concerned, which they later amended to reflect slightly less hours of work. The person concerned''s application was initially refused, and she applied for leave and judicial review. The case was re-determined by a different officer, who requested further information from the couple. The person concerned''s legal counsel submitted additional documents, including a written employment offer and updated Notices of Assessment. Despite this, the officer refused the application, concluding that the employment offer was not genuine. **(2) Issue** The main issue before the court is whether the decision of the visa officer to refuse the person concerned''s application for a temporary resident visa and work permit was reasonable. The court also considers whether it should enter an indirect substitution or make a cost order in favour of the person concerned. **(3) Rule** The court applies the standard of review of reasonableness, as set out in Canada (Minister of Citizenship and Immigration) v Vavilov, 2019 SCC 65. The court must consider both the outcome and the underlying rationale of the decision to assess whether it is transparent, intelligible, and justified. **(4) Analysis** The court finds that the decision of the visa officer was not reasonable. The officer overlooked or misapprehended material evidence, including the couple''s explanation for the reduction in work hours and the female employer''s prospects of future employment. The officer also made several errors regarding the couple''s ability to fulfill the terms of the employment offer, including their financial situation and the number of hours they would need to hire a caregiver. The court finds that the officer''s findings were speculative and not based on the evidence. The court also finds that the decision is not justified, transparent, and intelligible. The officer''s conclusion that the employment offer was not genuine is not supported by the evidence, and the officer failed to consider the couple''s explanations and submissions. **(5) Conclusion** The court allows the application for judicial review and sets aside the decision of the visa officer. The court declines to enter an indirect substitution or make a cost order in favour of the person concerned, as it does not meet the high bar for a costs award. The court finds that the decision was not reasonable, and the person concerned''s application for a temporary resident visa and work permit should be reconsidered.' - 'cluster: SUMMARY: **(1) Facts** The person concerned, a visa applicant, sought judicial review of a decision made by the Visa Officer, who was acting on behalf of the Minister of Citizenship and Immigration. The Visa Officer had refused to issue a visa to the applicant due to concerns about his adaptability for personal suitability purposes. Specifically, the Visa Officer had questioned the applicant''s lack of travelling experience. The applicant had not provided any evidence to counter the Visa Officer''s concerns, and the officer had not informed the applicant of her assessment of his evidence during the interview. The applicant''s counsel argued that the Visa Officer had taken into account an irrelevant consideration and that the applicant had not been given an opportunity to respond to the officer''s concerns. **(2) Issue** The issue before the court was whether the Visa Officer had erred in refusing to issue a visa to the applicant due to his lack of travelling experience. The court had to determine whether the officer''s consideration of this factor was relevant and whether the applicant had been given adequate opportunity to respond to the officer''s concerns. **(3) Rule** The court applied the relevant rules and principles of administrative law, including the duty of a decision-maker to consider relevant factors and to provide adequate reasons for their decisions. The court also considered the principle that a decision-maker is not required to inform an applicant of their assessment of their evidence during an interview. **(4) Analysis** In analyzing the issue, the court noted that the applicant''s lack of travelling experience may be relevant to the question of his adaptability for personal suitability purposes. The court also noted that there was no indication that undue emphasis was placed on this consideration by the Visa Officer. Furthermore, the court held that the applicant had not been denied an opportunity to respond to the officer''s concerns, as the officer had not been required to inform the applicant of her assessment of his evidence during the interview. The court concluded that the Visa Officer had not erred in refusing to issue a visa to the applicant. **(5) Conclusion** The judicial review was dismissed, and the decision of the Visa Officer was upheld. The court found that the Visa Officer had not taken into account an irrelevant consideration and that the applicant had been given adequate opportunity to respond to the officer''s concerns. The court''s decision was based on the principles of administrative law and the relevant rules governing the decision-making process of the Visa Officer.' - source_sentence: 'cluster: RULES: Canada (Attorney General) v. Patterson Court (s) Database Federal Court Decisions Date 2004-09-21 Neutral citation 2004 FC 1292 File numbers T-1455-04 Decision Content Date: 20040921 Docket: T-1455-04 Citation: 2004 FC 1292 Ottawa, Ontario this 21st day of September 2004 Present: The Honourable Madam Justice Heneghan BETWEEN: THE ATTORNEY GENERAL OF CANADA Applicant and STEPHEN JOHN PATTERSON Respondent REASONS FOR ORDER AND ORDER INTRODUCTION [1] The Attorney General of Canada (the "Applicant") seeks an interim injunction directing Mr. Stephen John Patterson (the "Respondent") to "forthwith forfeit and surrender possession" to officers of the Canadian Food Inspection Agency ("CFIA") a northern flying squirrel in his possession that he imported from the United States of America into Canada on or about June 26, 2004, through the port of entry at Windsor, Ontario. [2] The Applicant brings this application pursuant to the Canadian Food Inspection Agency Act, S.C. 1997, c. 6, as amended, (the "Act") and the Federal Court Rules, 1998, SOR/1998-106, as amended (the "Rules"). BACKGROUND [3] The Respondent, according to his affidavit filed in this proceeding, is a conservationist and naturalist who has spent the last ten years studying flying squirrels. He describes himself as an expert in that area and a few years ago he created a web site to provide information to interested persons about flying squirrels. [4] The Respondent states that the flying squirrels are indigenous to Canada but it is prohibited to capture such an animal and keep it in captivity, for any purpose. In the spring of this year, he decided to legally import a northern flying squirrel for educational purposes, in order to provide presentation to nature groups, such as children and other visitors to provincial parks. [5] The Respondent obtained an authorization from the Ministry of Natural Resources for the Province of Ontario on June 8, 2004. According to its face, this authorization was issued pursuant to the Fish and Wildlife Conservation Act, 1997, S.O. 1997, c. 41. The authorization is entitled "Authorization to Keep Specially Protected and Game Wildlife in Captivity" and identifies the species of wildlife as "Northern Flying Squirrel". The authorization refers to one animal and is effective from June 8, 2004 until December 31, 2006. [6] The Respondent obtained a permit from the U.S. Fish and Wildlife Service on June 15, 2004. This written authorization is described as an "Import/Export Licence" with effect between June 15, 2004 and May 31, 2005. [7] On June 25, 2004, the Respondent executed a "Declaration for Importation or Exportation of Fish or Wildlife" with the U.S. Fish and Wildlife Service. This permit identifies the species of wildlife by both its official name, "glaucomys sabrinus" and its common name, "northern flying squirrel". [8] On June 26, 2004, the Respondent purchased the northern flying squirrel from Ratkateers Rodentry of Marshall, Indiana. That entity issued the Respondent a "Record of Acquisition, Disposition or Transport of Animals" under the auspices of the U.S. Department of Agriculture, Animal and Plant Health Inspection Service. [9] On the same day, that is June 26, 2004, the Respondent entered Canada by car at the Windsor Ambassador Bridge. He provided all the previously mentioned documentation to the Canada Customs and Revenue Agency officer and was allowed to enter Canada with the northern flying squirrel. [10] On July 5, 2004, the Respondent was contacted by a representative of the CFIA who wanted to know if he had imported a northern flying squirrel into Canada. The Respondent confirmed that he had done so. He was advised that such importation was illegal and that he would "likely have to remove it". [11] On the following day, July 6, 2004, the Respondent received an "Order to Remove" from the CFIA directing the removal of the animal from Canada by July 9, 2004. The Respondent did not remove the northern flying squirrel pursuant to the Order and offered to submit the squirrel to quarantine and to pay the costs to have it tested for any disease. That offer was refused by the Applicant. [12] The Respondent states, in his affidavit, that on August 16, 2004, he brought the squirrel to the Britannia Animal Hospital in Mississauga, Ontario for an examination. According to the affidavit of the examining veterinarian, Dr. Jaques, the squirrel was free of any sign of illness or disease. [13] On or about July 8, 2004, the Respondent made inquiries of the Centre for Disease Control ("CDC") in Atlanta, Georgia, United States of America. He says that he was informed that that body had not found an association between monkeypox and flying squirrels. Further, the Respondent says that he was advised by the CDC that the same information was relayed to Dr. Mroz. of the CFIA. [14] The Applicant filed the affidavit of Dr. Debbie Barr, a Senior Staff Veterinarian with the CFIA. In her affidavit, Dr. Barr related the steps undertaken by the CFIA to effect the removal of the flying squirrel after becoming aware of its presence in Canada. The CFIA issued an Order to Remove pursuant to the Health of Animals Act, S.C. 1990, c. 21, as amended, on July 6, 2004, and specified a removal date of July 9, 2004. [15] As well, Dr. Barr discussed the regulation of imported animals and the monkeypox virus. She described that virus as a "rare viral disease that occurs mostly in central and western Africa." She said that an outbreak of monkeypox was reported in the United States of America in June 2003. The Canadian response to the outbreak of monkeypox in the United States was to exercise its power under the Health of Animals Act, supra, to refuse entry into Canada of "susceptible animals based on suspicion of disease." [16] This led to the enactment of the Prairie Dog and Certain Other Rodents Importation Prohibition Regulations, SOR/2003-310 ("Importation Prohibition Regulations") on September 10, 2003. According to these Regulations, the importation of any squirrel of the family Sciuridae from any country is prohibited. [17] Dr. Barr deposed that when the Importation Prohibition Regulations came into effect, the CDC had not completed its investigation into the monkeypox outbreak in the United States. The CFIA considers the Importation Prohibition Regulations to be part of its ongoing policy to prevent the introduction of animal disease into Canada that could affect either human health or the Canadian livestock industry. [18] The CFIA seeks injunctive relief from the Court because the Respondent has failed to comply with the Order to Remove. Although the issue of the monkeypox virus was addressed in the affidavit of Dr. Barr, the Applicant made it clear on the record during the hearing of this application that he was not relying on that issue as a basis for his request for an interim injunction. DISCUSSION [19] The Applicant is seeking an interim injunction pursuant to section 18 of the Act which provides as follows: 18. The Agency may apply to a judge of a court of competent jurisdiction for an interim injunction enjoining any person from contravening an Act or provision that the Agency enforces or administers by virtue of section 11, whether or not a prosecution has been instituted in respect of that contravention. 18. L''Agence peut demander à un juge d''une juridiction compétente une ordonnance provisoire interdisant toute contravention à une loi ou disposition don''t elle est chargée d''assurer ou de contrôler l''application aux termes de l''article 11 - que des poursuites aient été engagées ou non sous le régime de celle-ci. [20] Section 11 of the Act authorizes the CFIA to supervise the implementation of the statutes, including the Health of Animals Act, supra under which the Order to Remove was issued. Section 11(1) provides as follows: 11. (1) The Agency is responsible for the administration and enforcement of the Agriculture and Agri-Food Administrative Monetary Penalties Act, Canada Agricultural Products Act, Feeds Act, Fertilizers Act, Fish Inspection Act, Health of Animals Act, Meat Inspection Act, Plant Breeders'' Rights Act, Plant Protection Act and Seeds Act. 11. (1) L''Agence est chargée d''assurer et de contrôler l''application des lois suivantes_: la Loi sur les sanctions administratives pécuniaires en matière d''agriculture et d''agroalimentaire, la Loi sur les produits agricoles au Canada, la Loi relative aux aliments du bétail, la Loi sur les engrais, la Loi sur l''inspection du poisson, la Loi sur la santé des animaux, la Loi sur l''inspection des viandes, la Loi sur la protection des obtentions végétales, la Loi sur la protection des végétaux et la Loi sur les semences. [21] The Applicant says that this statutory provision is the only means available to obtain possession of the flying squirrel, for removal from Canada. The Applicant has provided a written undertaking in his submissions to respect any order for damages that may be made by this Court. [22] The parties agree that the applicable test for the granting of an interim injunction is that set out in RJR Macdonald Inc. v. Canada, [1994] 1 S.C.R. 311, that is the existence of a serious issue for trial, irreparable harm if the relief sought is denied and that the balance of convenience favours the moving party. The Applicant bears the burden of establishing all three factors. [23] According to the Applicant, the serious issue arises in relation to the Respondent''s alleged continuing violation of the Importation Prohibition Regulations. [24] In the present case, the sole basis for the underlying application for judicial review is the request for an interim injunction for the delivery up of the flying squirrel now in the possession of the Respondent. The Applicant says that this is the only way that he can seek this relief pursuant to section 18 of the Act. The Respondent says this is not a "serious issue", within the meaning of RJR-MacDonald, supra, since the activity complained about, that is the "importation of the squirrel", is not continuing but has already occurred. [25] In my opinion, it is doubtful whether the Applicant has shown that a serious issue exists here. It is not necessary to decide that issue, however, because I am not satisfied that the Applicant has met the test of irreparable harm. [26] The Applicant characterizes the Respondent''s disobedience of the Order to Remove as constituting irreparable harm. I disagree. The test for irreparable harm is well known. An applicant must produce evidence of irreparable harm that is clear and not speculative, and that evidence must show that irreparable harm would occur, not that it is likely. In this regard, I refer to Centre Ice Ltd. v. National Hockey League (1994), 53 C.P.R. (3d) 34 (Fed. C.A.). [27] The Applicant is not arguing nor has he shown that public health is an issue in this motion. At the same time, the Applicant submits that the Health of Animals Act, supra, prohibits a person from possessing an animal that was known to be imported contrary to that Act or regulations passed pursuant to it. [28] The Respondent has not been charged with any breach of the applicable regulations or of a statute. There is no evidence that he knowingly acted in breach of any statutory regime and that issue is not before this Court. It is not necessary to address the issue of balance of convenience since the Applicant has failed to establish irreparable harm. [29] The Applicant is seeking an interim injunction and it has not met the legal test for obtaining that relief. Accordingly the motion is dismissed with costs to the Respondent. ORDER The motion is dismissed with costs to the Respondent. "E. Heneghan" J.F.C. FEDERAL COURT NAME OF COUNSEL AND SOLICITORS OF RECORD DOCKET: T-1455-04 STYLE OF CAUSE: ATTORNEY GENERAL OF CANADA Applicant and STEPHEN JOHN PATTERSON Respondent PLACE OF HEARING: Toronto, Ontario DATE OF HEARING: September 13, 2004 REASONS FOR ORDER AND ORDER: HENEGHAN J. DATED: September 21, 2004 APPEARANCES: Eric Peterson FOR APPLICANT Clayton C. Ruby Brian Shiller FOR RESPONDENT SOLICITORS OF RECORD: Morris Rosenberg Deputy Attorney General of Canada FOR APPLICANT RUBY & EDWARDH Barristers & Solicitors Toronto, ON FOR RESPONDENT ' sentences: - 'cluster: ISSUES: The main issue before the court is whether the Applicant, the Attorney General of Canada, has met the test for obtaining an interim injunction, specifically the existence of a serious issue for trial, irreparable harm if the relief sought is denied, and that the balance of convenience favours the moving party.' - 'cluster: ANALYSIS: The Court found that the Board did not err by conducting a state protection analysis without first assessing the person concerned''s credibility. Although it would have been preferable for the Board to assess the person concerned''s subjective fear before analyzing the objective component, the failure to do so did not raise a reviewable error. The Court accepted that the Board conducted the state protection analysis while accepting that the person concerned''s allegations were true. The Board engaged with the factual matrix presented by the person concerned''s testimony and found that he failed to rebut the presumption of state protection because he did not take all reasonable steps under the circumstances to seek state protection in Guyana.The Court also found that the Board''s state protection finding was reasonable. The Board thoroughly canvassed the nature of the person concerned''s claim of risk and reviewed the documentary evidence on country conditions in its detailed reasons. The person concerned failed to provide any convincing evidence demonstrating that protection was neither forthcoming nor adequate. The documentary evidence cited by the person concerned lacked specificity and did not directly counter the Board''s conclusion that Guyana is in effective control of its territory and has in place a functioning security force to uphold the laws and constitution of the country.' - 'cluster: RULES: The court applied the test for interim injunctions set out in RJR Macdonald Inc. v. Canada, [1994] 1 S.C.R. 311, which requires the Applicant to establish a serious issue for trial, irreparable harm, and that the balance of convenience favours the moving party. The court also considered section 18 of the Canadian Food Inspection Agency Act, which authorizes the CFIA to apply for an interim injunction to enjoin any person from contravening an Act or provision that the Agency enforces or administers.' - source_sentence: 'cluster: SUMMARY: Choezom v. Canada (Minister of Citizenship and Immigration) Court (s) Database Federal Court Decisions Date 2004-09-30 Neutral citation 2004 FC 1329 File numbers IMM-1420-04 Notes Digest Decision Content Date: 20040930 Docket: IMM-1420-04 Citation: 2004 FC 1329 Ottawa, Ontario, this 30th day of September, 2004 Present: THE HONOURABLE MR. JUSTICE von FINCKENSTEIN BETWEEN: TENDZIN CHOEZOM Applicant and THE MINISTER OF CITIZENSHIP AND IMMIGRATION Respondent REASONS FOR ORDER AND ORDER [1] The 30 year old Applicant is the daughter of Tibetan refugees. She was born in India, where her family continues to reside, but is considered to be a citizen of the People''s Republic of China. [2] From the time of her birth until 1994, the Applicant, as all other Tibetan residents of India, was required to obtain a Registration Certificate (RC), which was renewed annually. In 1994, when she travelled to the United States for the purposes of study and employment, she was issued an Identity Certificate (IC), which she continues to renew periodically. The Applicant must obtain a visa and carry her IC with her if she wishes to visit India. If she were to return to reside in India, she would need to have a valid IC, a visa and would need to first obtain a NORI (No Objection for Return to India). Once back in India she would need to obtain a new RC . While living in India, Tibetans are not permitted to travel to many locations in India without permission from local authorities or the police, [3] The Applicant resided in the United States until 2003. During this time, she did not claim asylum. Instead she travelled to Canada in 2003 where she claimed refugee protection on numerous grounds, including race and religion. [4] The determinative issue before the Board was whether or not the Applicant was a refugee by virtue of the exclusionary clause set out in Article 1(E) of the Refugee Convention. In its Reasons the Board noted: 1. The Applicant has consistently been able to obtain identification documents which provided her with a right to reside in and return to India; 2. The Applicant has been freely able to work in her profession of choice; 3. The Applicant would have been able to pursue schooling if she had not decided to move to the United States; and 4. The Applicant has been provided with the same food rations and medical services as Indian citizens. [5] Accordingly, the Board concluded that the Applicant had the normal rights and obligations of an Indian citizen and therefore was excluded from the definition of convention refugee pursuant to Article 1(E) of the Refugee Convention. The Applicant now seeks judicial review of this decision. ISSUE [6] Did the Board err in concluding that the Applicant was excluded from the definition of Convention Refugee pursuant to Article 1(E) of the Refugee Convention? [7] RELEVANT LEGISLATION Immigration and Refugee Protection Act, R.S.C. 2001, c. 27 (IRPA) Exclusion - Refugee Convention 98. A person referred to in section E or F of Article 1 of the Refugee Convention is not a Convention refugee or a person in need of protection. Exclusion par application de la Convention sur les réfugiés 98. La personne visée aux sections E ou F de l''article premier de la Convention sur les réfugiés ne peut avoir la qualité de réfugié ni de personne à protéger. Convention relating to the Status of Refugees, Article 1 Article 1. - Definition of the term "refugee" E. This Convention shall not apply to a person who is recognized by the competent authorities of the country in which he has taken residence as having the rights and obligations which are attached to the possession of the nationality of that country. Article premier. -- Définition du terme "réfugié" E. Cette Convention ne sera pas applicable à une personne considérée par lesautorités compétentes du pays dans lequel cette personne a établi sa résidencecomme ayant les droits et les obligations attachés à la possession de lanationalité de ce pays. STANDARD OF REVIEW [8] The standard of review for decisions as to whether or not an applicant falls within Article 1(E) of the Refugee Convention was considered by Blais J. in Hassanzadeh v. Canada (M.C.I.), [2003]_F.C.J. No. 1886 at para 18 ; The standard of review is that of the patently unreasonable decision, or an error of law, or a denial of natural justice. The applicant has not argued the latter possibility, and thus we are left with needing to find a patently unreasonable finding of fact or an error of law to overturn the decision. [9] Whether or not an individual will be excluded pursuant to Article 1(E) requires an examination of all of the circumstances of the case, including: a) right to return to and reside for an unlimited period of time in the country of residence, b) the right to study, c) the right to work; and d) the right to access basic social services in that country. See Shamlou v. Canada (Minister of Citizenship and Immigration), [1995] F.C.J. No. 1537; Kanesharan v. Canada (Minister of Citizenship and Immigration), [1996] F.C.J. No. 1278). [10] It is fairly self evident that the most relevant factor is the right of return and the nature of the residence. [11] In other cases involving Indian-born Tibetans the Board consistently has held that Article 1(E) does not apply. See C.R.D. (Re) [2000] C.R.D.D. No. 160, F.F.X. (Re) [2000] C.R.D.D. No. 159. [12] In Kroon v. Canada (M.C.I.), [1995] F.C.J. No. 11) MacKay J held that: In my view, the purpose of Article 1E is to support regular immigration laws of countries in the international community, and within the Immigration Act of this country to support the purposes of that Act and the policies it seeks to legislate, by limiting refugee claims to those who clearly face the threat of persecution. If A faces such a threat in his own country, but is living in another country, with or without refugee status, and there faces no threat of persecution for Convention reasons, or put another way, A there enjoys the same basic rights of status as nationals of the second country, the function of Article 1E is to exclude that person as a potential refugee claimant in a third country. (Underlining added) [13] IRB Document IND33125.X, dated December 23rd, 1999, and IRB Document IND22524.E, dated December 21, 1995, provide evidence regarding the requirements for RC''s IC''s, visas, NORI''s and the internal travel restrictions imposed on Tibetans regarding certain locations. The Board itself did not take issue with this evidence. It, however, came to the following conclusion: Based upon the claimant''s counsel''s admissions and the documentary evidence before me, I find on a balance of probabilities that the claimant has a right of return to India, her former country of residence, that Indian authorities would issue her a Registration Certificate for Tibetans upon her return to India, and that she would not be at risk of being deported to Tibet. In making this finding, I also refer to claimant''s testimony at the hearing. Prior to her departure from India to the United Sates, her father was a member of the Tibetan Government in Exile Assembly, and her mother was a cabinet minister with the Tibetan Government in Exile. Her parents who continue to reside in India travelled abroad frequently and to her knowledge they experienced no difficulties in returning to India after travelling abroad. In addition, the claimant testified that she experienced no difficulties in returning to India from the United States in 1993. [14] I find it difficult to accept this conclusion. On the basis of this evidence, it is self evident that the Applicant (with respect to the fundamental right of return and the nature of the residence in India) does not have the same rights as an Indian citizen. The need for annual RC''s, IC''s, visas, NORI''s and the prohibition to visit certain locations within India are all antithetical to the ''basic rights of status as nationals''. All of these rights are not permanent and their renewal is at the discretion of the Indian government. It may be changed at any time for political, geopolitical (i.e. the need for good relations with China) or security reasons. The fact that there is no evidence that the Indian government has so far refused to issue RC''s, IC''s, visas or NORI''s does not mean it has given up the right to do so. The Tibetans'' existence in India is thus at the sufferance of the Indian government. As right to stay at sufferance does not amount to ''the same basic rights of status as nationals'' of India enjoy. In my view the Board erred in concluding that the Applicant falls within the exclusion set out in Article 1(E) of the Refugee Convention. [15] The Respondent argued as an alternative justification for the Board''s decision that Tibetans may apply for Indian citizenship. However, the evidence on this point is inconclusive and the Board itself was ambivalent on this point, Generally, Indian citizenship is not available to Tibetan refugees. There are, however, some exceptions to this rule, whereby second-generation Tibetans who are born in India may apply for Indian nationality. Some sources, however, suggest that there are formal barriers to Tibetan refugees applying for citizenship, as with all other foreign residents, but the application is likely to be refused. [16] Given this finding there is no need for me to address this point. [17] In light of the foregoing reasons this application will be allowed. ORDER THIS COURT ORDERS that the decision of the Immigration and Refugee Board dated December 10, 2003, is set aside and the matter is referred back to another panel for reconsideration. "K. von Finckenstein" Judge FEDERAL COURT NAMES OF COUNSEL AND SOLICITORS OF RECORD DOCKET: IMM-1420-04 STYLE OF CAUSE: TENDZIN CHOEZOM v. THE MINISTER OF CITIZENSHIP AND IMMIGRATION PLACE OF HEARING: CALGARY, ALBERTA DATE OF HEARING: September 16, 2004, REASONS FOR : DATED: September 30, 2004 APPEARANCES: Mr. G. Michael Sherritt FOR APPLICANT Mr. Robert Drummond FOR RESPONDENT SOLICITORS OF RECORD: Sherritt Greene Calgary, Alberta FOR APPLICANT Morris Rosenberg, Deputy Attorney General of Canada (Edmonton Regional Office) Edmonton, Alberta FOR RESPONDENT ' sentences: - 'cluster: ISSUES: The issue before the court is whether the person concerned is a refugee excluded from the definition of a Convention Refugee pursuant to Article 1(E) of the Refugee Convention. This exclusion applies to individuals who are recognized by the competent authorities of their country of residence as having the rights and obligations attached to the possession of that country''s nationality.' - 'cluster: SUMMARY: **(1) Facts** The person concerned, a 30-year-old daughter of Tibetan refugees, was born in India and is considered a citizen of the People''s Republic of China. She was required to obtain a Registration Certificate (RC) annually until 1994, when she was issued an Identity Certificate (IC) instead. The IC is renewed periodically and is required for her to visit India. She must also obtain a visa and a No Objection for Return to India (NORI) to return to India, where she would need to obtain a new RC. While living in India, Tibetans are subject to certain restrictions, including needing permission from local authorities or the police to travel to certain locations. The person concerned resided in the United States from 1994 to 2003 and did not claim asylum. She then travelled to Canada in 2003 and claimed refugee protection on various grounds, including race and religion. **(2) Issue** The issue before the court is whether the person concerned is a refugee excluded from the definition of a Convention Refugee pursuant to Article 1(E) of the Refugee Convention. This exclusion applies to individuals who are recognized by the competent authorities of their country of residence as having the rights and obligations attached to the possession of that country''s nationality. **(3) Rule** The court must apply the standard of review for decisions regarding Article 1(E) of the Refugee Convention, which is that of a patently unreasonable decision or an error of law. The court must examine all the circumstances of the case, including the right to return to and reside in the country of residence, the right to study, work, and access basic social services. **(4) Analysis** The court found that the Immigration and Refugee Board (IRB) erred in concluding that the person concerned was excluded from the definition of a Convention Refugee. The court noted that the person concerned''s need for annual RCs, ICs, visas, NORIs, and the prohibition on visiting certain locations within India are antithetical to the "basic rights of status as nationals." The court also found that the person concerned''s right to stay in India is at the sufferance of the Indian government and does not amount to the same basic rights of status as nationals of India enjoy. The court also considered the Respondent''s alternative justification that Tibetans may apply for Indian citizenship, but found that the evidence on this point is inconclusive and the Board itself was ambivalent on this issue. **(5) Conclusion** The court allowed the application and set aside the decision of the IRB, referring the matter back to another panel for reconsideration. The court found that the IRB erred in concluding that the person concerned was excluded from the definition of a Convention Refugee pursuant to Article 1(E) of the Refugee Convention.' - 'cluster: RULES: The court applied the standard of reasonableness to review the decision, which requires that a decision be transparent, justifiable, and intelligible. The court found that the Officer''s reasons for refusing the application did not meet this standard, as they were based on a subjective assessment of the person concerned''s qualifications and background, without providing sufficient explanation or justification for the decision.' - source_sentence: 'cluster: CONCLUSION: Wu v. Royal Bank of Canada Court (s) Database Federal Court Decisions Date 2008-08-07 Neutral citation 2008 FC 935 File numbers T-351-08 Decision Content Date: 20080807 Docket: T-351-08 Citation: 2008 FC 935 Vancouver, British Columbia, August 7, 2008 PRESENT: Roger R. Lafrenière, Esquire Prothonotary BETWEEN: LI MIN ("AMANDA") WU Applicant and ROYAL BANK OF CANADA Respondent REASONS FOR ORDER AND ORDER LAFRENIÈRE P. [1] The present motion arises in the context of an application for judicial review in respect of the decision of Adjudicator Petersen made pursuant to the Canada Labour Code dismissing the Applicant’s complaint that she was unjustly dismissed by the Royal Bank of Canada (RBC). The Applicant seeks an order compelling answers to written cross-examination questions that were refused by RBC on the grounds of relevance. Background [2] The Applicant’s employment as a credit adjudication agent with RBC was terminated on July 12, 2006 based on allegations of misappropriation of funds. The Applicant filed a complaint of unjust dismissal and the matter proceeded to adjudication over a period of six days in July 2007. Adjudicator Petersen heard the evidence of a number of witnesses, including the Applicant, who testified on her own behalf. In a 24 page decision dated February 1, 2008, Adjudicator Peterson dismissed the Applicant’s complaint. [3] The Applicant filed a Notice of Application on March 3, 2008 for an order quashing the Adjudicator’s decision and referring the matter back for redetermination. Four main grounds are cited. First, the Adjudicator acted without or beyond jurisdiction by upholding the Applicant’s dismissal for non-work-related conduct and contrary to RBC’s policies, practices and guidelines for discipline. Second, the Adjudicator failed to observe the principle of natural justice and procedural fairness, and in particular failed to provide an interpreter. Third, the Adjudicator erred in law in making his decision. Fourth, the Adjudicator based his decision on erroneous findings of fact made in a perverse or capricious manner. [4] The Applicant filed an affidavit in support of her application on March 28, 2008. Paragraphs 2 to 28 of the affidavit consist of facts leading to the Applicant’s dismissal. It is unclear whether these facts were before the adjudicator, or whether the Applicant is attempting to introduce new evidence. At paragraphs 31 to 35, the Applicant complains about the conduct of the hearing before the Adjudicator. She says that an interpreter was not provided to her at the hearing. She also claims that she was visibly stressed and anxious during cross-examination by RBC’s counsel. She further alleges that she was denied an opportunity to speak to her lawyer before the Adjudicator ruled that the speaking notes which she was referring to during her testimony should be entered into evidence. [5] RBC responded by filing the affidavits of Joan Nicholson, Jennifer Roper and Bob Montgomery. In the last four paragraphs of her two page affidavit, Ms. Nicholson, Manager of Cards Contact Centre with RBC in Vancouver, addresses four matters raised in the Applicant’s affidavit. First, she states that the Applicant was not promoted to the Visa Credit Department, as asserted by the Applicant, but rather that it was considered a lateral move. Second, in response to the Applicant’s assertion that she received no warning that money transferring activity was grounds for discipline, Ms. Nicholson points to the RBC Code of Conduct, which refers specifically to misappropriation. Third, in answer to the Applicant’s assertion that she was denied progressive discipline, Ms. Nicholson declares that RBC has a consistent practice of terminating employees immediately in cases of misappropriation or dishonesty. Fourth, Ms. Nicholson observes that the Applicant did not appear to have any difficulty understanding the proceedings before the Adjudicator. [6] Ms. Roper, who was co-counsel for RBC at the hearing before the Adjudicator, filed an affidavit to respond to the Applicant’s allegations of procedural unfairness. She declares that at no time during the hearing before the Adjudicator did the Applicant request the assistance of an interpreter. Ms. Roper states that she did not observe the Applicant experiencing any difficulty in understanding the proceeding because of language issues. Ms. Roper also fleshes out the facts leading the Adjudicator’s decision to admit the Applicant’s speaking notes into evidence. [7] The Applicant served the Respondent with written cross-examination questions. There are 84 questions addressed to Ms. Nicholson and 37 questions to Ms. Roper. On June 27, 2008, Ms. Nicholson and Ms. Roper provided their written responses to the written cross-examination. In a cover letter to the written responses, counsel for the Respondent objected to a number of the Applicant’s written examination questions and advised that she had instructed Ms. Nicholson and Ms. Roper to not answer the objectionable questions. [8] By this motion, the Applicant seeks an order compelling Ms. Nicholson to answer questions 1, 2, 3, 9, 12, 13, 20, 27, 28, 29, 30, 31, 32, 33, 40, 41, 42, 45, 47, 49, 51, 53, 55, 56, 57, 58, 81, 82, 83, and 84, and an order compelling Ms. Roper to answer questions 1, 2, 6, 7, 8, 9, 11, 12, 13, 14, 15, 17, 18, 19, 20, 21, 23, 24, 26, 27, 28, 29, 30, 31, 32, 33, 34, 36, and 37, as set out in the two Written Examinations dated June 23, 2008. In response to the motion, RBC conceded that certain questions initially refused were proper cross-examination questions and provided answers by supplementary affidavits sworn by Ms. Nicholson and Ms. Roper. It maintained its objections regarding the balance of the questions. Analysis [9] The Applicant submits that the questions posed in the written cross-examinations are directed to facts sworn by the affiants in their affidavits, and are factually relevant to the judicial review application. RBC counters that the Applicant is attempting to re-litigate her claim, and many of the questions posed to its two affiants are outside the proper scope of cross-examination. It submits that even when a fact has been sworn to in a proceeding, it does not have legal relevance unless its existence or non-existence can assist in determining whether or not the remedy sought can be granted: Merck Frosst Can. Inc. v. Canada (Min. of Health) (1997), 80 C.P.R. (3d) 550 (T.D.); affirmed (1999), 3 C.P.R. (4th) 286 (Fed. C.A.) (“Merck”). [10] The affidavit material in an application for judicial review should be aimed at providing the reviewing court with a record of the proceedings before the Adjudicator, and at supporting an argument going to procedural fairness or jurisdiction. The purpose of a judicial review is to review the decision on the basis of the record before the tribunal, and not to determine, by trial de novo, questions that were not fully canvassed in evidence before it. In Ochapowace First Nation v. Canada (Attorney General), 2007 FC 920, the Court described the rationale for this purpose as follows: [10] The rationale for that rule is well known. To allow additional material to be introduced at judicial review that was not before the decision maker would in effect transform the judicial review hearing into a trial de novo. The purpose of a judicial review application is not to determine whether the decision of a tribunal was correct in absolute terms but rather to determine whether its decision was correct on the basis of the record before it: Chopra, at para 5; Canadian Tire Corp. v. Canadian Bicycle Manufacturers Assn., 2006 FCA 56 (CanLII), 2006 FCA 56 at para 13. [11] Upon carefully reviewing the parties’ affidavits, I conclude that many of the Applicant’s cross-examination questions go beyond testing the affiant’s credibility, beyond establishing the record below for the reviewing court, and beyond the issues of procedural fairness. A party is not entitled to exploit cross-examination in order to correct deficiencies in the evidence before the decision-maker. [12] Bearing these principles in mind, I now turn to the questions addressed to Ms. Nicholson and Ms. Roper. Written Examination of Ms. Nicholson Questions 1 to 4 Ms. Nicholson has provided a response to these questions in her affidavit sworn on July 21, 2008. No further response is required. Question 9 Ms. Nicholson has provided a response to this question in her affidavit sworn on July 21, 2008. No further response is required. Questions 12 and 13 Ms. Nicholson has provided a response in her affidavit sworn on July 21, 2008. Question 20 According to the Applicant, this question seeks to determine “what the Applicant knew, how she knew it, and when she knew it” with regard to misappropriation and kiting. The question is not formally relevant since no deponent has sworn any facts on this issue, or questioned the evidence that was before Adjudicator Petersen with respect to whether the terms of misappropriation and kiting were thoroughly explained to Ms. Wu. Questions 27 and 28 The questions are not formally relevant since no deponent has questioned the evidence that was before Adjudicator Petersen with respect to the definitions of “kiting” contained in the Code of Conduct. Questions 29 and 30 The questions are not relevant. Cross-examination on an affidavit is limited to the facts sworn to by the deponents. Ms. Nicholson’s Affidavit does not contain any facts relating to Ms. Wu’s money transferring activities. In addition, Ms. Wu, in her affidavit, does not take issue with the evidence before Adjudicator Petersen on whether her money transferring activities resulted in money being transferred to another banking institution. Questions 31 and 32 Adjudicator Petersen’s decision sets out the record of the evidence before him on the issue of kiting, misappropriation and the reasons for the Applicant’s termination of employment. The questions are not legally or formally relevant. Question 33 The question is irrelevant as it does not go to facts sworn to by Ms. Nicholson or the deponent of any other affidavits filed in the proceeding. Questions 40 to 42, 45, 47 and 49 Ms. Nicholson has provided a response to these questions in her affidavit sworn on July 21, 2008 outlining the evidence that was before Adjudicator Petersen. No further response is required. Question 51 The Applicant inquires whether RBC terminates all employees who are guilty in situations of misappropriation (emphasis added). The question should be answered in light of the assertion made by Ms. Nicholson at paragraph 6 of her affidavit that RBC has a consistent practice of termination. The ultimate relevance of the question and answer should be left to the judge hearing the application. Question 53 The question is not formally relevant since no deponent has questioned any evidence that was before Adjudicator Petersen with respect to comparing Ms. Wu’s termination for misappropriation with the discipline of other employees caught in situations of misappropriation. The question also goes beyond the proper scope of cross-examination and is an attempt to re‑litigate Ms. Wu’s dismissal. Questions 55 to 58 The questions are not legally or formally relevant. On the face of the decision, Adjudicator Petersen noted that the Applicant had been summarily terminated for cause and based his decision on whether or not the Respondent had grounds for summary termination. Questions 81 to 84 The questions are improper since they have no bearing on Ms. Nicholson’s ability to observe whether Ms. Wu had difficulty understanding the proceedings. Question 83 is also irrelevant relevant. Question 84 has already been answered in Ms. Nicholson’s Answers to Written Examination dated June 26, 2008. Written Examination of Ms. Roper Questions 1, 2 and 23: With respect to question 1, the issue of the pre-hearing applications is only relevant in so far as they relate to whether Ms. Wu required an interpreter. As such, the nature of the other pre-hearing applications has no bearing on the outcome of the litigation. This question is nothing more than a fishing expedition. As for questions 2 and 23, they are irrelevant. Questions 6 and 7: The questions are irrelevant to the judicial review proceedings since the answer to these questions do not assist in determining whether or not the remedies sought by the Applicant can be granted. Questions 8 and 9: The questions have no bearing on Ms. Roper’s ability to comment on her observations while in attendance at the hearing. Whether Ms. Roper herself has ever learned a foreign language has no bearing on the issue of whether Ms. Wu had difficulty understanding the proceedings, and no bearing on whether an interpreter should have been provided. Questions 11 to 15: The application for judicial review does not allege that Adjudicator Petersen’s alleged failure to provide equal time to the parties breached the Applicant’s right to procedural fairness or breached the principles of natural justice. On that basis, the questions are not relevant. Questions 17 to 21: This question asks Ms. Roper to comment on what factors led to her concerns regarding Ms. Wu’s capacity to understand English. Ms Roper does not depose that she herself expressed any concern. Question 17 is therefore not relevant since it is outside the facts sworn to by Ms. Roper. Questions 18 to 21 have been answered in Ms. Roper’s Answers to Written Examination dated June 27, 2008. Question 24: The Respondent concedes that this is a proper cross-examination question. Ms. Roper has now provided an answer to this question in her Affidavit sworn on July 18, 2008. No further response is required. Questions 26 to 29: Ms. Roper answered these questions in her Answers to Written Examination dated June 27, 2008. No further response is required. Questions 30 to 34: The general nature of the questions asked of Ms. Wu is not relevant. Questions 32 is improper because it calls for a conclusion or opinion. Question 33 is irrelevant. Questions 36 and 37: The questions are improper as they go beyond the facts sworn to by Ms. Roper or the deponent of any other affidavits filed in the proceeding. In any event, the question appear to be irrelevant to any issues in the application. Conclusion [13] The Respondent does not object to the Applicant filing a supplement to the Applicant’s Record, provided any supplement is limited to the answers sought in this Motion. In the circumstances, the Applicant is granted leave to serve and file a Supplementary Applicant’s Record. [14] In light of the divided success of the parties, in that RBC conceded that certain questions refused should be answered, and has been ordered to answer an additional one, I conclude that the parties should bear their own costs on this motion. ORDER THIS COURT ORDERS that: 1. Ms. Joan Nicholson, affiant of the Respondent, Royal Bank of Canada, shall answer question 51 of the Applicant’s written examination within 10 days of the date of this Order. 2. The Applicant is granted leave to serve and file a Supplementary Applicant’s Record, limited to the additional answers provided by the Royal Bank of Canada, within 20 days of service of the answer to question 51 of Ms. Nicholson’s written examination. 3. The Respondent shall serve and file the Respondent’s Record within 20 days of service of the Applicant’s Supplementary Record, or the expiration of the time for doing so, whichever is earlier. 4. The motion is otherwise dismissed, without costs. “Roger R. Lafrenière” Prothonotary FEDERAL COURT SOLICITORS OF RECORD DOCKET: T-351-08 STYLE OF CAUSE: LI MIN (“AMANDA”) WU v. ROYAL BANK OF CANADA PLACE OF HEARING: Vancouver, British Columbia DATE OF HEARING: August 7, 2008 REASONS FOR ORDER AND ORDER: LAFRENIÈRE P. DATED: August 7, 2008 APPEARANCES: Thomas S.A. Deprophetis FOR THE APPLICANT Lorene Novakowski FOR THE RESPONDENT SOLICITORS OF RECORD: Coutts Pulver LLP Vancouver, British Columbia FOR THE APPLICANT Fasken Martineau DuMoulin LLP Vancouver, British Columbia FOR THE RESPONDENT ' sentences: - 'cluster: SUMMARY: **(1) Facts** The person concerned, an employee of the Royal Bank of Canada (RBC), was dismissed from her job as a credit adjudication agent on July 12, 2006, due to allegations of misappropriation of funds. She filed a complaint of unjust dismissal, which was heard by an adjudicator. The adjudicator dismissed her complaint in a 24-page decision. The person concerned then applied for judicial review of the adjudicator''s decision, citing four main grounds: the adjudicator acted without or beyond jurisdiction, failed to observe the principle of natural justice and procedural fairness, erred in law, and based his decision on erroneous findings of fact. The person concerned also sought to compel answers to written cross-examination questions that were refused by RBC on the grounds of relevance. RBC filed affidavits from its employees, Joan Nicholson and Jennifer Roper, in response to the person concerned''s allegations. The person concerned then served written cross-examination questions on Nicholson and Roper, but RBC objected to many of the questions, advising that its employees would not answer them. The person concerned brought a motion to compel Nicholson and Roper to answer the questions. **(2) Issue** The issue before the court was whether the person concerned''s written cross-examination questions were relevant and should be answered by Nicholson and Roper. The person concerned argued that the questions were directed to facts sworn by the affiants in their affidavits and were factually relevant to the judicial review application. RBC countered that the person concerned was attempting to re-litigate her claim and that many of the questions posed to its employees were outside the proper scope of cross-examination. **(3) Rule** The court ruled that many of the person concerned''s cross-examination questions went beyond testing the affiants'' credibility, establishing the record below for the reviewing court, and addressing the issues of procedural fairness. The court held that a party is not entitled to exploit cross-examination in order to correct deficiencies in the evidence before the decision-maker. The court ordered that Nicholson answer question 51 of the person concerned''s written examination, which inquired about RBC''s practice of terminating employees who are guilty of misappropriation. The court also granted the person concerned leave to serve and file a Supplementary Applicant''s Record, limited to the additional answers provided by RBC. The court ordered RBC to serve and file the Respondent''s Record within 20 days of service of the Applicant''s Supplementary Record. The motion was otherwise dismissed without costs. **(4) Analysis** The court''s analysis was based on the principles of judicial review and the proper scope of cross-examination. The court held that the purpose of a judicial review application is to review the decision on the basis of the record before the tribunal, and not to determine, by trial de novo, questions that were not fully canvassed in evidence before it. The court also noted that a party is not entitled to exploit cross-examination in order to correct deficiencies in the evidence before the decision-maker. In analyzing the specific questions posed by the person concerned, the court held that many of them were not formally relevant, were outside the proper scope of cross-examination, or were attempts to re-litigate the person concerned''s dismissal. However, the court found that question 51, which inquired about RBC''s practice of terminating employees who are guilty of misappropriation, was relevant and should be answered by Nicholson. **(5) Conclusion** The court concluded that the person concerned''s motion was partially successful, in that RBC conceded that certain questions refused should be answered and was ordered to answer an additional question. The court therefore ordered the parties to bear their own costs on the motion. The court also granted the person concerned leave to serve and file a Supplementary Applicant''s Record, limited to the additional answers provided by RBC.' - 'cluster: CONCLUSION: The court concluded that the person concerned''s motion was partially successful, in that RBC conceded that certain questions refused should be answered and was ordered to answer an additional question. The court therefore ordered the parties to bear their own costs on the motion. The court also granted the person concerned leave to serve and file a Supplementary Applicant''s Record, limited to the additional answers provided by RBC.' - 'cluster: ANALYSIS: The court analyzed the person concerned''s explanations for his failure to disclose the reavailments and found them to be unsatisfactory. The court noted that the person concerned had provided conflicting explanations and that there was no evidence to support his claim of being mentally ill when he returned to Bangladesh. The court also found that the person concerned''s wife could have organized the visas and tickets for the trip to Canada, and there was no evidence of cultural norms or other issues that might have stood in her way.The court concluded that the Board''s decision was reasonable and that the person concerned''s multiple reavailments to Bangladesh undermined the credibility of his alleged subjective fear.' - source_sentence: 'cluster: CONCLUSION: Bacon St-Onge v. Conseil des Innus de Pessamit Court (s) Database Federal Court Decisions Date 2018-06-22 Neutral citation 2018 FC 655 File numbers T-2135-16 Decision Content Date: 20180622 Docket: T-2135-16 Citation: 2018 FC 655 [ENGLISH TRANSLATION] Montréal, Quebec, June 22, 2018 PRESENT: The Honourable Madam Justice St-Louis BETWEEN: JÉRÔME BACON ST-ONGE Applicant and LE CONSEIL DES INNUS DE PESSAMIT RENÉ SIMON ÉRIC CANAPÉ GÉRALD HERVIEUX DIANE RIVERIN JEAN-NOËL RIVERIN RAYMOND ROUSSELOT MARIELLE VACHON Respondents ORDER AND REASONS I. Background [1] On December 21, 2017, the Court upheld the application for judicial review submitted by the Applicant, Jérôme Bacon St-Onge and, in particular, revoked the resolution adopted by the band council on March 8, 2016, adjudged the 2015 Code to be invalid, and voided the election held on August 17, 2016. The Court then asked the parties to make submissions concerning costs. [2] On January 22, 2018, the Respondents filed an appeal of this judgment with the Federal Court of Appeal [FCA]. At the same time, they also filed a motion to stay the execution of said judgment (docket A-42-18), a motion that FCA dismissed on April 23, 2018. [3] On February 6, 2018, the Applicant made his submissions concerning costs. He included an affidavit from Mr. Boulianne and filed Exhibit 1, which included three invoices and two statements of account from the firm of Neashish & Champoux s.e.n.c., indicating that he had been invoiced an amount totalling $82,544.35. On March 23, 2018, the Respondents submitted their representations concerning costs. They attached three items: the order from Prothonotary Morneau refusing the application for the Applicant’s interim costs, news articles, and the notice of appeal of the aforementioned decision dated December 21, 2017. Finally, on April 4, 2018, the Applicant submitted his response concerning costs. [4] The parties did not submit a bill of costs and hence the Court does not know the estimated amount of costs that would be granted according to Column III of Tariff B, if Rule 407 of the Federal Courts Rules, SOR/98-106 [the Rules] were applied. II. Position of the parties [5] Mr. Bacon St-Onge is requesting payment of costs on the attorney-client basis, thus covering all of the professional and legal fees incurred. In support of this request, he basically presented five (5) arguments, namely (1) his application for judicial review was upheld; (2) the application was brought in the public’s interest and it went beyond the scope of his individual interests; (3) unlike the Respondents, he is not in a position to have the First Nation reimburse the legal fees; (4) the case required a considerable amount of work because the facts and applicable law were complex and because the cases consisted of more than 2,000 pages; and (5) the Respondents unjustifiably refused to withdraw from a proceeding that was condemned in advance. [6] Mr. Bacon St-Onge also asked the Court (1) to reserve his right to again apply to a court of competent jurisdiction to claim any order and any additional sum required with respect to costs for the Respondents’ application for review; and (2) to exempt him from all the fees and expenses to be paid to the Respondents as part of this claim, the principal claim and any other ancillary or incidental claim in this case and in the appeal case. [7] To begin with, the Court confirms that it will not decide on these last two claims related either to possible future cases or to the appeal proceedings. Thus, this decision will be limited to the application for costs related to the litigation settled by the judgment delivered last December 21. [8] The Respondents reply that the expenses cannot be granted to the Applicant basically because (1) Prothonotary Morneau had refused the Applicant’s request for interim costs and there is thus res judicata on the question of expenses; and (2) the appeal dated December 21, 2017, suspends the awarding of costs and said costs will only be payable by the Applicant if their appeal is dismissed. [9] The Respondents add that, should costs be granted, (1) they must be calculated according to Column III of Tariff B of the Rules; (2) the questions raised in this case are not of concern to Band members, do not fall outside the individual interests of the Applicant, who showed interest in standing for election, thus showing that he had an individual interest in voiding the elections; (3) the Applicant unreasonably delayed bringing his complaint and the voters and candidates were greatly inconvenienced by the election’s invalidity; (4) the invoices that the Applicant submitted in support of his application for costs do not provide the dates and hours worked in the case and have no probative value, being domestic writings; and (5) the questions to be decided are not particularly complicated. [10] The Applicant replies that Prothonotary Morneau’s order decided on the application for interim costs, proceedings separate from the awarding of costs. The criteria that underlie the awarding of costs are different and, therefore, there is no res judicata in this case. Finally, the Applicant points out that he had no choice other than to turn to the courts because the Respondents refused to consider the Band members’ remarks concerning the illegality of the process for amending the 1994 Code. He thus acted for the good of all Band members. In response to the arguments concerning the format of the invoices submitted, he maintains that they are unsigned writings used in the course of business activities and that they are thus proof of their content. [11] Finally, the Applicant maintains that costs can be granted even if the decision is under appeal (Martselos v. Salt River Nation #195, 2008 FCA 221 at paragraphs 51 to 55). III. Discussion [12] We should first deal with two of the arguments raised by the Respondents: the one related to the thing adjudicated and the one related to the effect of the appeal and the stay motion that were lodged. [13] Thus, the Court agrees with the Applicant’s position and concludes that Prothonotary Morneau’s decision on the interim costs is not res judicata on the awarding of costs at the end of the litigation. At least one of the three criteria established in Angle v. M.N.R., [1975] 2 SCR 248, the one requiring that the same question has been decided, is not satisfied here. The criteria related to a decision on the application for interim costs are different from those considered within the framework of the awarding of costs, and thus it cannot have res judicata. [14] As for the effect of the stay motion and the appeal that the Respondents presented to FCA, the Court notes that the Respondents did not submit any case law to support their argument. First, FCA dismissed the stay motion, and thus it is not necessary to focus on its implications with respect to the awarding of costs. Next, our Court has already agreed that appealing a Federal Court decision does not prevent the taxation of costs in the first instance (Halford v. Seed Hawk Inc., 2004 FC 1259 at paragraph 36). Thus, the Court has not been convinced that the appeal of the decision dated December 21, 2017, suspends the awarding of costs. [15] The Court will therefore decide on the awarding of costs and, in this regard, the Court is convinced that here, the costs must be granted in favour of the Applicant because his application for judicial review was upheld (Ticketnet Corp v. The Queen, [1999] FCA No. 1102, 99 DTC 5429). [16] The awarding of costs between parties is set out in sections 400 to 414 of Part II of Rules. To award costs, courts try to establish a fair balance between three principal objectives, namely “providing compensation, promoting settlement and deterring abusive behaviour” (Air Canada v. Thibodeau, 2007 FCA 115 at paragraph 24). Thus, according to Rule 407, unless the Court orders otherwise, the costs between parties are taxed in compliance with Column III of Tariff Table B. [17] As well, subsection 400(1) of the Rules states that the Court “shall have full discretionary power over the amount and allocation of costs and the determination of by whom they are to be paid.” The Court’s vast discretionary power over the awarding of costs has only two exceptions, related to representative actions and immigration cases, which are not at issue in this case. [18] Otherwise, the Court enjoys vast discretionary power (Salt River Nation #195 v. Martselos, 2008 FCA 221 at paragraphs 52 and 53). The factors that the Court may take into account are stated in subsection 400(3) of the Rules, the text of which is annexed. They include some of the factors raised by the Applicant, such as the importance and complexity of the issues (400(3)(c)), the amount of work (400(3)(g)) and whether the public interest in having the proceeding litigated justifies a particular award of costs (400(3)(h)). [19] The Court has the power to award a gross sum or to issue a more general order (Consorzio del Prosciutto di Parma v. Maple Leaf Meats Inc. (CA), 2002 FCA 417 at paragraphs 8 to 10). [20] The Court must therefore decide whether the costs will be assessed through taxation or by the awarding of a gross sum and must also decide whether there is cause to award a specific, higher amount either on the attorney-client basis or on the basis of the public interest. [21] To begin with, the Court rules out the payment of costs on the attorney-client basis because nothing in the case indicates that the Respondents demonstrated “reprehensible, scandalous or outrageous conduct” (Young v. Young, [1993] 4 SCR 3 at p. 134; Quebec (Attorney General) v. Lacombe, 2010 SCC 38 at paragraph 67). [22] As for a specific amount on the basis of public interest, the Supreme Court established, in the Carter decision (Carter v. Canada (Attorney General), 2015 SCC 5 at paragraph 140), a two-component criterion for awarding special costs to a successful party representing the public interest: . . . First, the case must involve matters of public interest that are truly exceptional. It is not enough that the issues raised have not previously been resolved or that they transcend the individual interests of the successful litigant: they must also have a significant and widespread societal impact. Second, in addition to showing that they have no personal, proprietary or pecuniary interest in the litigation that would justify the proceedings on economic grounds, the plaintiffs must show that it would not have been possible to effectively pursue the litigation in question with private funding. In those rare cases, it will be contrary to the interests of justice to ask the individual litigants (or, more likely, pro bono counsel) to bear the majority of the financial burden associated with pursuing the claim. [23] In this case, the Court notes that determining the electoral code’s validity is as much an interest for the Band as it is for the Applicant because the latter was a candidate in the elections whose cancellation he requested. Thus the Applicant cannot maintain that he had no individual interest in the litigation and here, at least one of the Supreme Court’s aforementioned criteria has not been satisfied. [24] In addition, it seems fair to argue that the Applicant is not in a position to get the Band to reimburse him for his legal fees. The Respondents have not submitted evidence showing that they paid their legal fees (Bellegarde v. Poitras, 2009 FC 1212 at paragraph 8) and it seems plausible to find that they are not paying them themselves, since they are members of the Band council. [25] Finally, the Court can find only that the workload and the complexity of the case or that the behaviour of the Respondents, having continued the proceedings, in themselves justify the awarding of special costs. [26] Hence, because the electoral code’s validity is effectively also a question of interest for the Band and because the Applicant is solely responsible for the litigation costs, the Court is convinced that the situation is an argument for awarding costs higher than those in Column III of Tariff B. In the absence of the parties’ bill of costs, the Court finds it difficult to set a “higher” amount by gross sum. Therefore, the Court will instead grant the Applicant costs through taxation, according to the upper band of Column V of Tariff B. JUDGMENT in file T-2135-16 THIS COURT’S JUDGMENT is that: The Respondents are to pay costs to the Applicant according to the upper band of Column V of Tariff B; “Martine St-Louis” Judge Rule 400(3) Factors in awarding costs (3) In exercising its discretion under subsection (1), the Court may consider (a) the result of the proceeding; (b) the amounts claimed and the amounts recovered; (c) the importance and complexity of the issues; (d) the apportionment of liability; (e) any written offer to settle; (f) any offer to contribute made under rule 421; (g) the amount of work; (h) whether the public interest in having the proceeding litigated justifies a particular award of costs; (i) any conduct of a party that tended to shorten or unnecessarily lengthen the duration of the proceeding; (j) the failure by a party to admit anything that should have been admitted or to serve a request to admit; (k) whether any step in the proceeding was (i) improper, vexatious or unnecessary, or (ii) taken through negligence, mistake or excessive caution; (l) whether more than one set of costs should be allowed, where two or more parties were represented by different solicitors or were represented by the same solicitor but separated their defence unnecessarily; (m) whether two or more parties, represented by the same solicitor, initiated separate proceedings unnecessarily; (n) whether a party who was successful in an action exaggerated a claim, including a counterclaim or third party claim, to avoid the operation of rules 292 to 299; (n.1) whether the expense required to have an expert witness give evidence was justified given (i) the nature of the litigation, its public significance and any need to clarify the law, (ii) the number, complexity or technical nature of the issues in dispute, or (iii) the amount in dispute in the proceeding; and (o) any other matter that it considers relevant. Règle 400(3) Facteurs à prendre en compte (3) Dans l’exercice de son pouvoir discrétionnaire en application du paragraphe (1), la Cour peut tenir compte de l’un ou l’autre des facteurs suivants : a) le résultat de l’instance; b) les sommes réclamées et les sommes recouvrées; c) l’importance et la complexité des questions en litige; d) le partage de la responsabilité; e) toute offre écrite de règlement; f) toute offre de contribution faite en vertu de la règle 421; g) la charge de travail; h) le fait que l’intérêt public dans la résolution judiciaire de l’instance justifie une adjudication particulière des dépens; i) la conduite d’une partie qui a eu pour effet d’abréger ou de prolonger inutilement la durée de l’instance; j) le défaut de la part d’une partie de signifier une demande visée à la règle 255 ou de reconnaître ce qui aurait dû être admis; k) la question de savoir si une mesure prise au cours de l’instance, selon le cas : (i) était inappropriée, vexatoire ou inutile, (ii) a été entreprise de manière négligente, par erreur ou avec trop de circonspection; l) la question de savoir si plus d’un mémoire de dépens devrait être accordé lorsque deux ou plusieurs parties sont représentées par différents avocats ou lorsque, étant représentées par le même avocat, elles ont scindé inutilement leur défense; m) la question de savoir si deux ou plusieurs parties représentées par le même avocat ont engagé inutilement des instances distinctes; n) la question de savoir si la partie qui a eu gain de cause dans une action a exagéré le montant de sa réclamation, notamment celle indiquée dans la demande reconventionnelle ou la mise en cause, pour éviter l’application des règles 292 à 299; n.1) la question de savoir si les dépenses engagées pour la déposition d’un témoin expert étaient justifiées compte tenu de l’un ou l’autre des facteurs suivants : (i) la nature du litige, son importance pour le public et la nécessité de clarifier le droit, (ii) le nombre, la complexité ou la nature technique des questions en litige, (iii) la somme en litige; o) toute autre question qu’elle juge pertinente. FEDERAL COURT SOLICITORS OF RECORD DOCKET: T-2135-16 STYLE OF CAUSE: JÉRÔME BACON ST-ONGE v. THE CONSEIL DES INNUS DE PESSAMIT, RENÉ SIMON, ÉRIC CANAPÉ, GÉRALD HERVIEUX, DIANE RIVERIN, JEAN-NOEL RIVERIN, RAYMON ROUSSELOT, MARIELLE VACHON REASONS FOR ORDER AND ORDER: ST-LOUIS J. DATED: June 22, 2018 WRITTEN SUBMISSIONS BY: François Boulianne FOR THE APPLICANT Kenneth Gauthier For the respondents SOLICITORS OF RECORD: Neashish & Champoux, s.e.n.c. Wendake, Quebec FOR THE APPLICANT Kenneth Gauthier Counsel Baie-Comeau, Quebec For the respondents ' sentences: - 'cluster: RULES: The Immigration and Refugee Protection Act states that a foreign national against whom a removal order is made must leave Canada immediately and must be enforced as soon as is reasonably practicable (Section 48(2)). However, enforcement officers have a limited discretion to defer the removal of persons who have been ordered to leave Canada, particularly in cases involving compelling personal circumstances.' - 'cluster: ANALYSIS: The Court analyzed the factors to be considered in awarding costs, as set out in Rule 400(3) of the Federal Courts Rules. The Court considered the importance and complexity of the issues, the amount of work, and the public interest in having the proceeding litigated. The Court also considered the conduct of the parties, including the Respondents'' refusal to consider the Band members'' remarks concerning the illegality of the process for amending the 1994 Code. The Court found that the Applicant was solely responsible for the litigation costs and that the situation justified awarding costs higher than those in Column III of Tariff B. However, the Court did not find that the case met the two-component criterion for awarding special costs to a successful party representing the public interest, as established by the Supreme Court in the Carter decision.' - 'cluster: CONCLUSION: The Court concluded that the Respondents were to pay costs to the Applicant according to the upper band of Column V of Tariff B. The Court''s decision was based on its discretionary power to award costs, taking into account the factors set out in Rule 400(3) of the Federal Courts Rules. The Court''s decision was intended to provide a fair balance between the parties and to reflect the complexity and importance of the issues in the case.' --- # SentenceTransformer based on nomic-ai/nomic-embed-text-v1.5 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [nomic-ai/nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) <!-- at revision 679199c2575b5bfe93b06161d06cd7c16ebe4124 --> - **Maximum Sequence Length:** 8192 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NomicBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("simonosgoode/nomic_embed_fine_tune_law_1.5") # Run inference sentences = [ 'cluster: CONCLUSION: Bacon St-Onge v. Conseil des Innus de Pessamit\nCourt (s) Database\nFederal Court Decisions\nDate\n2018-06-22\nNeutral citation\n2018 FC 655\nFile numbers\nT-2135-16\nDecision Content\nDate: 20180622\nDocket: T-2135-16\nCitation: 2018 FC 655\n[ENGLISH TRANSLATION]\nMontréal, Quebec, June 22, 2018\nPRESENT: The Honourable Madam Justice St-Louis\nBETWEEN:\nJÉRÔME BACON ST-ONGE\nApplicant\nand\nLE CONSEIL DES INNUS DE PESSAMIT\nRENÉ SIMON\nÉRIC CANAPÉ\nGÉRALD HERVIEUX\nDIANE RIVERIN\nJEAN-NOËL RIVERIN\nRAYMOND ROUSSELOT\nMARIELLE VACHON\nRespondents\nORDER AND REASONS\nI. Background\n[1] On December 21, 2017, the Court upheld the application for judicial review submitted by the Applicant, Jérôme Bacon St-Onge and, in particular, revoked the resolution adopted by the band council on March 8, 2016, adjudged the 2015 Code to be invalid, and voided the election held on August 17, 2016. The Court then asked the parties to make submissions concerning costs.\n[2] On January 22, 2018, the Respondents filed an appeal of this judgment with the Federal Court of Appeal [FCA]. At the same time, they also filed a motion to stay the execution of said judgment (docket A-42-18), a motion that FCA dismissed on April 23, 2018.\n[3] On February 6, 2018, the Applicant made his submissions concerning costs. He included an affidavit from Mr. Boulianne and filed Exhibit 1, which included three invoices and two statements of account from the firm of Neashish & Champoux s.e.n.c., indicating that he had been invoiced an amount totalling $82,544.35. On March 23, 2018, the Respondents submitted their representations concerning costs. They attached three items: the order from Prothonotary Morneau refusing the application for the Applicant’s interim costs, news articles, and the notice of appeal of the aforementioned decision dated December 21, 2017. Finally, on April 4, 2018, the Applicant submitted his response concerning costs.\n[4] The parties did not submit a bill of costs and hence the Court does not know the estimated amount of costs that would be granted according to Column III of Tariff B, if Rule 407 of the Federal Courts Rules, SOR/98-106 [the Rules] were applied.\nII. Position of the parties\n[5] Mr. Bacon St-Onge is requesting payment of costs on the attorney-client basis, thus covering all of the professional and legal fees incurred. In support of this request, he basically presented five (5) arguments, namely (1) his application for judicial review was upheld; (2) the application was brought in the public’s interest and it went beyond the scope of his individual interests; (3) unlike the Respondents, he is not in a position to have the First Nation reimburse the legal fees; (4) the case required a considerable amount of work because the facts and applicable law were complex and because the cases consisted of more than 2,000 pages; and (5) the Respondents unjustifiably refused to withdraw from a proceeding that was condemned in advance.\n[6] Mr. Bacon St-Onge also asked the Court (1) to reserve his right to again apply to a court of competent jurisdiction to claim any order and any additional sum required with respect to costs for the Respondents’ application for review; and (2) to exempt him from all the fees and expenses to be paid to the Respondents as part of this claim, the principal claim and any other ancillary or incidental claim in this case and in the appeal case.\n[7] To begin with, the Court confirms that it will not decide on these last two claims related either to possible future cases or to the appeal proceedings. Thus, this decision will be limited to the application for costs related to the litigation settled by the judgment delivered last December 21.\n[8] The Respondents reply that the expenses cannot be granted to the Applicant basically because (1) Prothonotary Morneau had refused the Applicant’s request for interim costs and there is thus res judicata on the question of expenses; and (2) the appeal dated December 21, 2017, suspends the awarding of costs and said costs will only be payable by the Applicant if their appeal is dismissed.\n[9] The Respondents add that, should costs be granted, (1) they must be calculated according to Column III of Tariff B of the Rules; (2) the questions raised in this case are not of concern to Band members, do not fall outside the individual interests of the Applicant, who showed interest in standing for election, thus showing that he had an individual interest in voiding the elections; (3) the Applicant unreasonably delayed bringing his complaint and the voters and candidates were greatly inconvenienced by the election’s invalidity; (4) the invoices that the Applicant submitted in support of his application for costs do not provide the dates and hours worked in the case and have no probative value, being domestic writings; and (5) the questions to be decided are not particularly complicated.\n[10] The Applicant replies that Prothonotary Morneau’s order decided on the application for interim costs, proceedings separate from the awarding of costs. The criteria that underlie the awarding of costs are different and, therefore, there is no res judicata in this case. Finally, the Applicant points out that he had no choice other than to turn to the courts because the Respondents refused to consider the Band members’ remarks concerning the illegality of the process for amending the 1994 Code. He thus acted for the good of all Band members. In response to the arguments concerning the format of the invoices submitted, he maintains that they are unsigned writings used in the course of business activities and that they are thus proof of their content.\n[11] Finally, the Applicant maintains that costs can be granted even if the decision is under appeal (Martselos v. Salt River Nation #195, 2008 FCA 221 at paragraphs 51 to 55).\nIII. Discussion\n[12] We should first deal with two of the arguments raised by the Respondents: the one related to the thing adjudicated and the one related to the effect of the appeal and the stay motion that were lodged.\n[13] Thus, the Court agrees with the Applicant’s position and concludes that Prothonotary Morneau’s decision on the interim costs is not res judicata on the awarding of costs at the end of the litigation. At least one of the three criteria established in Angle v. M.N.R., [1975] 2 SCR 248, the one requiring that the same question has been decided, is not satisfied here. The criteria related to a decision on the application for interim costs are different from those considered within the framework of the awarding of costs, and thus it cannot have res judicata.\n[14] As for the effect of the stay motion and the appeal that the Respondents presented to FCA, the Court notes that the Respondents did not submit any case law to support their argument. First, FCA dismissed the stay motion, and thus it is not necessary to focus on its implications with respect to the awarding of costs. Next, our Court has already agreed that appealing a Federal Court decision does not prevent the taxation of costs in the first instance (Halford v. Seed Hawk Inc., 2004 FC 1259 at paragraph 36). Thus, the Court has not been convinced that the appeal of the decision dated December 21, 2017, suspends the awarding of costs.\n[15] The Court will therefore decide on the awarding of costs and, in this regard, the Court is convinced that here, the costs must be granted in favour of the Applicant because his application for judicial review was upheld (Ticketnet Corp v. The Queen, [1999] FCA No. 1102, 99 DTC 5429).\n[16] The awarding of costs between parties is set out in sections 400 to 414 of Part II of Rules. To award costs, courts try to establish a fair balance between three principal objectives, namely “providing compensation, promoting settlement and deterring abusive behaviour” (Air Canada v. Thibodeau, 2007 FCA 115 at paragraph 24). Thus, according to Rule 407, unless the Court orders otherwise, the costs between parties are taxed in compliance with Column III of Tariff Table B.\n[17] As well, subsection 400(1) of the Rules states that the Court “shall have full discretionary power over the amount and allocation of costs and the determination of by whom they are to be paid.” The Court’s vast discretionary power over the awarding of costs has only two exceptions, related to representative actions and immigration cases, which are not at issue in this case.\n[18] Otherwise, the Court enjoys vast discretionary power (Salt River Nation #195 v. Martselos, 2008 FCA 221 at paragraphs 52 and 53). The factors that the Court may take into account are stated in subsection 400(3) of the Rules, the text of which is annexed. They include some of the factors raised by the Applicant, such as the importance and complexity of the issues (400(3)(c)), the amount of work (400(3)(g)) and whether the public interest in having the proceeding litigated justifies a particular award of costs (400(3)(h)).\n[19] The Court has the power to award a gross sum or to issue a more general order (Consorzio del Prosciutto di Parma v. Maple Leaf Meats Inc. (CA), 2002 FCA 417 at paragraphs 8 to 10).\n[20] The Court must therefore decide whether the costs will be assessed through taxation or by the awarding of a gross sum and must also decide whether there is cause to award a specific, higher amount either on the attorney-client basis or on the basis of the public interest.\n[21] To begin with, the Court rules out the payment of costs on the attorney-client basis because nothing in the case indicates that the Respondents demonstrated “reprehensible, scandalous or outrageous conduct” (Young v. Young, [1993] 4 SCR 3 at p. 134; Quebec (Attorney General) v. Lacombe, 2010 SCC 38 at paragraph 67).\n[22] As for a specific amount on the basis of public interest, the Supreme Court established, in the Carter decision (Carter v. Canada (Attorney General), 2015 SCC 5 at paragraph 140), a two-component criterion for awarding special costs to a successful party representing the public interest:\n. . . First, the case must involve matters of public interest that are truly exceptional. It is not enough that the issues raised have not previously been resolved or that they transcend the individual interests of the successful litigant: they must also have a significant and widespread societal impact. Second, in addition to showing that they have no personal, proprietary or pecuniary interest in the litigation that would justify the proceedings on economic grounds, the plaintiffs must show that it would not have been possible to effectively pursue the litigation in question with private funding. In those rare cases, it will be contrary to the interests of justice to ask the individual litigants (or, more likely, pro bono counsel) to bear the majority of the financial burden associated with pursuing the claim.\n[23] In this case, the Court notes that determining the electoral code’s validity is as much an interest for the Band as it is for the Applicant because the latter was a candidate in the elections whose cancellation he requested. Thus the Applicant cannot maintain that he had no individual interest in the litigation and here, at least one of the Supreme Court’s aforementioned criteria has not been satisfied.\n[24] In addition, it seems fair to argue that the Applicant is not in a position to get the Band to reimburse him for his legal fees. The Respondents have not submitted evidence showing that they paid their legal fees (Bellegarde v. Poitras, 2009 FC 1212 at paragraph 8) and it seems plausible to find that they are not paying them themselves, since they are members of the Band council.\n[25] Finally, the Court can find only that the workload and the complexity of the case or that the behaviour of the Respondents, having continued the proceedings, in themselves justify the awarding of special costs.\n[26] Hence, because the electoral code’s validity is effectively also a question of interest for the Band and because the Applicant is solely responsible for the litigation costs, the Court is convinced that the situation is an argument for awarding costs higher than those in Column III of Tariff B. In the absence of the parties’ bill of costs, the Court finds it difficult to set a “higher” amount by gross sum. Therefore, the Court will instead grant the Applicant costs through taxation, according to the upper band of Column V of Tariff B.\nJUDGMENT in file T-2135-16\nTHIS COURT’S JUDGMENT is that:\nThe Respondents are to pay costs to the Applicant according to the upper band of Column V of Tariff B;\n“Martine St-Louis”\nJudge\nRule 400(3)\nFactors in awarding costs\n(3) In exercising its discretion under subsection (1), the Court may consider\n(a) the result of the proceeding;\n(b) the amounts claimed and the amounts recovered;\n(c) the importance and complexity of the issues;\n(d) the apportionment of liability;\n(e) any written offer to settle;\n(f) any offer to contribute made under rule 421;\n(g) the amount of work;\n(h) whether the public interest in having the proceeding litigated justifies a particular award of costs;\n(i) any conduct of a party that tended to shorten or unnecessarily lengthen the duration of the proceeding;\n(j) the failure by a party to admit anything that should have been admitted or to serve a request to admit;\n(k) whether any step in the proceeding was\n(i) improper, vexatious or unnecessary, or\n(ii) taken through negligence, mistake or excessive caution;\n(l) whether more than one set of costs should be allowed, where two or more parties were represented by different solicitors or were represented by the same solicitor but separated their defence unnecessarily;\n(m) whether two or more parties, represented by the same solicitor, initiated separate proceedings unnecessarily;\n(n) whether a party who was successful in an action exaggerated a claim, including a counterclaim or third party claim, to avoid the operation of rules 292 to 299;\n(n.1) whether the expense required to have an expert witness give evidence was justified given\n(i) the nature of the litigation, its public significance and any need to clarify the law,\n(ii) the number, complexity or technical nature of the issues in dispute, or\n(iii) the amount in dispute in the proceeding; and\n(o) any other matter that it considers relevant.\nRègle 400(3)\nFacteurs à prendre en compte\n(3) Dans l’exercice de son pouvoir discrétionnaire en application du paragraphe (1), la Cour peut tenir compte de l’un ou l’autre des facteurs suivants :\na) le résultat de l’instance;\nb) les sommes réclamées et les sommes recouvrées;\nc) l’importance et la complexité des questions en litige;\nd) le partage de la responsabilité;\ne) toute offre écrite de règlement;\nf) toute offre de contribution faite en vertu de la règle 421;\ng) la charge de travail;\nh) le fait que l’intérêt public dans la résolution judiciaire de l’instance justifie une adjudication particulière des dépens;\ni) la conduite d’une partie qui a eu pour effet d’abréger ou de prolonger inutilement la durée de l’instance;\nj) le défaut de la part d’une partie de signifier une demande visée à la règle 255 ou de reconnaître ce qui aurait dû être admis;\nk) la question de savoir si une mesure prise au cours de l’instance, selon le cas :\n(i) était inappropriée, vexatoire ou inutile,\n(ii) a été entreprise de manière négligente, par erreur ou avec trop de circonspection;\nl) la question de savoir si plus d’un mémoire de dépens devrait être accordé lorsque deux ou plusieurs parties sont représentées par différents avocats ou lorsque, étant représentées par le même avocat, elles ont scindé inutilement leur défense;\nm) la question de savoir si deux ou plusieurs parties représentées par le même avocat ont engagé inutilement des instances distinctes;\nn) la question de savoir si la partie qui a eu gain de cause dans une action a exagéré le montant de sa réclamation, notamment celle indiquée dans la demande reconventionnelle ou la mise en cause, pour éviter l’application des règles 292 à 299;\nn.1) la question de savoir si les dépenses engagées pour la déposition d’un témoin expert étaient justifiées compte tenu de l’un ou l’autre des facteurs suivants :\n(i) la nature du litige, son importance pour le public et la nécessité de clarifier le droit,\n(ii) le nombre, la complexité ou la nature technique des questions en litige,\n(iii) la somme en litige;\no) toute autre question qu’elle juge pertinente.\nFEDERAL COURT\nSOLICITORS OF RECORD\nDOCKET:\nT-2135-16\nSTYLE OF CAUSE:\nJÉRÔME BACON ST-ONGE v. THE CONSEIL DES INNUS DE PESSAMIT, RENÉ SIMON, ÉRIC CANAPÉ, GÉRALD HERVIEUX, DIANE RIVERIN, JEAN-NOEL RIVERIN, RAYMON ROUSSELOT, MARIELLE VACHON\nREASONS FOR ORDER AND ORDER:\nST-LOUIS J.\nDATED:\nJune 22, 2018\nWRITTEN SUBMISSIONS BY:\nFrançois Boulianne\nFOR THE APPLICANT\nKenneth Gauthier\nFor the respondents\nSOLICITORS OF RECORD:\nNeashish & Champoux, s.e.n.c.\nWendake, Quebec\nFOR THE APPLICANT\nKenneth Gauthier\nCounsel\nBaie-Comeau, Quebec\nFor the respondents\n', "cluster: CONCLUSION: The Court concluded that the Respondents were to pay costs to the Applicant according to the upper band of Column V of Tariff B. The Court's decision was based on its discretionary power to award costs, taking into account the factors set out in Rule 400(3) of the Federal Courts Rules. The Court's decision was intended to provide a fair balance between the parties and to reflect the complexity and importance of the issues in the case.", "cluster: ANALYSIS: The Court analyzed the factors to be considered in awarding costs, as set out in Rule 400(3) of the Federal Courts Rules. The Court considered the importance and complexity of the issues, the amount of work, and the public interest in having the proceeding litigated. The Court also considered the conduct of the parties, including the Respondents' refusal to consider the Band members' remarks concerning the illegality of the process for amending the 1994 Code. The Court found that the Applicant was solely responsible for the litigation costs and that the situation justified awarding costs higher than those in Column III of Tariff B. However, the Court did not find that the case met the two-component criterion for awarding special costs to a successful party representing the public interest, as established by the Supreme Court in the Carter decision.", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 13,500 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 410 tokens</li><li>mean: 2901.01 tokens</li><li>max: 6550 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 209.23 tokens</li><li>max: 1169 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 219.56 tokens</li><li>max: 1261 tokens</li></ul> | * Samples: | anchor | positive | negative | |:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>cluster: ANALYSIS: Jalota v. Canada (Citizenship and Immigration)<br>Court (s) Database<br>Federal Court Decisions<br>Date<br>2013-11-19<br>Neutral citation<br>2013 FC 1176<br>File numbers<br>IMM-3349-13<br>Decision Content<br>Date: 20131119<br>Docket:<br>IMM-3349-13<br>Citation: 2013 FC 1176<br>Ottawa, Ontario, November 19, 2013<br>PRESENT: The Honourable Mr. Justice Phelan<br>BETWEEN:<br>MANAV JALOTA<br>Applicant<br>and<br>THE MINISTER OF CITIZENSHIP AND IMMIGRATION<br>Respondent<br>REASONS FOR JUDGMENT AND JUDGMENT<br>I. INTRODUCTION<br>[1] This is the judicial review of a decision refusing Mr. Jalota’s application to restore his temporary resident status as a student.<br>The restoration application was denied because the Officer was not satisfied that (a) the Applicant was a genuine temporary resident and student; (b) the Applicant had sufficient funds; (c) he would leave Canada at the end of the authorized study; and (d) the co-op component of his studies met some specified criteria.<br>II. BACKGROUND<br>[2] The Applicant is from India and obtained a study permit valid until December 31, 2012.<br>[3] Having arrived in Canada in January 2011, the Applicant started at one college, transferred to another and finally to a third. All of these transfers are permitted under his permit.<br>[4] In September 2012 the Applicant applied for an extension of his permit which was refused. The reason for refusal was the officer’s belief that he was not a genuine student, which was stated in the following strong terms:<br>You have submitted documentation which lacks credibility as part of your application. This has diminished the overall credibility of your submission.<br>[5] This is a critical aspect of this whole matter because the refusal does not say in what manner the documents lack credibility; however, the credibility finding is part of the overall restoration file (it was contained in the Tribunal Record as part of the material before the Officer on the restoration matter).<br>[6] In January 2013 the Applicant made an application to restore status. In so doing, the Applicant followed the Document Checklist – Student issued by Citizenship and Immigration Canada [CIC]. That document is divided into three parts: the first part applies to “All Applicants”, the second applies to a “Study Permit” and the third applies to “Restoration of Temporary Status”.<br>[7] Under the Restoration part of the Checklist, CIC asks for:<br>1. photocopies of passport pages (also requested under Study Permit);<br>2. copy of current immigration document; and<br>3. documents related to loss of status.<br>[8] In the covering letter for the application to restore, the Applicant enclosed transcripts and attendance records. He went on to note that he had funds to carry him through the school semester and offered to supply evidence if required.<br>[9] The application for restoration was denied for the reasons earlier described.<br>[10] The Global Case Management System [GCMS] identified the Officer’s concerns:<br>• failure to submit proof of funds;<br>• failure to submit transcripts from previous studies;<br>• absence of studies from August 2011 to January 2012; and<br>• failure in a college letter of acceptance to give certain details of co-op program’s importance.<br>There is nothing in those Notes concerning the failure to leave Canada at the end of the study permit.<br>[11] The Respondent concedes that there was a breach of procedural fairness when the Officer failed to request prior transcripts but based the decision on the failure to produce those transcripts.<br>[12] The Respondent contends that despite the breach of procedural fairness, the decision is reasonable based on the absence of financial information. The Respondent seeks to carve out financial information as a discrete ground for the decision which can breathe life into an otherwise infirm decision.<br>III. ANALYSIS<br>[13] It is well accepted that a breach of natural justice in decision making is an error of law and jurisdiction which results in the whole decision being quashed. There are very limited and exceptional circumstances where a breach will not have that result – such as where the breach could not have affected the result (Lahocsinszky v Canada (Minister of Citizenship and Immigration), 2004 FC 275, 129 ACWS (3d) 769).<br>[14] In the present instance the breach is the grounds for one of the findings against the Applicant. It cannot be said that the breach did not affect the result. This is not a case where it is futile to send the matter back for redetermination because the result is inevitable (Mobil Oil Canada Ltd v Canada-Newfoundland Offshore Petroleum Board, [1994] 1 SCR 202).<br>[15] Therefore, on that ground alone, this judicial review will be granted.<br>[16] However, it is important to address the Respondent’s claim that the absence of financial information was fatal to the restoration application and therefore the decision should be sustained.<br>[17] The Respondent’s own checklist does not ask for any financial information per se as part of a restoration application, although it is listed as a requirement for study permit applications. For restoration applications, the key requirement is production of the documents related to loss of status. If the Respondent wished to have financial or other documents, it should have asked for them either in the Checklist or by additional request. Any confusion in the Checklist lies at the feet of the Respondent and it is a further breach of procedural fairness to have a misleading document supplied to the public (Lim v Canada (Minister of Citizenship and Immigration), 2005 FC 657, 272 FTR 293).<br>[18] This breach is particularly so where the Applicant confirmed his financial situation and informed the Respondent that he was prepared to provide evidence if asked. It is no answer to say that section 182 of the Immigration and Refugee Regulations, SOR/2002-227, requires, on a restoration application, that an applicant meet the initial requirements for their stay. The Applicant met the initial requirements at the time of his first application, the evidence was in the file and there was nothing to suggest that anything had changed.<br>[19] The core problem with this matter is the requirement to produce documents related to the loss of status – as specified in the Checklist. It requires one to determine the cause of the loss of status.<br>[20] It is not clear from the decision denying the extension of the permit – the cause of the loss of status – what the real problem was with the Applicant’s documents. It is unfair to leave a party with questions as to what was incredulous about his documents. Unfortunately, in this case, the Applicant never asked.<br>[21] However, since the stated reason for refusal was concern about the Applicant being a genuine student, not that he lacked sufficient funds, it is reasonable to conclude that the loss of status related to academic matters not financial. It was therefore reasonable for the Applicant to address those issues in his restoration application – which he did.<br>[22] It is incumbent on the Respondent to state the reasons for loss of status in sufficient terms that an applicant can address those reasons in any further relief he may claim.<br>[23] The Respondent acknowledges that on this issue, it breached procedural fairness in respect to not requesting earlier transcripts. The problem is deeper than that. The core unfairness lay in the reasons for denying the extension which then led to the problems in addressing the restoration application.<br>[24] The Respondent’s reliance on financial issues is a new-found basis; not the basis for the original loss of status.<br>[25] It is a breach of procedural fairness to rely on a grounds not cited in the original decision without giving the Applicant notice that this ground of financial sustainability is now at issue – just as it was a breach of procedural fairness to not give notice that earlier transcripts were required (a point the Respondent properly concedes).<br>[26] These breaches of procedural fairness are further grounds for quashing the decision denying restoration.<br>[27] Lastly, the Respondent has put forward no basis for concluding that the Applicant would not leave Canada. It is not sufficient to just run through the various grounds for denial of the application, as if checking off a list, without giving reasons for the conclusion.<br>[28] As an overall conclusion, the allegation of lack of credibility of the Applicant because of problems with his documents seems to have permeated this file. If the Respondent had problems with the documents, it was obliged to state what those problems were. Conclusionary statements are not “reasons”.<br>IV. CONCLUSION<br>[29] This judicial review will be granted, the decision will be quashed, and the matter referred back to a different officer for a fresh determination.<br>[30] There are no questions for certification.<br>JUDGMENT<br>THIS COURT’S JUDGMENT is that the application for judicial review is granted, the decision is quashed and the matter is to be referred back to a different officer for a fresh determination.<br>"Michael L. Phelan"<br>Judge<br>FEDERAL COURT<br>SOLICITORS OF RECORD<br>DOCKET:<br>IMM-3349-13<br>STYLE OF CAUSE:<br>MANAV JALOTA v THE MINISTER OF CITIZENSHIP AND IMMIGRATION<br>PLACE OF HEARING:<br>Vancouver, British Columbia<br>DATE OF HEARING:<br>November 14, 2013<br>REASONS FOR JUDGMENT AND JUDGMENT:<br>PHELAN J.<br>DATED:<br>november 19, 2013<br>APPEARANCES:<br>Laura Best<br>For The Applicant<br>Adam Taylor<br>For The Respondent<br>SOLICITORS OF RECORD:<br>Embarkation Law Group<br>Barristers and Solicitors<br>Vancouver, British Columbia<br>For The Applicant<br>William F. Pentney<br>Deputy Attorney General of Canada<br>Vancouver, British Columbia<br>For The Respondent<br></code> | <code>cluster: ANALYSIS: The court analyzed the case by considering the principles of natural justice and procedural fairness. The court held that a breach of natural justice in decision-making is an error of law and jurisdiction that results in the whole decision being quashed, unless there are exceptional circumstances where the breach could not have affected the result. In this case, the breach of procedural fairness was not limited to the failure to request prior transcripts but was also evident in the unclear decision-making process and the officer's reliance on new-found grounds.The court also examined the Document Checklist – Student issued by Citizenship and Immigration Canada and found that it did not ask for financial information per se as part of a restoration application. The key requirement for restoration applications was the production of documents related to the loss of status. The court concluded that the officer's failure to request financial information was a breach of procedural fairness, particularly since the person concerned had confirmed his financial situation and offered to provide evidence if required.</code> | <code>cluster: SUMMARY: **(1) Facts**<br><br>The person concerned, a student from India, initially obtained a study permit valid until December 31, 2012. He started his studies at one college, transferred to another, and then to a third, all of which were permitted under his study permit. In September 2012, the person concerned applied for an extension of his permit, but it was refused because the officer believed he was not a genuine student. This decision was based on the officer's finding that the documentation submitted lacked credibility. However, the officer did not specify what was lacking in credibility.<br><br>The person concerned then made an application to restore his temporary resident status as a student in January 2013. He followed the Document Checklist – Student issued by Citizenship and Immigration Canada, which included providing photocopies of passport pages, a copy of his current immigration document, and documents related to the loss of status. The application was denied for reasons similar to the initial extension refusal, including the lack of proof of funds, failure to submit transcripts from previous studies, absence of studies from August 2011 to January 2012, and failure in a college letter of acceptance to give certain details of the co-op program's importance.<br><br>**(2) Issue**<br><br>The issue before the court was whether the decision to deny the person concerned's application to restore his temporary resident status as a student was reasonable and whether the officer's failure to request prior transcripts and the lack of clarity in the decision-making process breached procedural fairness.<br><br>**(3) Rule**<br><br>The court ruled that the decision to deny the person concerned's application to restore his temporary resident status as a student was unreasonable due to the breach of procedural fairness. The officer failed to request prior transcripts, which was a critical piece of information, and the decision-making process was unclear. The court also found that the officer's reliance on financial issues as a basis for denying the restoration application was a new-found ground that was not cited in the original decision, which was a further breach of procedural fairness.<br><br>**(4) Analysis**<br><br>The court analyzed the case by considering the principles of natural justice and procedural fairness. The court held that a breach of natural justice in decision-making is an error of law and jurisdiction that results in the whole decision being quashed, unless there are exceptional circumstances where the breach could not have affected the result. In this case, the breach of procedural fairness was not limited to the failure to request prior transcripts but was also evident in the unclear decision-making process and the officer's reliance on new-found grounds.<br><br>The court also examined the Document Checklist – Student issued by Citizenship and Immigration Canada and found that it did not ask for financial information per se as part of a restoration application. The key requirement for restoration applications was the production of documents related to the loss of status. The court concluded that the officer's failure to request financial information was a breach of procedural fairness, particularly since the person concerned had confirmed his financial situation and offered to provide evidence if required.<br><br>**(5) Conclusion**<br><br>The court concluded that the decision to deny the person concerned's application to restore his temporary resident status as a student was unreasonable due to the breach of procedural fairness. The court granted the judicial review, quashed the decision, and referred the matter back to a different officer for a fresh determination.</code> | | <code>cluster: ISSUES: Melo Castrillon v. Canada (Citizenship and Immigration)<br>Court (s) Database<br>Federal Court Decisions<br>Date<br>2018-05-01<br>Neutral citation<br>2018 FC 470<br>File numbers<br>IMM-1617-17<br>Decision Content<br>Date: 20180501<br>Docket: IMM-1617-17<br>Citation: 2018 FC 470<br>[ENGLISH TRANSLATION]<br>Ottawa, Ontario, May 1, 2018<br>PRESENT: The Honourable Mr. Justice Roy<br>BETWEEN:<br>RUBY AMPARO MELO CASTRILLON<br>Applicant<br>and<br>THE MINISTER OF CITIZENSHIP AND IMMIGRATION<br>Respondent<br>JUDGMENT AND REASONS<br>[1] Ruby Amparo Melo Castrillon seeks judicial review (under section 72 of the Immigration and Refugee Protection Act (S.C. 2001, c. 27) [IRPA]) of the decision of the Refugee Protection Division (RPD) finding that Ms. Melo Castrillon is not a Convention refugee or a person in need of protection.<br>I. Preliminary Issue<br>[2] The RPD’s decision was made under more nuanced circumstances. Its decision on March 13, 2017, related to Ms. Melo Castrillon and four members of her immediate family. Ms. Melo Castrillon is excluded under section 98 of the IRPA. With regard to the other four claimants, the RPD found that there was no serious possibility of them being considered Convention refugees or persons in need of protection given the lack of credibility of their claim. The RPD also seems to have found that this is the case for Ms. Melo Castrillon. Ms. Melo Castrillon is the sole applicant in this judicial review. She seeks judicial review of only the aspect of the decision regarding her exclusion under section 98.<br>[3] It is paradoxical for the applicant to seek judicial review of only one part of the RPD’s decision. As counsel for the respondent observed, it seems that the RPD not only declared Ms. Melo Castrillon to be excluded, but also found that she was not a Convention refugee or a person in need of protection. The applicant is contesting the first finding that she is excluded, but not the finding that none of the claimants could qualify under sections 96 and 97 of the IRPA. If that is true, even if the applicant were successful in her case before the Court, that would not set aside the finding that she is not a Convention refugee or a person in need of protection because she is not contesting that aspect of the decision. This makes the application for judicial review moot since, either way, the applicant cannot succeed in her effort to benefit from sections 96 and 97 of the IRPA (Borowski v. Canada (Attorney General), [1989] 1 SCR 342 [Borowski] at page 353).<br>[4] Nevertheless, the Court heard the parties because leave was given by this Court, and it decided to review the application for judicial review on merit even though it is moot (Borowski, at pages 358-363). The Court is convinced that the applicant could reasonably be excluded pursuant to section 98 of the IRPA.<br>II. Issue<br>[5] Ms. Melo Castrillon, who is the mother of the principal claimant before the RPD, is subject to a particular refusal because, according to the RPD, she is excluded under Article 1E of the Convention relating to the Status of Refugees. Article 1E reads as follows:<br>1E. This Convention shall not apply to a person who is recognized by the competent authorities of the country in which he has taken residence as having the rights and obligations which are attached to the possession of the nationality of that country.<br>1E. Cette Convention ne sera pas applicable à une personne considérée par les autorités compétentes du pays dans lequel cette personne a établi sa résidence comme ayant les droits et les obligations attachés à la possession de la nationalité de ce pays.<br>Section 98 of the IRPA incorporates the consequences of being subject to Article 1E into Canadian law. Section 98 reads as follows:<br>Exclusion — Refugee Convention<br>Exclusion par application de la Convention sur les réfugiés<br>98 A person referred to in section E or F of Article 1 of the Refugee Convention is not a Convention refugee or a person in need of protection.<br>98 La personne visée aux sections E ou F de l’article premier de la Convention sur les réfugiés ne peut avoir la qualité de réfugié ni de personne à protéger.<br>[6] The only decision the RPD made on the basis of section 98 was the one regarding Ms. Melo Castrillon’s exclusion.<br>III. Facts<br>[7] The applicant obtained permanent resident status in Italy on March 12, 2013. She had been living in Italy since August 2007. She decided to leave Italy on May 29, 2015, and return to her home country of Colombia, where her family was living. However, she did not remain there for long. After travelling to the United States in January 2016, she and her immediate family arrived at the Canadian border on January 22, 2016. They then filed a refugee claim. They were arriving from Colombia.<br>[8] Ms. Melo Castrillon reported that she had left Italy on May 29, 2015, to return to Colombia. There were two hearings before the RPD, on May 4, 2016, and on June 23, 2016. This is of some importance, since a claim was made that permanent resident status may be lost in Italy if a person does not reside there for a period of 12 consecutive months. Indeed, the applicant claims that her absence from Italy resulted in her losing her permanent resident status and, therefore, that section 98 of the IRPA did not apply after May 29, 2016. Since the RPD hearing did not end until June 23, 2016, this would indicate that the RPD erred in excluding the applicant under section 98 because she had been absent for more than 12 consecutive months.<br>[9] Therefore, the question is as to whether Ms. Melo Castrillon had lost her permanent resident status in Italy, meaning that Article 1E of the Convention could no longer be validly applied to her and that she could therefore claim refugee or person in need of protection status in Canada.<br>IV. The RPD’s decision<br>[10] The Minister of Public Safety and Emergency Preparedness intervened before the RPD under subsection 170(e) of the IRPA. It has been established that the applicant was a resident of Italy between August 2007 and May 29, 2015. The Minister made allegations about Ms. Melo Castrillon’s legal situation. The Minister alleges that she claimed to have permanent resident status in Italy during her point-of-entry interview on January 23, 2016. In addition, the Minister said that he had received confirmation from the Italian authorities that Ms. Melo Castrillon holds a permanent residence permit issued on March 12, 2013. The Minister stated that there are conditions that could result in the loss of permanent resident status in Italy. However, the applicant did not file any such evidence. That is why the Minister is arguing that there is a prima facie case that the applicant was still a permanent resident in Italy on the day of the RPD hearing. According to the Minister, this would mean that section 98 of the IRPA provides that Ms. Melo Castrillon is simply excluded by the application of Article 1E of the Convention and cannot be considered a Convention refugee or a person in need of protection in Canada.<br>[11] The RPD placed little importance on the fact that the applicant apparently claimed on two different forms that she had begun living in Italy in November 2004 and had resided in Italy since August 2007. Furthermore, it is established that she stated during her interview on January 23, 2016, that she had permanent resident status in Italy.<br>[12] Therefore, the following issues were before the RPD:<br>a) Ms. Melo Castrillon was a permanent resident of Italy until her departure on May 29, 2015;<br>b) The period of 12 consecutive months is being considered for the purposes of Canadian law from the date of departure until the hearing before the RPD;<br>c) Therefore, the one-year period had not elapsed on the day the applicant made a refugee claim in Canada or on May 4, 2016, the day the hearing began, but it had elapsed on June 23, 2016, the day of the second hearing;<br>d) A person may lose permanent resident status in Italy if they are outside the European Union for a period of 12 consecutive months;<br>e) The RPD was satisfied that permanent resident status in Italy entitles the holder to return there. Furthermore, the RPD found that a permanent resident in Italy has the same rights and obligations as Italian citizens within the meaning of section 98. The RPD based this finding in particular on the index of the National Documentation Package on Italy (May 31, 2016), a national package made available to the public by the Immigration and Refugee Board of Canada. In particular, the RPD seems to have based its conclusion on the following paragraph:<br>7. Rights of Individuals Holding an EC Long-Term Residence Permit The State Police website indicates that individuals holding an EC Long-Term Residence Permit are entitled to enter Italy without a visa, to work, to have access to social benefits and services provided by the Italian government, and to “participate in local public life” (Italy 29 Mar. 2010). The Ministry of Interior’s Staying in Italy Legally indicates that foreign nationals with a valid residence permit are granted the same education rights as Italian citizens (ibid. n.d., 21). The same source indicates that foreign nationals with a “regular residence permit” are required to register with the National Health Service (Servizio Sanitario Nazionale, SSN), and are entitled by law to receive health care and have “equal treatment as Italian citizens regarding compulsory contributions, health care given in Italy by the SSN and its time limit” (ibid., 23).<br>f) The burden was on the applicant to demonstrate to the RPD’s satisfaction that she had lost permanent resident status. The RPD stated the following in this regard:<br>[47] [translation] That being said, according to recent evidence from Italian authorities regarding the claimant, there is simply the possibility of losing status; they did not indicate that she was going to lose her status in Italy, nor that she had lost her status at the time of the hearing. Moreover, in the documents the applicant submitted regarding her communications with Italian authorities, there is no confirmation that she had lost her permanent resident status.<br>V. Standard of review and analysis<br>[13] It has been well established that the role of a reviewing judge is solely to ensure that the decision made is legal. Therefore, for certain issues, the reviewing judge must decide whether a given issue is reviewable on the standard of correctness. As the law stands, these issues are rare. In the vast majority of cases, the applicable standard of review is reasonableness (see the recent Williams Lake Indian Band v. Canada (Aboriginal Affairs and Northern Development), 2018 SCC 4, at para. 26 et seq., for an illustration of the changes in the law on the appropriate standard). That is the case here, where Article 1E of the Convention must be interpreted. In Majebi v. Canada (Citizenship and Immigration), 2016 FCA 274 [Majebi], the Federal Court of Appeal stated the following:<br>[5] First, we disagree that the Federal Court incorrectly reviewed the decision of the Appeal Division on the reasonableness standard of review. As the Federal Court correctly noted, this Court has expressed different opinions on the standard of review that applies to decisions interpreting international instruments. However, authorities that pre-date the articulation of the presumption of reasonableness review set out in cases such as Alberta (Information and Privacy Commissioner) v. Alberta Teachers’ Association, 2011 SCC 61 (CanLII), [2011] 3 S.C.R 654 must be approached with caution. In the present case we agree with the Federal Court that nothing in the legislative context reveals Parliament’s intent “not to protect the tribunal’s jurisdiction” (Mouvement laïque québécois v. Saguenay (City), 2015 SCC 16 (CanLII), [2015] 2 S.C.R. 3, at paragraph 46). Nor does the interpretation of the Convention fall into one of the categories of questions to which the correctness standard continues to apply as explained in Alberta Teachers’ at paragraph 30. This conclusion is consistent with the more recent decision of this Court in B010 v. Canada (Citizenship and Immigration), 2013 FCA 87 (CanLII), [2014] 4 F.C.R. 326, at paragraphs 58-72.<br>[6] It follows that the Appeal Division’s interpretation of the Convention was correctly reviewed on the reasonableness standard of review.<br>[14] In applying that standard, the Court is seeking what makes a decision reasonable. Does the decision fall within a range of possible, acceptable outcomes which are defensible in respect of the facts and law? Was there justification, transparency and intelligibility within the decision-making process? (Dunsmuir v. New Brunswick, 2008 SCC 9; 2008 1 SCR 190 at para. 47).<br>[15] No one disputes that the review of the application of Article 1E must be done at a certain point. The parties and the Court agree that the review of the application of Article 1E of the Convention is performed on the last day of the hearing before the RPD. In Majebi, the Court wrote:<br>[7] The Appeal Division applied the decision of this Court in Canada (Citizenship and Immigration) v. Zeng, 2010 FCA 118 (CanLII), [2011] 4 F.C.R. 3 to conclude that the appellants’ status should be considered as of the last day of the hearing before the Refugee Protection Division. We agree with the Federal Court that this was a reasonable conclusion for the Appeal Division to reach.<br>[16] As James C. Hathaway and Michelle Foster explain in The Law of Refugee Status, 2nd ed. (Cambridge University Press, 2014), Article 1E of the Convention relating to the Status of Refugees provides that protection is no longer available to a certain category of persons (the other being in Article 1D). These persons benefit from the protection of another state, meaning that the protection of a substitute state, in this case, Canada, is not required. In short, if Ms. Melo Castrillon was able to benefit from the protection of another state at the time of her refugee claim hearing, the claim had to be made to that state.<br>[17] Of course, there are conditions that result in the loss of Convention benefits. Essentially, these are cases of “de facto nationals,” those who have the rights and obligations attached to the possession of the nationality of that country.<br>[18] In this case, the issue is to determine whether the applicant was still a “de facto national” of Italy because of her permanent resident status, which entitled her to enter Italy without a visa, among other things. The applicant limited her dispute to her claim that she had lost her permanent resident status 12 months after she left Italy. She is arguing that the loss of status would be automatic.<br>VI. Analysis<br>[19] The applicant says that she did research to confirm whether or not she had lost her status. Neither on the day of the hearing before the RPD, nor since, including on the day of the Court hearing, was she able to determine whether she is a permanent resident of Italy. This, in itself, indicates that the status is not automatically lost. At most, the status can be revoked from those who are absent from the country for 12 months.<br>[20] Therefore, the sole issue before the Court is to determine whether the RPD’s decision that the applicant had permanent resident status on June 23, 2016, is reasonable. It is not disputed that a person who has been absent from Italy for more than 12 consecutive months could lose their permanent resident status. The question is to determine whether the loss of status is automatic.<br>[21] To succeed in her claim, the applicant was required to convince the RPD that as of May 30, 2016, she had lost her permanent resident status in Italy. That loss would have to be automatic, or practically so.<br>[22] As previously noted, the applicant attempted to determine her status in Italy. What is relevant for our purposes is her status on the day of the hearing, June 23, 2016. Despite her attempts, she was unable to confirm her status. If the status was automatically lost on May 30, 2016, i.e. 12 months after she left Italy, it would have been simple for the Italian authorities to confirm the loss of status. That was not the case. This seems to confirm the documentary evidence indicating that refugee status may be revoked if the permanent resident is not on European Community (EC) territory for 12 consecutive months.<br>[23] The reality is that the applicant has not even established that she was absent from Italy and the EC for 12 consecutive months. All we know is that she apparently left Italy on May 29, 2015. Regardless, what is relevant in this case is that the RPD concluded on the documentary evidence only that the possibility of status revocation exists; it is not lost automatically. It appears that there needs to be an act of revocation. As the RPD observed, if revocation was automatic, there should have been a simple and direct response from the Italian authorities, which suggests that the interpretation of the documentary evidence is correct. Therefore, it must be reasonable.<br>[24] I consulted the documentary evidence on file and do not doubt the reasonableness of the RPD finding that revocation of permanent residence is possible, but not automatic.<br>[25] Referencing Canada (Citizenship and Immigration) v. Zeng, 2010 FCA 118 [Zeng], the RPD concluded that Ms. Melo Castrillon had essentially a similar status to that of Italian citizens. Therefore, she is excluded if she was a permanent resident on the day of the RPD hearing, in other words, if she had not lost her permanent resident status on the day of the hearing. Paragraphs 28 and 29 of Zeng speak for themselves:<br>[28] Considering all relevant factors to the date of the hearing, does the claimant have status, substantially similar to that of its nationals, in the third country? If the answer is yes, the claimant is excluded. If the answer is no, the next question is whether the claimant previously had such status and lost it, or had access to such status and failed to acquire it. If the answer is no, the claimant is not excluded under Article 1E. If the answer is yes, the RPD must consider and balance various factors. These include, but are not limited to, the reason for the loss of status (voluntary or involuntary), whether the claimant could return to the third country, the risk the claimant would face in the home country, Canada’s international obligations, and any other relevant facts.<br>[29] It will be for the RPD to weigh the factors and arrive at a determination as to whether the exclusion will apply in the particular circumstances.<br>[26] The RPD found that the applicant had permanent resident status on the day of the hearing and that this status is essentially similar to that of Italian citizens. Therefore, it was unnecessary to proceed with an analysis based on the decision tree proposed by the Federal Court of Appeal.<br>[27] If the applicant cannot establish whether she was automatically excluded from permanent resident status, it was perfectly reasonable for the RPD to conclude that she had that status on the day of the hearing.<br>VII. Conclusion<br>[28] Two questions arise upon review of Article 1E of the Convention. Firstly, does a person’s status in the country where they resided entitle them to the same benefits that the country’s citizens receive? Secondly, does this person still have this status if that is the country where they are a “de facto national”? If so, that is the country where the person must seek refuge.<br>[29] The RPD hearing took place more than 12 months after the applicant left Italy. I cannot find anything unreasonable in considering that the permanent resident status may be revoked after 12 months, but is not automatically revoked. It being established that the applicant had that status when she left Italy, which is not disputed, the burden was on the applicant to establish to the RPD’s satisfaction that the status was automatically or otherwise revoked. This was not done. Consequently, the RPD’s decision was reasonable on its face as to the maintenance of the status on the hearing date. The rights conferred by this status in Italy are similar to the rights and obligations attached to the possession of the nationality of that country, as required by Article 1E. As a result, the application for judicial review must be dismissed, because the RPD’s decision is reasonable.<br>[30] The parties did not indicate that there is a question within the meaning of subsection 74(d) of the IRPA. There is no serious question of general importance to certify.<br>JUDGMENT in file IMM-1617-17<br>THIS COURT’S JUDGMENT is that:<br>The application for judicial review is dismissed;<br>No questions of general importance are proposed or certified.<br>“Yvan Roy”<br>Judge<br>FEDERAL COURT<br>SOLICITORS OF RECORD<br>DOCKET:<br>IMM-1617-17<br>STYLE OF CAUSE:<br>RUBY AMPARO MELO CASTRILLON v. THE MINISTER OF CITIZENSHIP AND IMMIGRATION<br>PLACE OF HEARING:<br>Ottawa, Ontario<br>DATE OF HEARING:<br>April 16, 2018<br>JUDGMENT AND REASONS:<br>ROY J.<br>DATED:<br>May 1, 2018<br>APPEARANCES:<br>Nancy Munoz Ramirez<br>For the applicant<br>Thi My Dung Tran<br>For the respondent<br>SOLICITORS OF RECORD:<br>Nancy Cristina Munoz Ramirez<br>ROA Services Juridiques<br>Montréal, Quebec<br>For the applicant<br>Attorney General of Canada<br>Montréal, Quebec<br>For the respondent<br></code> | <code>cluster: ISSUES: The person concerned sought judicial review of the RPD's decision, arguing that she had lost her permanent resident status in Italy after being absent for 12 consecutive months. The issue before the court was whether the RPD's decision that the person concerned had permanent resident status on June 23, 2016, was reasonable.</code> | <code>cluster: SUMMARY: **(1) Facts**<br><br>The person concerned, Ruby Amparo Melo Castrillon, is a Colombian national who obtained permanent resident status in Italy on March 12, 2013. She had been living in Italy since August 2007. However, she left Italy on May 29, 2015, and returned to Colombia. She then traveled to the United States and eventually arrived at the Canadian border on January 22, 2016, with her immediate family. They filed a refugee claim, and the Refugee Protection Division (RPD) conducted two hearings, on May 4, 2016, and June 23, 2016. The RPD found that the person concerned was excluded under section 98 of the Immigration and Refugee Protection Act (IRPA) because she was a permanent resident of Italy until her departure on May 29, 2015, and had not lost her permanent resident status.<br><br>**(2) Issue**<br><br>The person concerned sought judicial review of the RPD's decision, arguing that she had lost her permanent resident status in Italy after being absent for 12 consecutive months. The issue before the court was whether the RPD's decision that the person concerned had permanent resident status on June 23, 2016, was reasonable.<br><br>**(3) Rule**<br><br>The court applied the reasonableness standard of review, as established in the case of Majebi v. Canada (Citizenship and Immigration). The court considered the documentary evidence and the RPD's findings, which indicated that the possibility of status revocation exists, but it is not automatic. The court also considered the case of Canada (Citizenship and Immigration) v. Zeng, which established that a person's status in the country where they resided entitles them to the same benefits that the country's citizens receive.<br><br>**(4) Analysis**<br><br>The court analyzed the RPD's decision and found that it was reasonable. The court noted that the person concerned had not established that she was automatically excluded from permanent resident status and that the RPD's conclusion that she had that status on the day of the hearing was reasonable. The court also considered the documentary evidence, which indicated that refugee status may be revoked if the permanent resident is not on European Community (EC) territory for 12 consecutive months. However, the court found that the RPD's interpretation of the documentary evidence was correct, and that revocation of permanent residence is possible, but not automatic.<br><br>**(5) Conclusion**<br><br>The court concluded that the RPD's decision was reasonable and dismissed the person concerned's application for judicial review. The court found that the person concerned had not established that she had lost her permanent resident status in Italy and that the RPD's decision was consistent with the requirements of Article 1E of the Convention relating to the Status of Refugees. The court also noted that the person concerned had not established that there was a serious question of general importance to certify.</code> | | <code>cluster: FACTS: Bell Canada v. Lackman<br>Court (s) Database<br>Federal Court Decisions<br>Date<br>2017-06-30<br>Neutral citation<br>2017 FC 634<br>File numbers<br>T-800-17<br>Decision Content<br>Date: 20170629<br>Docket: T-800-17<br>Citation: 2017 FC 634<br>Ottawa, Ontario, June 29, 2017<br>PRESENT: The Honourable Mr. Justice Bell<br>BETWEEN:<br>BELL CANADA<br>BELL EXPRESSVU LIMITED PARTNERSHIP BELL MEDIA INC.<br>VIDEOTRON S.E.N.C.<br>GROUPE TVA INC.<br>ROGERS COMMUNICATIONS CANADA INC.<br>ROGERS MEDIA INC.<br>Plaintiffs<br>and<br>ADAM LACKMAN dba TVADDONS.AG<br>Defendant<br>ORDER WITH REASONS<br>I. Introduction<br>[1] On June 9, 2017, a Justice of this Court issued an Anton Piller Order and an Interim Injunction following motions made ex parte and in camera by the Plaintiffs.<br>[2] Without a doubt, the consideration of ex parte orders – orders that are made without notice to or appearance by the defending party – constitute one of the most challenging issues facing judges in our adversarial system. When the ‘adversary’ is not present, the norms and very foundation of our justice system face serious challenges. For this reason, it is trite law that a party seeking an ex parte order must provide full and frank disclosure to the court. This full and frank disclosure extends not only to the factual underpinnings of the motion, but to the relevant jurisprudence and statutory provisions that might impact upon a judge tasked with rendering a decision in such circumstances. The relevant jurisprudence in this matter has been developed over the past 30 years and sets out the circumstances under which an Anton Piller order may be issued, and how such an order should be executed.<br>[3] Anton Piller orders are essentially civil search warrants that give a plaintiff access to the premises of the defendant, without notice, to search for and to seize property. While the plaintiff or the plaintiff’s representative cannot enter the premises without the permission of the occupant, that permission is normally obtained upon threat of contempt proceedings.<br>[4] The leading case regarding Anton Piller orders is Celanese Canada Inc. v. Murray Demolition Corporation, 2006 SCC 36, [2006] 2 S.C.R. 189 (Celanese). In his opening paragraph in Celanese, Justice Binnie, speaking for the Court stated:<br>An Anton Piller order bears an uncomfortable resemblance to a private search warrant. No notice is given to the party against whom it is issued. Indeed, defendants usually first learn of them when they are served and executed, without having had an opportunity to challenge them or the evidence on which they were granted. The defendant may have no idea a claim is even pending. The order is not placed in the hands of a public authority for execution, but authorizes a private party to insist on entrance to the premises of its opponent to conduct a surprise search, the purpose of which is to seize and preserve evidence to further its claim in a private dispute. The only justification for such an extraordinary remedy is that the plaintiff has a strong prima facie case and can demonstrate that on the facts, absent such an order, there is a real possibility relevant evidence will be destroyed or otherwise made to disappear. The protection of the party against whom an Anton Piller order is issued ought to be threefold: a carefully drawn order which identifies the material to be seized and sets out safeguards to deal, amongst other things, with privileged documents; a vigilant court-appointed supervising solicitor who is independent of the parties; and a sense of responsible self-restraint on the part of those executing the order.<br>[5] Hence, the only justification for what amounts to a party’s right to execute a search warrant in a private dispute (an Anton Piller order) is a demonstrated need to preserve relevant evidence where there is a real possibility of destruction or disappearance of that evidence.<br>[6] In Celanese at para 35, the Court sets out four essential conditions which must be established by the plaintiff before an Anton Piller order may issue. These conditions are reaffirmed in British Columbia (Attorney General) v. Malik, 2011 SCC 18 at para 29, [2011] 1 S.C.R. 657 (Malik), and state:<br>There must be a strong prima facie case;<br>The damage to the plaintiff of the defendant’s alleged misconduct, potential or actual, must be very serious;<br>There must be convincing evidence that the defendant has in its possession incriminating documents or things; and<br>There must be a real possibility that the defendant may destroy such material before the discovery process can do its work.<br>[7] The four conditions were deemed to have been met by the justice at the June 9, 2017 hearing, and an Anton Piller Order was issued. Both the Anton Piller Order and the Interim Injunction were made for a period of 14 days only.<br>[8] The Anton Piller Order was fully executed within the 14 days set out in the order. On June 21, 2017, the Interim Injunction was extended to June 30, 2017, on consent of the parties, in order to provide the Court the opportunity to more fully consider the Plaintiffs’ review motion, in which they seek:<br>A declaration that the Anton Piller Order was lawfully issued and that the Order and accompanying Interim Injunction were lawfully carried out;<br>An order authorizing the Plaintiffs to withdraw a deposit of $50,000 deposited on June 9, 2017 as security for damages;<br>An order that paragraphs C-17 to C-20 of the Order made on June 9, 2017 remain valid until final determination of this proceeding;<br>An interlocutory injunction pursuant to Rule 373 of the Federal Courts Rules, SOR/98-106 (the Rules), which would effectively result in the continuation of the interim injunction issued on June 9, 2017 until final determination of this proceeding.<br>An order for a mandatory injunction that would require the Defendant to continue to provide login credentials, passwords and other necessary access to material that was targeted by the Anton Piller Order. The effect of this order, if granted, would be to provide a continuous search warrant to the Plaintiffs until this matter is finally determined.<br>Costs to be awarded on a solicitor-client basis.<br>II. The Parties and the Relevant Facts<br>[9] The Plaintiffs are corporations, limited partnerships or general partnerships who are either broadcasters who operate television stations, or broadcasting distribution undertakings pursuant to the Broadcasting Act, S.C. 1991, c. 11 who receive broadcasts from several televisions stations. The broadcasters contend that they own the Canadian rights to communicate a variety of programs to the public by telecommunication via television and online broadcast. More precisely, the Plaintiffs Bell Media Inc., Rogers Media Inc. and Groupe TVA Inc. (the “Broadcasters”) contend that, among other things, they hold the Canadian rights to undertake the following actions pursuant to section 3 of the Copyright Act, R.S.C. 1985, c. C-42 (the Act):<br>(a) Communicate the Plaintiffs programs to the public by telecommunication via television broadcast, including the right to<br>(b) make the Plaintiffs programs available to the public by telecommunications via television broadcast in a way that allows a member of the public to have access to them from a place and at a time individually chosen by that member of the public; and<br>(c) authorize such acts.<br>[10] The broadcasting undertakings contend that they own the right to transmit television broadcasts to subscribers by various means of telecommunication, such as by satellite signal, co-axial cable, fibre optics, and hybrid fibre optics/co-axial cable.<br>[11] The Defendant is a software developer who has developed add-ons to an open source media player application known as KODI. Some KODI add-ons permit users to gain access to a vast amount of video content allegedly owned and distributed by the Plaintiffs. The Defendant has developed add-ons that permit users to access material which is clearly “non-infringing content”, as well as material which the Plaintiffs claim to be “infringing content”. To the extent that the add-ons developed by the Defendant permit access to allegedly “infringing content”, the Plaintiffs contend that they suffer damages. While the Defendant has not yet filed a Statement of Defence in this matter, he claims there is no violation of the Act flowing from his operations. He contends his activities are protected by subparagraph 2.4(1)(b) of the Act, which provides as follows:<br>[…] a person whose only act in respect of the communication of a work or other subject-matter to the public consists of providing the means of telecommunication necessary for another person to so communicate the work or other subject-matter does not communicate that work or other subject-matter to the public; […].<br>[12] The Defendant says the jurisprudence arising from the Society of Composers, Authors and Music Publishers of Canada v. Canadian Association of Internet Providers, 2004 SCC 45, [2004] 2 S.C.R. 427 (SOCAN), supports his position that he is not “communicating” any content. He is merely making it accessible, much like Google and other search engines. In fact, he candidly refers to his operation as a “mini-Google”.<br>[13] In addition to claiming that the products developed by him are compliant with the Act, the Defendant contends that the Act provides the Plaintiffs with a potential remedy should they conclude a violation exists. He says that remedy is found in paragraph 41.27(5) of the Act, whereby the Plaintiffs may provide him notice of the alleged infringement and afford him the opportunity to remedy the violation. The Defendant contends that, if there was any violation of the Act, which he denies, the Plaintiffs employed a “bombe atomique” by requesting an Anton Piller order instead of exercising the less draconian methods available under the Act.<br>III. The Issues before me<br>[14] The Court’s role at this early stage of the litigation is clearly not to decide the merits of the case (Celanese, above, para 1). My role is to apply a de novo evaluation of the Anton Piller Order after having heard the opposing point of view (John Stagliano Inc. v. Elmaleh, 2006 FC 585 at para 110, 292 FTR 208; Canadian Private Copying Collective v Amico Imaging Services Inc, 2004 FC 469 at paras 27-28, 249 FTR 312). This requires the Court to reconsider the four requirements necessary for the issuance of an Anton Piller order, namely: (i) that there is a strong prima facie case; (ii) that the damage to the plaintiff caused by the defendant’s alleged misconduct, potential or actual, is very serious; (iii) that there is convincing evidence that the defendant has in its possession incriminating documents or things; and (iv) that there is a real possibility that the defendant may destroy such material before the discovery process can do its work (Celanese, above, at para 35; Malik, above, at para 29).<br>[15] With respect to the interlocutory injunction, I must determine whether the Plaintiffs have met the tripartite test set out in Manitoba (Attorney General) v. Metropolitan Stores (MTS) Ltd., [1987] 1 SCR 110, 38 DLR (4th) 321 (Metropolitan Stores), and RJR MacDonald Inc. v. Canada (Attorney General), [1994] 1 SCR 311, 111 DLR (4th) 385 (RJR MacDonald). That is: (i) is there a serious issue to be tried?; (ii) have the Plaintiffs demonstrated they will suffer irreparable harm if the injunction is not granted?; and; (iii) does the balance of convenience favour the granting of the injunction? The test is conjunctive. I also note that the “strong prima facie case” requirement for the issuance of an Anton Piller order is a higher standard than the “serious issue to be tried” standard applicable to the first criteria of the test for an interlocutory injunction (Indian Manufacturing Ltd. v. Lo, [1996] 2 FCR 647, 67 CPR (3d) 132; Havana House Cigar & Tobacco Merchants Ltd. v. Doe, [1999] FCJ No. 1225 at para 27, 1 CPR (4th) 521).<br>IV. The Evidence<br>[16] In the affidavit principally relied upon by the Plaintiffs, the affiant deposes, among other things, that the Defendant’s business, known as “TVAddons”, hosts over “1500 Add-ons in total”. He also deposes that, of these 1500 Add-ons, there is a curated list of 22 Add-ons, “almost all of which are infringing Add-ons”. It follows that, from the Plaintiffs’ own evidence, just over 1% of the Add-ons developed by the Defendant are alleged to be “infringing Add-ons”. This is consistent with the Defendant’s affidavit, wherein he deposes that there are 1400 Add-ons available on the TVAddons website, the majority of which are unrelated to alleged “illegitimate Hosting Sites”.<br>[17] Both the Plaintiffs’ and the Defendant’s principal affiants describe the KODI software application as an “open source”, which means that it is available to the general public for use and/or modification of its original design. The Defendant deposes that the KODI application “without any add-on added to it, is used to search, execute, stream or download any type of digital files such as pictures, music, videogames, videos, interactive files, etc.”, and then goes on to say that, contrary to what is affirmed by the Plaintiffs’ affiant, “KODI is not limited to accessing content that is on the computer of the users. The KODI application includes a list of add-ons that are Web Search add-ons”. The Plaintiff’s affiant spent some time in his affidavit explaining how the Defendant would have accessed the work “Orphan Black” by using his Add-ons. The Defendant, in his affidavit, demonstrated the same search results using Google, hence his assertion that his site is a “mini-Google” and is therefore contemplated by the exceptions set out and discussed above in SOCAN.<br>[18] On this Review Motion, the complete hearing before the justice who granted the Anton Piller Order is to be considered. Part of that record contains the following exchange:<br>Justice: And on the next page, paragraph 5, so the experts would deactivate the TV Add-ons domains and sub-domains, so you really want to neutralize the Defendant’s operations?<br>Lawyer for the Plaintiffs: Yeah, completely.<br>Justice: Completely…<br>Lawyer for the Plaintiffs: Yeah.<br>Justice: So it’s more than saying you’re enjoined of not operating or communicating, you really want to neutralize the guy.<br>Lawyer for the Plaintiffs: Yeah, completely, that’s for sure. Yeah. We use his passwords, we shut down everything, we change the password and we change everything and it cannot be reactivated by him or by someone else. That’s the goal.<br>[19] According to the Anton Piller Order, the “search” was to be conducted between the hours of 8 a.m. and 8 p.m., unless it was reasonably necessary to depart from those hours. I conclude that this search includes any interview considered necessary by the independent solicitor and/or Plaintiffs’ counsel. In his affidavit, the independent solicitor deposed that on June 12, 2017, the questioning of the Defendant commenced at 2:40 p.m. and “lasted until approximately midnight”. The interrogation (my word) of the Defendant therefore lasted more than 9 hours. I acknowledge that the interrogation was interrupted, according to the independent solicitor, by dinner, and an opportunity for the Defendant to speak to his lawyer. However, it is important to note that the Defendant was not permitted to refuse to answer questions under fear of contempt proceedings, and his counsel was not permitted to clarify the answers to questions. I conclude unhesitatingly that the Defendant was subjected to an examination for discovery without any of the protections normally afforded to litigants in such circumstances (discovery). Here, I would add that the ‘questions’ were not really questions at all. They took the form of orders or directions. For example, the Defendant was told to “provide to the bailiff” or “disclose to the Plaintiffs’ solicitors”.<br>[20] I find the most egregious part of the questioning to be in in the independent solicitor’s affidavit, wherein he deposes that counsel for the Plaintiffs “provided Defendant Lackman with some names” of other people who might be operating similar websites. It appears the Defendant was required to associate that list of 30 names with names, addresses and other data about individuals that might have some knowledge or relationship to those names. The list and the responses of the Defendant are found on three complete pages in the exhibits of the independent solicitor’s affidavit. I conclude that those questions, posed by Plaintiffs’ counsel, were solely made in furtherance of their investigation and constituted a hunt for further evidence, as opposed to the preservation of then existing evidence.<br>V. Analysis<br>A. Anton Piller Order<br>[21] The Anton Pillar Order under review was purposely designed by counsel for the Plaintiffs, as admitted by them, to completely shut down the Defendant’s operations. To the Plaintiffs, it mattered not that, by their own estimate, just over 1% of the Add-ons developed by the Defendant were allegedly used to infringe copyright. I therefore conclude that the purpose of the Anton Piller Order under review was only partly designed to preserve evidence that might be destroyed or that could disappear. I am of the view that its true purpose was to destroy the livelihood of the Defendant, deny him the financial resources to finance a defence to the claim made against him, and to provide an opportunity for discovery of the Defendant in circumstances where none of the procedural safeguards of our civil justice system could be engaged.<br>[22] With respect to the issue of whether there exists a “strong prima facie case”, I am not convinced. While I acknowledge the purpose of this review is not to try the case, I have nevertheless assessed the strength of the case made out by the Plaintiffs. In doing so, I have carefully considered the arguments made by Defendant’s counsel in relation to his interpretation of the Act and the application of SOCAN to the facts. I have also carefully considered the affidavits offered by both the Defendant and the Plaintiffs’ affiants. I am impressed by the forthright manner in which the Defendant describes his knowledge and use of the open source KODI software and the similarities between TVAddons and Google. The actions performed by the Plaintiffs’ expert to access allegedly infringing material at TVAddons were replicated by the Defendant using Google. In my view, the jurisprudence from SOCAN becomes relevant to this issue. While the prima facie case may have appeared strong before the justice who heard the matter ex parte, the presence of the adversary in the courtroom and the arguments advanced have demonstrated there is nothing more than a serious issue to be tried. The higher threshold of a strong prima facie case is not met.<br>[23] In the absence of a strong prima facie case, and in the presence of an overly broad order designed to do much more than preserve evidence that might be destroyed or that might disappear, there is little purpose in conducting any further analysis on the issuance and execution of the Anton Piller Order. I conclude that it must fall. I now turn to the issue of the Interlocutory Injunction.<br>B. Interlocutory Injunction<br>[24] While I accept that there exists a serious issue to be tried, and acknowledge that the Plaintiffs may well suffer irreparable harm if the interlocutory injunction is not issued, I am not satisfied that the balance of convenience favours the granting of an interlocutory injunction.<br>[25] The Defendant has demonstrated he has an arguable case that he is not violating the Act. He has also deposed that TVAddons is his only source of income. If an injunction were granted by this Court, it would effectively bring this litigation to a close, as the Plaintiffs’ admittedly seek to neutralize the Defendant in such a way that it would be impossible for his add-ons to be reactivated “by him or someone else”. Furthermore, if the Defendant is “neutralized” in this way, he may lack the financial resources to mount his defence. In considering the balance of convenience, I also repeat that the Plaintiffs admit that the vast majority of add-ons are non-infringing. Whether the remaining approximately 1% are infringing is very much up for debate. For these reasons, I find the balance of convenience favours the Defendant, and no interlocutory injunction will be issued.<br>C. Other<br>[26] The Plaintiffs have requested a return of their $50,000 deposit paid as security for damages. Given that the Anton Piller Order is now declared unlawful, I leave it to the parties to negotiate the amount, if any, of that deposit that is to be forfeited to the Defendant. Failing agreement among the parties on that issue within 90 days from the issuance of this Order, they may return to this Court for argument and resolution of that issue.<br>[27] Finally, the Plaintiffs have requested costs to be assessed on a solicitor-client basis. The Defendant indicated that whether he should win or lose on this Review Motion, he considers it appropriate that costs be awarded in the cause. As a result, this Court will order that costs be in the cause.<br>THIS COURT ORDERS that:<br>The Interim Injunction issued by this Court on June 9, 2017 is extended, on consent of the parties until June 30, 2017 or until further order of the Court, whichever occurs first. (see paragraph (e) below);<br>The motion request by the Plaintiffs for a declaration that the execution of the Anton Piller Order and Interim Injunction were lawfully conducted is dismissed;<br>The motion request by the Plaintiffs to authorize the withdrawal from the Court of $50,000 filed on June 9, 2017 as security for damages is dismissed;<br>The motion request for an order that paragraphs C-17 to C-20 of the Anton Piller Order made on June 9, 2017 remain in effect until final determination of this proceeding is dismissed. For greater certainty, the Anton Piller Order is fully vacated and declared null and void;<br>The motion for an Interlocutory Injunction is dismissed. Effective immediately, the Interim Injunction issued on June 9, 2017 and extended on June 21, 2017 is vacated;<br>All remaining orders sought by the Plaintiffs in their amended Notice of Motion filed on June 16, 2017 and heard on June 21, 2017 are dismissed;<br>All articles seized during the execution of the Anton Piller Order, including, but not limited to phones, computers, computer equipment, records, communications or evidence proving that communications were made between the Defendant and third parties, domain names, subdomain names, passwords, login credentials, banking information, corporate registry information, information regarding hosting accounts, server information, codes, programmer information, and all transcripts and recordings of the Defendant in response to any questions put to him by any person in the course of the execution of the Anton Piller Order are to be delivered to the Defendant, and no copies of any such materials are to be maintained by independent counsel, plaintiffs, or any person other than the Defendant;<br>All affidavits filed by the Plaintiffs in support of the motion for an Anton Piller order are to be sealed and marked “Subject to an order of confidentiality” and placed in Court file T-800-17, and are to remain confidential and under seal until further order of the Court;<br>Costs are in the cause;<br>I will remain seized with jurisdiction over any motions or requests for directions with respect to the contents of this order.<br>“B. Richard Bell”<br>Judge<br>FEDERAL COURT<br>SOLICITORS OF RECORD<br>DOCKET:<br>T-800-17<br>STYLE OF CAUSE:<br>BELL CANADA, BELL EXPRESS VU LIMITED PARTNERSHIP, BELL MEDIA INC., VIDEOTRON S.E.N.C., GROUPE TVA INC., ROGERS COMMUNICATIONS CANADA INC., ROGERS MEDIA INC., v ADAM LACKMAN DBA TV ADDONS.AG<br>PLACE OF HEARING:<br>Montréal, Quebec<br>DATE OF HEARING:<br>June 21, 2017<br>REASONS FOR ORDER:<br>BELL J.<br>DATED:<br>June 30, 2017<br>APPEARANCES:<br>Me François Guay<br>Me Guillaume Lavoie St-Marie<br>For The Plaintiff<br>Me Éva Richard, Me Karim Renno<br>Me Hilal El Ayoubi<br>FOR THE DEFENDANT<br>SOLICITORS OF RECORD:<br>Me Karl Delwaide, Me Marie-Gabrielle Bélanger<br>Fasken Martineau DuMoulin S.E.N.C.R.L., s.r.l.<br>Tour de la Bourse, bureau 3700<br>800, rue Victoria<br>Montréal, QC H4Z 1E9<br>For The Plaintiff<br>Me Bernard Letarte, Me Ludovic Sirois - Attorney General of Canada - Justice Canada<br>Bureau SAT-6060<br>284 Wellington Street<br>Ottawa, ON K1A 0H8<br>for the defendant<br></code> | <code>cluster: FACTS: This case involves a dispute between the plaintiffs, a group of Canadian broadcasters and broadcasting undertakings, and the defendant, a software developer, Adam Lackman. The plaintiffs alleged that the defendant's software, known as TVAddons, allowed users to access copyrighted content without permission. The defendant claimed that his software was compliant with the Copyright Act and that he was not communicating any content, but rather making it accessible like a search engine. The plaintiffs sought an Anton Piller order, which is a civil search warrant that allows a plaintiff to search the defendant's premises without notice, to seize evidence and preserve it. The order was granted by a justice of the Federal Court, but the defendant challenged it on a review motion.</code> | <code>cluster: ANALYSIS: The court's analysis was based on the principles established in the leading case of Celanese Canada Inc. v. Murray Demolition Corporation, which sets out the conditions necessary for the issuance of an Anton Piller order. The court found that the plaintiffs had failed to meet these conditions, particularly with respect to the prima facie case and the scope of the order. The court also considered the defendant's argument that his software was compliant with the Copyright Act and that he was not communicating any content, but rather making it accessible like a search engine. The court found that this argument had merit and that the plaintiffs had not met the requirements for an interlocutory injunction.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Evaluation Dataset #### Unnamed Dataset * Size: 1,500 evaluation samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 370 tokens</li><li>mean: 2955.16 tokens</li><li>max: 6550 tokens</li></ul> | <ul><li>min: 32 tokens</li><li>mean: 213.29 tokens</li><li>max: 1042 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 206.64 tokens</li><li>max: 973 tokens</li></ul> | * Samples: | anchor | positive | negative | |:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>cluster: FACTS: Murphy v. Canada (Attorney General)<br>Court (s) Database<br>Federal Court Decisions<br>Date<br>2016-11-02<br>Neutral citation<br>2016 FC 1208<br>File numbers<br>T-192-16<br>Decision Content<br>Date: 20161102<br>Docket: T-192-16<br>Citation: 2016 FC 1208<br>Ottawa, Ontario, November 2, 2016<br>PRESENT: The Honourable Mr. Justice Brown<br>BETWEEN:<br>DAPHNE MURPHY<br>Applicant<br>and<br>THE ATTORNEY GENERAL OF CANADA<br>Respondent<br>JUDGMENT AND REASONS<br>I. Nature of the Matter [1] This is an application for judicial review brought by Daphne Murphy [the Applicant] under s. 18.1 of the Federal Courts Act, RSC 1985, c F-7 of a decision made on October 8, 2015, by a member of the Social Security Tribunal – Appeal Division (SST-AD) [SST-AD Decision] denying the Applicant’s application for leave to appeal. The Applicant sought leave in order to appeal a decision of the Social Security Tribunal – General Division (SST-GD) made on August 28, 2015 [SST-GD Decision], which had dismissed the Applicant’s appeal from a decision denying her application for Canada Pension Plan (CPP) disability benefits.<br>[2] The Applicant is a self-represented litigant. She designated her husband as a representative to assist her after suffering a stroke in November 2011.<br>[3] Judicial review is granted for the following reasons.<br>II. Facts [4] The Applicant is a 58-year-old woman from Gander, Newfoundland. The record shows that she has a significant speech impairment, which was apparent at the hearing. She advises she was unable to speak until she was 7 years old. She obtained a grade 8 education. She married in 1979 and it appears she divorced in 1993. Her husband was in the armed forces. She has two children.<br>[5] She has a very extensive history of attendances on physicians. A very large number of visits to various doctors and associated reports are documented in the Certified Tribunal Record (CTR) from 2011 going back to the 1990s.<br>[6] She is able to access the CPP through credits accumulated by way of credit-splitting with her former husband from 1979 to 1993. The record indicates she was not otherwise working; she was raising her two children and advises she was doing some babysitting to make her own money. She also has contributions from her own work in 2007 and 2008, but they were not sufficient to entitle her to disability under the Canada Pension Plan, RSC 1985, c C-8 [CPP Act]. This will be discussed in more detail later.<br>[7] Her application for a CPP disability pension was made under the ‘late application provisions’, the effect of which is that she may obtain a CPP disability pension if she establishes she was severely disabled (as defined) as of December 31, 1997, and remained severely disabled continuously since then.<br>[8] The SST-AD conceded the Applicant is currently disabled as a consequence of injuries to her right knee sustained in a fall on September 1, 2009, and damages resulting from a stroke on November 29, 2011. Following the stroke, the Applicant has not been able to work. She has trouble speaking, spelling and walking without a cane or walker. She is only able to walk a little distance before her knee gives out on her and as a result, she is continuously falling. She drags her foot when walking and requires assistance in order to shower, bathe, eat, go for walks and/or do housework. As noted, she has a significant speech impairment – she stutters and therefore has trouble expressing her thoughts.<br>[9] Therefore, the issue is not whether she is severely disabled now; the issues are whether she was severely disabled as of December 1997, and whether she has remained so continuously.<br>[10] On April 19, 2011, the Applicant’s claim for disability benefits was denied [Denial Letter]. On January 18, 2012, her claim was again denied after reconsideration by CPP staff [Reconsideration Denial Letter]. The Denial Letter included a list of documents that had been reviewed and considered and specific factors that were considered in coming to a decision. It provided the following as the basis for denial:<br>We recognize you have identified limitations resulting from your knee injury and we realize that you may not have been able to do labour intensive work since 2009. However, we concluded that your condition did not start until 2009 and this would not have any effect or [sic] an ability to work in December 1997.<br>[11] She appealed to the SST-GD.<br>[12] One relevant factor in the Applicant’s appeal was her work experience. Her contact with the workplace between 1979 and 2011 was minimal. She only made work-related contributions to CPP in 2 of those 32 years, namely 2007 and 2008. She did not work enough in either 2006 or 2010 to warrant CPP premiums, and the small premium she paid in 2006 ($60.53) was refunded. She made no payment in 2010. According to the Respondent’s Record, she worked six months at Tim Hortons’s in Windsor, Nova Scotia, three weeks at Baskin Robbins and two months at Swiss Chalet in Sackville, Nova Scotia. As I mentioned earlier, she also did some babysitting a long time ago.<br>[13] The Applicant’s appeal to the SST-GD was a paper appeal. In other words, although the Applicant might have had a de novo hearing, the matter proceeded without a hearing on the basis of a file review. The Applicant was given notice that the SST-GD intended to conduct a paper appeal; she was invited to comment and submit additional material, but took no position in that regard. In its decision, the SST-GD explained its reasons for conducting a paper appeal:<br>[2] The hearing of this appeal was by decision on the record for the following reasons:<br>a) The issues under appeal are not complex;<br>b) There are no gaps in the information in the file or need for clarification;<br>c) Credibility is not a prevailing issue;<br>d) The form of hearing respects the requirement under the Social Security Tribunal Regulations to proceed as informally and quickly as circumstances, fairness and natural justice permit.<br>[14] I pause to note that the paper record before the SST-GD contained a notation by CPP staff, dated September 28, 2011, that the Applicant had a “significant speech impairment.” The same note states that: “[H]er speech impediment prevents her from doing phone work as well as her education.” The decision by the SST-GD to proceed without an oral hearing does not refer to this notation, nor to the Applicant’s speech impairment.<br>[15] Based on its paper review, the SST-GD denied the Applicant’s appeal. The SST-GD found that the Applicant failed to prove, on a balance of probabilities, that she had a severe and prolonged disability on or before her minimum qualifying period (MQP) of December 31, 1997.<br>[16] On the issue of the severity of the Applicant’s disability, the SST-GD stated:<br>[16] There is very little medical evidence prior to the Appellant’s MQP [the date her minimum qualifying period ended, i.e., December 31, 1997]. The evidence on file indicates the Appellant suffered from general medical ailments. The evidence also indicates that the Appellant was able to work for numerous years and attend school after her MQP. Her education and limited work experience may present barriers to employment, but the Tribunal must consider her medical condition as the primary factor.<br>[emphasis added]<br>[17] Although the SST-GD acknowledged that the Applicant was unable to work at the time of its review, it concluded that there was “no evidence to support that [the Applicant] had a severe disability on or before December 31, 1997 that continues to this day.” Because the test for disability under the CPP Act is conjunctive, the Member did not make a finding on the prolonged criterion.<br>[18] The Applicant sought leave to appeal to the SST-AD, which denied her application on October 8, 2015.<br>III. Decision under Review [19] The SST-AD indicated that, in order to succeed on an application for leave to appeal to the SST-AD under the Department of Employment and Social Development Act, SC 2005, c 34 (DESDA), the Applicant must present some arguable ground upon which the proposed appeal might succeed, citing Kerth v Canada (Minister of Development), [1999] FCJ No 1252 (FC) [Kerth]. The SST-AD also cited case law for the proposition that an arguable case is “akin to whether legally an applicant has a reasonable chance of success”: Canada (Minister of Human Resources Development v Hogervorst, 2007 FCA 41; Fancy v Canada (Attorney General), 2010 FCA 63.<br>[20] The SST-AD noted that, pursuant to s. 58(1) of DESDA, there are only three grounds under which an appeal to the SST-AD can be considered:<br>a) the General Division failed to observe a principle of natural justice or otherwise acted beyond or refused to exercise its jurisdiction;<br>b) the General Division erred in law in making its decision, whether or not the error appears on the face of the record; or<br>c) the General Division based its decision on an erroneous finding of fact that it made in a perverse or capricious manner or without regard for the material before it.<br>[21] The SST-AD member made the following findings:<br>[6] The Applicant requested leave to appeal on the basis that she disagreed with the General Division decision. She set out her physical limitations to support this argument. I accept that the Applicant currently has these limitations. The General Division decision correctly stated, however, that in order for the Applicant to receive a Canada Pension Plan disability pension, she had to have been disabled on or before December 31, 1997. It clearly set out the basis for its conclusion that the Applicant was not disabled at that time.<br>[7] The Applicant’s arguments in support of her request for leave to appeal do not point to any error made by the General Division, or to any breach of the principles of natural justice. Therefore, they are not grounds of appeal under the Act.<br>[8] The Applicant also argued that she needed money to pay for medication. This argument also does not point to any error or to a breach of natural justice by the General Division. Leave to appeal cannot be granted on this basis.<br>[22] The SST-AD found that the Applicant had not presented a ground under s. 58 of DESDA upon which she had a reasonable chance of success and consequently denied her application for leave to appeal.<br>[23] It is from this decision that the Applicant seeks judicial review.<br>IV. Issues [24] This matter raises the following issues:<br>1. Whether the SST-AD member’s finding that the Applicant did not present a ground of appeal with a reasonable chance of success under s. 58 of DESDA was reasonable?<br>2. Whether there is an arguable issue under any of the grounds provided in s. 58(1) of DESDA?<br>3. Whether the Member acted unreasonably in finding that there was no reasonable chance for success pursuant to s. 58(2) of DESDA, considering the evidence provided by the Applicant and the law surrounding the definition of “severe”?<br>V. Standard of Review [25] In Dunsmuir v New Brunswick, 2008 SCC 9 at paras 57, 62 [Dunsmuir], the Supreme Court of Canada held that a standard of review analysis is unnecessary where “the jurisprudence has already determined in a satisfactory manner the degree of deference to be accorded with regard to a particular category of question.” The Respondent correctly submits that the decision by the SST-AD granting or refusing leave to appeal should be reviewed in this Court on the reasonableness standard: Tracey v Canada (Attorney General), 2015 FC 1300 at para 17; Canada (Attorney General) v Hoffman, 2015 FC 1348 at para 27. In addition, Canada (Attorney General) v O’Keefe, 2016 FC 503 at para 17 indicates that “substantial deference” should be given to the SST-AD.<br>[26] In Dunsmuir at para 47, the Supreme Court of Canada explained what is required of a court reviewing on the reasonableness standard of review:<br>A court conducting a review for reasonableness inquires into the qualities that make a decision reasonable, referring both to the process of articulating the reasons and to outcomes. In judicial review, reasonableness is concerned mostly with the existence of justification, transparency and intelligibility within the decision-making process. But it is also concerned with whether the decision falls within a range of possible, acceptable outcomes which are defensible in respect of the facts and law.<br>VI. Relevant Provisions [27] DESDA governs the operation of the Social Security Tribunal. The grounds for appeal are specifically set out in s. 58(1) of DESDA. The grounds for granting leave to appeal are set out in s. 58(2):<br>Grounds of appeal<br>Moyens d’appel<br>58 (1) The only grounds of appeal are that<br>58 (1) Les seuls moyens d’appel sont les suivants :<br>(a) the General Division failed to observe a principle of natural justice or otherwise acted beyond or refused to exercise its jurisdiction;<br>a) la division générale n’a pas observé un principe de justice naturelle ou a autrement excédé ou refusé d’exercer sa compétence;<br>(b) the General Division erred in law in making its decision, whether or not the error appears on the face of the record; or<br>b) elle a rendu une décision entachée d’une erreur de droit, que l’erreur ressorte ou non à la lecture du dossier;<br>(c) the General Division based its decision on an erroneous finding of fact that it made in a perverse or capricious manner or without regard for the material before it.<br>c) elle a fondé sa décision sur une conclusion de fait erronée, tirée de façon abusive ou arbitraire ou sans tenir compte des éléments portés à sa connaissance.<br>Criteria<br>Critère<br>(2) Leave to appeal is refused if the Appeal Division is satisfied that the appeal has no reasonable chance of success.<br>(2) La division d’appel rejette la demande de permission d’en appeler si elle est convaincue que l’appel n’a aucune chance raisonnable de succès.<br>Decision<br>Décision<br>(3) The Appeal Division must either grant or refuse leave to appeal.<br>(3) Elle accorde ou refuse cette permission.<br>[28] The requirements for disability benefits are set out in sections 42 and 44 of the CPP Act. Subsection 44(1)(b)(ii) is referred to as the ‘late applicant provision’ and applies to the Applicant in this case. Under section 44(1)(b)(ii), a disability pension may be paid to a contributor:<br>• Who has not reached 65 years of age;<br>• To whom no retirement pension is payable;<br>• Who is disabled; and,<br>• Who is a contributor to whom a disability pension would have been payable at the time the contributor is deemed to have become disabled if an application for a disability pension had been received before the contributor’s application for a disability pension was actually received.<br>[29] Subsection 42(2) sets out when a person is deemed disabled. A person is considered disabled when they have a “severe and prolonged” mental or physical disability. A disability is considered “severe” when it renders the person incapable regularly of pursuing any substantially gainful occupation: CPP Act s. 42(2)(a)(i). A disability is considered “prolonged” where it is likely to be long continued and of indefinite duration or is likely to result in death: CPP Act s. 42(2)(a)(ii). This section is conjunctive; a person must satisfy both the “severe” and “prolonged” criteria in order to be found disabled within the meaning of the CPP Act. If they fail to satisfy one of the two criteria, the other need not be assessed. Paragraph 42(2)(b) puts a temporal limit on when a person may be deemed disabled.<br>VII. Analysis [30] In my respectful view, judicial review should be granted in this case.<br>[31] I say this because, on the critical issue of the severity of the Applicant’s disability, the SST-GD misapprehended critical and central evidence concerning the Applicant’s attachment to the workplace, thereby erring in law and basing its decision on an erroneous finding of fact without regard for the material before it, which in my respectful opinion are both bases upon which the SST-AD acting reasonably ought to have granted leave to appeal.<br>[32] This critical misapprehension occurs in the following passage of the reasons of the SST-GD:<br>The evidence also indicates that the Appellant was able to work for numerous years and attend school after her MQP.<br>[emphasis added]<br>[33] The case turned on the Applicant’s employability, that is, her ability to work. This finding is of central importance because it misstates the nature of the Applicant’s ability to work, and does so in a manner that is not defensible on the record because it is contrary to the record.<br>[34] There was in fact no evidence that the Applicant was able to work for a single continuous year, let alone the “numerous years” found by the SST-GD. The facts of this case do not support the finding that she “was able to work for numerous years”.<br>[35] Indeed, the record shows that over the relevant 32 year period (1979 to 2011), the Applicant’s attachment to the workforce was extremely limited: her short term work in 2007 (in Newfoundland) and 2008 (in Nova Scotia) and very little else except babysitting many years ago in Newfoundland. In my respectful view, the SST-GD’s conclusion regarding the Applicant’s workforce attachment was not supported by the evidence before it. The decision is based on a misapprehension of the evidence; in addition, in this central respect, there is no evidence to support it which is an error of law.<br>[36] In my respectful view, this misapprehension of the evidence, and the absence of evidentiary support, reasonably meets the test set out in paragraph 58(1)(c) of DESDA which provides that a ground of appeal exists where the SST-GD “based its decision on an erroneous finding of fact that it made in a perverse or capricious manner or without regard for the material before it”. This finding also constituted an error of law per paragraph 58(1)(b) of DESDA. In my view, the Applicant therefore had two arguable grounds upon which her proposed appeal might succeed per Kerth; the Applicant has a reasonable chance of success on these grounds.<br>[37] In my view, a proper consideration of the evidence may have led to a different outcome, namely the grant of leave to appeal with the possible result that the SST-AD would refer the matter to the General Division for redetermination pursuant to s. 59(1) of DESDA, or grant other relief.<br>[38] As a consequence, the SST-AD’s decision was not reasonable because it was not justified on the facts and law, as Dunsmuir requires. In my view, this aspect of the decision’s unreasonableness is sufficient basis on which to grant judicial review.<br>[39] I wish to add that the Applicant, during the hearing before me, said that she was unable to obtain work because of her speech impediment, which, as was noted by CPP staff, is a significant impairment. She advised that employers she contacted declined to hire her because her speech impediment would be disruptive to other staff and upsetting to customers. She stated she was not wanted because of the speech impediment she was born with. She said she could not even get employment in a back room of a chain restaurant because of her speech impediment. She says she was dismissed on account of her speech impairment. She challenges one record suggesting otherwise: an employer said she ceased working because she moved and, although she had moved, it was only from Sackville to Bedford, which she said is a 20 minute drive.<br>[40] The jurisprudence, as the SST-GD acknowledged, establishes that the ‘severity’ criteria for CPP disability pension purposes must be assessed in a “real world” context: Villani v Canada (A.G.), 2001 FCA 248, which entails keeping in mind factors such as age, level of education, language proficiency and past work, life experience and, importantly, employability.<br>[41] In my view, in making these verbal submissions to the Court at the hearing, the Applicant raises her “real world” considerations which, if accepted, might entitle her to the disability pension she seeks because these submissions speak directly to the core issue of her employability. Villani requires consideration of the “real world” matters of her significant speech impediment, and employability which may be related, in the assessment of her alleged severe disability.<br>[42] The Federal Court of Appeal describes the “real world” approach. In the words of Isaac, J.A. (as he then was):<br>[33] The “real world” approach was first adopted by the Board in Edward Leduc v. Minister of National Health and Welfare, CCH Canadian Employment Benefits and Pension Guide Reports, Transfer Binder 1986-1992 at ¶ 8546, pp. 6021-6022 (January 29, 1988). In that case, the Board found for the applicant on the following basis:<br>The Board is advised by medical authority that despite the handicaps under which the Appellant is suffering, there might exist the possibility that he might be able to pursue some unspecified form of substantially gainful employment. In an abstract and theoretical sense, this might well be true. However, the Appellant does not live in an abstract and theoretical world. He lives in a real world, people [sic] by real employers who are required to face up to the realities of commercial enterprise. The question is whether it is realistic to postulate that, given all of the Appellant’s well documented difficulties, any employer would even remotely consider engaging the Appellant. This Board cannot envision any circumstances in which such might be the case. In the Board’s opinion, the Appellant, Edward Leduc, is for all intents and purposes, unemployable.<br>[43] The Federal Court of Appeal in Villani requires that the SST-GD and SST-AD interpret and apply the CPP Act in a large and liberal manner: paragraph 27 of Villani states:<br>In Canada, courts have been especially careful to apply a liberal construction to so-called ‘social legislation.’ In Rizzo & Rizzo Shoes Ltd. (Re), [1998] 1 S.C.R. 27 at para 36, the Supreme Court emphasized that benefits-conferring legislation ought to be interpreted in a broad and generous manner and that any doubt arising form the language of such legislation ought to be resolved in favour of the claimant…. has been adopted in a number of Supreme Court decisions dealing with the Unemployment Insurance Act, 1971.<br>[44] At paragraph 28, Villani also notes that the CPP Act is benefits-conferring legislation, and at para 29 states that any ambiguity flowing from its words must be resolved in favour of a claimant for disability benefits. Also of importance for the case at bar is that Villani requires that the proper application of the severity test involves consideration of the applicant’s employability:<br>44 In my respectful view, the Board has invoked the wrong legal test for disability insofar as it relates to the requirement that such disability must be "severe". The proper test for severity is the one that treats each word in the definition as contributing something to the statutory requirement. Those words, read together, suggest that the severity test involves an aspect of employability.<br>[45] The failure to reasonably determine the Applicant’s workforce attachment means that her Villani real-world ‎assessment was incomplete at best. This is further reason why judicial review must be granted. The Applicant had a statute-established right, supported by the case law, to a more comprehensive disability review that considers her employability in the “real world” in which she lived and lives. In my view, she did not have such a review.<br>[46] In this connection, I doubt very much that a proper Villani assessment may take place without a de novo hearing before the SST-GD given her limited education and limited ability to make written representations, her speech impediment as documented by CPP staff, coupled with the difficulty she has expressing her thoughts.<br>[47] While the Applicant did not explicitly raise these real world considerations in her written filings, they certainly were on the record as a result of her discussions with CPP staff. As outlined above, the paper record reviewed by the SST-GD contained a CPP staff member’s note that the Applicant had a “significant speech impairment”. In addition, to reiterate, the same CPP staff stated that: “[H]er speech impediment prevents her from doing phone work as well as her education.”<br>[48] Importantly, this Applicant is still in the system and in my view should have an opportunity to have these considerations addressed; they are important to her, they were raised on the record, but were not considered either by the SST-GD, or by the SST-AD. In my respectful view, it is not safe to leave them unaddressed.<br>[49] In my view, the Applicant’s “real world” issues and employability are best assessed in a de novo appeal before the SST-GD.<br>[50] I am concerned as all parties must be with delay in resolving the Applicant’s rights. Her significant speech impediment and “real world” situation and employability were first documented by CPP staff more than five years ago: the relevant CPP’s Development Contact Record is dated September 28, 2011. Given this and the importance of bringing this matter to a resolution, and in light of the fact that the Applicant is now disabled, and considering subsection 18.3(3) of the Federal Courts Act, I considered but decided against directing that the SST-AD cause the SST-GD to proceed with a fresh appeal de novo so that the Applicant’s real world employability may be assessed as required by Villani, together with other issues the Applicant may raise. I decline to do so because this is a matter for the SST-AD to determine.<br>VIII. Conclusion [51] Judicial review is granted.<br>IX. Costs [52] The Respondent did not seek costs; in my view this is not a case for costs.<br>X. Procedural Note – Style of Cause [53] The Respondent correctly requests that the style of cause in this matter be amended to show the respondent as the Attorney General of Canada. The Applicant consents and therefore it is so ordered, effective immediately.<br>JUDGMENT<br>THIS COURT’S JUDGMENT is that:<br>1. The style of cause is amended to show the Attorney General of Canada as the Respondent, effective immediately.<br>2. Judicial review is granted and the Decision of the SST-AD dated October 8, 2015 is set aside.<br>3. This matter is remitted to a differently constituted SST-AD for redetermination.<br>4. There is no order as to costs.<br>"Henry S. Brown"<br>Judge<br>FEDERAL COURT<br>SOLICITORS OF RECORD<br>DOCKET:<br>T-192-16<br>STYLE OF CAUSE:<br>DAPHNE MURPHY v THE ATTORNEY GENERAL OF CANADA<br>PLACE OF HEARING:<br>Halifax, Nova Scotia<br>DATE OF HEARING:<br>SEPTEMBER 13, 2016<br>JUDGMENT AND reasons:<br>BROWN J.<br>DATED:<br>NOVEMBER 2, 2016<br>APPEARANCES:<br>Daphne Murphy<br>ON HER OWN BEHALF<br>Hasan Junaid<br>For The Respondent<br>SOLICITORS OF RECORD:<br>- NIL -<br>self-represented Applicant<br>William F. Pentney<br>Deputy Attorney General of Canada<br>Department of Justice<br>ESDC Legal Services<br>Gatineau, Quebec<br>For The Respondent<br></code> | <code>cluster: FACTS: The person concerned, a 58-year-old woman from Gander, Newfoundland, applied for Canada Pension Plan (CPP) disability benefits. She had a significant speech impairment and was unable to work due to a stroke she suffered in 2011 and a knee injury she sustained in 2009. She had a limited education and work experience, with only a few short-term jobs between 1979 and 2011. Her application was initially denied in 2011 and again in 2012 after reconsideration. She appealed the decision to the Social Security Tribunal – General Division (SST-GD), which conducted a paper appeal and denied her application in 2015. The SST-GD found that she failed to prove that she had a severe and prolonged disability on or before December 31, 1997, the minimum qualifying period (MQP) for CPP disability benefits. The person concerned then sought leave to appeal to the Social Security Tribunal – Appeal Division (SST-AD), which denied her application in 2015. She subsequently applied for judicial review of the SST-AD's decision.</code> | <code>cluster: CONCLUSION: In conclusion, the court's decision in this case was based on the principle that the SST must interpret and apply the CPP Act in a liberal and generous manner. The court found that the SST-GD and SST-AD had failed to apply this principle in the person concerned's case, and that their decisions were therefore unreasonable. The court's decision was also based on the principle that the person concerned had a right to a more comprehensive disability review that considers her employability in the real world. The court's conclusion was that the SST-AD's decision was not reasonable and that the matter should be remitted to a differently constituted SST-AD for redetermination.</code> | | <code>cluster: CONCLUSION: Altamirano v. Canada (Citizenship and Immigration)<br>Court (s) Database<br>Federal Court Decisions<br>Date<br>2023-07-19<br>Neutral citation<br>2023 FC 989<br>File numbers<br>IMM-4441-22<br>Decision Content<br>Date: 20230719<br>Docket: IMM-4441-22<br>Citation: 2023 FC 989<br>Ottawa, Ontario, July 19, 2023<br>PRESENT: The Honourable Mr. Justice Ahmed<br>BETWEEN:<br>JOEL MARTINEZ ALTAMIRANO<br>EUSEBIA ROSALIA REYES LUNA<br>ABAD GILBERTO MORA REYES AZUCENA MORA REYES GAEL MARTINEZ MORA<br>Applicants<br>and<br>THE MINISTER OF CITIZENSHIP AND IMMIGRATION<br>Respondent<br>JUDGMENT AND REASONS<br>I. Overview [1] The Applicants seek judicial review of a decision of the Refugee Appeal Division (“RAD”) dated April 26, 2022, confirming the determination of the Refugee Protection Division (“RPD”) that the Applicants are neither Convention refugees nor persons in need of protection under sections 96 and 97(1) of the Immigration and Refugee Protection Act, SC 2001, c 27 (“IRPA”).<br>[2] The RAD upheld the RPD’s refusal of the refugee claim on the basis that the Applicants have a viable internal flight alternative (“IFA”) in Merida, Mexico.<br>[3] The Applicants submit that the RAD mischaracterized the Applicants’ nexus to one of the Convention grounds enumerated under section 2(1) of IRPA, and conducted an unreasonable assessment of the IFA in light of the Applicants’ evidence.<br>[4] For the reasons that follow, I find that the RAD’s decision is reasonable. This application for judicial review is therefore dismissed.<br>II. Facts<br>A. The Applicants [5] Joel Martinez Altamirano (the “Principal Applicant”), his wife, Azucena Mora Reyes (the “Associate Applicant Azucena”), and their child, Gael Martinez Mora (the “Minor Applicant”), are citizens of Mexico. The Principal Applicant’s mother-in-law, Eusebia Rosalia Reyes Luna (the “Associate Applicant Eusebia”) and brother-in-law, Abad Gilberto Mora Reyes (the “Associate Applicant Abad”), are also citizens of Mexico.<br>[6] The Associate Applicant Eusebia and her four sons—Jorgé, Ulises, Neftali, and the Associate Applicant Abad—owned a butcher shop in Tehuacán, Puebla, Mexico. The Associate Applicant Azucena assisted at the shop on weekends.<br>[7] In August 2017, the Applicants claim that Neftali received a phone call from a member of the Jalisco New Generation Cartel (“CJNG”), demanding protection money to continue operating the family business. Neftali hung up the call.<br>[8] In September 2017, the Associate Applicant Abad allegedly received a threatening phone call from a CJNG commander. The Associate Applicant Abad travelled to Canada in November 2017 due to stress associated with this call, but returned to Mexico in December 2018.<br>[9] In June 2018, Jorgé allegedly received a phone call from a CJNG member demanding money. The Applicants claim that this phone call prompted Jorgé to travel to Canada in June 2018. He made a claim for refugee protection in March 2019, but later withdrew his claim.<br>[10] On January 28, 2019, the Associate Applicant Eusebia allegedly received a phone call from the CJNG, informing her that they had kidnapped her son, Ulises, and demanding a ransom of 1 million pesos. The Applicants claim that they paid the CJNG a ransom of 200,000 pesos. When the money was paid, CJMG released Ulises on January 31, 2019, after which he relocated to San Quintin, Baja California, Mexico, where he currently resides.<br>[11] In February 2019, the Associate Applicant Azucena allegedly began receiving phone calls from unknown numbers. On March 22, 2019, the Associate Applicant Azucena, the Principal Applicant, and the Minor Applicant arrived in Canada and made claims for refugee protection on arrival. Fearing that he would also be kidnapped, the Associate Applicant Abad returned to Canada in March 2019 and made a claim for refugee protection. In August 2019, the Associate Applicant Eusebia travelled to Canada and made a claim for refugee protection.<br>[12] In October 2019, Jorgé returned to Mexico. Upon his return, he relocated to San Quintin, Baja California, Mexico to reside with his brother, Ulises. Neftali relocated to Laredo, Nuevo Leon. Neftali manages the family business in Puebla from his residence in Baja California.<br>[13] The Applicants testified that Neftali received phone calls from unknown numbers, that there was a suspicious van parked outside the family residence approximately a year after Ulises’ kidnapping, and that their neighbours received phone calls asking for the Associate Applicant Eusebia’s whereabouts.<br>[14] The Applicants’ refugee claims are based on the fear of persecution or harm in Mexico at the hands of the CJNG cartel for failing to pay the total amount demanded as ransom for the kidnapping of Ulises.<br>B. RPD Decision [15] In a decision dated December 30, 2021, the RPD found that the Applicants are neither Convention refugees nor persons in need of protection pursuant to sections 96 and 97 of IRPA, on the basis that they have a viable IFA in Merida.<br>[16] The RPD found that while the Applicants’ allege that they are victims of criminality by the CJNG; there is insufficient evidence to establish a nexus between these acts and any of the enumerated Convention grounds. The RPD therefore concluded that the Applicants’ claims must be assessed under section 97(1) of IRPA.<br>[17] The RPD found that the Applicants have a viable IFA in Merida. The test to determine a viable IFA requires that: (1) there is no serious possibility of persecution or risk of harm in the IFA, and (2) it is reasonable in the Applicant’s circumstances to relocate to the IFA (Rasaratnam v Canada (Minister of Employment and Immigration), [1992] 1 FC 706). The second prong of the test places a high evidentiary burden on the Applicant to demonstrate that relocation to the IFA would be unreasonable (Ranganathan v Canada (Minister of Citizenship and Immigration), 2003 FC 1367) (“Ranganathan”).<br>[18] On the first prong of the test, the RPD first noted that Mexico is a large country that allows for freedom of movement and a right to travel within the country. The RPD found that the National Documentation Package (“NDP”) for Mexico provided mixed information about whether the CJNG has a presence or influence in Yucatan. According to the NDP, the CJNG is a “national cartel” and the “most dangerous and largest” cartel in Mexico. Although the NDP states that the CJNG has presence throughout Mexico, it also states that the CJNG is not present in Merida or anywhere in Yucatan.<br>[19] However, the RPD found no reliable evidence that the CJNG has attempted to locate the Applicants since they left Mexico, Ulises or Jorgé in San Quintin, or Neftali in Laredo. The RPD also found that the Applicants’ testimony regarding suspicious phone calls from unknown numbers, their neighbours receiving phone calls asking for the Associate Applicant Eusebia’s whereabouts, or a suspicious truck parked outside their family is speculative and does not substantiate their claims that they are being pursued by CJNG. The RPD found no evidence to explain why the cartel would rekindle their interest in the Applicants since they left Mexico.<br>[20] The RPD found that while the NDP evidence confirms that criminal organizations in Mexico extort profitable business owners, the NDP does not specifically state that CJNG members pursue individuals that have failed to pay extortion fees throughout Mexico, only that a criminal organization may be motivated to track someone in Mexico if they owe a large debt, are the object of a vendetta, or refused to work for the gang. The NDP does not specify the amount of unpaid debt that would motivate an organization like CJNG to pursue an individual throughout Mexico. The RPD found that based on this evidence, the Applicants do not have the profile of people whom the CJNG would pursue across Mexico.<br>[21] The RPD accepted the objective evidence stating that the CJNG have various means to locate their targets and that they have sophisticated methods of communicating with one another. However, the RPD found limited evidence to demonstrate that the CJNG would be motivated to pursue the Applicants in Merida, and that the NDP does not mention CJNG using any of its sophisticated methods of communication to track those who refuse to pay extortion fees. For these reasons, the RPD found that on the basis of the available evidence, the Applicants do not face a risk of harm or danger of torture at the hands of the CJNG in Merida.<br>[22] At the second prong of the test for an IFA, the RPD noted the high threshold that must be met for the Applicants to demonstrate that relocation to the proposed IFA is unreasonable, requiring proof of adverse conditions that would jeopardize their lives and safety, such that relocation would be unduly harsh (Ranganathan at para 15; Thirunavukkarasu v Canada (Minister of Employment and Immigration), [1994] 1 FC 589 at 598 (CA)).<br>[23] The RPD acknowledged the Applicants’ contention that relocating their business in Merida, once again making them vulnerable to extortion. The RPD found that this is a generalized risk faced by all business in Mexico and does not preclude relocation. The RPD further found insufficient evidence to show that the Applicants could not find other employment opportunities, given their individual employment experience. The RPD noted that the Applicants are all educated and although they may encounter difficulties in finding employment and housing in Merida, their past experiences would help them to find work and accommodation. The RPD further noted that women face gender-based discrimination in Yucatan, but found that relocation is not unreasonable for both the Associate Applicants Azucena and Eusebia, considering their individual circumstances and their history of gainful employment.<br>[24] The RPD ultimately found that the Applicants failed to discharge their burden under the second prong of the IFA test and, in turn, relocation to Merida is reasonable in light of the evidence and circumstances.<br>C. Decision under Review [25] In a decision dated April 26, 2022, the RAD dismissed the Applicants’ appeal and confirmed the RPD’s finding that the Applicants have a viable IFA in Merida.<br>(1) Nexus with a Convention Ground [26] On appeal, the Applicants submitted that the RPD erred in finding that their claims failed to establish a nexus between their fear of persecution and an enumerated Convention ground, specifically in ignoring the ground of political opinion or membership in a particular social group, that group being successful business owners.<br>[27] The RAD found that the Applicants’ particular circumstances do not meet the definitions for either of these enumerated grounds. The RAD noted that the relevant question is whether the agent of persecution considers the Applicants’ conduct to be political or attributes political activities to them (Inzunza v Canada (Employment and Immigration), 1979 CanLII 2530 (FCA)). The RAD found that there is no evidence that the CJNG cartel has viewed the Applicants’ actions as being political in nature and persecuted them on this basis, or that the cartel kidnapped people for political reasons rather than criminal activity for the purpose of obtaining ransom money. The RAD further found that being a business owner does not qualify within the three broad categories of a “particularly social group” outlined in Canada (Attorney General) v Ward, [1993] 2 SCR 689.<br>(2) IFA [28] The RAD found that the RPD erred in its finding with respect to the cartel’s means to locate the Applicants, but that it did not err in finding that the CJNG lacks the motivation to pursue the Applicants in the IFA or that the Applicants failed to demonstrate that relocation to the IFA is unreasonable.<br>[29] In assessing the cartel’s means to locate the Applicants, the RAD found that, contrary to the RPD’s finding, the NDP evidence demonstrates that the cartel is present in any proposed IFA location and that it would consequently have the means to locate the Applicants throughout Mexico, including in the state of Yucatan. That being said, the RAD agreed with the RPD’s finding that there is insufficient evidence to demonstrate that the CJNG was motivated to pursue the Applicants throughout Mexico. The RAD acknowledged the Applicants’ submission that the CJNG remains motivated to locate them for making only a partial payment of the ransom demanded for the release of Ulises, but ultimately found that their belief in the cartel’s motivation is unsubstantiated by evidence on the record. The RAD cited this Court’s decision in Olusola v Canada (Citizenship and Immigration), 2020 FC 799 (“Olusola”) for the proposition that while a claimant’s sworn testimony creates the presumption of truthfulness, this is “not a presumption that everything the witness believes to be true, but has no direct knowledge of, is actually true” (Olusola at para 25).<br>[30] The RAD further noted that the CJNG’s threatening demands and kidnapping of Ulises were actions associated with the Applicants’ successful business, rather than borne out of a personal vendetta. Therefore, the current operation of the butchery business by Neftali would result in Neftali becoming the target of the cartel’s demands. However, the Applicants testified that they are unaware of any attempts by the cartel to contact their family members in Mexico. The RAD found that the lack of evidence demonstrating the cartel’s efforts to pursue the Applicants for their continuing business, specifically those family members who have relocated to other parts of Mexico, is indicative of a lack of motivation to pursue the Applicants in the proposed IFA.<br>[31] On appeal, the Applicants submitted that the RPD erred in its assessment of the second prong of the IFA test, concerning whether the Applicants’ relocation to the proposed IFA is reasonable. In support of this submission, the Applicants cited this Court’s decisions in Zaytoun v Canada (Citizenship and Immigration), 2014 FC 939 (“Zaytoun”) and Cruz Martinez v Canada (Citizenship and Immigration), 2008 FC 399 (“Cruz Martinez”).<br>[32] The RAD found that the Applicants’ circumstances can be differentiated from those in Zaytoun because unlike the applicant in that case, the Applicants are undistinguishable from a majority of the population by religion or by surname. The RAD acknowledged the Applicants’ reliance on Cruz Martinez for the proposition that the decision-maker must demonstrate that an IFA is qualitatively different from parts of the country where there is a reasonable chance of persecution. However, the RAD found that Cruz Martinez does not displace the burden on an applicant to establish that relocation to the proposed IFA is unreasonable. The RAD found that the RPD reasonably assessed the issue of the cartel’s motivation and was not required to demonstrate that the proposed IFA is qualitatively different than other regions of Mexico because the burden of proof remains with the Applicants.<br>[33] Noting that the Applicants made no further submissions on appeal concerning the second prong of the IFA test, the RAD agreed with the RPD’s finding that the Applicants’ failed to discharge their burden to establish that relocation to Merida would be unreasonable in their circumstances. For these reasons, the RAD dismissed the appeal and upheld the RPD’s decision.<br>III. Issue and Standard of Review<br>[34] The sole issue is whether the RAD’s decision is reasonable.<br>[35] The standard of review is not disputed. The parties agree that the applicable standard of review is reasonableness (Canada (Minister of Citizenship and Immigration) v Vavilov, 2019 SCC 65 at paras 16–17, 23–25) (“Vavilov”). I agree.<br>[36] Reasonableness is a deferential, but robust, standard of review (Vavilov at paras 12-13). The reviewing court must determine whether the decision under review, including both its rationale and outcome, is transparent, intelligible and justified (Vavilov at para 15). A reasonable decision is one that is based on an internally coherent and rational chain of analysis and that is justified in relation to the facts and law that constrain the decision-maker (Vavilov at para 85). Whether a decision is reasonable depends on the relevant administrative setting, the record before the decision-maker, and the impact of the decision on those affected by its consequences (Vavilov at paras 88-90, 94, 133-135).<br>[37] For a decision to be unreasonable, the applicant must establish the decision contains flaws that are sufficiently central or significant (Vavilov at para 100). Not all errors or concerns about a decision will warrant intervention. A reviewing court must refrain from reweighing evidence before the decision-maker, and it should not interfere with factual findings absent exceptional circumstances (Vavilov at para 125). Flaws or shortcomings must be more than superficial or peripheral to the merits of the decision, or a “minor misstep” (Vavilov at para 100; Canada (Citizenship and Immigration) v Mason, 2021 FCA 156 at para 36).<br>IV. Analysis<br>[38] The Applicants submit that the RAD erroneously found that their claims do not establish a nexus with a Convention ground and that they have a viable IFA in Merida, rendering the decision unreasonable.<br>[39] In my view, the RAD did not err in either of these aspects. It reasonably found that the Applicants’ claims do not establish a nexus with a Convention ground and conducted a reasonable analysis of the IFA.<br>A. Nexus with a Convention Ground [40] The Applicants submit that the RAD erred in finding insufficient evidence to establish that the cartel’s actions were motivated by an enumerated Convention ground, particularly the ground of membership in a social group. Specifically, the Applicants contend that the RAD mischaracterized the relevant social group in the Applicants’ case as being business owners, when it should have considered the Applicants’ membership in the social group of people who have failed to pay ransom to a criminal organization.<br>[41] The Applicants rely on Loyo de Xicara v Canada (Citizenship and Immigration), 2013 FC 593 (“Loyo de Xicara”), where this Court found that the RPD unreasonably determined that “a personalized risk or threat loses this characteristic based on the mere fact that the criminal conduct in question is common in a given country” (at para 24). The Applicants also rely on Benegas v Canada (Citizenship and Immigration), 2015 FC 45 (“Benegas”), where this Court found that the RPD unreasonably concluded that the attacks against the applicant were not motivated by his political opinion, without regard to the fact that the applicant can be easily identified as a resistor to recruitment by his visible scars from previous attacks (at para 34).<br>[42] The Respondent submits that the Applicants’ submissions on this issue lack merit because the Applicants testified that although they paid less than the amount that was demanded as ransom for the release of Ulises, it nonetheless secured his release. The Respondent submits that given these facts, the Applicants fail to show that they should have been assessed as part of the social group of people who failed to pay ransom to the CJNG.<br>[43] I first note that the Applicants’ submissions do not expand on how the factual scenarios or legal findings in either Loyo de Xicara or Benegas is applicable to their circumstances. The RAD’s reasons do not state that there is no nexus between the cartel’s actions and the Convention grounds of political opinion or membership in a social group because the cartel’s actions are common and therefore constitute a generalized risk.<br>[44] I do not find that the RAD erred by failing to consider the nexus between the Applicants’ claims and their membership in the social group of people who have not paid the full amount of ransom demanded by criminal organizations. The Applicants’ evidence reveals that although they did not pay the full amount, the payment ultimately secured the release of Ulises. It is reasonable for the RAD not to consider the potential nexus between the Applicants’ claims and their membership in the social group of those who do not pay ransom amounts as demanded. This is also reasonable in light of the minimal evidence proffered by the Applicants to demonstrate that they, or other members of their family in Mexico, have been pursued on the basis of their failure to pay the full ransom amount.<br>[45] For these reasons, I do not find that the RAD erred in assessing in whether the cartel’s actions were motivated by the Applicants’ membership in the social group of business owners and, in turn, finding that their claims do not establish a nexus with a Convention ground.<br>B. IFA [46] The Applicants submit that the RAD’s assessment of the IFA is unreasonable on two grounds: 1) in its finding that the CJNG cartel lacks the motivation to pursue the Applicants in the proposed IFA and 2) in its finding that their relocation to the proposed IFA is reasonable in the Applicants’ circumstances.<br>[47] The Applicants submit that the RAD erred in finding that there is no evidence to support the cartel’s ongoing interest in and pursuit of the Applicants throughout Mexico. The Applicants note that the Associate Applicant Abad testified that members of the family have received phone calls from unknown numbers. They submit that the mere fact that Ulises was released following the ransom payment does not reasonably lead to the finding that the cartel is unmotivated to pursue the Applicants or that relocation is reasonable in the Applicants’ circumstances.<br>[48] The Respondent submits that the Applicants’ submissions on the IFA issue amount to a request that this Court reweigh the evidence. The Respondent submits that the RAD reasonably found that the Applicants proffered insufficient evidence to demonstrate that the CJNG had a continued interest in them or were still motivated to pursue them throughout Mexico. The Respondent contends that the RAD based this finding on a reasonable assessment of the evidence, which shows that the family business remains operational, that the Applicants are unaware of attempts by the cartel to contact any family members associated with the business, and that their family members have relocated within Mexico without being pursued. The Respondent submits that the RAD reasonably found that the Applicants failed to meet their onus to establish that they would face a risk in the IFA or that relocation would be unreasonable.<br>[49] I agree. I note that the Applicants’ submissions on these alleged errors are largely vague and unclear. What remains of their submissions appears to request that this Court reassess the evidence, which is not this Court’s role on review (Vavilov at para 125). Their allegation that the RAD unreasonably assessed the cartel’s motivation to pursue them and the reasonableness of their relocation to Merida is unsupported by references to the decision or clear evidence of a reviewable error. A review of the RAD’s reasons reveals a thorough and reasonable assessment of the facts that is responsive to the evidentiary record. The Applicants are unable to demonstrate that the CJNG attempted to contact or pursue them since they paid a portion of the ransom and Ulises was released. There is no evidence that their family members in Mexico, who are associated with the continuing business and relocated within Mexico, were pursued.<br>[50] In the absence of this evidence, the RAD’s reasons exhibit a clear line of analysis to arrive at the conclusion that the cartel lacks the motivation to pursue them in the IFA, supported by the factual and legal constrains (Vavilov at paras 99, 102). Given the minimal evidence provided, the RAD reasonably found that the Applicants failed to establish that relocation to Merida would be unreasonable, and provides clear and cogent reasons for this finding. For these reasons, I find that the RAD reasonably assessed the IFA issue in relation to the Applicants’ circumstances and its decision is therefore reasonable.<br>V. Conclusion<br>[51] This application for judicial review is dismissed. The RAD’s decision is reasonable in light of the Applicants’ circumstances and evidence. No questions for certification were raised, and I agree that none arise.<br>JUDGMENT in IMM-4441-22<br>THIS COURT’S JUDGMENT is that:<br>This application for judicial review is dismissed.<br>There is no question to certify.<br>“Shirzad A.”<br>Judge<br>FEDERAL COURT<br>SOLICITORS OF RECORD<br>DOCKET:<br>IMM-4441-22<br>STYLE OF CAUSE:<br>JOEL MARTINEZ ALTAMIRANO, EUSEBIA ROSALIA REYES LUNA, ABAD GILBERTO MORA REYES, AZUCENA MORA REYES AND GAEL MARTINEZ MORA v THE MINISTER OF CITIZENSHIP AND IMMIGRATION<br>PLACE OF HEARING:<br>Toronto, Ontario<br>DATE OF HEARING:<br>March 29, 2023<br>JUDGMENT and reasons:<br>AHMED J.<br>DATED:<br>July 19, 2023<br>APPEARANCES:<br>Khatidja Moloo-Alam<br>For The Applicants<br>Asha Gafar<br>For The Respondent<br>SOLICITORS OF RECORD:<br>Green and Spiegel LLP Barristers and Solicitors Toronto, Ontario<br>For The Applicants<br>Attorney General of Canada Toronto, Ontario<br>For The Respondent<br></code> | <code>cluster: CONCLUSION: The court concluded that the RAD's decision is reasonable in light of the Applicants' circumstances and evidence. The application for judicial review is therefore dismissed.</code> | <code>cluster: SUMMARY: **(1) Facts**<br><br>The Applicants, Joel Martinez Altamirano, his wife Azucena Mora Reyes, and their child Gael Martinez Mora, along with Azucena's mother Eusebia Rosalia Reyes Luna and brother Abad Gilberto Mora Reyes, are Mexican citizens who made claims for refugee protection in Canada. The Applicants claimed to be victims of the Jalisco New Generation Cartel (CJNG) in Mexico, alleging that they were extorted and threatened after failing to pay a ransom for the release of Eusebia's son Ulises, who was kidnapped by the cartel in 2019. The Applicants claimed that they feared persecution or harm in Mexico at the hands of the CJNG cartel if they returned.<br><br>The Refugee Protection Division (RPD) found that the Applicants were not Convention refugees or persons in need of protection under sections 96 and 97 of the Immigration and Refugee Protection Act (IRPA). The RPD determined that the Applicants had a viable internal flight alternative (IFA) in Merida, Mexico, and that relocation to Merida was reasonable in their circumstances.<br><br>The Refugee Appeal Division (RAD) upheld the RPD's decision, finding that the Applicants' claims did not establish a nexus with a Convention ground and that they had a viable IFA in Merida. The RAD found that the Applicants had failed to demonstrate that the CJNG had a continued interest in pursuing them or that relocation to Merida would be unreasonable.<br><br>**(2) Issue**<br><br>The issue before the court is whether the RAD's decision is reasonable. The Applicants submit that the RAD erred in finding that their claims do not establish a nexus with a Convention ground and that they have a viable IFA in Merida.<br><br>**(3) Rule**<br><br>The court applied the standard of review of reasonableness, as established in Canada (Minister of Citizenship and Immigration) v Vavilov, 2019 SCC 65. The court found that the RAD's decision was reasonable in light of the Applicants' circumstances and evidence.<br><br>**(4) Analysis**<br><br>The court analyzed the Applicants' submissions on the nexus with a Convention ground and the IFA issue. The Applicants argued that the RAD erred in finding insufficient evidence to establish that the cartel's actions were motivated by an enumerated Convention ground, particularly the ground of membership in a social group. However, the court found that the RAD's reasons were reasonable and that the Applicants' evidence did not establish a nexus between their claims and the Convention grounds.<br><br>The Applicants also argued that the RAD erred in finding that the CJNG cartel lacks the motivation to pursue them in the proposed IFA and that their relocation to the proposed IFA is reasonable in their circumstances. However, the court found that the RAD's reasons were reasonable and that the Applicants failed to establish that the CJNG had a continued interest in pursuing them or that relocation to Merida would be unreasonable.<br><br>**(5) Conclusion**<br><br>The court concluded that the RAD's decision is reasonable in light of the Applicants' circumstances and evidence. The application for judicial review is therefore dismissed.</code> | | <code>cluster: CONCLUSION: Osipova v. Canada (Citizenship and Immigration)<br>Court (s) Database<br>Federal Court Decisions<br>Date<br>2024-07-05<br>Neutral citation<br>2024 FC 1055<br>File numbers<br>IMM-9267-23<br>Decision Content<br>Date: 20240705<br>Docket: IMM-9267-23<br>Citation: 2024 FC 1055<br>Ottawa, Ontario, July 5, 2024<br>PRESENT: The Honourable Madam Justice Aylen<br>BETWEEN:<br>LIUDMILA OSIPOVA<br>Applicant<br>and<br>THE MINISTER OF CITIZENSHIP AND IMMIGRATION<br>Respondent<br>JUDGMENT AND REASONS<br>[1] The Applicant, a 73-year old mother and grandmother of Russian citizenship, seeks judicial review of a reconsideration decision dated May 26, 2023, made by a Senior Immigration Officer [Officer] at Immigration, Refugees and Citizenship Canada, refusing the Applicant’s application for permanent residence from within Canada on humanitarian and compassionate [H&C] grounds under subsection 25(1) of the Immigration and Refugee Protection Act, SC 2001, c 27 [IRPA].<br>[2] The Applicant asserts that the Officer’s decision was unreasonable on the basis that the Officer: (a) failed to conduct a proper assessment of hardship relating to a potential return to Russia based on the Applicant’s personal characteristics and her establishment in Canada; (b) erred in their assessment of the best interests of the child [BIOC] as they failed to be alert, alive and sensitive to the best interests of the Applicant’s grandchild; and (c) failed to give proper consideration to the evidence provided by the Applicant with respect to adverse country conditions in Russia and the hardship she would face in her home country.<br>[3] The sole issue for determination by this Court is whether the Officer’s decision was reasonable.<br>[4] The parties agree and I concur that the applicable standard of review of an H&C decision is reasonableness [see Kanthasamy v Canada (Citizenship and Immigration), 2015 SCC 61 at para 44 [Kanthasamy]]. When reviewing for reasonableness, the Court must take a “reasons first” approach and determine whether the decision under review, including both its rationale and outcome, is transparent, intelligible and justified [see Mason v Canada (Citizenship and Immigration), 2023 SCC 21 at paras 8, 59]. A reasonable decision is one that is based on an internally coherent and rational chain of analysis and that is justified in relation to the facts and law that constrain the decision-maker [see Canada (Minister of Citizenship and Immigration) v Vavilov, 2019 SCC 65 at paras 15, 85]. The Court will intervene only if it is satisfied there are sufficiently serious shortcomings in the decision such that it cannot be said to exhibit the requisite degree of justification, intelligibility and transparency [see Adeniji-Adele v Canada (Citizenship and Immigration), 2020 FC 418 at para 11].<br>[5] Subsection 25(1) of the IRPA gives the Minister discretion to exempt foreign nationals from the ordinary requirements of that statute and grant permanent resident status in Canada if the Minister is of the opinion that such relief is justified by H&C considerations. An H&C determination under subsection 25(1) of the IRPA is a global one, where all the relevant considerations are to be weighed cumulatively in order to determine if relief is justified in the circumstances. Relief is considered justified if the circumstances would excite in a reasonable person in a civilized community a desire to relieve the misfortunes of another [see Kanthasamy, supra at paras 13, 28; Caleb v Canada (Citizenship and Immigration), 2020 FC 1018 at para 10].<br>[6] While the Applicant has asserted a number of grounds of review, I am satisfied that the Officer’s BIOC analysis was sufficiently flawed so as to render their decision unreasonable.<br>[7] Subsection 25(1) of the IRPA mandates that officers consider the BIOC. In Kanthasamy, the Supreme Court of Canada states the following with respect to the BIOC:<br>[35] The “best interests” principle is “highly contextual” because of the “multitude of factors that may impinge on the child’s best interest”: Canadian Foundation for Children, Youth and the Law v. Canada (Attorney General), 2004 SCC 4 (CanLII), [2004] 1 S.C.R. 76, at para. 11; Gordon v. Goertz, 1996 CanLII 191 (SCC), [1996] 2 S.C.R. 27, at para. 20. It must therefore be applied in a manner responsive to each child’s particular age, capacity, needs and maturity: see A.C. v. Manitoba (Director of Child and Family Services), 2009 SCC 30 (CanLII), [2009] 2 S.C.R. 181, at para. 89. The child’s level of development will guide its precise application in the context of a particular case.<br>[…]<br>[39] A decision under s. 25(1) will therefore be found to be unreasonable if the interests of children affected by the decision are not sufficiently considered: Baker, at para. 75. This means that decision-makers must do more than simply state that the interests of a child have been taken into account: Hawthorne, at para. 32. Those interests must be “well identified and defined” and examined “with a great deal of attention” in light of all the evidence: Legault v. Canada (Minister of Citizenship and Immigration), 2002 FCA 125 (CanLII), [2002] 4 F.C. 358 (C.A.), at paras. 12 and 31; Kolosovs v. Canada (Minister of Citizenship and Immigration), 2008 FC 165 (CanLII), 323 F.T.R. 181, at paras. 9-12.<br>[40] Where, as here, the legislation specifically directs that the best interests of a child who is “directly affected” be considered, those interests are a singularly significant focus and perspective: A.C., at paras. 80-81. […]<br>[8] The BIOC includes “such matters as children’s rights, needs, and best interests; maintaining connections between family members,” among other factors [see Kanthasamy, supra at para 34 citing Agraira v Canada (Public Safety and Emergency Preparedness), 2013 SCC 36 (CanLII), [2013] 2 SCR 559 at para 41]. Although there is no “specific formula” for assessing the BIOC factor, the test above, as articulated in Kanthasamy, must be met [see Motrichko v Canada (Citizenship and Immigration), 2017 FC 516 at para 22 [Motrichko]].<br>[9] The issue, therefore, is whether the interests of the Applicant’s granddaughter were “well identified and defined” by the Officer and examined “with a great deal of attention,” in light of all the evidence. If not, then the Officer’s decision is unreasonable.<br>[10] The evidence before the Officer was that the Applicant had been residing with her daughter and her son-in-law in Canada as a visitor since 2017. When the Applicant submitted her H&C application, her daughter was pregnant. The Applicant updated her application following the birth of her granddaughter in February of 2022; evidence was provided that the Applicant is involved in the upbringing of her infant granddaughter and will take on an increasingly important role in caring for her when her daughter’s maternity leave ends and both parents are working on a full-time basis.<br>[11] The Officer’s reasons for decision related to the best interests of the Applicant’s granddaughter provide, in their entirety, as follows:<br>A factor to be considered in assessing a child’s welfare is the level of dependency between the child and the applicant. With regard to this factor, the applicant submits that during the time she has been present in Canada, she has assisted in the care and upbringing of Sophie. Undoubtedly, the applicant has forged an emotional attachment to her.<br>Notwithstanding, Sophie does not appear to be wholly dependent on the applicant. It would be reasonable to expect that Sophie will continue to live in Canada with her parents as her primary caregivers. While I do not doubt that the interaction and support the applicant has provided to Sophie is of value, there is insufficient objective evidence to establish that the applicant’s return to Russia would compromise Sophie’s best interests.<br>[Emphasis added.]<br>[12] I find that the Officer’s highly generalized BIOC assessment renders the Officer’s decision unreasonable [see Motrichko, supra at para 26]. It was incumbent on the Officer to properly identify and define the granddaughter’s needs and to examine them “with a great deal of attention,” as Kanthasamy requires. The Officer’s BIOC analysis falls short of this standard. As in Chamas v Canada (Citizenship and Immigration), 2021 FC 1352 [Chamas], the Officer never identified what was in the child’s best interest, or how the granddaughter would be affected by the Applicant’s departure. The Officer merely acknowledged that the Applicant has been involved in her granddaughter’s care and upbringing, and that “the [A]pplicant has forged an emotional attachment to her,” without addressing the granddaughter’s attachment to the Applicant. The Officer fails to consider what needs the granddaughter might have, or how the Applicant’s return to Russia might impact the granddaughter. In particular, the emotional and practical hardships the Applicant’s granddaughter would face if the Applicant is forced to leave the country are not addressed in detail, despite there being evidence of hardship on the record [see Motrichko, supra at para 27]. For example, the Applicant’s daughter provided a letter stating that she would be returning to work after her maternity leave and that she needed the Applicant’s help to raise and care for the child. This is a very practical form of support that the Applicant cannot provide from Russia, yet the Officer fails to grapple with this evidence and address whether it is in the best interests of the granddaughter for the Applicant to provide this care.<br>[13] Further, the Officer failed to properly apply the test set out in Kanthasamy by placing undue emphasis on the degree to which the granddaughter depends on the Applicant. The Officer concluded that the granddaughter “does not appear to be wholly dependent on the [A]pplicant,” and that she would “continue to live in Canada with her parents as her primary caregivers.” As Chamas and Motrichko make clear, the fact that an applicant is not a primary caregiver is not determinative. In Motrichko, this Court noted that “the analysis the Officer was called upon to undertake was not whether the grandchildren would manage or survive in the absence of their grandmother but how they would be impacted, both practically and emotionally, by the departure of the [a]pplicant in the particular circumstances of the case” [see Motrichko, supra at para 27]. The same is true here. However, much like in Chamas, the Officer stopped asking what, if any, impact the Applicant’s departure would have on her granddaughter after determining that the Applicant was not her primary caregiver [see Chamas, supra at para 42].<br>[14] The Respondent asserts that while the Applicant and her daughter provided letters stating that the daughter will be returning to work upon completion of her maternity leave, the Applicant failed to provide sufficient evidence to demonstrate that her removal would undermine the granddaughter’s best interests, such as the inability to seek alternative childcare arrangements or the degree of the Applicant’s involvement in her granddaughter’s day-to-day needs. The Respondent asserts that absent this evidence, it was open to the Officer to find that separation between the Applicant and her grandchild alone is insufficient to warrant H&C relief. However, the Respondent’s explanation constitutes an impermissible attempt to supplement the reasons of the Officer [see Ehigiator v Canada (Citizenship and Immigration), 2023 FC 308 at para 53]. Although it was open to the Officer to conclude that the Applicant’s evidence was insufficient because she failed to demonstrate an inability to seek alternative childcare arrangements or the degree of involvement she has in her granddaughter’s day-to-day needs, the Officer did not provide any such justification for their decision.<br>[15] Accordingly, I find that the Officer’s BIOC analysis was unreasonable, which rendered the decision as a whole unreasonable. As such, I need not go on to consider the other grounds of review raised by the Applicant.<br>[16] The application for judicial review is allowed, the decision is set aside and the matter is remitted to a different officer for redetermination. Prior to the redetermination, the Applicant shall be given an opportunity to provide updated submissions and documentation in support of her application.<br>[17] Neither party proposed a question for certification and I agree that none arises.<br>JUDGMENT in IMM-9267-23<br>THIS COURT’S JUDGMENT is that:<br>The application for judicial review is allowed.<br>The decision of the Senior Immigration Officer dated May 26, 2023, refusing the Applicant’s application for permanent residence based on humanitarian and compassionate grounds is set aside and the matter is remitted back to a different officer for redetermination. Prior to the redetermination, the Applicant shall be given an opportunity to provide updated submissions and documentation in support of her application.<br>The parties proposed no question for certification and none arises.<br>“Mandy Aylen”<br>Judge<br>FEDERAL COURT<br>SOLICITORS OF RECORD<br>DOCKET:<br>IMM-9267-23<br>STYLE OF CAUSE:<br>LIUDMILA OSIPOVA v THE MINISTER OF CITIZENSHIP AND IMMIGRATION<br>PLACE OF HEARING:<br>TORONTO, ONTARIO<br>DATE OF HEARING:<br>JULY 4, 2024<br>JUDGMENT and reasons:<br>AyLEN J.<br>DATED:<br>JULY 5, 2024<br>APPEARANCES:<br>John Yoon<br>For The Applicant<br>Eli Lo Re<br>For The Respondent<br>SOLICITORS OF RECORD:<br>Dov Maierovitz Barrister and Solicitor Toronto, Ontario<br>For The Applicant<br>Attorney General of Canada Toronto, Ontario<br>For The Respondent<br></code> | <code>cluster: CONCLUSION: The court allowed the application for judicial review, set aside the decision, and remitted the matter back to a different officer for redetermination. Prior to the redetermination, the person concerned would be given an opportunity to provide updated submissions and documentation in support of her application. The court found that the Officer's BIOC analysis was unreasonable, which rendered the decision as a whole unreasonable, and that the person concerned had raised sufficient grounds for judicial review.</code> | <code>cluster: ISSUES: The sole issue before the court was whether the Officer's decision was reasonable. The person concerned argued that the Officer's decision was unreasonable due to several factors, including a failure to conduct a proper assessment of hardship, an error in assessing the best interests of the child, and a failure to give proper consideration to adverse country conditions in Russia.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 4 - `per_device_eval_batch_size`: 4 - `learning_rate`: 2e-05 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 4 - `per_device_eval_batch_size`: 4 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | |:------:|:----:|:-------------:|:------:| | 0.0296 | 100 | 0.7554 | 0.0647 | | 0.0593 | 200 | 0.0222 | 0.0314 | | 0.0889 | 300 | 0.0359 | 0.0220 | | 0.1185 | 400 | 0.0189 | 0.0175 | | 0.1481 | 500 | 0.024 | 0.0145 | | 0.1778 | 600 | 0.0164 | 0.0112 | | 0.2074 | 700 | 0.0337 | 0.0139 | | 0.2370 | 800 | 0.0141 | 0.0092 | | 0.2667 | 900 | 0.0088 | 0.0106 | | 0.2963 | 1000 | 0.0093 | 0.0106 | | 0.3259 | 1100 | 0.0217 | 0.0111 | | 0.3556 | 1200 | 0.0063 | 0.0095 | | 0.3852 | 1300 | 0.0188 | 0.0116 | | 0.4148 | 1400 | 0.0184 | 0.0078 | | 0.4444 | 1500 | 0.0146 | 0.0084 | | 0.4741 | 1600 | 0.0035 | 0.0073 | | 0.5037 | 1700 | 0.0062 | 0.0089 | | 0.5333 | 1800 | 0.0052 | 0.0058 | | 0.5630 | 1900 | 0.0035 | 0.0070 | | 0.5926 | 2000 | 0.0137 | 0.0057 | | 0.6222 | 2100 | 0.0027 | 0.0056 | | 0.6519 | 2200 | 0.0066 | 0.0059 | | 0.6815 | 2300 | 0.0174 | 0.0067 | | 0.7111 | 2400 | 0.0061 | 0.0054 | | 0.7407 | 2500 | 0.0046 | 0.0053 | | 0.7704 | 2600 | 0.002 | 0.0050 | | 0.8 | 2700 | 0.0086 | 0.0044 | | 0.8296 | 2800 | 0.008 | 0.0045 | | 0.8593 | 2900 | 0.0074 | 0.0039 | | 0.8889 | 3000 | 0.001 | 0.0039 | | 0.9185 | 3100 | 0.0038 | 0.0038 | | 0.9481 | 3200 | 0.0073 | 0.0036 | | 0.9778 | 3300 | 0.0014 | 0.0036 | ### Framework Versions - Python: 3.11.9 - Sentence Transformers: 3.1.1 - Transformers: 4.45.2 - PyTorch: 2.4.1+cu121 - Accelerate: 1.0.1 - Datasets: 3.0.2 - Tokenizers: 0.20.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION", "TRANSLATION" ]
[ "BEAR", "CAS", "MQP" ]
Non_BioNLP
odunola/UAE-Large-VI
odunola
feature-extraction
[ "sentence-transformers", "onnx", "safetensors", "bert", "feature-extraction", "mteb", "sentence_embedding", "feature_extraction", "transformers", "transformers.js", "en", "arxiv:2309.12871", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,702
1,702
92
0
--- language: - en library_name: sentence-transformers license: apache-2.0 tags: - mteb - sentence_embedding - feature_extraction - transformers - transformers.js model-index: - name: UAE-Large-V1 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 75.55223880597015 - type: ap value: 38.264070815317794 - type: f1 value: 69.40977934769845 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 92.84267499999999 - type: ap value: 89.57568507997713 - type: f1 value: 92.82590734337774 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.292 - type: f1 value: 47.90257816032778 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 42.105 - type: map_at_10 value: 58.181000000000004 - type: map_at_100 value: 58.653999999999996 - type: map_at_1000 value: 58.657000000000004 - type: map_at_3 value: 54.386 - type: map_at_5 value: 56.757999999999996 - type: mrr_at_1 value: 42.745 - type: mrr_at_10 value: 58.437 - type: mrr_at_100 value: 58.894999999999996 - type: mrr_at_1000 value: 58.897999999999996 - type: mrr_at_3 value: 54.635 - type: mrr_at_5 value: 56.99999999999999 - type: ndcg_at_1 value: 42.105 - type: ndcg_at_10 value: 66.14999999999999 - type: ndcg_at_100 value: 68.048 - type: ndcg_at_1000 value: 68.11399999999999 - type: ndcg_at_3 value: 58.477000000000004 - type: ndcg_at_5 value: 62.768 - type: precision_at_1 value: 42.105 - type: precision_at_10 value: 9.110999999999999 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 23.447000000000003 - type: precision_at_5 value: 16.159000000000002 - type: recall_at_1 value: 42.105 - type: recall_at_10 value: 91.11 - type: recall_at_100 value: 99.14699999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 70.341 - type: recall_at_5 value: 80.797 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 49.02580759154173 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 43.093601280163554 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 64.19590406875427 - type: mrr value: 77.09547992788991 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 87.86678362843676 - type: cos_sim_spearman value: 86.1423242570783 - type: euclidean_pearson value: 85.98994198511751 - type: euclidean_spearman value: 86.48209103503942 - type: manhattan_pearson value: 85.6446436316182 - type: manhattan_spearman value: 86.21039809734357 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 87.69155844155844 - type: f1 value: 87.68109381943547 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 39.37501687500394 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 37.23401405155885 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 30.232 - type: map_at_10 value: 41.404999999999994 - type: map_at_100 value: 42.896 - type: map_at_1000 value: 43.028 - type: map_at_3 value: 37.925 - type: map_at_5 value: 39.865 - type: mrr_at_1 value: 36.338 - type: mrr_at_10 value: 46.969 - type: mrr_at_100 value: 47.684 - type: mrr_at_1000 value: 47.731 - type: mrr_at_3 value: 44.063 - type: mrr_at_5 value: 45.908 - type: ndcg_at_1 value: 36.338 - type: ndcg_at_10 value: 47.887 - type: ndcg_at_100 value: 53.357 - type: ndcg_at_1000 value: 55.376999999999995 - type: ndcg_at_3 value: 42.588 - type: ndcg_at_5 value: 45.132 - type: precision_at_1 value: 36.338 - type: precision_at_10 value: 9.17 - type: precision_at_100 value: 1.4909999999999999 - type: precision_at_1000 value: 0.196 - type: precision_at_3 value: 20.315 - type: precision_at_5 value: 14.793000000000001 - type: recall_at_1 value: 30.232 - type: recall_at_10 value: 60.67399999999999 - type: recall_at_100 value: 83.628 - type: recall_at_1000 value: 96.209 - type: recall_at_3 value: 45.48 - type: recall_at_5 value: 52.354 - type: map_at_1 value: 32.237 - type: map_at_10 value: 42.829 - type: map_at_100 value: 44.065 - type: map_at_1000 value: 44.199 - type: map_at_3 value: 39.885999999999996 - type: map_at_5 value: 41.55 - type: mrr_at_1 value: 40.064 - type: mrr_at_10 value: 48.611 - type: mrr_at_100 value: 49.245 - type: mrr_at_1000 value: 49.29 - type: mrr_at_3 value: 46.561 - type: mrr_at_5 value: 47.771 - type: ndcg_at_1 value: 40.064 - type: ndcg_at_10 value: 48.388 - type: ndcg_at_100 value: 52.666999999999994 - type: ndcg_at_1000 value: 54.67100000000001 - type: ndcg_at_3 value: 44.504 - type: ndcg_at_5 value: 46.303 - type: precision_at_1 value: 40.064 - type: precision_at_10 value: 9.051 - type: precision_at_100 value: 1.4500000000000002 - type: precision_at_1000 value: 0.193 - type: precision_at_3 value: 21.444 - type: precision_at_5 value: 15.045 - type: recall_at_1 value: 32.237 - type: recall_at_10 value: 57.943999999999996 - type: recall_at_100 value: 75.98700000000001 - type: recall_at_1000 value: 88.453 - type: recall_at_3 value: 46.268 - type: recall_at_5 value: 51.459999999999994 - type: map_at_1 value: 38.797 - type: map_at_10 value: 51.263000000000005 - type: map_at_100 value: 52.333 - type: map_at_1000 value: 52.393 - type: map_at_3 value: 47.936 - type: map_at_5 value: 49.844 - type: mrr_at_1 value: 44.389 - type: mrr_at_10 value: 54.601 - type: mrr_at_100 value: 55.300000000000004 - type: mrr_at_1000 value: 55.333 - type: mrr_at_3 value: 52.068999999999996 - type: mrr_at_5 value: 53.627 - type: ndcg_at_1 value: 44.389 - type: ndcg_at_10 value: 57.193000000000005 - type: ndcg_at_100 value: 61.307 - type: ndcg_at_1000 value: 62.529 - type: ndcg_at_3 value: 51.607 - type: ndcg_at_5 value: 54.409 - type: precision_at_1 value: 44.389 - type: precision_at_10 value: 9.26 - type: precision_at_100 value: 1.222 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 23.03 - type: precision_at_5 value: 15.887 - type: recall_at_1 value: 38.797 - type: recall_at_10 value: 71.449 - type: recall_at_100 value: 88.881 - type: recall_at_1000 value: 97.52 - type: recall_at_3 value: 56.503 - type: recall_at_5 value: 63.392 - type: map_at_1 value: 27.291999999999998 - type: map_at_10 value: 35.65 - type: map_at_100 value: 36.689 - type: map_at_1000 value: 36.753 - type: map_at_3 value: 32.995000000000005 - type: map_at_5 value: 34.409 - type: mrr_at_1 value: 29.04 - type: mrr_at_10 value: 37.486000000000004 - type: mrr_at_100 value: 38.394 - type: mrr_at_1000 value: 38.445 - type: mrr_at_3 value: 35.028 - type: mrr_at_5 value: 36.305 - type: ndcg_at_1 value: 29.04 - type: ndcg_at_10 value: 40.613 - type: ndcg_at_100 value: 45.733000000000004 - type: ndcg_at_1000 value: 47.447 - type: ndcg_at_3 value: 35.339999999999996 - type: ndcg_at_5 value: 37.706 - type: precision_at_1 value: 29.04 - type: precision_at_10 value: 6.192 - type: precision_at_100 value: 0.9249999999999999 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 14.802000000000001 - type: precision_at_5 value: 10.305 - type: recall_at_1 value: 27.291999999999998 - type: recall_at_10 value: 54.25299999999999 - type: recall_at_100 value: 77.773 - type: recall_at_1000 value: 90.795 - type: recall_at_3 value: 39.731 - type: recall_at_5 value: 45.403999999999996 - type: map_at_1 value: 18.326 - type: map_at_10 value: 26.290999999999997 - type: map_at_100 value: 27.456999999999997 - type: map_at_1000 value: 27.583000000000002 - type: map_at_3 value: 23.578 - type: map_at_5 value: 25.113000000000003 - type: mrr_at_1 value: 22.637 - type: mrr_at_10 value: 31.139 - type: mrr_at_100 value: 32.074999999999996 - type: mrr_at_1000 value: 32.147 - type: mrr_at_3 value: 28.483000000000004 - type: mrr_at_5 value: 29.963 - type: ndcg_at_1 value: 22.637 - type: ndcg_at_10 value: 31.717000000000002 - type: ndcg_at_100 value: 37.201 - type: ndcg_at_1000 value: 40.088 - type: ndcg_at_3 value: 26.686 - type: ndcg_at_5 value: 29.076999999999998 - type: precision_at_1 value: 22.637 - type: precision_at_10 value: 5.7090000000000005 - type: precision_at_100 value: 0.979 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 12.894 - type: precision_at_5 value: 9.328 - type: recall_at_1 value: 18.326 - type: recall_at_10 value: 43.824999999999996 - type: recall_at_100 value: 67.316 - type: recall_at_1000 value: 87.481 - type: recall_at_3 value: 29.866999999999997 - type: recall_at_5 value: 35.961999999999996 - type: map_at_1 value: 29.875 - type: map_at_10 value: 40.458 - type: map_at_100 value: 41.772 - type: map_at_1000 value: 41.882999999999996 - type: map_at_3 value: 37.086999999999996 - type: map_at_5 value: 39.153 - type: mrr_at_1 value: 36.381 - type: mrr_at_10 value: 46.190999999999995 - type: mrr_at_100 value: 46.983999999999995 - type: mrr_at_1000 value: 47.032000000000004 - type: mrr_at_3 value: 43.486999999999995 - type: mrr_at_5 value: 45.249 - type: ndcg_at_1 value: 36.381 - type: ndcg_at_10 value: 46.602 - type: ndcg_at_100 value: 51.885999999999996 - type: ndcg_at_1000 value: 53.895 - type: ndcg_at_3 value: 41.155 - type: ndcg_at_5 value: 44.182 - type: precision_at_1 value: 36.381 - type: precision_at_10 value: 8.402 - type: precision_at_100 value: 1.278 - type: precision_at_1000 value: 0.16199999999999998 - type: precision_at_3 value: 19.346 - type: precision_at_5 value: 14.09 - type: recall_at_1 value: 29.875 - type: recall_at_10 value: 59.065999999999995 - type: recall_at_100 value: 80.923 - type: recall_at_1000 value: 93.927 - type: recall_at_3 value: 44.462 - type: recall_at_5 value: 51.89 - type: map_at_1 value: 24.94 - type: map_at_10 value: 35.125 - type: map_at_100 value: 36.476 - type: map_at_1000 value: 36.579 - type: map_at_3 value: 31.840000000000003 - type: map_at_5 value: 33.647 - type: mrr_at_1 value: 30.936000000000003 - type: mrr_at_10 value: 40.637 - type: mrr_at_100 value: 41.471000000000004 - type: mrr_at_1000 value: 41.525 - type: mrr_at_3 value: 38.013999999999996 - type: mrr_at_5 value: 39.469 - type: ndcg_at_1 value: 30.936000000000003 - type: ndcg_at_10 value: 41.295 - type: ndcg_at_100 value: 46.92 - type: ndcg_at_1000 value: 49.183 - type: ndcg_at_3 value: 35.811 - type: ndcg_at_5 value: 38.306000000000004 - type: precision_at_1 value: 30.936000000000003 - type: precision_at_10 value: 7.728 - type: precision_at_100 value: 1.226 - type: precision_at_1000 value: 0.158 - type: precision_at_3 value: 17.237 - type: precision_at_5 value: 12.42 - type: recall_at_1 value: 24.94 - type: recall_at_10 value: 54.235 - type: recall_at_100 value: 78.314 - type: recall_at_1000 value: 93.973 - type: recall_at_3 value: 38.925 - type: recall_at_5 value: 45.505 - type: map_at_1 value: 26.250833333333333 - type: map_at_10 value: 35.46875 - type: map_at_100 value: 36.667 - type: map_at_1000 value: 36.78025 - type: map_at_3 value: 32.56733333333334 - type: map_at_5 value: 34.20333333333333 - type: mrr_at_1 value: 30.8945 - type: mrr_at_10 value: 39.636833333333335 - type: mrr_at_100 value: 40.46508333333333 - type: mrr_at_1000 value: 40.521249999999995 - type: mrr_at_3 value: 37.140166666666666 - type: mrr_at_5 value: 38.60999999999999 - type: ndcg_at_1 value: 30.8945 - type: ndcg_at_10 value: 40.93441666666667 - type: ndcg_at_100 value: 46.062416666666664 - type: ndcg_at_1000 value: 48.28341666666667 - type: ndcg_at_3 value: 35.97575 - type: ndcg_at_5 value: 38.3785 - type: precision_at_1 value: 30.8945 - type: precision_at_10 value: 7.180250000000001 - type: precision_at_100 value: 1.1468333333333334 - type: precision_at_1000 value: 0.15283333333333332 - type: precision_at_3 value: 16.525583333333334 - type: precision_at_5 value: 11.798333333333332 - type: recall_at_1 value: 26.250833333333333 - type: recall_at_10 value: 52.96108333333333 - type: recall_at_100 value: 75.45908333333334 - type: recall_at_1000 value: 90.73924999999998 - type: recall_at_3 value: 39.25483333333333 - type: recall_at_5 value: 45.37950000000001 - type: map_at_1 value: 24.595 - type: map_at_10 value: 31.747999999999998 - type: map_at_100 value: 32.62 - type: map_at_1000 value: 32.713 - type: map_at_3 value: 29.48 - type: map_at_5 value: 30.635 - type: mrr_at_1 value: 27.607 - type: mrr_at_10 value: 34.449000000000005 - type: mrr_at_100 value: 35.182 - type: mrr_at_1000 value: 35.254000000000005 - type: mrr_at_3 value: 32.413 - type: mrr_at_5 value: 33.372 - type: ndcg_at_1 value: 27.607 - type: ndcg_at_10 value: 36.041000000000004 - type: ndcg_at_100 value: 40.514 - type: ndcg_at_1000 value: 42.851 - type: ndcg_at_3 value: 31.689 - type: ndcg_at_5 value: 33.479 - type: precision_at_1 value: 27.607 - type: precision_at_10 value: 5.66 - type: precision_at_100 value: 0.868 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 13.446 - type: precision_at_5 value: 9.264 - type: recall_at_1 value: 24.595 - type: recall_at_10 value: 46.79 - type: recall_at_100 value: 67.413 - type: recall_at_1000 value: 84.753 - type: recall_at_3 value: 34.644999999999996 - type: recall_at_5 value: 39.09 - type: map_at_1 value: 17.333000000000002 - type: map_at_10 value: 24.427 - type: map_at_100 value: 25.576 - type: map_at_1000 value: 25.692999999999998 - type: map_at_3 value: 22.002 - type: map_at_5 value: 23.249 - type: mrr_at_1 value: 20.716 - type: mrr_at_10 value: 28.072000000000003 - type: mrr_at_100 value: 29.067 - type: mrr_at_1000 value: 29.137 - type: mrr_at_3 value: 25.832 - type: mrr_at_5 value: 27.045 - type: ndcg_at_1 value: 20.716 - type: ndcg_at_10 value: 29.109 - type: ndcg_at_100 value: 34.797 - type: ndcg_at_1000 value: 37.503 - type: ndcg_at_3 value: 24.668 - type: ndcg_at_5 value: 26.552999999999997 - type: precision_at_1 value: 20.716 - type: precision_at_10 value: 5.351 - type: precision_at_100 value: 0.955 - type: precision_at_1000 value: 0.136 - type: precision_at_3 value: 11.584999999999999 - type: precision_at_5 value: 8.362 - type: recall_at_1 value: 17.333000000000002 - type: recall_at_10 value: 39.604 - type: recall_at_100 value: 65.525 - type: recall_at_1000 value: 84.651 - type: recall_at_3 value: 27.199 - type: recall_at_5 value: 32.019 - type: map_at_1 value: 26.342 - type: map_at_10 value: 35.349000000000004 - type: map_at_100 value: 36.443 - type: map_at_1000 value: 36.548 - type: map_at_3 value: 32.307 - type: map_at_5 value: 34.164 - type: mrr_at_1 value: 31.063000000000002 - type: mrr_at_10 value: 39.703 - type: mrr_at_100 value: 40.555 - type: mrr_at_1000 value: 40.614 - type: mrr_at_3 value: 37.141999999999996 - type: mrr_at_5 value: 38.812000000000005 - type: ndcg_at_1 value: 31.063000000000002 - type: ndcg_at_10 value: 40.873 - type: ndcg_at_100 value: 45.896 - type: ndcg_at_1000 value: 48.205999999999996 - type: ndcg_at_3 value: 35.522 - type: ndcg_at_5 value: 38.419 - type: precision_at_1 value: 31.063000000000002 - type: precision_at_10 value: 6.866 - type: precision_at_100 value: 1.053 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 16.014 - type: precision_at_5 value: 11.604000000000001 - type: recall_at_1 value: 26.342 - type: recall_at_10 value: 53.40200000000001 - type: recall_at_100 value: 75.251 - type: recall_at_1000 value: 91.13799999999999 - type: recall_at_3 value: 39.103 - type: recall_at_5 value: 46.357 - type: map_at_1 value: 23.71 - type: map_at_10 value: 32.153999999999996 - type: map_at_100 value: 33.821 - type: map_at_1000 value: 34.034 - type: map_at_3 value: 29.376 - type: map_at_5 value: 30.878 - type: mrr_at_1 value: 28.458 - type: mrr_at_10 value: 36.775999999999996 - type: mrr_at_100 value: 37.804 - type: mrr_at_1000 value: 37.858999999999995 - type: mrr_at_3 value: 34.123999999999995 - type: mrr_at_5 value: 35.596 - type: ndcg_at_1 value: 28.458 - type: ndcg_at_10 value: 37.858999999999995 - type: ndcg_at_100 value: 44.194 - type: ndcg_at_1000 value: 46.744 - type: ndcg_at_3 value: 33.348 - type: ndcg_at_5 value: 35.448 - type: precision_at_1 value: 28.458 - type: precision_at_10 value: 7.4510000000000005 - type: precision_at_100 value: 1.5 - type: precision_at_1000 value: 0.23700000000000002 - type: precision_at_3 value: 15.809999999999999 - type: precision_at_5 value: 11.462 - type: recall_at_1 value: 23.71 - type: recall_at_10 value: 48.272999999999996 - type: recall_at_100 value: 77.134 - type: recall_at_1000 value: 93.001 - type: recall_at_3 value: 35.480000000000004 - type: recall_at_5 value: 41.19 - type: map_at_1 value: 21.331 - type: map_at_10 value: 28.926000000000002 - type: map_at_100 value: 29.855999999999998 - type: map_at_1000 value: 29.957 - type: map_at_3 value: 26.395999999999997 - type: map_at_5 value: 27.933000000000003 - type: mrr_at_1 value: 23.105 - type: mrr_at_10 value: 31.008000000000003 - type: mrr_at_100 value: 31.819999999999997 - type: mrr_at_1000 value: 31.887999999999998 - type: mrr_at_3 value: 28.466 - type: mrr_at_5 value: 30.203000000000003 - type: ndcg_at_1 value: 23.105 - type: ndcg_at_10 value: 33.635999999999996 - type: ndcg_at_100 value: 38.277 - type: ndcg_at_1000 value: 40.907 - type: ndcg_at_3 value: 28.791 - type: ndcg_at_5 value: 31.528 - type: precision_at_1 value: 23.105 - type: precision_at_10 value: 5.323 - type: precision_at_100 value: 0.815 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 12.384 - type: precision_at_5 value: 9.02 - type: recall_at_1 value: 21.331 - type: recall_at_10 value: 46.018 - type: recall_at_100 value: 67.364 - type: recall_at_1000 value: 86.97 - type: recall_at_3 value: 33.395 - type: recall_at_5 value: 39.931 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 17.011000000000003 - type: map_at_10 value: 28.816999999999997 - type: map_at_100 value: 30.761 - type: map_at_1000 value: 30.958000000000002 - type: map_at_3 value: 24.044999999999998 - type: map_at_5 value: 26.557 - type: mrr_at_1 value: 38.696999999999996 - type: mrr_at_10 value: 50.464 - type: mrr_at_100 value: 51.193999999999996 - type: mrr_at_1000 value: 51.219 - type: mrr_at_3 value: 47.339999999999996 - type: mrr_at_5 value: 49.346000000000004 - type: ndcg_at_1 value: 38.696999999999996 - type: ndcg_at_10 value: 38.53 - type: ndcg_at_100 value: 45.525 - type: ndcg_at_1000 value: 48.685 - type: ndcg_at_3 value: 32.282 - type: ndcg_at_5 value: 34.482 - type: precision_at_1 value: 38.696999999999996 - type: precision_at_10 value: 11.895999999999999 - type: precision_at_100 value: 1.95 - type: precision_at_1000 value: 0.254 - type: precision_at_3 value: 24.038999999999998 - type: precision_at_5 value: 18.332 - type: recall_at_1 value: 17.011000000000003 - type: recall_at_10 value: 44.452999999999996 - type: recall_at_100 value: 68.223 - type: recall_at_1000 value: 85.653 - type: recall_at_3 value: 28.784 - type: recall_at_5 value: 35.66 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 9.516 - type: map_at_10 value: 21.439 - type: map_at_100 value: 31.517 - type: map_at_1000 value: 33.267 - type: map_at_3 value: 15.004999999999999 - type: map_at_5 value: 17.793999999999997 - type: mrr_at_1 value: 71.25 - type: mrr_at_10 value: 79.071 - type: mrr_at_100 value: 79.325 - type: mrr_at_1000 value: 79.33 - type: mrr_at_3 value: 77.708 - type: mrr_at_5 value: 78.546 - type: ndcg_at_1 value: 58.62500000000001 - type: ndcg_at_10 value: 44.889 - type: ndcg_at_100 value: 50.536 - type: ndcg_at_1000 value: 57.724 - type: ndcg_at_3 value: 49.32 - type: ndcg_at_5 value: 46.775 - type: precision_at_1 value: 71.25 - type: precision_at_10 value: 36.175000000000004 - type: precision_at_100 value: 11.940000000000001 - type: precision_at_1000 value: 2.178 - type: precision_at_3 value: 53.583000000000006 - type: precision_at_5 value: 45.550000000000004 - type: recall_at_1 value: 9.516 - type: recall_at_10 value: 27.028000000000002 - type: recall_at_100 value: 57.581 - type: recall_at_1000 value: 80.623 - type: recall_at_3 value: 16.313 - type: recall_at_5 value: 20.674 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 51.74999999999999 - type: f1 value: 46.46706502669774 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 77.266 - type: map_at_10 value: 84.89999999999999 - type: map_at_100 value: 85.109 - type: map_at_1000 value: 85.123 - type: map_at_3 value: 83.898 - type: map_at_5 value: 84.541 - type: mrr_at_1 value: 83.138 - type: mrr_at_10 value: 89.37 - type: mrr_at_100 value: 89.432 - type: mrr_at_1000 value: 89.43299999999999 - type: mrr_at_3 value: 88.836 - type: mrr_at_5 value: 89.21 - type: ndcg_at_1 value: 83.138 - type: ndcg_at_10 value: 88.244 - type: ndcg_at_100 value: 88.98700000000001 - type: ndcg_at_1000 value: 89.21900000000001 - type: ndcg_at_3 value: 86.825 - type: ndcg_at_5 value: 87.636 - type: precision_at_1 value: 83.138 - type: precision_at_10 value: 10.47 - type: precision_at_100 value: 1.1079999999999999 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 32.933 - type: precision_at_5 value: 20.36 - type: recall_at_1 value: 77.266 - type: recall_at_10 value: 94.063 - type: recall_at_100 value: 96.993 - type: recall_at_1000 value: 98.414 - type: recall_at_3 value: 90.228 - type: recall_at_5 value: 92.328 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 22.319 - type: map_at_10 value: 36.943 - type: map_at_100 value: 38.951 - type: map_at_1000 value: 39.114 - type: map_at_3 value: 32.82 - type: map_at_5 value: 34.945 - type: mrr_at_1 value: 44.135999999999996 - type: mrr_at_10 value: 53.071999999999996 - type: mrr_at_100 value: 53.87 - type: mrr_at_1000 value: 53.90200000000001 - type: mrr_at_3 value: 50.77199999999999 - type: mrr_at_5 value: 52.129999999999995 - type: ndcg_at_1 value: 44.135999999999996 - type: ndcg_at_10 value: 44.836 - type: ndcg_at_100 value: 51.754 - type: ndcg_at_1000 value: 54.36 - type: ndcg_at_3 value: 41.658 - type: ndcg_at_5 value: 42.354 - type: precision_at_1 value: 44.135999999999996 - type: precision_at_10 value: 12.284 - type: precision_at_100 value: 1.952 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 27.828999999999997 - type: precision_at_5 value: 20.093 - type: recall_at_1 value: 22.319 - type: recall_at_10 value: 51.528 - type: recall_at_100 value: 76.70700000000001 - type: recall_at_1000 value: 92.143 - type: recall_at_3 value: 38.641 - type: recall_at_5 value: 43.653999999999996 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 40.182 - type: map_at_10 value: 65.146 - type: map_at_100 value: 66.023 - type: map_at_1000 value: 66.078 - type: map_at_3 value: 61.617999999999995 - type: map_at_5 value: 63.82299999999999 - type: mrr_at_1 value: 80.365 - type: mrr_at_10 value: 85.79 - type: mrr_at_100 value: 85.963 - type: mrr_at_1000 value: 85.968 - type: mrr_at_3 value: 84.952 - type: mrr_at_5 value: 85.503 - type: ndcg_at_1 value: 80.365 - type: ndcg_at_10 value: 73.13499999999999 - type: ndcg_at_100 value: 76.133 - type: ndcg_at_1000 value: 77.151 - type: ndcg_at_3 value: 68.255 - type: ndcg_at_5 value: 70.978 - type: precision_at_1 value: 80.365 - type: precision_at_10 value: 15.359 - type: precision_at_100 value: 1.7690000000000001 - type: precision_at_1000 value: 0.19 - type: precision_at_3 value: 44.024 - type: precision_at_5 value: 28.555999999999997 - type: recall_at_1 value: 40.182 - type: recall_at_10 value: 76.793 - type: recall_at_100 value: 88.474 - type: recall_at_1000 value: 95.159 - type: recall_at_3 value: 66.036 - type: recall_at_5 value: 71.391 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 92.7796 - type: ap value: 89.24883716810874 - type: f1 value: 92.7706903433313 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 22.016 - type: map_at_10 value: 34.408 - type: map_at_100 value: 35.592 - type: map_at_1000 value: 35.64 - type: map_at_3 value: 30.459999999999997 - type: map_at_5 value: 32.721000000000004 - type: mrr_at_1 value: 22.593 - type: mrr_at_10 value: 34.993 - type: mrr_at_100 value: 36.113 - type: mrr_at_1000 value: 36.156 - type: mrr_at_3 value: 31.101 - type: mrr_at_5 value: 33.364 - type: ndcg_at_1 value: 22.579 - type: ndcg_at_10 value: 41.404999999999994 - type: ndcg_at_100 value: 47.018 - type: ndcg_at_1000 value: 48.211999999999996 - type: ndcg_at_3 value: 33.389 - type: ndcg_at_5 value: 37.425000000000004 - type: precision_at_1 value: 22.579 - type: precision_at_10 value: 6.59 - type: precision_at_100 value: 0.938 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.241000000000001 - type: precision_at_5 value: 10.59 - type: recall_at_1 value: 22.016 - type: recall_at_10 value: 62.927 - type: recall_at_100 value: 88.72 - type: recall_at_1000 value: 97.80799999999999 - type: recall_at_3 value: 41.229 - type: recall_at_5 value: 50.88 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 94.01732786137711 - type: f1 value: 93.76353126402202 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 76.91746466028272 - type: f1 value: 57.715651682646765 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 76.5030262273033 - type: f1 value: 74.6693629986121 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 79.74781439139207 - type: f1 value: 79.96684171018774 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 33.2156206892017 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 31.180539484816137 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 32.51125957874274 - type: mrr value: 33.777037359249995 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 7.248 - type: map_at_10 value: 15.340000000000002 - type: map_at_100 value: 19.591 - type: map_at_1000 value: 21.187 - type: map_at_3 value: 11.329 - type: map_at_5 value: 13.209999999999999 - type: mrr_at_1 value: 47.678 - type: mrr_at_10 value: 57.493 - type: mrr_at_100 value: 58.038999999999994 - type: mrr_at_1000 value: 58.07 - type: mrr_at_3 value: 55.36600000000001 - type: mrr_at_5 value: 56.635999999999996 - type: ndcg_at_1 value: 46.129999999999995 - type: ndcg_at_10 value: 38.653999999999996 - type: ndcg_at_100 value: 36.288 - type: ndcg_at_1000 value: 44.765 - type: ndcg_at_3 value: 43.553 - type: ndcg_at_5 value: 41.317 - type: precision_at_1 value: 47.368 - type: precision_at_10 value: 28.669 - type: precision_at_100 value: 9.158 - type: precision_at_1000 value: 2.207 - type: precision_at_3 value: 40.97 - type: precision_at_5 value: 35.604 - type: recall_at_1 value: 7.248 - type: recall_at_10 value: 19.46 - type: recall_at_100 value: 37.214000000000006 - type: recall_at_1000 value: 67.64099999999999 - type: recall_at_3 value: 12.025 - type: recall_at_5 value: 15.443999999999999 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 31.595000000000002 - type: map_at_10 value: 47.815999999999995 - type: map_at_100 value: 48.811 - type: map_at_1000 value: 48.835 - type: map_at_3 value: 43.225 - type: map_at_5 value: 46.017 - type: mrr_at_1 value: 35.689 - type: mrr_at_10 value: 50.341 - type: mrr_at_100 value: 51.044999999999995 - type: mrr_at_1000 value: 51.062 - type: mrr_at_3 value: 46.553 - type: mrr_at_5 value: 48.918 - type: ndcg_at_1 value: 35.66 - type: ndcg_at_10 value: 55.859 - type: ndcg_at_100 value: 59.864 - type: ndcg_at_1000 value: 60.419999999999995 - type: ndcg_at_3 value: 47.371 - type: ndcg_at_5 value: 51.995000000000005 - type: precision_at_1 value: 35.66 - type: precision_at_10 value: 9.27 - type: precision_at_100 value: 1.1520000000000001 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 21.63 - type: precision_at_5 value: 15.655 - type: recall_at_1 value: 31.595000000000002 - type: recall_at_10 value: 77.704 - type: recall_at_100 value: 94.774 - type: recall_at_1000 value: 98.919 - type: recall_at_3 value: 56.052 - type: recall_at_5 value: 66.623 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 71.489 - type: map_at_10 value: 85.411 - type: map_at_100 value: 86.048 - type: map_at_1000 value: 86.064 - type: map_at_3 value: 82.587 - type: map_at_5 value: 84.339 - type: mrr_at_1 value: 82.28 - type: mrr_at_10 value: 88.27199999999999 - type: mrr_at_100 value: 88.362 - type: mrr_at_1000 value: 88.362 - type: mrr_at_3 value: 87.372 - type: mrr_at_5 value: 87.995 - type: ndcg_at_1 value: 82.27 - type: ndcg_at_10 value: 89.023 - type: ndcg_at_100 value: 90.191 - type: ndcg_at_1000 value: 90.266 - type: ndcg_at_3 value: 86.37 - type: ndcg_at_5 value: 87.804 - type: precision_at_1 value: 82.27 - type: precision_at_10 value: 13.469000000000001 - type: precision_at_100 value: 1.533 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.797 - type: precision_at_5 value: 24.734 - type: recall_at_1 value: 71.489 - type: recall_at_10 value: 95.824 - type: recall_at_100 value: 99.70599999999999 - type: recall_at_1000 value: 99.979 - type: recall_at_3 value: 88.099 - type: recall_at_5 value: 92.285 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 60.52398807444541 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 65.34855891507871 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 5.188000000000001 - type: map_at_10 value: 13.987 - type: map_at_100 value: 16.438 - type: map_at_1000 value: 16.829 - type: map_at_3 value: 9.767000000000001 - type: map_at_5 value: 11.912 - type: mrr_at_1 value: 25.6 - type: mrr_at_10 value: 37.744 - type: mrr_at_100 value: 38.847 - type: mrr_at_1000 value: 38.894 - type: mrr_at_3 value: 34.166999999999994 - type: mrr_at_5 value: 36.207 - type: ndcg_at_1 value: 25.6 - type: ndcg_at_10 value: 22.980999999999998 - type: ndcg_at_100 value: 32.039 - type: ndcg_at_1000 value: 38.157000000000004 - type: ndcg_at_3 value: 21.567 - type: ndcg_at_5 value: 19.070999999999998 - type: precision_at_1 value: 25.6 - type: precision_at_10 value: 12.02 - type: precision_at_100 value: 2.5100000000000002 - type: precision_at_1000 value: 0.396 - type: precision_at_3 value: 20.333000000000002 - type: precision_at_5 value: 16.98 - type: recall_at_1 value: 5.188000000000001 - type: recall_at_10 value: 24.372 - type: recall_at_100 value: 50.934999999999995 - type: recall_at_1000 value: 80.477 - type: recall_at_3 value: 12.363 - type: recall_at_5 value: 17.203 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 87.24286275535398 - type: cos_sim_spearman value: 82.62333770991818 - type: euclidean_pearson value: 84.60353717637284 - type: euclidean_spearman value: 82.32990108810047 - type: manhattan_pearson value: 84.6089049738196 - type: manhattan_spearman value: 82.33361785438936 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 87.87428858503165 - type: cos_sim_spearman value: 79.09145886519929 - type: euclidean_pearson value: 86.42669231664036 - type: euclidean_spearman value: 80.03127375435449 - type: manhattan_pearson value: 86.41330338305022 - type: manhattan_spearman value: 80.02492538673368 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 88.67912277322645 - type: cos_sim_spearman value: 89.6171319711762 - type: euclidean_pearson value: 86.56571917398725 - type: euclidean_spearman value: 87.71216907898948 - type: manhattan_pearson value: 86.57459050182473 - type: manhattan_spearman value: 87.71916648349993 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 86.71957379085862 - type: cos_sim_spearman value: 85.01784075851465 - type: euclidean_pearson value: 84.7407848472801 - type: euclidean_spearman value: 84.61063091345538 - type: manhattan_pearson value: 84.71494352494403 - type: manhattan_spearman value: 84.58772077604254 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.40508326325175 - type: cos_sim_spearman value: 89.50912897763186 - type: euclidean_pearson value: 87.82349070086627 - type: euclidean_spearman value: 88.44179162727521 - type: manhattan_pearson value: 87.80181927025595 - type: manhattan_spearman value: 88.43205129636243 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 85.35846741715478 - type: cos_sim_spearman value: 86.61172476741842 - type: euclidean_pearson value: 84.60123125491637 - type: euclidean_spearman value: 85.3001948141827 - type: manhattan_pearson value: 84.56231142658329 - type: manhattan_spearman value: 85.23579900798813 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.94539129818824 - type: cos_sim_spearman value: 88.99349064256742 - type: euclidean_pearson value: 88.7142444640351 - type: euclidean_spearman value: 88.34120813505011 - type: manhattan_pearson value: 88.70363008238084 - type: manhattan_spearman value: 88.31952816956954 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 68.29910260369893 - type: cos_sim_spearman value: 68.79263346213466 - type: euclidean_pearson value: 68.41627521422252 - type: euclidean_spearman value: 66.61602587398579 - type: manhattan_pearson value: 68.49402183447361 - type: manhattan_spearman value: 66.80157792354453 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 87.43703906343708 - type: cos_sim_spearman value: 89.06081805093662 - type: euclidean_pearson value: 87.48311456299662 - type: euclidean_spearman value: 88.07417597580013 - type: manhattan_pearson value: 87.48202249768894 - type: manhattan_spearman value: 88.04758031111642 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.49080620485203 - type: mrr value: 96.19145378949301 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 59.317 - type: map_at_10 value: 69.296 - type: map_at_100 value: 69.738 - type: map_at_1000 value: 69.759 - type: map_at_3 value: 66.12599999999999 - type: map_at_5 value: 67.532 - type: mrr_at_1 value: 62 - type: mrr_at_10 value: 70.176 - type: mrr_at_100 value: 70.565 - type: mrr_at_1000 value: 70.583 - type: mrr_at_3 value: 67.833 - type: mrr_at_5 value: 68.93299999999999 - type: ndcg_at_1 value: 62 - type: ndcg_at_10 value: 74.069 - type: ndcg_at_100 value: 76.037 - type: ndcg_at_1000 value: 76.467 - type: ndcg_at_3 value: 68.628 - type: ndcg_at_5 value: 70.57600000000001 - type: precision_at_1 value: 62 - type: precision_at_10 value: 10 - type: precision_at_100 value: 1.097 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.667 - type: precision_at_5 value: 17.4 - type: recall_at_1 value: 59.317 - type: recall_at_10 value: 87.822 - type: recall_at_100 value: 96.833 - type: recall_at_1000 value: 100 - type: recall_at_3 value: 73.06099999999999 - type: recall_at_5 value: 77.928 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.88910891089108 - type: cos_sim_ap value: 97.236958456951 - type: cos_sim_f1 value: 94.39999999999999 - type: cos_sim_precision value: 94.39999999999999 - type: cos_sim_recall value: 94.39999999999999 - type: dot_accuracy value: 99.82574257425742 - type: dot_ap value: 94.94344759441888 - type: dot_f1 value: 91.17352056168507 - type: dot_precision value: 91.44869215291752 - type: dot_recall value: 90.9 - type: euclidean_accuracy value: 99.88415841584158 - type: euclidean_ap value: 97.2044250782305 - type: euclidean_f1 value: 94.210786739238 - type: euclidean_precision value: 93.24191968658178 - type: euclidean_recall value: 95.19999999999999 - type: manhattan_accuracy value: 99.88613861386139 - type: manhattan_ap value: 97.20683205497689 - type: manhattan_f1 value: 94.2643391521197 - type: manhattan_precision value: 94.02985074626866 - type: manhattan_recall value: 94.5 - type: max_accuracy value: 99.88910891089108 - type: max_ap value: 97.236958456951 - type: max_f1 value: 94.39999999999999 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 66.53940781726187 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 36.71865011295108 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 55.3218674533331 - type: mrr value: 56.28279910449028 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.723915667479673 - type: cos_sim_spearman value: 32.029070449745234 - type: dot_pearson value: 28.864944212481454 - type: dot_spearman value: 27.939266999596725 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.231 - type: map_at_10 value: 1.949 - type: map_at_100 value: 10.023 - type: map_at_1000 value: 23.485 - type: map_at_3 value: 0.652 - type: map_at_5 value: 1.054 - type: mrr_at_1 value: 86 - type: mrr_at_10 value: 92.067 - type: mrr_at_100 value: 92.067 - type: mrr_at_1000 value: 92.067 - type: mrr_at_3 value: 91.667 - type: mrr_at_5 value: 92.067 - type: ndcg_at_1 value: 83 - type: ndcg_at_10 value: 76.32900000000001 - type: ndcg_at_100 value: 54.662 - type: ndcg_at_1000 value: 48.062 - type: ndcg_at_3 value: 81.827 - type: ndcg_at_5 value: 80.664 - type: precision_at_1 value: 86 - type: precision_at_10 value: 80 - type: precision_at_100 value: 55.48 - type: precision_at_1000 value: 20.938000000000002 - type: precision_at_3 value: 85.333 - type: precision_at_5 value: 84.39999999999999 - type: recall_at_1 value: 0.231 - type: recall_at_10 value: 2.158 - type: recall_at_100 value: 13.344000000000001 - type: recall_at_1000 value: 44.31 - type: recall_at_3 value: 0.6779999999999999 - type: recall_at_5 value: 1.13 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.524 - type: map_at_10 value: 10.183 - type: map_at_100 value: 16.625 - type: map_at_1000 value: 18.017 - type: map_at_3 value: 5.169 - type: map_at_5 value: 6.772 - type: mrr_at_1 value: 32.653 - type: mrr_at_10 value: 47.128 - type: mrr_at_100 value: 48.458 - type: mrr_at_1000 value: 48.473 - type: mrr_at_3 value: 44.897999999999996 - type: mrr_at_5 value: 45.306000000000004 - type: ndcg_at_1 value: 30.612000000000002 - type: ndcg_at_10 value: 24.928 - type: ndcg_at_100 value: 37.613 - type: ndcg_at_1000 value: 48.528 - type: ndcg_at_3 value: 28.829 - type: ndcg_at_5 value: 25.237 - type: precision_at_1 value: 32.653 - type: precision_at_10 value: 22.448999999999998 - type: precision_at_100 value: 8.02 - type: precision_at_1000 value: 1.537 - type: precision_at_3 value: 30.612000000000002 - type: precision_at_5 value: 24.490000000000002 - type: recall_at_1 value: 2.524 - type: recall_at_10 value: 16.38 - type: recall_at_100 value: 49.529 - type: recall_at_1000 value: 83.598 - type: recall_at_3 value: 6.411 - type: recall_at_5 value: 8.932 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.09020000000001 - type: ap value: 14.451710060978993 - type: f1 value: 54.7874410609049 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 59.745331069609506 - type: f1 value: 60.08387848592697 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.71549485462037 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.39345532574357 - type: cos_sim_ap value: 78.16796549696478 - type: cos_sim_f1 value: 71.27713276123171 - type: cos_sim_precision value: 68.3115626511853 - type: cos_sim_recall value: 74.51187335092348 - type: dot_accuracy value: 85.12248912201228 - type: dot_ap value: 69.26039256107077 - type: dot_f1 value: 65.04294321240867 - type: dot_precision value: 63.251059586138126 - type: dot_recall value: 66.93931398416886 - type: euclidean_accuracy value: 87.07754664123503 - type: euclidean_ap value: 77.7872176038945 - type: euclidean_f1 value: 70.85587801278899 - type: euclidean_precision value: 66.3519115614924 - type: euclidean_recall value: 76.01583113456465 - type: manhattan_accuracy value: 87.07754664123503 - type: manhattan_ap value: 77.7341400185556 - type: manhattan_f1 value: 70.80310880829015 - type: manhattan_precision value: 69.54198473282443 - type: manhattan_recall value: 72.1108179419525 - type: max_accuracy value: 87.39345532574357 - type: max_ap value: 78.16796549696478 - type: max_f1 value: 71.27713276123171 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.09457833663213 - type: cos_sim_ap value: 86.33024314706873 - type: cos_sim_f1 value: 78.59623733719248 - type: cos_sim_precision value: 74.13322413322413 - type: cos_sim_recall value: 83.63104404065291 - type: dot_accuracy value: 88.3086894089339 - type: dot_ap value: 83.92225241805097 - type: dot_f1 value: 76.8721826377781 - type: dot_precision value: 72.8168044077135 - type: dot_recall value: 81.40591315060055 - type: euclidean_accuracy value: 88.77052043311213 - type: euclidean_ap value: 85.7410710218755 - type: euclidean_f1 value: 77.97705489398781 - type: euclidean_precision value: 73.77713657598241 - type: euclidean_recall value: 82.68401601478288 - type: manhattan_accuracy value: 88.73753250281368 - type: manhattan_ap value: 85.72867199072802 - type: manhattan_f1 value: 77.89774182922812 - type: manhattan_precision value: 74.23787931635857 - type: manhattan_recall value: 81.93717277486911 - type: max_accuracy value: 89.09457833663213 - type: max_ap value: 86.33024314706873 - type: max_f1 value: 78.59623733719248 --- # [Universal AnglE Embedding](https://github.com/SeanLee97/AnglE) > Follow us on GitHub: https://github.com/SeanLee97/AnglE. 🔥 Our universal English sentence embedding `WhereIsAI/UAE-Large-V1` achieves **SOTA** on the [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) with an average score of 64.64! ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/635cc29de7aef2358a9b03ee/jY3tr0DCMdyJXOihSqJFr.jpeg) # Usage ```bash python -m pip install -U angle-emb ``` 1) Non-Retrieval Tasks ```python from angle_emb import AnglE angle = AnglE.from_pretrained('WhereIsAI/UAE-Large-V1', pooling_strategy='cls').cuda() vec = angle.encode('hello world', to_numpy=True) print(vec) vecs = angle.encode(['hello world1', 'hello world2'], to_numpy=True) print(vecs) ``` 2) Retrieval Tasks For retrieval purposes, please use the prompt `Prompts.C`. ```python from angle_emb import AnglE, Prompts angle = AnglE.from_pretrained('WhereIsAI/UAE-Large-V1', pooling_strategy='cls').cuda() angle.set_prompt(prompt=Prompts.C) vec = angle.encode({'text': 'hello world'}, to_numpy=True) print(vec) vecs = angle.encode([{'text': 'hello world1'}, {'text': 'hello world2'}], to_numpy=True) print(vecs) ``` # Citation If you use our pre-trained models, welcome to support us by citing our work: ``` @article{li2023angle, title={AnglE-optimized Text Embeddings}, author={Li, Xianming and Li, Jing}, journal={arXiv preprint arXiv:2309.12871}, year={2023} } ```
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
TheBloke/med42-70B-GPTQ
TheBloke
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "m42", "health", "healthcare", "clinical-llm", "en", "base_model:m42-health/med42-70b", "base_model:quantized:m42-health/med42-70b", "license:other", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
1,698
1,698
134
1
--- base_model: m42-health/med42-70b language: - en license: other license_name: med42 model_name: Med42 70B pipeline_tag: text-generation tags: - m42 - health - healthcare - clinical-llm inference: false model_creator: M42 Health model_type: llama prompt_template: '<|system|>: You are a helpful medical assistant created by M42 Health in the UAE. <|prompter|>:{prompt} <|assistant|>: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Med42 70B - GPTQ - Model creator: [M42 Health](https://huggingface.co/m42-health) - Original model: [Med42 70B](https://huggingface.co/m42-health/med42-70b) <!-- description start --> ## Description This repo contains GPTQ model files for [M42 Health's Med42 70B](https://huggingface.co/m42-health/med42-70b). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/med42-70B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/med42-70B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/med42-70B-GGUF) * [M42 Health's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/m42-health/med42-70b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Med42 ``` <|system|>: You are a helpful medical assistant created by M42 Health in the UAE. <|prompter|>:{prompt} <|assistant|>: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [M42 Health's Med42 70B](https://huggingface.co/m42-health/med42-70b). <!-- licensing end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers These GPTQ models are known to work in the following inference servers/webuis. - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KobaldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! <!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/med42-70B-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [Medical Meadow WikiDoc](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc) | 4096 | 35.33 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/med42-70B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [Medical Meadow WikiDoc](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc) | 4096 | 36.65 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/med42-70B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Medical Meadow WikiDoc](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc) | 4096 | 40.66 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/med42-70B-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [Medical Meadow WikiDoc](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc) | 4096 | 26.77 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/med42-70B-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/med42-70B-GPTQ:gptq-4bit-128g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `med42-70B-GPTQ`: ```shell mkdir med42-70B-GPTQ huggingface-cli download TheBloke/med42-70B-GPTQ --local-dir med42-70B-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir med42-70B-GPTQ huggingface-cli download TheBloke/med42-70B-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir med42-70B-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir med42-70B-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/med42-70B-GPTQ --local-dir med42-70B-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/med42-70B-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/med42-70B-GPTQ`. - To download from a specific branch, enter for example `TheBloke/med42-70B-GPTQ:gptq-4bit-128g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `med42-70B-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/med42-70B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''<|system|>: You are a helpful medical assistant created by M42 Health in the UAE. <|prompter|>:{prompt} <|assistant|>: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers optimum pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.4.2 pip3 install . ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/med42-70B-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-128g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''<|system|>: You are a helpful medical assistant created by M42 Health in the UAE. <|prompter|>:{prompt} <|assistant|>: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly. [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility. For a list of clients/servers, please see "Known compatible clients / servers", above. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: M42 Health's Med42 70B # **Med42 - Clinical Large Language Model** Med42 is an open-access clinical large language model (LLM) developed by M42 to expand access to medical knowledge. Built off LLaMA-2 and comprising 70 billion parameters, this generative AI system provides high-quality answers to medical questions. ## Model Details *Note: Use of this model is governed by the M42 Health license. In order to download the model weights (and tokenizer), please read the [Med42 License](https://huggingface.co/spaces/m42-health/License) and accept our License by requesting access here.* Beginning with the base LLaMa-2 model, Med42 was instruction-tuned on a dataset of ~250M tokens compiled from different open-access sources, including medical flashcards, exam questions, and open-domain dialogues. **Model Developers:** M42 Health AI Team **Finetuned from model:** Llama-2 - 70B **Context length:** 4k tokens **Input:** Text only data **Output:** Model generates text only **Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we enhance model's performance. **License:** A custom license is available [here](https://huggingface.co/spaces/m42-health/License) **Research Paper:** TBA ## Intended Use Med42 is being made available for further testing and assessment as an AI assistant to enhance clinical decision-making and enhance access to an LLM for healthcare use. Potential use cases include: - Medical question answering - Patient record summarization - Aiding medical diagnosis - General health Q&A To get the expected features and performance for the model, a specific formatting needs to be followed, including the `<|system|>`, `<|prompter|>` and `<|assistant|>` tags. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name_or_path = "m42-health/med42-70b" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) prompt = "What are the symptoms of diabetes ?" prompt_template=f''' <|system|>: You are a helpful medical assistant created by M42 Health in the UAE. <|prompter|>:{prompt} <|assistant|>: ''' input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True,eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id, max_new_tokens=512) print(tokenizer.decode(output[0])) ``` ## Hardware and Software The training process was performed on the Condor Galaxy 1 (CG-1) supercomputer platform. ## Evaluation Results Med42 achieves achieves competitive performance on various medical benchmarks, including MedQA, MedMCQA, PubMedQA, HeadQA, and Measuring Massive Multitask Language Understanding (MMLU) clinical topics. For all evaluations reported so far, we use [EleutherAI's evaluation harness library](https://github.com/EleutherAI/lm-evaluation-harness) and report zero-shot accuracies (except otherwise stated). We compare the performance with that reported for other models (ClinicalCamel-70B, GPT-3.5, GPT-4.0, Med-PaLM 2). |Dataset|Med42|ClinicalCamel-70B|GPT-3.5|GPT-4.0|Med-PaLM-2 (5-shot)*| |---|---|---|---|---|---| |MMLU Clinical Knowledge|74.3|69.8|69.8|86.0|88.3| |MMLU College Biology|84.0|79.2|72.2|95.1|94.4| |MMLU College Medicine|68.8|67.0|61.3|76.9|80.9| |MMLU Medical Genetics|86.0|69.0|70.0|91.0|90.0| |MMLU Professional Medicine|79.8|71.3|70.2|93.0|95.2| |MMLU Anatomy|67.4|62.2|56.3|80.0|77.8| |MedMCQA|60.9|47.0|50.1|69.5|71.3| |MedQA|61.5|53.4|50.8|78.9|79.7| |USMLE Self-Assessment|71.7|-|49.1|83.8|-| |USMLE Sample Exam|72.0|54.3|56.9|84.3|-| **We note that 0-shot performance is not reported for Med-PaLM 2. Further details can be found at [https://github.com/m42health/med42](https://github.com/m42health/med42)*. ### Key performance metrics: - Med42 achieves a 72% accuracy on the US Medical Licensing Examination (USMLE) sample exam, surpassing the prior state of the art among openly available medical LLMs. - 61.5% on MedQA dataset (compared to 50.8% for GPT-3.5) - Consistently higher performance on MMLU clinical topics compared to GPT-3.5. ## Limitations & Safe Use - Med42 is not ready for real clinical use. Extensive human evaluation is undergoing as it is required to ensure safety. - Potential for generating incorrect or harmful information. - Risk of perpetuating biases in training data. Use this model responsibly! Do not rely on it for medical usage without rigorous safety testing. ## Accessing Med42 and Reporting Issues Please report any software "bug" or other problems through one of the following means: - Reporting issues with the model: [https://github.com/m42health/med42](https://github.com/m42health/med42) - Reporting risky content generated by the model, bugs and/or any security concerns: [https://forms.office.com/r/YMJu3kcKat](https://forms.office.com/r/YMJu3kcKat) - M42’s privacy policy available at [https://m42.ae/privacy-policy/](https://m42.ae/privacy-policy/) - Reporting violations of the Acceptable Use Policy or unlicensed uses of Med42: <[email protected]>
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
[ "MEDQA", "PUBMEDQA" ]
BioNLP
twadada/ft-gcp-pca
twadada
null
[ "mteb", "model-index", "region:us" ]
1,726
1,726
0
0
--- tags: - mteb model-index: - name: fasttext_main_PCA_gcp results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: None config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.31343283582088 - type: ap value: 39.6402606765834 - type: f1 value: 70.3627511633638 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: None config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 67.2898 - type: ap value: 61.87754668001793 - type: f1 value: 67.2166271872683 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: None config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 33.116 - type: f1 value: 32.58360486720766 - task: type: Retrieval dataset: name: MTEB ArguAna type: None config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 19.346 - type: map_at_10 value: 32.623000000000005 - type: map_at_100 value: 33.829 - type: map_at_1000 value: 33.857 - type: map_at_3 value: 28.307 - type: map_at_5 value: 30.523 - type: mrr_at_1 value: 19.915 - type: mrr_at_10 value: 32.846 - type: mrr_at_100 value: 34.050999999999995 - type: mrr_at_1000 value: 34.079 - type: mrr_at_3 value: 28.497 - type: mrr_at_5 value: 30.72 - type: ndcg_at_1 value: 19.346 - type: ndcg_at_10 value: 40.458 - type: ndcg_at_100 value: 46.293 - type: ndcg_at_1000 value: 47.043 - type: ndcg_at_3 value: 31.381999999999998 - type: ndcg_at_5 value: 35.385 - type: precision_at_1 value: 19.346 - type: precision_at_10 value: 6.572 - type: precision_at_100 value: 0.9299999999999999 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 13.442000000000002 - type: precision_at_5 value: 10.014000000000001 - type: recall_at_1 value: 19.346 - type: recall_at_10 value: 65.718 - type: recall_at_100 value: 92.959 - type: recall_at_1000 value: 98.86200000000001 - type: recall_at_3 value: 40.327 - type: recall_at_5 value: 50.071 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: None config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 39.28634616254917 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: None config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 29.88368073053745 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: None config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 51.46344355474199 - type: mrr value: 64.81851558721364 - task: type: STS dataset: name: MTEB BIOSSES type: None config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 74.25527135158369 - type: cos_sim_spearman value: 70.15762268541177 - type: euclidean_pearson value: 73.6693089435787 - type: euclidean_spearman value: 70.15762268541177 - type: manhattan_pearson value: 75.24108161238829 - type: manhattan_spearman value: 71.50403890658741 - task: type: Classification dataset: name: MTEB Banking77Classification type: None config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 67.78571428571428 - type: f1 value: 66.71649216312889 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: None config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.4188358879818 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: None config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 26.33046520373852 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: None config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: map_at_1 value: 18.970000000000002 - type: map_at_10 value: 25.256 - type: map_at_100 value: 26.401000000000003 - type: map_at_1000 value: 26.545 - type: map_at_3 value: 23.347 - type: map_at_5 value: 24.393 - type: mrr_at_1 value: 24.464 - type: mrr_at_10 value: 30.580000000000002 - type: mrr_at_100 value: 31.514999999999997 - type: mrr_at_1000 value: 31.594 - type: mrr_at_3 value: 28.660000000000004 - type: mrr_at_5 value: 29.812 - type: ndcg_at_1 value: 24.464 - type: ndcg_at_10 value: 29.643000000000004 - type: ndcg_at_100 value: 35.012 - type: ndcg_at_1000 value: 38.118 - type: ndcg_at_3 value: 26.796 - type: ndcg_at_5 value: 27.944999999999997 - type: precision_at_1 value: 24.464 - type: precision_at_10 value: 5.694 - type: precision_at_100 value: 1.054 - type: precision_at_1000 value: 0.163 - type: precision_at_3 value: 13.257 - type: precision_at_5 value: 9.413 - type: recall_at_1 value: 18.970000000000002 - type: recall_at_10 value: 37.131 - type: recall_at_100 value: 61.602000000000004 - type: recall_at_1000 value: 83.092 - type: recall_at_3 value: 27.663 - type: recall_at_5 value: 31.499 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval type: None config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: map_at_1 value: 15.466 - type: map_at_10 value: 20.806 - type: map_at_100 value: 21.709999999999997 - type: map_at_1000 value: 21.831 - type: map_at_3 value: 19.035 - type: map_at_5 value: 20.058999999999997 - type: mrr_at_1 value: 19.744999999999997 - type: mrr_at_10 value: 25.237 - type: mrr_at_100 value: 25.983 - type: mrr_at_1000 value: 26.057000000000002 - type: mrr_at_3 value: 23.482 - type: mrr_at_5 value: 24.437 - type: ndcg_at_1 value: 19.744999999999997 - type: ndcg_at_10 value: 24.512999999999998 - type: ndcg_at_100 value: 28.669 - type: ndcg_at_1000 value: 31.606 - type: ndcg_at_3 value: 21.547 - type: ndcg_at_5 value: 22.953000000000003 - type: precision_at_1 value: 19.744999999999997 - type: precision_at_10 value: 4.682 - type: precision_at_100 value: 0.853 - type: precision_at_1000 value: 0.136 - type: precision_at_3 value: 10.531 - type: precision_at_5 value: 7.631 - type: recall_at_1 value: 15.466 - type: recall_at_10 value: 31.183 - type: recall_at_100 value: 49.45 - type: recall_at_1000 value: 69.92699999999999 - type: recall_at_3 value: 22.62 - type: recall_at_5 value: 26.331 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval type: None config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 23.147000000000002 - type: map_at_10 value: 30.44 - type: map_at_100 value: 31.47 - type: map_at_1000 value: 31.566 - type: map_at_3 value: 28.301 - type: map_at_5 value: 29.546 - type: mrr_at_1 value: 26.52 - type: mrr_at_10 value: 33.45 - type: mrr_at_100 value: 34.316 - type: mrr_at_1000 value: 34.386 - type: mrr_at_3 value: 31.317 - type: mrr_at_5 value: 32.605000000000004 - type: ndcg_at_1 value: 26.52 - type: ndcg_at_10 value: 34.664 - type: ndcg_at_100 value: 39.499 - type: ndcg_at_1000 value: 42.076 - type: ndcg_at_3 value: 30.564000000000004 - type: ndcg_at_5 value: 32.625 - type: precision_at_1 value: 26.52 - type: precision_at_10 value: 5.592 - type: precision_at_100 value: 0.877 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 13.542000000000002 - type: precision_at_5 value: 9.53 - type: recall_at_1 value: 23.147000000000002 - type: recall_at_10 value: 44.563 - type: recall_at_100 value: 66.117 - type: recall_at_1000 value: 85.48599999999999 - type: recall_at_3 value: 33.507999999999996 - type: recall_at_5 value: 38.532 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval type: None config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: map_at_1 value: 10.094 - type: map_at_10 value: 14.167 - type: map_at_100 value: 14.790000000000001 - type: map_at_1000 value: 14.893 - type: map_at_3 value: 12.656999999999998 - type: map_at_5 value: 13.538 - type: mrr_at_1 value: 11.073 - type: mrr_at_10 value: 15.293000000000001 - type: mrr_at_100 value: 15.931999999999999 - type: mrr_at_1000 value: 16.027 - type: mrr_at_3 value: 13.729 - type: mrr_at_5 value: 14.633 - type: ndcg_at_1 value: 11.073 - type: ndcg_at_10 value: 16.814 - type: ndcg_at_100 value: 20.307 - type: ndcg_at_1000 value: 23.468 - type: ndcg_at_3 value: 13.780000000000001 - type: ndcg_at_5 value: 15.328 - type: precision_at_1 value: 11.073 - type: precision_at_10 value: 2.734 - type: precision_at_100 value: 0.47200000000000003 - type: precision_at_1000 value: 0.079 - type: precision_at_3 value: 5.951 - type: precision_at_5 value: 4.407 - type: recall_at_1 value: 10.094 - type: recall_at_10 value: 24.031 - type: recall_at_100 value: 40.89 - type: recall_at_1000 value: 65.644 - type: recall_at_3 value: 15.857 - type: recall_at_5 value: 19.595000000000002 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval type: None config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: map_at_1 value: 5.864 - type: map_at_10 value: 8.579 - type: map_at_100 value: 9.216000000000001 - type: map_at_1000 value: 9.339 - type: map_at_3 value: 7.668 - type: map_at_5 value: 8.121 - type: mrr_at_1 value: 7.587000000000001 - type: mrr_at_10 value: 10.879999999999999 - type: mrr_at_100 value: 11.59 - type: mrr_at_1000 value: 11.692 - type: mrr_at_3 value: 9.783999999999999 - type: mrr_at_5 value: 10.344000000000001 - type: ndcg_at_1 value: 7.587000000000001 - type: ndcg_at_10 value: 10.722 - type: ndcg_at_100 value: 14.313999999999998 - type: ndcg_at_1000 value: 17.901 - type: ndcg_at_3 value: 8.859 - type: ndcg_at_5 value: 9.628 - type: precision_at_1 value: 7.587000000000001 - type: precision_at_10 value: 1.9900000000000002 - type: precision_at_100 value: 0.45399999999999996 - type: precision_at_1000 value: 0.08800000000000001 - type: precision_at_3 value: 4.312 - type: precision_at_5 value: 3.159 - type: recall_at_1 value: 5.864 - type: recall_at_10 value: 15.261 - type: recall_at_100 value: 31.591 - type: recall_at_1000 value: 58.426 - type: recall_at_3 value: 10.064 - type: recall_at_5 value: 11.941 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval type: None config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: map_at_1 value: 15.870000000000001 - type: map_at_10 value: 21.12 - type: map_at_100 value: 22.166 - type: map_at_1000 value: 22.291 - type: map_at_3 value: 19.063 - type: map_at_5 value: 20.263 - type: mrr_at_1 value: 18.961 - type: mrr_at_10 value: 24.798000000000002 - type: mrr_at_100 value: 25.678 - type: mrr_at_1000 value: 25.758 - type: mrr_at_3 value: 22.650000000000002 - type: mrr_at_5 value: 23.852999999999998 - type: ndcg_at_1 value: 18.961 - type: ndcg_at_10 value: 25.009999999999998 - type: ndcg_at_100 value: 30.232 - type: ndcg_at_1000 value: 33.23 - type: ndcg_at_3 value: 21.313 - type: ndcg_at_5 value: 23.136000000000003 - type: precision_at_1 value: 18.961 - type: precision_at_10 value: 4.562 - type: precision_at_100 value: 0.885 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 9.785 - type: precision_at_5 value: 7.276000000000001 - type: recall_at_1 value: 15.870000000000001 - type: recall_at_10 value: 33.265 - type: recall_at_100 value: 56.367999999999995 - type: recall_at_1000 value: 77.34100000000001 - type: recall_at_3 value: 22.872999999999998 - type: recall_at_5 value: 27.541 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval type: None config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: map_at_1 value: 11.029 - type: map_at_10 value: 16.307 - type: map_at_100 value: 17.258000000000003 - type: map_at_1000 value: 17.397000000000002 - type: map_at_3 value: 14.707999999999998 - type: map_at_5 value: 15.476 - type: mrr_at_1 value: 13.469999999999999 - type: mrr_at_10 value: 19.494 - type: mrr_at_100 value: 20.366 - type: mrr_at_1000 value: 20.464 - type: mrr_at_3 value: 17.751 - type: mrr_at_5 value: 18.63 - type: ndcg_at_1 value: 13.469999999999999 - type: ndcg_at_10 value: 19.863 - type: ndcg_at_100 value: 24.613 - type: ndcg_at_1000 value: 28.107 - type: ndcg_at_3 value: 16.716 - type: ndcg_at_5 value: 17.959 - type: precision_at_1 value: 13.469999999999999 - type: precision_at_10 value: 3.7560000000000002 - type: precision_at_100 value: 0.7270000000000001 - type: precision_at_1000 value: 0.122 - type: precision_at_3 value: 8.105 - type: precision_at_5 value: 5.845000000000001 - type: recall_at_1 value: 11.029 - type: recall_at_10 value: 27.633000000000003 - type: recall_at_100 value: 48.532 - type: recall_at_1000 value: 73.16 - type: recall_at_3 value: 18.834 - type: recall_at_5 value: 21.919 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: mteb/cqadupstack config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 12.789666666666665 - type: map_at_10 value: 17.44066666666667 - type: map_at_100 value: 18.274416666666664 - type: map_at_1000 value: 18.397666666666666 - type: map_at_3 value: 15.894749999999998 - type: map_at_5 value: 16.714249999999996 - type: mrr_at_1 value: 15.513749999999998 - type: mrr_at_10 value: 20.397250000000003 - type: mrr_at_100 value: 21.16466666666667 - type: mrr_at_1000 value: 21.25375 - type: mrr_at_3 value: 18.7975 - type: mrr_at_5 value: 19.643333333333338 - type: ndcg_at_1 value: 15.513749999999998 - type: ndcg_at_10 value: 20.66291666666667 - type: ndcg_at_100 value: 24.884500000000003 - type: ndcg_at_1000 value: 28.035583333333335 - type: ndcg_at_3 value: 17.804 - type: ndcg_at_5 value: 19.046333333333333 - type: precision_at_1 value: 15.513749999999998 - type: precision_at_10 value: 3.7079999999999993 - type: precision_at_100 value: 0.6968333333333332 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 8.267916666666668 - type: precision_at_5 value: 5.9515 - type: recall_at_1 value: 12.789666666666665 - type: recall_at_10 value: 27.520416666666662 - type: recall_at_100 value: 46.91583333333334 - type: recall_at_1000 value: 69.91024999999999 - type: recall_at_3 value: 19.35025 - type: recall_at_5 value: 22.548583333333337 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval type: None config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: map_at_1 value: 9.551 - type: map_at_10 value: 13.431999999999999 - type: map_at_100 value: 14.13 - type: map_at_1000 value: 14.213999999999999 - type: map_at_3 value: 12.015 - type: map_at_5 value: 12.708 - type: mrr_at_1 value: 11.35 - type: mrr_at_10 value: 15.35 - type: mrr_at_100 value: 16.037000000000003 - type: mrr_at_1000 value: 16.111 - type: mrr_at_3 value: 13.88 - type: mrr_at_5 value: 14.616999999999999 - type: ndcg_at_1 value: 11.35 - type: ndcg_at_10 value: 16.082 - type: ndcg_at_100 value: 19.875 - type: ndcg_at_1000 value: 22.219 - type: ndcg_at_3 value: 13.306000000000001 - type: ndcg_at_5 value: 14.413 - type: precision_at_1 value: 11.35 - type: precision_at_10 value: 2.8369999999999997 - type: precision_at_100 value: 0.509 - type: precision_at_1000 value: 0.077 - type: precision_at_3 value: 6.084 - type: precision_at_5 value: 4.324999999999999 - type: recall_at_1 value: 9.551 - type: recall_at_10 value: 22.461000000000002 - type: recall_at_100 value: 40.611000000000004 - type: recall_at_1000 value: 58.172999999999995 - type: recall_at_3 value: 14.732000000000001 - type: recall_at_5 value: 17.524 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval type: None config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 6.572 - type: map_at_10 value: 9.33 - type: map_at_100 value: 9.933 - type: map_at_1000 value: 10.035 - type: map_at_3 value: 8.399 - type: map_at_5 value: 8.863999999999999 - type: mrr_at_1 value: 8.396 - type: mrr_at_10 value: 11.518 - type: mrr_at_100 value: 12.135 - type: mrr_at_1000 value: 12.223 - type: mrr_at_3 value: 10.415000000000001 - type: mrr_at_5 value: 10.952 - type: ndcg_at_1 value: 8.396 - type: ndcg_at_10 value: 11.439 - type: ndcg_at_100 value: 14.616000000000001 - type: ndcg_at_1000 value: 17.66 - type: ndcg_at_3 value: 9.613 - type: ndcg_at_5 value: 10.324 - type: precision_at_1 value: 8.396 - type: precision_at_10 value: 2.185 - type: precision_at_100 value: 0.447 - type: precision_at_1000 value: 0.084 - type: precision_at_3 value: 4.646 - type: precision_at_5 value: 3.372 - type: recall_at_1 value: 6.572 - type: recall_at_10 value: 15.811 - type: recall_at_100 value: 30.384 - type: recall_at_1000 value: 53.134 - type: recall_at_3 value: 10.5 - type: recall_at_5 value: 12.385 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval type: None config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: map_at_1 value: 12.595999999999998 - type: map_at_10 value: 16.471 - type: map_at_100 value: 17.189 - type: map_at_1000 value: 17.307 - type: map_at_3 value: 15.207 - type: map_at_5 value: 15.745999999999999 - type: mrr_at_1 value: 15.204999999999998 - type: mrr_at_10 value: 19.536 - type: mrr_at_100 value: 20.216 - type: mrr_at_1000 value: 20.319000000000003 - type: mrr_at_3 value: 18.113 - type: mrr_at_5 value: 18.752 - type: ndcg_at_1 value: 15.204999999999998 - type: ndcg_at_10 value: 19.422 - type: ndcg_at_100 value: 23.462 - type: ndcg_at_1000 value: 26.861 - type: ndcg_at_3 value: 16.869999999999997 - type: ndcg_at_5 value: 17.692 - type: precision_at_1 value: 15.204999999999998 - type: precision_at_10 value: 3.274 - type: precision_at_100 value: 0.598 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 7.617999999999999 - type: precision_at_5 value: 5.167999999999999 - type: recall_at_1 value: 12.595999999999998 - type: recall_at_10 value: 25.753999999999998 - type: recall_at_100 value: 44.803 - type: recall_at_1000 value: 69.986 - type: recall_at_3 value: 18.406 - type: recall_at_5 value: 20.584 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval type: None config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: map_at_1 value: 13.288 - type: map_at_10 value: 18.784 - type: map_at_100 value: 19.769000000000002 - type: map_at_1000 value: 19.976 - type: map_at_3 value: 16.773 - type: map_at_5 value: 17.809 - type: mrr_at_1 value: 17.194000000000003 - type: mrr_at_10 value: 22.591 - type: mrr_at_100 value: 23.51 - type: mrr_at_1000 value: 23.602999999999998 - type: mrr_at_3 value: 20.817 - type: mrr_at_5 value: 21.706 - type: ndcg_at_1 value: 17.194000000000003 - type: ndcg_at_10 value: 22.887 - type: ndcg_at_100 value: 27.337 - type: ndcg_at_1000 value: 31.064000000000004 - type: ndcg_at_3 value: 19.554 - type: ndcg_at_5 value: 21.060000000000002 - type: precision_at_1 value: 17.194000000000003 - type: precision_at_10 value: 4.565 - type: precision_at_100 value: 1.0 - type: precision_at_1000 value: 0.193 - type: precision_at_3 value: 9.223 - type: precision_at_5 value: 7.115 - type: recall_at_1 value: 13.288 - type: recall_at_10 value: 30.520999999999997 - type: recall_at_100 value: 51.746 - type: recall_at_1000 value: 77.274 - type: recall_at_3 value: 20.405 - type: recall_at_5 value: 24.266 - task: type: Retrieval dataset: name: MTEB CQADupstackWordpressRetrieval type: None config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 11.029 - type: map_at_10 value: 14.596 - type: map_at_100 value: 15.261 - type: map_at_1000 value: 15.378 - type: map_at_3 value: 13.564000000000002 - type: map_at_5 value: 14.048 - type: mrr_at_1 value: 12.2 - type: mrr_at_10 value: 16.04 - type: mrr_at_100 value: 16.698 - type: mrr_at_1000 value: 16.811 - type: mrr_at_3 value: 14.972 - type: mrr_at_5 value: 15.379000000000001 - type: ndcg_at_1 value: 12.2 - type: ndcg_at_10 value: 16.896 - type: ndcg_at_100 value: 20.678 - type: ndcg_at_1000 value: 24.117 - type: ndcg_at_3 value: 14.729999999999999 - type: ndcg_at_5 value: 15.493000000000002 - type: precision_at_1 value: 12.2 - type: precision_at_10 value: 2.625 - type: precision_at_100 value: 0.486 - type: precision_at_1000 value: 0.087 - type: precision_at_3 value: 6.161 - type: precision_at_5 value: 4.1770000000000005 - type: recall_at_1 value: 11.029 - type: recall_at_10 value: 22.631 - type: recall_at_100 value: 40.896 - type: recall_at_1000 value: 67.28 - type: recall_at_3 value: 16.741 - type: recall_at_5 value: 18.465999999999998 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: None config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 6.7540000000000004 - type: map_at_10 value: 11.886 - type: map_at_100 value: 13.389999999999999 - type: map_at_1000 value: 13.586 - type: map_at_3 value: 9.657 - type: map_at_5 value: 10.859 - type: mrr_at_1 value: 15.895999999999999 - type: mrr_at_10 value: 24.598 - type: mrr_at_100 value: 25.740000000000002 - type: mrr_at_1000 value: 25.802999999999997 - type: mrr_at_3 value: 21.488 - type: mrr_at_5 value: 23.224 - type: ndcg_at_1 value: 15.895999999999999 - type: ndcg_at_10 value: 17.952 - type: ndcg_at_100 value: 24.755 - type: ndcg_at_1000 value: 28.666999999999998 - type: ndcg_at_3 value: 13.84 - type: ndcg_at_5 value: 15.426 - type: precision_at_1 value: 15.895999999999999 - type: precision_at_10 value: 5.928 - type: precision_at_100 value: 1.322 - type: precision_at_1000 value: 0.203 - type: precision_at_3 value: 10.488999999999999 - type: precision_at_5 value: 8.508000000000001 - type: recall_at_1 value: 6.7540000000000004 - type: recall_at_10 value: 22.767 - type: recall_at_100 value: 46.748 - type: recall_at_1000 value: 69.135 - type: recall_at_3 value: 12.923000000000002 - type: recall_at_5 value: 17.092 - task: type: Retrieval dataset: name: MTEB DBPedia type: None config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 2.406 - type: map_at_10 value: 5.536 - type: map_at_100 value: 8.086 - type: map_at_1000 value: 8.802999999999999 - type: map_at_3 value: 3.718 - type: map_at_5 value: 4.536 - type: mrr_at_1 value: 29.75 - type: mrr_at_10 value: 38.872 - type: mrr_at_100 value: 39.656000000000006 - type: mrr_at_1000 value: 39.705 - type: mrr_at_3 value: 36.042 - type: mrr_at_5 value: 37.592 - type: ndcg_at_1 value: 21.75 - type: ndcg_at_10 value: 16.145 - type: ndcg_at_100 value: 18.795 - type: ndcg_at_1000 value: 25.217 - type: ndcg_at_3 value: 18.021 - type: ndcg_at_5 value: 17.213 - type: precision_at_1 value: 29.75 - type: precision_at_10 value: 15.225 - type: precision_at_100 value: 4.907 - type: precision_at_1000 value: 1.114 - type: precision_at_3 value: 22.083 - type: precision_at_5 value: 19.2 - type: recall_at_1 value: 2.406 - type: recall_at_10 value: 9.638 - type: recall_at_100 value: 24.758 - type: recall_at_1000 value: 47.137 - type: recall_at_3 value: 4.61 - type: recall_at_5 value: 6.660000000000001 - task: type: Classification dataset: name: MTEB EmotionClassification type: None config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 37.135 - type: f1 value: 33.97265311696195 - task: type: Retrieval dataset: name: MTEB FEVER type: None config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 14.610000000000001 - type: map_at_10 value: 21.112000000000002 - type: map_at_100 value: 21.921 - type: map_at_1000 value: 21.996 - type: map_at_3 value: 19.064 - type: map_at_5 value: 20.191 - type: mrr_at_1 value: 15.601999999999999 - type: mrr_at_10 value: 22.404 - type: mrr_at_100 value: 23.205000000000002 - type: mrr_at_1000 value: 23.273 - type: mrr_at_3 value: 20.247 - type: mrr_at_5 value: 21.436 - type: ndcg_at_1 value: 15.601999999999999 - type: ndcg_at_10 value: 25.064999999999998 - type: ndcg_at_100 value: 29.319 - type: ndcg_at_1000 value: 31.456 - type: ndcg_at_3 value: 20.801 - type: ndcg_at_5 value: 22.835 - type: precision_at_1 value: 15.601999999999999 - type: precision_at_10 value: 3.936 - type: precision_at_100 value: 0.626 - type: precision_at_1000 value: 0.083 - type: precision_at_3 value: 8.821 - type: precision_at_5 value: 6.372999999999999 - type: recall_at_1 value: 14.610000000000001 - type: recall_at_10 value: 36.260999999999996 - type: recall_at_100 value: 56.433 - type: recall_at_1000 value: 73.064 - type: recall_at_3 value: 24.634 - type: recall_at_5 value: 29.520000000000003 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: None config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 5.639 - type: map_at_10 value: 9.072 - type: map_at_100 value: 10.041 - type: map_at_1000 value: 10.224 - type: map_at_3 value: 7.627000000000001 - type: map_at_5 value: 8.376999999999999 - type: mrr_at_1 value: 10.802 - type: mrr_at_10 value: 16.269 - type: mrr_at_100 value: 17.209 - type: mrr_at_1000 value: 17.313000000000002 - type: mrr_at_3 value: 14.299999999999999 - type: mrr_at_5 value: 15.35 - type: ndcg_at_1 value: 10.802 - type: ndcg_at_10 value: 12.826 - type: ndcg_at_100 value: 17.979 - type: ndcg_at_1000 value: 22.305 - type: ndcg_at_3 value: 10.365 - type: ndcg_at_5 value: 11.242 - type: precision_at_1 value: 10.802 - type: precision_at_10 value: 3.688 - type: precision_at_100 value: 0.864 - type: precision_at_1000 value: 0.16199999999999998 - type: precision_at_3 value: 6.893000000000001 - type: precision_at_5 value: 5.432 - type: recall_at_1 value: 5.639 - type: recall_at_10 value: 16.936999999999998 - type: recall_at_100 value: 37.964999999999996 - type: recall_at_1000 value: 64.747 - type: recall_at_3 value: 9.655 - type: recall_at_5 value: 12.361 - task: type: Retrieval dataset: name: MTEB HotpotQA type: None config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 13.794999999999998 - type: map_at_10 value: 19.503 - type: map_at_100 value: 20.301 - type: map_at_1000 value: 20.403 - type: map_at_3 value: 17.865000000000002 - type: map_at_5 value: 18.762 - type: mrr_at_1 value: 27.589000000000002 - type: mrr_at_10 value: 34.12 - type: mrr_at_100 value: 34.855000000000004 - type: mrr_at_1000 value: 34.917 - type: mrr_at_3 value: 32.305 - type: mrr_at_5 value: 33.263999999999996 - type: ndcg_at_1 value: 27.589000000000002 - type: ndcg_at_10 value: 25.378 - type: ndcg_at_100 value: 29.279 - type: ndcg_at_1000 value: 31.867 - type: ndcg_at_3 value: 22.131999999999998 - type: ndcg_at_5 value: 23.655 - type: precision_at_1 value: 27.589000000000002 - type: precision_at_10 value: 5.679 - type: precision_at_100 value: 0.882 - type: precision_at_1000 value: 0.123 - type: precision_at_3 value: 14.036000000000001 - type: precision_at_5 value: 9.629 - type: recall_at_1 value: 13.794999999999998 - type: recall_at_10 value: 28.393 - type: recall_at_100 value: 44.092 - type: recall_at_1000 value: 61.458 - type: recall_at_3 value: 21.053 - type: recall_at_5 value: 24.072 - task: type: Classification dataset: name: MTEB ImdbClassification type: None config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 67.62120000000002 - type: ap value: 61.977838836960366 - type: f1 value: 67.4139917360422 - task: type: Retrieval dataset: name: MTEB MSMARCO type: None config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 3.5409999999999995 - type: map_at_10 value: 6.231 - type: map_at_100 value: 6.872 - type: map_at_1000 value: 6.970999999999999 - type: map_at_3 value: 5.106999999999999 - type: map_at_5 value: 5.698 - type: mrr_at_1 value: 3.567 - type: mrr_at_10 value: 6.356000000000001 - type: mrr_at_100 value: 7.0040000000000004 - type: mrr_at_1000 value: 7.101 - type: mrr_at_3 value: 5.193 - type: mrr_at_5 value: 5.804 - type: ndcg_at_1 value: 3.567 - type: ndcg_at_10 value: 8.045 - type: ndcg_at_100 value: 11.846 - type: ndcg_at_1000 value: 14.926 - type: ndcg_at_3 value: 5.6610000000000005 - type: ndcg_at_5 value: 6.737 - type: precision_at_1 value: 3.567 - type: precision_at_10 value: 1.43 - type: precision_at_100 value: 0.345 - type: precision_at_1000 value: 0.061 - type: precision_at_3 value: 2.45 - type: precision_at_5 value: 2.0140000000000002 - type: recall_at_1 value: 3.5409999999999995 - type: recall_at_10 value: 13.794 - type: recall_at_100 value: 32.921 - type: recall_at_1000 value: 57.863 - type: recall_at_3 value: 7.175 - type: recall_at_5 value: 9.768 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: None config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 87.63337893296853 - type: f1 value: 87.08830964839531 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: None config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 63.12585499316006 - type: f1 value: 43.45902259676137 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: None config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.41223940820444 - type: f1 value: 58.47877966227865 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: None config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.21788836583725 - type: f1 value: 66.63251304134974 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: None config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 28.886280318557983 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: None config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 24.031171971800816 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: None config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 27.38298623223962 - type: mrr value: 27.842813890242628 - task: type: Retrieval dataset: name: MTEB NFCorpus type: None config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 4.2700000000000005 - type: map_at_10 value: 8.479000000000001 - type: map_at_100 value: 10.402000000000001 - type: map_at_1000 value: 11.612 - type: map_at_3 value: 6.761 - type: map_at_5 value: 7.489 - type: mrr_at_1 value: 31.269000000000002 - type: mrr_at_10 value: 40.519 - type: mrr_at_100 value: 41.388000000000005 - type: mrr_at_1000 value: 41.453 - type: mrr_at_3 value: 38.029 - type: mrr_at_5 value: 39.546 - type: ndcg_at_1 value: 29.876 - type: ndcg_at_10 value: 23.665 - type: ndcg_at_100 value: 22.468 - type: ndcg_at_1000 value: 32.029999999999994 - type: ndcg_at_3 value: 26.968999999999998 - type: ndcg_at_5 value: 25.763 - type: precision_at_1 value: 31.269000000000002 - type: precision_at_10 value: 17.183 - type: precision_at_100 value: 6.003 - type: precision_at_1000 value: 1.925 - type: precision_at_3 value: 24.871 - type: precision_at_5 value: 21.981 - type: recall_at_1 value: 4.2700000000000005 - type: recall_at_10 value: 12.097 - type: recall_at_100 value: 24.349 - type: recall_at_1000 value: 58.178 - type: recall_at_3 value: 8.256 - type: recall_at_5 value: 9.498 - task: type: Retrieval dataset: name: MTEB NQ type: None config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 5.685 - type: map_at_10 value: 9.511 - type: map_at_100 value: 10.421 - type: map_at_1000 value: 10.531 - type: map_at_3 value: 8.007 - type: map_at_5 value: 8.815000000000001 - type: mrr_at_1 value: 6.6339999999999995 - type: mrr_at_10 value: 10.767 - type: mrr_at_100 value: 11.65 - type: mrr_at_1000 value: 11.748 - type: mrr_at_3 value: 9.231 - type: mrr_at_5 value: 10.056 - type: ndcg_at_1 value: 6.6339999999999995 - type: ndcg_at_10 value: 12.200999999999999 - type: ndcg_at_100 value: 17.013 - type: ndcg_at_1000 value: 20.108999999999998 - type: ndcg_at_3 value: 9.09 - type: ndcg_at_5 value: 10.520999999999999 - type: precision_at_1 value: 6.6339999999999995 - type: precision_at_10 value: 2.335 - type: precision_at_100 value: 0.508 - type: precision_at_1000 value: 0.08 - type: precision_at_3 value: 4.365 - type: precision_at_5 value: 3.418 - type: recall_at_1 value: 5.685 - type: recall_at_10 value: 19.325 - type: recall_at_100 value: 41.971000000000004 - type: recall_at_1000 value: 65.877 - type: recall_at_3 value: 11.03 - type: recall_at_5 value: 14.360999999999999 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: None config: default split: test revision: None metrics: - type: map_at_1 value: 56.274 - type: map_at_10 value: 68.382 - type: map_at_100 value: 69.24 - type: map_at_1000 value: 69.286 - type: map_at_3 value: 65.36999999999999 - type: map_at_5 value: 67.163 - type: mrr_at_1 value: 64.8 - type: mrr_at_10 value: 73.11800000000001 - type: mrr_at_100 value: 73.478 - type: mrr_at_1000 value: 73.489 - type: mrr_at_3 value: 71.307 - type: mrr_at_5 value: 72.42 - type: ndcg_at_1 value: 64.85 - type: ndcg_at_10 value: 73.656 - type: ndcg_at_100 value: 76.44800000000001 - type: ndcg_at_1000 value: 77.066 - type: ndcg_at_3 value: 69.478 - type: ndcg_at_5 value: 71.577 - type: precision_at_1 value: 64.85 - type: precision_at_10 value: 11.246 - type: precision_at_100 value: 1.399 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 30.197000000000003 - type: precision_at_5 value: 20.122 - type: recall_at_1 value: 56.274 - type: recall_at_10 value: 84.25200000000001 - type: recall_at_100 value: 95.269 - type: recall_at_1000 value: 99.007 - type: recall_at_3 value: 72.452 - type: recall_at_5 value: 78.108 - type: map_at_1 value: 1.778 - type: map_at_10 value: 3.9440000000000004 - type: map_at_100 value: 4.835 - type: map_at_1000 value: 5.065 - type: map_at_3 value: 2.935 - type: map_at_5 value: 3.427 - type: mrr_at_1 value: 8.7 - type: mrr_at_10 value: 13.711 - type: mrr_at_100 value: 14.748 - type: mrr_at_1000 value: 14.901 - type: mrr_at_3 value: 11.683 - type: mrr_at_5 value: 12.793 - type: ndcg_at_1 value: 8.7 - type: ndcg_at_10 value: 7.3389999999999995 - type: ndcg_at_100 value: 12.021999999999998 - type: ndcg_at_1000 value: 17.527 - type: ndcg_at_3 value: 6.797000000000001 - type: ndcg_at_5 value: 6.0040000000000004 - type: precision_at_1 value: 8.7 - type: precision_at_10 value: 3.83 - type: precision_at_100 value: 1.082 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 6.267 - type: precision_at_5 value: 5.220000000000001 - type: recall_at_1 value: 1.778 - type: recall_at_10 value: 7.786999999999999 - type: recall_at_100 value: 21.995 - type: recall_at_1000 value: 49.128 - type: recall_at_3 value: 3.823 - type: recall_at_5 value: 5.327 - type: map_at_1 value: 0.146 - type: map_at_10 value: 0.853 - type: map_at_100 value: 4.044 - type: map_at_1000 value: 9.74 - type: map_at_3 value: 0.35000000000000003 - type: map_at_5 value: 0.498 - type: mrr_at_1 value: 57.99999999999999 - type: mrr_at_10 value: 67.786 - type: mrr_at_100 value: 68.221 - type: mrr_at_1000 value: 68.221 - type: mrr_at_3 value: 64.667 - type: mrr_at_5 value: 66.967 - type: ndcg_at_1 value: 48.0 - type: ndcg_at_10 value: 42.696 - type: ndcg_at_100 value: 30.304 - type: ndcg_at_1000 value: 27.717000000000002 - type: ndcg_at_3 value: 45.765 - type: ndcg_at_5 value: 44.635000000000005 - type: precision_at_1 value: 56.00000000000001 - type: precision_at_10 value: 46.6 - type: precision_at_100 value: 31.879999999999995 - type: precision_at_1000 value: 13.514000000000001 - type: precision_at_3 value: 51.333 - type: precision_at_5 value: 50.4 - type: recall_at_1 value: 0.146 - type: recall_at_10 value: 1.094 - type: recall_at_100 value: 7.049999999999999 - type: recall_at_1000 value: 27.473 - type: recall_at_3 value: 0.382 - type: recall_at_5 value: 0.577 - task: type: Clustering dataset: name: MTEB RedditClustering type: None config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 37.28513797770202 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: None config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 44.91884744639453 - task: type: STS dataset: name: MTEB SICK-R type: None config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 70.47406828053933 - type: cos_sim_spearman value: 61.458973019024654 - type: euclidean_pearson value: 65.8472323185386 - type: euclidean_spearman value: 61.45905802077228 - type: manhattan_pearson value: 66.6819317121732 - type: manhattan_spearman value: 61.97831416291467 - task: type: STS dataset: name: MTEB STS12 type: None config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 59.32490921670001 - type: cos_sim_spearman value: 53.34536574314851 - type: euclidean_pearson value: 56.93037980583926 - type: euclidean_spearman value: 53.34545415668656 - type: manhattan_pearson value: 60.418057231064346 - type: manhattan_spearman value: 56.191657563671406 - task: type: STS dataset: name: MTEB STS13 type: None config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 69.44381319090809 - type: cos_sim_spearman value: 71.89239151101017 - type: euclidean_pearson value: 71.08466353671255 - type: euclidean_spearman value: 71.8923431546743 - type: manhattan_pearson value: 72.82399635347963 - type: manhattan_spearman value: 73.46471776869852 - task: type: STS dataset: name: MTEB STS14 type: None config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 69.59255573008079 - type: cos_sim_spearman value: 66.62913411736977 - type: euclidean_pearson value: 68.95610510950463 - type: euclidean_spearman value: 66.62909813588546 - type: manhattan_pearson value: 69.84273035920991 - type: manhattan_spearman value: 67.48063231044473 - task: type: STS dataset: name: MTEB STS15 type: None config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 73.45619996226283 - type: cos_sim_spearman value: 74.83055449232837 - type: euclidean_pearson value: 74.62134064628388 - type: euclidean_spearman value: 74.83054707684443 - type: manhattan_pearson value: 76.61314608080133 - type: manhattan_spearman value: 76.86877561306177 - task: type: STS dataset: name: MTEB STS16 type: None config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 72.63938715237121 - type: cos_sim_spearman value: 73.94101941299299 - type: euclidean_pearson value: 73.1193483863693 - type: euclidean_spearman value: 73.94156997484005 - type: manhattan_pearson value: 74.19841840522552 - type: manhattan_spearman value: 75.04135720147993 - task: type: STS dataset: name: MTEB STS17 (en-en) type: None config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 75.09546255206122 - type: cos_sim_spearman value: 77.70366353790475 - type: euclidean_pearson value: 76.32709309757098 - type: euclidean_spearman value: 77.70366353790475 - type: manhattan_pearson value: 76.94194479561031 - type: manhattan_spearman value: 78.46592204504032 - task: type: STS dataset: name: MTEB STS22 (en) type: None config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 58.49441445114376 - type: cos_sim_spearman value: 59.05955673992238 - type: euclidean_pearson value: 60.158391183785994 - type: euclidean_spearman value: 59.05955673992238 - type: manhattan_pearson value: 61.0421570895761 - type: manhattan_spearman value: 59.192623617984765 - task: type: STS dataset: name: MTEB STSBenchmark type: None config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 67.22940000979654 - type: cos_sim_spearman value: 65.01437530414441 - type: euclidean_pearson value: 66.45644685287122 - type: euclidean_spearman value: 65.01439378456281 - type: manhattan_pearson value: 68.55942824733042 - type: manhattan_spearman value: 67.0638150575607 - task: type: Reranking dataset: name: MTEB SciDocsRR type: None config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 65.91428709750188 - type: mrr value: 87.6550990766677 - task: type: Retrieval dataset: name: MTEB SciFact type: None config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 34.75 - type: map_at_10 value: 45.217 - type: map_at_100 value: 46.12 - type: map_at_1000 value: 46.174 - type: map_at_3 value: 42.638999999999996 - type: map_at_5 value: 44.025 - type: mrr_at_1 value: 36.333 - type: mrr_at_10 value: 46.498 - type: mrr_at_100 value: 47.21 - type: mrr_at_1000 value: 47.254000000000005 - type: mrr_at_3 value: 44.0 - type: mrr_at_5 value: 45.283 - type: ndcg_at_1 value: 36.333 - type: ndcg_at_10 value: 50.782000000000004 - type: ndcg_at_100 value: 55.010999999999996 - type: ndcg_at_1000 value: 56.486000000000004 - type: ndcg_at_3 value: 45.511 - type: ndcg_at_5 value: 47.867 - type: precision_at_1 value: 36.333 - type: precision_at_10 value: 7.3 - type: precision_at_100 value: 0.967 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 18.556 - type: precision_at_5 value: 12.467 - type: recall_at_1 value: 34.75 - type: recall_at_10 value: 66.75 - type: recall_at_100 value: 86.406 - type: recall_at_1000 value: 97.833 - type: recall_at_3 value: 52.333 - type: recall_at_5 value: 57.972 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: None config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.609900990099 - type: cos_sim_ap value: 84.84797689028449 - type: cos_sim_f1 value: 78.88531618435157 - type: cos_sim_precision value: 84.98845265588915 - type: cos_sim_recall value: 73.6 - type: dot_accuracy value: 99.609900990099 - type: dot_ap value: 84.84797689028449 - type: dot_f1 value: 78.88531618435157 - type: dot_precision value: 84.98845265588915 - type: dot_recall value: 73.6 - type: euclidean_accuracy value: 99.609900990099 - type: euclidean_ap value: 84.84797689028449 - type: euclidean_f1 value: 78.88531618435157 - type: euclidean_precision value: 84.98845265588915 - type: euclidean_recall value: 73.6 - type: manhattan_accuracy value: 99.65544554455445 - type: manhattan_ap value: 87.87137216622611 - type: manhattan_f1 value: 81.47757255936675 - type: manhattan_precision value: 86.25698324022346 - type: manhattan_recall value: 77.2 - type: max_accuracy value: 99.65544554455445 - type: max_ap value: 87.87137216622611 - type: max_f1 value: 81.47757255936675 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: None config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 48.25386667167564 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: None config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 30.471037924796036 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: None config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 39.18682846959484 - type: mrr value: 39.50644841269841 - task: type: Summarization dataset: name: MTEB SummEval type: None config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 28.946936299379463 - type: cos_sim_spearman value: 30.066436215046654 - type: dot_pearson value: 28.946936196147593 - type: dot_spearman value: 30.018099216048675 - task: type: Retrieval dataset: name: MTEB Touche2020 type: None config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 1.7080000000000002 - type: map_at_10 value: 7.288 - type: map_at_100 value: 11.64 - type: map_at_1000 value: 13.227 - type: map_at_3 value: 3.6769999999999996 - type: map_at_5 value: 5.559 - type: mrr_at_1 value: 22.448999999999998 - type: mrr_at_10 value: 37.412 - type: mrr_at_100 value: 38.509 - type: mrr_at_1000 value: 38.513999999999996 - type: mrr_at_3 value: 31.633 - type: mrr_at_5 value: 35.306 - type: ndcg_at_1 value: 18.367 - type: ndcg_at_10 value: 19.139999999999997 - type: ndcg_at_100 value: 29.513 - type: ndcg_at_1000 value: 41.931000000000004 - type: ndcg_at_3 value: 18.381 - type: ndcg_at_5 value: 20.009 - type: precision_at_1 value: 22.448999999999998 - type: precision_at_10 value: 18.570999999999998 - type: precision_at_100 value: 6.796 - type: precision_at_1000 value: 1.461 - type: precision_at_3 value: 21.088 - type: precision_at_5 value: 22.857 - type: recall_at_1 value: 1.7080000000000002 - type: recall_at_10 value: 12.484 - type: recall_at_100 value: 40.387 - type: recall_at_1000 value: 78.199 - type: recall_at_3 value: 4.5969999999999995 - type: recall_at_5 value: 8.273 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: None config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.0384 - type: ap value: 14.428610751309057 - type: f1 value: 54.76005976202749 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: None config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 52.79003961516695 - type: f1 value: 52.98288621191831 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: None config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 36.071952470860104 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: None config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 80.92030756392681 - type: cos_sim_ap value: 53.647013401002674 - type: cos_sim_f1 value: 53.12840372745977 - type: cos_sim_precision value: 49.07221104404203 - type: cos_sim_recall value: 57.9155672823219 - type: dot_accuracy value: 80.92030756392681 - type: dot_ap value: 53.647013401002674 - type: dot_f1 value: 53.12840372745977 - type: dot_precision value: 49.07221104404203 - type: dot_recall value: 57.9155672823219 - type: euclidean_accuracy value: 80.92030756392681 - type: euclidean_ap value: 53.647013401002674 - type: euclidean_f1 value: 53.12840372745977 - type: euclidean_precision value: 49.07221104404203 - type: euclidean_recall value: 57.9155672823219 - type: manhattan_accuracy value: 80.9322286463611 - type: manhattan_ap value: 52.933397134497554 - type: manhattan_f1 value: 51.88005711565922 - type: manhattan_precision value: 47.247507585609014 - type: manhattan_recall value: 57.519788918205805 - type: max_accuracy value: 80.9322286463611 - type: max_ap value: 53.647013401002674 - type: max_f1 value: 53.12840372745977 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: None config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 85.58427445958007 - type: cos_sim_ap value: 77.9469627905898 - type: cos_sim_f1 value: 70.96631205673759 - type: cos_sim_precision value: 68.20505538199374 - type: cos_sim_recall value: 73.9605789959963 - type: dot_accuracy value: 85.58427445958007 - type: dot_ap value: 77.94696228563409 - type: dot_f1 value: 70.96631205673759 - type: dot_precision value: 68.20505538199374 - type: dot_recall value: 73.9605789959963 - type: euclidean_accuracy value: 85.58427445958007 - type: euclidean_ap value: 77.94696250005988 - type: euclidean_f1 value: 70.96631205673759 - type: euclidean_precision value: 68.20505538199374 - type: euclidean_recall value: 73.9605789959963 - type: manhattan_accuracy value: 86.07521248108046 - type: manhattan_ap value: 78.88285193661817 - type: manhattan_f1 value: 71.94010876474329 - type: manhattan_precision value: 66.44487932159166 - type: manhattan_recall value: 78.42623960578996 - type: max_accuracy value: 86.07521248108046 - type: max_ap value: 78.88285193661817 - type: max_f1 value: 71.94010876474329 ---
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
jncraton/e5-small-v2-ct2-int8
jncraton
null
[ "transformers", "mteb", "en", "arxiv:2212.03533", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
1,688
1,688
9
0
--- language: - en license: mit tags: - mteb model-index: - name: e5-small-v2 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 77.59701492537313 - type: ap value: 41.67064885731708 - type: f1 value: 71.86465946398573 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 91.265875 - type: ap value: 87.67633085349644 - type: f1 value: 91.24297521425744 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 45.882000000000005 - type: f1 value: 45.08058870381236 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 20.697 - type: map_at_10 value: 33.975 - type: map_at_100 value: 35.223 - type: map_at_1000 value: 35.260000000000005 - type: map_at_3 value: 29.776999999999997 - type: map_at_5 value: 32.035000000000004 - type: mrr_at_1 value: 20.982 - type: mrr_at_10 value: 34.094 - type: mrr_at_100 value: 35.343 - type: mrr_at_1000 value: 35.38 - type: mrr_at_3 value: 29.884 - type: mrr_at_5 value: 32.141999999999996 - type: ndcg_at_1 value: 20.697 - type: ndcg_at_10 value: 41.668 - type: ndcg_at_100 value: 47.397 - type: ndcg_at_1000 value: 48.305 - type: ndcg_at_3 value: 32.928000000000004 - type: ndcg_at_5 value: 36.998999999999995 - type: precision_at_1 value: 20.697 - type: precision_at_10 value: 6.636 - type: precision_at_100 value: 0.924 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 14.035 - type: precision_at_5 value: 10.398 - type: recall_at_1 value: 20.697 - type: recall_at_10 value: 66.35799999999999 - type: recall_at_100 value: 92.39 - type: recall_at_1000 value: 99.36 - type: recall_at_3 value: 42.105 - type: recall_at_5 value: 51.991 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 42.1169517447068 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 34.79553720107097 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 58.10811337308168 - type: mrr value: 71.56410763751482 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 78.46834918248696 - type: cos_sim_spearman value: 79.4289182755206 - type: euclidean_pearson value: 76.26662973727008 - type: euclidean_spearman value: 78.11744260952536 - type: manhattan_pearson value: 76.08175262609434 - type: manhattan_spearman value: 78.29395265552289 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 81.63636363636364 - type: f1 value: 81.55779952376953 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.88541137137571 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 30.05205685274407 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 30.293999999999997 - type: map_at_10 value: 39.876 - type: map_at_100 value: 41.315000000000005 - type: map_at_1000 value: 41.451 - type: map_at_3 value: 37.194 - type: map_at_5 value: 38.728 - type: mrr_at_1 value: 37.053000000000004 - type: mrr_at_10 value: 45.281 - type: mrr_at_100 value: 46.188 - type: mrr_at_1000 value: 46.245999999999995 - type: mrr_at_3 value: 43.228 - type: mrr_at_5 value: 44.366 - type: ndcg_at_1 value: 37.053000000000004 - type: ndcg_at_10 value: 45.086 - type: ndcg_at_100 value: 50.756 - type: ndcg_at_1000 value: 53.123 - type: ndcg_at_3 value: 41.416 - type: ndcg_at_5 value: 43.098 - type: precision_at_1 value: 37.053000000000004 - type: precision_at_10 value: 8.34 - type: precision_at_100 value: 1.346 - type: precision_at_1000 value: 0.186 - type: precision_at_3 value: 19.647000000000002 - type: precision_at_5 value: 13.877 - type: recall_at_1 value: 30.293999999999997 - type: recall_at_10 value: 54.309 - type: recall_at_100 value: 78.59 - type: recall_at_1000 value: 93.82300000000001 - type: recall_at_3 value: 43.168 - type: recall_at_5 value: 48.192 - type: map_at_1 value: 28.738000000000003 - type: map_at_10 value: 36.925999999999995 - type: map_at_100 value: 38.017 - type: map_at_1000 value: 38.144 - type: map_at_3 value: 34.446 - type: map_at_5 value: 35.704 - type: mrr_at_1 value: 35.478 - type: mrr_at_10 value: 42.786 - type: mrr_at_100 value: 43.458999999999996 - type: mrr_at_1000 value: 43.507 - type: mrr_at_3 value: 40.648 - type: mrr_at_5 value: 41.804 - type: ndcg_at_1 value: 35.478 - type: ndcg_at_10 value: 42.044 - type: ndcg_at_100 value: 46.249 - type: ndcg_at_1000 value: 48.44 - type: ndcg_at_3 value: 38.314 - type: ndcg_at_5 value: 39.798 - type: precision_at_1 value: 35.478 - type: precision_at_10 value: 7.764 - type: precision_at_100 value: 1.253 - type: precision_at_1000 value: 0.174 - type: precision_at_3 value: 18.047 - type: precision_at_5 value: 12.637 - type: recall_at_1 value: 28.738000000000003 - type: recall_at_10 value: 50.659 - type: recall_at_100 value: 68.76299999999999 - type: recall_at_1000 value: 82.811 - type: recall_at_3 value: 39.536 - type: recall_at_5 value: 43.763999999999996 - type: map_at_1 value: 38.565 - type: map_at_10 value: 50.168 - type: map_at_100 value: 51.11 - type: map_at_1000 value: 51.173 - type: map_at_3 value: 47.044000000000004 - type: map_at_5 value: 48.838 - type: mrr_at_1 value: 44.201 - type: mrr_at_10 value: 53.596999999999994 - type: mrr_at_100 value: 54.211 - type: mrr_at_1000 value: 54.247 - type: mrr_at_3 value: 51.202000000000005 - type: mrr_at_5 value: 52.608999999999995 - type: ndcg_at_1 value: 44.201 - type: ndcg_at_10 value: 55.694 - type: ndcg_at_100 value: 59.518 - type: ndcg_at_1000 value: 60.907 - type: ndcg_at_3 value: 50.395999999999994 - type: ndcg_at_5 value: 53.022999999999996 - type: precision_at_1 value: 44.201 - type: precision_at_10 value: 8.84 - type: precision_at_100 value: 1.162 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 22.153 - type: precision_at_5 value: 15.260000000000002 - type: recall_at_1 value: 38.565 - type: recall_at_10 value: 68.65 - type: recall_at_100 value: 85.37400000000001 - type: recall_at_1000 value: 95.37400000000001 - type: recall_at_3 value: 54.645999999999994 - type: recall_at_5 value: 60.958 - type: map_at_1 value: 23.945 - type: map_at_10 value: 30.641000000000002 - type: map_at_100 value: 31.599 - type: map_at_1000 value: 31.691000000000003 - type: map_at_3 value: 28.405 - type: map_at_5 value: 29.704000000000004 - type: mrr_at_1 value: 25.537 - type: mrr_at_10 value: 32.22 - type: mrr_at_100 value: 33.138 - type: mrr_at_1000 value: 33.214 - type: mrr_at_3 value: 30.151 - type: mrr_at_5 value: 31.298 - type: ndcg_at_1 value: 25.537 - type: ndcg_at_10 value: 34.638000000000005 - type: ndcg_at_100 value: 39.486 - type: ndcg_at_1000 value: 41.936 - type: ndcg_at_3 value: 30.333 - type: ndcg_at_5 value: 32.482 - type: precision_at_1 value: 25.537 - type: precision_at_10 value: 5.153 - type: precision_at_100 value: 0.7929999999999999 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 12.429 - type: precision_at_5 value: 8.723 - type: recall_at_1 value: 23.945 - type: recall_at_10 value: 45.412 - type: recall_at_100 value: 67.836 - type: recall_at_1000 value: 86.467 - type: recall_at_3 value: 34.031 - type: recall_at_5 value: 39.039 - type: map_at_1 value: 14.419 - type: map_at_10 value: 20.858999999999998 - type: map_at_100 value: 22.067999999999998 - type: map_at_1000 value: 22.192 - type: map_at_3 value: 18.673000000000002 - type: map_at_5 value: 19.968 - type: mrr_at_1 value: 17.785999999999998 - type: mrr_at_10 value: 24.878 - type: mrr_at_100 value: 26.021 - type: mrr_at_1000 value: 26.095000000000002 - type: mrr_at_3 value: 22.616 - type: mrr_at_5 value: 23.785 - type: ndcg_at_1 value: 17.785999999999998 - type: ndcg_at_10 value: 25.153 - type: ndcg_at_100 value: 31.05 - type: ndcg_at_1000 value: 34.052 - type: ndcg_at_3 value: 21.117 - type: ndcg_at_5 value: 23.048 - type: precision_at_1 value: 17.785999999999998 - type: precision_at_10 value: 4.590000000000001 - type: precision_at_100 value: 0.864 - type: precision_at_1000 value: 0.125 - type: precision_at_3 value: 9.908999999999999 - type: precision_at_5 value: 7.313 - type: recall_at_1 value: 14.419 - type: recall_at_10 value: 34.477999999999994 - type: recall_at_100 value: 60.02499999999999 - type: recall_at_1000 value: 81.646 - type: recall_at_3 value: 23.515 - type: recall_at_5 value: 28.266999999999996 - type: map_at_1 value: 26.268 - type: map_at_10 value: 35.114000000000004 - type: map_at_100 value: 36.212 - type: map_at_1000 value: 36.333 - type: map_at_3 value: 32.436 - type: map_at_5 value: 33.992 - type: mrr_at_1 value: 31.761 - type: mrr_at_10 value: 40.355999999999995 - type: mrr_at_100 value: 41.125 - type: mrr_at_1000 value: 41.186 - type: mrr_at_3 value: 37.937 - type: mrr_at_5 value: 39.463 - type: ndcg_at_1 value: 31.761 - type: ndcg_at_10 value: 40.422000000000004 - type: ndcg_at_100 value: 45.458999999999996 - type: ndcg_at_1000 value: 47.951 - type: ndcg_at_3 value: 35.972 - type: ndcg_at_5 value: 38.272 - type: precision_at_1 value: 31.761 - type: precision_at_10 value: 7.103 - type: precision_at_100 value: 1.133 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 16.779 - type: precision_at_5 value: 11.877 - type: recall_at_1 value: 26.268 - type: recall_at_10 value: 51.053000000000004 - type: recall_at_100 value: 72.702 - type: recall_at_1000 value: 89.521 - type: recall_at_3 value: 38.619 - type: recall_at_5 value: 44.671 - type: map_at_1 value: 25.230999999999998 - type: map_at_10 value: 34.227000000000004 - type: map_at_100 value: 35.370000000000005 - type: map_at_1000 value: 35.488 - type: map_at_3 value: 31.496000000000002 - type: map_at_5 value: 33.034 - type: mrr_at_1 value: 30.822 - type: mrr_at_10 value: 39.045 - type: mrr_at_100 value: 39.809 - type: mrr_at_1000 value: 39.873 - type: mrr_at_3 value: 36.663000000000004 - type: mrr_at_5 value: 37.964 - type: ndcg_at_1 value: 30.822 - type: ndcg_at_10 value: 39.472 - type: ndcg_at_100 value: 44.574999999999996 - type: ndcg_at_1000 value: 47.162 - type: ndcg_at_3 value: 34.929 - type: ndcg_at_5 value: 37.002 - type: precision_at_1 value: 30.822 - type: precision_at_10 value: 7.055 - type: precision_at_100 value: 1.124 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 16.591 - type: precision_at_5 value: 11.667 - type: recall_at_1 value: 25.230999999999998 - type: recall_at_10 value: 50.42100000000001 - type: recall_at_100 value: 72.685 - type: recall_at_1000 value: 90.469 - type: recall_at_3 value: 37.503 - type: recall_at_5 value: 43.123 - type: map_at_1 value: 24.604166666666664 - type: map_at_10 value: 32.427166666666665 - type: map_at_100 value: 33.51474999999999 - type: map_at_1000 value: 33.6345 - type: map_at_3 value: 30.02366666666667 - type: map_at_5 value: 31.382333333333328 - type: mrr_at_1 value: 29.001166666666666 - type: mrr_at_10 value: 36.3315 - type: mrr_at_100 value: 37.16683333333333 - type: mrr_at_1000 value: 37.23341666666668 - type: mrr_at_3 value: 34.19916666666667 - type: mrr_at_5 value: 35.40458333333334 - type: ndcg_at_1 value: 29.001166666666666 - type: ndcg_at_10 value: 37.06883333333334 - type: ndcg_at_100 value: 41.95816666666666 - type: ndcg_at_1000 value: 44.501583333333336 - type: ndcg_at_3 value: 32.973499999999994 - type: ndcg_at_5 value: 34.90833333333334 - type: precision_at_1 value: 29.001166666666666 - type: precision_at_10 value: 6.336 - type: precision_at_100 value: 1.0282499999999999 - type: precision_at_1000 value: 0.14391666666666664 - type: precision_at_3 value: 14.932499999999996 - type: precision_at_5 value: 10.50825 - type: recall_at_1 value: 24.604166666666664 - type: recall_at_10 value: 46.9525 - type: recall_at_100 value: 68.67816666666667 - type: recall_at_1000 value: 86.59783333333334 - type: recall_at_3 value: 35.49783333333333 - type: recall_at_5 value: 40.52525000000001 - type: map_at_1 value: 23.559 - type: map_at_10 value: 29.023 - type: map_at_100 value: 29.818 - type: map_at_1000 value: 29.909000000000002 - type: map_at_3 value: 27.037 - type: map_at_5 value: 28.225 - type: mrr_at_1 value: 26.994 - type: mrr_at_10 value: 31.962000000000003 - type: mrr_at_100 value: 32.726 - type: mrr_at_1000 value: 32.800000000000004 - type: mrr_at_3 value: 30.266 - type: mrr_at_5 value: 31.208999999999996 - type: ndcg_at_1 value: 26.994 - type: ndcg_at_10 value: 32.53 - type: ndcg_at_100 value: 36.758 - type: ndcg_at_1000 value: 39.362 - type: ndcg_at_3 value: 28.985 - type: ndcg_at_5 value: 30.757 - type: precision_at_1 value: 26.994 - type: precision_at_10 value: 4.968999999999999 - type: precision_at_100 value: 0.759 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 12.219 - type: precision_at_5 value: 8.527999999999999 - type: recall_at_1 value: 23.559 - type: recall_at_10 value: 40.585 - type: recall_at_100 value: 60.306000000000004 - type: recall_at_1000 value: 80.11 - type: recall_at_3 value: 30.794 - type: recall_at_5 value: 35.186 - type: map_at_1 value: 16.384999999999998 - type: map_at_10 value: 22.142 - type: map_at_100 value: 23.057 - type: map_at_1000 value: 23.177 - type: map_at_3 value: 20.29 - type: map_at_5 value: 21.332 - type: mrr_at_1 value: 19.89 - type: mrr_at_10 value: 25.771 - type: mrr_at_100 value: 26.599 - type: mrr_at_1000 value: 26.680999999999997 - type: mrr_at_3 value: 23.962 - type: mrr_at_5 value: 24.934 - type: ndcg_at_1 value: 19.89 - type: ndcg_at_10 value: 25.97 - type: ndcg_at_100 value: 30.605 - type: ndcg_at_1000 value: 33.619 - type: ndcg_at_3 value: 22.704 - type: ndcg_at_5 value: 24.199 - type: precision_at_1 value: 19.89 - type: precision_at_10 value: 4.553 - type: precision_at_100 value: 0.8049999999999999 - type: precision_at_1000 value: 0.122 - type: precision_at_3 value: 10.541 - type: precision_at_5 value: 7.46 - type: recall_at_1 value: 16.384999999999998 - type: recall_at_10 value: 34.001 - type: recall_at_100 value: 55.17100000000001 - type: recall_at_1000 value: 77.125 - type: recall_at_3 value: 24.618000000000002 - type: recall_at_5 value: 28.695999999999998 - type: map_at_1 value: 23.726 - type: map_at_10 value: 31.227 - type: map_at_100 value: 32.311 - type: map_at_1000 value: 32.419 - type: map_at_3 value: 28.765 - type: map_at_5 value: 30.229 - type: mrr_at_1 value: 27.705000000000002 - type: mrr_at_10 value: 35.085 - type: mrr_at_100 value: 35.931000000000004 - type: mrr_at_1000 value: 36 - type: mrr_at_3 value: 32.603 - type: mrr_at_5 value: 34.117999999999995 - type: ndcg_at_1 value: 27.705000000000002 - type: ndcg_at_10 value: 35.968 - type: ndcg_at_100 value: 41.197 - type: ndcg_at_1000 value: 43.76 - type: ndcg_at_3 value: 31.304 - type: ndcg_at_5 value: 33.661 - type: precision_at_1 value: 27.705000000000002 - type: precision_at_10 value: 5.942 - type: precision_at_100 value: 0.964 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 13.868 - type: precision_at_5 value: 9.944 - type: recall_at_1 value: 23.726 - type: recall_at_10 value: 46.786 - type: recall_at_100 value: 70.072 - type: recall_at_1000 value: 88.2 - type: recall_at_3 value: 33.981 - type: recall_at_5 value: 39.893 - type: map_at_1 value: 23.344 - type: map_at_10 value: 31.636999999999997 - type: map_at_100 value: 33.065 - type: map_at_1000 value: 33.300000000000004 - type: map_at_3 value: 29.351 - type: map_at_5 value: 30.432 - type: mrr_at_1 value: 27.866000000000003 - type: mrr_at_10 value: 35.587 - type: mrr_at_100 value: 36.52 - type: mrr_at_1000 value: 36.597 - type: mrr_at_3 value: 33.696 - type: mrr_at_5 value: 34.713 - type: ndcg_at_1 value: 27.866000000000003 - type: ndcg_at_10 value: 36.61 - type: ndcg_at_100 value: 41.88 - type: ndcg_at_1000 value: 45.105000000000004 - type: ndcg_at_3 value: 33.038000000000004 - type: ndcg_at_5 value: 34.331 - type: precision_at_1 value: 27.866000000000003 - type: precision_at_10 value: 6.917 - type: precision_at_100 value: 1.3599999999999999 - type: precision_at_1000 value: 0.233 - type: precision_at_3 value: 15.547 - type: precision_at_5 value: 10.791 - type: recall_at_1 value: 23.344 - type: recall_at_10 value: 45.782000000000004 - type: recall_at_100 value: 69.503 - type: recall_at_1000 value: 90.742 - type: recall_at_3 value: 35.160000000000004 - type: recall_at_5 value: 39.058 - type: map_at_1 value: 20.776 - type: map_at_10 value: 27.285999999999998 - type: map_at_100 value: 28.235 - type: map_at_1000 value: 28.337 - type: map_at_3 value: 25.147000000000002 - type: map_at_5 value: 26.401999999999997 - type: mrr_at_1 value: 22.921 - type: mrr_at_10 value: 29.409999999999997 - type: mrr_at_100 value: 30.275000000000002 - type: mrr_at_1000 value: 30.354999999999997 - type: mrr_at_3 value: 27.418 - type: mrr_at_5 value: 28.592000000000002 - type: ndcg_at_1 value: 22.921 - type: ndcg_at_10 value: 31.239 - type: ndcg_at_100 value: 35.965 - type: ndcg_at_1000 value: 38.602 - type: ndcg_at_3 value: 27.174 - type: ndcg_at_5 value: 29.229 - type: precision_at_1 value: 22.921 - type: precision_at_10 value: 4.806 - type: precision_at_100 value: 0.776 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 11.459999999999999 - type: precision_at_5 value: 8.022 - type: recall_at_1 value: 20.776 - type: recall_at_10 value: 41.294 - type: recall_at_100 value: 63.111 - type: recall_at_1000 value: 82.88600000000001 - type: recall_at_3 value: 30.403000000000002 - type: recall_at_5 value: 35.455999999999996 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 9.376 - type: map_at_10 value: 15.926000000000002 - type: map_at_100 value: 17.585 - type: map_at_1000 value: 17.776 - type: map_at_3 value: 13.014000000000001 - type: map_at_5 value: 14.417 - type: mrr_at_1 value: 20.195 - type: mrr_at_10 value: 29.95 - type: mrr_at_100 value: 31.052000000000003 - type: mrr_at_1000 value: 31.108000000000004 - type: mrr_at_3 value: 26.667 - type: mrr_at_5 value: 28.458 - type: ndcg_at_1 value: 20.195 - type: ndcg_at_10 value: 22.871 - type: ndcg_at_100 value: 29.921999999999997 - type: ndcg_at_1000 value: 33.672999999999995 - type: ndcg_at_3 value: 17.782999999999998 - type: ndcg_at_5 value: 19.544 - type: precision_at_1 value: 20.195 - type: precision_at_10 value: 7.394 - type: precision_at_100 value: 1.493 - type: precision_at_1000 value: 0.218 - type: precision_at_3 value: 13.073 - type: precision_at_5 value: 10.436 - type: recall_at_1 value: 9.376 - type: recall_at_10 value: 28.544999999999998 - type: recall_at_100 value: 53.147999999999996 - type: recall_at_1000 value: 74.62 - type: recall_at_3 value: 16.464000000000002 - type: recall_at_5 value: 21.004 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.415000000000001 - type: map_at_10 value: 18.738 - type: map_at_100 value: 27.291999999999998 - type: map_at_1000 value: 28.992 - type: map_at_3 value: 13.196 - type: map_at_5 value: 15.539 - type: mrr_at_1 value: 66.5 - type: mrr_at_10 value: 74.518 - type: mrr_at_100 value: 74.86 - type: mrr_at_1000 value: 74.87 - type: mrr_at_3 value: 72.375 - type: mrr_at_5 value: 73.86200000000001 - type: ndcg_at_1 value: 54.37499999999999 - type: ndcg_at_10 value: 41.317 - type: ndcg_at_100 value: 45.845 - type: ndcg_at_1000 value: 52.92 - type: ndcg_at_3 value: 44.983000000000004 - type: ndcg_at_5 value: 42.989 - type: precision_at_1 value: 66.5 - type: precision_at_10 value: 33.6 - type: precision_at_100 value: 10.972999999999999 - type: precision_at_1000 value: 2.214 - type: precision_at_3 value: 48.583 - type: precision_at_5 value: 42.15 - type: recall_at_1 value: 8.415000000000001 - type: recall_at_10 value: 24.953 - type: recall_at_100 value: 52.48199999999999 - type: recall_at_1000 value: 75.093 - type: recall_at_3 value: 14.341000000000001 - type: recall_at_5 value: 18.468 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 47.06499999999999 - type: f1 value: 41.439327599975385 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 66.02 - type: map_at_10 value: 76.68599999999999 - type: map_at_100 value: 76.959 - type: map_at_1000 value: 76.972 - type: map_at_3 value: 75.024 - type: map_at_5 value: 76.153 - type: mrr_at_1 value: 71.197 - type: mrr_at_10 value: 81.105 - type: mrr_at_100 value: 81.232 - type: mrr_at_1000 value: 81.233 - type: mrr_at_3 value: 79.758 - type: mrr_at_5 value: 80.69 - type: ndcg_at_1 value: 71.197 - type: ndcg_at_10 value: 81.644 - type: ndcg_at_100 value: 82.645 - type: ndcg_at_1000 value: 82.879 - type: ndcg_at_3 value: 78.792 - type: ndcg_at_5 value: 80.528 - type: precision_at_1 value: 71.197 - type: precision_at_10 value: 10.206999999999999 - type: precision_at_100 value: 1.093 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 30.868000000000002 - type: precision_at_5 value: 19.559 - type: recall_at_1 value: 66.02 - type: recall_at_10 value: 92.50699999999999 - type: recall_at_100 value: 96.497 - type: recall_at_1000 value: 97.956 - type: recall_at_3 value: 84.866 - type: recall_at_5 value: 89.16199999999999 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 17.948 - type: map_at_10 value: 29.833 - type: map_at_100 value: 31.487 - type: map_at_1000 value: 31.674000000000003 - type: map_at_3 value: 26.029999999999998 - type: map_at_5 value: 28.038999999999998 - type: mrr_at_1 value: 34.721999999999994 - type: mrr_at_10 value: 44.214999999999996 - type: mrr_at_100 value: 44.994 - type: mrr_at_1000 value: 45.051 - type: mrr_at_3 value: 41.667 - type: mrr_at_5 value: 43.032 - type: ndcg_at_1 value: 34.721999999999994 - type: ndcg_at_10 value: 37.434 - type: ndcg_at_100 value: 43.702000000000005 - type: ndcg_at_1000 value: 46.993 - type: ndcg_at_3 value: 33.56 - type: ndcg_at_5 value: 34.687 - type: precision_at_1 value: 34.721999999999994 - type: precision_at_10 value: 10.401 - type: precision_at_100 value: 1.7049999999999998 - type: precision_at_1000 value: 0.22799999999999998 - type: precision_at_3 value: 22.531000000000002 - type: precision_at_5 value: 16.42 - type: recall_at_1 value: 17.948 - type: recall_at_10 value: 45.062999999999995 - type: recall_at_100 value: 68.191 - type: recall_at_1000 value: 87.954 - type: recall_at_3 value: 31.112000000000002 - type: recall_at_5 value: 36.823 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 36.644 - type: map_at_10 value: 57.658 - type: map_at_100 value: 58.562000000000005 - type: map_at_1000 value: 58.62500000000001 - type: map_at_3 value: 54.022999999999996 - type: map_at_5 value: 56.293000000000006 - type: mrr_at_1 value: 73.288 - type: mrr_at_10 value: 80.51700000000001 - type: mrr_at_100 value: 80.72 - type: mrr_at_1000 value: 80.728 - type: mrr_at_3 value: 79.33200000000001 - type: mrr_at_5 value: 80.085 - type: ndcg_at_1 value: 73.288 - type: ndcg_at_10 value: 66.61 - type: ndcg_at_100 value: 69.723 - type: ndcg_at_1000 value: 70.96000000000001 - type: ndcg_at_3 value: 61.358999999999995 - type: ndcg_at_5 value: 64.277 - type: precision_at_1 value: 73.288 - type: precision_at_10 value: 14.17 - type: precision_at_100 value: 1.659 - type: precision_at_1000 value: 0.182 - type: precision_at_3 value: 39.487 - type: precision_at_5 value: 25.999 - type: recall_at_1 value: 36.644 - type: recall_at_10 value: 70.851 - type: recall_at_100 value: 82.94399999999999 - type: recall_at_1000 value: 91.134 - type: recall_at_3 value: 59.230000000000004 - type: recall_at_5 value: 64.997 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 86.00280000000001 - type: ap value: 80.46302061021223 - type: f1 value: 85.9592921596419 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 22.541 - type: map_at_10 value: 34.625 - type: map_at_100 value: 35.785 - type: map_at_1000 value: 35.831 - type: map_at_3 value: 30.823 - type: map_at_5 value: 32.967999999999996 - type: mrr_at_1 value: 23.180999999999997 - type: mrr_at_10 value: 35.207 - type: mrr_at_100 value: 36.315 - type: mrr_at_1000 value: 36.355 - type: mrr_at_3 value: 31.483 - type: mrr_at_5 value: 33.589999999999996 - type: ndcg_at_1 value: 23.195 - type: ndcg_at_10 value: 41.461 - type: ndcg_at_100 value: 47.032000000000004 - type: ndcg_at_1000 value: 48.199999999999996 - type: ndcg_at_3 value: 33.702 - type: ndcg_at_5 value: 37.522 - type: precision_at_1 value: 23.195 - type: precision_at_10 value: 6.526999999999999 - type: precision_at_100 value: 0.932 - type: precision_at_1000 value: 0.10300000000000001 - type: precision_at_3 value: 14.308000000000002 - type: precision_at_5 value: 10.507 - type: recall_at_1 value: 22.541 - type: recall_at_10 value: 62.524 - type: recall_at_100 value: 88.228 - type: recall_at_1000 value: 97.243 - type: recall_at_3 value: 41.38 - type: recall_at_5 value: 50.55 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.69949840401279 - type: f1 value: 92.54141471311786 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 72.56041951664386 - type: f1 value: 55.88499977508287 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.62071284465365 - type: f1 value: 69.36717546572152 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.35843981170142 - type: f1 value: 76.15496453538884 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.33664956793118 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 27.883839621715524 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.096874986740758 - type: mrr value: 30.97300481932132 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.4 - type: map_at_10 value: 11.852 - type: map_at_100 value: 14.758 - type: map_at_1000 value: 16.134 - type: map_at_3 value: 8.558 - type: map_at_5 value: 10.087 - type: mrr_at_1 value: 44.272 - type: mrr_at_10 value: 52.05800000000001 - type: mrr_at_100 value: 52.689 - type: mrr_at_1000 value: 52.742999999999995 - type: mrr_at_3 value: 50.205999999999996 - type: mrr_at_5 value: 51.367 - type: ndcg_at_1 value: 42.57 - type: ndcg_at_10 value: 32.449 - type: ndcg_at_100 value: 29.596 - type: ndcg_at_1000 value: 38.351 - type: ndcg_at_3 value: 37.044 - type: ndcg_at_5 value: 35.275 - type: precision_at_1 value: 44.272 - type: precision_at_10 value: 23.87 - type: precision_at_100 value: 7.625 - type: precision_at_1000 value: 2.045 - type: precision_at_3 value: 34.365 - type: precision_at_5 value: 30.341 - type: recall_at_1 value: 5.4 - type: recall_at_10 value: 15.943999999999999 - type: recall_at_100 value: 29.805 - type: recall_at_1000 value: 61.695 - type: recall_at_3 value: 9.539 - type: recall_at_5 value: 12.127 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 36.047000000000004 - type: map_at_10 value: 51.6 - type: map_at_100 value: 52.449999999999996 - type: map_at_1000 value: 52.476 - type: map_at_3 value: 47.452 - type: map_at_5 value: 49.964 - type: mrr_at_1 value: 40.382 - type: mrr_at_10 value: 54.273 - type: mrr_at_100 value: 54.859 - type: mrr_at_1000 value: 54.876000000000005 - type: mrr_at_3 value: 51.014 - type: mrr_at_5 value: 52.983999999999995 - type: ndcg_at_1 value: 40.353 - type: ndcg_at_10 value: 59.11300000000001 - type: ndcg_at_100 value: 62.604000000000006 - type: ndcg_at_1000 value: 63.187000000000005 - type: ndcg_at_3 value: 51.513 - type: ndcg_at_5 value: 55.576 - type: precision_at_1 value: 40.353 - type: precision_at_10 value: 9.418 - type: precision_at_100 value: 1.1440000000000001 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 23.078000000000003 - type: precision_at_5 value: 16.250999999999998 - type: recall_at_1 value: 36.047000000000004 - type: recall_at_10 value: 79.22200000000001 - type: recall_at_100 value: 94.23 - type: recall_at_1000 value: 98.51100000000001 - type: recall_at_3 value: 59.678 - type: recall_at_5 value: 68.967 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 68.232 - type: map_at_10 value: 81.674 - type: map_at_100 value: 82.338 - type: map_at_1000 value: 82.36099999999999 - type: map_at_3 value: 78.833 - type: map_at_5 value: 80.58 - type: mrr_at_1 value: 78.64 - type: mrr_at_10 value: 85.164 - type: mrr_at_100 value: 85.317 - type: mrr_at_1000 value: 85.319 - type: mrr_at_3 value: 84.127 - type: mrr_at_5 value: 84.789 - type: ndcg_at_1 value: 78.63 - type: ndcg_at_10 value: 85.711 - type: ndcg_at_100 value: 87.238 - type: ndcg_at_1000 value: 87.444 - type: ndcg_at_3 value: 82.788 - type: ndcg_at_5 value: 84.313 - type: precision_at_1 value: 78.63 - type: precision_at_10 value: 12.977 - type: precision_at_100 value: 1.503 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 36.113 - type: precision_at_5 value: 23.71 - type: recall_at_1 value: 68.232 - type: recall_at_10 value: 93.30199999999999 - type: recall_at_100 value: 98.799 - type: recall_at_1000 value: 99.885 - type: recall_at_3 value: 84.827 - type: recall_at_5 value: 89.188 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 45.71879170816294 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 59.65866311751794 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.218 - type: map_at_10 value: 10.337 - type: map_at_100 value: 12.131 - type: map_at_1000 value: 12.411 - type: map_at_3 value: 7.4270000000000005 - type: map_at_5 value: 8.913 - type: mrr_at_1 value: 20.8 - type: mrr_at_10 value: 30.868000000000002 - type: mrr_at_100 value: 31.903 - type: mrr_at_1000 value: 31.972 - type: mrr_at_3 value: 27.367 - type: mrr_at_5 value: 29.372 - type: ndcg_at_1 value: 20.8 - type: ndcg_at_10 value: 17.765 - type: ndcg_at_100 value: 24.914 - type: ndcg_at_1000 value: 30.206 - type: ndcg_at_3 value: 16.64 - type: ndcg_at_5 value: 14.712 - type: precision_at_1 value: 20.8 - type: precision_at_10 value: 9.24 - type: precision_at_100 value: 1.9560000000000002 - type: precision_at_1000 value: 0.32299999999999995 - type: precision_at_3 value: 15.467 - type: precision_at_5 value: 12.94 - type: recall_at_1 value: 4.218 - type: recall_at_10 value: 18.752 - type: recall_at_100 value: 39.7 - type: recall_at_1000 value: 65.57300000000001 - type: recall_at_3 value: 9.428 - type: recall_at_5 value: 13.133000000000001 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.04338850207233 - type: cos_sim_spearman value: 78.5054651430423 - type: euclidean_pearson value: 80.30739451228612 - type: euclidean_spearman value: 78.48377464299097 - type: manhattan_pearson value: 80.40795049052781 - type: manhattan_spearman value: 78.49506205443114 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 84.11596224442962 - type: cos_sim_spearman value: 76.20997388935461 - type: euclidean_pearson value: 80.56858451349109 - type: euclidean_spearman value: 75.92659183871186 - type: manhattan_pearson value: 80.60246102203844 - type: manhattan_spearman value: 76.03018971432664 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 81.34691640755737 - type: cos_sim_spearman value: 82.4018369631579 - type: euclidean_pearson value: 81.87673092245366 - type: euclidean_spearman value: 82.3671489960678 - type: manhattan_pearson value: 81.88222387719948 - type: manhattan_spearman value: 82.3816590344736 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 81.2836092579524 - type: cos_sim_spearman value: 78.99982781772064 - type: euclidean_pearson value: 80.5184271010527 - type: euclidean_spearman value: 78.89777392101904 - type: manhattan_pearson value: 80.53585705018664 - type: manhattan_spearman value: 78.92898405472994 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.7349907750784 - type: cos_sim_spearman value: 87.7611234446225 - type: euclidean_pearson value: 86.98759326731624 - type: euclidean_spearman value: 87.58321319424618 - type: manhattan_pearson value: 87.03483090370842 - type: manhattan_spearman value: 87.63278333060288 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 81.75873694924825 - type: cos_sim_spearman value: 83.80237999094724 - type: euclidean_pearson value: 83.55023725861537 - type: euclidean_spearman value: 84.12744338577744 - type: manhattan_pearson value: 83.58816983036232 - type: manhattan_spearman value: 84.18520748676501 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.21630882940174 - type: cos_sim_spearman value: 87.72382883437031 - type: euclidean_pearson value: 88.69933350930333 - type: euclidean_spearman value: 88.24660814383081 - type: manhattan_pearson value: 88.77331018833499 - type: manhattan_spearman value: 88.26109989380632 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 61.11854063060489 - type: cos_sim_spearman value: 63.14678634195072 - type: euclidean_pearson value: 61.679090067000864 - type: euclidean_spearman value: 62.28876589509653 - type: manhattan_pearson value: 62.082324165511004 - type: manhattan_spearman value: 62.56030932816679 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 84.00319882832645 - type: cos_sim_spearman value: 85.94529772647257 - type: euclidean_pearson value: 85.6661390122756 - type: euclidean_spearman value: 85.97747815545827 - type: manhattan_pearson value: 85.58422770541893 - type: manhattan_spearman value: 85.9237139181532 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 79.16198731863916 - type: mrr value: 94.25202702163487 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 54.761 - type: map_at_10 value: 64.396 - type: map_at_100 value: 65.07 - type: map_at_1000 value: 65.09899999999999 - type: map_at_3 value: 61.846000000000004 - type: map_at_5 value: 63.284 - type: mrr_at_1 value: 57.667 - type: mrr_at_10 value: 65.83099999999999 - type: mrr_at_100 value: 66.36800000000001 - type: mrr_at_1000 value: 66.39399999999999 - type: mrr_at_3 value: 64.056 - type: mrr_at_5 value: 65.206 - type: ndcg_at_1 value: 57.667 - type: ndcg_at_10 value: 68.854 - type: ndcg_at_100 value: 71.59100000000001 - type: ndcg_at_1000 value: 72.383 - type: ndcg_at_3 value: 64.671 - type: ndcg_at_5 value: 66.796 - type: precision_at_1 value: 57.667 - type: precision_at_10 value: 9.167 - type: precision_at_100 value: 1.053 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 25.444 - type: precision_at_5 value: 16.667 - type: recall_at_1 value: 54.761 - type: recall_at_10 value: 80.9 - type: recall_at_100 value: 92.767 - type: recall_at_1000 value: 99 - type: recall_at_3 value: 69.672 - type: recall_at_5 value: 75.083 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.8079207920792 - type: cos_sim_ap value: 94.88470927617445 - type: cos_sim_f1 value: 90.08179959100204 - type: cos_sim_precision value: 92.15481171548117 - type: cos_sim_recall value: 88.1 - type: dot_accuracy value: 99.58613861386138 - type: dot_ap value: 82.94822578881316 - type: dot_f1 value: 77.33333333333333 - type: dot_precision value: 79.36842105263158 - type: dot_recall value: 75.4 - type: euclidean_accuracy value: 99.8069306930693 - type: euclidean_ap value: 94.81367858031837 - type: euclidean_f1 value: 90.01009081735621 - type: euclidean_precision value: 90.83503054989816 - type: euclidean_recall value: 89.2 - type: manhattan_accuracy value: 99.81188118811882 - type: manhattan_ap value: 94.91405337220161 - type: manhattan_f1 value: 90.2763561924258 - type: manhattan_precision value: 92.45283018867924 - type: manhattan_recall value: 88.2 - type: max_accuracy value: 99.81188118811882 - type: max_ap value: 94.91405337220161 - type: max_f1 value: 90.2763561924258 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 58.511599500053094 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 31.984728147814707 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.93428193939015 - type: mrr value: 50.916557911043206 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.562500894537145 - type: cos_sim_spearman value: 31.162587976726307 - type: dot_pearson value: 22.633662187735762 - type: dot_spearman value: 22.723000282378962 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.219 - type: map_at_10 value: 1.871 - type: map_at_100 value: 10.487 - type: map_at_1000 value: 25.122 - type: map_at_3 value: 0.657 - type: map_at_5 value: 1.0699999999999998 - type: mrr_at_1 value: 84 - type: mrr_at_10 value: 89.567 - type: mrr_at_100 value: 89.748 - type: mrr_at_1000 value: 89.748 - type: mrr_at_3 value: 88.667 - type: mrr_at_5 value: 89.567 - type: ndcg_at_1 value: 80 - type: ndcg_at_10 value: 74.533 - type: ndcg_at_100 value: 55.839000000000006 - type: ndcg_at_1000 value: 49.748 - type: ndcg_at_3 value: 79.53099999999999 - type: ndcg_at_5 value: 78.245 - type: precision_at_1 value: 84 - type: precision_at_10 value: 78.4 - type: precision_at_100 value: 56.99999999999999 - type: precision_at_1000 value: 21.98 - type: precision_at_3 value: 85.333 - type: precision_at_5 value: 84.8 - type: recall_at_1 value: 0.219 - type: recall_at_10 value: 2.02 - type: recall_at_100 value: 13.555 - type: recall_at_1000 value: 46.739999999999995 - type: recall_at_3 value: 0.685 - type: recall_at_5 value: 1.13 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 3.5029999999999997 - type: map_at_10 value: 11.042 - type: map_at_100 value: 16.326999999999998 - type: map_at_1000 value: 17.836 - type: map_at_3 value: 6.174 - type: map_at_5 value: 7.979 - type: mrr_at_1 value: 42.857 - type: mrr_at_10 value: 52.617000000000004 - type: mrr_at_100 value: 53.351000000000006 - type: mrr_at_1000 value: 53.351000000000006 - type: mrr_at_3 value: 46.939 - type: mrr_at_5 value: 50.714000000000006 - type: ndcg_at_1 value: 38.775999999999996 - type: ndcg_at_10 value: 27.125 - type: ndcg_at_100 value: 35.845 - type: ndcg_at_1000 value: 47.377 - type: ndcg_at_3 value: 29.633 - type: ndcg_at_5 value: 28.378999999999998 - type: precision_at_1 value: 42.857 - type: precision_at_10 value: 24.082 - type: precision_at_100 value: 6.877999999999999 - type: precision_at_1000 value: 1.463 - type: precision_at_3 value: 29.932 - type: precision_at_5 value: 28.571 - type: recall_at_1 value: 3.5029999999999997 - type: recall_at_10 value: 17.068 - type: recall_at_100 value: 43.361 - type: recall_at_1000 value: 78.835 - type: recall_at_3 value: 6.821000000000001 - type: recall_at_5 value: 10.357 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.0954 - type: ap value: 14.216844153511959 - type: f1 value: 54.63687418565117 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.46293152235427 - type: f1 value: 61.744177921638645 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 41.12708617788644 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 85.75430649102938 - type: cos_sim_ap value: 73.34252536948081 - type: cos_sim_f1 value: 67.53758935173774 - type: cos_sim_precision value: 63.3672525439408 - type: cos_sim_recall value: 72.29551451187335 - type: dot_accuracy value: 81.71305954580676 - type: dot_ap value: 59.5532209082386 - type: dot_f1 value: 56.18466898954705 - type: dot_precision value: 47.830923248053395 - type: dot_recall value: 68.07387862796834 - type: euclidean_accuracy value: 85.81987244441795 - type: euclidean_ap value: 73.34325409809446 - type: euclidean_f1 value: 67.83451360417443 - type: euclidean_precision value: 64.09955388588871 - type: euclidean_recall value: 72.0316622691293 - type: manhattan_accuracy value: 85.68277999642368 - type: manhattan_ap value: 73.1535450121903 - type: manhattan_f1 value: 67.928237896289 - type: manhattan_precision value: 63.56945722171113 - type: manhattan_recall value: 72.9287598944591 - type: max_accuracy value: 85.81987244441795 - type: max_ap value: 73.34325409809446 - type: max_f1 value: 67.928237896289 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.90441262079403 - type: cos_sim_ap value: 85.79331880741438 - type: cos_sim_f1 value: 78.31563529842548 - type: cos_sim_precision value: 74.6683424102779 - type: cos_sim_recall value: 82.33754234678165 - type: dot_accuracy value: 84.89928978926534 - type: dot_ap value: 75.25819218316 - type: dot_f1 value: 69.88730119720536 - type: dot_precision value: 64.23362374959665 - type: dot_recall value: 76.63227594702803 - type: euclidean_accuracy value: 89.01695967710637 - type: euclidean_ap value: 85.98986606038852 - type: euclidean_f1 value: 78.5277880014722 - type: euclidean_precision value: 75.22211253701876 - type: euclidean_recall value: 82.13735756082538 - type: manhattan_accuracy value: 88.99561454573679 - type: manhattan_ap value: 85.92262421793953 - type: manhattan_f1 value: 78.38866094740769 - type: manhattan_precision value: 76.02373028505282 - type: manhattan_recall value: 80.9054511857099 - type: max_accuracy value: 89.01695967710637 - type: max_ap value: 85.98986606038852 - type: max_f1 value: 78.5277880014722 --- # E5-small-v2 [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022 This model has 12 layers and the embedding size is 384. ## Usage Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ". # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = ['query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."] tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-small-v2') model = AutoModel.from_pretrained('intfloat/e5-small-v2') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # (Optionally) normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Training Details Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf). ## Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## Citation If you find our paper or models helpful, please consider cite as follows: ``` @article{wang2022text, title={Text Embeddings by Weakly-Supervised Contrastive Pre-training}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2212.03533}, year={2022} } ``` ## Limitations This model only works for English texts. Long texts will be truncated to at most 512 tokens. ## Sentence Transformers Below is an example for usage with sentence_transformers. `pip install sentence_transformers~=2.2.2` This is community contributed, and results may vary up to numerical precision. ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('intfloat/e5-small-v2') embeddings = model.encode(input_texts, normalize_embeddings=True) ```
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
Turbo-AI/gte-base-v0__trim_vocab-1024
Turbo-AI
sentence-similarity
[ "sentence-transformers", "safetensors", "new", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:609413", "loss:CachedMultipleNegativesRankingLoss", "custom_code", "arxiv:1908.10084", "arxiv:2101.06983", "base_model:Turbo-AI/gte-multilingual-base__trim_vocab", "base_model:finetune:Turbo-AI/gte-multilingual-base__trim_vocab", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,732
1,732
13
0
--- base_model: Turbo-AI/gte-multilingual-base__trim_vocab library_name: sentence-transformers metrics: - cosine_accuracy@10 - cosine_precision@10 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@10 - dot_accuracy@10 - dot_precision@10 - dot_recall@10 - dot_ndcg@10 - dot_mrr@10 - dot_map@10 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:609413 - loss:CachedMultipleNegativesRankingLoss widget: - source_sentence: Những loại giấy tờ, tài liệu nào có thể dùng để xác định nội dung con dấu, chữ ký và chức danh theo quy định pháp luật hiện nay? sentences: - 'Giấy tờ, tài liệu nêu tại điểm a khoản 2 Điều 12 Nghị định bao gồm: 1. Giấy tờ, tài liệu có con dấu, chữ ký và chức danh chưa được giới thiệu chính thức. 2. Giấy tờ, tài liệu có con dấu, chữ ký và chức danh không thể xác định được trên cơ sở đối chiếu với mẫu con dấu, mẫu chữ ký và chức danh được giới thiệu chính thức hoặc trên cơ sở kết quả xác minh.' - Người đứng đầu cơ quan, tổ chức trong phạm vi nhiệm vụ, quyền hạn của mình có trách nhiệm quản lý về lưu trữ, áp dụng các biện pháp nhằm nâng cao hiệu quả trong việc thu thập, quản lý, bảo quản và sử dụng tài liệu lưu trữ; ban hành quy chế về công tác lưu trữ của cơ quan, tổ chức mình. - '1. Tài liệu hình thành trong quá trình hoạt động của Hội đồng nhân dân, Ủy ban nhân dân, các tổ chức xã hội, tổ chức xã hội - nghề nghiệp của xã, phường, thị trấn được lựa chọn và lưu trữ tại Văn phòng Ủy ban nhân dân xã, phường, thị trấn. Người làm lưu trữ tại Văn phòng Ủy ban nhân dân xã, phường, thị trấn phải có đủ các tiêu chuẩn chuyên môn, nghiệp vụ lưu trữ và được hưởng chế độ, quyền lợi theo quy định của pháp luật. 2. Người làm lưu trữ tại Văn phòng Ủy ban nhân dân xã, phường, thị trấn có nhiệm vụ hướng dẫn việc lập hồ sơ, tiếp nhận hồ sơ, tài liệu, chỉnh lý, thống kê, bảo quản và phục vụ sử dụng tài liệu lưu trữ theo quy định của pháp luật về lưu trữ.' - source_sentence: Giấy tờ, tài liệu có con dấu, chữ ký và chức danh chưa được giới thiệu chính thức gồm những loại nào? sentences: - '1. Việc giới thiệu mẫu con dấu, mẫu chữ ký và chức danh của cơ quan, tổ chức lập, công chứng, chứng thực, chứng nhận giấy tờ, tài liệu theo quy định tại khoản 4 Điều 11 Nghị định được thực hiện như sau: a) Cơ quan, tổ chức có thẩm quyền lập, công chứng, chứng thực giấy tờ, tài liệu theo quy định của pháp luật có trách nhiệm giới thiệu mẫu con dấu, mẫu chữ ký và chức danh của cơ quan, tổ chức. b) Cơ quan, tổ chức có trách nhiệm định kỳ hàng năm rà soát mẫu con dấu, mẫu chữ ký, chức danh của cơ quan, tổ chức và thông báo kết quả rà soát trước ngày 01 tháng 02 của năm tiếp theo. c) Cục Lãnh sự và Sở Ngoại vụ Thành phố Hồ Chí Minh tiếp nhận việc giới thiệu mẫu con dấu, mẫu chữ ký và chức danh của cơ quan, tổ chức Trung ương và cơ quan, tổ chức địa phương. Cơ quan ngoại vụ địa phương tiếp nhận việc giới thiệu mẫu con dấu, mẫu chữ ký và chức danh của cơ quan, tổ chức địa phương và cơ quan, tổ chức Trung ương đặt tại địa phương được gửi tới cơ quan ngoại vụ; chuyển bản gốc văn bản giới thiệu cho Cục Lãnh sự và Sở Ngoại vụ Thành phố Hồ Chí Minh trong thời hạn 05 ngày làm việc, kể từ ngày nhận được giới thiệu, và lưu giữ bản chụp của văn bản này. 2. Cục Lãnh sự và Sở Ngoại vụ Thành phố Hồ Chí Minh có trách nhiệm giới thiệu mẫu con dấu, mẫu chữ ký và chức danh của đơn vị mình cho các Cơ quan đại diện nước ngoài tại Việt Nam và các Cơ quan đại diện Việt Nam ở nước ngoài. 3. Các Cơ quan đại diện Việt Nam ở nước ngoài có trách nhiệm giới thiệu mẫu con dấu, mẫu chữ ký và chức danh của Cơ quan đại diện cho Bộ Ngoại giao hoặc cơ quan có thẩm quyền khác của nước ngoài. 4. Trong trường hợp có sự thay đổi về mẫu con dấu, mẫu chữ ký và chức danh nêu tại khoản 1, khoản 2 và khoản 3 Điều này thì cơ quan liên quan phải giới thiệu mẫu con dấu, mẫu chữ ký và chức danh mới trong thời hạn 20 ngày làm việc, kể từ ngày có sự thay đổi.' - 'Giấy tờ, tài liệu nêu tại điểm a khoản 2 Điều 12 Nghị định bao gồm: 1. Giấy tờ, tài liệu có con dấu, chữ ký và chức danh chưa được giới thiệu chính thức. 2. Giấy tờ, tài liệu có con dấu, chữ ký và chức danh không thể xác định được trên cơ sở đối chiếu với mẫu con dấu, mẫu chữ ký và chức danh được giới thiệu chính thức hoặc trên cơ sở kết quả xác minh.' - 'Thông báo về tình hình chấp hành án của người đang chấp hành án phạt tù Bộ Công an thông báo ngay cho cơ quan có thẩm quyền của nước ngoài khi: 1. Người đang chấp hành án phạt tù được tạm đình chỉ thi hành án phạt tù, giảm thời hạn chấp hành hình phạt tù hoặc đặc xá; 2. Người đang chấp hành án phạt tù đã chấp hành xong án phạt tù; 3. Người đang chấp hành án phạt tù bỏ trốn khỏi nơi giam giữ; 4. Người đang chấp hành án phạt tù chết trước khi chấp hành xong án phạt tù; 5. Phía nước ngoài đề nghị thông báo về tình hình chấp hành án của người đang chấp hành án phạt tù.' - source_sentence: Việc trích yếu nội dung công văn của cơ quan Bộ Giáo dục và Đào tạo được thực hiện theo thể thức nào? sentences: - '1. Thể thức Tên loại văn bản là tên của từng loại văn bản do cơ quan, tổ chức ban hành. Khi ban hành văn bản đều phải ghi tên loại, trừ công văn. Trích yếu nội dung của văn bản là một câu ngắn gọn hoặc một cụm từ phản ánh khái quát nội dung chủ yếu của văn bản. 2. Kỹ thuật trình bày Tên loại và trích yếu nội dung của các loại văn bản có ghi tên loại được trình bày tại ô số 5a; tên loại văn bản (nghị quyết, quyết định, kế hoạch, báo cáo, tờ trình và các loại văn bản khác) được đặt canh giữa bằng chữ in hoa, cỡ chữ 14, kiểu chữ đứng, đậm; trích yếu nội dung văn bản được đặt canh giữa, ngay dưới tên loại văn bản, bằng chữ in thường, cỡ chữ 14, kiểu chữ đứng, đậm; bên dưới trích yếu có đường kẻ ngang, nét liền, có độ dài bằng từ 1/3 đến 1/2 độ dài của dòng chữ và đặt cân đối so với dòng chữ, ví dụ: QUYẾT ĐỊNH Về việc điều động cán bộ Trích yếu nội dung công văn được trình bày tại ô số 5b, sau chữ “V/v” bằng chữ in thường, cỡ chữ từ 12 đến 13, kiểu chữ đứng; được đặt canh giữa dưới số và ký hiệu văn bản, cách dòng 6pt với số và ký hiệu văn bản, ví dụ: Số: 72/VTLTNN-NVĐP V/v kế hoạch kiểm tra công tác văn thư, lưu trữ năm 2009' - 'Nguyên tắc vũ trang canh gác bảo vệ mục tiêu 1. Tuân thủ quy định tại Nghị định số 37/2009/NĐ-CP ngày 23/4/2009 quy định các mục tiêu quan trọng về chính trị, kinh tế, ngoại giao, khoa học - kỹ thuật, văn hóa, xã hội do lực lượng Cảnh sát nhân dân có trách nhiệm vũ trang canh gác bảo vệ và trách nhiệm của cơ quan, tổ chức có liên quan và quy định của Thông tư này. 2. Bảo đảm vũ trang canh gác bảo vệ mục tiêu thường xuyên, liên tục 24/24 giờ. 3. Phối hợp chặt chẽ với các đơn vị trong Công an nhân dân và các cơ quan, tổ chức có liên quan nhằm phòng ngừa, phát hiện, ngăn chặn, xử lý kịp thời mọi hành vi xâm hại mục tiêu.' - 'Thông báo về việc giảm thời hạn chấp hành án phạt tù, đặc xá, đại xá cho người đang chấp hành án phạt tù đã được chuyển giao 1. Ngay sau khi nhận được thông báo về quyết định giảm thời hạn chấp hành án phạt tù, đặc xá, đại xá cho người đang chấp hành án phạt tù đã được chuyển giao, Bộ Công an thông báo ngay cho cơ quan có thẩm quyền của nước ngoài biết để thực hiện việc giảm thời hạn chấp hành án phạt tù, đặc xá, đại xá cho người đang chấp hành án phạt tù. 2. Cơ quan đại diện Việt Nam có trách nhiệm phối hợp với Bộ Công an giám sát việc cơ quan có thẩm quyền của nước tiếp nhận thực hiện quyết định giảm thời hạn chấp hành án phạt tù, đặc xá, đại xá của cơ quan có thẩm quyền của Việt Nam.' - source_sentence: Điều kiện để người bị kết án phạt tù chuyển giao về Việt Nam là gì? sentences: - '1. Nguyên tắc ghi nhận chi phí: a) Doanh nghiệp kinh doanh xổ số chỉ được hạch toán vào chi phí các khoản chi phí phát sinh liên quan đến hoạt động kinh doanh trong năm tài chính; b) Việc xác định chi phí của doanh nghiệp kinh doanh xổ số được thực hiện phù hợp với chuẩn mực kế toán và các văn bản pháp luật về thuế hiện hành. 2. Nguyên tắc quản lý chi phí: a) Doanh nghiệp kinh doanh xổ số phải quản lý chặt chẽ các khoản chi phí để giảm chi phí và giá thành sản phẩm nhằm tăng hiệu quả hoạt động kinh doanh của doanh nghiệp; b) Việc quản lý chi phí của doanh nghiệp kinh doanh xổ số được thực hiện theo quy định của pháp luật về quy chế quản lý tài chính đối với doanh nghiệp do nhà nước sở hữu 100% vốn điều lệ.' - '"Điều 6. Điều kiện tiếp nhận người đang chấp hành án phạt tù Người đang chấp hành án phạt tù ở nước chuyển giao chỉ có thể được tiếp nhận về Việt Nam để tiếp tục chấp hành phần hình phạt tù còn lại khi có đủ các điều kiện sau đây: 1. Là công dân Việt Nam; 2. Có nơi thường trú cuối cùng ở Việt Nam; 3. Hành vi phạm tội mà người đó bị kết án ở nước ngoài cũng cấu thành tội phạm theo quy định của pháp luật Việt Nam; 4. Vào thời điểm tiếp nhận yêu cầu chuyển giao, thời hạn chưa chấp hành án phạt tù phải còn ít nhất là 01 (một) năm; trong trường hợp đặc biệt, thời hạn này còn ít nhất là 06 (sáu) tháng; 5. Bản án đối với người được đề nghị chuyển giao về Việt Nam đã có hiệu lực pháp luật và không còn thủ tục tố tụng nào đối với người đó tại nước chuyển giao; 6. Nước chuyển giao và người bị kết án đều đồng ý với việc chuyển giao. Trong trường hợp người bị kết án phạt tù là người chưa thành niên, người có nhược điểm về thể chất hoặc tâm thần thì phải có sự đồng ý của người đại diện hợp pháp của người đó; 7. Tòa án có thẩm quyền của Việt Nam có quyết định đồng ý tiếp nhận đã có hiệu lực pháp luật."' - '1. Cơ quan, tổ chức thuộc Danh mục cơ quan, tổ chức thuộc nguồn nộp lưu tài liệu có trách nhiệm sau đây: a) Chỉnh lý tài liệu trước khi giao nộp và lập Mục lục hồ sơ, tài liệu nộp lưu; b) Lập Danh mục tài liệu có đóng dấu chỉ các mức độ mật; c) Giao nộp tài liệu và công cụ tra cứu vào Lưu trữ lịch sử. 2. Lưu trữ lịch sử có trách nhiệm tổ chức tiếp nhận hồ sơ, tài liệu và lập Biên bản giao nhận hồ sơ, tài liệu. 3. Mục lục hồ sơ, tài liệu nộp lưu và Biên bản giao nhận hồ sơ, tài liệu được lập thành 03 bản; cơ quan, tổ chức giao nộp hồ sơ, tài liệu giữ 01 bản, Lưu trữ lịch sử giữ 02 bản và được lưu trữ vĩnh viễn tại cơ quan, tổ chức, Lưu trữ lịch sử.' - source_sentence: Trách nhiệm của nhân viên quản lý chất lượng tại phòng xét nghiệm trong quản lý chất lượng bệnh viện đa khoa được quy định ra sao? sentences: - 'Cơ quan, tổ chức chia, tách, sáp nhập, giải thể; tổ chức kinh tế là doanh nghiệp nhà nước chia, tách, sáp nhập, giải thể, chuyển đổi hình thức sở hữu hoặc phá sản thì người đứng đầu cơ quan, tổ chức, doanh nghiệp phải tổ chức quản lý và giao nộp tài liệu theo quy định sau đây: 1. Tài liệu hình thành trong quá trình hoạt động của cơ quan, tổ chức nào phải được chỉnh lý, thống kê và bảo quản theo phông lưu trữ của cơ quan, tổ chức đó; 2. Khi cơ quan, tổ chức có quyết định chia, tách, sáp nhập, giải thể; doanh nghiệp có quyết định chia, tách, sáp nhập, giải thể, chuyển đổi hình thức sở hữu hoặc phá sản thì tất cả các hồ sơ, tài liệu đã giải quyết xong của các đơn vị, cá nhân trong cơ quan, tổ chức, doanh nghiệp phải được giao nộp vào Lưu trữ cơ quan để tiến hành chỉnh lý tài liệu theo quy định. 3. Tài liệu lưu trữ sau khi được chỉnh lý được quản lý như sau: a) Tài liệu lưu trữ của cơ quan, tổ chức, doanh nghiệp thuộc nguồn nộp lưu tài liệu vào Lưu trữ lịch sử được giao nộp vào Lưu trữ lịch sử có thẩm quyền; b) Tài liệu lưu trữ của cơ quan, tổ chức, doanh nghiệp không thuộc nguồn nộp lưu vào Lưu trữ lịch sử được quản lý tại Lưu trữ cơ quan của cơ quan, tổ chức, doanh nghiệp mới tiếp nhận trụ sở cũ; trường hợp cơ quan, tổ chức giải thể, doanh nghiệp giải thể, phá sản hoặc không có cơ quan, tổ chức, doanh nghiệp tiếp nhận trụ sở cũ hoặc có nhiều cơ quan, tổ chức, doanh nghiệp mới cùng tiếp nhận trụ sở cũ thì tài liệu lưu trữ của cơ quan, tổ chức, doanh nghiệp được giao nộp vào Lưu trữ cơ quan theo quyết định của cơ quan, tổ chức cấp trên trực tiếp hoặc cơ quan, tổ chức có thẩm quyền.' - 'Trách nhiệm của nhân viên quản lý chất lượng tại phòng xét nghiệm 1. Tổng hợp, tham mưu cho trưởng phòng xét nghiệm trong triển khai các nội dung về quản lý chất lượng xét nghiệm. 2. Xây dựng kế hoạch và nội dung quản lý chất lượng xét nghiệm của phòng, trình lãnh đạo phòng xét nghiệm xem xét, quyết định để trình lãnh đạo cơ sở khám bệnh, chữa bệnh xem xét, phê duyệt. 3. Tổ chức thực hiện chương trình nội kiểm và tham gia chương trình ngoại kiểm để theo dõi, giám sát, đánh giá chất lượng công tác xét nghiệm và phát hiện, đề xuất giải pháp can thiệp kịp thời nhằm quản lý những trường hợp sai sót, có nguy cơ sai sót trong các quy trình xét nghiệm. 4. Thu thập, tổng hợp, phân tích dữ liệu, quản lý và bảo mật thông tin liên quan đến hoạt động phòng xét nghiệm. 5. Phối hợp và hỗ trợ các khoa hoặc phòng liên quan khác trong việc triển khai quản lý chất lượng xét nghiệm. 6. Tổng kết, báo cáo định kỳ hằng tháng, quý và năm về hoạt động và kết quả quản lý chất lượng xét nghiệm với trưởng phòng xét nghiệm, trưởng phòng (hoặc tổ trưởng) quản lý chất lượng bệnh viện và lãnh đạo cơ sở khám bệnh, chữa bệnh. 7. Là đầu mối tham mưu để thực hiện các công việc liên quan với các tổ chức đánh giá, cấp chứng nhận phòng xét nghiệm đạt tiêu chuẩn quốc gia hoặc tiêu chuẩn quốc tế.' - '1. Phòng tiếp khách đối ngoại tại trụ sở cơ quan đại diện, văn phòng trực thuộc hay nhà riêng có quốc kỳ Việt Nam và treo ảnh hoặc đặt tượng Chủ tịch Hồ Chí Minh phía sau nơi ngồi tiếp khách của người chủ trì tiếp khách. 2. Treo ảnh hoặc đặt tượng Chủ tịch Hồ Chí Minh ở chính giữa, quốc kỳ Việt Nam treo trên cột cờ đặt phía bên trái ảnh hoặc tượng nếu nhìn từ phía đối diện. Đỉnh của ảnh hoặc tượng Chủ tịch Hồ Chí Minh không cao hơn đỉnh ngôi sao vàng trong quốc kỳ Việt Nam khi treo trên cột cờ. (Xem hình 5 trong Phụ lục).' model-index: - name: SentenceTransformer based on Turbo-AI/gte-multilingual-base__trim_vocab results: - task: type: information-retrieval name: Information Retrieval dataset: name: Unknown type: unknown metrics: - type: cosine_accuracy@10 value: 0.9496981891348089 name: Cosine Accuracy@10 - type: cosine_precision@10 value: 0.10040241448692154 name: Cosine Precision@10 - type: cosine_recall@10 value: 0.9305835010060363 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.7428373133092846 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.6908386828909965 name: Cosine Mrr@10 - type: cosine_map@10 value: 0.6761721120373031 name: Cosine Map@10 - type: dot_accuracy@10 value: 0.9496981891348089 name: Dot Accuracy@10 - type: dot_precision@10 value: 0.10040241448692154 name: Dot Precision@10 - type: dot_recall@10 value: 0.9309188464118041 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.7149323172808646 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.6546189837437317 name: Dot Mrr@10 - type: dot_map@10 value: 0.639206269362205 name: Dot Map@10 --- # SentenceTransformer based on Turbo-AI/gte-multilingual-base__trim_vocab This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Turbo-AI/gte-multilingual-base__trim_vocab](https://huggingface.co/Turbo-AI/gte-multilingual-base__trim_vocab). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Turbo-AI/gte-multilingual-base__trim_vocab](https://huggingface.co/Turbo-AI/gte-multilingual-base__trim_vocab) <!-- at revision b49c5d1f2703a4f4725bcd54c2307348ae6b2381 --> - **Maximum Sequence Length:** 1024 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: NewModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Turbo-AI/gte-base-v0__trim_vocab-1024") # Run inference sentences = [ 'Trách nhiệm của nhân viên quản lý chất lượng tại phòng xét nghiệm trong quản lý chất lượng bệnh viện đa khoa được quy định ra sao?', 'Trách nhiệm của nhân viên quản lý chất lượng tại phòng xét nghiệm\n1. Tổng hợp, tham mưu cho trưởng phòng xét nghiệm trong triển khai các nội dung về quản lý chất lượng xét nghiệm.\n2. Xây dựng kế hoạch và nội dung quản lý chất lượng xét nghiệm của phòng, trình lãnh đạo phòng xét nghiệm xem xét, quyết định để trình lãnh đạo cơ sở khám bệnh, chữa bệnh xem xét, phê duyệt.\n3. Tổ chức thực hiện chương trình nội kiểm và tham gia chương trình ngoại kiểm để theo dõi, giám sát, đánh giá chất lượng công tác xét nghiệm và phát hiện, đề xuất giải pháp can thiệp kịp thời nhằm quản lý những trường hợp sai sót, có nguy cơ sai sót trong các quy trình xét nghiệm.\n4. Thu thập, tổng hợp, phân tích dữ liệu, quản lý và bảo mật thông tin liên quan đến hoạt động phòng xét nghiệm.\n5. Phối hợp và hỗ trợ các khoa hoặc phòng liên quan khác trong việc triển khai quản lý chất lượng xét nghiệm.\n6. Tổng kết, báo cáo định kỳ hằng tháng, quý và năm về hoạt động và kết quả quản lý chất lượng xét nghiệm với trưởng phòng xét nghiệm, trưởng phòng (hoặc tổ trưởng) quản lý chất lượng bệnh viện và lãnh đạo cơ sở khám bệnh, chữa bệnh.\n7. Là đầu mối tham mưu để thực hiện các công việc liên quan với các tổ chức đánh giá, cấp chứng nhận phòng xét nghiệm đạt tiêu chuẩn quốc gia hoặc tiêu chuẩn quốc tế.', '1. Phòng tiếp khách đối ngoại tại trụ sở cơ quan đại diện, văn phòng trực thuộc hay nhà riêng có quốc kỳ Việt Nam và treo ảnh hoặc đặt tượng Chủ tịch Hồ Chí Minh phía sau nơi ngồi tiếp khách của người chủ trì tiếp khách.\n2. Treo ảnh hoặc đặt tượng Chủ tịch Hồ Chí Minh ở chính giữa, quốc kỳ Việt Nam treo trên cột cờ đặt phía bên trái ảnh hoặc tượng nếu nhìn từ phía đối diện. Đỉnh của ảnh hoặc tượng Chủ tịch Hồ Chí Minh không cao hơn đỉnh ngôi sao vàng trong quốc kỳ Việt Nam khi treo trên cột cờ. (Xem hình 5 trong Phụ lục).', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@10 | 0.9497 | | cosine_precision@10 | 0.1004 | | cosine_recall@10 | 0.9306 | | cosine_ndcg@10 | 0.7428 | | cosine_mrr@10 | 0.6908 | | **cosine_map@10** | **0.6762** | | dot_accuracy@10 | 0.9497 | | dot_precision@10 | 0.1004 | | dot_recall@10 | 0.9309 | | dot_ndcg@10 | 0.7149 | | dot_mrr@10 | 0.6546 | | dot_map@10 | 0.6392 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 609,413 training samples * Columns: <code>anchor</code> and <code>positive</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | |:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 25.66 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 26 tokens</li><li>mean: 265.94 tokens</li><li>max: 1024 tokens</li></ul> | * Samples: | anchor | positive | |:---------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Việc tuần tra, canh gác bảo vệ đê Điều trong mùa lũ được thực hiện như thế nào?</code> | <code>Thông tư này hướng dẫn tuần tra, canh gác bảo vệ đê Điều trong mùa lũ đối với các tuyến đê sông được phân loại, phân cấp theo quy định tại Điều 4 của Luật Đê Điều.</code> | | <code>Cách thức bảo vệ tuyến đê sông trong mùa lũ được quy định như thế nào?</code> | <code>Thông tư này hướng dẫn tuần tra, canh gác bảo vệ đê Điều trong mùa lũ đối với các tuyến đê sông được phân loại, phân cấp theo quy định tại Điều 4 của Luật Đê Điều.</code> | | <code>Các tuyến đê sông được phân loại, phân cấp thì được bảo vệ ra sao?</code> | <code>Thông tư này hướng dẫn tuần tra, canh gác bảo vệ đê Điều trong mùa lũ đối với các tuyến đê sông được phân loại, phân cấp theo quy định tại Điều 4 của Luật Đê Điều.</code> | * Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 4096 - `per_device_eval_batch_size`: 4096 - `num_train_epochs`: 5 - `warmup_ratio`: 0.05 - `bf16`: True - `load_best_model_at_end`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 4096 - `per_device_eval_batch_size`: 4096 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.05 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | cosine_map@10 | |:------:|:----:|:-------------:|:-------------:| | 0.0067 | 1 | 7.1703 | - | | 0.0134 | 2 | 7.1798 | - | | 0.0201 | 3 | 7.1921 | - | | 0.0268 | 4 | 7.1764 | - | | 0.0336 | 5 | 7.1942 | - | | 0.0403 | 6 | 7.1712 | - | | 0.0470 | 7 | 7.0703 | - | | 0.0537 | 8 | 7.0478 | - | | 0.0604 | 9 | 6.8367 | - | | 0.0671 | 10 | 6.716 | - | | 0.0738 | 11 | 6.6603 | - | | 0.0805 | 12 | 5.8811 | - | | 0.0872 | 13 | 5.7548 | - | | 0.0940 | 14 | 5.4862 | - | | 0.1007 | 15 | 4.9705 | - | | 0.1074 | 16 | 4.5047 | - | | 0.1141 | 17 | 4.3474 | - | | 0.1208 | 18 | 4.1269 | - | | 0.1275 | 19 | 3.8795 | - | | 0.1342 | 20 | 3.7357 | - | | 0.1409 | 21 | 3.2726 | - | | 0.1477 | 22 | 3.107 | - | | 0.1544 | 23 | 2.9576 | - | | 0.1611 | 24 | 2.7544 | - | | 0.1678 | 25 | 2.5584 | - | | 0.1745 | 26 | 2.4129 | - | | 0.1812 | 27 | 2.2592 | - | | 0.1879 | 28 | 2.1605 | - | | 0.1946 | 29 | 1.9635 | - | | 0.2013 | 30 | 1.8905 | - | | 0.2081 | 31 | 1.7623 | - | | 0.2148 | 32 | 1.6536 | - | | 0.2215 | 33 | 1.5194 | - | | 0.2282 | 34 | 1.2924 | - | | 0.2349 | 35 | 1.1131 | - | | 0.2416 | 36 | 0.9684 | - | | 0.2483 | 37 | 0.8815 | - | | 0.2550 | 38 | 0.8135 | - | | 0.2617 | 39 | 0.7569 | - | | 0.2685 | 40 | 0.7257 | - | | 0.2752 | 41 | 0.6537 | - | | 0.2819 | 42 | 0.6096 | - | | 0.2886 | 43 | 0.5817 | - | | 0.2953 | 44 | 0.5965 | - | | 0.3020 | 45 | 0.5602 | - | | 0.3087 | 46 | 0.5287 | - | | 0.3154 | 47 | 0.4964 | - | | 0.3221 | 48 | 0.4858 | - | | 0.3289 | 49 | 0.4805 | - | | 0.3356 | 50 | 0.477 | - | | 0.3423 | 51 | 0.4686 | - | | 0.3490 | 52 | 0.432 | - | | 0.3557 | 53 | 0.4303 | - | | 0.3624 | 54 | 0.434 | - | | 0.3691 | 55 | 0.423 | - | | 0.3758 | 56 | 0.4075 | - | | 0.3826 | 57 | 0.4019 | - | | 0.3893 | 58 | 0.3886 | - | | 0.3960 | 59 | 0.3725 | - | | 0.4027 | 60 | 0.3675 | - | | 0.4094 | 61 | 0.3733 | - | | 0.4161 | 62 | 0.3774 | - | | 0.4228 | 63 | 0.3909 | - | | 0.4295 | 64 | 0.3721 | - | | 0.4362 | 65 | 0.3566 | - | | 0.4430 | 66 | 0.3409 | - | | 0.4497 | 67 | 0.362 | - | | 0.4564 | 68 | 0.3613 | - | | 0.4631 | 69 | 0.354 | - | | 0.4698 | 70 | 0.3379 | - | | 0.4765 | 71 | 0.3525 | - | | 0.4832 | 72 | 0.3225 | - | | 0.4899 | 73 | 0.3148 | - | | 0.4966 | 74 | 0.4755 | - | | 0.5034 | 75 | 0.5973 | - | | 0.5101 | 76 | 0.6487 | - | | 0.5168 | 77 | 0.6887 | - | | 0.5235 | 78 | 0.8575 | - | | 0.5302 | 79 | 0.8181 | - | | 0.5369 | 80 | 0.8791 | - | | 0.5436 | 81 | 0.8026 | - | | 0.5503 | 82 | 0.7696 | - | | 0.5570 | 83 | 0.7278 | - | | 0.5638 | 84 | 0.6707 | - | | 0.5705 | 85 | 0.632 | - | | 0.5772 | 86 | 0.6457 | - | | 0.5839 | 87 | 0.7231 | - | | 0.5906 | 88 | 0.6762 | - | | 0.5973 | 89 | 0.6605 | - | | 0.6040 | 90 | 0.639 | - | | 0.6107 | 91 | 0.7126 | - | | 0.6174 | 92 | 0.6703 | - | | 0.6242 | 93 | 0.6116 | - | | 0.6309 | 94 | 0.6143 | - | | 0.6376 | 95 | 0.6082 | - | | 0.6443 | 96 | 0.6074 | - | | 0.6510 | 97 | 0.6499 | - | | 0.6577 | 98 | 0.7679 | - | | 0.6644 | 99 | 0.676 | - | | 0.6711 | 100 | 0.6603 | 0.6409 | | 0.6779 | 101 | 0.6438 | - | | 0.6846 | 102 | 0.6479 | - | | 0.6913 | 103 | 0.5959 | - | | 0.6980 | 104 | 0.6173 | - | | 0.7047 | 105 | 0.602 | - | | 0.7114 | 106 | 0.5743 | - | | 0.7181 | 107 | 0.5942 | - | | 0.7248 | 108 | 0.5682 | - | | 0.7315 | 109 | 0.5648 | - | | 0.7383 | 110 | 0.5653 | - | | 0.7450 | 111 | 0.6396 | - | | 0.7517 | 112 | 0.7634 | - | | 0.7584 | 113 | 0.6744 | - | | 0.7651 | 114 | 0.6411 | - | | 0.7718 | 115 | 0.5934 | - | | 0.7785 | 116 | 0.5907 | - | | 0.7852 | 117 | 0.5567 | - | | 0.7919 | 118 | 0.5274 | - | | 0.7987 | 119 | 0.6453 | - | | 0.8054 | 120 | 0.6115 | - | | 0.8121 | 121 | 0.5959 | - | | 0.8188 | 122 | 0.5694 | - | | 0.8255 | 123 | 0.5554 | - | | 0.8322 | 124 | 0.5419 | - | | 0.8389 | 125 | 0.591 | - | | 0.8456 | 126 | 0.589 | - | | 0.8523 | 127 | 0.5484 | - | | 0.8591 | 128 | 0.5447 | - | | 0.8658 | 129 | 0.6312 | - | | 0.8725 | 130 | 0.6086 | - | | 0.8792 | 131 | 0.5948 | - | | 0.8859 | 132 | 0.5483 | - | | 0.8926 | 133 | 0.5192 | - | | 0.8993 | 134 | 0.5273 | - | | 0.9060 | 135 | 0.6629 | - | | 0.9128 | 136 | 0.6038 | - | | 0.9195 | 137 | 0.5433 | - | | 0.9262 | 138 | 0.5449 | - | | 0.9329 | 139 | 0.5415 | - | | 0.9396 | 140 | 0.566 | - | | 0.9463 | 141 | 0.5391 | - | | 0.9530 | 142 | 0.558 | - | | 0.9597 | 143 | 0.6029 | - | | 0.9664 | 144 | 0.6619 | - | | 0.9732 | 145 | 0.597 | - | | 0.9799 | 146 | 0.5931 | - | | 0.9866 | 147 | 0.5943 | - | | 0.9933 | 148 | 0.5677 | - | | 1.0 | 149 | 0.4389 | - | | 1.0067 | 150 | 0.2988 | - | | 1.0134 | 151 | 0.1699 | - | | 1.0201 | 152 | 0.0229 | - | | 1.0067 | 153 | 0.5693 | - | | 1.0134 | 154 | 0.6031 | - | | 1.0201 | 155 | 0.5859 | - | | 1.0268 | 156 | 0.5526 | - | | 1.0336 | 157 | 0.5625 | - | | 1.0403 | 158 | 0.5834 | - | | 1.0470 | 159 | 0.5425 | - | | 1.0537 | 160 | 0.5344 | - | | 1.0604 | 161 | 0.5476 | - | | 1.0671 | 162 | 0.5709 | - | | 1.0738 | 163 | 0.5811 | - | | 1.0805 | 164 | 0.5739 | - | | 1.0872 | 165 | 0.5466 | - | | 1.0940 | 166 | 0.564 | - | | 1.1007 | 167 | 0.5211 | - | | 1.1074 | 168 | 0.5349 | - | | 1.1141 | 169 | 0.5007 | - | | 1.1208 | 170 | 0.5054 | - | | 1.1275 | 171 | 0.4804 | - | | 1.1342 | 172 | 0.5091 | - | | 1.1409 | 173 | 0.5141 | - | | 1.1477 | 174 | 0.5154 | - | | 1.1544 | 175 | 0.4973 | - | | 1.1611 | 176 | 0.4771 | - | | 1.1678 | 177 | 0.4817 | - | | 1.1745 | 178 | 0.5269 | - | | 1.1812 | 179 | 0.5113 | - | | 1.1879 | 180 | 0.511 | - | | 1.1946 | 181 | 0.4819 | - | | 1.2013 | 182 | 0.5247 | - | | 1.2081 | 183 | 0.5168 | - | | 1.2148 | 184 | 0.5456 | - | | 1.2215 | 185 | 0.5288 | - | | 1.2282 | 186 | 0.4394 | - | | 1.2349 | 187 | 0.4011 | - | | 1.2416 | 188 | 0.3307 | - | | 1.2483 | 189 | 0.3293 | - | | 1.2550 | 190 | 0.3184 | - | | 1.2617 | 191 | 0.3202 | - | | 1.2685 | 192 | 0.3229 | - | | 1.2752 | 193 | 0.3038 | - | | 1.2819 | 194 | 0.2949 | - | | 1.2886 | 195 | 0.2887 | - | | 1.2953 | 196 | 0.297 | - | | 1.3020 | 197 | 0.2925 | - | | 1.3087 | 198 | 0.2819 | - | | 1.3154 | 199 | 0.2689 | - | | 1.3221 | 200 | 0.2711 | 0.6600 | | 1.3289 | 201 | 0.2725 | - | | 1.3356 | 202 | 0.2753 | - | | 1.3423 | 203 | 0.2686 | - | | 1.3490 | 204 | 0.2549 | - | | 1.3557 | 205 | 0.255 | - | | 1.3624 | 206 | 0.2586 | - | | 1.3691 | 207 | 0.2499 | - | | 1.3758 | 208 | 0.2481 | - | | 1.3826 | 209 | 0.2536 | - | | 1.3893 | 210 | 0.2407 | - | | 1.3960 | 211 | 0.2312 | - | | 1.4027 | 212 | 0.2264 | - | | 1.4094 | 213 | 0.2355 | - | | 1.4161 | 214 | 0.2501 | - | | 1.4228 | 215 | 0.2509 | - | | 1.4295 | 216 | 0.2428 | - | | 1.4362 | 217 | 0.2358 | - | | 1.4430 | 218 | 0.2266 | - | | 1.4497 | 219 | 0.2455 | - | | 1.4564 | 220 | 0.2431 | - | | 1.4631 | 221 | 0.2445 | - | | 1.4698 | 222 | 0.2349 | - | | 1.4765 | 223 | 0.2451 | - | | 1.4832 | 224 | 0.2225 | - | | 1.4899 | 225 | 0.2207 | - | | 1.4966 | 226 | 0.321 | - | | 1.5034 | 227 | 0.4106 | - | | 1.5101 | 228 | 0.442 | - | | 1.5168 | 229 | 0.4754 | - | | 1.5235 | 230 | 0.5885 | - | | 1.5302 | 231 | 0.5739 | - | | 1.5369 | 232 | 0.6174 | - | | 1.5436 | 233 | 0.5669 | - | | 1.5503 | 234 | 0.5445 | - | | 1.5570 | 235 | 0.5238 | - | | 1.5638 | 236 | 0.4801 | - | | 1.5705 | 237 | 0.4585 | - | | 1.5772 | 238 | 0.4656 | - | | 1.5839 | 239 | 0.5197 | - | | 1.5906 | 240 | 0.4985 | - | | 1.5973 | 241 | 0.4895 | - | | 1.6040 | 242 | 0.4842 | - | | 1.6107 | 243 | 0.5392 | - | | 1.6174 | 244 | 0.5039 | - | | 1.6242 | 245 | 0.4607 | - | | 1.6309 | 246 | 0.4645 | - | | 1.6376 | 247 | 0.4642 | - | | 1.6443 | 248 | 0.4603 | - | | 1.6510 | 249 | 0.4968 | - | | 1.6577 | 250 | 0.5946 | - | | 1.6644 | 251 | 0.5178 | - | | 1.6711 | 252 | 0.5016 | - | | 1.6779 | 253 | 0.4997 | - | | 1.6846 | 254 | 0.5071 | - | | 1.6913 | 255 | 0.4633 | - | | 1.6980 | 256 | 0.4814 | - | | 1.7047 | 257 | 0.4717 | - | | 1.7114 | 258 | 0.4541 | - | | 1.7181 | 259 | 0.4641 | - | | 1.7248 | 260 | 0.4557 | - | | 1.7315 | 261 | 0.4479 | - | | 1.7383 | 262 | 0.4514 | - | | 1.7450 | 263 | 0.5193 | - | | 1.7517 | 264 | 0.6282 | - | | 1.7584 | 265 | 0.5518 | - | | 1.7651 | 266 | 0.5105 | - | | 1.7718 | 267 | 0.4772 | - | | 1.7785 | 268 | 0.4799 | - | | 1.7852 | 269 | 0.4534 | - | | 1.7919 | 270 | 0.4318 | - | | 1.7987 | 271 | 0.5261 | - | | 1.8054 | 272 | 0.4988 | - | | 1.8121 | 273 | 0.4867 | - | | 1.8188 | 274 | 0.4587 | - | | 1.8255 | 275 | 0.449 | - | | 1.8322 | 276 | 0.4424 | - | | 1.8389 | 277 | 0.4824 | - | | 1.8456 | 278 | 0.4786 | - | | 1.8523 | 279 | 0.4554 | - | | 1.8591 | 280 | 0.4463 | - | | 1.8658 | 281 | 0.5171 | - | | 1.8725 | 282 | 0.5004 | - | | 1.8792 | 283 | 0.4993 | - | | 1.8859 | 284 | 0.4595 | - | | 1.8926 | 285 | 0.4356 | - | | 1.8993 | 286 | 0.4293 | - | | 1.9060 | 287 | 0.5566 | - | | 1.9128 | 288 | 0.509 | - | | 1.9195 | 289 | 0.4572 | - | | 1.9262 | 290 | 0.4519 | - | | 1.9329 | 291 | 0.4471 | - | | 1.9396 | 292 | 0.4804 | - | | 1.9463 | 293 | 0.4505 | - | | 1.9530 | 294 | 0.4779 | - | | 1.9597 | 295 | 0.5127 | - | | 1.9664 | 296 | 0.5568 | - | | 1.9732 | 297 | 0.5047 | - | | 1.9799 | 298 | 0.5021 | - | | 1.9866 | 299 | 0.5035 | - | | 1.9933 | 300 | 0.4793 | 0.6563 | | 2.0 | 301 | 0.3424 | - | | 2.0067 | 302 | 0.1931 | - | | 2.0134 | 303 | 0.0681 | - | | 2.0201 | 304 | 0.0015 | - | | 2.0067 | 305 | 0.4829 | - | | 2.0134 | 306 | 0.5198 | - | | 2.0201 | 307 | 0.4872 | - | | 2.0268 | 308 | 0.4593 | - | | 2.0336 | 309 | 0.4781 | - | | 2.0403 | 310 | 0.5025 | - | | 2.0470 | 311 | 0.4695 | - | | 2.0537 | 312 | 0.4529 | - | | 2.0604 | 313 | 0.468 | - | | 2.0671 | 314 | 0.4844 | - | | 2.0738 | 315 | 0.498 | - | | 2.0805 | 316 | 0.4933 | - | | 2.0872 | 317 | 0.4663 | - | | 2.0940 | 318 | 0.4913 | - | | 2.1007 | 319 | 0.4582 | - | | 2.1074 | 320 | 0.4652 | - | | 2.1141 | 321 | 0.4318 | - | | 2.1208 | 322 | 0.4375 | - | | 2.1275 | 323 | 0.4237 | - | | 2.1342 | 324 | 0.4435 | - | | 2.1409 | 325 | 0.4428 | - | | 2.1477 | 326 | 0.4469 | - | | 2.1544 | 327 | 0.4416 | - | | 2.1611 | 328 | 0.4162 | - | | 2.1678 | 329 | 0.419 | - | | 2.1745 | 330 | 0.4589 | - | | 2.1812 | 331 | 0.4456 | - | | 2.1879 | 332 | 0.4604 | - | | 2.1946 | 333 | 0.4231 | - | | 2.2013 | 334 | 0.4636 | - | | 2.2081 | 335 | 0.4625 | - | | 2.2148 | 336 | 0.4807 | - | | 2.2215 | 337 | 0.4663 | - | | 2.2282 | 338 | 0.3872 | - | | 2.2349 | 339 | 0.3506 | - | | 2.2416 | 340 | 0.2902 | - | | 2.2483 | 341 | 0.2842 | - | | 2.2550 | 342 | 0.2749 | - | | 2.2617 | 343 | 0.2765 | - | | 2.2685 | 344 | 0.283 | - | | 2.2752 | 345 | 0.2649 | - | | 2.2819 | 346 | 0.2552 | - | | 2.2886 | 347 | 0.2548 | - | | 2.2953 | 348 | 0.262 | - | | 2.3020 | 349 | 0.255 | - | | 2.3087 | 350 | 0.2479 | - | | 2.3154 | 351 | 0.2348 | - | | 2.3221 | 352 | 0.2354 | - | | 2.3289 | 353 | 0.2397 | - | | 2.3356 | 354 | 0.2427 | - | | 2.3423 | 355 | 0.2398 | - | | 2.3490 | 356 | 0.2235 | - | | 2.3557 | 357 | 0.222 | - | | 2.3624 | 358 | 0.2289 | - | | 2.3691 | 359 | 0.2201 | - | | 2.3758 | 360 | 0.2203 | - | | 2.3826 | 361 | 0.2236 | - | | 2.3893 | 362 | 0.2168 | - | | 2.3960 | 363 | 0.202 | - | | 2.4027 | 364 | 0.2021 | - | | 2.4094 | 365 | 0.2106 | - | | 2.4161 | 366 | 0.2213 | - | | 2.4228 | 367 | 0.2254 | - | | 2.4295 | 368 | 0.2189 | - | | 2.4362 | 369 | 0.2097 | - | | 2.4430 | 370 | 0.2001 | - | | 2.4497 | 371 | 0.2174 | - | | 2.4564 | 372 | 0.2135 | - | | 2.4631 | 373 | 0.2175 | - | | 2.4698 | 374 | 0.2085 | - | | 2.4765 | 375 | 0.2191 | - | | 2.4832 | 376 | 0.1964 | - | | 2.4899 | 377 | 0.1948 | - | | 2.4966 | 378 | 0.2866 | - | | 2.5034 | 379 | 0.3712 | - | | 2.5101 | 380 | 0.3974 | - | | 2.5168 | 381 | 0.4217 | - | | 2.5235 | 382 | 0.5219 | - | | 2.5302 | 383 | 0.5122 | - | | 2.5369 | 384 | 0.5458 | - | | 2.5436 | 385 | 0.5018 | - | | 2.5503 | 386 | 0.4838 | - | | 2.5570 | 387 | 0.4573 | - | | 2.5638 | 388 | 0.4314 | - | | 2.5705 | 389 | 0.4078 | - | | 2.5772 | 390 | 0.4151 | - | | 2.5839 | 391 | 0.467 | - | | 2.5906 | 392 | 0.4502 | - | | 2.5973 | 393 | 0.4405 | - | | 2.6040 | 394 | 0.4345 | - | | 2.6107 | 395 | 0.4825 | - | | 2.6174 | 396 | 0.4586 | - | | 2.6242 | 397 | 0.4178 | - | | 2.6309 | 398 | 0.4212 | - | | 2.6376 | 399 | 0.4165 | - | | 2.6443 | 400 | 0.4162 | 0.6601 | | 2.6510 | 401 | 0.4444 | - | | 2.6577 | 402 | 0.5377 | - | | 2.6644 | 403 | 0.4701 | - | | 2.6711 | 404 | 0.4506 | - | | 2.6779 | 405 | 0.4495 | - | | 2.6846 | 406 | 0.4515 | - | | 2.6913 | 407 | 0.4148 | - | | 2.6980 | 408 | 0.4318 | - | | 2.7047 | 409 | 0.4246 | - | | 2.7114 | 410 | 0.4115 | - | | 2.7181 | 411 | 0.4142 | - | | 2.7248 | 412 | 0.41 | - | | 2.7315 | 413 | 0.4054 | - | | 2.7383 | 414 | 0.4035 | - | | 2.7450 | 415 | 0.4666 | - | | 2.7517 | 416 | 0.5645 | - | | 2.7584 | 417 | 0.4962 | - | | 2.7651 | 418 | 0.4597 | - | | 2.7718 | 419 | 0.4317 | - | | 2.7785 | 420 | 0.4325 | - | | 2.7852 | 421 | 0.4165 | - | | 2.7919 | 422 | 0.3908 | - | | 2.7987 | 423 | 0.4786 | - | | 2.8054 | 424 | 0.4571 | - | | 2.8121 | 425 | 0.4388 | - | | 2.8188 | 426 | 0.4182 | - | | 2.8255 | 427 | 0.4059 | - | | 2.8322 | 428 | 0.3994 | - | | 2.8389 | 429 | 0.4332 | - | | 2.8456 | 430 | 0.4352 | - | | 2.8523 | 431 | 0.421 | - | | 2.8591 | 432 | 0.4081 | - | | 2.8658 | 433 | 0.4704 | - | | 2.8725 | 434 | 0.4592 | - | | 2.8792 | 435 | 0.4508 | - | | 2.8859 | 436 | 0.4201 | - | | 2.8926 | 437 | 0.3928 | - | | 2.8993 | 438 | 0.3992 | - | | 2.9060 | 439 | 0.5154 | - | | 2.9128 | 440 | 0.4649 | - | | 2.9195 | 441 | 0.4165 | - | | 2.9262 | 442 | 0.4121 | - | | 2.9329 | 443 | 0.4072 | - | | 2.9396 | 444 | 0.4369 | - | | 2.9463 | 445 | 0.4191 | - | | 2.9530 | 446 | 0.4306 | - | | 2.9597 | 447 | 0.4688 | - | | 2.9664 | 448 | 0.5092 | - | | 2.9732 | 449 | 0.4639 | - | | 2.9799 | 450 | 0.4647 | - | | 2.9866 | 451 | 0.4589 | - | | 2.9933 | 452 | 0.4412 | - | | 3.0 | 453 | 0.3029 | - | | 3.0067 | 454 | 0.1599 | - | | 3.0134 | 455 | 0.0411 | - | | 3.0201 | 456 | 0.0017 | - | | 3.0067 | 457 | 0.4429 | - | | 3.0134 | 458 | 0.4789 | - | | 3.0201 | 459 | 0.4512 | - | | 3.0268 | 460 | 0.4197 | - | | 3.0336 | 461 | 0.4446 | - | | 3.0403 | 462 | 0.4597 | - | | 3.0470 | 463 | 0.4297 | - | | 3.0537 | 464 | 0.4197 | - | | 3.0604 | 465 | 0.4309 | - | | 3.0671 | 466 | 0.4503 | - | | 3.0738 | 467 | 0.4494 | - | | 3.0805 | 468 | 0.4538 | - | | 3.0872 | 469 | 0.4294 | - | | 3.0940 | 470 | 0.4493 | - | | 3.1007 | 471 | 0.4222 | - | | 3.1074 | 472 | 0.4294 | - | | 3.1141 | 473 | 0.4099 | - | | 3.1208 | 474 | 0.4062 | - | | 3.1275 | 475 | 0.3896 | - | | 3.1342 | 476 | 0.4083 | - | | 3.1409 | 477 | 0.4108 | - | | 3.1477 | 478 | 0.4192 | - | | 3.1544 | 479 | 0.4061 | - | | 3.1611 | 480 | 0.3783 | - | | 3.1678 | 481 | 0.3949 | - | | 3.1745 | 482 | 0.428 | - | | 3.1812 | 483 | 0.4176 | - | | 3.1879 | 484 | 0.4207 | - | | 3.1946 | 485 | 0.3946 | - | | 3.2013 | 486 | 0.4282 | - | | 3.2081 | 487 | 0.4346 | - | | 3.2148 | 488 | 0.4544 | - | | 3.2215 | 489 | 0.432 | - | | 3.2282 | 490 | 0.3556 | - | | 3.2349 | 491 | 0.3244 | - | | 3.2416 | 492 | 0.2702 | - | | 3.2483 | 493 | 0.2678 | - | | 3.2550 | 494 | 0.2567 | - | | 3.2617 | 495 | 0.2528 | - | | 3.2685 | 496 | 0.2624 | - | | 3.2752 | 497 | 0.2437 | - | | 3.2819 | 498 | 0.2387 | - | | 3.2886 | 499 | 0.2398 | - | | 3.2953 | 500 | 0.2435 | 0.6685 | | 3.3020 | 501 | 0.2353 | - | | 3.3087 | 502 | 0.229 | - | | 3.3154 | 503 | 0.2183 | - | | 3.3221 | 504 | 0.22 | - | | 3.3289 | 505 | 0.2236 | - | | 3.3356 | 506 | 0.2242 | - | | 3.3423 | 507 | 0.2251 | - | | 3.3490 | 508 | 0.2108 | - | | 3.3557 | 509 | 0.2065 | - | | 3.3624 | 510 | 0.2128 | - | | 3.3691 | 511 | 0.2051 | - | | 3.3758 | 512 | 0.2043 | - | | 3.3826 | 513 | 0.2116 | - | | 3.3893 | 514 | 0.2044 | - | | 3.3960 | 515 | 0.1903 | - | | 3.4027 | 516 | 0.1857 | - | | 3.4094 | 517 | 0.1971 | - | | 3.4161 | 518 | 0.2029 | - | | 3.4228 | 519 | 0.2098 | - | | 3.4295 | 520 | 0.2031 | - | | 3.4362 | 521 | 0.199 | - | | 3.4430 | 522 | 0.1868 | - | | 3.4497 | 523 | 0.2047 | - | | 3.4564 | 524 | 0.1982 | - | | 3.4631 | 525 | 0.2026 | - | | 3.4698 | 526 | 0.1931 | - | | 3.4765 | 527 | 0.2024 | - | | 3.4832 | 528 | 0.1848 | - | | 3.4899 | 529 | 0.1818 | - | | 3.4966 | 530 | 0.2712 | - | | 3.5034 | 531 | 0.3456 | - | | 3.5101 | 532 | 0.3678 | - | | 3.5168 | 533 | 0.394 | - | | 3.5235 | 534 | 0.4889 | - | | 3.5302 | 535 | 0.4686 | - | | 3.5369 | 536 | 0.5048 | - | | 3.5436 | 537 | 0.4732 | - | | 3.5503 | 538 | 0.4504 | - | | 3.5570 | 539 | 0.4241 | - | | 3.5638 | 540 | 0.3936 | - | | 3.5705 | 541 | 0.3833 | - | | 3.5772 | 542 | 0.3815 | - | | 3.5839 | 543 | 0.4333 | - | | 3.5906 | 544 | 0.4239 | - | | 3.5973 | 545 | 0.4124 | - | | 3.6040 | 546 | 0.4028 | - | | 3.6107 | 547 | 0.4585 | - | | 3.6174 | 548 | 0.4256 | - | | 3.6242 | 549 | 0.3916 | - | | 3.6309 | 550 | 0.4002 | - | | 3.6376 | 551 | 0.3962 | - | | 3.6443 | 552 | 0.3874 | - | | 3.6510 | 553 | 0.4229 | - | | 3.6577 | 554 | 0.5071 | - | | 3.6644 | 555 | 0.4432 | - | | 3.6711 | 556 | 0.4282 | - | | 3.6779 | 557 | 0.4249 | - | | 3.6846 | 558 | 0.4287 | - | | 3.6913 | 559 | 0.3875 | - | | 3.6980 | 560 | 0.403 | - | | 3.7047 | 561 | 0.395 | - | | 3.7114 | 562 | 0.3859 | - | | 3.7181 | 563 | 0.3917 | - | | 3.7248 | 564 | 0.3882 | - | | 3.7315 | 565 | 0.379 | - | | 3.7383 | 566 | 0.3819 | - | | 3.7450 | 567 | 0.4411 | - | | 3.7517 | 568 | 0.5383 | - | | 3.7584 | 569 | 0.4696 | - | | 3.7651 | 570 | 0.4367 | - | | 3.7718 | 571 | 0.4098 | - | | 3.7785 | 572 | 0.4104 | - | | 3.7852 | 573 | 0.3928 | - | | 3.7919 | 574 | 0.3686 | - | | 3.7987 | 575 | 0.4534 | - | | 3.8054 | 576 | 0.4255 | - | | 3.8121 | 577 | 0.4193 | - | | 3.8188 | 578 | 0.3925 | - | | 3.8255 | 579 | 0.3762 | - | | 3.8322 | 580 | 0.3748 | - | | 3.8389 | 581 | 0.4145 | - | | 3.8456 | 582 | 0.4085 | - | | 3.8523 | 583 | 0.3888 | - | | 3.8591 | 584 | 0.3903 | - | | 3.8658 | 585 | 0.4395 | - | | 3.8725 | 586 | 0.4347 | - | | 3.8792 | 587 | 0.428 | - | | 3.8859 | 588 | 0.4008 | - | | 3.8926 | 589 | 0.3706 | - | | 3.8993 | 590 | 0.3769 | - | | 3.9060 | 591 | 0.4869 | - | | 3.9128 | 592 | 0.4406 | - | | 3.9195 | 593 | 0.3963 | - | | 3.9262 | 594 | 0.39 | - | | 3.9329 | 595 | 0.3831 | - | | 3.9396 | 596 | 0.4088 | - | | 3.9463 | 597 | 0.3912 | - | | 3.9530 | 598 | 0.4108 | - | | 3.9597 | 599 | 0.4381 | - | | 3.9664 | 600 | 0.4841 | 0.6654 | | 3.9732 | 601 | 0.4425 | - | | 3.9799 | 602 | 0.4377 | - | | 3.9866 | 603 | 0.4344 | - | | 3.9933 | 604 | 0.4155 | - | | 4.0 | 605 | 0.2801 | - | | 4.0067 | 606 | 0.1418 | - | | 4.0134 | 607 | 0.0315 | - | | 4.0201 | 608 | 0.0013 | - | | 4.0067 | 609 | 0.4213 | - | | 4.0134 | 610 | 0.4604 | - | | 4.0201 | 611 | 0.4312 | - | | 4.0268 | 612 | 0.406 | - | | 4.0336 | 613 | 0.4238 | - | | 4.0403 | 614 | 0.4446 | - | | 4.0470 | 615 | 0.4127 | - | | 4.0537 | 616 | 0.4034 | - | | 4.0604 | 617 | 0.4092 | - | | 4.0671 | 618 | 0.4285 | - | | 4.0738 | 619 | 0.4324 | - | | 4.0805 | 620 | 0.4317 | - | | 4.0872 | 621 | 0.4167 | - | | 4.0940 | 622 | 0.4352 | - | | 4.1007 | 623 | 0.4074 | - | | 4.1074 | 624 | 0.4102 | - | | 4.1141 | 625 | 0.3799 | - | | 4.1208 | 626 | 0.3885 | - | | 4.1275 | 627 | 0.3709 | - | | 4.1342 | 628 | 0.3957 | - | | 4.1409 | 629 | 0.394 | - | | 4.1477 | 630 | 0.4035 | - | | 4.1544 | 631 | 0.389 | - | | 4.1611 | 632 | 0.3656 | - | | 4.1678 | 633 | 0.372 | - | | 4.1745 | 634 | 0.4066 | - | | 4.1812 | 635 | 0.3991 | - | | 4.1879 | 636 | 0.4057 | - | | 4.1946 | 637 | 0.3782 | - | | 4.2013 | 638 | 0.4117 | - | | 4.2081 | 639 | 0.41 | - | | 4.2148 | 640 | 0.4359 | - | | 4.2215 | 641 | 0.4191 | - | | 4.2282 | 642 | 0.3428 | - | | 4.2349 | 643 | 0.3156 | - | | 4.2416 | 644 | 0.2591 | - | | 4.2483 | 645 | 0.2533 | - | | 4.2550 | 646 | 0.2483 | - | | 4.2617 | 647 | 0.2505 | - | | 4.2685 | 648 | 0.2494 | - | | 4.2752 | 649 | 0.2354 | - | | 4.2819 | 650 | 0.2302 | - | | 4.2886 | 651 | 0.2285 | - | | 4.2953 | 652 | 0.2366 | - | | 4.3020 | 653 | 0.229 | - | | 4.3087 | 654 | 0.2217 | - | | 4.3154 | 655 | 0.2107 | - | | 4.3221 | 656 | 0.2153 | - | | 4.3289 | 657 | 0.216 | - | | 4.3356 | 658 | 0.2161 | - | | 4.3423 | 659 | 0.2174 | - | | 4.3490 | 660 | 0.2031 | - | | 4.3557 | 661 | 0.2034 | - | | 4.3624 | 662 | 0.2015 | - | | 4.3691 | 663 | 0.1987 | - | | 4.3758 | 664 | 0.1984 | - | | 4.3826 | 665 | 0.2004 | - | | 4.3893 | 666 | 0.1961 | - | | 4.3960 | 667 | 0.1829 | - | | 4.4027 | 668 | 0.1821 | - | | 4.4094 | 669 | 0.1894 | - | | 4.4161 | 670 | 0.2007 | - | | 4.4228 | 671 | 0.201 | - | | 4.4295 | 672 | 0.1973 | - | | 4.4362 | 673 | 0.1931 | - | | 4.4430 | 674 | 0.1803 | - | | 4.4497 | 675 | 0.1988 | - | | 4.4564 | 676 | 0.1906 | - | | 4.4631 | 677 | 0.1941 | - | | 4.4698 | 678 | 0.1878 | - | | 4.4765 | 679 | 0.1977 | - | | 4.4832 | 680 | 0.1767 | - | | 4.4899 | 681 | 0.1814 | - | | 4.4966 | 682 | 0.2594 | - | | 4.5034 | 683 | 0.3342 | - | | 4.5101 | 684 | 0.3562 | - | | 4.5168 | 685 | 0.3857 | - | | 4.5235 | 686 | 0.4643 | - | | 4.5302 | 687 | 0.4596 | - | | 4.5369 | 688 | 0.4978 | - | | 4.5436 | 689 | 0.4564 | - | | 4.5503 | 690 | 0.4336 | - | | 4.5570 | 691 | 0.4154 | - | | 4.5638 | 692 | 0.3796 | - | | 4.5705 | 693 | 0.3724 | - | | 4.5772 | 694 | 0.3708 | - | | 4.5839 | 695 | 0.4281 | - | | 4.5906 | 696 | 0.4082 | - | | 4.5973 | 697 | 0.4004 | - | | 4.6040 | 698 | 0.3893 | - | | 4.6107 | 699 | 0.4372 | - | | 4.6174 | 700 | 0.4132 | 0.6762 | | 4.6242 | 701 | 0.3766 | - | | 4.6309 | 702 | 0.3869 | - | | 4.6376 | 703 | 0.382 | - | | 4.6443 | 704 | 0.3733 | - | | 4.6510 | 705 | 0.4088 | - | | 4.6577 | 706 | 0.4887 | - | | 4.6644 | 707 | 0.4282 | - | | 4.6711 | 708 | 0.4137 | - | | 4.6779 | 709 | 0.4098 | - | | 4.6846 | 710 | 0.4165 | - | | 4.6913 | 711 | 0.3737 | - | | 4.6980 | 712 | 0.3973 | - | | 4.7047 | 713 | 0.3853 | - | | 4.7114 | 714 | 0.3744 | - | | 4.7181 | 715 | 0.3787 | - | | 4.7248 | 716 | 0.372 | - | | 4.7315 | 717 | 0.3675 | - | | 4.7383 | 718 | 0.3652 | - | | 4.7450 | 719 | 0.4291 | - | | 4.7517 | 720 | 0.5218 | - | | 4.7584 | 721 | 0.452 | - | | 4.7651 | 722 | 0.4202 | - | | 4.7718 | 723 | 0.3904 | - | | 4.7785 | 724 | 0.3979 | - | | 4.7852 | 725 | 0.3796 | - | | 4.7919 | 726 | 0.3569 | - | | 4.7987 | 727 | 0.4375 | - | | 4.8054 | 728 | 0.4149 | - | | 4.8121 | 729 | 0.4055 | - | | 4.8188 | 730 | 0.3813 | - | | 4.8255 | 731 | 0.3671 | - | | 4.8322 | 732 | 0.3631 | - | | 4.8389 | 733 | 0.4026 | - | | 4.8456 | 734 | 0.3986 | - | | 4.8523 | 735 | 0.3786 | - | | 4.8591 | 736 | 0.3757 | - | | 4.8658 | 737 | 0.4309 | - | | 4.8725 | 738 | 0.4228 | - | | 4.8792 | 739 | 0.416 | - | | 4.8859 | 740 | 0.3891 | - | | 4.8926 | 741 | 0.3614 | - | | 4.8993 | 742 | 0.3633 | - | | 4.9060 | 743 | 0.4739 | - | | 4.9128 | 744 | 0.4307 | - | | 4.9195 | 745 | 0.3833 | - | </details> ### Framework Versions - Python: 3.10.6 - Sentence Transformers: 3.3.0.dev0 - Transformers: 4.45.2 - PyTorch: 2.4.1+cu118 - Accelerate: 0.34.0 - Datasets: 2.21.0 - Tokenizers: 0.20.2 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CachedMultipleNegativesRankingLoss ```bibtex @misc{gao2021scaling, title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup}, author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan}, year={2021}, eprint={2101.06983}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION" ]
[ "CHIA" ]
Non_BioNLP
RomainDarous/large_directOneEpoch_maxPooling_mistranslationModel
RomainDarous
sentence-similarity
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:4460010", "loss:CoSENTLoss", "dataset:RomainDarous/corrupted_os_by_language", "arxiv:1908.10084", "base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,741
1,741
8
0
--- base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2 datasets: - RomainDarous/corrupted_os_by_language library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:4460010 - loss:CoSENTLoss widget: - source_sentence: Malformed target specific variable definition sentences: - Hedefe özgü değişken tanımı bozuk - Kan alle data in die gids lees - "слава Украине! героям слава!\uFEFF" - source_sentence: Can't write an inode bitmap sentences: - Skontrolujte stav aktualizácií alebo to skúste znova neskôr. - Malsukcesis skribi i nodan bitmapon - Zastępuje wersję GL obsługiwaną przez sterownik - source_sentence: Optimize soft proofing color transformations sentences: - 'arkadaslar biz artik her an kirmizi kart yiyecek,bencil,pas yapamayan,isabetsiz orta yapani istemiyoruz. sozde efsaneniz bu sezon Besiktasa en cok zarar verenlerden biriydi. kendini dusunmeden once Besiktasi dusunecek adam lazim bize. o yuzden #GoHomeQuaresma' - Yav bizim dedikodusunu yaptığımız insanın bile bi vizyonu var. Senin hakkında neden oturup konuşalım? - Ik ben een transgender. - source_sentence: 'Pass 1: Checking @is, @bs, and sizes' sentences: - Bu adam cidden kurabiye gibi ben bunu çayın yanında yerim - sagnat. errada. invisible. justificació. idioma - Wilt u echt de primaire sleutel verplaatsen? (j N) - source_sentence: Search for matching log entries sentences: - quem te lembra? caralho tô assustada aqui kkkkk - sendotasunik gabeko\ egoera bistaratuko den ala ez adierazten du - En aquest cas, hem d'incloure les imatges del contenidor )sr iov per a càrregues de treball de telco (per exemple, com a referència, es podrien obtenir des de valors de helm chart) model-index: - name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2 results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts eval type: sts-eval metrics: - type: pearson_cosine value: 0.9703827409786012 name: Pearson Cosine - type: spearman_cosine value: 0.8654967442097427 name: Spearman Cosine - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test type: sts-test metrics: - type: pearson_cosine value: 0.9705606685594079 name: Pearson Cosine - type: spearman_cosine value: 0.8655243243689739 name: Spearman Cosine --- # SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) on the [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) <!-- at revision 84fccfe766bcfd679e39efefe4ebf45af190ad2d --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): MultiHeadGeneralizedPooling() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("RomainDarous/large_directOneEpoch_maxPooling_mistranslationModel") # Run inference sentences = [ 'Search for matching log entries', 'quem te lembra? caralho tô assustada aqui kkkkk', 'sendotasunik gabeko\\ egoera bistaratuko den ala ez adierazten du', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Datasets: `sts-eval` and `sts-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | sts-eval | sts-test | |:--------------------|:-----------|:-----------| | pearson_cosine | 0.9704 | 0.9706 | | **spearman_cosine** | **0.8655** | **0.8655** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### corrupted_open_os_by_language * Dataset: [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) at [9d25780](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language/tree/9d25780e2032b1e8f06af6a4ff55124d7a930c3c) * Size: 4,460,010 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 6 tokens</li><li>mean: 18.33 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 26.47 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>0: ~50.60%</li><li>1: ~49.40%</li></ul> | * Samples: | sentence1 | sentence2 | score | |:--------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------|:---------------| | <code>Check spelling. Print the document. Show completion window. General. Show help</code> | <code>Kontrolli õigekirja. присоединяюсь. </code> | <code>0</code> | | <code>EXIF not supported for this file format.</code> | <code>Šiam failo formatui EXIF nepalaikomas.</code> | <code>1</code> | | <code>This package includes the documentation for texlive everyhook</code> | <code>Paket ini menyertakan dokumentasi untuk texlive everyhook</code> | <code>1</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Evaluation Dataset #### corrupted_open_os_by_language * Dataset: [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) at [9d25780](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language/tree/9d25780e2032b1e8f06af6a4ff55124d7a930c3c) * Size: 4,460,010 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 5 tokens</li><li>mean: 17.71 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 26.95 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>0: ~50.60%</li><li>1: ~49.40%</li></ul> | * Samples: | sentence1 | sentence2 | score | |:----------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>Could not identify the current seat.</code> | <code> 天天花着男人的钱还这这创造新词汇男权你可真牛批,你也就这一出了一问男权,就说是我是吧,到现在我也没听到你给我们讲的男权,你也就是在网上喷喷,现实走道都不敢探头自卑,你现实要把你女权的劲拿出来总低啥头,您老应该去国家教育局把男权加上是吧,你们女权天天说自己生活不好没地位,给你们地位了你们能干啥?用你们的女权打到全世界男性是吧,能相出男权这一词您老也是人才呀,是不是庆幸自己是个女的,活在自己想想的世界里不觉得孤单吗,假象有男权是吧,自己假象和男权还说自己不是田园女权,田园女权能连自己都骂说自己妈是驴爸是大鼎的也是奇葩呀,那我们国家大肆宣扬过你们这么田园女权吗,国家要的是女性人群自主自理,你们可好看看你们女权干的啥事,给你们女权地位高了,看看你们女权干的事n绿地集团高管怎么都不说呀,人家可是有钱有地位,也不是我们说三从四德洗衣做饭你们女权会吗?,那我问问你们女权干过啥惊天大事,还甩锅给孔子,还封建社会,那我问问你们女权在福利面前为啥说自己是女性呀不是社会主义社会吗不应该男女平等吗,天天自己也不知道是不是抱个手机天天欧巴欧巴,你家那位要是不陪你看一会就会问你是不是不爱我了是吧大姐,您老也就赚这白菜钱操心国家事,中国五千年的历史被您老一句否决,还嘲讽人家日本女性,好意思说自己不是女权,三从四德流传这么久到您这变成日本文化了,我就想问问男权您老是怎么想的,那你问孔子老人家呗为什么女人要三从四德,我说的是女权你干嘛自己对号入座,连中华人民传承的东西都不认跟我这谈男权,还男权您老给我举个例子呗,让我们男权听听都是h啥,这些不都是你们女权的标准吗?,还男权,您老醒醒吧这里是现实,不是你的公主世界,总觉得自己多么多么重要,地球没你是不能转了还是人类要灭亡呀,我真的想问一句你给我找一条男权的新闻,咋了我们男人不能提女权呗你老授权了呗,那我们谈论田园女权你老对号入座干嘛,天天过节要礼物,还嫌弃自己男朋友没有钱,我寻思你找个有钱人包养你呗,对了有钱人怎么可能看上你这种女权的呢,还要孩子跟女方姓我也没看见你没跟你妈姓呀,年年过节男人给你们送礼物你们女人给男人送过礼物吗?,一问我不是陪着他吗我对他说我爱你了这不是最好的礼物吗?,男人只要不送礼物就是不爱你们了呗,人家国际女权讲的男人能做的我们女人也能做,田园女权男人能做的我们女人为啥要做,还男权我笑了,以前结婚几头牛换个衣服原装的,现在几十万彩...</code> | <code>0</code> | | <code>Undoing Date and Time Adjustment</code> | <code>正在取消日期和时间调整</code> | <code>1</code> | | <code>Dependency package for gsl_2_6 gnu hpc</code> | <code>Pacotes de desenvolvimento do KDE</code> | <code>1</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | corrupted open os by language loss | sts-eval_spearman_cosine | sts-test_spearman_cosine | |:-----:|:-----:|:-------------:|:----------------------------------:|:------------------------:|:------------------------:| | 1.0 | 55751 | 0.8298 | 0.3449 | 0.8655 | - | | -1 | -1 | - | - | - | 0.8655 | ### Framework Versions - Python: 3.10.13 - Sentence Transformers: 3.4.1 - Transformers: 4.48.2 - PyTorch: 2.1.2+cu121 - Accelerate: 1.3.0 - Datasets: 2.16.1 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CoSENTLoss ```bibtex @online{kexuefm-8847, title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT}, author={Su Jianlin}, year={2022}, month={Jan}, url={https://kexue.fm/archives/8847}, } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION", "SEMANTIC_SIMILARITY", "TRANSLATION" ]
[ "CAS" ]
Non_BioNLP
Mit1208/Med-Sum
Mit1208
text2text-generation
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
1,671
1,671
12
0
--- {} --- This model is fine-tuned on Medical data to perform summarization. Data used from https://data.mendeley.com/datasets/gg58kc7zy7 Size of data ~ 10k ```python from transformers import pipeline summarizer = pipeline("summarization", model="Mit1208/Med-Sum") long_text = "The human brain is the inspiration behind neural network architecture. Human brain cells, called neurons, form a complex, highly interconnected network and send electrical signals to each other to help humans process information. Similarly, an artificial neural network is made of artificial neurons that work together to solve a problem. Artificial neurons are software modules, called nodes, and artificial neural networks are software programs or algorithms that, at their core, use computing systems to solve mathematical calculations." result = summarizer(long_text) print(result[0]["summary_text"]) ``` Output: ``` The human brain is the inspiration behind neural network architecture. Human brain cells, called neurons, form a complex, highly interconnected network and send electrical signals to each other to help humans process information. The artificial neural network is made of artificial neurons that work together to solve a problem. ``` long_text--[https://aws.amazon.com/what-is/neural-network/]
[ "SUMMARIZATION" ]
[ "MEDICAL DATA" ]
BioNLP
DIS-Project/Sentence-Transformer_1
DIS-Project
sentence-similarity
[ "sentence-transformers", "safetensors", "mpnet", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1000", "loss:TripletLoss", "arxiv:1908.10084", "arxiv:1703.07737", "base_model:microsoft/mpnet-base", "base_model:finetune:microsoft/mpnet-base", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,727
1,727
7
0
--- base_model: microsoft/mpnet-base library_name: sentence-transformers metrics: - cosine_accuracy - dot_accuracy - manhattan_accuracy - euclidean_accuracy - max_accuracy pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:1000 - loss:TripletLoss widget: - source_sentence: What was the value of the cargo that Adams sold in Cochinchina? sentences: - "KPMG International Limited (or simply KPMG) is a British-Dutch multinational\ \ professional services network, and one of the Big Four accounting organizations.\n\ \nHeadquartered in Amstelveen, Netherlands, although incorporated in the United\ \ Kingdom, KPMG is a network of firms in 145 countries, with over 236,000 employees\ \ and has three lines of services: financial audit, tax, and advisory. Its tax\ \ and advisory services are further divided into various service groups. Over\ \ the past decade various parts of the firm's global network of affiliates have\ \ been involved in regulatory actions as well as lawsuits.\n\nThe name \"KPMG\"\ \ stands for \"Klynveld Peat Marwick Goerdeler\". The acronym was chosen when\ \ KMG (Klynveld Main Goerdeler) merged with Peat Marwick in 1987.\n\nHistory\n\ \nEarly years and mergers \n\nIn 1818, John Moxham opened a company in Bristol.\ \ James Grace and James Grace Jr. bought John Moxham & Co. and renamed it James\ \ Grace & Son in 1857. In 1861, Henry Grace joined James Jr. and the company was\ \ renamed James & Henry Grace; the firm evolved to become Grace, Ryland & Co.\n\ \nWilliam Barclay Peat joined Robert Fletcher & Co. in London in 1870 at the age\ \ of 17 and became head of the firm in 1891, renamed William Barclay Peat & Co.\ \ by then. In 1877, Thomson McLintock founded Thomson McLintock & Co in Glasgow.\ \ In 1897, Marwick Mitchell & Co. was founded by James Marwick and Roger Mitchell\ \ in New York City. In 1899, Ferdinand William LaFrentz founded the American Audit\ \ Co., in New York. In 1923, The American Audit Company was renamed FW LaFrentz\ \ & Co.\n\nIn about 1913, Frank Wilber Main founded Main & Co. in Pittsburgh.\ \ In March 1917, Piet Klijnveld and Jaap Kraayenhof opened an accounting firm\ \ called Klynveld Kraayenhof & Co. in Amsterdam.\n\nIn 1925, William Barclay Peat\ \ & Co. and Marwick Mitchell & Co., merged to form Peat Marwick Mitchell & Co.\n\ \nIn 1963, Main LaFrentz & Co was formed by the merger of Main & Co and FW LaFrentz\ \ & Co. In 1969 Thomson McLintock and Main LaFrentz merged forming McLintock Main\ \ LaFrentz International and McLintock Main LaFrentz International absorbed the\ \ general practice of Grace, Ryland & Co.\n\nIn 1979, Klynveld Kraayenhof & Co.\ \ (Netherlands), McLintock Main LaFrentz (United Kingdom / United States) and\ \ Deutsche Treuhandgesellschaft (Germany) formed KMG (Klynveld Main Goerdeler)\ \ as a grouping of independent national practices to create a strong European-based\ \ international firm. Deutsche Treuhandgesellschaft CEO Reinhard Goerdeler (son\ \ of leading anti-Nazi activist Carl Goerdeler, who would have become Chancellor\ \ if Operation Valkyrie had succeeded) became the first CEO of KMG. In the United\ \ States, Main Lafrentz & Co. merged with Hurdman and Cranstoun to form Main Hurdman\ \ & Cranstoun.\n\nIn 1987, KMG and Peat Marwick joined forces in the first mega-merger\ \ of large accounting firms and formed a firm called KPMG in the United States,\ \ and most of the rest of the world, and Peat Marwick McLintock in the United\ \ Kingdom.\n\nIn the Netherlands, as a consequence of the merger between PMI and\ \ KMG in 1988, PMI tax advisors joined Meijburg & Co. (The tax advisory agency\ \ Meijburg & Co. was founded by Willem Meijburg, Inspector of National Taxes,\ \ in 1939). Today, the Netherlands is the only country with two members of KPMG\ \ International: KPMG Audit (accountants) and Meijburg & Co (tax consultants).\n\ \nIn 1991, the firm was renamed KPMG Peat Marwick, and in 1999, the name was reduced\ \ again to KPMG.\n\nIn October 1997, KPMG and Ernst & Young announced that they\ \ were to merge. However, while the merger to form PricewaterhouseCoopers was\ \ granted regulatory approval, the KPMG/Ernst & Young tie-up was later abandoned.\n\ \nRecent history \n\nIn 2001, KPMG spun off its United States consulting firm\ \ through an initial public offering of KPMG Consulting, which was rebranded BearingPoint.\ \ In early 2009, BearingPoint filed for Chapter 11 bankruptcy protection. The\ \ UK and Dutch consulting arms were sold to Atos in 2002.\n\nIn 2003, KPMG divested\ \ itself of its legal arm, Klegal and KPMG sold its Dispute Advisory Services\ \ to FTI Consulting.\n\nKPMG's member firms in the United Kingdom, Germany, Switzerland\ \ and Liechtenstein merged to form KPMG Europe LLP in October 2007. These member\ \ firms were followed by Spain, Belgium, the Netherlands, Luxembourg, CIS (Azerbaijan,\ \ Russia, Ukraine, Belarus, Kyrgyzstan, Kazakhstan, Armenia and Georgia), Turkey,\ \ Norway, and Saudi Arabia. They appointed joint Chairmen, John Griffith-Jones\ \ and Ralf Nonnenmacher.\n\nIn 2020, KPMG International Limited was incorporated\ \ in London, England.\n\nIn February 2021, KPMG UK appointed its first female\ \ leaders, replacing Bill Michael, who stepped aside after making controversial\ \ comments. Bina Mehta was asked to step in as acting UK chairman and Mary O'Connor\ \ took over Michael's executive responsibilities as acting senior partner in UK.\ \ In April 2021, O'Connor quit the firm after being passed over for the permanent\ \ role.\n\nIn November 2021, KPMG UK was reported as having revised its partnership\ \ process to introduce five levels of partnership which required partners to inject\ \ capital at levels starting at £150,000 and going up to £500,000. This along\ \ with the £115 million proceeds from the sale of its pensions business earlier\ \ in 2021, which it seems was not distributed to the partners, was intended to\ \ prepare the balance sheet for a potential large fine (up to £1 billion) arising\ \ out of the Carillion lawsuit.\n\nIn February 2022, KPMG Canada announced they\ \ had added Bitcoin and Ethereum cryptoassets to their corporate treasury.\n\n\ Global structure \nEach national KPMG firm is an independent legal entity and\ \ is a member of KPMG International Limited, a UK Limited Company incorporated\ \ in London, United Kingdom. KPMG International changed its legal structure from\ \ a Swiss Verein to a co-operative under Swiss law in 2003 and to a limited company\ \ in 2020.\n\nThis structure in which the Limited company provides support services\ \ only to the member firms is similar to other professional services networks.\ \ The member firms provide the services to client. The purpose is to limit the\ \ liability of each independent member.\n\nBill Thomas is KPMG's Global Chairman.\ \ He was formerly Senior Partner and CEO of KPMG LLP, the KPMG member firm in\ \ Canada.\n\nSome KPMG member firms are registered as multidisciplinary entities\ \ which also provide legal services in certain jurisdictions.\n\nIn India, regulations\ \ do not permit foreign auditing firms to operate. Hence KPMG carries out audits\ \ in India under the name of BSR & Co, an auditing firm that it bought. BSR &\ \ Co was an auditing firm founded by B.S. Raut in Mumbai. In 1992, after India\ \ was forced to liberalise as one of the conditions of the World Bank and IMF\ \ bail out, KPMG was granted a license to operate in India as an investment bank.\ \ It subsequently purchased BSR & Co and conducts audits in India under the name\ \ of this firm.\n\nServices \n\nKPMG is organised into the following three service\ \ lines (the 2021 revenue shares are listed in parentheses):\n\n Audit ($11.46\ \ billion)\n Advisory ($13.65 billion)\n Tax ($7.02 billion)\n\nTax arrangements\ \ relating to tax avoidance and multinational corporations and Luxembourg which\ \ were negotiated by KPMG became public in 2014 in the so-called Luxembourg Leaks.\n\ \nStaff \nKPMG was the preferred employer among the Big Four accounting firms\ \ according to CollegeGrad.com. It was also ranked No. 4 on the list of \"50 Best\ \ Places to Launch a Career\" in 2009 according to Bloomberg Businessweek.\n\n\ It was reported in early 2012 that KPMG has about 11,000 staff in the UK and 9,000\ \ in mainland China and Hong Kong. KPMG's global deputy chairman predicted that\ \ headcount in China would overtake that of the UK by the end of 2013.\n\nControversies\n\ \nTax shelter fraud\n\nIn 2003, the IRS issued summonses to KPMG for information\ \ about certain tax shelters and their investors. In February 2004, the US Justice\ \ Department commenced a criminal inquiry. The United States member firm, KPMG\ \ LLP, was accused by the United States Department of Justice of fraud in marketing\ \ abusive tax shelters. KPMG fired or forced the retirement of over a dozen who\ \ were involved. KPMG LLP admitted criminal wrongdoing in creating fraudulent\ \ tax shelters to help wealthy clients avoid $2.5 billion in taxes between 1996\ \ and 2002, and agreed to pay $456 million in penalties to avoid indictment. Under\ \ the deferred prosecution agreement, KPMG LLP would not face criminal prosecution\ \ if it complied with the terms of its agreement with the government. On 3 January\ \ 2007, the criminal conspiracy charges against KPMG were dropped.\n\nVarious\ \ other controversies\n\n 2003\nKPMG agreed to pay $125 million and $75 million\ \ to settle lawsuits stemming from the firm's audits of Rite Aid and Oxford Health\ \ Plans Inc., respectively.\n\n 2004\nKPMG agreed to pay $115 million to settle\ \ lawsuits stemming from the collapse of software company Lernout & Hauspie Speech\ \ Products NV.\n\n 2005\n\nDuring August, KPMG LLP admitted to criminal wrongdoing\ \ and agreed to pay US$456 million in fines, restitution, and penalties as part\ \ of an agreement to defer prosecution of the firm, according to the US Justice\ \ Department and the Internal Revenue Service. In addition to the agreement, nine\ \ individuals-including six former KPMG partners and the former deputy chairman\ \ of the firm were to be criminally prosecuted. As alleged in a series of charging\ \ documents, the fraud relates to the design, marketing, and implementation of\ \ fraudulent tax shelters.\n\n 2006\nAmerican real estate financing firm Fannie\ \ Mae sued KPMG for malpractice for approving years of erroneous financial statements.\n\ \n 2007\nIn February, KPMG Germany was investigated for ignoring questionable\ \ payments in the Siemens bribery case. In November 2008, the Siemens Supervisory\ \ Board recommended changing auditors from KPMG to Ernst & Young.\n\n 2008\nIn\ \ March, KPMG was accused of enabling \"improper and imprudent practices\" at\ \ New Century Financial, a failed mortgage company, and KPMG agreed to pay $80\ \ million to settle suits from Xerox shareholders over manipulated earnings reports.\n\ \nIn December, it was announced that two of Tremont Group's Rye Select funds,\ \ audited by KPMG, had $2.37 billion invested with the Madoff \"Ponzi scheme.\"\ \ Class action suits were filed.\n\n 2010\nIn August, it was reported by the Swedish\ \ Financial Supervisory Authority to the Swedish accountancy regulator after HQ\ \ Bank was forced into involuntary liquidation after the Financial Supervisory\ \ Authority revoked all its licences for breach of banking regulations.\n\n 2011\n\ In August, KPMG conducted due diligence work on Hewlett Packard's $11.1 billion\ \ acquisition of the British software company Autonomy. In November 2012 HP announced\ \ an $8.8 billion write off due to \"serious accounting improprieties\" committed\ \ by Autonomy management prior to the acquisition.\n\nAccording to an independent\ \ panel formed to investigate irregular payments made by Olympus which reported\ \ in December, KPMG's affiliate in Japan did not identify fraud at the company.\n\ \n 2013\nIn April, Scott London, a former KPMG LLP partner in charge of KPMG's\ \ US Los Angeles-based Pacific Southwest audit practice, admitted passing on stock\ \ tips about clients, including Herbalife, Skechers, and other companies, to his\ \ friend Bryan Shaw, a California jewelry-store owner. In return Shaw gave London\ \ $70,000 as well as gifts that included a $12,000 Rolex watch and concert tickets.\ \ On 6 May, Shaw agreed to plead guilty to one count of conspiracy to commit securities\ \ fraud. He also agreed to pay around $1.3 million in restitution, and to cooperate\ \ with the government as part of a plea deal with federal prosecutors. This scandal\ \ led KPMG to resign as auditor for Herbalife and Skechers.\n\n 2015\nKPMG was\ \ accused by the Canada Revenue Agency of abetting tax evasion schemes: \"The\ \ CRA alleges that the KPMG tax structure was in reality a 'sham' that intended\ \ to deceive the taxman.\"\n\n 2016\nThe Canada Revenue Agency offered an amnesty\ \ to KPMG clients caught using an offshore tax-avoidance scheme on the Isle of\ \ Man.\n\n 2017\nKPMG US terminated five partners in its audit practice, including\ \ the head of its audit practice in the US, after an investigation of advanced\ \ confidential knowledge of planned audit inspections by its Public Company Accounting\ \ Oversight Board. This followed criticism about KPMG's failure to uncover illegal\ \ sales practices at Wells Fargo or potential corruption at FIFA, the governing\ \ international body of football. It was reported in 2017 that KPMG had the highest\ \ number of deficiencies, among the Big Four, cited by its regulator in the previous\ \ two years. This includes two annual inspections that were compromised as a result\ \ of advanced access to inspection information. In March 2019, David Middendorf\ \ and Jeffrey Wada, co-defendants in the scandal, were convicted.\n\nIn August,\ \ KPMG US paid a $6.2 million fine to the US Securities and Exchange Commission\ \ for inadequacies in its audit of the financial statements of oil and gas company,\ \ Miller Energy Resources.\n\nIn November, 91 partners of KPMG Hong Kong faced\ \ contempt proceedings in Hong Kong High Court, as China Medical Technologies\ \ (CMED) liquidators investigating a $400 million fraud took action against KPMG\ \ with regard to its refusal to honor a February 2016 court order to produce Chinese\ \ working papers, correspondence, and records to the liquidators. The liquidators\ \ are asking that 91 defendants be held in contempt of court, which could result\ \ in criminal penalties, or weekly fines. KPMG had issued written audit reports\ \ for CMED from 2003 to 2008, and was replaced by PwC Zhong Tian in August 2009.\ \ \"Perhaps locking up 91 KPMG partners over Christmas may spur the firms to find\ \ a solution to this problem\", said Professor Paul Gillis of Peking University's\ \ Guanghua School of Management.\n\n 2018\nDuring July, KPMG has come under criticism\ \ for its role in the bankruptcy of Dubai-based private equity firm, Abraaj Group,\ \ after it was determined that KPMG Lower Gulf Chairman and Chief Executive Vijay\ \ Malhotra's son has worked at Abraaj and an executive named Ashish Dave alternated\ \ between stints at KPMG and as Abraaj’s chief financial officer, a job he held\ \ twice.\n\nAlso during July, the UK accounting watchdog, the Financial Reporting\ \ Council (FRC) announced an investigation into KPMG’s work for Conviviality,\ \ the British drinks supplier that collapsed into administration during April\ \ 2018.\n\nAlso during July, KPMG paid HK$650 million (US$84 million) to settle\ \ legal claims after failing to identify fraud at a Chinese timber company, China\ \ Forestry. The liquidators of China Forestry claimed KPMG was negligent when\ \ it failed to detect serious false accounting by some of the company’s top management\ \ ahead of its listing in 2009.\n\nDuring August, Chile's Comision Para El Mercado\ \ Financiero(CMF) sanctioned KPMG Auditores Consultores Limitada (KPMG LLP's local\ \ affiliate) 3,000 UF (~$114,000), and Joaquín Lira Herreros, its partner, for\ \ offences incurred in the audit made to the financial statements of the Aurus\ \ Insignia Fondo de Inversión, managed by Aurus Capital S.A. Admnistradora General\ \ de Fondos Management (AGF), corresponding to the year 2014.\n\nIn November,\ \ the Sultanate of Oman's Capital Market Authority (CMA) suspended KPMG from auditing\ \ entities regulated by the CMA for a period of one year after discovering major\ \ financial and accounting irregularities in the entities' records.\n\nIn December,\ \ KPMG South Africa published an open apology for its participation in various\ \ scandals in South Africa, including publishing a misleading report that led\ \ to the resignation of the South African Finance Minister, involvement with the\ \ Gupta family who have been implicated in corruption scandal with former President,\ \ Jacob Zuma, and acting as the auditor of VBS Mutual Bank that collapsed due\ \ to fraud. Its top eight staff resigned during 2017 and its workforce shrank\ \ from 3,400 to 2,200.\n\n 2019\nKPMG were fined £5 million by the Financial Reporting\ \ Council for misconduct shortly after the takeover of the Britannia Building\ \ Society by The Co-operative Bank, particularly relating to the valuation of\ \ Britannia's commercial loans and other liabilities. The takeover led to the\ \ near collapse of The Co-operative Bank.\n\nKPMG was fined £6 million by the\ \ Financial Reporting Council following a long running investigation by the regulator\ \ into misconduct in the firm’s auditing of Lloyds Syndicate 218 between 2007\ \ & 2009. KPMG partner Mark Taylor has been fined £100,000, severely reprimanded\ \ and agreed to the imposition of a requirement to have a second partner review\ \ of his audits until the end of 2020.\n\n 2020\n\nIn June, KPMG resigned from\ \ the auditor role at British fashion firm, Ted Baker plc after the company admitted\ \ accounting errors resulting in overstatement of its inventory by up to £58 million.\n\ \n 2021\n\nIn February, the liquidators for South African bank, VBS Mutual Bank,\ \ sued KPMG for 863.5 million rand (~US$59 million) over its audit opinion on\ \ the now defunct bank.\n\nIn March, KPMG US agreed to pay $10 million to settle\ \ a 10-year gender discrimination lawsuit in a New York Federal Court that alleged\ \ claims by 450 women that its culture was rife with gender discrimination, sexual\ \ harassment, and retaliation.\n\nDuring May, members of the Canadian Parliament's\ \ House of Commons finance committee re-launched a probe into offshore tax evasion\ \ by interviewing Lucia Iacovelli, managing partner at KPMG.\n\nKPMG in the UK\ \ has been sued for more than £6 million (~€6.9 million, ~US$7.8 million) by property\ \ company Mount Anvil which claims it was left with an unexpected tax bill after\ \ the firm provided it with negligent advice.\n\nIn July, the UK accounting regulator\ \ Financial Reporting Council (FRC) criticised KPMG for its “unacceptable” failure\ \ to meet required standards in its audits of banks for a third year running.\ \ Only 61% of KPMG’s audits sampled by the regulator met industry standards.\n\ \nThe Government of Malaysia and the state sovereign fund, 1MDB, launched a lawsuit\ \ seeking over $5.6 billion in damages from KPMG partners for alleged breaches\ \ and negligence linked to a corruption scandal at the fund.\n\nSouth Africa's\ \ largest asset manager, the Public Investment Corporation (PIC) sued KPMG for\ \ 144 million rand (~U$9.5 million) it lost when the VBS Mutual Bank went bankrupt\ \ as a result of fraud. Its claim is centred on the rights issue and a revolving\ \ credit facility it participated in at VBS relying on financial statements audited\ \ by KPMG and its former senior partner, Sipho Malaba.\n\nIn August, a tribunal\ \ convened by the UK regulator, FRC, fined KPMG £13m and ordered it to pay £2.75m\ \ in costs. This was because of its serious misconduct in the sale of bed company\ \ Silentnight. The tribunal found that KPMG had helped private equity group H.I.G\ \ Capital drive Silentnight into an insolvency process, so that HIG could acquire\ \ the company without its £100m pension scheme. KPMG was severely reprimanded\ \ by the tribunal, and was ordered to appoint an independent reviewer to check\ \ a sample of previous cases for similar failings. The tribunal ruled that KPMG's\ \ involvement with Silentnight was \"deeply troubling\" as it failed to act solely\ \ in its client's interests.\n\nIn September, the Public Company Accounting Oversight\ \ Board fined KPMG Australia $450,000 after it confessed to a cheating scandal\ \ involving over 1,100 (almost 12%) of its employees.\n\nIn October, UK regulator\ \ The Financial Reporting Council (FRC), said KPMG and David Costley-Wood, a partner\ \ at the firm, used an “untruthful defence” in an investigation into the sale\ \ of Silentnight to a private equity firm in 2011. Costley-Wood, who was formerly\ \ head of KPMG’s Manchester restructuring division, was also fined £500,000 and\ \ barred from insolvency or accountancy licences for 13 years.\n\nIn November,\ \ a British litigation financing firm - Augusta Ventures announced that it will\ \ bankroll three $152.4-million lawsuits in Canada against the previous auditor\ \ (KPMG LLP), authorized legal adviser (Cassels Brock & Blackwell LLP) and monetary\ \ adviser (Canaccord Genuity Corp) of the Money Retailer Monetary Providers Inc.,\ \ a Canadian payday lender that filed for creditor safety in 2014. “It’s alleged\ \ in these lawsuits that KPMG, Cassels Brock and Canaccord triggered over $100-million\ \ in damages to Money Retailer and its collectors,” mentioned Mr. Aziz, President\ \ of BlueTree Advisors Inc., a company restructuring advisory agency primarily\ \ based in Oakville, Ontario.\n\nAlso in November, KPMG UK has been hit with a\ \ £15m lawsuit by insurance outsourcer Watchstone — formerly known as Quindell\ \ — over allegations it suffered losses because of the audit firm's negligence\ \ in 2013.\n\nAgain in November, two units of Abraaj that are now in liquidation,\ \ filed a lawsuit in Dubai against KPMG LLP for damages of US$600 million alleging\ \ that KPMG accountants “failed to maintain independence and an appropriate attitude\ \ of professional skepticism,” and breached their duty of care when auditing the\ \ private-equity firm.\n\n 2022\n\nIn January, the Malaysian government reported\ \ that KPMG's local affiliate had agreed to pay a fine of RM 333 million ($111\ \ million) to settle the case filed against it in connect with the 1MDB funds\ \ scandal.\n\nA group of investors in Airbus, the Dutch foundation Stichting Investor\ \ Loss Compensation (SILC), filed a lawsuit against Airbus, KPMG and EY in the\ \ Hague District Court alleging they suffered damages worth at least €300 million\ \ (US$340 million) as a result of company's misleading publications about the\ \ manufacturer’s involvement in and financial settlements involving corruption,\ \ bribery, and other forms of fraud.\n\nUK regulator The Financial Reporting Council\ \ (FRC) fined KPMG £3 million for audit failings at collapsed alcohol retailer\ \ Conviviality.\n\nA settlement agreement between the UK regulator, the Financial\ \ Reporting Council and a KPMG Partner, Stuart Smith, who led the firm’s audit\ \ of IT company, Regenersis later renamed as Blancco Technology Group, resulted\ \ in a fine of £150,000 after he admitted misleading its inspectors.\n\nCarillion\ \ audit role\nIn January 2018, it was announced that KPMG, auditor of collapsed\ \ UK construction firm Carillion, would have its role examined by the Financial\ \ Reporting Council, (FRC) and it was summoned to give evidence before two House\ \ of Commons select committees on 22 February 2018.\n\nOn 13 February 2018, the\ \ 'Big 4' accountancy firms, including KPMG, were described by MP Frank Field\ \ as \"feasting on what was soon to become a carcass\" after collecting fees of\ \ £72m for Carillion work during the years leading up to its collapse. KPMG was\ \ singled out for particular criticism for signing off Carillion's last accounts\ \ before a profit warning in July 2017: \"Either KPMG failed to spot the warning\ \ signs, or its judgement was clouded by its cosy relationship with the company\ \ and the multimillion-pound fees it received,\" said MP Rachel Reeves. Two out\ \ of three former Carillion finance directors had also worked for KPMG.\n\nKPMG\ \ defended itself, saying that in the construction industry \"an accumulation\ \ of adverse events [...] can quite quickly cause a precipitous decline.\" KPMG\ \ chairman and senior partner Bill Michael said: \"It does not follow automatically\ \ from a company collapse either that the opinion of management was wrong, or\ \ that the auditor did a bad job.\"\n\nOn 22 February 2018, MPs contested evidence\ \ from KPMG (in one exchange MP Peter Kyle told KPMG partner Peter Meehan: \"\ I would not hire you to do an audit of the contents of my fridge\"). Rachel Reeves,\ \ chair of the business select committee, said:\n\nAuditing is a multi-million-pound\ \ business for the Big Four. On this morning's evidence from KPMG and Deloitte,\ \ these audits appear to be a colossal waste of time and money, fit only to provide\ \ false assurance to investors, workers and the public. [...] Carillion staff\ \ and investors could see the problems at the company but those responsible –\ \ auditors, regulators, and, ultimately, the directors – did nothing to stop Carillion\ \ being driven off a cliff.\n\nThe final report of the Parliamentary inquiry into\ \ Carillion's collapse, published on 16 May 2018, criticised KPMG for its \"complicity\"\ \ in the company's financial reporting practices:\n\nKPMG audited Carillion for\ \ 19 years, pocketing £29 million in the process. Not once during that time did\ \ they qualify their audit opinion on the financial statements, instead signing\ \ off the figures put in front of them by the company's directors. Yet, had KPMG\ \ been prepared to challenge management, the warning signs were there in highly\ \ questionable assumptions about construction contract revenue and the intangible\ \ asset of goodwill accumulated in historic acquisitions. These assumptions were\ \ fundamental to the picture of corporate health presented in audited annual accounts.\ \ In failing to exercise—and voice—professional scepticism towards Carillion's\ \ aggressive accounting judgements, KPMG was complicit in them. It should take\ \ its own share of responsibility for the consequences.\n\nThe select committee\ \ chairs (Frank Field and Rachel Reeves) called for a complete overhaul of Britain's\ \ corporate governance regime, saying the government had \"lacked the decisiveness\ \ or bravery\" to do so, accused the big four accounting firms of operating as\ \ a \"cosy club\", with KPMG singled out for its \"complicity\" in signing off\ \ Carillion's \"increasingly fantastical figures\".\n\nKPMG said:\nWe believe\ \ we conducted our audit appropriately. However, it's only right that following\ \ a corporate collapse of such size and significance, the necessary investigations\ \ are performed. Auditing large and complex businesses involves many judgments\ \ and we will continue to cooperate with the FRC's ongoing investigation. ...\ \ We welcome any future review of our profession. If we consider how the profession\ \ has changed in the last decade […] it is clear there is a need for us to look\ \ closely at our business models.\n\nIn a June 2018 report on audit standards\ \ across eight accounting firms, the FRC identified \"failure to challenge management\ \ and show appropriate scepticism across their audits.\" It highlighted a decline\ \ in the quality of work undertaken by the Big Four, with KPMG performing the\ \ worst. There had, the FRC said, been an \"unacceptable deterioration\" in the\ \ quality of KPMG's work, and the FRC would scrutinise KPMG more closely as a\ \ result. In October 2018, the FRC proposed reforms to tackle the \"underlying\ \ falling trust in business and the effectiveness of audit,\" and severely rebuked\ \ KPMG.\n\nIn November 2018, KPMG said it would no longer undertake consultancy\ \ work for FTSE 350 Index-listed companies if it was also auditing them, in an\ \ effort to \"remove even the perception of a possible conflict\" of interest.\n\ \nThe Carillion investigation followed FRC investigations into KPMG's role at\ \ HBOS, Quindell and The Co-operative Bank. In July 2018, the FRC started an investigation\ \ into KPMG's audit role at collapsed drinks merchant Conviviality.\n\nIn January\ \ 2019, KPMG announced it had suspended the partner that led Carillion's audit\ \ and three members of his team; in August 2021, an FRC disciplinary panel was\ \ scheduled for 10 January 2022 to hear a formal complaint against KPMG and former\ \ KPMG partner Peter Meehan regarding the provision of allegedly false and misleading\ \ information concerning the 2016 Carillion audit. The tribunal convened to hear\ \ the formal complaint started on 10 January 2022. At the disciplinary hearing,\ \ KPMG's UK chief executive Jon Holt said the firm had discovered misconduct by\ \ it staff in its own internal investigations, and immediately reported it to\ \ the FRC.\n\nThe FRC opened a second investigation into how KPMG audited Carillion's\ \ accounts. The FRC's first report, which found a number of breaches, was delivered\ \ to KPMG in September 2020; the FRC was awaiting a KPMG response before deciding\ \ whether to take enforcement action. In March 2021, KPMG was reported to be \"\ inching towards a financial settlement with regulators\" over its auditing of\ \ Carillion, with the FRC expected to impose a record fine, possibly around £25m,\ \ on KPMG for its failings.\n\nIn May 2020, the FT reported that the Official\ \ Receiver was preparing to sue KPMG for £250m over alleged negligence in its\ \ audits of Carillion. In May 2021, the liquidator secured funding for its legal\ \ action, with speculation that the likely damages claim could be as much as £2\ \ billion. In February 2022, Sky News reported the Official Receiver's claim would\ \ be in the range of £1bn-£1.5bn, with one source suggesting around £1.2bn. The\ \ OR's negligence claim focuses on the value of major contracts which were not\ \ properly accounted for in audits in 2014, 2015 and 2016, resulting in misstatements\ \ in excess of £800m within Carillion's financial reports. KPMG was said to have\ \ accepted management explanations for inflated revenue and understated cost positions.\ \ The OR had received legal advice that KPMG was answerable to Carillion's creditors\ \ for a portion of their losses. KPMG said: \"We believe this claim is without\ \ merit and we will robustly defend the case. Responsibility for the failure of\ \ Carillion lies solely with the company's board and management, who set the strategy\ \ and ran the business.\" The claim, for £1.3 billion (US$1.77 billion), accused\ \ KPMG of missing \"red flags\" during audits of Carillion, in one of the largest\ \ claims against an audit firm.\n\nAlterations to past audit work\nIn June 2019,\ \ KPMG was fined $50 million for altering its past audit work after receiving\ \ stolen data from accounting industry watch dog Public Company Accounting Oversight\ \ Board (PCAOB). KPMG admitted to its mistakes and as a part of its settlement,\ \ it also agreed to hire an independent consultant to review its internal controls.\n\ \n2017 South African corruption scandal\n\nIn 2017, KPMG was embroiled in related\ \ scandals involving the Gupta family. KPMG, whose history in South Africa dated\ \ back to 1895, and which had been part of the international organization since\ \ its founding in 1979, faced calls for closure, and an uncertain future, as a\ \ consequence of the damage done to the South African economy as a result of its\ \ activities.\n\nKPMG had been working with a Gupta family company in the mining\ \ sector, Oakbay Resources and Energy, for 15 years prior to the revelations of\ \ corruption and collusion in 2016, at which point KPMG resigned. The full impact\ \ and financial profit that KPMG received is yet to be determined; however, at\ \ least one large company has terminated its services with KPMG due to its relationship\ \ with Oakbay.\n\nIn July 2017, after controversial documents were leaked by the\ \ amaBhungane Centre for Investigative Journalism, former chief executive of KPMG\ \ South Africa and the former partner that was responsible for audits related\ \ to the Gupta family, Moses Kgosana, withdrew from becoming the chairman of Alexander\ \ Forbes, a financial services firm.\n\nIn 2015, KPMG issued a controversial report\ \ that implicated former Finance Minister Pravin Gordhan in the creation of an\ \ illegal intelligence gathering unit of the South African Revenue Service (SARS).\ \ This report was seen by elements of the media to be part of a wider Gupta-linked\ \ state capture conspiracy, with the aim of forcing Gordhan out of his post. The\ \ report was withdrawn by KPMG in September 2017, earning the ire of the Commissioner\ \ of SARS, Tom Moyane.\n\nAfter an internal investigation that found work done\ \ for the Gupta family fell \"considerably short\" of the firm's standards and\ \ amid rising political and public backlash, KPMG's senior leadership in South\ \ Africa, including its chairman Ahmed Jaffer, CEO Trevor Hoole, COO Steven Louw,\ \ and five partners, resigned in September 2017.\n\nSave South Africa, a civil-society\ \ group, accused KPMG and UK PR firm Bell Pottinger of playing a \"central role\ \ in facilitating state capture.\" Numerous South African companies either fired\ \ KPMG in the immediate aftermath of the scandal, or were reconsidering their\ \ relationships with the firm with the international chairman of KPMG, John Veihmeyer,\ \ apologising for the conduct of the South African arm and the firm pledged to\ \ donate fees earned from Gupta businesses, as well as the withdrawn SARS report,\ \ to anti-corruption activities.\n\nCorporate theme song\nIn 2001, a blogger named\ \ Chris Raettig discovered a number of “corporate theme songs,” including one\ \ for KPMG. He created a page for these songs, and included deep links to the\ \ MP3 files on the source servers. KPMG responded by sending a takedown notice\ \ to Raettig, reading: “Please be aware such links require that a formal Agreement\ \ exist between our two parties, as mandated by our organization's Web Link Policy.”\ \ Raettig wrote publicly about this takedown notice, responding that \"my own\ \ organization's Web link policy requires no such formal agreement.\" The chorus\ \ to the anthem reads: KPMG – We're as strong as can be, A team of power and energy,\ \ We go for the gold, together we hold, Onto our vision of global strategy!\n\n\ Sponsorship \n\nThe Swedish member firm was the main sponsor for Swedish biathlete\ \ Magdalena Forsberg, six-time world champion and two-time Olympic medalist. Forsberg\ \ was working as a tax consultant at the KPMG Sundsvall office parallel to her\ \ athletic career.\n\nIn February 2008, Phil Mickelson, ranked one of the best\ \ golfers in the world, signed a three-year global sponsorship deal with KPMG.\ \ As part of the agreement, Mickelson was to wear the KPMG logo on his headwear\ \ during all golf related appearances. The sponsorship lasted until 22 February\ \ 2022, when the two parties mutually split following comments in which Mickelson\ \ called Saudi Arabia \"scary\" but would overlook the country's human rights\ \ controversies in the best interest of the PGA Tour.\n\nThe Canadian member firm\ \ sponsored skier Alexandre Bilodeau, who won the first gold medal for Canada\ \ on home soil in the 2010 Vancouver Olympics. Alexandre's father is a tax partner\ \ in the Montreal office.\n\nKPMG and McLaren Technology Group have formed a strategic\ \ alliance to apply McLaren Applied Technologies' (MAT) predictive analytics and\ \ technology to KPMG's audit and advisory services. McLaren 2015 Formula 1 car\ \ has the KPMG logo engraved above the pilot seat.\n\nSince 2016, KPMG has been\ \ strategic sponsor of Brain Bar, a Budapest-based, annually held festival on\ \ the future.\n\nAwards \nKPMG ranked in the top two overall in Consultancy Rankings\ \ 2009 by OpRisk & Compliance – in recognition of KPMG's experience in risk management.\n\ \nIn 2011, the company was ranked second on the World's Best Outsourcing Advisors\ \ – in recognition of the firm's depth of experience, global reach and holistic\ \ approach. That same year, the company was inducted into Working Mother Hall\ \ of Fame after being honored for 15 years as one of Working Mother magazine's\ \ 100 Best Companies for Working Mothers. KPMG was ranked number 13 in Consulting\ \ Magazine'''s Best Firms to Work for in 2016.\n\nIn 2017, KPMG was ranked 29th\ \ on the Fortune'' list of 100 best companies to work for. That same year, KPMG,\ \ along with PwC, Deloitte, and PA Consulting Group, were among the UK's 25 top\ \ companies to work for.\n\nSee also \n\n Accounting networks and associations\n\ \ Big Four accounting firms: PwC, EY, Deloitte\n Crowe Global, BDO Global, Grant\ \ Thornton\n Financial audit\n FTSE 100 Index\n Management consulting\n Professional\ \ services\n Tax advisor\n\nReferences\n\nExternal links \n\n \n\n \n1818 establishments\ \ in England\nConsulting firms established in 1818\nFinancial services companies\ \ established in 1818\nAmstelveen\nCompanies based in North Holland\nCompanies\ \ based in Zug\nInternational management consulting firms\nMadoff investment scandal\n\ Privately held companies of Switzerland" - "(24 September 1564 – 16 May 1620), known in Japanese as , was an English navigator\ \ who, in 1600, was the first Englishman to reach Japan leading a five-ship expedition\ \ for a private Dutch fleet. Of the few survivors of the only ship that reached\ \ Japan, Adams and his second mate Jan Joosten were not allowed to leave the country\ \ while Jacob Quaeckernaeck and Melchior van Santvoort were permitted to go back\ \ to the Dutch Republic to invite them to trade.\n\nAdams, along with former second\ \ mate Joosten, then settled in Japan, and the two became some of the first (of\ \ very few) Western samurai.\n\nSoon after Adams' arrival in Japan, he became\ \ a key advisor to the shōgun Tokugawa Ieyasu. Adams directed construction for\ \ the shōgun of the first Western-style ships in the country. He was later key\ \ to Japan's approving the establishment of trading factories by the Netherlands\ \ and England. He was also highly involved in Japan's Red Seal Asian trade, chartering\ \ and serving as captain of four expeditions to Southeast Asia. He died in Japan\ \ at age 55. He has been recognised as one of the most influential foreigners\ \ in Japan during this period.\n\nEarly life\nAdams was born in Gillingham, Kent,\ \ England. When Adams was twelve his father died, and he was apprenticed to shipyard\ \ owner Master Nicholas Diggins at Limehouse for the seafaring life. He spent\ \ the next twelve years learning shipbuilding, astronomy, and navigation before\ \ entering the Royal Navy.\n\nWith England at war with Spain, Adams served in\ \ the Royal Navy under Sir Francis Drake. He saw naval service against the Spanish\ \ Armada in 1588 as master of the Richarde Dyffylde, a resupply ship. Adams was\ \ recorded to have married Mary Hyn in the parish church of St Dunstan's, Stepney\ \ on 20 August 1589 and they had two children: a son John and a daughter named\ \ Deliverance. Soon after, Adams became a pilot for the Barbary Company. During\ \ this service, Jesuit sources claim he took part in an expedition to the Arctic\ \ that lasted about two years, in search of a Northeast Passage along the coast\ \ of Siberia to the Far East. The veracity of this claim is somewhat suspect,\ \ because he never referred to such an expedition in his autobiographical letter\ \ written from Japan; its wording implies that the 1598 voyage was his first involvement\ \ with the Dutch. The Jesuit source may have misattributed to Adams a claim by\ \ one of the Dutch members of Mahu's crew who had been on Rijp's ship during the\ \ voyage that discovered Spitsbergen.\n\nExpedition to the Far East\n\nAttracted\ \ by the Dutch trade with India, Adams, then 34 years old, shipped as pilot major\ \ with a five-ship fleet dispatched from the isle of Texel to the Far East in\ \ 1598 by a company of Rotterdam merchants (a voorcompagnie, predecessor of the\ \ Dutch East India Company). His brother Thomas accompanied him. The Dutch were\ \ allied with England and as well as fellow Protestants, they too were also at\ \ war with Spain fighting for their independence.\n\nThe Adams brothers set sail\ \ from Texel on the Hoope and joined with the rest of the fleet on 24 June. The\ \ fleet consisted of:\n the Hoope (\"Hope\"), under Admiral Jacques Mahu (d. 1598),\ \ he was succeeded by Simon de Cordes (d. 1599) and Simon de Cordes Jr. This ship\ \ was lost near the Hawaiian Islands;\n the Liefde (\"Love\" or \"Charity\"),\ \ under Simon de Cordes, 2nd in command, succeeded by Gerrit van Beuningen and\ \ finally under Jacob Kwakernaak; this was the only ship which reached Japan\n\ \ the Geloof (\"Faith\"), under Gerrit van Beuningen, and in the end, Sebald de\ \ Weert; the only ship which came back in Rotterdam.\n the Trouw (\"Loyalty\"\ ), under Jurriaan van Boekhout (d. 1599) and finally, Baltazar de Cordes; was\ \ captured in Tidore\n the Blijde Boodschap (\"Good Tiding\" or \"The Gospel\"\ ), under Sebald de Weert, and later, Dirck Gerritz was seized in Valparaiso.\n\ \nJacques Mahu and Simon de Cordes were the leaders of an expedition with the\ \ goal to achieve the Chile, Peru and other kingdoms (in New Spain like Nueva\ \ Galicia; Captaincy General of Guatemala; Nueva Vizcaya; New Kingdom of León\ \ and Santa Fe de Nuevo México). The fleet's original mission was to sail for\ \ the west coast of South America, where they would sell their cargo for silver,\ \ and to head for Japan only if the first mission failed. In that case, they were\ \ supposed to obtain silver in Japan and to buy spices in the Moluccas, before\ \ heading back to Europe. Their goal was to sail through the Strait of Magellan\ \ to get to their destiny, which scared many sailors because of the harsh weather\ \ conditions. The first major expedition around South America was organized by\ \ a voorcompagnie, the Rotterdam or Magelhaen Company. It organized two fleets\ \ of five and four ships with 750 sailors and soldiers, including 30 English musicians.\n\ \nAfter leaving Goeree on 27 June 1598 the ships sailed to the Channel, but anchored\ \ in the Downs till mid July. When the ships approached the shores of North Africa\ \ Simon de Cordes realized he had been far too generous in the early weeks of\ \ the voyage and instituted a 'bread policy'. At the end of August they landed\ \ on Santiago, Cape Verde and Mayo off the coast of Africa because of a lack of\ \ water and need for fresh fruit. They stayed around three weeks in the hope to\ \ buy some goats. Near Praia they succeeded to occupy a Portuguese castle on the\ \ top of a hill, but came back without anything substantial. At Brava, Cape Verde\ \ half of the crew of the \"Hope\" caught fever there, with most of the men sick,\ \ among them Admiral Jacques Mahu. After his death the leadership of the expedition\ \ was taken over by Simon de Cordes, with Van Beuningen as vice admiral. Because\ \ of contrary wind the fleet was blown off course (NE in the opposite direction)\ \ and arrived at Cape Lopez, Gabon, Central Africa. An outbreak of scurvy forced\ \ a landing on Annobón, on 9 December. Several men became sick because of dysentery.\ \ They stormed the island only to find that the Portuguese and their native allies\ \ had set fire to their own houses and fled into the hills. They put all the sick\ \ ashore to recover and left early January. Because of starvation the men fell\ \ into great weakness; some tried to eat leather. On 10 March 1599 they reached\ \ the Rio de la Plata, in Argentina. Early April they arrived at the Strait, 570 km\ \ long, 2 km wide at its narrowest point, with an inaccurate chart of the seabed.\ \ The wind turned out to be unfavorable and this remained so for the next four\ \ months. Under freezing temperatures and poor visibility they caught penguins,\ \ seals, mussels, duck and fish. About two hundred crew members died. On 23 August\ \ the weather improved.\n\nOn the Pacific\n\nWhen finally the Pacific Ocean was\ \ reached on 3 September 1599, the ships were caught in a storm and lost sight\ \ of each other. The \"Loyalty\" and the \"Believe\" were driven back in the strait.\ \ After more than a year each ship went its own way.\nThe Geloof returned to Rotterdam\ \ in July 1600 with 36 men surviving of the original 109 crew.)\nDe Cordes ordered\ \ his small fleet to wait four weeks for each other on Santa María Island, Chile,\ \ but some ships missed the island. Adams wrote \"they brought us sheep and potatoes\"\ . From here the story becomes less reliable because of a lack of sources and changes\ \ in command. In early November, the \"Hope\" landed on Mocha Island where 27\ \ people were killed by the people from Araucania, including Simon de Cordes.\ \ (In the account given to Olivier van Noort it was said that Simon der Cordes\ \ was slain at the Punta de Lavapie, but Adams gives Mocha Island as the scene\ \ of his death.) The \"Love\" hit the island, but went on to Punta de Lavapié\ \ near Concepción, Chile. A Spanish captain supplied the \"Loyalty\" and \"Hope\"\ \ with food; the Dutch helped him against the Araucans, who had killed 23 Dutch,\ \ including Thomas Adams (according to his brother in his second letter) and Gerrit\ \ van Beuningen, who was replaced by Jacob Quaeckernaeck.\n\nDuring the voyage,\ \ before December 1598, Adams changed ships to the Liefde (originally named Erasmus\ \ and adorned by a wooden carving of Erasmus on her stern). The statue was preserved\ \ in the Ryuko-in Buddhist temple in Sano City, Tochigi-ken and moved to the Tokyo\ \ National Museum in the 1920s. The Trouw reached Tidore (Eastern Indonesia).\ \ The crew were killed by the Portuguese in January 1601.\n\nIn fear of the Spaniards,\ \ the remaining crews determined to leave the island and sail across the Pacific.\ \ It was 27 November 1599 when the two ships sailed westward for Japan. On their\ \ way, the two ships made landfall in \"certain islands\" where eight sailors\ \ deserted the ships. Later during the voyage, a typhoon claimed the Hope with\ \ all hands, in late February 1600.\n\nArrival in Japan\n\nIn April 1600, after\ \ more than nineteen months at sea, a crew of twenty-three sick and dying men\ \ (out of the 100 who started the voyage) brought the Liefde to anchor off the\ \ island of Kyūshū, Japan. Its cargo consisted of eleven chests of trade goods:\ \ coarse woolen cloth, glass beads, mirrors, and spectacles; and metal tools and\ \ weapons: nails, iron, hammers, nineteen bronze cannon; 5,000 cannonballs; 500\ \ muskets, 300 chain-shot, and three chests filled with coats of mail.\n\nWhen\ \ the nine surviving crew members were strong enough to stand, they made landfall\ \ on 19 April off Bungo (present-day Usuki, Ōita Prefecture). They were met by\ \ Japanese locals and Portuguese Jesuit missionary priests claiming that Adams'\ \ ship was a pirate vessel and that the crew should be executed as pirates. The\ \ ship was seized and the sickly crew were imprisoned at Osaka Castle on orders\ \ by Tokugawa Ieyasu, the daimyō of Edo and future shōgun. The nineteen bronze\ \ cannon of the Liefde were unloaded and, according to Spanish accounts, later\ \ used at the decisive Battle of Sekigahara on 21 October 1600.\n\nAdams met Ieyasu\ \ in Osaka three times between May and June 1600. He was questioned by Ieyasu,\ \ then a guardian of the young son of the Taikō Toyotomi Hideyoshi, the ruler\ \ who had just died. Adams' knowledge of ships, shipbuilding and nautical smattering\ \ of mathematics appealed to Ieyasu.\n\nComing before the king, he viewed me well,\ \ and seemed to be wonderfully favourable. He made many signs unto me, some of\ \ which I understood, and some I did not. In the end, there came one that could\ \ speak Portuguese. By him, the king demanded of me of what land I was, and what\ \ moved us to come to his land, being so far off. I showed unto him the name of\ \ our country, and that our land had long sought out the East Indies, and desired\ \ friendship with all kings and potentates in way of merchandise, having in our\ \ land diverse commodities, which these lands had not… Then he asked whether our\ \ country had wars? I answered him yea, with the Spaniards and Portugals, being\ \ in peace with all other nations. Further, he asked me, in what I did believe?\ \ I said, in God, that made heaven and earth. He asked me diverse other questions\ \ of things of religions, and many other things: As what way we came to the country.\ \ Having a chart of the whole world, I showed him, through the Strait of Magellan.\ \ At which he wondered, and thought me to lie. Thus, from one thing to another,\ \ I abode with him till mid-night. (from William Adams' letter to his wife)\n\n\ Adams wrote that Ieyasu denied the Jesuits' request for execution on the ground\ \ that:\nwe as yet had not done to him nor to none of his land any harm or damage;\ \ therefore against Reason or Justice to put us to death. If our country had wars\ \ the one with the other, that was no cause that he should put us to death; with\ \ which they were out of heart that their cruel pretence failed them. For which\ \ God be forever praised. (William Adams' letter to his wife)\n\nIeyasu ordered\ \ the crew to sail the Liefde from Bungo to Edo where, rotten and beyond repair,\ \ she sank.\n\nJapan's first western-style sailing ships\nIn 1604, Tokugawa ordered\ \ Adams and his companions to help Mukai Shōgen, who was commander-in-chief of\ \ the navy of Uraga, to build Japan's first Western-style ship. The sailing ship\ \ was built at the harbour of Itō on the east coast of the Izu Peninsula, with\ \ carpenters from the harbour supplying the manpower for the construction of an\ \ 80-ton vessel. It was used to survey the Japanese coast. The shōgun ordered\ \ a larger ship of 120 tons to be built the following year; it was slightly smaller\ \ than the Liefde, which was 150 tons. According to Adams, Tokugawa \"came aboard\ \ to see it, and the sight whereof gave him great content\". In 1610, the 120-ton\ \ ship (later named San Buena Ventura) was lent to shipwrecked Spanish sailors.\ \ They sailed it to New Spain, accompanied by a mission of twenty-two Japanese\ \ led by Tanaka Shōsuke.\n\nFollowing the construction, Tokugawa invited Adams\ \ to visit his palace whenever he liked and \"that always I must come in his presence.\"\ \n\nOther survivors of the Liefde were also rewarded with favours, and were allowed\ \ to pursue foreign trade. Most of the survivors left Japan in 1605 with the help\ \ of the daimyō of Hirado. Although Adams did not receive permission to leave\ \ Japan until 1613, Melchior van Santvoort and Jan Joosten van Lodensteijn engaged\ \ in trade between Japan and Southeast Asia and reportedly made a fortune. Both\ \ of them were reported by Dutch traders as being in Ayutthaya in early 1613,\ \ sailing richly cargoed junks.\n\nIn 1609 Adams contacted the interim governor\ \ of the Philippines, Rodrigo de Vivero y Aberrucia on behalf of Tokugawa Ieyasu,\ \ who wished to establish direct trade contacts with New Spain. Friendly letters\ \ were exchanged, officially starting relations between Japan and New Spain. Adams\ \ is also recorded as having chartered Red Seal Ships during his later travels\ \ to Southeast Asia. (The Ikoku Tokai Goshuinjō has a reference to Miura Anjin\ \ receiving a shuinjō, a document bearing a red Shogunal seal authorising the\ \ holder to engage in foreign trade, in 1614.)\n\nSamurai status\nTaking a liking\ \ to Adams, the shōgun appointed him as a diplomatic and trade advisor, bestowing\ \ great privileges upon him. Ultimately, Adams became his personal advisor on\ \ all things related to Western powers and civilization. After a few years, Adams\ \ replaced the Jesuit Padre João Rodrigues as the Shogun's official interpreter.\ \ Padre Valentim Carvalho wrote: \"After he had learned the language, he had access\ \ to Ieyasu and entered the palace at any time\"; he also described him as \"\ a great engineer and mathematician\".\n\nAdams had a wife Mary Hyn and two children\ \ back in England, but Ieyasu forbade the Englishman to leave Japan. He was presented\ \ with two swords representing the authority of a Samurai. The Shogun decreed\ \ that William Adams the pilot was dead and that Miura Anjin (三浦按針), a samurai,\ \ was born. According to the shōgun, this action \"freed\" Adams to serve the\ \ Shogunate permanently, effectively making Adams' wife in England a widow. (Adams\ \ managed to send regular support payments to her after 1613 via the English and\ \ Dutch companies.) Adams also was given the title of hatamoto (bannerman), a\ \ high-prestige position as a direct retainer in the shōgun's court.\n\nAdams\ \ was given generous revenues: \"For the services that I have done and do daily,\ \ being employed in the Emperor's service, the emperor has given me a living\"\ \ (Letters). He was granted a fief in Hemi (Jpn: 逸見) within the boundaries of\ \ present-day Yokosuka City, \"with eighty or ninety husbandmen, that be my slaves\ \ or servants\" (Letters). His estate was valued at 250 koku (a measure of the\ \ yearly income of the land in rice, with one koku defined as the quantity of\ \ rice sufficient to feed one person for one year). He finally wrote \"God hath\ \ provided for me after my great misery\" (Letters), by which he meant the disaster-ridden\ \ voyage that had initially brought him to Japan.\n\nAdams' estate was located\ \ next to the harbour of Uraga, the traditional point of entrance to Edo Bay.\ \ There he was recorded as dealing with the cargoes of foreign ships. John Saris\ \ related that when he visited Edo in 1613, Adams had resale rights for the cargo\ \ of a Spanish ship at anchor in Uraga Bay.\n\nIt is rumored that William had\ \ a child born in Hirado with another Japanese woman.\n\nAdams' position gave\ \ him the means to marry Oyuki (お雪), the adopted daughter of Magome Kageyu. He\ \ was a highway official who was in charge of a packhorse exchange on one of the\ \ grand imperial roads that led out of Edo (roughly present-day Tokyo). Although\ \ Magome was important, Oyuki was not of noble birth, nor high social standing.\ \ Adams may have married from affection rather than for social reasons. Adams\ \ and Oyuki had a son Joseph and a daughter Susanna. Adams was constantly traveling\ \ for work. Initially, he tried to organise an expedition in search of the Arctic\ \ passage that had eluded him previously.\n\nAdams had a high regard for Japan,\ \ its people, and its civilisation:\nThe people of this Land of Japan are good\ \ of nature, courteous above measure, and valiant in war: their justice is severely\ \ executed without any partiality upon transgressors of the law. They are governed\ \ in great civility. I mean, not a land better governed in the world by civil\ \ policy. The people be very superstitious in their religion, and are of diverse\ \ opinions.\n\nEstablishment of the Dutch East India Company in Japan\n\nIn 1604\ \ Ieyasu sent the Liefde'''s captain, Jacob Quaeckernaeck, and the treasurer,\ \ Melchior van Santvoort, on a shōgun-licensed Red Seal Ship to Patani in Southeast\ \ Asia. He ordered them to contact the Dutch East India Company trading factory,\ \ which had just been established in 1602, in order to bring more western trade\ \ to Japan and break the Portuguese monopoly. In 1605, Adams obtained a letter\ \ of authorization from Ieyasu formally inviting the Dutch to trade with Japan.\ \ \n\nHampered by conflicts with the Portuguese and limited resources in Asia,\ \ the Dutch were not able to send ships to Japan until 1609. Two Dutch ships,\ \ commanded by Jacques Specx, De Griffioen (the \"Griffin\", 19 cannons) and Roode\ \ Leeuw met Pijlen (the \"Red lion with arrows\", 400 tons, 26 cannons), were\ \ sent from Holland and reached Japan on 2 July 1609. The men of this Dutch expeditionary\ \ fleet established a trading base or \"factory\" on Hirado Island. Two Dutch\ \ envoys, Puyck and van den Broek, were the official bearers of a letter from\ \ Prince Maurice of Nassau to the court of Edo. Adams negotiated on behalf of\ \ these emissaries. The Dutch obtained free trading rights throughout Japan and\ \ to establish a trading factory there. (By contrast, the Portuguese were allowed\ \ to sell their goods only in Nagasaki at fixed, negotiated prices.)\n\nThe Hollandes\ \ be now settled (in Japan) and I have got them that privilege as the Spaniards\ \ and Portingals could never get in this 50 or 60 years in Japan.\n\nAfter obtaining\ \ this trading right through an edict of Tokugawa Ieyasu on 24 August 1609, the\ \ Dutch inaugurated a trading factory in Hirado on 20 September 1609. The Dutch\ \ preserved their \"trade pass\" (Dutch: Handelspas) in Hirado and then Dejima\ \ as a guarantee of their trading rights during the following two centuries that\ \ they operated in Japan.\n\nEstablishment of an English trading factory\nIn 1611,\ \ Adams learned of an English settlement in Banten Sultanate, present-day Indonesia.\ \ He wrote asking them to convey news of him to his family and friends in England.\ \ He invited them to engage in trade with Japan which \"the Hollanders have here\ \ an Indies of money.\"\n\nIn 1613, the English captain John Saris arrived at\ \ Hirado in the ship Clove, intending to establish a trading factory for the British\ \ East India Company. The Dutch East India Company (VOC) already had a major post\ \ at Hirado.\n\nSaris noted Adams praise of Japan and adoption of Japanese customs:\n\ He persists in giving \"admirable and affectionated commendations of Japan. It\ \ is generally thought amongst us that he is a naturalized Japaner.\" (John Saris)\n\ \nIn Hirado, Adams refused to stay in English quarters, residing instead with\ \ a local Japanese magistrate. The English noted that he wore Japanese dress and\ \ spoke Japanese fluently. Adams estimated the cargo of the Clove was of little\ \ value, essentially broadcloth, tin and cloves (acquired in the Spice Islands),\ \ saying that \"such things as he had brought were not very vendible\".\n\nAdams\ \ traveled with Saris to Suruga, where they met with Ieyasu at his principal residence\ \ in September. The Englishmen continued to Kamakura where they visited the noted\ \ Kamakura Great Buddha. (Sailors etched their names of the Daibutsu, made in\ \ 1252.) They continued to Edo, where they met Ieyasu's son Hidetada, who was\ \ nominally shōgun, although Ieyasu retained most of the decision-making powers.\ \ During that meeting, Hidetada gave Saris two varnished suits of armour for King\ \ James I. As of 2015, one of these suits of armour is housed in the Tower of\ \ London, the other is on display in the Royal Armouries Museum, Leeds. The suits\ \ were made by Iwai Yozaemon of Nanbu. They were part of a series of presentation\ \ armours of ancient 15th-century Dō-maru style.\n\nOn their return, the English\ \ party visited Tokugawa again. He conferred trading privileges to the English\ \ by a Red Seal permit, giving them \"free license to abide, buy, sell and barter\"\ \ in Japan. The English party returned to Hirado on 9 October 1613.\n\nAt this\ \ meeting, Adams asked for and obtained Tokugawa's authorisation to return to\ \ his home country. But, he finally declined Saris' offer to take him back to\ \ England: \"I answered him I had spent in this country many years, through which\ \ I was poor... [and] desirous to get something before my return\". His true reasons\ \ seem to lie rather with his profound antipathy for Saris: \"The reason I would\ \ not go with him was for diverse injuries done against me, which were things\ \ to me very strange and unlooked for.\" (William Adams letters)\n\nAdams accepted\ \ employment with the newly founded Hirado trading factory, signing a contract\ \ on 24November 1613, with the East India Company for the yearly salary of 100\ \ English Pounds. This was more than double the regular salary of 40 Pounds earned\ \ by the other factors at Hirado. Adams had a lead role, under Richard Cocks and\ \ together with six other compatriots (Tempest Peacock, Richard Wickham, William\ \ Eaton, Walter Carwarden, Edmund Sayers and William Nealson), in organising this\ \ new English settlement.\n\nAdams had advised Saris against the choice of Hirado,\ \ which was small and far away from the major markets in Osaka and Edo; he had\ \ recommended selection of Uraga near Edo for a post, but Saris wanted to keep\ \ an eye on the Dutch activities.\n\nDuring the ten-year operations of the East\ \ Indian Company (1613 and 1623), only three English ships after the Clove brought\ \ cargoes directly from London to Japan. They were invariably described as having\ \ poor value on the Japanese market. The only trade which helped support the factory\ \ was that organised between Japan and South-East Asia; this was chiefly Adams\ \ selling Chinese goods for Japanese silver:\nWere it not for hope of trade into\ \ China, or procuring some benefit from Siam, Pattania and Cochin China, it were\ \ no staying in Japon, yet it is certen here is silver enough & may be carried\ \ out at pleasure, but then we must bring them commodities to their liking. (Richard\ \ Cocks' diary, 1617)\n\nReligious rivalries\nThe Portuguese and other Catholic\ \ religious orders in Japan considered Adams a rival as an English Protestant.\ \ After Adams' power had grown, the Jesuits tried to convert him, then offered\ \ to secretly bear him away from Japan on a Portuguese ship. The Jesuits' willingness\ \ to disobey the order by Ieyasu prohibiting Adams from leaving Japan showed that\ \ they feared his growing influence. Catholic priests asserted that he was trying\ \ to discredit them. In 1614, Carvalho complained of Adams and other merchants\ \ in his annual letter to the Pope, saying that \"by false accusation [Adams and\ \ others] have rendered our preachers such objects of suspicion that he [Ieyasu]\ \ fears and readily believes that they are rather spies than sowers of the Holy\ \ Faith in his kingdom.\"\n\nIeyasu, influenced by Adams' counsels and disturbed\ \ by unrest caused by the numerous Catholic converts, expelled the Portuguese\ \ Jesuits from Japan in 1614. He demanded that Japanese Catholics abandon their\ \ faith. Adams apparently warned Ieyasu against Spanish approaches as well.\n\n\ Character\nAfter fifteen years spent in Japan, Adams had a difficult time establishing\ \ relations with the English arrivals. He initially shunned the company of the\ \ newly arrived English sailors in 1613 and could not get on good terms with Saris.\ \ But Richard Cocks, the head of the Hirado factory, came to appreciate Adams'\ \ character and what he had acquired of Japanese self-control. In a letter to\ \ the East India Company Cocks wrote:\n\nParticipation in Asian trade\n\nAdams\ \ later engaged in various exploratory and commercial ventures. He tried to organise\ \ an expedition to the legendary Northwest Passage from Asia, which would have\ \ greatly reduced the sailing distance between Japan and Europe. Ieyasu asked\ \ him if \"our countrimen could not find the northwest passage\" and Adams contacted\ \ the East India Company to organise manpower and supplies. The expedition never\ \ got underway.\n\nIn his later years, Adams worked for the English East Indian\ \ Company. He made a number of trading voyages to Siam in 1616 and Cochinchina\ \ in 1617 and 1618, sometimes for the English East India Company, sometimes for\ \ his own account. He is recorded in Japanese records as the owner of a Red Seal\ \ Ship of 500 tons.\n\nGiven the few ships that the company sent from England\ \ and the poor trading value of their cargoes (broadcloth, knives, looking glasses,\ \ Indian cotton, etc.), Adams was influential in gaining trading certificates\ \ from the shōgun to allow the company to participate in the Red Seal system.\ \ It made a total of seven junk voyages to Southeast Asia with mixed profit results.\ \ Four were led by William Adams as captain. Adams renamed a ship he acquired\ \ in 1617 as Gift of God; he sailed it on his expedition that year to Cochinchina.\ \ The expeditions he led are described more fully below.\n\n1614 Siam expedition\n\ In 1614, Adams wanted to organise a trade expedition to Siam to bolster the company\ \ factory's activities and cash situation. He bought and upgraded a 200-ton Japanese\ \ junk for the company, renaming her as Sea Adventure; and hired about 120 Japanese\ \ sailors and merchants, as well as several Chinese traders, an Italian and a\ \ Castilian (Spanish) trader. The heavily laden ship left in November 1614. The\ \ merchants Richard Wickham and Edmund Sayers of the English factory's staff also\ \ joined the voyage.\n\nThe expedition was to purchase raw silk, Chinese goods,\ \ sappan wood, deer skins and ray skins (the latter used for the hilts of Japanese\ \ swords). The ship carried £1250 in silver and £175 of merchandise (Indian cottons,\ \ Japanese weapons and lacquerware). The party encountered a typhoon near the\ \ Ryukyu Islands (modern Okinawa) and had to stop there to repair from 27 December\ \ 1614 until May 1615. It returned to Japan in June 1615 without having completed\ \ any trade.\n\n1615 Siam expedition\nAdams left Hirado in November 1615 for Ayutthaya\ \ in Siam on the refitted Sea Adventure, intent on obtaining sappanwood for resale\ \ in Japan. His cargo was chiefly silver (£600) and the Japanese and Indian goods\ \ unsold from the previous voyage.\n\nHe bought vast quantities of the high-profit\ \ products. His partners obtained two ships in Siam in order to transport everything\ \ back to Japan. Adams sailed the Sea Adventure to Japan with 143 tonnes of sappanwood\ \ and 3700 deer skins, returning to Hirado in 47 days. (The return trip took from\ \ 5 June and 22 July 1616). Sayers, on a hired Chinese junk, reached Hirado in\ \ October 1616 with 44 tons of sappanwood. The third ship, a Japanese junk, brought\ \ 4,560 deer skins to Nagasaki, arriving in June 1617 after the monsoon.\n\nLess\ \ than a week before Adams' return, Ieyasu had died. Adams accompanied Cocks and\ \ Eaton to court to offer company presents to the new ruler, Hidetada. Although\ \ Ieyasu's death seems to have weakened Adams' political influence, Hidetada agreed\ \ to maintain the English trading privileges. He also issued a new Red Seal permit\ \ (Shuinjō) to Adams, which allowed him to continue trade activities overseas\ \ under the shōgun's protection. His position as hatamoto was also renewed.\n\n\ On this occasion, Adams and Cocks also visited the Japanese Admiral Mukai Shōgen\ \ Tadakatsu, who lived near Adams' estate. They discussed plans for a possible\ \ invasion of the Catholic Philippines.\n\n1617 Cochinchina expedition\nIn March\ \ 1617, Adams set sail for Cochinchina, having purchased the junk Sayers had brought\ \ from Siam and renamed it the Gift of God. He intended to find two English factors,\ \ Tempest Peacock and Walter Carwarden, who had departed from Hirado two years\ \ before to explore commercial opportunities on the first voyage to South East\ \ Asia by the Hirado English Factory. Adams learned in Cochinchina that Peacock\ \ had been plied with drink, and killed for his silver. Carwarden, who was waiting\ \ in a boat downstream, realised that Peacock had been killed and hastily tried\ \ to reach his ship. His boat overturned and he drowned.\n\nAdams sold a small\ \ cargo of broadcloth, Indian piece goods and ivory in Cochinchina for the modest\ \ amount of £351.\n\n1618 Cochinchina expedition\nIn 1618, Adams is recorded as\ \ having organised his last Red Seal trade expedition to Cochinchina and Tonkin\ \ (modern Vietnam), the last expedition of the English Hirado Factory to Southeast\ \ Asia. The ship, a chartered Chinese junk, left Hirado on 11 March 1618 but met\ \ with bad weather that forced it to stop at Ōshima in the northern Ryukyus. The\ \ ship sailed back to Hirado in May.\n\nThose expeditions to Southeast Asia helped\ \ the English factory survive for some time—during that period, sappanwood resold\ \ in Japan with a 200% profit—until the factory fell into bankruptcy due to high\ \ expenditures.\n\nDeath and family legacy\n\nAdams died at Hirado, north of Nagasaki,\ \ on 16 May 1620, at the age of 55. He was buried in Nagasaki, where his grave\ \ marker may still be seen. His gravesite is next to a memorial to Saint Francis\ \ Xavier. In his will, he left his townhouse in Edo, his fief in Hemi, and 500\ \ British pounds to be divided evenly between his family in England and his family\ \ in Japan. Cocks wrote: \"I cannot but be sorrowful for the loss of such a man\ \ as Capt William Adams, he having been in such favour with two Emperors of Japan\ \ as never any Christian in these part of the world.\" (Cocks' diary)\n\nAdams'\ \ daughter Deliverance married Ratcliff mariner Raph Goodchild at St Dunstan's,\ \ Stepney on 30 September 1618. They had two daughters, Abigail in October 1619\ \ who died on the same month, and Jane in April 1621. Deliverance would later\ \ marry for a second time, to John Wright at St Alfege Church, Greenwich on 13\ \ October 1624.\n\nCocks remained in contact with Adams' Japanese family, sending\ \ gifts; in March 1622, he offered silks to Joseph and Susanna. On the Christmas\ \ after Adams' death, Cocks gave Joseph his father's sword and dagger. Cocks records\ \ that Hidetada transferred the lordship from William Adams to his son Joseph\ \ Adams with the attendant rights to the estate at Hemi: He (Hidetada) has confirmed\ \ the lordship to his son, which the other emperor (Ieyasu) gave to the father.\ \ (Cocks's diary)\n\nCocks administered Adams' trading rights (the shuinjō) for\ \ the benefit of Adams' children, Joseph and Susanna. He carried this out conscientiously.\ \ In 1623, three years after Adams' death, the unprofitable English trading factory\ \ was dissolved by the East India Company. The Dutch traded on Adams' children's\ \ behalf via the Red Seal ships. Joseph Adams had since inherited the title of\ \ Miura Anjin, became a trader and made five voyages to Cochinchina and Siam between\ \ 1624 and 1635.\n\nBy 1629 only two of Adams' shipmates from 1600 survived in\ \ Japan. Melchior van Santvoort and Vincent Romeyn lived privately in Nagasaki.\n\ \nIn 1635, Tokugawa Iemitsu enforced the Sakoku Edict for Japan to be closed against\ \ foreign trading, by then both Joseph and Susanna disappeared from historical\ \ records at that time.\n\nAdams has a second memorial monument at the location\ \ of his residence in Hemi. Consisting of a pair of hōkyōintō, the tuff memorial\ \ on the right is that of Adams, and the andesite one of the left is for his wife.\ \ The monuments were erected by his family in accordance with his will, and the\ \ site was designated as a National Historic Site in 1923.\n\nHonours for Adams\n\ \n A town in Edo (modern Tokyo), Anjin-chō (in modern-day Nihonbashi) was named\ \ for Adams, who had a house there. He is annually celebrated on 15 June.\n A\ \ village and a railroad station in his fiefdom, Hemi, in modern Yokosuka, were\ \ named for him.\n In the city of Itō, Shizuoka, the Miura Anjin Festival is held\ \ annually on 10 August. On the seafront at Itō is a monument to Adams. Next to\ \ it is a plaque inscribed with Edmund Blunden's poem, \"To the Citizens of Ito\"\ , which commemorates Adams' achievement.\n Adams' birth town, Gillingham, has\ \ held a Will Adams Festival every September since 2000. Since the late 20th century,\ \ both Itō and Yokosuka have become sister cities of Gillingham.\n A monument\ \ to Adams was installed in Watling Street, Gillingham (Kent), opposite Darland\ \ Avenue. The monument was unveiled 11 May 1934 by Tsuneo Matsudaira GCVO, Japanese\ \ ambassador to the Court of St James.\n A roundabout named Will Adams Roundabout\ \ with a Japanese theme just along from the Gillingham monument to Adams with\ \ two roads named after the Gillingham sister cities \"Ito Way\" and \"Yokosuka\ \ Way\"\n The townhouse of Will Adams still exists in Hirado. It is currently\ \ a sweet shop called Tsutaya at 431 Kihikidacho. It is known as Anjin no Yakata\ \ (Anjin's House).\n\nAnalysis of skeletal remains\n\nWilliam Adams (Miura Anjin)\ \ was buried in 1620, in Hirado, Nagasaki Prefecture. However, a few years later\ \ foreign cemeteries were destroyed and there was prosecution of Christians by\ \ the Tokugawa shogunate. The bones of Anjin were taken for safekeeping and reburied.\ \ In 1931, skeletal remains were first discovered there and assumed to be of Anjin,\ \ but it could not be confirmed due to technological limitations, the remains\ \ was later placed in a Showa period ceramic funerary urn and reburied back to\ \ where it was discovered.\n\nIn July 2017, the excavation of the skeletal remains\ \ began at the William Adams Memorial Park on Sakigata Hill, Hirado. In 2019,\ \ Japanese archaeologists announced the discovery of bones at the site believed\ \ to be those of Adams. These remains match the 1931 description. The subsequent\ \ biomolecular anthropological investigation of the genetic background showed\ \ the mtDNA analysis indicates Anjin's mitochondrial DNA likely belongs to haplogroup\ \ H. The analysis also showed aspects such as the dietary habits and burial style\ \ matched with Anjin. In April 2020, the University of Tokyo conducted conclusive\ \ forensic tests on the bones and confirmed it was William Adams' grave. Confirmation\ \ of the identity of the remains may have been possible with DNA data from living\ \ relatives of Adams. However, even if living relatives had been identified, there\ \ would probably have been the equivalent of about eight generations of genetic\ \ difference between them and Adams. Thus there may not have been sufficient similarities\ \ to establish with sufficient certainty whether such a relationship existed.\n\ \nRepresentation in other media\n James Clavell based his best-selling novel Shōgun\ \ (1975) on Adams' life, changing the name of his protagonist to \"John Blackthorne\"\ . This was adapted as a popular TV mini-series, Shōgun (1980). It was also adapted\ \ as a Broadway production, Shōgun: The Musical (1990), and the video game James\ \ Clavell's Shōgun (1989).\n Michel Foucault retold Adams' tale in The Discourse\ \ on Language. According to Foucault, the story embodies one of the \"great myths\ \ of European culture,\" namely, the idea that a mere sailor could teach mathematics\ \ to the Japanese shogun shows the difference between the open exchange of knowledge\ \ in Europe as opposed to the secretive control of knowledge under \"oriental\ \ tyranny.\" In fact, however, Adams was not a mere sailor but the chief navigator\ \ of the fleet, and his value to the Shogun was along the practical lines of shipbuilding.\n\ \nThere were numerous earlier works of fiction based on Adams.\n William Dalton\ \ wrote Will Adams, The First Englishman in Japan: A Romantic Biography (London,\ \ 1861).\n Richard Blaker's The Needlewatcher (London, 1932) is the least romantic\ \ of the novels; he consciously attempted to de-mythologize Adams and write a\ \ careful historical work of fiction.\n James Scherer's Pilot and Shōgun dramatises\ \ a series of incidents based on Adams' life.\n American Robert Lund wrote Daishi-san\ \ (New York, 1960).\n Christopher Nicole's Lord of the Golden Fan (1973) portrays\ \ Adams as sexually frustrated in England and freed by living in Japan, where\ \ he has numerous encounters. The work is considered light pornography.\n In 2002,\ \ Giles Milton's historical biography Samurai William (2002) is based on historical\ \ sources, especially Richard Cocks's diary.\n The 2002 alternate history novel\ \ Ruled Britannia by Harry Turtledove features a brief appearance by Adams, piloting\ \ cargo and passengers between England and Ostend, both of which are puppet states\ \ of the Habsburg Empire in this timeline.\n In the second season of Heroes, a\ \ story set in samurai-era Japan features an Englishman who seems to be based\ \ on Adams.\n A book series called Young Samurai is about a young English boy\ \ who is ship wrecked in Japan, and is trained as a samurai.\n Adams also serves\ \ as the template for the protagonist in the PlayStation 4 and PC video game series\ \ Nioh (2017) and non-playable in its prequel/sequel hybrid game (2020), but with\ \ supernatural and historical fiction elements. As of the end of the second game,\ \ some time after managing to arrest the female Spaniard Maria, he married Okatsu\ \ instead and had an English-Japanese son who inherited her mother's guardian\ \ spirit.\n This version also appeared in the Warriors series' crossover game,\ \ Warriors All-Stars.\n\nDepiction\n\nAccording to Professor Derek Massarella\ \ of Chuo University:\n\nThere is however one genuine contemporary image. \"It\ \ is a derivative drawing of William Adams, which appears to be based in a sketch\ \ attributed to Dorothy Burmingham (from a description given by Melchior von Santvoort).\ \ The original drawing is to be found at the Rotterdam Maritime Museum [whose\ \ specialist Marcel Kroon considers it to be from Adams' time]. A copy is preserved\ \ at the Bodleian Library, University of Oxford.\"\n\nSee also\n\n Anglo-Japanese\ \ relations\n Jan Joosten – known in Japanese as Yan Yōsuten, was a Dutch colleague\ \ of Adams, and the only known Dutch samurai. The Yaesu neighbourhood in Chūō,\ \ Tokyo was named for him.\n Henry Schnell – known in Japanese as Hiramatsu Buhei,\ \ was a Prussian arms dealer, who served the Aizu domain as a military instructor\ \ and procurer of weapons.\n Eugène Collache – French Navy officer, who fought\ \ for the shōgun during the Boshin War (1868–1869).\n Jules Brunet (1838–1911)\ \ – French officer who fought for the shōgun in the Boshin War\n Ernest Mason\ \ Satow (1843–1929) – British scholar, diplomat and Japanologist\n Hendrick Hamel\ \ (1630–1692) – first European to live in the Joseon-dynasty era in Korea (1666)\ \ and write about it\n Yasuke (b. c. 1556) – a black (African) retainer briefly\ \ in the service of the Japanese warlord Nobunaga Oda\n List of foreign-born samurai\ \ in Japan\n List of Westerners who visited Japan before 1868\n\nNotes\n\nReferences\n\ \ England's Earliest Intercourse with Japan, by C. W. Hillary (1905)\n Letters\ \ written by the English Residents in Japan, ed. by N. Murakami (1900, containing\ \ Adams' Letters reprinted from Memorials of the Empire of Japan, ed. by T. Rundall,\ \ Hakluyt Society, 1850)\n Diary of Richard Cocks, with preface by N. Murakami\ \ (1899, reprinted from the Hakluyt Society ed. 1883)\n Hildreth, Richard, Japan\ \ as it was and is (1855)\n John Harris, Navigantium atque Itinerantium Bibliotheca\ \ (1764), i. 856\n Voyage of John Saris, edited by Sir Ernest M. Satow (Hakluyt\ \ Society, 1900)\n Asiatic Society of Japan Transactions, xxvi. (sec. 1898) pp. I\ \ and 194, where four formerly unpublished letters of Adams are printed;\n Collection\ \ of State Papers; East Indies, China and Japan. The MS. of his logs written during\ \ his voyages to Siam and China is in the Bodleian Library at Oxford.\n Samurai\ \ William: The Adventurer Who Unlocked Japan, by Giles Milton (UK 2002: )\n William\ \ Adams and Early English Enterprise in Japan, by Anthony Farrington and Derek\ \ Massarella \n Adams the Pilot: The Life and Times of Captain William Adams:\ \ 1564–1620, by William Corr, Curzon Press, 1995 \n The English Factory in Japan\ \ 1613–1623, ed. by Anthony Farrington, British Library, 1991. (Includes all of\ \ William Adams' extant letters, as well as his will.)\n A World Elsewhere. Europe’s\ \ Encounter with Japan in the Sixteenth and Seventeenth Centuries, by Derek Massarella,\ \ Yale University Press, 1990.\n Recollections of Japan, Hendrik Doeff, \n\nHardcopy\n\ \ The Needle-Watcher: The Will Adams Story, British Samurai by Richard Blaker\n\ \ Servant of the Shogun by Richard Tames. Paul Norbury Publications, Tenterden,\ \ Kent, England..\n Samurai William: The Englishman Who Opened Japan,'' by Giles\ \ Milton; ; December 2003\n\nExternal links\n Williams Adams- Blue Eyed Samurai,\ \ Meeting Anjin\n \"Learning from Shogun. Japanese history and Western fantasy\"\ \n William Adams and Early English enterprise in Japan\n William Adams – The First\ \ Englishman In Japan, full text online, Internet Archive\n Will Adams Memorial\n\ \ \n \n \n\nSamurai\nForeign samurai in Japan\n1564 births\n1620 deaths\n16th-century\ \ English people\n17th-century English people\n16th-century Japanese people\n\ 17th-century Japanese people\nAdvisors to Tokugawa shoguns\nEnglish emigrants\ \ to Japan\nEnglish Anglicans\nEnglish sailors\nHatamoto\nJapan–United Kingdom\ \ relations\nPeople from Gillingham, Kent\nRoyal Navy officers\nSailors on ships\ \ of the Dutch East India Company\nForeign relations of the Tokugawa shogunate" - "Oliver Wendell Holmes Jr. (March 8, 1841 – March 6, 1935) was an American jurist\ \ and legal scholar who served as an associate justice of the Supreme Court of\ \ the United States from 1902 to 1932. He is one of the most widely cited United\ \ States Supreme Court justices and most influential American common law judges\ \ in history, noted for his long service, concise, and pithy opinions—particularly\ \ for opinions on civil liberties and American constitutional democracy—and deference\ \ to the decisions of elected legislatures. Holmes retired from the court at the\ \ age of 90, an unbeaten record for oldest justice on the United States Supreme\ \ Court. He previously served as a Brevet Colonel in the American Civil War, an\ \ Associate Justice and as Chief Justice of the Massachusetts Supreme Judicial\ \ Court, and was Weld Professor of Law at his alma mater, Harvard Law School.\ \ His positions, distinctive personality, and writing style made him a popular\ \ figure, especially with American progressives.\n\nDuring his tenure on the Supreme\ \ Court, to which he was appointed by President Theodore Roosevelt, he supported\ \ the constitutionality of state economic regulation and advocated broad freedom\ \ of speech under the First Amendment, although he upheld criminal sanctions against\ \ draft protestors with the memorable maxim that \"free speech would not protect\ \ a man in falsely shouting fire in a theatre and causing a panic\" and formulated\ \ the groundbreaking \"clear and present danger\" test for a unanimous court.\ \ In a famous dissent in Abrams v. United States (1919), he wrote that he regarded\ \ the United States Constitution's theory \"that the best test of truth is the\ \ power of the thought to get itself accepted in the competition of the market\"\ \ as \"an experiment, as all life is an experiment\" and believed that as a consequence\ \ \"we should be eternally vigilant against attempts to check the expression of\ \ opinions that we loathe and believe to be fraught with death\". \n\nHe was one\ \ of only a handful of justices to be known as a scholar; The Journal of Legal\ \ Studies has identified Holmes as the third-most cited American legal scholar\ \ of the 20th century. Holmes was a legal realist, as summed up in his maxim,\ \ \"The life of the law has not been logic: it has been experience\", and a moral\ \ skeptic opposed to the doctrine of natural law. His jurisprudence and academic\ \ writing influenced much subsequent American legal thinking, including the judicial\ \ consensus upholding New Deal regulatory law, and the influential American schools\ \ of pragmatism, critical legal studies, and law and economics.\n\nEarly life\n\ Holmes was born in Boston, Massachusetts, to the prominent writer and physician\ \ Oliver Wendell Holmes Sr. and abolitionist Amelia Lee Jackson. Dr. Holmes was\ \ a leading figure in Boston intellectual and literary circles. Mrs. Holmes was\ \ connected to the leading families; Henry James Sr., Ralph Waldo Emerson and\ \ other transcendentalists were family friends. Known as \"Wendell\" in his youth,\ \ Holmes, Henry James Jr. and William James became lifelong friends. Holmes accordingly\ \ grew up in an atmosphere of intellectual achievement, and early formed the ambition\ \ to be a man of letters like Emerson. While still in Harvard College he wrote\ \ essays on philosophic themes, and asked Emerson to read his attack on Plato's\ \ idealist philosophy. Emerson famously replied, \"If you strike at a king, you\ \ must kill him\". He supported the Abolitionist movement that thrived in Boston\ \ society during the 1850s. At Harvard, he was a member of the Hasty Pudding and\ \ the Porcellian Club; his father had also been a member of both clubs. In the\ \ Pudding, he served as Secretary and Poet, as his father did. Holmes graduated\ \ Phi Beta Kappa from Harvard in 1861 and in the spring of that year, he enlisted\ \ in the Massachusetts militia, when President Abraham Lincoln first called for\ \ volunteers following the firing on Fort Sumter, but returned briefly to Harvard\ \ College to participate in commencement exercises.\n\nCivil War\n\nDuring his\ \ senior year of college, at the outset of the American Civil War, Holmes enlisted\ \ in the fourth battalion, Massachusetts militia, then, with his father's help,\ \ received a commission as first lieutenant in the Twentieth Regiment of Massachusetts\ \ Volunteer Infantry. He saw much action, taking part in the Peninsula Campaign,\ \ the Battle of Fredericksburg and the Wilderness, suffering wounds at the Battle\ \ of Ball's Bluff, Antietam, and Chancellorsville, and suffered from a near-fatal\ \ case of dysentery. He particularly admired and was close to Henry Livermore\ \ Abbott, a fellow officer in the 20th Massachusetts. Holmes rose to the rank\ \ of lieutenant colonel, but eschewed promotion in his regiment and served on\ \ the staff of the VI Corps during the Wilderness Campaign. Abbott took command\ \ of the regiment in his place, and was later killed.\n\nHolmes is said to have\ \ shouted to Abraham Lincoln to take cover during the Battle of Fort Stevens,\ \ although this is commonly regarded as apocryphal. Holmes himself expressed uncertainty\ \ about who had warned Lincoln (\"Some say it was an enlisted man who shouted\ \ at Lincoln; others suggest it was General Wright who brusquely ordered Lincoln\ \ to safety. But for a certainty, the 6 foot 4 inch Lincoln, in frock coat and\ \ top hat, stood peering through field glasses from behind a parapet at the onrushing\ \ rebels.\") and other sources state he likely was not present on the day Lincoln\ \ visited Fort Stevens.\n\nHolmes received a brevet honorary promotion to colonel\ \ in recognition of his services during the war. He retired to his home in Boston\ \ after his three-year enlistment ended in 1864, weary and ill, his regiment disbanded.\n\ \nLegal career\n\nLawyer\n\nIn the summer of 1864, Holmes returned to the family\ \ home in Boston, wrote poetry, and debated philosophy with his friend William\ \ James, pursuing his debate with philosophic idealism, and considered re-enlisting.\ \ But by the fall, when it became clear that the war would soon end, Holmes enrolled\ \ in Harvard Law School, \"kicked into the law\" by his father, as he later recalled.\ \ He attended lectures there for a single year, reading extensively in theoretical\ \ works, and then clerked for a year in his cousin Robert Morse’s office. He was\ \ admitted to the bar in 1866, and after a long visit to London to complete his\ \ education, went into law practice in Boston. He joined a small firm, and in\ \ 1872 married a childhood friend, Fanny Bowditch Dixwell, buying a farm in Mattapoisett,\ \ Massachusetts, the following year. Their marriage lasted until her death on\ \ April 30, 1929. They never had children together. They did adopt and raise an\ \ orphaned cousin, Dorothy Upham. Fanny disliked Beacon Hill society, and devoted\ \ herself to embroidery. She was described as devoted, witty, wise, tactful, and\ \ perceptive.\n\nWhenever he could, Holmes visited London during the social season\ \ of spring and summer, and during the years of his work as a lawyer and judge\ \ in Boston he formed romantic friendships with English women of the nobility,\ \ with whom he corresponded while at home in the United States. The most important\ \ of these was his friendship with the Anglo-Irish Clare Castletown, the Lady\ \ Castletown, whose family estate in Ireland, Doneraile Court, he visited several\ \ times, and with whom he may have had a brief affair. He formed his closest intellectual\ \ friendships with British men, and became one of the founders of what was soon\ \ called the \"sociological\" school of jurisprudence in Great Britain, followed\ \ a generation later by the \"legal realist\" school in America.\n\nHolmes practiced\ \ admiralty law and commercial law in Boston for fifteen years. It was during\ \ this time that he did his principal scholarly work, serving as an editor of\ \ the new American Law Review, reporting decisions of state supreme courts, and\ \ preparing a new edition of Kent's Commentaries, which served practitioners as\ \ a compendium of case law, at a time when official reports were scarce and difficult\ \ to obtain. He summarized his hard-won understanding in a series of lectures,\ \ collected and published as The Common Law in 1881.\n\nThe Common Law\nThe Common\ \ Law has been continuously in print since 1881 and remains an important contribution\ \ to jurisprudence. The book also remains controversial, for Holmes begins by\ \ rejecting various kinds of formalism in law. In his earlier writings he had\ \ expressly denied the utilitarian view that law was a set of commands of the\ \ sovereign, rules of conduct that became legal duties. He rejected as well the\ \ views of the German idealist philosophers, whose views were then widely held,\ \ and the philosophy taught at Harvard, that the opinions of judges could be harmonized\ \ in a purely logical system. In the opening paragraphs of the book, he famously\ \ summarized his own view of the history of the common law:\n\nThe life of the\ \ law has not been logic: it has been experience. The felt necessities of the\ \ time, the prevalent moral and political theories, intuitions of public policy,\ \ avowed or unconscious, even the prejudices which judges share with their fellow-men,\ \ have had a good deal more to do than the syllogism in determining the rules\ \ by which men should be governed. The law embodies the story of a nation’s development\ \ through many centuries, and it cannot be dealt with as if it contained only\ \ the axioms and corollaries of a book of mathematics.\n\nIn The Common Law, Holmes\ \ wrote that, even though the law ”uses the language of morality, it necessarily\ \ ends in external standards not dependent on the consciousness of the individual”\ \ or on his moral culpability. Foreseeability of harm was the key: ”the general\ \ basis of criminal liability was knowledge, at the time of action, of facts from\ \ which common experience showed that certain harmful results were likely to follow.”\ \ Tort liability, similarly, was imposed when circumstances were ”such as would\ \ have led a prudent man to perceive danger, although not necessarily to foresee\ \ the specific harm”. Likewise, with respect to contracts, ”The law has nothing\ \ to do with the actual state of the parties’ minds. In contract, as elsewhere,\ \ it must go by externals, and judge parties by their conduct.”\n\nIn the book,\ \ Holmes set forth his view that the only source of law, properly speaking, was\ \ a judicial decision enforced by the state. Judges decided cases on the facts,\ \ and then wrote opinions afterward presenting a rationale for their decision.\ \ The true basis of the decision was often an ”inarticulate major premise”, however.\ \ A judge was obliged to choose between contending legal arguments, each posed\ \ in absolute terms, and the true basis of his decision was sometimes drawn from\ \ outside the law, when precedents were lacking or were evenly divided.\n\nThe\ \ common law evolves because civilized society evolves, and judges share the common\ \ preconceptions of the governing class. These views endeared Holmes to the later\ \ advocates of legal realism, and made him one of the early founders of law and\ \ economics jurisprudence. Holmes famously contrasted his own scholarship with\ \ the abstract doctrines of Christopher Columbus Langdell, dean of Harvard Law\ \ School, who viewed the common law as a self-enclosed set of doctrines. Holmes\ \ viewed Langdell’s work as akin to the German philosophic idealism he had for\ \ so long resisted, opposing it with his own scientific materialism.\n\nState\ \ court judge\n\nHolmes was considered for a federal court judgeship in 1878 by\ \ President Rutherford B. Hayes, but Massachusetts Senator George Frisbie Hoar\ \ persuaded Hayes to nominate another candidate. In the fall of 1882, Holmes became\ \ a professor at Harvard Law School, accepting an endowed professorship that had\ \ been created for him, largely through the efforts of Louis D. Brandeis. On Friday,\ \ December 8, 1882, Supreme Judicial Court of Massachusetts associate justice\ \ Otis Lord decided to resign, giving outgoing Republican governor John Davis\ \ Long a chance to appoint his successor, if he could do so before the Massachusetts\ \ Governor's Council adjourned at 3 pm. Holmes's partner George Shattuck proposed\ \ him for the vacancy, Holmes quickly agreed, and there being no objection by\ \ the council, he took the oath of office on December 15, 1882. His resignation\ \ from his professorship, after only a few weeks and without notice, was resented\ \ by the law school faculty, with James Bradley Thayer finding Holmes's conduct\ \ \"selfish\" and \"thoughtless\". On August 2, 1899, Holmes became Chief Justice\ \ of the Massachusetts Supreme Judicial Court following the death of Walbridge\ \ A. Field.\n\nDuring his service on the Massachusetts court, Holmes continued\ \ to develop and apply his views of the common law, usually following precedent\ \ faithfully. He issued few constitutional opinions in these years, but carefully\ \ developed the principles of free expression as a common-law doctrine. He departed\ \ from precedent to recognize workers' right to organize trade unions and to strike,\ \ as long as no violence was involved, and coercion was not exerted through impermissible\ \ means such as secondary boycotts, stating in his opinions that fundamental fairness\ \ required that workers be allowed to combine to compete on an equal footing with\ \ employers. He continued to give speeches and to write articles that added to\ \ or extended his work on the common law, most notably \"Privilege, Malice and\ \ Intent\", in which he presented his view of the pragmatic basis of the common-law\ \ privileges extended to speech and the press, which could be defeated by a showing\ \ of malice, or of specific intent to harm. This argument would later be incorporated\ \ into his famous opinions concerning the First Amendment.\n\nHe also published\ \ an address, \"The Path of the Law\", which is best known for its prediction\ \ theory of law, that \"[t]he prophecies of what the courts will do in fact, and\ \ nothing more pretentious, are what I mean by law\", and for its \"bad man\"\ \ perspective on the law that \"[i]f you really want to know the law and nothing\ \ else, you must look at it as a bad man, who cares only for the material consequences\ \ which such knowledge enables him to predict\".\n\nSupreme Court Justice\n\n\ Overview\n\nSoon after the resignation of Associate Justice Horace Gray in July\ \ 1902, President Theodore Roosevelt made known his intention to appoint Holmes\ \ as Gray's successor; it was the president's stated desire to fill the vacancy\ \ with someone from Massachusetts. The nomination was supported by Senator Henry\ \ Cabot Lodge, the junior senator from Massachusetts, but was opposed by its senior\ \ senator, George Frisbie Hoar, who was also chairman of the Senate Judiciary\ \ Committee. Hoar was a strenuous opponent of imperialism, and the legality of\ \ the annexation of Puerto Rico and the Philippines was expected to come before\ \ the Court. Lodge, like Roosevelt, was a strong supporter of imperialism, which\ \ Holmes was expected to support as well. \n\nDespite Hoar's opposition, the president\ \ moved ahead on the matter. On December 2, 1902, he formally submitted the nomination\ \ and Holmes was confirmed by the United States Senate on December 4. He was sworn\ \ into office on December 8.\n\nOn the bench, Holmes did vote to support the administration's\ \ position favoring the annexation of former Spanish colonies in the \"Insular\ \ Cases\". However, he later disappointed Roosevelt by dissenting in Northern\ \ Securities Co. v. United States, a major antitrust prosecution; the majority\ \ of the court, however, did rule against Holmes and sided with Theodore Roosevelt’s\ \ belief that Northern Securities violated the Sherman Antitrust Act. The dissent\ \ by Holmes permanently damaged his formerly close relationship with Theodore\ \ Roosevelt.\n\nHolmes was known for his pithy, short, and frequently quoted opinions.\ \ In more than twenty-nine years on the Supreme Court bench, he ruled on cases\ \ spanning the whole range of federal law. He is remembered for prescient opinions\ \ on topics as widely separated as copyright, the law of contempt, the antitrust\ \ status of professional baseball, and the oath required for citizenship. Holmes,\ \ like most of his contemporaries, viewed the Bill of Rights as codifying privileges\ \ obtained over the centuries in English and American common law, and was able\ \ to establish that view in numerous opinions of the Court. He is considered one\ \ of the greatest judges in American history and embodies for many the traditions\ \ of the common law, which are now challenged by originalists who insist the text\ \ of the Constitution trumps any common-law precedents that depart from the original\ \ understanding of its meaning.\n\nFrom the departure of William Howard Taft on\ \ February 3, 1930 until Charles Evans Hughes took office on February 24, 1930,\ \ Holmes briefly acted as the Chief Justice and presided over court sessions.\n\ \nNoteworthy rulings\n\nOtis v. Parker\n\nBeginning with his first opinion for\ \ the Court in Otis v. Parker, Holmes declared that ”due process of law”, the\ \ fundamental principle of fairness, protected people from unreasonable legislation\ \ but was limited only to those fundamental principles enshrined in the common\ \ law and did not protect most economic interests.\n\nSchenck v. United States\n\ \nIn a series of opinions surrounding the World War I Espionage Act of 1917 and\ \ the Sedition Act of 1918, he held that the freedom of expression guaranteed\ \ by federal and state constitutions simply declared a common-law privilege for\ \ speech and the press, even when those expressions caused injury, but that privilege\ \ would be defeated by a showing of malice or intent to do harm. Holmes came to\ \ write three unanimous opinions for the Supreme Court that arose from prosecutions\ \ under the 1917 Espionage Act because in an earlier case, Baltzer v. United States,\ \ he had circulated a powerfully expressed dissent, when the majority had voted\ \ to uphold a conviction of immigrant socialists who had circulated a petition\ \ criticizing the draft. Apparently learning that he was likely to publish this\ \ dissent, the government (perhaps alerted by Justice Louis D. Brandeis, newly\ \ appointed by President Woodrow Wilson) abandoned the case, and it was dismissed\ \ by the Court. The Chief Justice then asked Holmes to write opinions that could\ \ be unanimous, upholding convictions in three similar cases, where there were\ \ jury findings that speeches or leaflets were published with an intent to obstruct\ \ the draft, a crime under the 1917 law. Although there was no evidence that the\ \ attempts had succeeded, Holmes, in Schenck v. United States (1919), held for\ \ a unanimous Court that an attempt, purely by language, could be prosecuted if\ \ the expression, in the circumstances in which it was uttered, posed a \"clear\ \ and present danger\" that the legislature had properly forbidden. In his opinion\ \ for the Court, Holmes famously declared that the First Amendment would not protect\ \ a person \"falsely shouting fire in a theatre and causing a panic\". Although\ \ much criticized, Schenck remained an important precedent until it was superseded\ \ by the 1969 Supreme Court decision in Brandenburg v. Ohio, which held that \"\ advocacy of the use of force or of law violation\" is protected unless \"such\ \ advocacy is directed to inciting or producing imminent lawless action and is\ \ likely to incite or produce such action\".\n\nAbrams v. United States\nLater\ \ in 1919, however, in Abrams v. United States, Holmes was again in dissent. The\ \ Wilson Administration was vigorously prosecuting those suspected of sympathies\ \ with the recent Russian Revolution, as well as opponents of the war against\ \ Germany. The defendants in this case were socialists and anarchists, recent\ \ immigrants from Russia who opposed the apparent efforts of the United States\ \ to intervene in the Russian Civil War. They were charged with violating the\ \ Sedition Act of 1918, which was an amendment to the Espionage Act of 1917 that\ \ made criticisms of the government or the war effort a crime. Abrams and his\ \ co-defendants were charged with distributing leaflets (one in English and one\ \ in Yiddish) that called for a \"general strike\" to protest the U.S. intervention\ \ in Russia. A majority of the Court voted to uphold the convictions and sentences\ \ of ten and twenty years, to be followed by deportation, while Holmes dissented.\ \ The majority claimed to be following the precedents already set in Schenck and\ \ the other cases in which Holmes had written for the Court, but Holmes insisted\ \ that the defendants' leaflets neither threatened to cause any harm, nor showed\ \ a specific intent to hinder the war effort. Holmes condemned the Wilson Administration's\ \ prosecution and its insistence on draconian sentences for the defendants in\ \ passionate language: \"Even if I am technically wrong [regarding the defendants'\ \ intent] and enough can be squeezed from these poor and puny anonymities to turn\ \ the color of legal litmus paper ... the most nominal punishment seems to me\ \ all that possibly could be inflicted, unless the defendants are to be made to\ \ suffer, not for what the indictment alleges, but for the creed that they avow\ \ ... .\" Holmes then went on to explain the importance of freedom of thought\ \ in a democracy:\n\nIn writing this dissent, Holmes may have been influenced\ \ by Zechariah Chafee’s article ”Freedom of Speech in War Time”. Chafee had criticized\ \ Holmes’s opinion in Schenck for failing to express in more detail and more clearly\ \ the common-law doctrines upon which he relied. In his Abrams dissent, Holmes\ \ did elaborate somewhat on the decision in Schenck, roughly along the lines that\ \ Chafee had suggested. Although Holmes evidently believed that he was adhering\ \ to his own precedent, some later commentators accused Holmes of inconsistency,\ \ even of seeking to curry favor with his young admirers. In Abrams, the majority\ \ opinion relied on the clear-and-present-danger formulation of Schenck, claiming\ \ that the leaflets showed the necessary intent, and ignoring the point that they\ \ were unlikely to have any effect. In later opinions, the Supreme Court departed\ \ from this line of reasoning where the validity of a statute was in question,\ \ adopting the principle that a legislature could properly declare that some forms\ \ of speech posed a clear and present danger, regardless of the circumstances\ \ in which they were uttered. Holmes continued to dissent.\n\nSilverthorne Lumber\ \ Co. v. United States\nIn Silverthorne Lumber Co. v. United States (1920), Holmes\ \ ruled that any evidence obtained, even indirectly, from an illegal search was\ \ inadmissible in court. He reasoned that otherwise, police would have an incentive\ \ to circumvent the Fourth Amendment to obtain derivatives of the illegally obtained\ \ evidence, so any evidence resulting indirectly from an illegal search must be\ \ similarly suppressed. This later became known as the \"fruit of the poisonous\ \ tree\" doctrine.\n\nBuck v. Bell\nIn 1927, Holmes wrote the 8–1 majority opinion\ \ in Buck v. Bell, a case that upheld the Virginia Sterilization Act of 1924 and\ \ the forced sterilization of Carrie Buck, who was claimed to be mentally defective.\ \ Later scholarship has shown that the suit was collusive, in that \"two eugenics\ \ enthusiasts ... had chosen Buck as a bit player in a test case that they had\ \ devised\", and \"had asked Buck's guardian to challenge [the Virginia sterilization\ \ law]\". In addition, Carrie Buck was probably of normal intelligence. The argument\ \ made on her behalf was principally that the statute requiring sterilization\ \ of institutionalized persons was unconstitutional, as a violation of what today\ \ is called \"substantive due process\". Holmes repeated familiar arguments that\ \ statutes would not be struck down if they appeared on their face to have a reasonable\ \ basis. In support of his argument that the interest of ”public welfare” outweighs\ \ the interest of individuals in their bodily integrity, he argued:\n\nSterilization\ \ rates under eugenics laws in the United States climbed from 1927 until Skinner\ \ v. Oklahoma, 316 U.S. 535 (1942), in which the U.S. Supreme Court declared unconstitutional\ \ an Oklahoma statute that provided for the sterilization of \"habitual criminals\"\ .\n\nBuck v. Bell continues to be cited occasionally in support of due process\ \ requirements for state interventions in medical procedures. For instance, in\ \ 2001, the United States Court of Appeals for the Eighth Circuit cited Buck v.\ \ Bell to protect the constitutional rights of a woman coerced into sterilization\ \ without procedural due process. The court stated that error and abuse will result\ \ if the state does not follow the procedural requirements, established by Buck\ \ v. Bell, for performing an involuntary sterilization. Buck v. Bell was also\ \ cited briefly, though not discussed, in Roe v. Wade, in support of the proposition\ \ that the Court does not recognize an \"unlimited right to do with one's body\ \ as one pleases\". However, although Buck v. Bell has not been overturned, ”the\ \ Supreme Court has distinguished the case out of existence”.\n\nJurisprudential\ \ contributions\n\nCritique of formalism\nFrom his earliest writings, Holmes demonstrated\ \ a lifelong belief that the decisions of judges were consciously or unconsciously\ \ result-oriented and reflected the mores of the class and society from which\ \ judges were drawn. Holmes accordingly argued that legal rules are not deduced\ \ through formal logic but rather emerge from an active process of human self-government.\ \ He explored these theories in his 1881 book The Common Law. His philosophy represented\ \ a departure from the prevailing jurisprudence of the time: legal formalism,\ \ which held that law was an orderly system of rules from which decisions in particular\ \ cases could be deduced. Holmes sought to consciously reinvent the common law –\ \ to modernize it as a tool for adjusting to the changing nature of modern life,\ \ as judges of the past had done more or less unconsciously. He has been classed\ \ with the philosophic pragmatists, although pragmatism is what he attributed\ \ to the law, rather than his personal philosophy.\n\nCentral to his thought was\ \ the notion that the law, as it had evolved in modern societies, was concerned\ \ with the material results of a defendant's actions. A judge's task was to decide\ \ which of two parties before him would bear the cost of an injury. Holmes argued\ \ that the evolving common law standard was that liability would fall upon a person\ \ whose conduct failed to reflect the prudence of a \"reasonable man\". If a construction\ \ worker throws a beam onto a crowded street:\n\nThis ”objective standard” adopted\ \ by common-law judges, Holmes thought, reflected a shift in community standards,\ \ away from condemnation of a person’s act toward an impersonal assessment of\ \ its value to the community. In the modern world, the advances made in biology\ \ and the social sciences should allow a better conscious determination of the\ \ results of individual acts and the proper measure of liability for them. This\ \ belief in the pronouncements of science concerning social welfare, although\ \ he later doubted its applicability to law in many cases, accounts for his enthusiastic\ \ endorsement of eugenics in his writings, and his opinion in the case of Buck\ \ v. Bell.\n\nLegal positivism\nIn 1881, in The Common Law, Holmes brought together\ \ into a coherent whole his earlier articles and lectures concerning the history\ \ of the common law (judicial decisions in England and the United States), which\ \ he interpreted from the perspective of a practicing lawyer. What counted as\ \ law, to a lawyer, was what judges did in particular cases. Law was what the\ \ state would enforce, through violence if necessary; echoes of his experience\ \ in the Civil War were always present in his writings. Judges decided where and\ \ when the force of the state would be brought to bear, and judges in the modern\ \ world tended to consult facts and consequences when deciding what conduct to\ \ punish. The decisions of judges, viewed over time, determined the rules of conduct\ \ and the legal duties by which all are bound. Judges did not and should not consult\ \ any external system of morality, certainly not a system imposed by a deity.\n\ \nHolmes brought himself into constant conflict with scholars who believed that\ \ legal duties rested upon natural law, a moral order of the kind invoked by Christian\ \ theologians and other philosophic idealists. He believed instead \"that men\ \ make their own laws; that these laws do not flow from some mysterious omnipresence\ \ in the sky, and that judges are not independent mouthpieces of the infinite.\"\ \ ”The common law is not a brooding omnipresence in the sky. ... ” Rather than\ \ a set of abstract, rational, mathematical, or in any way unworldly set of principles,\ \ Holmes said: \"[T]he prophecies of what the courts will do in fact, and nothing\ \ more pretentious, are what I mean by the law.\"\n\nHis belief that law, properly\ \ speaking, was a set of generalizations from what judges had done in similar\ \ cases, determined his view of the Constitution of the United States. As a justice\ \ of the U.S. Supreme Court, Holmes rejected the argument that the text of the\ \ Constitution should be applied directly to cases that came before the court,\ \ as if it were a statute. He shared with most of his fellow judges the belief\ \ that the Constitution carried forward principles derived from the common law,\ \ principles that continued to evolve in American courts. The text of the Constitution\ \ itself, as originally understood, was not a set of rules, but only a directive\ \ to courts to consider the body of the common law when deciding cases that arose\ \ under the Constitution. It followed that constitutional principles adopted from\ \ the common law were evolving, as the law itself evolved: \"A word [in the Constitution]\ \ is not a crystal, transparent and unchanged, it is the skin of a living thought....\"\ \n\nThe provisions of the Constitution are not mathematical formulas that have\ \ their essence in form, they are organic, living institutions transplanted from\ \ English soil. Their significance is vital, not formal; it is to be gathered\ \ not simply by taking the words and a dictionary but by considering their origin\ \ and the line of their growth.\n\nHolmes also insisted on the separation of ”ought”\ \ and ”is”, confusion of which he saw as an obstacle in understanding the realities\ \ of the law. \"The law is full of phraseology drawn from morals, and talks about\ \ rights and duties, malice, intent, and negligence – and nothing is easier in\ \ legal reasoning than to take these words in their moral sense\". \"Therefore\ \ nothing but confusion can result from assuming that the rights of man in a moral\ \ sense are equally rights in the sense of the Constitution and the law\". Holmes\ \ said, ”I think our morally tinted words have caused a great deal of confused\ \ thinking”.\n\nNevertheless, in rejecting morality as a form of natural law outside\ \ of and superior to human enactments, Holmes was not rejecting moral principles\ \ that were the result of enforceable law: \"The law is the witness and external\ \ deposit of our moral life. Its history is the history of the moral development\ \ of the race. The practice of it, in spite of popular jests, tends to make good\ \ citizens and good men. When I emphasize the difference between law and morals\ \ I do so with reference to a single end, that of learning and understanding the\ \ law.\" Holmes's insistence on the material basis of law, on the facts of a case,\ \ has led some to characterize him as unfeeling, however. George Washington University\ \ law professor Jeffrey Rosen summarized Holmes's views this way: \"Holmes was\ \ a cold and brutally cynical man who had contempt for the masses and for the\ \ progressive laws he voted to uphold ... an aristocratic nihilist who once told\ \ his sister that he loathed 'the thick-fingered clowns we call the people'.\"\ \n\nReputation as a dissenter\n\nAlthough Holmes did not dissent frequently —\ \ during his 29 years on the U.S. Supreme Court, he wrote only 72 separate opinions,\ \ whereas he penned 852 majority opinions — his dissents were often prescient\ \ and acquired so much authority that he became known as \"The Great Dissenter\"\ . Chief Justice Taft complained that \"his opinions are short, and not very helpful\"\ . Two of his most famous dissents were in Abrams v. United States and Lochner\ \ v. New York.\n\nSpeeches and letters\n\nSpeeches\nOnly Holmes’s legal writings\ \ were readily available during his life and in the first years after his death,\ \ but he confided his thoughts more freely in talks, often to limited audiences,\ \ and more than two thousand letters that have survived. Holmes's executor, John\ \ Gorham Palfrey, diligently collected Holmes’s published and unpublished papers\ \ and donated them (and their copyrights) to Harvard Law School. Harvard Law Professor\ \ Mark De Wolfe Howe undertook to edit the papers and was authorized by the school\ \ to publish them and to prepare a biography of Holmes. Howe published several\ \ volumes of correspondence, beginning with Holmes’s correspondence with Frederick\ \ Pollock, and a volume of Holmes's speeches, before his untimely death. Howe's\ \ work formed the basis of much subsequent Holmes scholarship.\n\nHolmes's speeches\ \ were divided into two groups: public addresses, which he gathered into a slim\ \ volume, regularly updated, that he gave to friends and used as a visiting card,\ \ and less formal addresses to men's clubs, dinners, law schools, and Twentieth\ \ Regiment reunions. All of the speeches are reproduced in the third volume of\ \ The Collected Works of Justice Holmes. The public addresses are Holmes’s effort\ \ to express his personal philosophy in Emersonian, poetic terms. They frequently\ \ advert to the Civil War and to death, and express a hope that personal sacrifice,\ \ however pointless it may seem, serves to advance the human race toward some\ \ as-yet unforeseen goal. This mysterious purpose explained the commitment to\ \ duty and honor that Holmes felt deeply himself and that he thought was the birthright\ \ of a certain class of men. As Holmes stated at a talk upon receiving an honorary\ \ degree from Yale:\n\nIn the 1890s, at a time when \"scientific\" anthropology\ \ that spoke of racial differences was in vogue, his observations took on a bleakly\ \ Darwinist cast:\n\nThis talk was widely reprinted and admired at the time, and\ \ may have contributed to the popular name given by the press to the 1st United\ \ States Volunteer Cavalry (the \"Rough Riders\") during the Spanish–American\ \ War.\n\nOn May 30, 1895, Holmes gave the address at a Memorial Day function\ \ held by the Graduating Class of Harvard University in Boston, Massachusetts.\ \ The speech, which came to be known as \"The Soldier's Faith\", expressed Holmes's\ \ view of the nature of war, and the conflict between the high ideals that motivated\ \ his generation to fight in the civil war, and the reality of a soldier's experience\ \ and personal pledge to follow orders into battle. Holmes stated:\n\nIn the conclusion\ \ of the speech, Holmes said:\n\nTheodore Roosevelt reportedly admired Holmes's\ \ \"Soldier's Faith\" speech, and it is believed to have contributed to his decision\ \ to nominate Holmes to the Supreme Court.\n\nLetters\n\nMany of Holmes's closest\ \ male friends were in England and he corresponded with them regularly and at\ \ length, speaking usually of his work. Letters to friends in England such as\ \ Harold Laski and Frederick Pollock contain frank discussion of his decisions\ \ and his fellow justices. In the United States, letters to male friends Morris\ \ R. Cohen, Lewis Einstein, Felix Frankfurter, and Franklin Ford are similar,\ \ although the letters to Frankfurter are especially personal. Holmes’s correspondence\ \ with women in Great Britain and the U.S. was at least as extensive, and in many\ \ ways more revealing, but these series of letters have not been published. An\ \ extensive selection of letters to Claire Castletown, in Ireland, is included\ \ in Honorable Justice: The Life of Oliver Wendell Holmes, by Sheldon Novick.\ \ These letters are closer to Holmes’s conversation and cast light upon the style\ \ he adopted in judicial opinions, which were often designed to be read aloud.\n\ \nIn a letter to a contemporary, Holmes made this comment on international comparisons:\ \ \"Judge not a people by the ferocity of its men, but by the steadfastness of\ \ its women.\"\n\nRetirement, death, honors and legacy\n\nHolmes was widely admired\ \ during his last years, and on his ninetieth birthday was honored on one of the\ \ first coast-to-coast radio broadcasts, during which the Chief Justice, the Dean\ \ of Yale Law School, and the president of the American Bar Association read encomia;\ \ the Bar Association presented him with a gold medal. Holmes served on the court\ \ until January 12, 1932, when his brethren on the court, citing his advanced\ \ age, suggested that the time had come for him to step down. By that time, at\ \ 90 years and 10 months of age, he was the oldest justice to serve in the court's\ \ history, and his record has only been challenged by John Paul Stevens in 2010,\ \ who retired when only 8 months younger than Holmes had been at retirement. On\ \ Holmes’s ninety-second birthday, newly inaugurated President Franklin D. Roosevelt\ \ and his wife Eleanor, called on Holmes at his home in Washington, D.C. Holmes\ \ died of pneumonia in Washington on March 6, 1935, two days short of his 94th\ \ birthday. He was the last living Justice of the Fuller Court and had been between\ \ 1925 and 1932 the last Justice of that Court to remain on the bench.\n\nIn his\ \ will, Holmes left his residuary estate to the United States government (he had\ \ earlier said that ”taxes are what we pay for civilized society” in Compañia\ \ General de Tabacos de Filipinas vs. Collector of Internal Revenue, 275 U.S.\ \ 87, 100 (1927).) After his death, his personal effects included his Civil War\ \ Officer’s uniform still stained with his blood and ’torn with shot’ as well\ \ as the Minié balls that had wounded him three times in separate battles. Holmes\ \ was buried beside his wife in Arlington National Cemetery.\n\nThe United States\ \ Postal Service honored Holmes with a Prominent Americans series (1965–1978)\ \ 15¢ postage stamp.\n\nHolmes's papers, donated to Harvard Law School, were kept\ \ closed for many years after his death, a circumstance that gave rise to somewhat\ \ fanciful accounts of his life. Catherine Drinker Bowen’s fictionalized biography\ \ Yankee from Olympus was a long-time bestseller, and the 1946 Broadway play and\ \ 1950 Hollywood motion picture The Magnificent Yankee were based on a biography\ \ of Holmes by Francis Biddle, who had been one of his secretaries. Much of the\ \ scholarly literature addressing Holmes’s opinions was written before much was\ \ known about his life, and before a coherent account of his views was available.\ \ The Harvard Law Library eventually relented and made available to scholars the\ \ extensive Holmes papers, collected and annotated by Mark DeWolfe Howe, who died\ \ before he was able to complete his own biography of the justice. In 1989, the\ \ first full biography based on Holmes's papers was published, and several other\ \ biographies have followed.\n\nCongress established the U.S. Permanent Committee\ \ for the Oliver Wendell Holmes Devise within the Library of Congress with the\ \ funds he left to the United States in his will which were used to create a memorial\ \ garden at the Supreme Court building and to publish an ongoing series on the\ \ history of the Supreme Court.\n\nHolmes' summer house in Beverly, Massachusetts,\ \ was designated a National Historic Landmark in 1972, recognition for his contributions\ \ to American jurisprudence.\n\nJustice Holmes was an honorary member of the Connecticut\ \ Society of the Cincinnati.\n\nClerks\n\n\"Many secretaries formed close friendships\ \ with one another\", wrote Tony Hiss, son of Alger Hiss, about the special club\ \ of clerks of Oliver Wendell Holmes Jr. They included:\n Robert M. Benjamin (later,\ \ lawyer for an appeal by Alger Hiss)\n Laurence Curtis, U.S. Representative\n\ \ Alger Hiss, president of the Carnegie Endowment for International Peace and\ \ convicted perjurer\n Donald Hiss, partner, Covington & Burling law firm\n Irving\ \ Sands Olds, chairman of U.S. Steel\n H. Chapman Rose, Undersecretary of the\ \ United States Treasury\n Chauncey Belknap, partner at Patterson, Belknap, Webb\ \ & Tyler, one of largest law firms in New York during his time, and an attorney\ \ for the Rockefeller Foundation\n\nIn popular culture\n American actor Louis\ \ Calhern portrayed Holmes in the 1946 play The Magnificent Yankee, with Dorothy\ \ Gish as Holmes's wife Fanny. In 1950, Calhern repeated his performance in Metro-Goldwyn-Mayer's\ \ film version The Magnificent Yankee, for which he received his only Academy\ \ Award nomination. Ann Harding co-starred in the film. A 1965 television adaptation\ \ of the play starred Alfred Lunt and Lynn Fontanne in one of their few appearances\ \ on the small screen.\n In the movie Judgment at Nuremberg (1961), defense advocate\ \ Hans Rolfe (Maximilian Schell) quotes Holmes twice. First, with one of his earlier\ \ opinions:\n\nSecond, on the sterilization laws enacted in Virginia and upheld\ \ by the Supreme Court in Buck v. Bell:\n\n This was in relation to Holmes' support\ \ for eugenics laws in the United States, which Rolfe argued were not different\ \ in principle from the Nazi laws. In the earlier Playhouse 90 television version\ \ from 1959, which also quotes Holmes in this context, the tribunal judge Ives,\ \ who ultimately presents a dissenting verdict, is played by the actor Wendell\ \ Holmes (1914–1962), born Oliver Wendell Holmes.\n Holmes appears as a minor\ \ character in Bernard Cornwell's novels Copperhead and The Bloody Ground, the\ \ second and fourth volumes of his Starbuck Chronicles; the novels portray the\ \ battles of Ball's Bluff and Antietam, in both of which the young Lieutenant\ \ Holmes was wounded in action.\n The 1960s television sitcom Green Acres starred\ \ Eddie Albert as a character named Oliver Wendell Douglas, a Manhattan white\ \ shoe lawyer who gives up the law to become a farmer.\n The 1980 comic strip\ \ Bloom County features a character named Oliver Wendell Jones, a young computer\ \ hacker and gifted scientist.\n\nSee also\n\n Demographics of the Supreme Court\ \ of the United States\n Freedom for the Thought That We Hate\n List of justices\ \ of the Supreme Court of the United States\n List of law clerks of the Supreme\ \ Court of the United States (Seat 2)\n List of United States Supreme Court justices\ \ by time in office\n Prediction theory of law\n List of people on the cover of\ \ Time Magazine: 1920s (March 15, 1926)\n Skepticism in law\n List of United States\ \ Supreme Court cases by the Fuller Court\n List of United States Supreme Court\ \ cases by the Hughes Court\n List of United States Supreme Court cases by the\ \ Taft Court\n List of United States Supreme Court cases by the White Court\n\n\ References\n\nExplanatory notes\n\nCitations\n\nGeneral bibliography \n \n Collins,\ \ Ronald K.L., ed., The Fundamental Holmes: A Free Speech Chronicle and Reader\ \ (Cambridge University Press, 2010)\n \n \n \n \n Hoeflich, Michael H. and Davies,\ \ Ross E., eds. (2021). The Black Book of Justice Holmes: Text Transcript and\ \ Commentary. The Lawbook Exchange, Ltd. . Interview with editors \n Holmes, Oliver\ \ Wendell (1920). Collected Legal Papers. Harcourt, Brace and Company.\n\nFurther\ \ reading \n Abraham, Henry J., Justices, Presidents, and Senators: A History\ \ of the U.S. Supreme Court Appointments from Washington to Bush II (5th ed.,\ \ 2007). New York: Rowman & Littlefield Publishers. .\n Aichele, Gary, Oliver\ \ Wendell Holmes, Jr.: Soldier, Scholar, Judge. Twayne Publishers, 1989.\n 1991.\n\ \ Biddle, Francis, Mr. Justice Holmes. Scribner, 1942.\n Biddle, Francis, Justice\ \ Holmes, Natural Law, and the Supreme Court. MacMillan, 1961. Reviewed\n Brown,\ \ Richard Maxwell, No Duty to Retreat: Violence and Values in American History\ \ and Society. (University of Oklahoma Press, Norman Publishing Division of the\ \ University, by arrangement with Oxford University Press, Inc., 1991). \n Budiansky,\ \ Stephen, Oliver Wendell Holmes: A Life in War, Law, and Ideas. W.W. Norton &\ \ Company, 2019.\n Burton, Steven J., ed., The Path of the Law And Its Influence:\ \ The Legacy of Oliver Wendell Holmes, Jr. Cambridge University Press, 2000.\n\ \ Cushman, Clare, The Supreme Court Justices: Illustrated Biographies,1789-1995\ \ (2nd ed.) (Supreme Court Historical Society), (Congressional Quarterly Books,\ \ 2001) .\n Frank, John P., The Justices of the United States Supreme Court: Their\ \ Lives and Major Opinions (Leon Friedman and Fred L. Israel, editors) (Chelsea\ \ House Publishers, 1995) .\n Frankfurter, Felix, ed., Mr. Justice Holmes. Coward-McCann,\ \ Inc., 1931.\n Gordon, Robert W., ed., The Legacy of Oliver Wendell Holmes, Jr.\ \ Stanford University Press, 1992.\n Grant, Susan-Mary, Oliver Wendell Holmes,\ \ Jr.: Civil War Soldier, Supreme Court Justice. Routledge, 2016.\n Hall, Kermit\ \ L., ed., The Oxford Companion to the Supreme Court of the United States. New\ \ York: Oxford University Press, 1992. .\n Hurst, James Willard, Justice Holmes\ \ on Legal History. The Macmillan Company, 1964.\n Kang, John M., Oliver Wendell\ \ Holmes and Fixations of Manliness. Routledge, 2018. Reviewed\n Kellogg, Frederic\ \ R., Justice Oliver Wendell Holmes, Jr., Legal Theory, and Judicial Restraint.\ \ Cambridge University Press, 2007.\n Kellogg, Frederic R., Oliver Wendell Holmes\ \ Jr. and Legal Logic. The University of Chicago Press, 2018.\n Kornstein, Daniel,\ \ The Second Greatest American. AuthorHouse, 2017.\n Lerner, Max, ed., The Mind\ \ and Faith of Justice Holmes: His Speeches, Essays, Letters, and Judicial Opinions.\ \ Boston: Little, Brown and Company, 1943.\n Lewis, Anthony, Freedom for the Thought\ \ That We Hate: A Biography of the First Amendment (Basic ideas. New York: Basic\ \ Books, 2007). .\n Lian, Alexander, Stereoscopic Law: Oliver Wendell Holmes and\ \ Legal Education. Cambridge University Press, 2020.\n Martin, Fenton S. and Goehlert,\ \ Robert U., The U.S. Supreme Court: A Bibliography. Congressional Quarterly Books,\ \ 1990. .\n Matteson, John, A Worse Place Than Hell: How the Civil War Battle\ \ of Fredericksburg Changed a Nation. New York: W.W. Norton and Company, 2021.\ \ .\n Menand, Louis, The Metaphysical Club: A Story of Ideas in America. New York:\ \ Farrar, Straus and Giroux, 2001. .\n Mendenhall, Allen, Oliver Wendell Holmes\ \ Jr., Pragmatism, and the Jurisprudence of Agon: Aesthetic Dissent and the Common\ \ Law. Bucknell University Press, 2016.\n Monagan, John S., The Grand Panjandrum:\ \ Mellow Years of Justice Holmes. Lanham: University Press of America, 1988. .\n\ \ Rabban, David M., Law's History: American Legal Thought and the Transatlantic\ \ Turn to History. Cambridge University Press, 2012. .\n Rosenberg, David, The\ \ Hidden Holmes: His Theory of Torts in History. Harvard University Press, 1995.\ \ .\n Shriver, Harry C., ed., Justice Oliver Wendell Holmes: His Book Notices\ \ and Uncollected Letters and Papers. Central Book Co., 1936.\n Snyder, Brad,\ \ The House of Truth: A Washington Political Salon and the Foundations of American\ \ Liberalism. Oxford University Press, 2017.\n Urofsky, Melvin I., The Supreme\ \ Court Justices: A Biographical Dictionary (New York: Garland Publishing, 1994).\ \ 590 pp. .\n Vannatta, Seth, ed., The Pragmatism and Prejudice of Oliver Wendell\ \ Holmes Jr. Lexington Books, 2019.\n Wells, Catharine Pierce, Oliver Wendell\ \ Holmes: A Willing Servant to an Unknown God. Cambridge University Press, 2020.\n\ \ White, G. Edward, Justice Oliver Wendell Holmes: Law and the Inner Self. Oxford\ \ University Press, 1993.\n White, G. Edward, Oliver Wendell Holmes, Jr. Oxford\ \ University Press, 2006.\n\nExternal links\n\n Fanny Holmes, Wife Of Supreme\ \ Court Justice Oliver Wendell Holmes, Jr.\n Oliver Wendell Holmes Jr., American\ \ Jurist\n \n \n Oliver Wendell Holmes, Jr., Recalls Famed Abraham Lincoln Fort\ \ Stevens Visit, Original Letter at Shapell Manuscript Foundation\n \n Holmes'\ \ Dissenting Opinion, Abrams vs. United States, 10 November 1919\n \n \n \n Booknotes\ \ interview with Liva Baker on The Justice from Beacon Hill: The Life and Times\ \ of Oliver Wendell Holmes, September 8, 1991.\n\n|-\n\n|-\n\n \n1841 births\n\ 1935 deaths\n19th-century American judges\n20th-century American judges\nAmerican\ \ eugenicists\nAmerican legal writers\nAmerican people of English descent\nAmerican\ \ Unitarians\nBurials at Arlington National Cemetery\nChief Justices of the Massachusetts\ \ Supreme Judicial Court\nCorresponding Fellows of the British Academy\nDeaths\ \ from pneumonia in Washington, D.C.\nHall of Fame for Great Americans inductees\n\ Harvard College alumni\nHarvard Law School alumni\nHarvard Law School faculty\n\ Hasty Pudding alumni\nJustices of the Supreme Court of the United States\nLawyers\ \ from Boston\nMassachusetts lawyers\nMassachusetts Republicans\nPeople from Beacon\ \ Hill, Boston\nPeople from Mattapoisett, Massachusetts\nPeople of Massachusetts\ \ in the American Civil War\nPhilosophers of law\nUnited States Army officers\n\ United States federal judges appointed by Theodore Roosevelt" - source_sentence: What are the boiling and melting points of water on the Celsius temperature scale? sentences: - "Amyotrophic lateral sclerosis (ALS), also known as motor neurone disease (MND)\ \ or Lou Gehrig's disease, is a neurodegenerative disease that results in the\ \ progressive loss of motor neurons that control voluntary muscles. ALS is the\ \ most common type of motor neuron disease. Early symptoms of ALS include stiff\ \ muscles, muscle twitches, and gradual increasing weakness and muscle wasting.\ \ Limb-onset ALS begins with weakness in the arms or legs, while bulbar-onset\ \ ALS begins with difficulty speaking or swallowing. Half of the people with ALS\ \ develop at least mild difficulties with thinking and behavior, and about 15%\ \ develop frontotemporal dementia. Most people experience pain. The affected muscles\ \ are responsible for chewing food, speaking, and walking. Motor neuron loss continues\ \ until the ability to eat, speak, move, and finally the ability to breathe is\ \ lost. ALS eventually causes paralysis and early death, usually from respiratory\ \ failure.\n\nMost cases of ALS (about 90% to 95%) have no known cause, and are\ \ known as sporadic ALS. However, both genetic and environmental factors are believed\ \ to be involved. The remaining 5% to 10% of cases have a genetic cause linked\ \ to a history of the disease in the family, and these are known as familial ALS.\ \ About half of these genetic cases are due to one of two specific genes. ALS\ \ and frontotemporal dementia (FTD) are considered to be part of a common disease\ \ spectrum (ALS-FTD) because of genetic, clinical, and pathological similarities.\ \ The underlying mechanism involves damage to both upper and lower motor neurons;\ \ in ALS-FTD, neurons in the frontal and temporal lobes of the brain die as well.\ \ The diagnosis is based on a person's signs and symptoms, with testing done to\ \ rule out other potential causes.\n\nThere is no known cure for ALS. The goal\ \ of treatment is to improve symptoms. A medication called riluzole may extend\ \ life by about two to three months. Non-invasive ventilation may result in both\ \ improved quality and length of life. Mechanical ventilation can prolong survival\ \ but does not stop disease progression. A feeding tube may help. The disease\ \ can affect people of any age, but usually starts around the age of 60 and in\ \ inherited cases around the age of 50. The average survival from onset to death\ \ is two to four years, though this can vary, and about 10% survive longer than\ \ 10 years, and death is usually due to respiratory failure. In Europe, the disease\ \ affects about two to three people per 100,000 per year. Rates in much of the\ \ world are unclear. In the United States, it is more common in white people than\ \ in black people.\n\nDescriptions of the disease date back to at least 1824 by\ \ Charles Bell. In 1869, the connection between the symptoms and the underlying\ \ neurological problems was first described by French neurologist Jean-Martin\ \ Charcot, who in 1874 began using the term amyotrophic lateral sclerosis. It\ \ became well known in the United States in the 20th century when in 1939 it affected\ \ baseball player Lou Gehrig (leading to his death two years later), and later\ \ worldwide, following the 1963 diagnosis of then 21 year old cosmologist Stephen\ \ Hawking. However, unlike most ALS sufferers, Hawking managed to survive his\ \ illness for another 55 years. The first ALS gene was discovered in 1993 while\ \ the first animal model was developed in 1994. In 2014, videos of the Ice Bucket\ \ Challenge went viral on the Internet and increased public awareness of the condition.\n\ \nClassification\nALS is a motor neuron disease, also spelled \"motor neurone\ \ disease\", which is a group of neurological disorders that selectively affect\ \ motor neurons, the cells that control voluntary muscles of the body. Other motor\ \ neuron diseases include primary lateral sclerosis (PLS), progressive muscular\ \ atrophy (PMA), progressive bulbar palsy, pseudobulbar palsy, and monomelic amyotrophy\ \ (MMA).\n\nALS itself can be classified in a few different ways: by how fast\ \ the disease progresses which is related to the age of onset; by whether it is\ \ familial or sporadic, and by the region first affected. In about 25% of cases,\ \ muscles in the face, mouth, and throat are affected first because motor neurons\ \ in the part of the brainstem called the medulla oblongata (formerly called the\ \ \"bulb\") start to die first along with lower motor neurons. This form is called\ \ \"bulbar-onset ALS\". In about 5% of cases, muscles in the trunk of the body\ \ are affected first. In most cases the disease spreads and affects other spinal\ \ cord regions. A few people with ALS have symptoms that are limited to one spinal\ \ cord region for at least 12 to 24 months before spreading to a second region;\ \ these regional variants of ALS are associated with a better prognosis.\n\nClassical\ \ ALS, PLS, and PMA\n\nALS can be classified by the types of motor neurons that\ \ are affected. Typical or \"classical\" ALS involves upper motor neurons in the\ \ brain and lower motor neurons in the spinal cord. Primary lateral sclerosis\ \ (PLS) involves only upper motor neurons, and progressive muscular atrophy (PMA)\ \ involves only lower motor neurons. There is debate over whether PLS and PMA\ \ are separate diseases or simply variants of ALS.\n\nClassic ALS accounts for\ \ about 70% of all cases of ALS and can be subdivided into limb-onset ALS (also\ \ known as spinal-onset) and bulbar-onset ALS. Limb-onset ALS begins with weakness\ \ in the arms and legs and accounts for about two-thirds of all classic ALS cases.\ \ Bulbar-onset ALS begins with weakness in the muscles of speech, chewing, and\ \ swallowing and accounts for the other one-third of cases. Bulbar onset is associated\ \ with a worse prognosis than limb-onset ALS; a population-based study found that\ \ bulbar-onset ALS has a median survival of 2.0 years and a 10-year survival rate\ \ of 3%, while limb-onset ALS has a median survival of 2.6 years and a 10-year\ \ survival rate of 13%. A rare variant is respiratory-onset ALS that accounts\ \ for about 3% of all cases of ALS, in which the initial symptoms are difficulty\ \ breathing (dyspnea) with exertion, at rest, or while lying down (orthopnea).\ \ Spinal and bulbar symptoms tend to be mild or absent at the beginning. It is\ \ more common in males. Respiratory-onset ALS has the worst prognosis of any ALS\ \ variant; in a population-based study, those with respiratory-onset had a median\ \ survival of 1.4 years and 0% survival at 10 years.\n\nPrimary lateral sclerosis\ \ (PLS) accounts for about 5% of all cases of ALS and affects upper motor neurons\ \ in the arms and legs. However, more than 75% of people with apparent PLS develop\ \ lower motor neuron signs within four years of symptom onset, meaning that a\ \ definite diagnosis of PLS cannot be made until then. PLS has a better prognosis\ \ than classic ALS, as it progresses slower, results in less functional decline,\ \ does not affect the ability to breathe, and causes less severe weight loss.\n\ \nProgressive muscular atrophy (PMA) accounts for about 5% of all cases of ALS\ \ and affects lower motor neurons in the arms and legs. While PMA is associated\ \ with longer survival on average than classic ALS, it still progresses to other\ \ spinal cord regions over time, eventually leading to respiratory failure and\ \ death. Upper motor neuron signs can develop late in the course of PMA, in which\ \ case the diagnosis might be changed to classic ALS.\n\nRegional variants\nRegional\ \ variants of ALS have symptoms that are limited to a single spinal cord region\ \ for at least a year; they progress more slowly than classic ALS and are associated\ \ with longer survival. Examples include flail arm syndrome, flail leg syndrome,\ \ and isolated bulbar ALS. Flail arm syndrome and flail leg syndrome are often\ \ considered to be regional variants of PMA because they only involve lower motor\ \ neurons. Isolated bulbar ALS can involve upper or lower motor neurons. These\ \ regional variants of ALS cannot be diagnosed at the onset of symptoms; a failure\ \ of the disease to spread to other spinal cord regions for an extended period\ \ of time (at least 12 months) must be observed.\n\nFlail arm syndrome, also called\ \ brachial amyotrophic diplegia, is characterized by lower motor neuron damage\ \ in the cervical spinal cord only, leading to gradual onset of weakness in the\ \ proximal arm muscles and decreased or absent reflexes. Flail leg syndrome, also\ \ called leg amyotrophic diplegia, is characterized by lower motor neuron damage\ \ in the lumbosacral spinal cord only, leading to gradual onset of weakness in\ \ the legs and decreased or absent reflexes. Isolated bulbar ALS is characterized\ \ by upper or lower motor neuron damage in the bulbar region only, leading to\ \ gradual onset of difficulty with speech (dysarthria) and swallowing (dysphagia);\ \ breathing (respiration) is generally preserved, at least initially. Two small\ \ studies have shown that people with isolated bulbar ALS may live longer than\ \ people with bulbar-onset ALS.\n\nAge of onset\nALS can also be classified based\ \ on the age of onset. While the peak age of onset is 58 to 63 for sporadic ALS\ \ and 47 to 52 for familial ALS, about 10% of all cases of ALS begin before age\ \ 45 (\"young-onset\" ALS), and about 1% of all cases begin before age 25 (juvenile\ \ ALS). People who develop young-onset ALS are more likely to be male, less likely\ \ to have bulbar onset of symptoms, and more likely to have a slower progression\ \ of disease. Juvenile ALS is more likely to be familial than adult-onset ALS;\ \ genes known to be associated with juvenile ALS include ALS2, SETX, SPG11, FUS,\ \ and SIGMAR1. Although most people with juvenile ALS live longer than those with\ \ adult-onset ALS, some of them have specific mutations in FUS and SOD1 that are\ \ associated with a poor prognosis. Late onset (after age 65) is associated with\ \ a more rapid functional decline and shorter survival.\n\nSigns and symptoms\ \ \nThe disorder causes muscle weakness, atrophy, and muscle spasms throughout\ \ the body due to the degeneration of the upper motor and lower motor neurons.\ \ Individuals affected by the disorder may ultimately lose the ability to initiate\ \ and control all voluntary movement, although bladder and bowel function and\ \ the extraocular muscles (the muscles responsible for eye movement) are usually\ \ spared until the final stages of the disease.\n\nCognitive or behavioral dysfunction\ \ is present in 30–50% of individuals with ALS. Around half of people with ALS\ \ will experience mild changes in cognition and behavior, and 10–15% will show\ \ signs of frontotemporal dementia (FTD). Most people with ALS who have normal\ \ cognition at the time of diagnosis have preserved cognition throughout the course\ \ of their disease; the development of cognitive impairment in those with normal\ \ cognition at baseline is associated with a worse prognosis. Repeating phrases\ \ or gestures, apathy, and loss of inhibition are frequently reported behavioral\ \ features of ALS. Language dysfunction, executive dysfunction, and troubles with\ \ social cognition and verbal memory are the most commonly reported cognitive\ \ symptoms in ALS; a meta-analysis found no relationship between dysfunction and\ \ disease severity. However, cognitive and behavioral dysfunctions have been found\ \ to correlate with reduced survival in people with ALS and increased caregiver\ \ burden; this may be due in part to deficits in social cognition. About half\ \ the people who have ALS experience emotional lability, in which they cry or\ \ laugh for no reason; it is more common in those with bulbar-onset ALS.\n\nPain\ \ is a symptom experienced by most people with ALS and can take the form of neuropathic\ \ pain (pain caused by nerve damage), spasticity, muscle cramps, and nociceptive\ \ pain caused by reduced mobility and muscle weakness; examples of nociceptive\ \ pain in ALS include contractures (permanent shortening of a muscle or joint),\ \ neck pain, back pain, shoulder pain, and pressure ulcers.\n\nSensory nerves\ \ and the autonomic nervous system are generally unaffected, meaning the majority\ \ of people with ALS maintain hearing, sight, touch, smell, and taste.\n\nInitial\ \ symptoms \nThe start of ALS may be so subtle that the symptoms are overlooked.\ \ The earliest symptoms of ALS are muscle weakness or muscle atrophy. Other presenting\ \ symptoms include trouble swallowing or breathing, cramping, or stiffness of\ \ affected muscles; muscle weakness affecting an arm or a leg; or slurred and\ \ nasal speech. The parts of the body affected by early symptoms of ALS depend\ \ on which motor neurons in the body are damaged first.\n\nIn limb-onset ALS,\ \ the first symptoms are in the arms or the legs. If the legs are affected first,\ \ people may experience awkwardness, tripping, or stumbling when walking or running;\ \ this is often marked by walking with a \"dropped foot\" that drags gently on\ \ the ground. If the arms are affected first, they may experience difficulty with\ \ tasks requiring manual dexterity, such as buttoning a shirt, writing, or turning\ \ a key in a lock.\n\nIn bulbar-onset ALS, the first symptoms are difficulty speaking\ \ or swallowing. Speech may become slurred, nasal in character, or quieter. There\ \ may be difficulty with swallowing and loss of tongue mobility. A smaller proportion\ \ of people experience \"respiratory-onset\" ALS, where the intercostal muscles\ \ that support breathing are affected first.\n\nOver time, people experience increasing\ \ difficulty moving, swallowing (dysphagia), and speaking or forming words (dysarthria).\ \ Symptoms of upper motor neuron involvement include tight and stiff muscles (spasticity)\ \ and exaggerated reflexes (hyperreflexia), including an overactive gag reflex.\ \ An abnormal reflex commonly called Babinski's sign also indicates upper motor\ \ neuron damage. Symptoms of lower motor neuron degeneration include muscle weakness\ \ and atrophy, muscle cramps, and fleeting twitches of muscles that can be seen\ \ under the skin (fasciculations). However, twitching is more of a side effect\ \ than a diagnostic symptom; it either occurs after or accompanies weakness and\ \ atrophy.\n\nProgression \nAlthough the initial symptoms and rate of progression\ \ vary from person to person, the disease eventually spreads to unaffected regions\ \ and the affected regions become more affected. Most people eventually are not\ \ able to walk or use their hands and arms, lose the ability to speak and swallow\ \ food and their own saliva, and begin to lose the ability to cough and to breathe\ \ on their own.\n\nThe rate of progression can be measured using the ALS Functional\ \ Rating Scale - Revised (ALSFRS-R), a 12-item instrument survey administered\ \ as a clinical interview or self-reported questionnaire that produces a score\ \ between 48 (normal function) and 0 (severe disability); it is the most commonly\ \ used outcome measure in clinical trials and is used by doctors to track disease\ \ progression. Though the degree of variability is high and a small percentage\ \ of people have a much slower disorder, on average, people with ALS lose about\ \ 0.9 FRS points per month. A survey-based study among clinicians showed that\ \ they rated a 20% change in the slope of the ALSFRS-R as being clinically meaningful.\n\ \nDisease progression tends to be slower in people who are younger than 40 at\ \ onset, are mildly obese, have symptoms restricted primarily to one limb, and\ \ those with primarily upper motor neuron symptoms. Conversely, progression is\ \ faster and prognosis poorer in people with bulbar-onset ALS, respiratory-onset\ \ ALS and frontotemporal dementia.\n\nLate stages \nDifficulties with chewing\ \ and swallowing make eating very difficult and increase the risk of choking or\ \ of aspirating food into the lungs. In later stages of the disorder, aspiration\ \ pneumonia can develop, and maintaining a healthy weight can become a significant\ \ problem that may require the insertion of a feeding tube. As the diaphragm and\ \ intercostal muscles of the rib cage that support breathing weaken, measures\ \ of lung function such as vital capacity and inspiratory pressure diminish. In\ \ respiratory-onset ALS, this may occur before significant limb weakness is apparent.\ \ The most common cause of death among people with ALS are respiratory failure\ \ or pneumonia and most people with ALS die in their own home from the former\ \ cause, with their breath stopping while they sleep.\n\nAlthough respiratory\ \ support can ease problems with breathing and prolong survival, it does not affect\ \ the progression of ALS. Most people with ALS die between two and four years\ \ after the diagnosis. Around half of people with ALS die within 30 months of\ \ their symptoms beginning, and about 20% of people with ALS live between five\ \ and ten years after symptoms begin. Guitarist Jason Becker has lived since 1989\ \ with the disorder, while cosmologist Stephen Hawking lived for 55 more years\ \ following his diagnosis, but they are considered unusual cases.\n\nCause \n\n\ Though the exact cause of ALS is unknown, genetic and environmental factors are\ \ thought to be of roughly equal importance. The genetic factors are better understood\ \ than the environmental factors; no specific environmental factor has been definitively\ \ shown to cause ALS. A liability threshold model for ALS proposes that cellular\ \ damage accumulates over time due to genetic factors present at birth and exposure\ \ to environmental risks throughout life.\n\nGenetics \n\nALS can be classified\ \ as familial or sporadic, depending on whether or not there is a family history\ \ of the disease. There is no consensus among neurologists on the exact definition\ \ of familial ALS. The strictest definition is that a person with ALS must have\ \ two or more first-degree relatives (children, siblings, or parents) who also\ \ have ALS. A less strict definition is that a person with ALS must have at least\ \ one first-degree or second-degree relative (grandparents, grandchildren, aunts,\ \ uncles, nephews, nieces or half-siblings) who also has ALS. Familial ALS is\ \ usually said to account for 10% of all cases of ALS, though estimates range\ \ from 5% to 20%. Higher estimates use a broader definition of familial ALS and\ \ examine the family history of people with ALS more thoroughly.\n\nIn sporadic\ \ ALS, there is no family history of the disease. Sporadic ALS and familial ALS\ \ appear identical clinically and pathologically and are similar genetically;\ \ about 10% of people with sporadic ALS have mutations in genes that are known\ \ to cause familial ALS. In light of these parallels, the term \"sporadic ALS\"\ \ has been criticized as misleading because it implies that cases of sporadic\ \ ALS are only caused by environmental factors; the term \"isolated ALS\" has\ \ been suggested as a more accurate alternative.\n\nMore than 20 genes have been\ \ associated with familial ALS, of which four account for the majority of familial\ \ cases: C9orf72 (40%), SOD1 (20%), FUS (1–5%), and TARDBP (1–5%). The genetics\ \ of familial ALS are better understood than the genetics of sporadic ALS; , the\ \ known ALS genes explained about 70% of familial ALS and about 15% of sporadic\ \ ALS. Overall, first-degree relatives of an individual with ALS have a 1% risk\ \ of developing ALS. ALS has an oligogenic mode of inheritance, meaning that mutations\ \ in two or more genes are required to cause disease.\n\nALS and frontotemporal\ \ dementia (FTD) are now considered to be part of a common disease spectrum (FTD–ALS)\ \ because of genetic, clinical, and pathological similarities. Genetically, C9orf72\ \ repeat expansions account for about 40% of familial ALS and 25% of familial\ \ FTD. Clinically, 50% of people with ALS have some cognitive or behavioral impairments\ \ and 5–15% have FTD, while 40% of people with FTD have some motor neuron symptoms\ \ and 12.5% have ALS. Pathologically, abnormal aggregations of TDP-43 protein\ \ are seen in up to 97% of ALS patients and up to 50% of FTD patients. In December\ \ 2021 a paper found the TDP-43 proteinopathy is in turn caused by defective cyclophilin\ \ A which regulates TARDBP gene expression. Other genes known to cause FTD-ALS\ \ include CHCHD10, SQSTM1, and TBK1.\n\nEnvironmental factors \nWhere no family\ \ history of the disease is present — around 90% of cases — no cause is known.\ \ Possible associations for which evidence is inconclusive include military service\ \ and smoking. Although studies on military history and ALS frequency are inconsistent,\ \ there is weak evidence for a positive correlation. Various proposed factors\ \ include exposure to environmental toxins (inferred from geographical deployment\ \ studies), as well as alcohol and tobacco use during military service.\n\nA 2016\ \ review of 16 meta-analyses concluded that there was convincing evidence for\ \ an association with chronic occupational exposure to lead; suggestive evidence\ \ for farming, exposure to heavy metals other than lead, beta-carotene intake,\ \ and head injury; and weak evidence for omega-three fatty acid intake, exposure\ \ to extremely low frequency electromagnetic fields, pesticides, and serum uric\ \ acid.\n\nIn a 2017 study by the United States Centers for Disease Control and\ \ Prevention analyzing U.S. deaths from 1985 to 2011, occupations correlated with\ \ ALS deaths were white collar, such as in management, financial, architectural,\ \ computing, legal, and education jobs. Other potential risk factors remain unconfirmed,\ \ including chemical exposure, electromagnetic field exposure, occupation, physical\ \ trauma, and electric shock. There is a tentative association with exposure to\ \ various pesticides, including the organochlorine insecticides aldrin, dieldrin,\ \ DDT, and toxaphene.\n\nHead injury\nA 2015 review found that moderate to severe\ \ traumatic brain injury is a risk factor for ALS, but whether mild traumatic\ \ brain injury increases rates was unclear. A 2017 meta-analysis found an association\ \ between head injuries and ALS; however, this association disappeared when the\ \ authors considered the possibility of reverse causation, which is the idea that\ \ head injuries are an early symptom of undiagnosed ALS, rather than the cause\ \ of ALS.\n\nPhysical activity\nA number of reviews prior to 2021 found no relationship\ \ between the amount of physical activity and the risk of developing ALS. A 2009\ \ review found that the evidence for physical activity as a risk factor for ALS\ \ was limited, conflicting, and of insufficient quality to come to a firm conclusion.\ \ A 2014 review concluded that physical activity in general is not a risk factor\ \ for ALS, that soccer and American football are possibly associated with ALS,\ \ and that there was not enough evidence to say whether or not physically demanding\ \ occupations are associated with ALS. A 2016 review found the evidence inconclusive\ \ and noted that differences in study design make it difficult to compare studies,\ \ as they do not use the same measures of physical activity or the same diagnostic\ \ criteria for ALS.\nHowever, research published in 2021 suggested that there\ \ was a positive causal relationship between ALS and intense physical exercise\ \ in those with a risk genotype.\n\nSports\nBoth soccer and American football\ \ have been identified as risk factors for ALS in several studies, although this\ \ association is based on small numbers of ALS cases. A 2012 retrospective cohort\ \ study of 3,439 former NFL players found that their risk of dying from neurodegenerative\ \ causes was three times higher than the general US population, and their risk\ \ of dying from ALS or Alzheimer's disease was four times higher. However, this\ \ increased risk was calculated on the basis of two deaths from Alzheimer's disease\ \ and six deaths from ALS out of 334 deaths total in this cohort, meaning that\ \ this study does not definitively prove that playing American football is a risk\ \ factor for ALS. Some NFL players thought to have died from ALS may have actually\ \ had chronic traumatic encephalopathy (CTE), a neurodegenerative disorder associated\ \ with multiple head injuries that can present with symptoms that are very similar\ \ to ALS.\n\nSoccer was identified as a possible risk factor for ALS in a retrospective\ \ cohort study of 24,000 Italian soccer players who played between 1960 and 1996.\ \ There were 375 deaths in this group, including eight from ALS. Based on this\ \ information and the incidence of ALS, it was calculated that the soccer players\ \ were 11 times more likely to die from ALS than the general Italian population.\ \ However, this calculation has been criticized for relying on an inappropriately\ \ low number of expected cases of ALS in the cohort. When the lifetime risk of\ \ developing ALS was used to predict the number of expected cases, soccer players\ \ were no more likely to die of ALS than the general population.\n\nSmoking\n\ Smoking is possibly associated with ALS. A 2009 review concluded that smoking\ \ was an established risk factor for ALS. A 2010 systematic review and meta-analysis\ \ concluded that there was not a strong association between smoking and ALS, but\ \ that smoking might be associated with a higher risk of ALS in women. A 2011\ \ meta-analysis concluded that smoking increases the risk of ALS versus never\ \ smoking. Among smokers, the younger they started smoking, the more likely they\ \ were to get ALS; however, neither the number of years smoked nor the number\ \ of cigarettes smoked per day affected their risk of developing ALS.\n\nPathophysiology\n\ \nNeuropathology\nThe defining feature of ALS is the death of both upper motor\ \ neurons (located in the motor cortex of the brain) and lower motor neurons (located\ \ in the brainstem and spinal cord). In ALS with frontotemporal dementia, neurons\ \ throughout the frontal and temporal lobes of the brain die as well. The pathological\ \ hallmark of ALS is the presence of inclusion bodies (abnormal aggregations of\ \ protein) known as Bunina bodies in the cytoplasm of motor neurons. In about\ \ 97% of people with ALS, the main component of the inclusion bodies is TDP-43\ \ protein; however, in those with SOD1 or FUS mutations, the main component of\ \ the inclusion bodies is SOD1 protein or FUS protein, respectively. The gross\ \ pathology of ALS, which are features of the disease that can be seen with the\ \ naked eye, include skeletal muscle atrophy, motor cortex atrophy, sclerosis\ \ of the corticospinal and corticobulbar tracts, thinning of the hypoglossal nerves\ \ (which control the tongue), and thinning of the anterior roots of the spinal\ \ cord. Aside from the death of motor neurons, two other characteristics common\ \ to most ALS variants are focal initial pathology, meaning that symptoms start\ \ in a single spinal cord region, and progressive continuous spread, meaning that\ \ symptoms spread to additional regions over time. Prion-like propagation of misfolded\ \ proteins from cell to cell may explain why ALS starts in one area and spreads\ \ to others. The glymphatic system may also be involved in the pathogenesis of\ \ ALS.\n\nBiochemistry\n\nIt is still not fully understood why neurons die in\ \ ALS, but this neurodegeneration is thought to involve many different cellular\ \ and molecular processes. The genes known to be involved in ALS can be grouped\ \ into three general categories based on their normal function: protein degradation,\ \ the cytoskeleton, and RNA processing. Mutant SOD1 protein forms intracellular\ \ aggregations that inhibit protein degradation. Cytoplasmic aggregations of wild-type\ \ (normal) SOD1 protein are common in sporadic ALS. It is thought that misfolded\ \ mutant SOD1 can cause misfolding and aggregation of wild-type SOD1 in neighboring\ \ neurons in a prion-like manner. Other protein degradation genes that can cause\ \ ALS when mutated include VCP, OPTN, TBK1, and SQSTM1. Three genes implicated\ \ in ALS that are important for maintaining the cytoskeleton and for axonal transport\ \ include DCTN1, PFN1, and TUBA4A.\n\nThere are a number of ALS genes that encode\ \ for RNA-binding proteins. The first to be discovered was TDP-43 protein, a nuclear\ \ protein that aggregates in the cytoplasm of motor neurons in almost all cases\ \ of ALS; however, mutations in TARDBP, the gene that codes for TDP-43, are a\ \ rare cause of ALS. FUS codes for FUS, another RNA-binding protein with a similar\ \ function to TDP-43, which can cause ALS when mutated. It is thought that mutations\ \ in TARDBP and FUS increase the binding affinity of the low-complexity domain,\ \ causing their respective proteins to aggregate in the cytoplasm. Once these\ \ mutant RNA-binding proteins are misfolded and aggregated, they may be able to\ \ misfold normal protein both within and between cells in a prion-like manner.\ \ This also leads to decreased levels of RNA-binding protein in the nucleus, which\ \ may mean that their target RNA transcripts do not undergo the normal processing.\ \ Other RNA metabolism genes associated with ALS include ANG, SETX, and MATR3.\n\ \nC9orf72 is the most commonly mutated gene in ALS and causes motor neuron death\ \ through a number of mechanisms. The pathogenic mutation is a hexanucleotide\ \ repeat expansion (a series of six nucleotides repeated over and over); people\ \ with up to 30 repeats are considered normal, while people with hundreds or thousands\ \ of repeats can have familial ALS, frontotemporal dementia, or sometimes sporadic\ \ ALS. The three mechanisms of disease associated with these C9orf72 repeats are\ \ deposition of RNA transcripts in the nucleus, translation of the RNA into toxic\ \ dipeptide repeat proteins in the cytoplasm, and decreased levels of the normal\ \ C9orf72 protein. Mitochondrial bioenergetic dysfunction leading to dysfunctional\ \ motor neuron axonal homeostasis (reduced axonal length and fast axonal transport\ \ of mitochondrial cargo) has been shown to occur in C9orf72-ALS using human induced\ \ pluripotent stem cell (iPSC) technologies coupled with CRISPR/Cas9 gene-editing,\ \ and human post-mortem spinal cord tissue examination.\n\nExcitotoxicity, or\ \ nerve cell death caused by high levels of intracellular calcium due to excessive\ \ stimulation by the excitatory neurotransmitter glutamate, is a mechanism thought\ \ to be common to all forms of ALS. Motor neurons are more sensitive to excitotoxicity\ \ than other types of neurons because they have a lower calcium-buffering capacity\ \ and a type of glutamate receptor (the AMPA receptor) that is more permeable\ \ to calcium. In ALS, there are decreased levels of excitatory amino acid transporter\ \ 2 (EAAT2), which is the main transporter that removes glutamate from the synapse;\ \ this leads to increased synaptic glutamate levels and excitotoxicity. Riluzole,\ \ a drug that modestly prolongs survival in ALS, inhibits glutamate release from\ \ pre-synaptic neurons; however, it is unclear if this mechanism is responsible\ \ for its therapeutic effect.\n\nDiagnosis \n\nNo test can provide a definite\ \ diagnosis of ALS, although the presence of upper and lower motor neuron signs\ \ in a single limb is strongly suggestive. Instead, the diagnosis of ALS is primarily\ \ based on the symptoms and signs the physician observes in the person and a series\ \ of tests to rule out other diseases. Physicians obtain the person's full medical\ \ history and usually conduct a neurologic examination at regular intervals to\ \ assess whether symptoms such as muscle weakness, atrophy of muscles, hyperreflexia,\ \ and spasticity are worsening. A number of biomarkers are being studied for the\ \ condition, but so far are not in general medical use.\n\nDiagnostic criteria\n\ The diagnosis of ALS is based on the El Escorial Revised criteria and the Awaji\ \ criteria. The original El Escorial criteria had four levels of diagnostic certainty,\ \ based on how many of the four spinal cord regions were involved: bulbar, cervical,\ \ thoracic, and lumbar. Definite ALS was defined as upper motor neuron (UMN) and\ \ lower motor neuron (LMN) signs in three spinal cord regions, probable ALS as\ \ UMN and LMN signs in two regions, possible ALS as UMN and LMN signs in only\ \ one region, and suspected ALS as LMN signs only. The El Escorial Revised criteria,\ \ also known as the Airlie House criteria, dropped the \"suspected ALS\" category\ \ and added a \"laboratory-supported probable ALS\" category. The Awaji criteria\ \ give abnormal EMG tests the same weight as clinical signs of LMN dysfunction\ \ in making the diagnosis of ALS, thus making the \"laboratory-supported probable\ \ ALS\" category unnecessary. The only three categories in the Awaji criteria\ \ are definite ALS, probable ALS, and possible ALS.\n\nThe El Escorial Revised\ \ criteria are specific for ALS, which means that someone who meets the criteria\ \ is very likely to have ALS; however, they are not especially sensitive for ALS,\ \ which means that someone who does not meet the criteria can still have ALS.\ \ Their sensitivity is particularly poor in the early stages of ALS. The Awaji\ \ criteria have better sensitivity than the El Escorial Revised criteria, especially\ \ for bulbar-onset ALS. A 2012 meta-analysis found that the El Escorial Revised\ \ criteria had a sensitivity of 62.2%, while the Awaji criteria had a sensitivity\ \ of 81.1%; both sets of criteria had a specificity of about 98%. The El Escorial\ \ criteria were designed to standardize patient groups for clinical trials but\ \ are not as useful in clinical practice; possible ALS as described by the El\ \ Escorial criteria is almost always clinically ALS.\n\nDifferential diagnosis\n\ Because symptoms of ALS can be similar to those of a wide variety of other, more\ \ treatable diseases or disorders, appropriate tests must be conducted to exclude\ \ the possibility of other conditions. One of these tests is electromyography\ \ (EMG), a special recording technique that detects electrical activity in muscles.\ \ Certain EMG findings can support the diagnosis of ALS. Another common test measures\ \ nerve conduction velocity (NCV). Specific abnormalities in the NCV results may\ \ suggest, for example, that the person has a form of peripheral neuropathy (damage\ \ to peripheral nerves) or myopathy (muscle disease) rather than ALS. While a\ \ magnetic resonance imaging (MRI) is often normal in people with early stage\ \ ALS, it can reveal evidence of other problems that may be causing the symptoms,\ \ such as a spinal cord tumor, multiple sclerosis, a herniated disc in the neck,\ \ syringomyelia, or cervical spondylosis.\n\nBased on the person's symptoms and\ \ findings from the examination and from these tests, the physician may order\ \ tests on blood and urine samples to eliminate the possibility of other diseases,\ \ as well as routine laboratory tests. In some cases, for example, if a physician\ \ suspects the person may have a myopathy rather than ALS, a muscle biopsy may\ \ be performed.\n\nA number of infectious diseases can sometimes cause ALS-like\ \ symptoms, including human immunodeficiency virus (HIV), human T-lymphotropic\ \ virus (HTLV), Lyme disease, and syphilis. Neurological disorders such as multiple\ \ sclerosis, post-polio syndrome, multifocal motor neuropathy, CIDP, spinal muscular\ \ atrophy, and spinal and bulbar muscular atrophy can also mimic certain aspects\ \ of the disease and should be considered.\n\nALS must be differentiated from\ \ the \"ALS mimic syndromes\", which are unrelated disorders that may have a similar\ \ presentation and clinical features to ALS or its variants. Because the prognosis\ \ of ALS and closely related subtypes of motor neurone disease are generally poor,\ \ neurologists may carry out investigations to evaluate and exclude other diagnostic\ \ possibilities. Disorders of the neuromuscular junction, such as myasthenia gravis\ \ (MG) and Lambert–Eaton myasthenic syndrome, may also mimic ALS, although this\ \ rarely presents diagnostic difficulty over time. Benign fasciculation syndrome\ \ and cramp fasciculation syndrome may also, occasionally, mimic some of the\ \ early symptoms of ALS. Nonetheless, the absence of other neurological features\ \ that develop inexorably with ALS means that, over time, the distinction will\ \ not present any difficulty to the experienced neurologist; where doubt remains,\ \ EMG may be helpful.\n\nMost cases of ALS, however, are correctly diagnosed,\ \ with the error rate of diagnosis in large ALS clinics being less than 10%. One\ \ study examined 190 people who met the MND/ALS diagnostic criteria, complemented\ \ with laboratory research in compliance with both research protocols and regular\ \ monitoring. Thirty of these people (16%) had their diagnosis completely changed\ \ during the clinical observation development period. In the same study, three\ \ people had a false negative diagnosis of MG, which can mimic ALS and other neurological\ \ disorders, leading to a delay in diagnosis and treatment. MG is eminently treatable;\ \ ALS is not.\n\nManagement \n\nThere is no cure for ALS. Management focuses on\ \ treating symptoms and providing supportive care, with the goal of improving\ \ quality of life and prolonging survival. This care is best provided by multidisciplinary\ \ teams of healthcare professionals; attending a multidisciplinary ALS clinic\ \ is associated with longer survival, fewer hospitalizations, and improved quality\ \ of life. Riluzole prolongs survival by about 2–3 months. Edaravone slows functional\ \ decline slightly in a small number of people with ALS; it is expensive and must\ \ be administered by daily IV infusions that may decrease quality of life. Other\ \ medications may be used to manage other symptoms.\n\nNon-invasive ventilation\ \ (NIV) is the main treatment for respiratory failure in ALS. In people with normal\ \ bulbar function, it prolongs survival by about seven months and improves quality\ \ of life. One study found that NIV is ineffective for people with poor bulbar\ \ function while another suggested that it may provide a modest survival benefit.\ \ Many people with ALS have difficulty tolerating NIV. Invasive ventilation is\ \ an option for people with advanced ALS when NIV is not enough to manage their\ \ symptoms. While invasive ventilation prolongs survival, disease progression\ \ and functional decline continue. It may decrease the quality of life of people\ \ with ALS or their caregivers. Invasive ventilation is more commonly used in\ \ Japan than North America or Europe.\n\nPhysical therapy can promote functional\ \ independence through aerobic, range of motion, and stretching exercises. Occupational\ \ therapy can assist with activities of daily living through adaptive equipment.\ \ Speech therapy can assist people with ALS who have difficulty speaking. Preventing\ \ weight loss and malnutrition in people with ALS improves both survival and quality\ \ of life. Initially, difficulty swallowing (dysphagia) can be managed by dietary\ \ changes and swallowing techniques. A feeding tube should be considered if someone\ \ with ALS loses 5% or more of their body weight or if they cannot safely swallow\ \ food and water. The feeding tube is usually inserted by percutaneous endoscopic\ \ gastrostomy (PEG). There is weak evidence that PEG tubes improve survival. PEG\ \ insertion is usually performed with the intent of improving quality of life.\n\ \nPalliative care should begin shortly after someone is diagnosed with ALS. Discussion\ \ of end-of-life issues gives people with ALS time to reflect on their preferences\ \ for end-of-life care and can help avoid unwanted interventions or procedures.\ \ Hospice care can improve symptom management at the end of life and increases\ \ the likelihood of a peaceful death. In the final days of life, opioids can be\ \ used to treat pain and dyspnea, while benzodiazepines can be used to treat anxiety.\n\ \nMedications \n\nRiluzole has been found to modestly prolong survival by about\ \ 2–3 months. It may have a greater survival benefit for those with bulbar-onset\ \ ALS. It may work by decreasing release of the excitatory neurotransmitter glutamate\ \ from pre-synaptic neurons. The most common side effects are nausea and a lack\ \ of energy (asthenia). People with ALS should begin treatment with riluzole as\ \ soon as possible following their diagnosis.\n\nEdaravone has been shown to modestly\ \ slow the decline in function in a small group of people with early-stage ALS.\ \ It may work by protecting motor neurons from oxidative stress. The most common\ \ side effects are bruising and gait disturbance. Treatment with edaravone is\ \ expensive and requires daily hour-long IV infusions for 10 days in a two-week\ \ period.\n\nOther medications may be used to help reduce fatigue, ease muscle\ \ cramps, control spasticity, and reduce excess saliva and phlegm. Gabapentin,\ \ pregabalin, and tricyclic antidepressants (e.g., amitriptyline) can be used\ \ for neuropathic pain, while nonsteroidal anti-inflammatory drugs (NSAIDs), acetaminophen,\ \ and opioids can be used for nociceptive pain.\n\nDepression can be treated with\ \ selective serotonin reuptake inhibitors (SSRIs) or tricyclic antidepressants,\ \ while benzodiazepines can be used for anxiety. There are no medications to treat\ \ cognitive impairment/frontotemporal dementia (FTD); however, SSRIs and antipsychotics\ \ can help treat some of the symptoms of FTD. Baclofen and tizanidine are the\ \ most commonly used oral drugs for treating spasticity; an intrathecal baclofen\ \ pump can be used for severe spasticity. Atropine, scopolamine, amitriptyline\ \ or glycopyrrolate may be prescribed when people with ALS begin having trouble\ \ swallowing their saliva (sialorrhea).\n\nA 2017 review concluded that mexiletine\ \ was safe and effective for treating cramps in ALS based on a randomized controlled\ \ trial from 2016. In a study from 2020, AMX0035, a combination of sodium phenylbutyrate\ \ and taurursodiol, was shown to prolong the survival of patients by several months.\n\ \nBreathing support\n\nNon-invasive ventilation \n\nNon-invasive ventilation (NIV)\ \ is the primary treatment for respiratory failure in ALS and was the first treatment\ \ shown to improve both survival and quality of life. NIV uses a face or nasal\ \ mask connected to a ventilator that provides intermittent positive pressure\ \ to support breathing. Continuous positive pressure is not recommended for people\ \ with ALS because it makes breathing more difficult. Initially, NIV is used only\ \ at night because the first sign of respiratory failure is decreased gas exchange\ \ (hypoventilation) during sleep; symptoms associated with this nocturnal hypoventilation\ \ include interrupted sleep, anxiety, morning headaches, and daytime fatigue.\ \ As the disease progresses, people with ALS develop shortness of breath when\ \ lying down, during physical activity or talking, and eventually at rest. Other\ \ symptoms include poor concentration, poor memory, confusion, respiratory tract\ \ infections, and a weak cough. Respiratory failure is the most common cause of\ \ death in ALS.\n\nIt is important to monitor the respiratory function of people\ \ with ALS every three months, because beginning NIV soon after the start of respiratory\ \ symptoms is associated with increased survival. This involves asking the person\ \ with ALS if they have any respiratory symptoms and measuring their respiratory\ \ function. The most commonly used measurement is upright forced vital capacity\ \ (FVC), but it is a poor detector of early respiratory failure and is not a good\ \ choice for those with bulbar symptoms, as they have difficulty maintaining a\ \ tight seal around the mouthpiece. Measuring FVC while the person is lying on\ \ their back (supine FVC) is a more accurate measure of diaphragm weakness than\ \ upright FVC. Sniff nasal inspiratory pressure (SNIP) is a rapid, convenient\ \ test of diaphragm strength that is not affected by bulbar muscle weakness. If\ \ someone with ALS has signs and symptoms of respiratory failure, they should\ \ undergo daytime blood gas analysis to look for hypoxemia (low oxygen in the\ \ blood) and hypercapnia (too much carbon dioxide in the blood). If their daytime\ \ blood gas analysis is normal, they should then have nocturnal pulse oximetry\ \ to look for hypoxemia during sleep.\n\nNon-invasive ventilation prolongs survival\ \ longer than riluzole. A 2006 randomized controlled trial found that NIV prolongs\ \ survival by about 48 days and improves quality of life; however, it also found\ \ that some people with ALS benefit more from this intervention than others. For\ \ those with normal or only moderately impaired bulbar function, NIV prolongs\ \ survival by about seven months and significantly improves quality of life. For\ \ those with poor bulbar function, NIV neither prolongs survival nor improves\ \ quality of life, though it does improve some sleep-related symptoms. Despite\ \ the clear benefits of NIV, about 25–30% of all people with ALS are unable to\ \ tolerate it, especially those with cognitive impairment or bulbar dysfunction.\ \ Results from a large 2015 cohort study suggest that NIV may prolong survival\ \ in those with bulbar weakness, and so NIV should be offered to all people with\ \ ALS, even if it is likely that they will have difficulty tolerating it.\n\n\ Invasive ventilation \nInvasive ventilation bypasses the nose and mouth (the upper\ \ airways) by making a cut in the trachea (tracheostomy) and inserting a tube\ \ connected to a ventilator. It is an option for people with advanced ALS whose\ \ respiratory symptoms are poorly managed despite continuous NIV use. While invasive\ \ ventilation prolongs survival, especially for those younger than 60, it does\ \ not treat the underlying neurodegenerative process. The person with ALS will\ \ continue to lose motor function, making communication increasingly difficult\ \ and sometimes leading to locked-in syndrome, in which they are completely paralyzed\ \ except for their eye muscles. About half of the people with ALS who choose to\ \ undergo invasive ventilation report a decrease in their quality of life but\ \ most still consider it to be satisfactory. However, invasive ventilation imposes\ \ a heavy burden on caregivers and may decrease their quality of life. Attitudes\ \ toward invasive ventilation vary from country to country; about 30% of people\ \ with ALS in Japan choose invasive ventilation, versus less than 5% in North\ \ America and Europe.\n\nTherapy\n\nPhysical therapy plays a large role in rehabilitation\ \ for individuals with ALS. Specifically, physical, occupational, and speech therapists\ \ can set goals and promote benefits for individuals with ALS by delaying loss\ \ of strength, maintaining endurance, limiting pain, improving speech and swallowing,\ \ preventing complications, and promoting functional independence.\n\nOccupational\ \ therapy and special equipment such as assistive technology can also enhance\ \ people's independence and safety throughout the course of ALS. Gentle, low-impact\ \ aerobic exercise such as performing activities of daily living, walking, swimming,\ \ and stationary bicycling can strengthen unaffected muscles, improve cardiovascular\ \ health, and help people fight fatigue and depression. Range of motion and stretching\ \ exercises can help prevent painful spasticity and shortening (contracture) of\ \ muscles. Physical and occupational therapists can recommend exercises that provide\ \ these benefits without overworking muscles, because muscle exhaustion can lead\ \ to worsening of symptoms associated with ALS, rather than providing help to\ \ people with ALS. They can suggest devices such as ramps, braces, walkers, bathroom\ \ equipment (shower chairs, toilet risers, etc.), and wheelchairs that help people\ \ remain mobile. Occupational therapists can provide or recommend equipment and\ \ adaptations to enable ALS people to retain as much safety and independence in\ \ activities of daily living as possible. Since respiratory insufficiency is the\ \ primary cause of mortality, physical therapists can help improve respiratory\ \ outcomes in people with ALS by implementing pulmonary physical therapy. This\ \ includes inspiratory muscle training, lung volume recruitment training, and\ \ manual assisted cough therapy aimed at increasing respiratory muscle strength\ \ as well as increasing survival rates.\n\nPeople with ALS who have difficulty\ \ speaking or swallowing may benefit from working with a speech-language pathologist.\ \ These health professionals can teach people adaptive strategies such as techniques\ \ to help them speak louder and more clearly. As ALS progresses, speech-language\ \ pathologists can recommend the use of augmentative and alternative communication\ \ such as voice amplifiers, speech-generating devices (or voice output communication\ \ devices) or low-tech communication techniques such as head mounted laser pointers,\ \ alphabet boards or yes/no signals.\n\nNutrition\n\nPreventing weight loss and\ \ malnutrition in people with ALS improves both survival and quality of life.\ \ Weight loss in ALS is caused by muscle wasting due to motor neuron death, increased\ \ resting energy expenditure, and decreased food intake. Difficulty swallowing\ \ (dysphagia) develops in about 85% of people with ALS at some point over the\ \ course of their disease and is a major cause of decreased food intake, leading\ \ to malnutrition and weight loss. It is important to regularly assess the weight\ \ and swallowing ability of people with ALS. Initially, dysphagia may be managed\ \ by dietary changes and modified swallowing techniques. Difficulty swallowing\ \ liquids usually develops first and can be managed by switching to thicker liquids\ \ like fruit nectar or smoothies, or by adding fluid thickeners to thin fluids\ \ like water and coffee. People with ALS should eat soft, moist foods, which tend\ \ to be easier to swallow than dry, crumbly, or chewy foods. They should also\ \ be instructed on proper head posture during swallowing, which can make swallowing\ \ easier. There is tentative evidence that high-calorie diets may prevent further\ \ weight loss and improve survival.\n\nA feeding tube should be considered if\ \ someone with ALS loses 5% or more of their body weight or if they cannot safely\ \ swallow food and water. This can take the form of a gastrostomy tube, in which\ \ a tube is placed through the wall of the abdomen into the stomach, or a nasogastric\ \ tube, in which a tube is placed through the nose and down the esophagus into\ \ the stomach. A gastrostomy tube is more appropriate for long-term use than a\ \ nasogastric tube, which is uncomfortable and can cause esophageal ulcers. The\ \ feeding tube is usually inserted by percutaneous endoscopic gastrostomy (PEG).\ \ There is some evidence that a PEG tube should be inserted before vital capacity\ \ drops below 50% of expected, as a low vital capacity may be associated with\ \ a higher risk of complications. However, a large 2015 study showed that PEG\ \ insertion is safe in people with advanced ALS and low vital capacities, as long\ \ as they are on NIV during the procedure.\n\nThere is weak evidence that PEG\ \ tubes improve survival. PEG insertion is usually performed with the intent of\ \ improving quality of life by sustaining nutrition and medication intake. This\ \ reduces the risk of weight loss and dehydration, and can decrease anxiety from\ \ extended mealtimes and decreased oral food intake.\n\nEnd-of-life care\nPalliative\ \ care, which relieves symptoms and improves quality of life without treating\ \ the underlying disease, should begin shortly after someone is diagnosed with\ \ ALS. Early discussion of end-of-life issues gives people with ALS time to reflect\ \ on their preferences for end-of-life care and can help avoid unwanted interventions\ \ or procedures. Once they have been fully informed about all aspects of various\ \ life-prolonging measures, they can fill out advance directives indicating their\ \ attitude toward noninvasive ventilation, invasive ventilation, and feeding tubes.\ \ Late in the disease course, difficulty speaking due to muscle weakness (dysarthria)\ \ and cognitive dysfunction may impair their ability to communicate their wishes\ \ regarding care. Continued failure to solicit the preferences of the person with\ \ ALS may lead to unplanned and potentially unwanted emergency interventions,\ \ such as invasive ventilation. If people with ALS or their family members are\ \ reluctant to discuss end-of-life issues, it may be useful to use the introduction\ \ of gastrostomy or noninvasive ventilation as an opportunity to bring up the\ \ subject.\n\nHospice care, or palliative care at the end of life, is especially\ \ important in ALS because it helps to optimize the management of symptoms and\ \ increases the likelihood of a peaceful death. It is unclear exactly when the\ \ end-of-life phase begins in ALS, but it is associated with significant difficulty\ \ moving, communicating, and, in some cases, thinking. Although many people with\ \ ALS fear choking to death (suffocating), they can be reassured that this occurs\ \ rarely, about 0–3% of the time. About 90% of people with ALS die peacefully.\ \ In the final days of life, opioids can be used to treat pain and dyspnea, while\ \ benzodiazepines can be used to treat anxiety.\n\nEpidemiology\nALS is the most\ \ common motor neuron disease in adults and the third most common neurodegenerative\ \ disease after Alzheimer's disease and Parkinson's disease. Worldwide the number\ \ of people who develop ALS yearly is estimated to be 1.9 people per 100,000 per\ \ year, while the number of people who have ALS at any given time is estimated\ \ to be about 4.5 people per 100,000. In Europe, the number of new cases a year\ \ is about 2.6 people per 100,000, while the number affected is 7–9 people per\ \ 100,000. The lifetime risk of developing ALS is 1:350 for European men and 1:400\ \ for European women. Men have a higher risk mainly because spinal-onset ALS is\ \ more common in men than women. The number of those with ALS in the United States\ \ in 2015 was 5.2 people per 100,000, and was higher in whites, males, and people\ \ over 60 years old. The number of new cases is about 0.8 people per 100,000 per\ \ year in east Asia and about 0.7 people per 100,000 per year in south Asia. About\ \ 80% of ALS epidemiology studies have been conducted in Europe and the United\ \ States, mostly in people of northern European descent. There is not enough information\ \ to determine the rates of ALS in much of the world, including Africa, parts\ \ of Asia, India, Russia, and South America. There are several geographic clusters\ \ in the Western Pacific where the prevalence of ALS was reported to be 50–100\ \ times higher than the rest of the world, including Guam, the Kii Peninsula of\ \ Japan, and Western New Guinea. The incidence in these areas has decreased since\ \ the 1960s; the cause remains unknown.\n\nPeople of all races and ethnic backgrounds\ \ may be affected by ALS, but it is more common in whites than in Africans, Asians,\ \ or Hispanics. In the United States in 2015, the prevalence of ALS in whites\ \ was 5.4 people per 100,000, while the prevalence in blacks was 2.3 people per\ \ 100,000. The Midwest had the highest prevalence of the four US Census regions\ \ with 5.5 people per 100,000, followed by the Northeast (5.1), the South (4.7),\ \ and the West (4.4). The Midwest and Northeast likely had a higher prevalence\ \ of ALS because they have a higher proportion of whites than the South and West.\ \ Ethnically mixed populations may be at a lower risk of developing ALS; a study\ \ in Cuba found that people of mixed ancestry were less likely to die from ALS\ \ than whites or blacks. There are also differences in the genetics of ALS between\ \ different ethnic groups; the most common ALS gene in Europe is C9orf72, followed\ \ by SOD1, TARDBP, and FUS, while the most common ALS gene in Asia is SOD1, followed\ \ by FUS, C9orf72, and TARDBP.\n\nALS can affect people at any age, but the peak\ \ incidence is between 50 and 75 years and decreases dramatically after 80 years.\ \ The reason for the decreased incidence in the elderly is unclear. One thought\ \ is that people who survive into their 80s may not be genetically susceptible\ \ to developing ALS; alternatively, ALS in the elderly might go undiagnosed because\ \ of comorbidities (other diseases they have), difficulty seeing a neurologist,\ \ or dying quickly from an aggressive form of ALS. In the United States in 2015,\ \ the lowest prevalence was in the 18–39 age group, while the highest prevalence\ \ was in the 70–79 age group. Sporadic ALS usually starts around the ages of 58\ \ to 63 years, while familial ALS starts earlier, usually around 47 to 52 years.\ \ The number of ALS cases worldwide is projected to increase from 222,801 in 2015\ \ to 376,674 in 2040, an increase of 69%. This will largely be due to the aging\ \ of the world's population, especially in developing countries.\n\nHistory\n\n\ Descriptions of the disease date back to at least 1824 by Charles Bell. In 1850,\ \ François-Amilcar Aran was the first to describe a disorder he named \"progressive\ \ muscular atrophy\", a form of ALS in which only the lower motor neurons are\ \ affected. In 1869, the connection between the symptoms and the underlying neurological\ \ problems were first described by Jean-Martin Charcot, who initially introduced\ \ the term amyotrophic lateral sclerosis in his 1874 paper. Flail arm syndrome,\ \ a regional variant of ALS, was first described by Alfred Vulpian in 1886. Flail\ \ leg syndrome, another regional variant of ALS, was first described by Pierre\ \ Marie and his student Patrikios in 1918.\n\nIn 1945, American naval doctors\ \ reported that ALS was 100 times more prevalent among the Chamorro people of\ \ Guam than in the rest of the world. In 1956 the variant of ALS endemic to Guam\ \ was named \"amyotrophic lateral sclerosis/parkinsonism dementia complex\" (ALS/PDC),\ \ as it had the typical symptoms of ALS accompanied by parkinsonism-like symptoms;\ \ the name in the local language is lytico-bodig disease. Despite a number of\ \ genetic and environmental studies, the cause of ALS/PDC remains unknown. Rates\ \ peaked in the early 1950s and steadily declined thereafter, and by 1985 the\ \ incidence of ALS/PDC in Guam was about the same as the rest of the world.\n\n\ The first gene to be associated with ALS was SOD1, which was identified in 1993.\ \ This led to the development of the first animal model of ALS, the transgenic\ \ SOD1 mouse, in 1994. In December 1995, riluzole became the first FDA-approved\ \ drug for ALS. It was then approved in Europe in 1996 and in Japan in 1998. In\ \ 1996, the ALS Functional Rating Scale (ALSFRS) was first published; it was a\ \ 10-item questionnaire that measured the ability of people with ALS to perform\ \ activities of daily living. In 1999, the scale was changed to give more weight\ \ to respiratory symptoms. The resulting ALS Functional Rating Scale - Revised\ \ (ALSFRS-R) is a 12-item questionnaire that replaces the single question about\ \ breathing with a question each about dyspnea, orthopnea, and respiratory insufficiency.\n\ \nIn 2006, it was discovered that the protein TDP-43 is a major component of the\ \ inclusion bodies seen in both ALS and frontotemporal dementia (FTD), which provided\ \ evidence that ALS and FTD are part of a common disease spectrum. This led to\ \ the discovery in 2008 that mutations in TARDBP, the gene that codes for TDP-43,\ \ are a cause of familial ALS. In 2011, noncoding repeat expansions in C9orf72\ \ were found to be a major cause of ALS and FTD. Edaravone was approved to treat\ \ ALS in Japan and South Korea in 2015 and in the United States in 2017. , it\ \ has not been approved to treat ALS in Europe.\n\nDiagnostic criteria\n\nIn the\ \ 1950s, electrodiagnostic testing (EMG and NCV) began to be used to evaluate\ \ clinically suspected ALS. In 1969 Edward H. Lambert published the first EMG/NCS\ \ diagnostic criteria for ALS, consisting of four findings he considered to strongly\ \ support the diagnosis. In 1990, the World Federation of Neurology (WFN) held\ \ a meeting at El Escorial, Spain, to come up with precise diagnostic criteria\ \ for ALS to help standardize clinical trials; the resulting \"El Escorial\" criteria\ \ were published in 1994. In 1998, the WFN held another meeting to revise the\ \ criteria at Airlie House in Warrenton, Virginia; the resulting \"Airlie House\"\ \ or \"El Escorial Revised\" criteria were published in 2000. In 2006, a meeting\ \ was held on Awaji Island in Japan to discuss how to use EMG and NCV tests to\ \ help diagnose ALS earlier; the resulting \"Awaji\" criteria were published in\ \ 2008.\n\nName\n\nAmyotrophic comes from Greek: a- means \"no\", myo (from mûs)\ \ refers to \"muscle\", and trophḗ means \"nourishment\". Therefore, amyotrophy\ \ means \"muscle malnourishment\" or the wasting of muscle tissue. Lateral identifies\ \ the areas in a person's spinal cord where the affected motor neurons that control\ \ muscle are located. Sclerosis means \"scarring\" or \"hardening\" and refers\ \ to the death of the motor neurons in the spinal cord.\n\nALS is sometimes referred\ \ to as Charcot's disease (not to be confused with Charcot–Marie–Tooth disease\ \ and Charcot joint disease), because Jean-Martin Charcot was the first to connect\ \ the clinical symptoms with the pathology seen at autopsy. The British neurologist\ \ Russell Brain coined the term motor neurone disease in 1933 to reflect his belief\ \ that ALS, progressive bulbar palsy, and progressive muscular atrophy were all\ \ different forms of the same disease, neurone being a historically incorrect\ \ form of neuron. In some countries, especially the United States, ALS is called\ \ Lou Gehrig's disease after the American baseball player Lou Gehrig who developed\ \ ALS in 1938.\n\nIn the United States and continental Europe, the term ALS (as\ \ well as Lou Gehrig's disease in the US) refers to all forms of the disease,\ \ including \"classical\" ALS, progressive bulbar palsy, progressive muscular\ \ atrophy, and primary lateral sclerosis. In the United Kingdom and Australia,\ \ the term motor neurone disease refers to all forms of the disease while ALS\ \ only refers to \"classical\" ALS, meaning the form with both upper and lower\ \ motor neuron involvement.\n\nSociety and culture \n\nIn August 2014, a challenge\ \ went viral online, commonly known as the \"ALS Ice Bucket Challenge\". Contestants\ \ fill a bucket full of ice and water, then state who nominated them to do the\ \ challenge, and nominate three other individuals of their choice to take part\ \ in it. The contestants then dump the buckets of ice and water onto themselves.\ \ However, it can be done in a different order. The contestants then donate at\ \ least US$10 (or a similar amount in their local currency) to ALS research at\ \ the ALS Association, the ALS Therapy Development Institute, ALS Society of Canada\ \ or Motor Neurone Disease Association in the UK. Any contestants who refuse to\ \ have the ice and water dumped on them are expected to donate at least US$100\ \ to ALS research. , the Ice Bucket Challenge had raised $115 million for the\ \ ALS Association. Many celebrities have taken part in the challenge. The Ice\ \ Bucket Challenge was credited with helping to raise funds that contributed to\ \ the discovery that the gene NEK1 may potentially contribute to the development\ \ for ALS.\n\nResearch\n\nModel organisms\n\nMany different organisms are used\ \ as models for studying ALS, including Saccharomyces cerevisiae (a species of\ \ yeast), Caenorhabditis elegans (a roundworm), Drosophila melanogaster (the common\ \ fruit fly), Danio rerio (the zebrafish), Mus musculus (the house mouse), and\ \ Rattus norvegicus (the common rat). None of these models perfectly represents\ \ ALS in humans, partly because most animal models are based on gene overexpression,\ \ meaning that multiple copies of the mutant human gene are inserted into the\ \ transgenic model, and partly because the human nervous system is very different\ \ from that of other animals.\n\nThe first animal model for ALS was the SOD1G93A\ \ transgenic mouse, which was developed in 1994. It expresses about 20–24 copies\ \ of the mutant human SOD1 gene and reproduces most of the clinical and pathological\ \ findings seen in ALS. Although there are now over 20 different SOD1 mouse models,\ \ the SOD1G93A model remains both the most widely used SOD1 mouse model and the\ \ most widely used ALS mouse model overall. Much of the present understanding\ \ of ALS pathophysiology came from studying mouse models that overexpress mutant\ \ SOD1, especially SOD1G93A mice. However, many drug targets that were shown to\ \ be effective in the SOD1G93A transgenic mouse failed in clinical trials in humans;\ \ other SOD1 models have had similar problems. Most of these drugs were identified\ \ as potentially effective based on a single study in a rodent SOD1 model and\ \ then failed in clinical trials in patients who primarily had sporadic ALS. It\ \ is thought that these clinical trials failed because SOD1 mutations account\ \ for only 2% of all ALS cases and because the pathology of SOD1 ALS is thought\ \ to be distinct from all other types of ALS; it lacks the abnormal aggregations\ \ of TDP-43 protein or FUS protein seen in nearly all other cases of ALS.\n\n\ As of 2018, there are about 20 TARDBP mouse models, a dozen FUS mouse models,\ \ and a number of C9orf72, PFN1, and UBQLN2 mouse models. There are also new methods\ \ of developing animal models, including viral transgenesis, in which viruses\ \ are used to deliver mutant genes to an animal model, and CRISPR/Cas9, which\ \ can be used to give an animal model multiple mutated genes. Both of these methods\ \ are faster and cheaper than traditional methods of genetically engineering mice;\ \ they also allow scientists to study the effects of a mutation in mice of different\ \ genetic backgrounds, which better represents the genetic diversity seen in humans.\n\ \nCellular models used to study ALS include the yeast Saccharomyces cerevisiae\ \ and rat or mouse motor neurons in culture. Small-animal models include the fruit\ \ fly, the roundworm C. elegans, and the zebrafish. Of the three, the fruit fly\ \ is the most widely used; it has a rapid life-cycle, short lifespan, a sophisticated\ \ nervous system, and many genetic tools available. C. elegans has a short life-cycle,\ \ is easy to manipulate genetically, and has a simple but well-understood nervous\ \ system. The zebrafish has transparent embryos that can be injected with DNA\ \ or RNA and has a lifespan of up to two years. Induced pluripotent stem cells\ \ (iPSCs) can be used to convert skin fibroblasts into motor neurons. It is now\ \ possible to generate iPSCs from people with ALS, which can then be converted\ \ into spinal motor neurons, which are useful for studying disease mechanisms\ \ and for testing potential drugs for ALS. iPSCs allow sporadic ALS to be modelled,\ \ which cannot be done with animal models.\n\nTreatments\nFrom the 1960s until\ \ 2014, about 50 drugs for ALS were tested in randomized controlled trials (RCTs);\ \ of these, riluzole was the only one that showed a slight benefit in improving\ \ survival. Drugs tested and not shown to be effective in clinical trials in humans\ \ include antiviral drugs, anti-excitotoxic drugs, growth factors, neurotrophic\ \ factors, anti-inflammatory drugs, antioxidants, anti-apoptotic drugs, and drugs\ \ to improve mitochondria function.\n\nAn analysis of 23 large phase II and phase\ \ III RCTs that failed between 2004 and 2014 concluded that there were many potential\ \ reasons for their lack of success. These trials in humans went ahead on the\ \ basis of positive results in SOD1 transgenic mice, which are not a good animal\ \ model for sporadic ALS. Additionally, in most preclinical studies the SOD1 mice\ \ were given the drug during the presymptomatic stage; this makes the results\ \ less likely to apply to people with ALS, who begin treatment well after their\ \ symptoms begin. Positive results in small phase II studies in humans could also\ \ be misleading and lead to failure in phase III trials. Other potential issues\ \ included the drug not reaching its intended site of action in the central nervous\ \ system and drug interactions between the study drug and riluzole.\n\nRepetitive\ \ transcranial magnetic stimulation had been studied in ALS in small and poorly\ \ designed clinical trials; , evidence was insufficient to know whether rTMS is\ \ safe or effective for ALS. One 2016 review of stem-cell therapy trials found\ \ tentative evidence that intraspinal stem cell implantation was relatively safe\ \ and possibly effective. A 2019 Cochrane review of cell-based therapies found\ \ that there was insufficient evidence to speculate about efficacy. Masitinib\ \ has been approved as an orphan drug in Europe and the United States, with studies\ \ ongoing . Beta-adrenergic agonist drugs have been proposed as a treatment for\ \ their effects on muscle growth and neuroprotection, but research in humans is\ \ insufficient to determine their efficacy.\n\nCause \nWith the discovery that\ \ TDP-43, FUS, and C9orf72 can cause ALS as well as related forms of frontotemporal\ \ dementia (FTD/ALS) there has been intense effort to understand how these mutations\ \ cause disease, and whether other protein dysfunction may be important. it appeared\ \ that differences in the methylation of arginine residues in FUS protein may\ \ be relevant, and methylation status may be a way to distinguish some forms of\ \ FTD from ALS.\n\nSee also\n Lou Gehrig\n Transportin 1\n\nNotes\n\nReferences\n\ \nExternal links \n\n Official website\n\n \n\n \nMotor neuron diseases\nRare\ \ diseases\nUnsolved problems in neuroscience\nSystemic atrophies primarily affecting\ \ the central nervous system\nCytoskeletal defects\nWikipedia medicine articles\ \ ready to translate\nNeuromuscular disorders\nWikipedia neurology articles ready\ \ to translate" - "Water (chemical formula H2O) is an inorganic, transparent, tasteless, odorless,\ \ and nearly colorless chemical substance, which is the main constituent of Earth's\ \ hydrosphere and the fluids of all known living organisms (in which it acts as\ \ a solvent). It is vital for all known forms of life, even though it provides\ \ no calories or organic nutrients. Its chemical formula, H2O, indicates that\ \ each of its molecules contains one oxygen and two hydrogen atoms, connected\ \ by covalent bonds. The hydrogen atoms are attached to the oxygen atom at an\ \ angle of 104.45°. \"Water\" is the name of the liquid state of H2O at standard\ \ conditions for temperature and pressure. \n\nA number of natural states of water\ \ exist. It forms precipitation in the form of rain and aerosols in the form of\ \ fog. Clouds consist of suspended droplets of water and ice, its solid state.\ \ When finely divided, crystalline ice may precipitate in the form of snow. The\ \ gaseous state of water is steam or water vapor.\n\nWater covers approximately\ \ 70.9% of the Earth's surface, mostly in seas and oceans. Small portions of water\ \ occur as groundwater (1.7%), in the glaciers and the ice caps of Antarctica\ \ and Greenland (1.7%), and in the air as vapor, clouds (consisting of ice and\ \ liquid water suspended in air), and precipitation (0.001%). Water moves continually\ \ through the water cycle of evaporation, transpiration (evapotranspiration),\ \ condensation, precipitation, and runoff, usually reaching the sea.\n\nWater\ \ plays an important role in the world economy. Approximately 70% of the freshwater\ \ used by humans goes to agriculture. Fishing in salt and fresh water bodies is\ \ a major source of food for many parts of the world. Much of the long-distance\ \ trade of commodities (such as oil, natural gas, and manufactured products) is\ \ transported by boats through seas, rivers, lakes, and canals. Large quantities\ \ of water, ice, and steam are used for cooling and heating, in industry and homes.\ \ Water is an excellent solvent for a wide variety of substances both mineral\ \ and organic; as such it is widely used in industrial processes, and in cooking\ \ and washing. Water, ice and snow are also central to many sports and other forms\ \ of entertainment, such as swimming, pleasure boating, boat racing, surfing,\ \ sport fishing, diving, ice skating and skiing.\n\nEtymology\nThe word water\ \ comes from Old English , from Proto-Germanic *watar (source also of Old Saxon\ \ , Old Frisian , Dutch , Old High German , German , , Gothic (), from Proto-Indo-European\ \ *wod-or, suffixed form of root *wed- (\"water\"; \"wet\"). Also cognate, through\ \ the Indo-European root, with Greek (), Russian (), Irish , and Albanian .\n\ \nHistory\n\nProperties\n\nWater () is a polar inorganic compound that is at room\ \ temperature a tasteless and odorless liquid, nearly colorless with a hint of\ \ blue. This simplest hydrogen chalcogenide is by far the most studied chemical\ \ compound and is described as the \"universal solvent\" for its ability to dissolve\ \ many substances. This allows it to be the \"solvent of life\": indeed, water\ \ as found in nature almost always includes various dissolved substances, and\ \ special steps are required to obtain chemically pure water. Water is the only\ \ common substance to exist as a solid, liquid, and gas in normal terrestrial\ \ conditions.\n\nStates\n\nAlong with oxidane, water is one of the two official\ \ names for the chemical compound ; it is also the liquid phase of . The other\ \ two common states of matter of water are the solid phase, ice, and the gaseous\ \ phase, water vapor or steam. The addition or removal of heat can cause phase\ \ transitions: freezing (water to ice), melting (ice to water), vaporization (water\ \ to vapor), condensation (vapor to water), sublimation (ice to vapor) and deposition\ \ (vapor to ice).\n\nDensity \nWater differs from most liquids in that it becomes\ \ less dense as it freezes. In 1 atm pressure, it reaches its maximum density\ \ of at . The density of ice is , an expansion of 9%. This expansion can exert\ \ enormous pressure, bursting pipes and cracking rocks (see Frost weathering).\n\ \nIn a lake or ocean, water at 4 °C (39.2 °F) sinks to the bottom, and ice forms\ \ on the surface, floating on the liquid water. This ice insulates the water below,\ \ preventing it from freezing solid. Without this protection, most aquatic organisms\ \ would perish during the winter.\n\nMagnetism \nWater is a diamagnetic material.\ \ Though interaction is weak, with superconducting magnets it can attain a notable\ \ interaction.\n\nPhase transitions \nAt a pressure of one atmosphere (atm), ice\ \ melts or water freezes at 0 °C (32 °F) and water boils or vapor condenses at\ \ 100 °C (212 °F). However, even below the boiling point, water can change to\ \ vapor at its surface by evaporation (vaporization throughout the liquid is known\ \ as boiling). Sublimation and deposition also occur on surfaces. For example,\ \ frost is deposited on cold surfaces while snowflakes form by deposition on an\ \ aerosol particle or ice nucleus. In the process of freeze-drying, a food is\ \ frozen and then stored at low pressure so the ice on its surface sublimates.\n\ \nThe melting and boiling points depend on pressure. A good approximation for\ \ the rate of change of the melting temperature with pressure is given by the\ \ Clausius–Clapeyron relation:\n\nwhere and are the molar volumes of the liquid\ \ and solid phases, and is the molar latent heat of melting. In most substances,\ \ the volume increases when melting occurs, so the melting temperature increases\ \ with pressure. However, because ice is less dense than water, the melting temperature\ \ decreases. In glaciers, pressure melting can occur under sufficiently thick\ \ volumes of ice, resulting in subglacial lakes.\n\nThe Clausius-Clapeyron relation\ \ also applies to the boiling point, but with the liquid/gas transition the vapor\ \ phase has a much lower density than the liquid phase, so the boiling point increases\ \ with pressure. Water can remain in a liquid state at high temperatures in the\ \ deep ocean or underground. For example, temperatures exceed in Old Faithful,\ \ a geyser in Yellowstone National Park. In hydrothermal vents, the temperature\ \ can exceed .\n\nAt sea level, the boiling point of water is . As atmospheric\ \ pressure decreases with altitude, the boiling point decreases by 1 °C every\ \ 274 meters. High-altitude cooking takes longer than sea-level cooking. For example,\ \ at , cooking time must be increased by a fourth to achieve the desired result.\ \ (Conversely, a pressure cooker can be used to decrease cooking times by raising\ \ the boiling temperature.)\nIn a vacuum, water will boil at room temperature.\n\ \nTriple and critical points \n\nOn a pressure/temperature phase diagram (see\ \ figure), there are curves separating solid from vapor, vapor from liquid, and\ \ liquid from solid. These meet at a single point called the triple point, where\ \ all three phases can coexist. The triple point is at a temperature of and a\ \ pressure of ; it is the lowest pressure at which liquid water can exist. Until\ \ 2019, the triple point was used to define the Kelvin temperature scale.\n\n\ The water/vapor phase curve terminates at and . This is known as the critical\ \ point. At higher temperatures and pressures the liquid and vapor phases form\ \ a continuous phase called a supercritical fluid. It can be gradually compressed\ \ or expanded between gas-like and liquid-like densities, its properties (which\ \ are quite different from those of ambient water) are sensitive to density. For\ \ example, for suitable pressures and temperatures it can mix freely with nonpolar\ \ compounds, including most organic compounds. This makes it useful in a variety\ \ of applications including high-temperature electrochemistry and as an ecologically\ \ benign solvent or catalyst in chemical reactions involving organic compounds.\ \ In Earth's mantle, it acts as a solvent during mineral formation, dissolution\ \ and deposition.\n\nPhases of ice and water \nThe normal form of ice on the surface\ \ of Earth is Ice Ih, a phase that forms crystals with hexagonal symmetry. Another\ \ with cubic crystalline symmetry, Ice Ic, can occur in the upper atmosphere.\ \ As the pressure increases, ice forms other crystal structures. As of 2019, 17\ \ have been experimentally confirmed and several more are predicted theoretically.\ \ The 18th form of ice, ice XVIII, a face-centred-cubic, superionic ice phase,\ \ was discovered when a droplet of water was subject to a shock wave that raised\ \ the water’s pressure to millions of atmospheres and its temperature to thousands\ \ of degrees, resulting in a structure of rigid oxygen atoms in which hydrogen\ \ atoms flowed freely. When sandwiched between layers of graphene, ice forms a\ \ square lattice.\n\nThe details of the chemical nature of liquid water are not\ \ well understood; some theories suggest that its unusual behaviour is due to\ \ the existence of 2 liquid states.\n\nTaste and odor\nPure water is usually described\ \ as tasteless and odorless, although humans have specific sensors that can feel\ \ the presence of water in their mouths, and frogs are known to be able to smell\ \ it. However, water from ordinary sources (including bottled mineral water) usually\ \ has many dissolved substances, that may give it varying tastes and odors. Humans\ \ and other animals have developed senses that enable them to evaluate the potability\ \ of water by avoiding water that is too salty or putrid.\n\nColor and appearance\n\ \nPure water is visibly blue due to absorption of light in the region ca. 600 nm\ \ – 800 nm. The color can be easily observed in a glass of tap-water placed against\ \ a pure white background, in daylight. The principal absorption bands responsible\ \ for the color are overtones of the O–H stretching vibrations. The apparent intensity\ \ of the color increases with the depth of the water column, following Beer's\ \ law. This also applies, for example, with a swimming pool when the light source\ \ is sunlight reflected from the pool's white tiles.\n \nIn nature, the color\ \ may also be modified from blue to green due to the presence of suspended solids\ \ or algae.\n\nIn industry, near-infrared spectroscopy is used with aqueous solutions\ \ as the greater intensity of the lower overtones of water means that glass cuvettes\ \ with short path-length may be employed. To observe the fundamental stretching\ \ absorption spectrum of water or of an aqueous solution in the region around\ \ 3500 cm−1 (2.85 μm) a path length of about 25 μm is needed. Also, the cuvette\ \ must be both transparent around 3500 cm−1 and insoluble in water; calcium fluoride\ \ is one material that is in common use for the cuvette windows with aqueous solutions.\n\ \ \nThe Raman-active fundamental vibrations may be observed with, for example,\ \ a 1 cm sample cell.\n\nAquatic plants, algae, and other photosynthetic organisms\ \ can live in water up to hundreds of meters deep, because sunlight can reach\ \ them.\nPractically no sunlight reaches the parts of the oceans below of depth.\n\ \nThe refractive index of liquid water (1.333 at ) is much higher than that of\ \ air (1.0), similar to those of alkanes and ethanol, but lower than those of\ \ glycerol (1.473), benzene (1.501), carbon disulfide (1.627), and common types\ \ of glass (1.4 to 1.6). The refraction index of ice (1.31) is lower than that\ \ of liquid water.\n\nPolar molecule \n\nIn a water molecule, the hydrogen atoms\ \ form a 104.5° angle with the oxygen atom. The hydrogen atoms are close to two\ \ corners of a tetrahedron centered on the oxygen. At the other two corners are\ \ lone pairs of valence electrons that do not participate in the bonding. In a\ \ perfect tetrahedron, the atoms would form a 109.5° angle, but the repulsion\ \ between the lone pairs is greater than the repulsion between the hydrogen atoms.\ \ The O–H bond length is about 0.096 nm.\n\nOther substances have a tetrahedral\ \ molecular structure, for example, methane () and hydrogen sulfide (). However,\ \ oxygen is more electronegative (holds on to its electrons more tightly) than\ \ most other elements, so the oxygen atom retains a negative charge while the\ \ hydrogen atoms are positively charged. Along with the bent structure, this gives\ \ the molecule an electrical dipole moment and it is classified as a polar molecule.\n\ \nWater is a good polar solvent, that dissolves many salts and hydrophilic organic\ \ molecules such as sugars and simple alcohols such as ethanol. Water also dissolves\ \ many gases, such as oxygen and carbon dioxide—the latter giving the fizz of\ \ carbonated beverages, sparkling wines and beers. In addition, many substances\ \ in living organisms, such as proteins, DNA and polysaccharides, are dissolved\ \ in water. The interactions between water and the subunits of these biomacromolecules\ \ shape protein folding, DNA base pairing, and other phenomena crucial to life\ \ (hydrophobic effect).\n\nMany organic substances (such as fats and oils and\ \ alkanes) are hydrophobic, that is, insoluble in water. Many inorganic substances\ \ are insoluble too, including most metal oxides, sulfides, and silicates.\n\n\ Hydrogen bonding\n\nBecause of its polarity, a molecule of water in the liquid\ \ or solid state can form up to four hydrogen bonds with neighboring molecules.\ \ Hydrogen bonds are about ten times as strong as the Van der Waals force that\ \ attracts molecules to each other in most liquids. This is the reason why the\ \ melting and boiling points of water are much higher than those of other analogous\ \ compounds like hydrogen sulfide. They also explain its exceptionally high specific\ \ heat capacity (about 4.2 J/g/K), heat of fusion (about 333 J/g), heat of vaporization\ \ (), and thermal conductivity (between 0.561 and 0.679 W/m/K). These properties\ \ make water more effective at moderating Earth's climate, by storing heat and\ \ transporting it between the oceans and the atmosphere. The hydrogen bonds of\ \ water are around 23 kJ/mol (compared to a covalent O-H bond at 492 kJ/mol).\ \ Of this, it is estimated that 90% is attributable to electrostatics, while the\ \ remaining 10% is partially covalent.\n\nThese bonds are the cause of water's\ \ high surface tension and capillary forces. The capillary action refers to the\ \ tendency of water to move up a narrow tube against the force of gravity. This\ \ property is relied upon by all vascular plants, such as trees.\n\nSelf-ionisation\n\ \nWater is a weak solution of hydronium hydroxide - there is an equilibrium ⇔\ \ + , in combination with solvation of the resulting hydronium ions.\n\nElectrical\ \ conductivity and electrolysis\nPure water has a low electrical conductivity,\ \ which increases with the dissolution of a small amount of ionic material such\ \ as common salt.\n\nLiquid water can be split into the elements hydrogen and\ \ oxygen by passing an electric current through it—a process called electrolysis.\ \ The decomposition requires more energy input than the heat released by the inverse\ \ process (285.8 kJ/mol, or 15.9 MJ/kg).\n\nMechanical properties\nLiquid water\ \ can be assumed to be incompressible for most purposes: its compressibility ranges\ \ from 4.4 to in ordinary conditions. Even in oceans at 4 km depth, where the\ \ pressure is 400 atm, water suffers only a 1.8% decrease in volume.\n\nThe viscosity\ \ of water is about 10−3 Pa·s or 0.01 poise at , and the speed of sound in liquid\ \ water ranges between depending on temperature. Sound travels long distances\ \ in water with little attenuation, especially at low frequencies (roughly 0.03\ \ dB/km for 1 kHz), a property that is exploited by cetaceans and humans for communication\ \ and environment sensing (sonar).\n\nReactivity\nMetallic elements which are\ \ more electropositive than hydrogen, particularly the alkali metals and alkaline\ \ earth metals such as lithium, sodium, calcium, potassium and cesium displace\ \ hydrogen from water, forming hydroxides and releasing hydrogen. At high temperatures,\ \ carbon reacts with steam to form carbon monoxide and hydrogen.\n\nOn Earth\n\ \nHydrology is the study of the movement, distribution, and quality of water throughout\ \ the Earth. The study of the distribution of water is hydrography. The study\ \ of the distribution and movement of groundwater is hydrogeology, of glaciers\ \ is glaciology, of inland waters is limnology and distribution of oceans is oceanography.\ \ Ecological processes with hydrology are in the focus of ecohydrology.\n\nThe\ \ collective mass of water found on, under, and over the surface of a planet is\ \ called the hydrosphere. Earth's approximate water volume (the total water supply\ \ of the world) is 1.386 × 109 cubic kilometers (3.33 × 108 cubic miles).\n\n\ Liquid water is found in bodies of water, such as an ocean, sea, lake, river,\ \ stream, canal, pond, or puddle. The majority of water on Earth is sea water.\ \ Water is also present in the atmosphere in solid, liquid, and vapor states.\ \ It also exists as groundwater in aquifers.\n\nWater is important in many geological\ \ processes. Groundwater is present in most rocks, and the pressure of this groundwater\ \ affects patterns of faulting. Water in the mantle is responsible for the melt\ \ that produces volcanoes at subduction zones. On the surface of the Earth, water\ \ is important in both chemical and physical weathering processes. Water, and\ \ to a lesser but still significant extent, ice, are also responsible for a large\ \ amount of sediment transport that occurs on the surface of the earth. Deposition\ \ of transported sediment forms many types of sedimentary rocks, which make up\ \ the geologic record of Earth history.\n\nWater cycle\n\nThe water cycle (known\ \ scientifically as the hydrologic cycle) refers to the continuous exchange of\ \ water within the hydrosphere, between the atmosphere, soil water, surface water,\ \ groundwater, and plants.\n\nWater moves perpetually through each of these regions\ \ in the water cycle consisting of the following transfer processes:\n evaporation\ \ from oceans and other water bodies into the air and transpiration from land\ \ plants and animals into the air.\n precipitation, from water vapor condensing\ \ from the air and falling to the earth or ocean.\n runoff from the land usually\ \ reaching the sea.\nMost water vapors found mostly in the ocean returns to it,\ \ but winds carry water vapor over land at the same rate as runoff into the sea,\ \ about 47 Tt per year whilst evaporation and transpiration happening in land\ \ masses also contribute another 72 Tt per year. Precipitation, at a rate of 119\ \ Tt per year over land, has several forms: most commonly rain, snow, and hail,\ \ with some contribution from fog and dew. Dew is small drops of water that are\ \ condensed when a high density of water vapor meets a cool surface. Dew usually\ \ forms in the morning when the temperature is the lowest, just before sunrise\ \ and when the temperature of the earth's surface starts to increase. Condensed\ \ water in the air may also refract sunlight to produce rainbows.\n\nWater runoff\ \ often collects over watersheds flowing into rivers. A mathematical model used\ \ to simulate river or stream flow and calculate water quality parameters is a\ \ hydrological transport model. Some water is diverted to irrigation for agriculture.\ \ Rivers and seas offer opportunities for travel and commerce. Through erosion,\ \ runoff shapes the environment creating river valleys and deltas which provide\ \ rich soil and level ground for the establishment of population centers. A flood\ \ occurs when an area of land, usually low-lying, is covered with water which\ \ occurs when a river overflows its banks or a storm surge happens. On the other\ \ hand, drought is an extended period of months or years when a region notes a\ \ deficiency in its water supply. This occurs when a region receives consistently\ \ below average precipitation either due to its topography or due to its location\ \ in terms of latitude.\n\nWater resources\n\nWater occurs as both \"stocks\"\ \ and \"flows\". Water can be stored as lakes, water vapor, groundwater or aquifers,\ \ and ice and snow. Of the total volume of global freshwater, an estimated 69\ \ percent is stored in glaciers and permanent snow cover; 30 percent is in groundwater;\ \ and the remaining 1 percent in lakes, rivers, the atmosphere, and biota. The\ \ length of time water remains in storage is highly variable: some aquifers consist\ \ of water stored over thousands of years but lake volumes may fluctuate on a\ \ seasonal basis, decreasing during dry periods and increasing during wet ones.\ \ A substantial fraction of the water supply for some regions consists of water\ \ extracted from water stored in stocks, and when withdrawals exceed recharge,\ \ stocks decrease. By some estimates, as much as 30 percent of total water used\ \ for irrigation comes from unsustainable withdrawals of groundwater, causing\ \ groundwater depletion.\n\nSea water and tides\n\nSea water contains about 3.5%\ \ sodium chloride on average, plus smaller amounts of other substances. The physical\ \ properties of seawater differ from fresh water in some important respects. It\ \ freezes at a lower temperature (about ) and its density increases with decreasing\ \ temperature to the freezing point, instead of reaching maximum density at a\ \ temperature above freezing. The salinity of water in major seas varies from\ \ about 0.7% in the Baltic Sea to 4.0% in the Red Sea. (The Dead Sea, known for\ \ its ultra-high salinity levels of between 30–40%, is really a salt lake.)\n\n\ Tides are the cyclic rising and falling of local sea levels caused by the tidal\ \ forces of the Moon and the Sun acting on the oceans. Tides cause changes in\ \ the depth of the marine and estuarine water bodies and produce oscillating currents\ \ known as tidal streams. The changing tide produced at a given location is the\ \ result of the changing positions of the Moon and Sun relative to the Earth coupled\ \ with the effects of Earth rotation and the local bathymetry. The strip of seashore\ \ that is submerged at high tide and exposed at low tide, the intertidal zone,\ \ is an important ecological product of ocean tides.\n\nEffects on life\n\nFrom\ \ a biological standpoint, water has many distinct properties that are critical\ \ for the proliferation of life. It carries out this role by allowing organic\ \ compounds to react in ways that ultimately allow replication. All known forms\ \ of life depend on water. Water is vital both as a solvent in which many of the\ \ body's solutes dissolve and as an essential part of many metabolic processes\ \ within the body. Metabolism is the sum total of anabolism and catabolism. In\ \ anabolism, water is removed from molecules (through energy requiring enzymatic\ \ chemical reactions) in order to grow larger molecules (e.g., starches, triglycerides,\ \ and proteins for storage of fuels and information). In catabolism, water is\ \ used to break bonds in order to generate smaller molecules (e.g., glucose, fatty\ \ acids, and amino acids to be used for fuels for energy use or other purposes).\ \ Without water, these particular metabolic processes could not exist.\n\nWater\ \ is fundamental to photosynthesis and respiration. Photosynthetic cells use the\ \ sun's energy to split off water's hydrogen from oxygen. Hydrogen is combined\ \ with CO2 (absorbed from air or water) to form glucose and release oxygen. All\ \ living cells use such fuels and oxidize the hydrogen and carbon to capture the\ \ sun's energy and reform water and CO2 in the process (cellular respiration).\n\ \nWater is also central to acid-base neutrality and enzyme function. An acid,\ \ a hydrogen ion (H+, that is, a proton) donor, can be neutralized by a base,\ \ a proton acceptor such as a hydroxide ion (OH−) to form water. Water is considered\ \ to be neutral, with a pH (the negative log of the hydrogen ion concentration)\ \ of 7. Acids have pH values less than 7 while bases have values greater than\ \ 7.\n\nAquatic life forms\n\nEarth surface waters are filled with life. The earliest\ \ life forms appeared in water; nearly all fish live exclusively in water, and\ \ there are many types of marine mammals, such as dolphins and whales. Some kinds\ \ of animals, such as amphibians, spend portions of their lives in water and portions\ \ on land. Plants such as kelp and algae grow in the water and are the basis for\ \ some underwater ecosystems. Plankton is generally the foundation of the ocean\ \ food chain.\n\nAquatic vertebrates must obtain oxygen to survive, and they do\ \ so in various ways. Fish have gills instead of lungs, although some species\ \ of fish, such as the lungfish, have both. Marine mammals, such as dolphins,\ \ whales, otters, and seals need to surface periodically to breathe air. Some\ \ amphibians are able to absorb oxygen through their skin. Invertebrates exhibit\ \ a wide range of modifications to survive in poorly oxygenated waters including\ \ breathing tubes (see insect and mollusc siphons) and gills (Carcinus). However,\ \ as invertebrate life evolved in an aquatic habitat most have little or no specialization\ \ for respiration in water.\n\nEffects on human civilization\n\nCivilization has\ \ historically flourished around rivers and major waterways; Mesopotamia, the\ \ so-called cradle of civilization, was situated between the major rivers Tigris\ \ and Euphrates; the ancient society of the Egyptians depended entirely upon the\ \ Nile. The early Indus Valley Civilization (c. 3300 BCE to 1300 BCE) developed\ \ along the Indus River and tributaries that flowed out of the Himalayas. Rome\ \ was also founded on the banks of the Italian river Tiber. Large metropolises\ \ like Rotterdam, London, Montreal, Paris, New York City, Buenos Aires, Shanghai,\ \ Tokyo, Chicago, and Hong Kong owe their success in part to their easy accessibility\ \ via water and the resultant expansion of trade. Islands with safe water ports,\ \ like Singapore, have flourished for the same reason. In places such as North\ \ Africa and the Middle East, where water is more scarce, access to clean drinking\ \ water was and is a major factor in human development.\n\nHealth and pollution\n\ \nWater fit for human consumption is called drinking water or potable water. Water\ \ that is not potable may be made potable by filtration or distillation, or by\ \ a range of other methods. More than 660 million people do not have access to\ \ safe drinking water.\n\nWater that is not fit for drinking but is not harmful\ \ to humans when used for swimming or bathing is called by various names other\ \ than potable or drinking water, and is sometimes called safe water, or \"safe\ \ for bathing\". Chlorine is a skin and mucous membrane irritant that is used\ \ to make water safe for bathing or drinking. Its use is highly technical and\ \ is usually monitored by government regulations (typically 1 part per million\ \ (ppm) for drinking water, and 1–2 ppm of chlorine not yet reacted with impurities\ \ for bathing water). Water for bathing may be maintained in satisfactory microbiological\ \ condition using chemical disinfectants such as chlorine or ozone or by the use\ \ of ultraviolet light.\n\nWater reclamation is the process of converting wastewater\ \ (most commonly sewage, also called municipal wastewater) into water that can\ \ be reused for other purposes. \n\nFreshwater is a renewable resource, recirculated\ \ by the natural hydrologic cycle, but pressures over access to it result from\ \ the naturally uneven distribution in space and time, growing economic demands\ \ by agriculture and industry, and rising populations. Currently, nearly a billion\ \ people around the world lack access to safe, affordable water. In 2000, the\ \ United Nations established the Millennium Development Goals for water to halve\ \ by 2015 the proportion of people worldwide without access to safe water and\ \ sanitation. Progress toward that goal was uneven, and in 2015 the UN committed\ \ to the Sustainable Development Goals of achieving universal access to safe and\ \ affordable water and sanitation by 2030. Poor water quality and bad sanitation\ \ are deadly; some five million deaths a year are caused by water-related diseases.\ \ The World Health Organization estimates that safe water could prevent 1.4 million\ \ child deaths from diarrhoea each year.\n\nIn developing countries, 90% of all\ \ municipal wastewater still goes untreated into local rivers and streams. Some\ \ 50 countries, with roughly a third of the world's population, also suffer from\ \ medium or high water scarcity and 17 of these extract more water annually than\ \ is recharged through their natural water cycles. The strain not only affects\ \ surface freshwater bodies like rivers and lakes, but it also degrades groundwater\ \ resources.\n\nHuman uses\n\nAgriculture\nThe most substantial human use of water\ \ is for agriculture, including irrigated agriculture, which accounts for as much\ \ as 80 to 90 percent of total human water consumption. In the United States,\ \ 42% of freshwater withdrawn for use is for irrigation, but the vast majority\ \ of water \"consumed\" (used and not returned to the environment) goes to agriculture.\n\ \nAccess to fresh water is often taken for granted, especially in developed countries\ \ that have build sophisticated water systems for collecting, purifying, and delivering\ \ water, and removing wastewater. But growing economic, demographic, and climatic\ \ pressures are increasing concerns about water issues, leading to increasing\ \ competition for fixed water resources, giving rise to the concept of peak water.\ \ As populations and economies continue to grow, consumption of water-thirsty\ \ meat expands, and new demands rise for biofuels or new water-intensive industries,\ \ new water challenges are likely.\n\nAn assessment of water management in agriculture\ \ was conducted in 2007 by the International Water Management Institute in Sri\ \ Lanka to see if the world had sufficient water to provide food for its growing\ \ population. It assessed the current availability of water for agriculture on\ \ a global scale and mapped out locations suffering from water scarcity. It found\ \ that a fifth of the world's people, more than 1.2 billion, live in areas of\ \ physical water scarcity, where there is not enough water to meet all demands.\ \ A further 1.6 billion people live in areas experiencing economic water scarcity,\ \ where the lack of investment in water or insufficient human capacity make it\ \ impossible for authorities to satisfy the demand for water. The report found\ \ that it would be possible to produce the food required in the future, but that\ \ continuation of today's food production and environmental trends would lead\ \ to crises in many parts of the world. To avoid a global water crisis, farmers\ \ will have to strive to increase productivity to meet growing demands for food,\ \ while industries and cities find ways to use water more efficiently.\n\nWater\ \ scarcity is also caused by production of water intensive products. For example,\ \ cotton: 1 kg of cotton—equivalent of a pair of jeans—requires water to produce.\ \ While cotton accounts for 2.4% of world water use, the water is consumed in\ \ regions that are already at a risk of water shortage. Significant environmental\ \ damage has been caused: for example, the diversion of water by the former Soviet\ \ Union from the Amu Darya and Syr Darya rivers to produce cotton was largely\ \ responsible for the disappearance of the Aral Sea.\n\nAs a scientific standard\n\ On 7 April 1795, the gram was defined in France to be equal to \"the absolute\ \ weight of a volume of pure water equal to a cube of one-hundredth of a meter,\ \ and at the temperature of melting ice\". For practical purposes though, a metallic\ \ reference standard was required, one thousand times more massive, the kilogram.\ \ Work was therefore commissioned to determine precisely the mass of one liter\ \ of water. In spite of the fact that the decreed definition of the gram specified\ \ water at —a highly reproducible temperature—the scientists chose to redefine\ \ the standard and to perform their measurements at the temperature of highest\ \ water density, which was measured at the time as .\n\nThe Kelvin temperature\ \ scale of the SI system was based on the triple point of water, defined as exactly\ \ , but as of May 2019 is based on the Boltzmann constant instead. The scale is\ \ an absolute temperature scale with the same increment as the Celsius temperature\ \ scale, which was originally defined according to the boiling point (set to )\ \ and melting point (set to ) of water.\n\nNatural water consists mainly of the\ \ isotopes hydrogen-1 and oxygen-16, but there is also a small quantity of heavier\ \ isotopes oxygen-18, oxygen-17, and hydrogen-2 (deuterium). The percentage of\ \ the heavier isotopes is very small, but it still affects the properties of water.\ \ Water from rivers and lakes tends to contain less heavy isotopes than seawater.\ \ Therefore, standard water is defined in the Vienna Standard Mean Ocean Water\ \ specification.\n\nFor drinking\n\nThe human body contains from 55% to 78% water,\ \ depending on body size. To function properly, the body requires between of\ \ water per day to avoid dehydration; the precise amount depends on the level\ \ of activity, temperature, humidity, and other factors. Most of this is ingested\ \ through foods or beverages other than drinking straight water. It is not clear\ \ how much water intake is needed by healthy people, though the British Dietetic\ \ Association advises that 2.5 liters of total water daily is the minimum to maintain\ \ proper hydration, including 1.8 liters (6 to 7 glasses) obtained directly from\ \ beverages. Medical literature favors a lower consumption, typically 1 liter\ \ of water for an average male, excluding extra requirements due to fluid loss\ \ from exercise or warm weather.\n\nHealthy kidneys can excrete 0.8 to 1 liter\ \ of water per hour, but stress such as exercise can reduce this amount. People\ \ can drink far more water than necessary while exercising, putting them at risk\ \ of water intoxication (hyperhydration), which can be fatal. The popular claim\ \ that \"a person should consume eight glasses of water per day\" seems to have\ \ no real basis in science. Studies have shown that extra water intake, especially\ \ up to at mealtime was associated with weight loss. Adequate fluid intake is\ \ helpful in preventing constipation.\n\nAn original recommendation for water\ \ intake in 1945 by the Food and Nutrition Board of the United States National\ \ Research Council read: \"An ordinary standard for diverse persons is 1 milliliter\ \ for each calorie of food. Most of this quantity is contained in prepared foods.\"\ \ The latest dietary reference intake report by the United States National Research\ \ Council in general recommended, based on the median total water intake from\ \ US survey data (including food sources): for men and of water total for women,\ \ noting that water contained in food provided approximately 19% of total water\ \ intake in the survey.\n\nSpecifically, pregnant and breastfeeding women need\ \ additional fluids to stay hydrated. The Institute of Medicine (US) recommends\ \ that, on average, men consume and women ; pregnant women should increase intake\ \ to and breastfeeding women should get 3 liters (12 cups), since an especially\ \ large amount of fluid is lost during nursing. Also noted is that normally, about\ \ 20% of water intake comes from food, while the rest comes from drinking water\ \ and beverages (caffeinated included). Water is excreted from the body in multiple\ \ forms; through urine and feces, through sweating, and by exhalation of water\ \ vapor in the breath. With physical exertion and heat exposure, water loss will\ \ increase and daily fluid needs may increase as well.\n\nHumans require water\ \ with few impurities. Common impurities include metal salts and oxides, including\ \ copper, iron, calcium and lead, and/or harmful bacteria, such as Vibrio. Some\ \ solutes are acceptable and even desirable for taste enhancement and to provide\ \ needed electrolytes.\n\nThe single largest (by volume) freshwater resource suitable\ \ for drinking is Lake Baikal in Siberia.\n\nWashing\nThe propensity of water\ \ to form solutions and emulsions is useful in various washing processes. Washing\ \ is also an important component of several aspects of personal body hygiene.\ \ Most of the personal water use is due to showering, doing the laundry and dishwashing,\ \ reaching hundreds of liters per day per person in developed countries.\n\nTransportation\n\ \nThe use of water for transportation of materials through rivers and canals as\ \ well as the international shipping lanes is an important part of the world economy.\n\ \nChemical uses\nWater is widely used in chemical reactions as a solvent or reactant\ \ and less commonly as a solute or catalyst. In inorganic reactions, water is\ \ a common solvent, dissolving many ionic compounds, as well as other polar compounds\ \ such as ammonia and compounds closely related to water. In organic reactions,\ \ it is not usually used as a reaction solvent, because it does not dissolve the\ \ reactants well and is amphoteric (acidic and basic) and nucleophilic. Nevertheless,\ \ these properties are sometimes desirable. Also, acceleration of Diels-Alder\ \ reactions by water has been observed. Supercritical water has recently been\ \ a topic of research. Oxygen-saturated supercritical water combusts organic pollutants\ \ efficiently. Water vapor is used for some processes in the chemical industry.\ \ An example is the production of acrylic acid from acrolein, propylene and propane.\ \ The possible effect of water in these reactions includes the physical-, chemical\ \ interaction of water with the catalyst and the chemical reaction of water with\ \ the reaction intermediates.\n\nHeat exchange\nWater and steam are a common fluid\ \ used for heat exchange, due to its availability and high heat capacity, both\ \ for cooling and heating. Cool water may even be naturally available from a lake\ \ or the sea. It's especially effective to transport heat through vaporization\ \ and condensation of water because of its large latent heat of vaporization.\ \ A disadvantage is that metals commonly found in industries such as steel and\ \ copper are oxidized faster by untreated water and steam. In almost all thermal\ \ power stations, water is used as the working fluid (used in a closed-loop between\ \ boiler, steam turbine, and condenser), and the coolant (used to exchange the\ \ waste heat to a water body or carry it away by evaporation in a cooling tower).\ \ In the United States, cooling power plants is the largest use of water.\n\n\ In the nuclear power industry, water can also be used as a neutron moderator.\ \ In most nuclear reactors, water is both a coolant and a moderator. This provides\ \ something of a passive safety measure, as removing the water from the reactor\ \ also slows the nuclear reaction down. However other methods are favored for\ \ stopping a reaction and it is preferred to keep the nuclear core covered with\ \ water so as to ensure adequate cooling.\n\nFire considerations\n\nWater has\ \ a high heat of vaporization and is relatively inert, which makes it a good fire\ \ extinguishing fluid. The evaporation of water carries heat away from the fire.\ \ It is dangerous to use water on fires involving oils and organic solvents because\ \ many organic materials float on water and the water tends to spread the burning\ \ liquid.\n\nUse of water in fire fighting should also take into account the hazards\ \ of a steam explosion, which may occur when water is used on very hot fires in\ \ confined spaces, and of a hydrogen explosion, when substances which react with\ \ water, such as certain metals or hot carbon such as coal, charcoal, or coke\ \ graphite, decompose the water, producing water gas.\n\nThe power of such explosions\ \ was seen in the Chernobyl disaster, although the water involved in this case\ \ did not come from fire-fighting but from the reactor's own water cooling system.\ \ A steam explosion occurred when the extreme overheating of the core caused water\ \ to flash into steam. A hydrogen explosion may have occurred as a result of a\ \ reaction between steam and hot zirconium.\n\nSome metallic oxides, most notably\ \ those of alkali metals and alkaline earth metals, produce so much heat on reaction\ \ with water that a fire hazard can develop. The alkaline earth oxide quicklime\ \ is a mass-produced substance that is often transported in paper bags. If these\ \ are soaked through, they may ignite as their contents react with water.\n\n\ Recreation\n\nHumans use water for many recreational purposes, as well as for\ \ exercising and for sports. Some of these include swimming, waterskiing, boating,\ \ surfing and diving. In addition, some sports, like ice hockey and ice skating,\ \ are played on ice. Lakesides, beaches and water parks are popular places for\ \ people to go to relax and enjoy recreation. Many find the sound and appearance\ \ of flowing water to be calming, and fountains and other water features are popular\ \ decorations. Some keep fish and other flora and fauna inside aquariums or ponds\ \ for show, fun, and companionship. Humans also use water for snow sports i.e.\ \ skiing, sledding, snowmobiling or snowboarding, which require the water to be\ \ frozen.\n\nWater industry\nThe water industry provides drinking water and wastewater\ \ services (including sewage treatment) to households and industry. Water supply\ \ facilities include water wells, cisterns for rainwater harvesting, water supply\ \ networks, and water purification facilities, water tanks, water towers, water\ \ pipes including old aqueducts. Atmospheric water generators are in development.\n\ \nDrinking water is often collected at springs, extracted from artificial borings\ \ (wells) in the ground, or pumped from lakes and rivers. Building more wells\ \ in adequate places is thus a possible way to produce more water, assuming the\ \ aquifers can supply an adequate flow. Other water sources include rainwater\ \ collection. Water may require purification for human consumption. This may involve\ \ the removal of undissolved substances, dissolved substances and harmful microbes.\ \ Popular methods are filtering with sand which only removes undissolved material,\ \ while chlorination and boiling kill harmful microbes. Distillation does all\ \ three functions. More advanced techniques exist, such as reverse osmosis. Desalination\ \ of abundant seawater is a more expensive solution used in coastal arid climates.\n\ \nThe distribution of drinking water is done through municipal water systems,\ \ tanker delivery or as bottled water. Governments in many countries have programs\ \ to distribute water to the needy at no charge.\n\nReducing usage by using drinking\ \ (potable) water only for human consumption is another option. In some cities\ \ such as Hong Kong, seawater is extensively used for flushing toilets citywide\ \ in order to conserve freshwater resources.\n\nPolluting water may be the biggest\ \ single misuse of water; to the extent that a pollutant limits other uses of\ \ the water, it becomes a waste of the resource, regardless of benefits to the\ \ polluter. Like other types of pollution, this does not enter standard accounting\ \ of market costs, being conceived as externalities for which the market cannot\ \ account. Thus other people pay the price of water pollution, while the private\ \ firms' profits are not redistributed to the local population, victims of this\ \ pollution. Pharmaceuticals consumed by humans often end up in the waterways\ \ and can have detrimental effects on aquatic life if they bioaccumulate and if\ \ they are not biodegradable.\n\nMunicipal and industrial wastewater are typically\ \ treated at wastewater treatment plants. Mitigation of polluted surface runoff\ \ is addressed through a variety of prevention and treatment techniques. (See\ \ Surface runoff#Mitigation and treatment.)\n\nIndustrial applications\nMany industrial\ \ processes rely on reactions using chemicals dissolved in water, suspension of\ \ solids in water slurries or using water to dissolve and extract substances,\ \ or to wash products or process equipment. Processes such as mining, chemical\ \ pulping, pulp bleaching, paper manufacturing, textile production, dyeing, printing,\ \ and cooling of power plants use large amounts of water, requiring a dedicated\ \ water source, and often cause significant water pollution.\n\nWater is used\ \ in power generation. Hydroelectricity is electricity obtained from hydropower.\ \ Hydroelectric power comes from water driving a water turbine connected to a\ \ generator. Hydroelectricity is a low-cost, non-polluting, renewable energy source.\ \ The energy is supplied by the motion of water. Typically a dam is constructed\ \ on a river, creating an artificial lake behind it. Water flowing out of the\ \ lake is forced through turbines that turn generators.\n\nPressurized water is\ \ used in water blasting and water jet cutters. Also, high pressure water guns\ \ are used for precise cutting. It works very well, is relatively safe, and is\ \ not harmful to the environment. It is also used in the cooling of machinery\ \ to prevent overheating, or prevent saw blades from overheating.\n\nWater is\ \ also used in many industrial processes and machines, such as the steam turbine\ \ and heat exchanger, in addition to its use as a chemical solvent. Discharge\ \ of untreated water from industrial uses is pollution. Pollution includes discharged\ \ solutes (chemical pollution) and discharged coolant water (thermal pollution).\ \ Industry requires pure water for many applications and utilizes a variety of\ \ purification techniques both in water supply and discharge.\n\nFood processing\n\ \nBoiling, steaming, and simmering are popular cooking methods that often require\ \ immersing food in water or its gaseous state, steam. Water is also used for\ \ dishwashing. Water also plays many critical roles within the field of food science.\n\ \nSolutes such as salts and sugars found in water affect the physical properties\ \ of water. The boiling and freezing points of water are affected by solutes,\ \ as well as air pressure, which is in turn affected by altitude. Water boils\ \ at lower temperatures with the lower air pressure that occurs at higher elevations.\ \ One mole of sucrose (sugar) per kilogram of water raises the boiling point of\ \ water by , and one mole of salt per kg raises the boiling point by ; similarly,\ \ increasing the number of dissolved particles lowers water's freezing point.\n\ \nSolutes in water also affect water activity that affects many chemical reactions\ \ and the growth of microbes in food. Water activity can be described as a ratio\ \ of the vapor pressure of water in a solution to the vapor pressure of pure water.\ \ Solutes in water lower water activity—this is important to know because most\ \ bacterial growth ceases at low levels of water activity. Not only does microbial\ \ growth affect the safety of food, but also the preservation and shelf life of\ \ food.\n\nWater hardness is also a critical factor in food processing and may\ \ be altered or treated by using a chemical ion exchange system. It can dramatically\ \ affect the quality of a product, as well as playing a role in sanitation. Water\ \ hardness is classified based on concentration of calcium carbonate the water\ \ contains. Water is classified as soft if it contains less than 100 mg/l (UK)\ \ or less than 60 mg/l (US).\n\nAccording to a report published by the Water Footprint\ \ organization in 2010, a single kilogram of beef requires of water; however,\ \ the authors also make clear that this is a global average and circumstantial\ \ factors determine the amount of water used in beef production.\n\nMedical use\n\ Water for injection is on the World Health Organization's list of essential medicines.\n\ \nDistribution in nature\n\nIn the universe\n\nMuch of the universe's water is\ \ produced as a byproduct of star formation. The formation of stars is accompanied\ \ by a strong outward wind of gas and dust. When this outflow of material eventually\ \ impacts the surrounding gas, the shock waves that are created compress and heat\ \ the gas. The water observed is quickly produced in this warm dense gas.\n\n\ On 22 July 2011, a report described the discovery of a gigantic cloud of water\ \ vapor containing \"140 trillion times more water than all of Earth's oceans\ \ combined\" around a quasar located 12 billion light years from Earth. According\ \ to the researchers, the \"discovery shows that water has been prevalent in the\ \ universe for nearly its entire existence\".\n\nWater has been detected in interstellar\ \ clouds within our galaxy, the Milky Way. Water probably exists in abundance\ \ in other galaxies, too, because its components, hydrogen, and oxygen, are among\ \ the most abundant elements in the universe. Based on models of the formation\ \ and evolution of the Solar System and that of other star systems, most other\ \ planetary systems are likely to have similar ingredients.\n\nWater vapor\nWater\ \ is present as vapor in:\n Atmosphere of the Sun: in detectable trace amounts\n\ \ Atmosphere of Mercury: 3.4%, and large amounts of water in Mercury's exosphere\n\ \ Atmosphere of Venus: 0.002%\n Earth's atmosphere: ≈0.40% over full atmosphere,\ \ typically 1–4% at surface; as well as that of the Moon in trace amounts\n Atmosphere\ \ of Mars: 0.03%\n Atmosphere of Ceres\n Atmosphere of Jupiter: 0.0004% – in ices\ \ only; and that of its moon Europa\n Atmosphere of Saturn – in ices only; Enceladus:\ \ 91% and Dione (exosphere)\n Atmosphere of Uranus – in trace amounts below 50\ \ bar\n Atmosphere of Neptune – found in the deeper layers\n Extrasolar planet\ \ atmospheres: including those of HD 189733 b and HD 209458 b, Tau Boötis b, HAT-P-11b,\ \ XO-1b, WASP-12b, WASP-17b, and WASP-19b.\n Stellar atmospheres: not limited\ \ to cooler stars and even detected in giant hot stars such as Betelgeuse, Mu\ \ Cephei, Antares and Arcturus.\n Circumstellar disks: including those of more\ \ than half of T Tauri stars such as AA Tauri as well as TW Hydrae, IRC +10216\ \ and APM 08279+5255, VY Canis Majoris and S Persei.\n\nLiquid water\nLiquid water\ \ is present on Earth, covering 71% of its surface. Liquid water is also occasionally\ \ present in small amounts on Mars. Scientists believe liquid water is present\ \ in the Saturnian moons of Enceladus, as a 10-kilometre thick ocean approximately\ \ 30–40 kilometres below Enceladus' south polar surface, and Titan, as a subsurface\ \ layer, possibly mixed with ammonia. Jupiter's moon Europa has surface characteristics\ \ which suggest a subsurface liquid water ocean. Liquid water may also exist on\ \ Jupiter's moon Ganymede as a layer sandwiched between high pressure ice and\ \ rock.\n\nWater ice\nWater is present as ice on:\n\n Mars: under the regolith\ \ and at the poles.\n Earth–Moon system: mainly as ice sheets on Earth and in\ \ Lunar craters and volcanic rocks NASA reported the detection of water molecules\ \ by NASA's Moon Mineralogy Mapper aboard the Indian Space Research Organization's\ \ Chandrayaan-1 spacecraft in September 2009.\n Ceres\n Jupiter's moons: Europa's\ \ surface and also that of Ganymede and Callisto\n Saturn: in the planet's ring\ \ system and on the surface and mantle of Titan and Enceladus\n Pluto–Charon system\n\ \ Comets and other related Kuiper belt and Oort cloud objects\n\nAnd is also likely\ \ present on:\n Mercury's poles\n Tethys\n\nExotic forms\nWater and other volatiles\ \ probably comprise much of the internal structures of Uranus and Neptune and\ \ the water in the deeper layers may be in the form of ionic water in which the\ \ molecules break down into a soup of hydrogen and oxygen ions, and deeper still\ \ as superionic water in which the oxygen crystallises, but the hydrogen ions\ \ float about freely within the oxygen lattice.\n\nWater and planetary habitability\n\ \nThe existence of liquid water, and to a lesser extent its gaseous and solid\ \ forms, on Earth are vital to the existence of life on Earth as we know it. The\ \ Earth is located in the habitable zone of the Solar System; if it were slightly\ \ closer to or farther from the Sun (about 5%, or about 8 million kilometers),\ \ the conditions which allow the three forms to be present simultaneously would\ \ be far less likely to exist.\n\nEarth's gravity allows it to hold an atmosphere.\ \ Water vapor and carbon dioxide in the atmosphere provide a temperature buffer\ \ (greenhouse effect) which helps maintain a relatively steady surface temperature.\ \ If Earth were smaller, a thinner atmosphere would allow temperature extremes,\ \ thus preventing the accumulation of water except in polar ice caps (as on Mars).\n\ \nThe surface temperature of Earth has been relatively constant through geologic\ \ time despite varying levels of incoming solar radiation (insolation), indicating\ \ that a dynamic process governs Earth's temperature via a combination of greenhouse\ \ gases and surface or atmospheric albedo. This proposal is known as the Gaia\ \ hypothesis.\n\nThe state of water on a planet depends on ambient pressure, which\ \ is determined by the planet's gravity. If a planet is sufficiently massive,\ \ the water on it may be solid even at high temperatures, because of the high\ \ pressure caused by gravity, as it was observed on exoplanets Gliese 436 b and\ \ GJ 1214 b.\n\nLaw, politics, and crisis\n\nWater politics is politics affected\ \ by water and water resources. For this reason, water is a strategic resource\ \ in the globe and an important element in many political conflicts. It causes\ \ health impacts and damage to biodiversity.\n\nAccess to safe drinking water\ \ has improved over the last decades in almost every part of the world, but approximately\ \ one billion people still lack access to safe water and over 2.5 billion lack\ \ access to adequate sanitation. However, some observers have estimated that by\ \ 2025 more than half of the world population will be facing water-based vulnerability.\ \ A report, issued in November 2009, suggests that by 2030, in some developing\ \ regions of the world, water demand will exceed supply by 50%.\n\n1.6 billion\ \ people have gained access to a safe water source since 1990. The proportion\ \ of people in developing countries with access to safe water is calculated to\ \ have improved from 30% in 1970 to 71% in 1990, 79% in 2000 and 84% in 2004.\n\ \nA 2006 United Nations report stated that \"there is enough water for everyone\"\ , but that access to it is hampered by mismanagement and corruption. In addition,\ \ global initiatives to improve the efficiency of aid delivery, such as the Paris\ \ Declaration on Aid Effectiveness, have not been taken up by water sector donors\ \ as effectively as they have in education and health, potentially leaving multiple\ \ donors working on overlapping projects and recipient governments without empowerment\ \ to act.\n\nThe authors of the 2007 Comprehensive Assessment of Water Management\ \ in Agriculture cited poor governance as one reason for some forms of water scarcity.\ \ Water governance is the set of formal and informal processes through which decisions\ \ related to water management are made. Good water governance is primarily about\ \ knowing what processes work best in a particular physical and socioeconomic\ \ context. Mistakes have sometimes been made by trying to apply 'blueprints' that\ \ work in the developed world to developing world locations and contexts. The\ \ Mekong river is one example; a review by the International Water Management\ \ Institute of policies in six countries that rely on the Mekong river for water\ \ found that thorough and transparent cost-benefit analyses and environmental\ \ impact assessments were rarely undertaken. They also discovered that Cambodia's\ \ draft water law was much more complex than it needed to be.\n\nThe UN World\ \ Water Development Report (WWDR, 2003) from the World Water Assessment Program\ \ indicates that, in the next 20 years, the quantity of water available to everyone\ \ is predicted to decrease by 30%. 40% of the world's inhabitants currently have\ \ insufficient fresh water for minimal hygiene. More than 2.2 million people died\ \ in 2000 from waterborne diseases (related to the consumption of contaminated\ \ water) or drought. In 2004, the UK charity WaterAid reported that a child dies\ \ every 15 seconds from easily preventable water-related diseases; often this\ \ means lack of sewage disposal.\n\nOrganizations concerned with water protection\ \ include the International Water Association (IWA), WaterAid, Water 1st, and\ \ the American Water Resources Association. The International Water Management\ \ Institute undertakes projects with the aim of using effective water management\ \ to reduce poverty. Water related conventions are United Nations Convention to\ \ Combat Desertification (UNCCD), International Convention for the Prevention\ \ of Pollution from Ships, United Nations Convention on the Law of the Sea and\ \ Ramsar Convention. World Day for Water takes place on 22 March and World Oceans\ \ Day on 8 June.\n\nIn culture\n\nReligion\n\nWater is considered a purifier in\ \ most religions. Faiths that incorporate ritual washing (ablution) include Christianity,\ \ Hinduism, Islam, Judaism, the Rastafari movement, Shinto, Taoism, and Wicca.\ \ Immersion (or aspersion or affusion) of a person in water is a central sacrament\ \ of Christianity (where it is called baptism); it is also a part of the practice\ \ of other religions, including Islam (Ghusl), Judaism (mikvah) and Sikhism (Amrit\ \ Sanskar). In addition, a ritual bath in pure water is performed for the dead\ \ in many religions including Islam and Judaism. In Islam, the five daily prayers\ \ can be done in most cases after washing certain parts of the body using clean\ \ water (wudu), unless water is unavailable (see Tayammum). In Shinto, water is\ \ used in almost all rituals to cleanse a person or an area (e.g., in the ritual\ \ of misogi).\n\nIn Christianity, holy water is water that has been sanctified\ \ by a priest for the purpose of baptism, the blessing of persons, places, and\ \ objects, or as a means of repelling evil.\n\nIn Zoroastrianism, water (āb) is\ \ respected as the source of life.\n\nPhilosophy\n\nThe Ancient Greek philosopher\ \ Empedocles saw water as one of the four classical elements (along with fire,\ \ earth, and air), and regarded it as an ylem, or basic substance of the universe.\ \ Thales, whom Aristotle portrayed as an astronomer and an engineer, theorized\ \ that the earth, which is denser than water, emerged from the water. Thales,\ \ a monist, believed further that all things are made from water. Plato believed\ \ that the shape of water is an icosahedron - thus explaining why it flows easily\ \ compared to the cube-shaped earth.\n\nThe theory of the four bodily humors associated\ \ water with phlegm, as being cold and moist. The classical element of water\ \ was also one of the five elements in traditional Chinese philosophy (along with\ \ earth, fire, wood, and metal).\n\nSome traditional and popular Asian philosophical\ \ systems take water as a role-model. James Legge's 1891 translation of the Dao\ \ De Jing states, \"The highest excellence is like (that of) water. The excellence\ \ of water appears in its benefiting all things, and in its occupying, without\ \ striving (to the contrary), the low place which all men dislike. Hence (its\ \ way) is near to (that of) the Tao\" and \"There is nothing in the world more\ \ soft and weak than water, and yet for attacking things that are firm and strong\ \ there is nothing that can take precedence of it—for there is nothing (so effectual)\ \ for which it can be changed.\" Guanzi in the \"Shui di\" 水地 chapter further\ \ elaborates on the symbolism of water, proclaiming that \"man is water\" and\ \ attributing natural qualities of the people of different Chinese regions to\ \ the character of local water resources.\n\nFolklore \n\"Living water\" features\ \ in Germanic and Slavic folktales as a means of bringing the dead back to life.\ \ Note the Grimm fairy-tale (\"The Water of Life\") and the Russian dichotomy\ \ of and dead water ). The Fountain of Youth represents a related concept of\ \ magical waters allegedly preventing aging.\n\nArt and activism\nPainter and\ \ activist Fredericka Foster curated The Value of Water, at the Cathedral of St.\ \ John the Divine in New York City, which anchored a year long initiative by\ \ the Cathedral on our dependence on water. The largest exhibition to ever appear\ \ at the Cathedral, it featured over forty artists, including Jenny Holzer, Robert\ \ Longo, Mark Rothko, William Kentridge, April Gornik, Kiki Smith, Pat Steir,\ \ William Kentridge, Alice Dalton Brown, Teresita Fernandez and Bill Viola. \ \ Foster created Think About Water, an ecological collective of artists who use\ \ water as their subject or medium. Members include Basia Irland, Aviva Rahmani,\ \ Betsy Damon, Diane Burko, Leila Daw, Stacy Levy, Charlotte Coté, Meridel Rubenstein,\ \ Stacy Levy, Anna Macleod, and Aviva Rahmani.\n\nTo mark the 10th anniversary\ \ of access to water and sanitation being declared a human right by the UN, the\ \ charity WaterAid commissioned ten visual artists to show the impact of clean\ \ water on people’s lives.\n\nDihydrogen monoxide parody\n\nWater's technically\ \ correct but rarely used chemical name, dihydrogen monoxide, has been used in\ \ a series of hoaxes and pranks that mock scientific illiteracy. This began in\ \ 1983, when an April Fools' Day article appeared in a newspaper in Durand, Michigan.\ \ The false story consisted of safety concerns about the substance.\n\nSee also\n\ \n Outline of water\n Water (data page) is a collection of the chemical and physical\ \ properties of water.\n Aquaphobia (fear of water)\n Human right to water and\ \ sanitation\n Mpemba effect\n Oral rehydration therapy\n Thirst\n Water pinch\ \ analysis\n\nReferences\n\nFurther reading\n\n Debenedetti, PG., and HE Stanley,\ \ \"Supercooled and Glassy Water\", Physics Today 56 (6), pp. 40–46 (2003). Downloadable\ \ PDF (1.9 MB)\n\n Gleick, PH., (editor), The World's Water: The Biennial Report\ \ on Freshwater Resources. Island Press, Washington, D.C. (published every two\ \ years, beginning in 1998.) The World's Water, Island Press\n \n Journal of Contemporary\ \ Water Research & Education\n Postel, S., Last Oasis: Facing Water Scarcity.\ \ W.W. Norton and Company, New York. 1992\n Reisner, M., Cadillac Desert: The\ \ American West and Its Disappearing Water. Penguin Books, New York. 1986.\n United\ \ Nations World Water Development Report. Produced every three years.\n St. Fleur,\ \ Nicholas. The Water in Your Glass Might Be Older Than the Sun. \"The water you\ \ drink is older than the planet you're standing on.\" The New York Times (15\ \ April 2016)\n\nExternal links\n\n OECD Water statistics\n The World's Water\ \ Data Page\n FAO Comprehensive Water Database, AQUASTAT\n The Water Conflict\ \ Chronology: Water Conflict Database\n Water science school (USGS)\n Portal to\ \ The World Bank's strategy, work and associated publications on water resources\n\ \ America Water Resources Association \n Water on the web\n Water structure and\ \ science \n Why water is one of the weirdest things in the universe BBC Ideas,\ \ Video, 3:16 minutes, 2019\n The chemistry of water (NSF special report)\n The\ \ International Association for the Properties of Water and Steam\n H2O:The Molecule\ \ That Made Us, a 2020 PBS documentary\n\nArticles containing video clips\nHydrogen\ \ compounds\nInorganic solvents\nLiquids\nMaterials that expand upon freezing\n\ Nuclear reactor coolants\nOxides\nOxygen compounds" - "This article gives a chronological list of years in Australian Test cricket (descending\ \ order), with series, notable matches, and events listed with their respective\ \ years. The list of years commences in 1877, the year of the first cricket Test\ \ played between Australia and England.\n\nNote: inclusion of death notes are\ \ for Australian Test captains, and significant figures within the game. Results\ \ of Test matches show close or large wins or losses, and ties. Individual batting\ \ scores or bowling figures show significant performances. See List of Australia\ \ Test cricket records.\n\n21st century\n\n2010s \n2020 India beat Australia in\ \ Australia \n\n2017\n\n2016\n Indian cricket team in Australia in 2015–16, the\ \ 5-ODI series is won by Australia 4–1, the 3-T20 series is won by India 3–0.\n\ \ Australian cricket team in New Zealand in 2015–16 Australia won the 2 match\ \ test series 2-0.\n Australian cricket team in Sri Lanka in 2016, the 3-Test\ \ series was won 3-0 by Sri Lanka.\n Pakistani cricket team in Australia in 2016–17,\ \ the 3-Test series was won 3-0 by Australia.\n\n2015\n Australian cricket team\ \ in the West Indies in 2015, the 2-Test series is won by Australia 2–0.\n Adam\ \ Voges 130* on debut vs West Indies at Roseau. \n Australian cricket team in\ \ England and Ireland in 2015, the 5-Test series is won by England 3–2.\n New\ \ Zealand cricket team in Australia in 2015–16, the 3-Test series is won by Australia\ \ 2–0.\n David Warner scores a century in both innings vs New Zealand at Brisbane.\ \ \n First day-night Test cricket match is held in Adelaide using a pink ball.\n\ \ West Indian cricket team in Australia in 2015–16, the 3-Test series is won by\ \ Australia 2–0.\n Death of Richie Benaud.\n\n2014\n Australian cricket team in\ \ South Africa in 2013–14, the 3-Test series is won by Australia 2–1. \n David\ \ Warner scores a century in both innings vs South Africa at Cape Town. \n Australian\ \ cricket team against Pakistan in the UAE in 2014–15, the 2-Test series is won\ \ by Pakistan 2–0. \n Pakistan defeats Australia by 356 runs at Abu Dhabi. \n\ \ Indian cricket team in Australia in 2014–15, the 4-Test series is won by Australia\ \ 2–0.\n David Warner scores a century in both innings vs India at Adelaide. \ \ \n Josh Hazlewood 5/68 on debut vs India at Brisbane.\n Ryan Harris and Chris\ \ Rogers named Wisden Cricketers of the Year.\n Death of Ian Craig. \n Death of\ \ Phillip Hughes.\n\n2013\n Australian cricket team in India in 2012–13, the 4-Test\ \ series is won by India 4–0. \n Australian cricket team in England in 2013, the\ \ 5-Test series is won by England 3–0. \n England defeats Australia by 347 runs\ \ at Lord's. \n English cricket team in Australia in 2013–14, the 5-Test series\ \ is won by Australia 5–0.\n\n2012\n Michael Clarke 329* vs India at Sydney. \n\ \ Ricky Ponting & Michael Clarke 386 for the 4th wicket vs India at Adelaide.\ \ \n Australian cricket team in the West Indies in 2011–12, the 3-Test series\ \ is won by Australia 2–0. \n Sri Lankan cricket team in Australia in 2012–13,\ \ the 3-Test series is won by Australia 3–0. \n South African cricket team in\ \ Australia in 2012–13, the 3-Test series is won by South Africa 1–0.\n\n2011\n\ \ Australian cricket team in Sri Lanka in 2011, the 3-Test series is won by Australia\ \ 1–0. \n Nathan Lyon dismisses Sri Lankan batsman Kumar Sangakkara with his first\ \ ball in Test cricket at Galle.\n Nathan Lyon 5/34 on debut vs Sri Lanka at Galle.\n\ \ Shaun Marsh 141 on debut vs Sri Lanka at Pallekele. \n Australian cricket team\ \ in South Africa in 2011–12, the 2-Test series is drawn 1–1. \n Australia 47\ \ vs South Africa at Newlands. \n Pat Cummins 6/79 on debut vs South Africa at\ \ Johannesburg.\n Australia defeats South Africa by 2 wickets at Johannesburg.\n\ \ New Zealand cricket team in Australia in 2011–12, the 2-Test series is drawn\ \ 1–1. \n James Pattinson 5/27 on debut vs New Zealand at Brisbane.\n New Zealand\ \ defeats Australia by 7 runs at Hobart.\n Indian cricket team in Australia in\ \ 2011–12, the 4-Test series is won by Australia 4–0.\n\n2010\n Australian cricket\ \ team in New Zealand in 2009–10, the 2-Test series is won by Australia 2–0. \n\ \ Australian cricket team against Pakistan in England in 2010, the 2-Test series\ \ is drawn 1–1. \n Australian cricket team in India in 2010–11, the 2-Test series\ \ is won by India 2–0. \n India defeats Australia by 1 wicket at Mohali. \n English\ \ cricket team in Australia in 2010–11, the 5-Test series is won by England 3–1.\ \ \n Peter Siddle takes a hat-trick vs England at Brisbane. \n Ryan Harris scores\ \ a king pair vs England at Adelaide. \n Michael Clarke named Wisden Cricketer\ \ of the Year.\n\n2000s \n2009\n Australian cricket team in South Africa in 2008–09,\ \ the 3-Test series is won by Australia 2–1. \n Marcus North 117 on debut vs South\ \ Africa at Johannesburg. \n Phillip Hughes scores a century in both innings vs\ \ South Africa at Durban. \n Australian cricket team in England in 2009, the 5-Test\ \ series is won by England 2–1.\n West Indian cricket team in Australia in 2009–10,\ \ the 3-Test series is won by Australia 2–0.\n Pakistani cricket team in Australia\ \ in 2009–10, the 3-Test series is won by Australia 3–0.\n\n2008 \n Australian\ \ cricket team in the West Indies in 2008, the 3-Test series is won by Australia\ \ 2–0. \n Australian cricket team in India in 2008–09, the 4-Test series is won\ \ by India 2–0.\n Jason Krejza 8/215 on debut vs India at Nagpur. \n New Zealand\ \ cricket team in Australia in 2008–09, the 2-Test series is won by Australia\ \ 2–0.\n South African cricket team in Australia in 2008–09, the 3-Test series\ \ is won by South Africa 2–1. \n Death of Bill Brown.\n\n2007\n Sri Lankan cricket\ \ team in Australia in 2007–08, the 2-Test series is won by Australia 2–0.\n Indian\ \ cricket team in Australia in 2007–08, the 4-Test series is won by Australia\ \ 2–1.\n\n2006 \n Ricky Ponting scores a century in both innings vs South Africa\ \ at Sydney. \n Australian cricket team in South Africa in 2005–06, the 3-Test\ \ series is won by Australia 3–0. \n Stuart Clark 5/55 on debut vs South Africa\ \ at Cape Town.\n Ricky Ponting scores a century in both innings vs South Africa\ \ at Kingsmead. \n Australia defeats South Africa by 2 wickets at Johannesburg.\ \ \n Australian cricket team in Bangladesh in 2005–06, the 2-Test series is won\ \ by Australia 2–0.\n Australia plays its first Test at Chittagong. \n Jason Gillespie\ \ 201* as a nightwatchman vs Bangladesh at Chittagong. \n English cricket team\ \ in Australia in 2006–07, the 5-Test series is won by Australia 5–0.\n Ricky\ \ Ponting and Brett Lee named as Wisden Cricketers of the Year.\n\n2005\n Australian\ \ cricket team in New Zealand in 2004–05, the 3-Test series is won by Australia\ \ 2–0. \n Ricky Ponting scores a century in both innings vs West Indies at Brisbane.\ \ \n Australian cricket team in England in 2005, the 5-Test series is won by England\ \ 2–1. \n England defeat Australia by 2 runs at Edgbaston. \n West Indian cricket\ \ team in Australia in 2005–06, the 3-Test series is won by Australia 3–0.\n South\ \ African cricket team in Australia in 2005–06, the 3-Test series is won by Australia\ \ 2–0.\n\n2004\n Australian cricket team in Sri Lanka in 2003–04, the 3-Test series\ \ is won by Australia 3–0.\n Sri Lankan cricket team in Australia in 2004 the\ \ 2-Test series is won by Australia 1–0. \n Matthew Hayden scores a century in\ \ both innings vs Sri Lanka at Cairns. \n Australian cricket team in India in\ \ 2004–05, the 4-Test series is won by Australia 2–1.\n Michael Clarke 151 on\ \ debut vs India at Bangalore. \n New Zealand cricket team in Australia in 2004–05,\ \ the 2-Test series is won by Australia 2–0.\n Pakistani cricket team in Australia\ \ in 2004–05, the 3-Test series is won by Australia 3–0.\n Australia defeat Pakistan\ \ by 491 runs at Perth. \n Ian Harvey named as Wisden Cricketer of the Year. \n\ \ Death of David Hookes.\n\n2003\n Bangladesh cricket team in Australia in 2003,\ \ the 3-Test series is won by Australia 3–0. \n Australia plays its first Test\ \ at Darwin. \n Australia plays its first Test at Cairns. \n Australian cricket\ \ team in the West Indies in 2003, the 4-Test series is won by Australia 3–1.\ \ \n Zimbabwean cricket team in Australia in 2003–04, the 2-Test series is won\ \ by Australia 2–0. \n Matthew Hayden 380 vs Zimbabwe at Perth. \n Australia 735-6d\ \ vs Zimbabwe at Perth. \n Indian cricket team in Australia in 2003–04, the 4-Test\ \ series is drawn 1–1.\n Matthew Hayden named Wisden Cricketer of the Year.\n\n\ 2002\n Australian cricket team in South Africa in 2001–02, the 3-Test series is\ \ won by Australia 2–1.\n Australia defeats South Africa by an innings and 360\ \ runs at Johannesburg. \n English cricket team in Australia in 2002–03, the 5-Test\ \ series is won by Australia 4–1.\n Matthew Hayden scores a century in both innings\ \ vs England at Brisbane.\n Australia defeats England by 384 runs at Brisbane.\n\ \ Adam Gilchrist, Jason Gillespie and Damien Martyn named Wisden Cricketers of\ \ the Year.\n\n2001\n Australian cricket team in India in 2000–01, the 3-Test\ \ series is won by India 2–1.\n Adam Gilchrist scores a king pair vs India at\ \ Kolkata. \n India defeats Australia by 171 runs after following-on at Kolkata.\n\ \ Australian cricket team in England in 2001, the 5-Test series is won by Australia\ \ 4–1. \n New Zealand cricket team in Australia in 2001–02, the 3-Test series\ \ is drawn 0–0.\n South African cricket team in Australia in 2001–02, the 3-Test\ \ series is won by Australia 3–0. \n Justin Langer and Darren Lehmann named as\ \ Wisden Cricketers of the Year. \n Death of Don Bradman.\n\n2000\n Australian\ \ cricket team in New Zealand in 1999–2000, the 3-Test series is won by Australia\ \ 3–0. \n West Indian cricket team in Australia in 2000–01, the 5-Test series\ \ is won by Australia 5–0.\n Glenn McGrath takes a hat-trick vs West Indies at\ \ Perth. \n Tom Moody named as Wisden Cricketer of the Year.\n\n20th century\n\ \n1990s \n1999 \n Australian cricket team in the West Indies in 1998–99, the 4-Test\ \ series is drawn 2–2. \n West Indies defeats Australia by 1 wicket at Bridgetown.\n\ \ Australian cricket team in Sri Lanka in 1999, the 3-Test series is won by Sri\ \ Lanka 1–0.\n Australian cricket team in Zimbabwe in 1999–2000, the 1-Test series\ \ is won by Australia 1–0.\n Pakistani cricket team in Australia in 1999–2000,\ \ the 3-Test series is won by Australia 3–0.\n Indian cricket team in Australia\ \ in 1999–2000, the 3-Test series is won by Australia 3–0.\n Brett Lee 5/47 on\ \ debut vs India at Melbourne.\n\n1998 \n Australian cricket team in India in\ \ 1997–98, the 3-Test series is won by India 2–1.\n India defeats Australia by\ \ an innings and 219 runs at Calcutta.\n Australian cricket team in Pakistan in\ \ 1998–99, the 3-Test series is won by Australia 1–0.\n Mark Taylor 334* vs Pakistan\ \ at Peshawar.\n English cricket team in Australia in 1998–99, the 5-Test series\ \ is won by Australia 3–1.\n Matthew Elliott, Stuart Law and Glenn McGrath named\ \ as Wisden Cricketers of the Year.\n Death of Ian Johnson.\n\n1997 \n Australian\ \ cricket team in South Africa in 1996–97, the 3-Test series is won by Australia\ \ 2–1.\n Steve Waugh & Greg Blewett 385 for the 5th wicket vs South Africa at\ \ Johannesburg.\n Australia defeats South Africa by 2 wickets at Port Elizabeth.\n\ \ Australian cricket team in England in 1997, the 6-Test series is won by Australia\ \ 3–2. \n Steve Waugh scores a century in both innings vs England at Old Trafford.\n\ \ New Zealand cricket team in Australia in 1997–98, the 3-Test series is won by\ \ Australia 2–0. \n Simon Cook 5/39 on debut vs New Zealand at Perth.\n South\ \ African cricket team in Australia in 1997–98, the 3-Test series is won by Australia\ \ 1–0.\n\n1996 \n Australian cricket team in India in 1996–97, the 1-Test series\ \ is won by India 1–0.\n West Indian cricket team in Australia in 1996–97, the\ \ 5-Test series is won by Australia 3–2.\n Death of Ray Lindwall.\n\n1995 \n Greg\ \ Blewett 102* on debut vs England at Adelaide. \n Peter McIntyre scores a pair\ \ on debut vs England at Adelaide. \n Australian cricket team in the West Indies\ \ in 1994–95, the 5-Test series is won by Australia 2–1. \n Pakistani cricket\ \ team in Australia in 1995–96, the 3-Test series is won by Australia 2–1.\n Sri\ \ Lankan cricket team in Australia in 1995–96, the 3-Test series is won by Australia\ \ 3–0.\n\n1994 \n South Africa defeats Australia by 5 runs at Sydney. \n Australian\ \ cricket team in South Africa in 1993–94, the 3-Test series is drawn 1–1. \n\ \ Australian cricket team in Pakistan in 1994–95, the 3-Test series is won by\ \ Pakistan 1–0.\n Pakistan defeats Australia by 1 wicket at Karachi. \n Damien\ \ Fleming takes a hat-trick vs Pakistan at Rawalpindi. \n English cricket team\ \ in Australia in 1994–95, the 5-Test series is won by Australia 3–1.\n Shane\ \ Warne takes a hat-trick vs England at Melbourne. \n David Boon, Ian Healy, Merv\ \ Hughes and Shane Warne named Wisden Cricketers of the Year.\n\n1993 \n West\ \ Indies defeats Australia by 1 run at Adelaide. \n Australian cricket team in\ \ New Zealand in 1992–93, the 3-Test series is drawn 1–1. \n Australian cricket\ \ team in England in 1993, the 6-Test series is won by Australia 4–1. \n New Zealand\ \ cricket team in Australia in 1993–94, the 3-Test series is won by Australia\ \ 2–0.\n Australia defeats New Zealand by an innings and 222 runs at Hobart. \n\ \ South African cricket team in Australia in 1993–94, the 3-Test series is drawn\ \ 1–1. \n Death of Lindsay Hassett.\n\n1992\n South Africa defeats Australia by\ \ 5 runs at Sydney. \n Australian cricket team in Sri Lanka in 1992, the 3-Test\ \ series is won by Australia 1–0.\n Australia defeats Sri Lanka by 16 runs at\ \ Colombo. \n West Indian cricket team in Australia in 1992–93, 5-Test series\ \ is won by West Indies 2–1.\n\n1991\n Mark Waugh 138 on debut vs England at Adelaide.\ \ \n Australian cricket team in the West Indies in 1990–91, the 5-Test series\ \ is won by West Indies 2–1. \n West Indies defeats Australia by 343 runs at Bridgetown.\ \ \n Indian cricket team in Australia in 1991–92, the 5-Test series is won by\ \ Australia 4–0.\n Mark Waugh named as Wisden Cricketer of the Year.\n\n1990\n\ \ Dean Jones scores a century in both innings vs Pakistan at Adelaide. \n Australian\ \ cricket team in New Zealand in 1989–90, the 1-Test series is won by New Zealand\ \ 1–0. \n English cricket team in Australia in 1990–91, the 5-Test series is won\ \ by Australia 3–0.\n Dean Jones and Mark Taylor named as Wisden Cricketers of\ \ the Year.\n\n1980s \n1989 \n Australian cricket team in England in 1989, the\ \ 6-Test series is won by Australia 4–0. \n New Zealand cricket team in Australia\ \ in 1989–90, the 1-Test series is drawn 0–0.\n Sri Lankan cricket team in Australia\ \ in 1989–90, the 2-Test series is won by Australia 1–0. \n Australia plays its\ \ first Test at Bellerive Oval in Hobart.\n Pakistani cricket team in Australia\ \ in 1989–90, the 3-Test series is won by Australia 1–0.\n Steve Waugh named as\ \ Wisden Cricketer of the Year.\n\n1988 \n English cricket team in Australia in\ \ 1987–88, the 1-Test series, to celebrate Australia's Bicentennial, is drawn.\n\ \ Australian cricket team in Pakistan in 1988–89, the 3-Test series is won by\ \ Pakistan 1–0.\n West Indian cricket team in Australia in 1988–89, the 5-Test\ \ series is won by the West Indies 3–1.\n Merv Hughes takes a hat-trick vs West\ \ Indies at Perth.\n\n1987 \n Peter Taylor 6/78 on debut vs England at Sydney.\ \ \n Sri Lankan cricket team in Australia in 1987–88, the 1-Test series is won\ \ by Australia 1–0.\n New Zealand cricket team in Australia in 1987–88, the 3-Test\ \ series is won by Australia 1–0.\n Tony Dodemaide 6/58 on debut vs New Zealand\ \ at Melbourne.\n\n1986 \n Australian cricket team in New Zealand in 1985–86,\ \ the 3-Test series is won by New Zealand 1–0. \n Allan Border scores a century\ \ in both innings vs New Zealand at Christchurch. \n Australian cricket team in\ \ India in 1986–87, the 3-Test series is drawn 0–0.\n Australia ties Test vs India\ \ at Madras. \n English cricket team in Australia in 1986–87, the 5-Test series\ \ is won by England 2–1.\n Craig McDermott named as Wisden Cricketer of the Year.\n\ \n1985 \n Australian cricket team in England in 1985, the 6-Test series is won\ \ by England 3–1.\n New Zealand cricket team in Australia in 1985–86, the 3-Test\ \ series is won by New Zealand 2–1.\n Indian cricket team in Australia in 1985–86,\ \ the 3-Test series is drawn 0–0.\n\n1984 \n Australian cricket team in the West\ \ Indies in 1983–84, the 5-Test series is won by West Indies 3–0.\n West Indian\ \ cricket team in Australia in 1984–85, the 5-Test series is won by Australia\ \ 3–1.\n\n1983\n Australian cricket team in Sri Lanka in 1982–83, the 1-Test series\ \ is won by Australia 1–0.\n Tom Hogan 5/66 on debut vs Sri Lanka at Kandy.\n\ \ Pakistani cricket team in Australia in 1983–84, the 5-Test series is won by\ \ Australia 2–0.\n Wayne Phillips 159 on debut vs Pakistan at Perth.\n\n1982 \n\ \ Australian cricket team in New Zealand in 1981–82, the 3-Test series is drawn\ \ 1–1. \n Australian cricket team in Pakistan in 1982–83, the 3-Test series is\ \ won by Pakistan 3–0.\n English cricket team in Australia in 1982–83, the 5-Test\ \ series is won by Australia 2–1.\n Kepler Wessels 162 on debut vs England at\ \ Brisbane. \n England defeats Australia by 3 runs at Melbourne. \n Terry Alderman,\ \ Allan Border and Rod Marsh named as Wisden Cricketers of the Year.\n\n1981 \n\ \ Australian cricket team in England in 1981, the 5-Test series is won by England\ \ 3–1. \n Terry Alderman 5/62 on debut vs England at Trent Bridge.\n England defeats\ \ Australia by 18 runs after following-on. \n Mike Whitney scores a pair on debut\ \ vs England at Old Trafford. \n Dirk Wellham 103 on debut vs England at The Oval.\ \ \n West Indian cricket team in Australia in 1981–82, the 3-Test series is drawn\ \ 1–1.\n Kim Hughes named as Wisden Cricketer of the Year.\n\n1980 \n West Indies\ \ defeats Australia by 408 runs at Adelaide. \n Australian cricket team in Pakistan\ \ in 1979–80, the 3-Test series is won by Pakistan 1–0.\n Allan Border scores\ \ a century in both innings vs Pakistan at Lahore. \n Australian cricket team\ \ in England in 1980, the 1-Test series is drawn 0–0; this was the Centenary Test\ \ to mark 100 years of Test cricket in England. \n New Zealand cricket team in\ \ Australia in 1980–81, the 3-Test series is won by Australia 2–0.\n Indian cricket\ \ team in Australia in 1980–81, the 3-Test series is drawn 1–1.\n\n1970s \n1979\ \ \n Pakistani cricket team in Australia in 1978–79, the 2-Test series is drawn\ \ 1–1.\n Australian cricket team in India in 1979–80, the 6-Test series is won\ \ by India 2–0.\n West Indian cricket team in Australia in 1979–80, the 3-Test\ \ series is won by West Indies 2–0.\n English cricket team in Australia in 1979–80,\ \ the 3-Test series is won by Australia 3–0.\n\n1978 \n Australian cricket team\ \ in the West Indies in 1977–78, the 5-Test series is won by West Indies 3–1.\n\ \ English cricket team in Australia in 1978–79, the 6-Test series is won by England\ \ 5–1.\n Rodney Hogg 6/74 on debut vs England at Brisbane.\n\n1977 \n Australian\ \ cricket team in New Zealand in 1976–77, the 2-Test series is won by Australia\ \ 1–0. \n Centenary Test Australia vs England at Melbourne.\n Australian cricket\ \ team in England in 1977, the 5-Test series is won by England 3–0. \n Mick Malone\ \ 5/63 on debut vs England at The Oval.\n Indian cricket team in Australia in\ \ 1977–78, the 5-Test series is won by Australia 3–2.\n Australia defeats India\ \ by 16 runs at Brisbane.\n Australia defeats India by 2 wickets at Perth.\n Death\ \ of Jack Ryder.\n\n1976 \n Pakistani cricket team in Australia in 1976–77, the\ \ 3-Test series is drawn 1–1.\n Ian Chappell and Rick McCosker named as Wisden\ \ Cricketers of the Year.\n\n1975 \n Australian cricket team in England in 1975,\ \ the 4-Test series is won by Australia 1–0. \n West Indian cricket team in Australia\ \ in 1975–76, the 6-Test series is won by Australia 5–1.\n Greg Chappell scores\ \ centuries in both innings vs West Indies at Brisbane.\n Gary Cosier 109 on debut\ \ vs West Indies at Melbourne.\n\n1974 \n Geoff Dymock 5/58 on debut vs New Zealand\ \ at Adelaide.\n Australian cricket team in New Zealand in 1973–74, the 3-Test\ \ series is drawn 1–1. \n Greg Chappell and Ian Chappell both score centuries\ \ in both innings vs New Zealand in Wellington.\n English cricket team in Australia\ \ in 1974–75, the 6-Test series is won by Australia 4–1.\n\n1973\n Australian\ \ cricket team in the West Indies in 1972–73, the 5-Test series is won by Australia\ \ 2–0. \n New Zealand cricket team in Australia in 1973–74, the 3-Test series\ \ is won by Australia 2–0.\n Greg Chappell, Dennis Lillee, Bob Massie and Keith\ \ Stackpole named as Wisden Cricketers of the Year.\n\n1972\n Australian cricket\ \ team in England in 1972, the 5-Test series is won by England 2–1.\n Bob Massie\ \ 8/84 and 8/53 on debut vs England at Lord's.\n Pakistani cricket team in Australia\ \ in 1972–73, the 3-Test series is won by Australia 3–0.\n\n1971\n Dennis Lillee\ \ 5/84 on debut vs England at Adelaide.\n After South Africa is banned from international\ \ cricket and its 1971–72 tour of Australia is cancelled, a Rest of the World\ \ team tours Australia but these are not recognised as Test matches.\n\n1970\n\ \ Australian cricket team in South Africa in 1969–70, the 4-Test series is won\ \ by South Africa 4–0.\n English cricket team in Australia in 1970–71, the 6-Test\ \ series is won by England 1–0.\n Australia plays its first Test at the WACA Ground\ \ in Perth.\n Greg Chappell 108 on debut vs England at Perth.\n\n1960s \n1969\ \ \n Doug Walters scores a double-century and a century vs West Indies at Sydney,\ \ the first Test batsman to do so. \n Australian cricket team in India in 1969–70,\ \ the 5-Test series is won by Australia 3–1. \n Death of Vic Richardson.\n\n1968\ \ \n Australian cricket team in England in 1968, the 5-Test series is drawn 1–1.\ \ \n West Indian cricket team in Australia in 1968–69, the 5-Test series is won\ \ by Australia 3–1.\n\n1967 \n Indian cricket team in Australia in 1967–68, the\ \ 4-Test series is won by Australia 4–0.\n\n1966 \n Bob Cowper 307 vs England\ \ at Melbourne. \n Australian cricket team in South Africa in 1966-67, the 5-Test\ \ series is won by South Africa 3–1.\n\n1965 \n Australian cricket team in the\ \ West Indies in 1964–65, the 5-Test series is won by West Indies 2–1. \n English\ \ cricket team in Australia in 1965–66, the 5-Test series is drawn 1–1. \n Doug\ \ Walters 155 on debut vs England at Brisbane. \n Peter Burge, Graham McKenzie\ \ and Bob Simpson named as Wisden Cricketers of the Year. \n Death of Bill Woodfull.\n\ \n1964 \n Australian cricket team in England in 1964, the 5-Test series is won\ \ by Australia 1–0. \n Bob Simpson 311 vs England at Old Trafford. \n Australian\ \ cricket team in India in 1964–65, the 3-Test series is drawn 1–1.\n Australian\ \ cricket team in Pakistan in 1964–65, the 1-Test series is drawn 1–0.\n Bob Simpson\ \ scores a century in both innings vs Pakistan at Karachi. \n Pakistani cricket\ \ team in Australia in 1964–65, the 1-Test series is drawn 0–0.\n\n1963 \n South\ \ African cricket team in Australia in 1963–64, the 5-Test series is drawn 1–1.\n\ \n1962 \n English cricket team in Australia in 1962–63, the 5-Test series is drawn\ \ 1–1. \n Bill Alley, Richie Benaud, Alan Davidson, Bill Lawry and Norm O'Neill\ \ named as Wisden Cricketers of the Year.\n\n1961 \n Australia defeats West Indies\ \ by 2 wickets at Melbourne.\n Australian cricket team in England in 1961, the\ \ 5-Test series is won by Australia 2–1.\n Graham McKenzie 5/37 on debut vs England\ \ at Lord's.\n\n1960 \n West Indian cricket team in Australia in 1960–61, the\ \ 5-Test series is won by Australia 2–1. \n Australia ties Test vs West Indies\ \ at Brisbane.\n\n1950s \n1959 \n Australian cricket team in India in 1959–60,\ \ the 5-Test series is won by Australia 2–1. \n Australian cricket team in Pakistan\ \ in 1959–60, the 3-Test series is won by Australia 2–0. \n Death of Herbie Collins.\n\ \n1958 \n Lindsay Kline takes a hat-trick vs South Africa at Cape Town. \n English\ \ cricket team in Australia in 1958–59, the 5-Test series is won by Australia\ \ 4–0.\n\n1957 \n Australian cricket team in South Africa in 1957–58, the 5-Test\ \ series is won by Australia 3–0. \n Ian Meckiff 5/125 on debut vs South Africa\ \ at Johannesburg.\n Gil Langley named as Wisden Cricketer of the Year.\n\n1956\ \ \n Australian cricket team in England in 1956, the 5-Test series is won by England\ \ 2–1. \n Australian cricket team in India in 1956–57, the 3-Test series is won\ \ by Australia 2–0. \n Australian cricket team in Pakistan in 1956–57, the 1-Test\ \ series is won by Pakistan 1–0.\n\n1955 \n Australian cricket team in the West\ \ Indies in 1954–55, the 5-Test series is won by Australia 3–0. \n Australia 758-8d\ \ vs West Indies at Kingston.\n\n1954 \n English cricket team in Australia in\ \ 1954–55, the 5-Test series is won by England 3–1. \n Neil Harvey and Keith Miller\ \ named as Wisden Cricketers of the Year.\n Death of Warren Bardsley.\n\n1953\ \ \n Australian cricket team in England in 1953, the 5-Test series is won by England\ \ 1–0.\n\n1952 \n South African cricket team in Australia in 1952–53, the 5-Test\ \ series is drawn 2–2.\n\n1951 \n Jim Burke 101* on debut vs England at Adelaide.\ \ \n West Indian cricket team in Australia in 1951–52, the 5-Test series is won\ \ by Australia 4–1. \n Australia defeats West Indies by 1 wicket at Melbourne.\n\ \n1950\n Jack Moroney scores a century in both innings vs South Africa at Old\ \ Wanderers. \n Australia defeats South Africa by an innings and 259 runs at Port\ \ Elizabeth. \n English cricket team in Australia in 1950–51, the 5-Test series\ \ is won by Australia 4–1.\n\n1940s \n1949 \n Australian cricket team in South\ \ Africa in 1949–50, the 5-Test series is won by Australia 4–0. \n Lindsay Hassett,\ \ Bill Johnston, Ray Lindwall, Arthur Morris and Don Tallon named as Wisden Cricketers\ \ of the Year.\n\n1948 \n Don Bradman scores a century in both innings vs India\ \ at Melbourne. \n Australian cricket team in England in 1948, the 5-Test series\ \ is won by Australia 4–0.\n Australia defeats England by 409 runs at Lord's.\n\ \n1947 \n Arthur Morris scores a century in both innings vs England at Adelaide.\ \ \n Indian cricket team in Australia in 1947–48, the 5-Test series is won by\ \ Australia 4–0. \n Australia defeats India by an innings and 226 runs at Brisbane.\ \ \n Death of Warwick Armstrong.\n\n1946 \n English cricket team in Australia\ \ in 1946–47, the 5-Test series is won by Australia 3–0. \n Australia defeats\ \ England by an innings and 332 runs at Brisbane. \n Sid Barnes & Don Bradman\ \ 405 for 5th wicket vs England at Sydney. \n Death of Joe Darling.\n\n1945\n\ \ No Test cricket played due to World War II.\n Death of Clem Hill.\n\n1944\n\ \ No Test cricket played due to World War II.\n\n1943\n No Test cricket played\ \ due to World War II.\n\n1942\n No Test cricket played due to World War II.\n\ \n1941\n No Test cricket played due to World War II.\n\n1940 \n No Test cricket\ \ played due to World War II.\n Death of Monty Noble.\n\n1930s \n1939 \n Bill\ \ Brown named as Wisden Cricketer of the Year.\n\n1938 \n Australian cricket team\ \ in England in 1938, the 5-Test series is drawn 1–1. \n England defeats Australia\ \ by an innings and 579 runs at The Oval. \n Death of Hugh Massie. \n Death of\ \ Hugh Trumble.\n\n1937\n\n1936 \n English cricket team in Australia in 1936–37,\ \ the 5-Test series is won by Australia 3–2. \n Frank Ward 5/66 on debut vs England\ \ at Brisbane.\n\n1935 \n Stan McCabe, Bill O'Reilly and Bill Ponsford named as\ \ Wisden Cricketers of the Year.\n\n1934 \n Australian cricket team in England\ \ in 1934, the 5-Test series is won by Australia 2–1. \n Don Bradman 304 vs England\ \ at Headingley. \n Bill Ponsford & Don Bradman 388 for 4th wicket vs England\ \ at Headingley.\n Australia 701 vs England at The Oval. \n Australia defeats\ \ England by 562 runs at The Oval. \n Bill Ponsford & Don Bradman 451 for 2nd\ \ wicket vs England at The Oval.\n\n1933\n England defeats Australia by 338 runs\ \ at Adelaide in what comes to be known as the defining moment of the 1932–33\ \ Bodyline series.\n\n1932 \n English cricket team in Australia in 1932–33, the\ \ 5-Test series is won by England 4–1. \n Death of Jack Blackham.\n\n1931 \n Australia\ \ plays its last Test at Exhibition Ground in Brisbane. \n South African cricket\ \ team in Australia in 1931–32, the 5-Test series is won by Australia 5–0. \n\ \ Australia plays its first Test at the Gabba in Brisbane. \n Don Bradman and\ \ Clarrie Grimmett named as Wisden Cricketers of the Year.\n\n1930 \n Australian\ \ cricket team in England in 1930, the 5-Test series is won by Australia 2–1.\ \ \n Australia 729-6d vs England at Lord's. \n Don Bradman 334 vs England at Headingley.\ \ \n Australia 695 vs England at The Oval. \n West Indian cricket team in Australia\ \ in 1930–31, the 5-Test series is won by Australia 4–1.\n\n1920s \n1929 \n Archie\ \ Jackson 129 on debut vs England at Adelaide. \n Tim Wall 5/66 on debut vs England\ \ at Melbourne. \n Death of Syd Gregory.\n\n1928 \n English cricket team in Australia\ \ in 1928–29, the 5-Test series is won by England 4–1. \n Australia plays its\ \ first Test at Exhibition Ground in Brisbane. \n England defeat Australia by\ \ 675 runs at Brisbane.\n\n1927 \n Bert Oldfield and Bill Woodfull named as Wisden\ \ Cricketers of the Year.\n Death of George Giffen.\n\n1926 \n Australian cricket\ \ team in England in 1926, the 5-Test series is won by England 1–0.\n\n1925\n\ \ Australia defeats England by 11 runs at Adelaide.\n Clarrie Grimmett 5/45 and\ \ 6/37 on debut vs England at Sydney.\n\n1924 \n English cricket team in Australia\ \ in 1924–25, the 5-Test series is won by Australia 4–1. \n Bill Ponsford 110\ \ on debut vs England at Sydney.\n\n1923 \n No Test cricket played by Australia\ \ during the year.\n\n1922 \n Jack Gregory, Charlie Macartney and Ted McDonald\ \ named as Wisden Cricketers of the Year.\n\n1921 \n Arthur Mailey 9/121 vs England\ \ at Melbourne. \n Australian cricket team in England in 1921, the 5-Test series\ \ is won by Australia 3–0.\n\n1920\n English cricket team in Australia in 1920–21,\ \ the 5-Test series is won by Australia 5–0. \n Herbie Collins 104 on debut vs\ \ England at Sydney.\n\n1910s \n1919 \n Death of Dave Gregory.\n\n1918\n No Test\ \ cricket played due to World War I.\n\n1917\n No Test cricket played due to World\ \ War I.\n Death of Harry Trott.\n\n1916 \n No Test cricket played due to World\ \ War I.\n Death of Tom Horan.\n\n1915\n No Test cricket played due to World War\ \ I.\n\n1914\n No Test cricket played due to World War I.\n\n1913\n No Test cricket\ \ played by Australia during the year.\n\n1912 \n England defeats Australia by\ \ an innings and 225 runs at Melbourne. \n Australian cricket team in England\ \ in 1912, this is the first and only triangular series. The 9-Test series is\ \ won by England 4–0, with Australia 2–1, and South Africa 0–5. \n Jimmy Matthews\ \ takes a hat-trick in each innings vs South Africa at Manchester.\n\n1911 \n\ \ Victor Trumper 214* vs South Africa at Adelaide. \n Ranji Hordern 5/66 on debut\ \ vs South Africa at Melbourne. \n Australia defeats South Africa by 530 runs\ \ at Melbourne. \n English cricket team in Australia in 1911–12, the 5-Test series\ \ is won by England 4–1. \n Death of Billy Murdoch.\n\n1910 \n South African cricket\ \ team in Australia in 1910–11, the 5-Test series is won by Australia 4–1. \n\ \ Warren Bardsley named as Wisden Cricketer of the Year.\n Death of Tup Scott.\n\ \n1900s \n1909 \n Australian cricket team in England in 1909, the 5-Test series\ \ is won by Australia 2–1. \n Frank Laver 8/31 vs England at Old Trafford. \n\ \ Warren Bardsley scores centuries in both innings vs England at The Oval. \n\ \ Alan Marshal named as Wisden Cricketer of the Year.\n\n1908 \n England defeats\ \ Australia by 1 wicket at Melbourne. \n Roger Hartigan 116 on debut vs England\ \ at Adelaide.\n Jack O'Connor 5/65 on debut vs England at Adelaide.\n\n1907\n\ \ English cricket team in Australia in 1907–08, the 5-Test series is won by Australia\ \ 4–1.\n Australia defeats England by 2 wickets at Sydney.\n\n1906\n No Test cricket\ \ is played by Australia during this year.\n\n1905\n Australian cricket team in\ \ England in 1905, the 5-Test series is won by England 2–0.\n\n1904\n Hugh Trumble\ \ takes a hat-trick vs England at Melbourne.\n\n1903\n English cricket team in\ \ Australia in 1903–04, the 5-Test series is won by England 3–2. \n Warwick Armstrong,\ \ Jim Kelly and Victor Trumper named as Wisden Cricketers of the Year.\n\n1902\ \ \n Reggie Duff 104 on debut vs England at Melbourne. \n Hugh Trumble takes a\ \ hat-trick vs England at Melbourne. \n Jack Saunders 5/43 on debut vs England\ \ at Sydney. \n Australian cricket team in England in 1902, the 5-Test series\ \ is won by Australia 2–1. \n Australia 36 vs England at Edgbaston.\n Australia\ \ defeats England by 3 runs at Old Trafford. \n England defeats Australia by 1\ \ wicket at The Oval. \n Australian cricket team in South Africa in 1902–03, the\ \ 3-Test series is won by Australia 2–0.\n\n1901 \n English cricket team in Australia\ \ in 1901–02, the 5-Test series is won by Australia 4–1.\n\n1900 \n Joe Darling,\ \ Clem Hill and Monty Noble named as Wisden Cricketers of the Year.\n\n19th century\n\ \n1890s \n1899 \n Australian cricket team in England in 1899, the 5-Test series\ \ is won by Australia 1–0.\n\n1898 \n Monty Noble 6–49 on debut vs England at\ \ Adelaide.\n\n1897 \n English cricket team in Australia in 1897–98, the 5-Test\ \ series is won by Australia 4–1. \n Syd Gregory and Hugh Trumble named as Wisden\ \ Cricketers of the Year.\n\n1896 \n Australian cricket team in England in 1896,\ \ the 3-Test series is won by England 2–1. \n Australia 53 vs England at Lord's.\ \ \n Australia 44 vs England at The Oval. \n Death of Percy McDonnell.\n\n1895\ \ \n Albert Trott 8–43 on debut vs England at Adelaide.\n\n1894 \n English cricket\ \ team in Australia in 1894–95, the 5-Test series is won by England 3–2. \n England\ \ defeats Australia by 10 runs after following-on at Sydney. \n Arthur Coningham\ \ dismisses English batsman Archie MacLaren with his first ball in Test cricket\ \ at Sydney. \n George Giffen and Harry Trott named as Wisden Cricketers of the\ \ Year.\n\n1893\n Australian cricket team in England in 1893, the 3-Test series\ \ is won by England 1–0. \n Harry Graham 107 on debut vs England at Lord's.\n\n\ 1892\n Bob McLeod 5–53 on debut vs England at Melbourne.\n England defeats Australia\ \ by an innings and 230 runs at Adelaide.\n\n1891\n English cricket team in Australia\ \ in 1891–92, the 3-Test series is won by Australia 2–1.\n\n1890\n Australian\ \ cricket team in England in 1890, the 3-Test series is won by England 2–0.\n\n\ 1880s \n1889\n No Test cricket is played by Australia during this year.\n\n1888\ \ \n Australia 42 vs England at Sydney. \n Australian cricket team in England\ \ in 1888, the 3-Test series is won by England 2–1.\n\n1887 \n English cricket\ \ teams in Australia in 1887–88, the 1-Test series is won by England 1–0. \n Charlie\ \ Turner 5–76 on debut vs England at Sydney. \n J.J. Ferris 6–15 on debut vs England\ \ at Sydney.\n\n1886 \n Australian cricket team in England in 1886, the 3-Test\ \ series is won by England 3–0. \n England defeats Australia by an innings and\ \ 217 runs at The Oval.\n English cricket team in Australia in 1886–87, the 2-Test\ \ series is won by England 2–0.\n\n1885 \n Australia defeats England by 6 runs\ \ at Sydney.\n\n1884 \n Australian cricket team in England in 1884, the 3-Test\ \ series is won by England 1–0. \n Billy Murdoch 211 vs England at The Oval. \n\ \ English cricket team in Australia in 1884–85, the 5-Test series is won by England\ \ 3–2. \n Australia plays its first Test at Adelaide.\n\n1883 \n Tom Horan dismisses\ \ English batsman Walter Read with his first ball in Test cricket at Sydney.\n\ \n1882\n Australian cricket team in England in 1882, the 1-Test series is won\ \ by Australia 1–0. \n Australia 63 vs England at The Oval. \n Australia defeats\ \ England by 7 runs at The Oval. \n English cricket team in Australia in 1882–83\ \ in what becomes known as the first Ashes Series, the 4-Test series is drawn\ \ 2–2. \n Australia plays its first Test at Sydney.\n\n1881\n English cricket\ \ team in Australia in 1881–82, the 4-Test series is won by Australia 2–0. \n\ \ William Cooper 6–120 on debut vs England at Melbourne.\n\n1880\n Australian\ \ cricket team in England in 1880, the 1-Test series is won by England 1–0.\n\n\ 1870s \n1879\n Fred Spofforth takes a hat-trick vs England at Melbourne.\n\n1878\n\ \ English cricket team in Australia in 1878–79, the 1-Test series is won by Australia\ \ 1–0.\n\n1877\n English cricket team in Australia in 1876–77, the 2-Test series\ \ is drawn 1–1. \n First Test between Australia and England at Melbourne. \n Charles\ \ Bannerman 165* on debut vs England at Melbourne. \n Billy Midwinter 5–78 on\ \ debut vs England at Melbourne. \n Tom Kendall 7–55 on debut vs England at Melbourne.\n\ \nSee also \n List of Test cricket records\n List of Australia Test cricket records\n\ \ List of cricketers who have scored centuries in both innings of a Test match\n\ \ Pairs in Test and first-class cricket\n List of Australia cricketers who have\ \ taken five-wicket hauls on Test debut\n List of bowlers who have taken a wicket\ \ with their first ball in international cricket\n List of Test cricket hat-tricks\n\ \ List of Test cricket grounds\n\nNotes\n\nReferences \n\nCricket in Australia\ \ by year\nYears in Australia Test cricket\nYears in Australia Test cricket" - source_sentence: What is the syntax for the shorthand of the conditional operator in PHP 5.3? sentences: - "This article on military tanks deals with the history and development of tanks\ \ of the British Army from their first use in the First World War, the interwar\ \ period, during the Second World War, the Cold War and modern era.\n\nOverview\n\ \nTanks first appeared on the battlefield as a solution to trench warfare. They\ \ were large, heavy, slow moving vehicles capable of driving right over the top\ \ of enemy trenches; thereby eliminating the need to send soldiers \"over the\ \ top\" only to be blasted to pieces by enemies. The British Army was the first\ \ to use them, who built them in secret to begin with. To keep the enemy from\ \ finding out about this new solution, the public were informed that the vehicles\ \ were large water carriers, or tanks, and the name stuck.\n\nThe First World\ \ War established the validity of the tank concept. After the war, many nations\ \ needed to have tanks, but only a few had the industrial resources to design\ \ and build them. During and after the war, Britain and France were the intellectual\ \ leaders in tank design, with other countries generally following and adopting\ \ their designs. This early lead would be gradually lost during the course of\ \ the 1930s to the Soviet Union who with Germany began to design and build their\ \ own tanks.\n\nWhile the First World War saw the first use of the tank as a weapon\ \ of war, it was during the Second World War that the tank soon became a dominant\ \ force on the battlefield. The British, American, German and Soviet armies all\ \ had different approaches to tanks and tank warfare, each with their fair share\ \ of successes and failures. The infantry tank was a concept developed by the\ \ British and French in the years leading up to the Second World War. Infantry\ \ tanks were tanks designed to support the infantry in the attack. To achieve\ \ this they were generally heavily armoured compared to the cruiser tanks, to\ \ allow them to operate in close concert with infantry even under heavy gun fire.\ \ The extra armouring came at the expense of speed, which was not an issue when\ \ supporting relatively slow moving infantry.\n\nOnce the infantry tank-supported\ \ attack had broken through heavily defended areas in the enemy lines, other tanks\ \ such as cruisers, or light tanks, were expected to exploit their higher speed\ \ and longer range to operate far behind the front in order to cut lines of supply\ \ and communications.\n\nBackground \n\nNo one individual was responsible for\ \ the development of the tank. Rather, a number of gradual technological developments\ \ brought the development of the tank as we know it closer until its eventual\ \ form was unveiled out of necessity by the British Army. The British Army designs\ \ were forced by the trench warfare in which neither side could achieve more than\ \ small incremental gains without heavy loss of soldiers lives, but tanks changed\ \ that. They were made to cross the trenches and quickly break into the enemy\ \ rear, while other tanks supported the main attack. The development between the\ \ infantry tank and the cruiser tanks had its origins in the First World War division\ \ between the first British heavy tanks which supported the infantry and the faster\ \ Whippet Medium Mark A and its successors the Medium Mark B and Medium Mark C.\ \ During the interbellum British tank experiments generally followed these basic\ \ classifications, which were made part of the overall doctrine with the work\ \ of Percy Hobart and Captain B. H. Liddell Hart. The next development of the\ \ more heavily armoured and upgunned tanks was brought about by the tank on tank\ \ battles in the Second World War German Blitzkrieg. This continued throughout\ \ the war, and led to heavy tanks which became the basis of the current Main Battle\ \ Tanks seen throughout the armies today.\n\nBritish development\n\nThe Landship\ \ Committee commissioned Lieutenant Walter Gordon Wilson of the Royal Naval Air\ \ Service and William Tritton of William Foster & Co. of Lincoln, to produce a\ \ small landship. Constructed in great secrecy, the machine was given the code-name\ \ tank by Swinton.\n\nThe \"Number 1 Lincoln Machine\", nicknamed \"Little Willie\"\ \ weighed 14 tons and could carry a crew of three, at speeds of less than 2 mph\ \ over rough ground. Trench-crossing ability was deemed insufficient however,\ \ leading to the development of a rhomboidal design, which became known as \"\ HMLS Centipede\" and later \"Mother\", the first of the British heavy tanks. After\ \ completion on 29 January 1916 very successful trials were made, and an order\ \ was placed by the War Office for 100 units to be used on the Western Front in\ \ France, on 12 February 1916, and a second order for 50 additional units was\ \ placed in April 1916.\n\nThe great secrecy surrounding tank development, coupled\ \ with the skepticism of infantry commanders, often meant that infantry at first\ \ had little training to cooperate with tanks.\n\nThe first use of the British\ \ tanks on the battlefield was the use of 49 Mark I tanks during the Battle of\ \ the Somme on 15 September 1916, with mixed, but still impressive results. Many\ \ broke down but nearly a third succeeded in breaking through. Finally, in a preview\ \ of later developments, the British developed the lighter Whippet. This tank\ \ was specifically designed to exploit breaches in the enemy front. The Whippet\ \ was faster than most other tanks, although it carried only machinegun armament.\ \ Postwar tank designs would reflect this trend towards greater tactical mobility.\n\ \nWhile the British took the lead in tank development, the French were not far\ \ behind and fielded their first tanks in 1917. The Germans, on the other hand,\ \ were slower to develop tanks, concentrating on anti-tank weapons.\n\nFollowing\ \ the Great War, many experiments involving armoured vehicles were conducted in\ \ the United Kingdom. Particularly many advances were made in the areas of suspensions,\ \ tracks, communications, and the organization of these vehicles on the battlefield.\ \ Britain continued its technical dominance of tank design from 1915 through to\ \ at least the early 1930s. British designs, particularly those from Vickers-Armstrong,\ \ formed the basis for many of the most common tanks of the 1930s and early WWII.\ \ The Vickers 6-Ton, which was arguably the most influential design of the late\ \ 1920s, was not adopted by the British Army.\n\nThe Carden Loyd tankettes (two-man\ \ vehicles with machine guns) influenced the tankette concept through export and\ \ similar designs such as the Soviet T-27, Italian CV-33, German Panzer I and\ \ other copies. Another notable design was the Vickers Medium Mk II, a pivotal\ \ design which combined some of the best traits of WWI tanks into a much faster\ \ tank. Eventually, by the 1930s, British experiments and policy and their strategic\ \ situation led to a tank development programme with three main types of tank:\ \ light, cruiser, and infantry. The infantry tanks were intended to support dismounted\ \ infantry. The maximum speed requirement matched the walking pace of a rifleman,\ \ and the armour on these tanks was expected to be heavy enough to provide immunity\ \ to towed anti-tank guns. Armament had to be sufficient to suppress or destroy\ \ enemy machine gun positions and bunkers as well as enemy tanks. Cruiser tanks\ \ were to carry out the traditional cavalry roles of pursuit and exploitation,\ \ working relatively independently of the infantry. This led to cruiser tank designs\ \ requiring greater speed. To achieve this they were unable to carry as much armour\ \ as the infantry tanks, and tended to carry anti-tank armament. In practice both\ \ cruiser and infantry tanks entered the Second World War with the same gun. The\ \ light tanks were tasked with reconnaissance and constabulary-type colonial roles,\ \ with cost the major design factor.\n\nAn outstanding achievement of the British\ \ Army had been the creation of the Experimental Mechanised Force in the late\ \ 1920s. This was a small Brigade-sized unit developed to field-test the use of\ \ tanks and other vehicles. The unit pioneered the extensive use of radio to control\ \ widely separated small units. The unit was short-lived, however. However even\ \ though the British in the 1930s continued the design and development of tanks\ \ themselves, the Germans began to further develop tank strategy and incorporate\ \ them into their tactical employment more than the British. This doctrine of\ \ deployment led armies to equip their tanks with radios, to provide unmatched\ \ command and control, Germany along with the USSR also led the way with welding,\ \ although the US followed closely. Riveting and bolting remained in use in British\ \ designs.\n\nInfantry tanks were a continuation of the Great War tanks, heavily\ \ armoured and designed to accompany an advancing infantry unit and hence slow.\ \ Once the infantry tanks had punched through an enemy line, lighter and faster\ \ cruiser tanks would be let loose to disrupt supply lines.\n\nThe main problem\ \ with this strategy however, was that the British infantry tanks were just too\ \ slow and the cruisers of the time were vulnerable, and often mechanically unreliable.\ \ Come 1940, most of the British armour had been abandoned in France when the\ \ British Expeditionary Force was evacuated from Dunkirk, but this encouraged\ \ new designs. By the end of the war the increase in speed of the infantry tanks,\ \ and the increased armour of the cruisers, meant that there was little difference\ \ between the two classes of British tank. However, the British had to quickly\ \ build more reliable and more heavily armoured designs from the experienced gained\ \ in the early battles or acquire US designs to meet the needs.\n\nAt the start\ \ of the war most British tanks were equipped with the Ordnance QF 2-pounder (40mm)\ \ gun which was able to penetrate contemporary German armour. The trend towards\ \ bigger guns and thicker armour which resulted in heavier tanks, made itself\ \ felt as the Second World War progressed, and some tanks began to show weakness'\ \ in design.\n\nIn 1939, most tanks had maximum armour of 30 mm or less, with\ \ guns no heavier than 37–47 mm. Medium tanks of 1939 weighed around 20 tons.\ \ Also if the tank's gun was to be used to engage both unarmoured and armoured\ \ targets, then it needed to be as large and powerful as possible, making one\ \ large gun with an all-round field of fire vital. Also, mounting the gun in a\ \ turret ensured that the tank could fire from behind some cover. Hull-mounted\ \ guns required that most of the vehicle be exposed to enemy fire. Multiple-turreted\ \ or multi-gun designs such as the British A9 Cruiser Mk I slowly became less\ \ common.\n\nBritish tanks armament and use in the battles also had to change\ \ as German Blitzkrieg tactics and doctrine shifted towards faster medium and\ \ heavy tanks fighting large multi-tank battles, with the role of the infantry\ \ tank in assaults taken by simpler self-propelled artillery. In British practice,\ \ the main armament of the infantry tank went in three phases. The pre-Dunkirk\ \ British Army Matilda I infantry tank had only a single Vickers machine gun,\ \ a compromise forced by the low cost to which they had been built. The Matilda\ \ II had a capable anti-tank gun with the 40mm 2 pounder but these were only issued\ \ with solid-shot (i.e. non-explosive) for anti-tank use and was of little use\ \ for artillery close-support of infantry. The follow-up gun to the 2pdr was already\ \ in development but the need to rapidly replace the losses in France delayed\ \ its production. Eventually QF 6-pounder (57mm) guns were put into the British\ \ tanks, and these could deal with pretty much anything but head on attacks on\ \ the German Tiger and Panther tanks - thanks to their special armour piercing\ \ rounds. As the war progressed many British tanks were equipped with a gun firing\ \ the same 75mm ammunition as American Sherman tanks. These had better performance\ \ using high explosive or smoke ammunition, but could not match the 6-pounder\ \ against armour. Then the 17-pounder (76.2 mm) was developed, becoming the best\ \ British gun of the war - able to deal with almost any armour put up against\ \ it.\n\nOperational use\n\nFirst World War\n\nThe British Mark I was the world's\ \ first combat tank, entering service in August 1916, and first used in action\ \ on the morning of 15 September 1916. It was developed to be able to cross trenches,\ \ resist small-arms fire, travel over difficult terrain, carry supplies, and be\ \ able to capture fortified enemy positions. The Mark I was a development of Little\ \ Willie, the experimental tank built for the Landships Committee by Lieutenant\ \ Walter Wilson and William Tritton in the summer of 1915. A small number of Mark\ \ I tanks took part in the battle of the Somme during the Battle of Flers-Courcelette\ \ in September 1916. They were used to cut through barbed wire to clear the way\ \ for infantry, and were even driven through houses to destroy machine gunner's\ \ emplacements. Although many broke down or became stuck, almost a third that\ \ attacked made it across no mans land, and their effect on the enemy was noted,\ \ leading to a request by the British C-in-C Douglas Haig for a thousand more.\ \ The Mark II and Mark III incorporated minor improvements and changes over the\ \ Mark I with the Mark II used in the Battle of Arras in April 1917 because of\ \ delays in the production of the Mark I tank. The Mk IV incorporated thicker\ \ armour to resist German armour-piercing bullets. The Mark V had more power (150 bhp)\ \ and could be steered by one man, thanks to the epicyclic gear system created\ \ by Walter Wilson. It was first used in the Battle of Hamel on 4 July 1918 when\ \ 60 tanks contributed to a successful assault by Australian units on the German\ \ lines. During the Battle of Amiens in August 1918, several hundred of the Mark\ \ V and the lengthened Mk V* tanks, together with the new Whippet tanks, penetrated\ \ the German lines in a foretaste of modern armoured warfare.\n\nThe Mark VI did\ \ not progress past the stage of a wooden mock-up; the project was cancelled in\ \ December 1917 in order that a tank co-developed with the US (the Mark VIII)\ \ could go forward. Because of technical troubles the Mark VII, almost identical\ \ to Mks I to V, had only three produced out of an order for 74 when war ended.\ \ The Mark VIII was a cooperative design between the Allies and was also known\ \ as \"Liberty,\" \"International,\" or Anglo-American tank. It did not see combat\ \ in the war but was used and upgraded until the 1930s when given to Canada for\ \ training. The Mark IX was designed in 1917 as the world's first specialised\ \ Armoured Personnel Carrier (APC). Thirty-four were completed, but none saw service.\ \ One was experimentally equipped as an armoured ambulance, and another rebuilt\ \ as an amphibious tank by the staff of the test base at Dollis Hill. There is\ \ photographic evidence that some Mk IX were used post-WWI as Infantry Carriers,\ \ but no record of their peacetime service is known to exist. The Mark X, a further\ \ improvement on the Mk V, was planned but never built.\nThe Medium Mark A Whippet\ \ was a British tank of the First World War. It was intended to complement the\ \ slower British heavy tanks by using its relative mobility and speed in exploiting\ \ any break in the enemy lines.\nThe Whippet tanks arrived late in the First World\ \ War, and went into action in March 1918. Alongside Mark V and V* tanks, they\ \ took part in the Amiens offensive (8 August 1918) where they broke through into\ \ the German rear areas causing the loss of the artillery in an entire front sector.\n\ \nA first offensive using 49 Mark I tanks took place on 15 September 1916, during\ \ the Battle of the Somme, under Field Marshal Sir Douglas Haig, with limited\ \ success. Not until 20 November 1917, at Cambrai, did the British Tank Corps\ \ get the conditions it needed for success. Around 400 tanks penetrated almost\ \ six miles on a 7-mile front. This was their first large-scale deployment in\ \ combat. Unfortunately, success was not complete because the infantry failed\ \ to exploit and secure the tanks' gains. The British scored another victory the\ \ following year, on 8 August 1918, with 600 tanks in the Amiens salient. General\ \ Erich Ludendorff referred to that date as the \"Black Day\" of the German Army.\n\ \nThe German response to the Cambrai assault was to develop its own armoured program.\ \ Soon the massive A7V appeared. The A7V was a clumsy monster, weighing 30 tons\ \ with a crew of eighteen. By the end of the war, only twenty had been built.\ \ Although other tanks were on the drawing board, material shortages limited the\ \ German tank corps to these A7Vs and some captured Mark IVs. The A7V would be\ \ involved in the first tank vs. tank battle of the war on 24 April 1918 at Villers-Bretonneux—a\ \ battle in which there was no clear winner.\n\nParallel to the British development,\ \ France designed its own tanks. The first two, the medium Schneider CA and heavy\ \ Saint-Chamond, were not well-conceived, though produced in large numbers and\ \ showing technical innovations, as for the latter type a petro-electrical transmission\ \ and a long 75 mm gun. The later Renault FT was the first operational tank with\ \ a \"modern\" configuration: a revolving turret on top and an engine compartment\ \ in the back; it would be the most numerous tank of the war. A last development\ \ was the superheavy Char 2C, the largest tank ever built, be it some years after\ \ the armistice.\n\nNumerous mechanical failures and the inability of the British\ \ and French to mount any sustained drives in the early tank actions cast doubt\ \ on their usefulness—and by 1918, tanks were extremely vulnerable unless accompanied\ \ by infantry and ground-attack aircraft, both of which worked to locate and suppress\ \ anti-tank defenses.\n\nThe first American-produced heavy tank was the 43.5-ton\ \ Mark VIII, a US-British development of the successful British heavy tank design.\ \ Armed with two 6-pounder cannon and five .30-caliber machine guns, it was operated\ \ by an 11-man crew, had a maximum speed of 6.5 miles per hour, and a range of\ \ 50 miles. Production difficulties meant that none was produced before the War\ \ ended.\n\nBetween the wars \n\nAfter the Great War, General Erich Ludendorff\ \ of OHL, the German High Command, praised the Allied tanks as being a principal\ \ factor in Germany's defeat. The Germans had been too late in recognizing their\ \ value to consider them in their own plans.\n\nAt a time when most soldiers regarded\ \ the tank as a specialised infantry-support weapon for crossing trenches, officers\ \ in the Royal Tank Corps had gone on to envision much broader roles for mechanized\ \ organizations. In May 1918, Colonel J.F.C. Fuller, the acknowledged father of\ \ tank doctrine, had used the example of German infiltration tactics to refine\ \ what he called \"Plan 1919\". This was an elaborate concept for a mass armoured\ \ offensive in 1919.\n\nAn outstanding achievement of the British Army was the\ \ creation of the Experimental Mechanised Force (EMF) in the late 1920s. This\ \ was a small brigade-sized unit developed to field-test the use of tanks and\ \ other vehicles. The EMF formed by the British demonstrated a mobile force with\ \ its own motorised infantry and self-propelled guns. The unit pioneered the extensive\ \ use of radio to control widely separated small units. The unit was short-lived.\n\ \nIn 1920 the Infantry had plans to acquire a Light Infantry Tank. Colonel Johnson\ \ of the Tank Design Department derived such a type from the Medium Mark D. In\ \ competition Vickers built the Vickers Light Tank. but the project was abandoned\ \ in 1922 in favour of a more conventional design: the Vickers Light Tank Mark\ \ I, that would be renamed to Vickers Medium Tank Mark I in 1924. The first prototypes\ \ were sent to Bovington for trials in 1923. The Medium Mark I replaced some of\ \ the Mark V heavy tanks and served in the Royal Tank Regiment, being the first\ \ type of in total 200 tanks to be retired in 1938. The Medium Mark I was the\ \ first tank to see \"mass\" production since the last of the ten Char 2C's had\ \ been finished in 1921. As of the next tank, the Renault NC27, only about thirty\ \ were built, the British Mediums represented most of the world tank production\ \ during the Twenties.\n\nThe Medium Mark I successor, the Vickers Medium Mk II\ \ combined some of the best traits of Great War tanks into a much faster tank.\ \ It was derived from the Vickers Medium Mark I and was developed to replace the\ \ last of the Medium Mark Cs still in use. It had a rotating turret on top like\ \ the FT but mounted a dual-purpose 3-pounder gun (that could fire both high-explosive\ \ and anti-tank shells) with a coaxial machine gun.\n\nThe Medium Mark III was\ \ ordered in 1928 and proved reliable and a good gun platform. It suffered from\ \ a poorly-designed suspension, road speed increased to but during cross-country\ \ rides the bogies were often overloaded. Three Mark IIIs were built, one by Vickers\ \ and two by the Royal Ordnance Factory at Woolwich: Medium III E1, E2 and E3.\ \ The third had an improved suspension and the vehicles were in 1934 taken into\ \ use by the HQ of the Tank Brigade. One of the Mark IIIs was fitted as a command\ \ vehicle with an extra radio aerial around the turret. This was used by Brigadier\ \ Percy Hobart for the Salisbury Plain exercises during 1934.\n\nThe cavalry and\ \ the Royal Tank Corps wanted fast, lightly armoured, mobile vehicles for reconnaissance\ \ and raiding—the light and medium (or \"cruiser\") tanks. In practice the \"\ light tanks\" were often small armoured personnel carriers. Army Tank Battalions\ \ for infantry-support required thickly armoured tanks. As a consequence of this\ \ doctrinal split, firepower was neglected in tank design.\n\nAfter the First\ \ World War, the British began to produce a series of similar light tanks and\ \ developed them right up to the Second World War; the Light Tanks Mk II through\ \ to the Mk V. Eventually, by the 1930s, British experiments and their strategic\ \ situation led to a tank development programme with three main types of tank:\ \ light, cruiser and infantry. The Infantry tanks were for the support of infantry.\ \ The maximum speed requirement matched the walking pace of a rifleman and the\ \ armour on these tanks was expected to be thick enough to provide immunity against\ \ towed anti-tank guns. Armament had to be sufficient to suppress or destroy enemy\ \ machine gun positions and bunkers. Cruiser tanks gained the traditional cavalry\ \ roles of pursuit and exploitation, working relatively independently of the infantry.\ \ This led to cruiser tank designs having great speed. To achieve this they were\ \ lightly armoured and tended to carry anti-tank armament.\n\nThe light tanks\ \ were for reconnaissance and colonial repression, with cheapness the major design\ \ factor. They were not expected to fight anything other than other light tanks\ \ nor need a gun for fighting heavier tanks. They saw use in training and in limited\ \ engagements with British Empire units such as the South African Army during\ \ the East African Campaign against forces of the Italian Empire. Up until the\ \ Mk V, they had a driver–commander and a gunner. The Mk V had a driver, gunner\ \ and the commander helping on the gun. The light tanks were kept in use for training\ \ until around 1942. Some saw use in the Western Desert Campaign or Abyssinia.\ \ They were followed by the Light Tank Mk VI from 1936.\n\nThe Light Tank Mk VI\ \ was the sixth and final design in the line of tanks built by Vickers-Armstrongs\ \ for the British Army during the interwar period. The company had achieved a\ \ degree of standardization with their earlier five models and the Mark VI was\ \ identical in all but a few respects. Production of the Mk VI began in 1936 and\ \ ended in 1940 with approximately 1,000 Mark VI tanks built.\n\nWhen the Mk VI\ \ was first produced in 1936, the Imperial General Staff considered the tank to\ \ be superior to any light tank produced by other nations, and well suited to\ \ the dual roles of reconnaissance and colonial warfare. Like many of its predecessors,\ \ the Mark VI was used by the British Army for imperial policing duties in British\ \ India and other colonies in the British Empire, a role for which it and the\ \ other Vickers-Armstrongs light tanks were found to be well suited. When the\ \ British government began rearming in the 1930s, the Mk VI was the only tank\ \ with which the War Office was ready to proceed with manufacture, the development\ \ of a medium tank for the Army had hit severe problems after the cancellation\ \ of the proposed \"Sixteen Tonner\" medium tank in 1932 due to the cost and cheaper\ \ models only existed as prototypes with a number of mechanical problems. When\ \ the Second World War began in September 1939, the vast majority of the tanks\ \ available to the British Army were Mk VIs - there were 1,002 Mk VI Light Tanks.\ \ The British and Commonwealth forces employed a relatively small number of these\ \ light tanks and armoured vehicles in East Africa against the forces of the Italian\ \ Empire from June 1940 to November 1941. For the most part, an assortment of\ \ armoured cars was used. B Squadron 4th Royal Tank Regiment did include small\ \ number of Matilda II infantry tanks.\n\nIn 1934 the best features of the earlier\ \ Mk III light tank were incorporated into a cruiser tank design. Sir John Carden\ \ of Vickers-Armstrong produced this new tank, to General Staff specification\ \ A9, which was subsequently accepted as the Cruiser Mk I (A9). A prototype was\ \ tested in 1936 and it went into production the following year, 125 examples\ \ being produced in 1937 and 1938. The follow-up to the A9, the Cruiser Mk II\ \ (A10), was also designed by Carden. Designated as a \"heavy cruiser\" tank,\ \ it was put into production in July 1938. It resembled the Cruiser Mk I but had\ \ thicker armour and was one of the first British tanks with Spaced armour and\ \ the first to be equipped with the Besa machine gun.\n\nOrders for the cruisers\ \ Mk I and Mk II were restricted, since the British Army had already decided to\ \ produce a more advanced and faster cruiser tank which would incorporate the\ \ Christie suspension acquired from the American inventor J. Walter Christie and\ \ have better armour. In 1936, Giffard LeQuesne Martel, a pioneer in tank design\ \ who had published works on armoured warfare and pioneered the lightly armoured\ \ \"tankette\" to enhance infantry mobility, became Assistant Director of Mechanization\ \ at the War Office. Earlier that year Martel had witnessed demonstrations of\ \ Soviet tank designs including the BT tank, which had been influenced by Christie's\ \ work. He urged the adoption of a tank that would use the suspension system and\ \ also follow Christie's practice of using a lightweight aircraft engine such\ \ as the Liberty Engine. The government authorized purchase and licensing of a\ \ Christie design via the newly formed Nuffield Mechanisation and Aero.\n\nThe\ \ vehicle obtained from Christie became the basis of the Cruiser Mk III (A13 Mk\ \ 1) though Christie's tank required extensive redesign as it was too small. Following\ \ testing of two prototypes, the A13 was ordered into production and 65 were manufactured.\ \ The Mk III weighed , had a crew of four, a 340 hp engine which gave a top speed\ \ of and was armed with a Ordnance QF 2 pounder gun and a machine gun. When it\ \ was introduced into service in 1937, the Army still lacked a formal tank division.\ \ The Cruiser Mk IV (A13 Mk II) was a more heavily armoured version of the Mk\ \ III and was used in some of the early campaigns of the war.\n\nSecond World\ \ War\n\nFall of France \n\nBy the time the Second World War had come around,\ \ the design of the tank had shifted from its uses as a terrain covering vehicle,\ \ and the full potential of the tank as an armoured, combat vehicle had been realised.\n\ \nSince the infantry tanks were to work at the pace of the infantry units, which\ \ would be attacking on foot, high speed was not a requirement and they were able\ \ to carry heavier armour. The Infantry Tank came about as a result of a 1934\ \ requirement by the General Staff for a tank that would directly support an infantry\ \ attack. Armament would consist of a machine gun and an overall speed of a walking\ \ man when moving. Vickers designed an inexpensive (cost was a serious consideration)\ \ pilot which was delivered and accepted in 1936. Although heavily armoured it\ \ was slow and under-armed. Most would be lost or left behind in France.\n\nThe\ \ first purpose-designed infantry tanks were the Matilda I armed with a machine-gun\ \ and Matilda II, which was armed with a machine gun and a QF 2 pounder anti-tank\ \ gun. It was quickly seen that the Matilda I, with only a machine gun, was inadequate\ \ for its intended role. The second Matilda was ordered directly off the drawing\ \ board in 1937. During its production years of 1940 to 1943, 2,987 of these sturdy\ \ tanks were built. Though small, the tank presented a massive appearance due\ \ to its armoured skirts and cast armour. The Matilda 2 totally dominated all\ \ Italian armour and could claim title to \"Queen of the Desert\" until the arrival\ \ of German tanks in North Africa.\n\nThe British Army were pioneers in tank combat\ \ but by 1939 it could be argued they were behind the times in terms of strategy\ \ and tactics, their methods based on the trench warfare of the First World War.\ \ The British Army entered the Second World War with an array of poor designs\ \ and hobbled by poor doctrine. According to the theories of Captain BH Liddell\ \ Hart and Major-General Sir Percy Hobart, they split their tank force into two\ \ groups; Infantry tanks and Cruiser tanks. British tank use focused on cavalry-type\ \ missions and infantry support without the focus on the combined-arms tactics\ \ that dominated German and later Soviet thinking.\n\nThe result was a series\ \ of under-armed, mechanically unreliable designs such as the A9 which Sir John\ \ Carden of Vickers-Armstrong produced in 1934 and A10 and Crusader (A15) cruiser\ \ tanks, and the Matilda (A11) also by Vickers-Armstrongs Ltd, began in 1935 and\ \ Matilda II (A12) infantry tanks, and a series of deathtrap light tanks, the\ \ Light Tank Mk I built earlier by Vickers Armstrong from 1929, up to the Light\ \ Tank Mk V produced during 1936, that were suitable for reconnaissance work only.\n\ \nThe Matilda Mk I, (A11) and Matilda II (A12) infantry tanks fought together\ \ in France as part of the 1st Army Tank Brigade of the British Expeditionary\ \ Force in the Battle of France. They participated in the defence and counter-attack\ \ operation at Arras against the invasion by Nazi Germany in May 1940, temporarily\ \ discomfiting the 7th Panzer Division under Rommel. In the battle, elements of\ \ motorized SS regiment \"Totenkopf\" (later to be expanded into SS-Division Totenkopf)—were\ \ overrun, their standard PaK 36/37 anti-tank guns proving ineffective against\ \ the heavily armoured British Matilda tank. Rommel committed some of his armour\ \ to local counterattacks, only to find the guns of the Panzer II and Panzer 38(t)\ \ tanks could not penetrate the Matildas' armour. Desperate to prevent a British\ \ breakthrough, Rommel ordered the division's FlaK 18 anti-aircraft guns and\ \ field guns be formed into a defensive line and fire anti-tank and HE rounds\ \ in a last-ditch effort to stop the Matildas, and this stopped the British tanks.\ \ The attack made the German commanders nervous, and the battle is historically\ \ credited with shaking the confidence of the German High Command (OKW) and it\ \ may have been one of the factors for the surprise German halt on 24 May that\ \ gave the BEF the slimmest of opportunities to begin evacuation from Dunkirk.\ \ The main British force consisted of only 58 machine gun armed Matilda Is and\ \ 16 QF 2-pounder gun armed Matilda IIs supported by a few lighter armoured vehicles.\n\ \nSecondary Campaigns\n\nThe Mk I (A9) cruiser was used in the French, Greek\ \ and early North African campaigns. Sixty British Cruiser Mk II's went to Greece\ \ with the 3rd Royal Tank Regiment and fought against the German tanks, but over\ \ 90% suffered mechanical breakdowns as opposed to enemy action. The Cruiser Mk\ \ III saw action in Greece and early North African campaigns where they equipped\ \ units of the 7th Armoured Division. The Cruiser Mk IV tank saw action in the\ \ French and early North African campaigns.\n\nThe Cruiser tank Mk V Covenanter\ \ was first cruiser tank design to be given a name, and never deployed outside\ \ the British Isles. They first were used to re-equip the British 1st Armoured\ \ Division after the Fall of France.\n\nThe Crusader tanks became the main British\ \ tank, the A15 Crusader Mark I and II variants had QF 2 pounder (40mm) main gun,\ \ but the 'Crusader III' was fitted with an Ordnance QF 6 pounder (57mm) main\ \ gun. It used the same main turret as the A13 Mk III Covenanter designs, and\ \ over 5,000 tanks were manufactured. The A15 Crusader Mark 111 and Mark IV finally\ \ replaced most tanks in the British forces after the fall of France and was used\ \ extensively during the North African Campaign.\n\nDesert Campaign \n\nWhen the\ \ BEF returned to the United Kingdom, nearly all their armour was left behind\ \ and the remaining Matilda Mk Is were withdrawn. The Matilda II was used up to\ \ early 1942, in the war in North Africa, the Matilda II proved highly effective\ \ against Italian tanks, although vulnerable again to the larger calibre and medium\ \ calibre anti-tank guns. When the German Afrika Korps arrived in North Africa,\ \ the anti-aircraft gun was again pressed into the anti-tank role against the\ \ Matilda, causing heavy losses, and, by the time of the Second Battle of El Alamein\ \ in October 1942, few Matildas were still in service.\n\nCombat experience against\ \ the Germans in the Western Desert Campaign demonstrated to the British many\ \ shortcomings with their cruiser tanks. The Cruiser Mk I was an effective tank\ \ in the French, Greek and early North African campaigns. The 2-pdr gun was lethal\ \ against the primitive Italian tanks encountered first during the North African\ \ campaign, but was, at best, a mediocre weapon against the modern German armour\ \ of the Afrika Korps. The heavier Cruiser, Mk II (A10), were part of the British\ \ Expeditionary Force (BEF) sent to France in the early stages of the Second World\ \ War. Their cross-country performance was initially recorded as poor but they\ \ were still used later in North Africa at the defence of Tobruk in 1941, where\ \ reliability and suspension performance in the desert conditions was praised.\n\ \nHence a request was made in 1941 to the Nuffield Organization's subsidiary and\ \ Leyland Motors for a new heavy cruiser tank that could achieve battle superiority\ \ over German models. With the A34 Specification later called \"Comet\" the tank\ \ designers were to use a new gun, the \"77mm HV\". This gun used the same calibre\ \ (76.2 mm) projectiles as the 17-pounder but the shell casing was from the older\ \ QF 3 inch 20 cwt gun (loaded to higher pressures) permitting a smaller gun that\ \ could be readily fitted into a tank. The A34 Comet began to be delivered by\ \ September 1944. Intended to be in service by December 1944, crew training was\ \ delayed by the German Ardennes Offensive. By the end of the war, 1,200 had been\ \ produced.\n\nThey were followed by the Valentine tank (Infantry Tank Mk III)\ \ and Churchill tank (Infantry Tank Mk IV). Designed using the interior and chassis\ \ layout of the experimental A10, the Valentine met an emergency 1938 requirement\ \ for a tank to supplement the Matilda. Ordered \"off the drawing board\" in 1939,\ \ by the time production ceased in 1944, some 8,275 of these sturdy tanks had\ \ been built. Considered stable and reliable by its crews, the tank was only hampered\ \ by its small size. Unlike the Matilda tanks, this model allowed the later fitting\ \ of a larger main gun but at the expense of operating a two-man turret. The initial\ \ riveted construction soon was replaced by welding. The Valentine proved to be\ \ difficult to develop further but the Churchill went through successive variants\ \ and served up to the end of the war. The early Churchills were fraught with\ \ mechanical defects and required many changes before they were considered sound.\ \ The army had this machine designed to meet a possible need for a tank to operate\ \ in a \"shelled area\" on the Western Front which in 1939 was expected to eventually\ \ look like 1918. The initial A20 design was not successful which caused Vauxhall\ \ to take over from Harland and Wolff. The Vauxhall design was called the A22\ \ and the first production vehicles were delivered around the middle of 1941.\ \ Eventually, the teething problems were resolved and the tank went on to become\ \ one of the best tanks in the army. The tank was refined into many special roles,\ \ mostly with the Royal Engineers. The tank had excellent weight distribution\ \ and was considered very stable in movement.\n\nAs British cruiser tank designs\ \ developed into larger tanks with more powerful engines, they could carry larger\ \ guns and more armour yet still achieved high speeds. At the end of the war the\ \ cruiser tank lineage led to the \"universal tank\" in the form of the Centurion.\n\ \nIn practice the British did not operate only infantry and cruiser tanks. Lack\ \ of production capacity meant the large scale adoption of US medium tanks.\n\n\ The Cruiser Mk I was an effective tank in the French, Greek and early North African\ \ campaigns. The 2 pdr gun was lethal against the primitive Italian tanks encountered\ \ during the North African campaign, but was, at best, a mediocre weapon against\ \ the modern German armour of the Afrika Korps. Engaging the more thinly armoured\ \ flanks and rear of German tanks was generally the only way to have any effect.\ \ The minimal armour made the A9 an easy kill for most German anti-tank weapons.\ \ Also problematic was the lack of High Explosive shells for the 2 pdr gun and\ \ even worse the lack of AP for the 95 mm gun on the Close Support version. Another\ \ issue was that the areas around the front machine gun turrets created a frontal\ \ surface that was more vulnerable to enemy fire than it would have been had it\ \ been a flat plate, let alone a sloped glacis.\n\nA number of Cruiser Mark IIs\ \ were part of the British Expeditionary Force (BEF) sent to France in the early\ \ stages of the Second World War II. The A10 cross country performance was recorded\ \ as poor, but they were still used later in North Africa at the defence of Tobruk\ \ in 1941, where reliability and suspension performance in the desert conditions\ \ was praised. Sixty worn out examples were taken to Greece, by the 3rd Royal\ \ Tank Regiment and although they performed well against the German tanks, over\ \ 90% were lost due to mechanical breakdowns as opposed to enemy action (mainly\ \ tracks). (See \"A Tankie's Travels\" By Robert Watt )\n\nThe bright spots of\ \ British tank design included the Valentine, Churchill (A22), Cromwell (A27M),\ \ and Comet I (A34), which together made up a little over half of total British\ \ tank production during WWII. The Valentine was a reliable, heavily armoured\ \ infantry-support tank used successfully in the desert and by the Red Army as\ \ a light tank. The Churchill had heavy armour and good off-road capability. The\ \ Cromwell was in most respects the equal of the early model Sherman of the United\ \ States or the German Panzer IV, though by the time of its first major deployment\ \ in France in the summer of 1944, it was unremarkable compared to many other\ \ vehicles being fielded by then, its best advantage being its speed and mobility.\ \ The Comet was a design that improved on the Cromwell, fielded in the final months\ \ of the war with a modified, slightly less powerful, variant of the 17pdr, known\ \ as the 77mm QF. As a stop-gap, the Challenger (A30) Cruiser Tank, mounted a\ \ 17 Pounder gun on a lengthened Cromwell chassis with an extra road wheel each\ \ side and a widened hull centre section. From June 1944, it added heavier anti-tank\ \ firepower to cruiser tank reconnaissance units until the Comet became widely\ \ available.\n\nUS imports \n\nBeginning about 1942, most British tank units were\ \ equipped with vehicles supplied from the United States, such as the Stuart light\ \ tank, the Lee (or the Grant variant thereof) and the Lee's/Grant's replacement,\ \ the Sherman. The Stuart tanks were the first to come in with the 8th Hussars,\ \ and were part of the force of the 1st Armoured Division and also were part of\ \ the 4th Armoured Brigade and used for Operation Crusader.\n\nD-Day \n\nImmediately\ \ before and during the war, the British produced an enormous array of prototype\ \ tanks and modified tanks for a variety of specialist tasks (see Hobart's Funnies).\ \ For example, the Churchill AVRE mounted a 290 mm (11.4\") direct-fire mortar\ \ which was used for destroying buildings and clearing obstacles. Responsibility\ \ for the buildup of vehicles and the training of crews to use them was given\ \ to armoured warfare expert Percy Hobart after whom the collection was named.\n\ \nMany of the ideas had already been tried, tested or were in experimental development\ \ both by Britain and other nations. For example, the Scorpion flail tank (a modified\ \ Matilda tank) had already been used during the North African campaign to clear\ \ paths through German minefields. Soviet T-34 tanks had been modified with mine-rollers.\ \ Close-support tanks, bridgelayers, and fascine carriers had been developed elsewhere\ \ also. However, the Funnies were the largest and most elaborate collection of\ \ engineering vehicles available.\n\nBy early 1944, Hobart could demonstrate to\ \ Eisenhower and Montgomery a brigade each of swimming DD tanks, Crab mine clearers,\ \ and AVRE (Engineer) tanks along with a regiment of Crocodile flamethrowing tanks.\n\ \nMontgomery considered that the U.S. forces should use them, and offered them\ \ a half-share of all the vehicles available, but take-up was minimal. Eisenhower\ \ was in favour of the amphibious tanks but left the decision on the others to\ \ Lieutenant General Omar Bradley, then commanding the U.S. First Army. Bradley\ \ requested 25 flail tanks and 100 Churchill Crocodiles and the British War Office\ \ agreed to supply them as well as British-crewed AVREs. In the event though there\ \ was insufficient time to produce the vehicles and train the crews so on the\ \ day American forces were limited to DD tanks and their own Sherman bulldozer\ \ tanks and armoured bulldozers.\n\nThe British at Normandy were re-equipped with\ \ some of the newer British and American tanks and a few days after D-Day, the\ \ Armoured Reconnaissance regiment of the 7th Armoured Division landed at Le Hamel\ \ on Gold Beach with Cromwell tanks and began going into action almost immediately\ \ in the fighting around Villers-Bocage. The tanks were used in the advance through\ \ the Bocage with the 22nd Armoured Brigade. They were involved in action against\ \ the 2nd Panzer Division, with the tanks leading the way out of the bridgehead.\n\ \nEarly Cold War\nDuring the Cold War (1945–1990), the two opposing forces in\ \ Europe were the Warsaw Pact countries on the one side, and the North Atlantic\ \ Treaty Organization (NATO) countries on the other side. Soviet domination of\ \ the Warsaw Pact led to effective standardization on a few tank designs. In comparison,\ \ the main NATO countries, Britain, France, Germany, and the USA, developed their\ \ own tank designs with little in common, and the smaller counties generally adopted\ \ one or more of these designs.\n\nFor the UK regiments, the Centurion was the\ \ primary British tank of the post-Second World War period. Development of the\ \ tank began in 1943 and manufacture of the Centurion, began in January 1945.\ \ With the 20-pounder gun it first entered combat with the British Army in the\ \ Korean War in 1950, in support of the UN forces. It was noted for its high mobility,\ \ able to climb to the top of hills that were considered difficult for infantry,\ \ let alone tanks. Upgraded to mount the 105 mm L7 gun, it became the UK's first\ \ main battle tank. Between 1946 and 1962, 4,423 Centurions were produced, consisting\ \ of 13 basic marks and numerous variants.\n\nAt first, the Centurion was not\ \ considered capable of dealing with all Soviet tanks and it was joined by a traditional\ \ heavy tank design, the Conqueror. This design was almost as heavy as the German\ \ WWII King Tiger and was tasked with dealing with the heavy Soviet designs like\ \ the Joseph Stalin IS-3. They were issued at nine for each regiment in Germany;\ \ usually grouped in three tank troops. It used the American 120 mm gun and was\ \ expected to give long range firepower and support to the Centurion tanks that\ \ made up the bulk of British tank force. To provide even more firepower for the\ \ British Army of the Rhine tank units, Charioteer, a variant of the Cromwell\ \ tank with a 20 pounder gun, was deployed; it was a defensive weapon, in practice\ \ more a self-propelled anti-tank gun.\n\nThis hodge-podge of designs was far\ \ from ideal, and there were efforts to improve the Centurion. When equipped with\ \ the L7 105 mm gun, along with greatly improved shells, the Centurion was able\ \ to penetrate even the heaviest Soviet designs. It became the truly \"Universal\ \ tank\" it had originally been intended and began to displace other designs in\ \ service. With future combat thought to be dominated by nuclear weapons, rendering\ \ armour as ineffective as infantry, development of newer tank designs began to\ \ wane. Instead, designs like the Centurion continued to be improved with the\ \ addition of better fire control, stabilization and NBC protection. The Centurion\ \ would go on to be one of the most widely used tank designs, equipping armies\ \ around the world. Between 1946 and 1962, 4,423 Centurions were produced, consisting\ \ of 13 basic marks and numerous variants. As recently as the 2006 Israel-Lebanon\ \ conflict the Israel Defense Forces employed modified Centurions as armoured\ \ personnel carriers and combat engineering vehicles. South Africa still employs\ \ over 200 Centurions.\n\nLater cold war to today\n\nWhile the L7 equipped Centurion\ \ was an excellent tank, improvements in gunnery and especially drivetrain made\ \ it possible to equip a tank with the protection and firepower of the Conqueror\ \ with the mobility of Centurion. Leyland began experiments on such a design as\ \ early as 1956 with early prototypes in 1959. This emerged as the Chieftain,\ \ one of the most heavily armed and armoured tanks of its era, and one of the\ \ most modern designs in any force of the era. From this point the Army forces\ \ relied on single designs, adopting the main battle tank concept whole heartedly.\n\ \nIranian orders for an improved Chieftain led to what were initially relatively\ \ minor upgrades, but the development of Chobham armour in the 1960s led to the\ \ design of a new tank combining a wide variety of improvements, the Challenger.\ \ Among its many improvements, the Challenger used a laser rangefinder in a highly\ \ automated fire control system, an improved engine, a greatly improved suspension\ \ that offered far better off-road performance. Entering service in 1983, it was\ \ beaten into NATO service by the M-1 Abrams, which also used Chobham armour.\n\ \nAlmost immediately after the Chieftain entered development, the West German\ \ government began collaborating with the British on a new tank design combining\ \ features of the Chieftain with a number of new concepts. Development officially\ \ began in September 1978 with the aim of introducing a new design in the late\ \ 1980s that would replace both British and German designs. This project fell\ \ apart, but a number of experimental design concepts were then worked into the\ \ Challenger 2, which first entered service in July 1994. The Challenger 2 forms\ \ the core of the Army's heavy tank units today.\n\nThe Challenger 2 is the main\ \ tank currently being used today by the British military in combat situations.\ \ It is renowned for its durability and endurance. Only one has ever been recorded\ \ as destroyed, of which was due to a friendly fire incident involving another\ \ Challenger 2 tank. This is possibly due to the use of Chobham armour for the\ \ Challenger's outer armour. Chobham armour is an incredibly tough armour, the\ \ details of which still remain secret to the developers. It uses layers of ceramics\ \ and other materials, combined in such a way as to withstand extreme heat and\ \ impact.\n\nIn May 2021 the Ministry of Defence announced that 148 Challenger\ \ 3 tanks would be produced by upgrading current Challenger tanks with fully-digitised\ \ systems and a smoothbore gun. The contract costing £800 million to be carried\ \ out by Rheinmetall BAE Systems Land (RBSL) to be delivered 2027-2030.\n\nRecent\ \ and current conflicts\n\nGulf War \n\nThe headquarters of the 1st Armoured Division\ \ was deployed to Saudi Arabia in 1990 to command British land forces. It had\ \ two brigades under its command, 4th and 7th Armoured Brigade. During the war,\ \ it came under the US VII Corps and was part of the great armoured left-hook\ \ that destroyed many Iraqi Republican Guard formations. The two brigades in the\ \ division alternated heading the advance. The Royal Scots Dragoon Guards saw\ \ active service during the Gulf War in 1991 deploying 57 Challenger tanks.\n\n\ The Army contributed 50,000 troops to the coalition force that fought Iraq in\ \ the Persian Gulf War. This included Challenger tank units within the 1st Armoured\ \ Division\n\nBalkans conflicts \n\nThe British Army was deployed to Yugoslavia\ \ in 1992; initially this force formed part of the United Nations Protection Force.\ \ Units of the 1st Armoured Division were deployed as part of the Implementation\ \ Force (IFOR) in 1995.\n\nWar in Afghanistan \n\nIn November 2001 the United\ \ Kingdom, as a part of Operation Enduring Freedom with the United States, invaded\ \ Afghanistan to topple the Taliban. The 3rd Division were deployed in Kabul,\ \ to assist in the liberation of the troubled capital. The British Army is today\ \ concentrating on fighting Taliban forces and bringing security to Helmand province.\ \ Combat operations ended in 2014, although there are some small units that operate\ \ in a non combat role to protect healthcare staff and foreign diplomats, as well\ \ as a select few who still help train the Afghan National Army.\n\nIraq War \n\ \nThe United Kingdom participated in the 2003 invasion of Iraq, sending a force\ \ that would reach 46,000 military personnel. The 7th Armoured Brigade, consisting\ \ of 112 Challenger 2 tanks, 140 Warriors and 32 AS-90 155 mm self-propelled howitzers,\ \ entered Iraq on 21 March and advanced towards Iraq's second largest city, Basra\ \ and helped encircle and isolate it. The brigade, led by the 1st Fusiliers Battlegroup,\ \ made a rapid advance towards the city and soon reached its outskirts, securing\ \ Basra Airport and the bridges across the Shatt al-Arab. The advance by the brigade\ \ met sporadic though fierce resistance, with The Queen's Royal Irish Hussars,\ \ including an engagement between 14 Challenger 2s of the Royal Scots Dragoon\ \ Guards and 14 Iraqi tanks, all of the Iraqi tanks being destroyed; it was the\ \ largest tank engagement by the British Army since WWII.\n\nThe 1st Armoured\ \ Division, including 7th Brigade, raided the city several times and the Desert\ \ Rats, led by Challenger 2s of the Royal Scots Dragoon Guards, Queen's Royal\ \ Lancers and 2nd Royal Tank Regiment with Warriors of the 1st Fusiliers, Irish\ \ Guards and Black Watch pushed into the city on 6 April and stayed. For the most\ \ part, Basra was controlled by 1st Division, though further engagements took\ \ place. The war was officially declared over on 1 May. The Desert Rats remained\ \ in Iraq after the war, acting as peacekeepers and helping to rebuild the country\ \ while based in the British sector in the south of Iraq. The brigade began to\ \ leave in late June, being replaced by 19th Mechanised Brigade. The remaining\ \ British troops were withdrawn from Iraq after the Iraqi government refused to\ \ extend their mandate.\n\nSee also\n\n History of the tank\n Tanks in World War\ \ I\n List of interwar armoured fighting vehicles\n Tanks in World War II\n Comparison\ \ of early World War II tanks\n Tank classification\n List of military vehicles\n\ \ Rhino tank\n\nNotes\n\nReferences \n \n \n \n \n https://www.bbc.co.uk/news/uk-england-shropshire-57025266\n\ \nBritish Army\nBritish Army\nArticles containing video clips" - "In computer programming, is a ternary operator that is part of the syntax for\ \ basic conditional expressions in several programming languages. It is commonly\ \ referred to as the conditional operator, inline if (iif), or ternary if. An\ \ expression evaluates to if the value of is true, and otherwise to . One can\ \ read it aloud as \"if a then b otherwise c\".\n\nIt originally comes from CPL,\ \ in which equivalent syntax for e1 ? e2 : e3 was e1 → e2, e3.\n\nAlthough many\ \ ternary operators are possible, the conditional operator is so common, and other\ \ ternary operators so rare, that the conditional operator is commonly referred\ \ to as the ternary operator.\n\nVariations\nThe detailed semantics of \"the\"\ \ ternary operator as well as its syntax differs significantly from language to\ \ language.\n\nA top level distinction from one language to another is whether\ \ the expressions permit side effects (as in most procedural languages) and whether\ \ the language provides short-circuit evaluation semantics, whereby only the selected\ \ expression is evaluated (most standard operators in most languages evaluate\ \ all arguments).\n\nIf the language supports expressions with side effects but\ \ does not specify short-circuit evaluation, then a further distinction exists\ \ about which expression evaluates first—if the language guarantees any specific\ \ order (bear in mind that the conditional also counts as an expression).\n\n\ Furthermore, if no order is guaranteed, a distinction exists about whether the\ \ result is then classified as indeterminate (the value obtained from some order)\ \ or undefined (any value at all at the whim of the compiler in the face of side\ \ effects, or even a crash).\n\nIf the language does not permit side-effects in\ \ expressions (common in functional languages), then the order of evaluation has\ \ no value semantics—though it may yet bear on whether an infinite recursion terminates,\ \ or have other performance implications (in a functional language with match\ \ expressions, short-circuit evaluation is inherent, and natural uses for the\ \ ternary operator arise less often, so this point is of limited concern).\n\n\ For these reasons, in some languages the statement form can have subtly different\ \ semantics than the block conditional form } (in the C language—the syntax of\ \ the example given—these are in fact equivalent).\n\nThe associativity of nested\ \ ternary operators can also differ from language to language. In almost all languages,\ \ the ternary operator is right associative so that evaluates intuitively as\ \ , but PHP in particular is notoriously left-associative, and evaluates as follows:\ \ , which is rarely what any programmer expects. (The given examples assume that\ \ the ternary operator has low operator precedence, which is true in all C-family\ \ languages, and many others.)\n\nEquivalence to map\nThe ternary operator can\ \ also be viewed as a binary map operation.\n\nIn R—and other languages with literal\ \ expression tuples—one can simulate the ternary operator with something like\ \ the R expression (this idiom is slightly more natural in languages with 0-origin\ \ subscripts).\n\nHowever, in this idiom it is almost certain that the entire\ \ tuple expression will evaluate prior to the subscript expression, so there will\ \ be no short-circuit semantics.\n\nNested ternaries can be simulated as where\ \ the function returns the index of the first true value in the condition vector.\ \ Note that both of these map equivalents are binary operators, revealing that\ \ the ternary operator is ternary in syntax, rather than semantics. These constructions\ \ can be regarded as a weak form of currying based on data concatenation rather\ \ than function composition.\n\nIf the language provides a mechanism of futures\ \ or promises, then short-circuit evaluation can sometimes also be simulated in\ \ the context of a binary map operation.\n\nConditional assignment\n is used as\ \ follows:\n\n condition ? value_if_true : value_if_false\n\nThe condition is\ \ evaluated true or false as a Boolean expression. On the basis of the evaluation\ \ of the Boolean condition, the entire expression returns value_if_true if condition\ \ is true, but value_if_false otherwise. Usually the two sub-expressions value_if_true\ \ and value_if_false must have the same type, which determines the type of the\ \ whole expression. The importance of this type-checking lies in the operator's\ \ most common use—in conditional assignment statements. In this usage it appears\ \ as an expression on the right side of an assignment statement, as follows:\n\ \n variable = condition ? value_if_true : value_if_false\n\nThe ?: operator is\ \ similar to the way conditional expressions (if-then-else constructs) work in\ \ functional programming languages, like Scheme, ML, and Haskell, since if-then-else\ \ forms an expression instead of a statement in those languages.\n\nUsage\nThe\ \ conditional operator's most common usage is to make a terse simple conditional\ \ assignment statement. For example, if we wish to implement some C code to change\ \ a shop's normal opening hours from 9 o'clock to 12 o'clock on Sundays, we may\ \ use\n\nint opening_time = (day == SUNDAY) ? 12 : 9;\n\ninstead of the more verbose\n\ \nint opening_time;\n\nif (day == SUNDAY)\n opening_time = 12;\nelse\n opening_time\ \ = 9;\n\nThe two forms are nearly equivalent. Keep in mind that the is an expression\ \ and if-then-else is a statement. Note that neither the true nor false portions\ \ can be omitted from the conditional operator without an error report upon parsing.\ \ This contrasts with if-then-else statements, where the else clause can be omitted.\n\ \nMost of the languages emphasizing functional programming don't need such an\ \ operator as their regular conditional expression(s) is an expression in the\ \ first place e.g. the Scheme expression is equivalent in semantics to the C\ \ expression . This is also the case in many imperative languages, starting with\ \ ALGOL where it is possible to write , or Smalltalk () or Ruby (, although works\ \ as well).\n\nNote that some languages may evaluate both the true- and false-expressions,\ \ even though only one or the other will be assigned to the variable. This means\ \ that if the true- or false-expression contain a function call, that function\ \ may be called and executed (causing any related side-effects due to the function's\ \ execution), regardless of whether or not its result will be used. Programmers\ \ should consult their programming language specifications or test the ternary\ \ operator to determine whether or not the language will evaluate both expressions\ \ in this way. If it does, and this is not the desired behaviour, then an if-then-else\ \ statement should be used.\n\nActionScript 3\ncondition ? value_if_true : value_if_false\n\ \nAda\nThe 2012 edition of Ada has introduced conditional expressions (using \ \ and ), as part of an enlarged set of expressions including quantified expressions\ \ and expression functions. The Rationale for Ada 2012 states motives for Ada\ \ not having had them before, as well as motives for now adding them, such as\ \ to support \"contracts\" (also new).\n\nPay_per_Hour := (if Day = Sunday\n \ \ then 12.50\n else 10.00);\n\nWhen the value of an if_expression is itself\ \ of Boolean type, then the part may be omitted, the value being True. Multiple\ \ conditions may chained using .\n\nALGOL 68\nBoth ALGOL 68's choice clauses (if\ \ and the case clauses) provide the coder with a choice of either the \"bold\"\ \ syntax or the \"brief\" form.\n\n Single if choice clause:\n if condition then\ \ statements [ else statements ] fi\n \"brief\" form: ( condition | statements\ \ | statements )\n\n Chained if choice clause:\n if condition1 then statements\ \ elif condition2 then statements [ else statements ] fi\n \"brief\" form: (\ \ condition1 | statements |: condition2 | statements | statements )\n\nAPL\nWith\ \ the following syntax, both expressions are evaluated (with evaluated first,\ \ then , then ):\n\nresult ← value_if_true ⊣⍣ condition ⊢ value_if_false\n\nThis\ \ alternative syntax provides short-circuit evaluation:\n\nresult ← { condition\ \ : expression_if_true ⋄ expression_if_false } ⍬\n\nAWK\nresult = condition ?\ \ value_if_true : value_if_false\n\nBash\nA true ternary operator only exists\ \ for arithmetic expressions:\n\n((result = condition ? value_if_true : value_if_false))\n\ \nFor strings there only exist workarounds, like e.g.:\n\nresult=$([[ \"$a\" =\ \ \"$b\" ]] && echo \"value_if_true\" || echo \"value_if_false\")\n\nWhere can\ \ be any condition construct can evaluate. Instead of the there can be any other\ \ bash command. When it exits with success, the first echo command is executed,\ \ otherwise the second one is executed.\n\nC\nA traditional if-else construct\ \ in C, Java and JavaScript is written:\n\nif (a > b) {\n result = x;\n}\n\ else {\n result = y;\n}\n\nThis can be rewritten as the following statement:\n\ \nresult = a > b ? x : y;\n\nAs in the if-else construct only one of the expressions\ \ 'x' and 'y' is evaluated. This is significant if the evaluation of 'x' or 'y'\ \ has side effects. The behaviour is undefined if an attempt is made to use the\ \ result of the conditional operator as an lvalue.\n\nA GNU extension to C allows\ \ omitting the second operand, and using implicitly the first operand as the second\ \ also:\n\na == x ? : y;\n\nThe expression is equivalent to\n\na == x ? (a ==\ \ x) : y;\n\nexcept that if x is an expression, it is evaluated only once. The\ \ difference is significant if evaluating the expression has side effects. This\ \ shorthand form is sometimes known as the Elvis operator in other languages.\n\ \nC#\nIn C#, if condition is true, first expression is evaluated and becomes the\ \ result; if false, the second expression is evaluated and becomes the result.\ \ As with Java only one of two expressions is ever evaluated.\n\n// condition\ \ ? first_expression : second_expression;\n\nstatic double sinc(double x) \n{\n\ \ return x != 0.0 ? Math.Sin(x) / x : 1.0;\n}\n\nC++\nUnlike in C, the precedence\ \ of the operator in C++ is the same as that of the assignment operator ( or\ \ ), and it can return an lvalue. This means that expressions like and are both\ \ legal and are parsed differently, the former being equivalent to .\n\nIn C++\ \ there are conditional assignment situations where use of the if-else statement\ \ is impossible, since this language explicitly distinguishes between initialization\ \ and assignment. In such case it is always possible to use a function call, but\ \ this can be cumbersome and inelegant. For example, to pass conditionally different\ \ values as an argument for a constructor of a field or a base class, it is impossible\ \ to use a plain if-else statement; in this case we can use a conditional assignment\ \ expression, or a function call. Bear in mind also that some types allow initialization,\ \ but do not allow assignment, or even that the assignment operator and the constructor\ \ do totally different things. This last is true for reference types, for example:\n\ \n#include <iostream>\n#include <fstream>\n#include <string>\n\nint main(int argc,\ \ char *argv[])\n{\n std::string name;\n std::ofstream fout;\n\n if (argc\ \ > 1 && argv[1])\n {\n name = argv[1];\n fout.open(name.c_str(),\ \ std::ios::out | std::ios::app);\n }\n\n std::ostream &sout = name.empty()\ \ ? std::cout : fout;\n\n sout << \"Hello, world!\\n\";\n\n return 0;\n\ }\n\nIn this case there is no possibility of using an if-else statement in place\ \ of the operator (Although we can replace the use of with a function call,\ \ inside of which can be an if-else statement).\n\nFurthermore, the conditional\ \ operator can yield an lvalue, i.e. a value to which another value can be assigned.\ \ Consider the following example:\n\n#include <iostream>\n\nint main(int argc,\ \ char *argv[]) \n{\n int a = 0;\n int b = 0;\n\n (argc > 1 ? a : b)\ \ = 1;\n\n std::cout << \"a: \" << a\n << \" b: \" << b\n \ \ << '\\n';\n\n return 0;\n}\n\nIn this example, if the boolean expression\ \ yields the value on line 8, the value is assigned to the variable , otherwise\ \ the value is assigned to the variable .\n\nIn C++ and other various languages,\ \ ternary operators like are also possible but are very rare.\n\nCFML\nExample\ \ of the operator in CFML:\n\nresult = randRange(0,1) ? \"heads\" : \"tails\"\ ;\n\nRoughly 50% of the time the expression will return 1 (true) or 0 (false);\ \ meaning result will take the value \"heads\" or \"tails\" respectively.\n\n\ Lucee, Railo, and ColdFusion 11-specific\nLucee, Railo, and ColdFusion 11 also\ \ implement the Elvis operator, which will return the value of the expression\ \ if it is not-null, otherwise the specified default.\n\nSyntax:\n\nresult = expression\ \ ?: value_if_expression_is_null\n\nExample:\n\nresult = f() ?: \"default\";\n\ \n// where...\nfunction f(){\n if (randRange(0,1)){ // either 0 or 1 (false\ \ / true)\n return \"value\";\n }\n}\n\nwriteOutput(result);\n\nThe\ \ function will return roughly 50% of the time, otherwise will not return anything.\ \ If returns \"value\", will take that value, otherwise will take the value\ \ \"default\".\n\nCoffeeScript\nExample of using this operator in CoffeeScript:\n\ \nif 1 is 2 then \"true value\" else \"false value\"\n\nReturns \"false value\"\ .\n\nCommon Lisp\nAssignment using a conditional expression in Common Lisp:\n\n\ (setf result (if (> a b) x y))\n\nAlternative form:\n\n(if (> a b)\n (setf result\ \ x)\n (setf result y))\n\nCrystal\nExample of using this operator in Crystal:\n\ \n1 == 2 ? \"true value\" : \"false value\"\n\nReturns .\n\nThe Crystal compiler\ \ transforms conditional operators to expressions, so the above is semantically\ \ identical to:\n\nif 1 == 2\n \"true value\"\nelse\n \"false value\"\nend\n\ \nDart\nThe Dart programming language's syntax belongs to the C family, primarily\ \ inspired by languages like Java, C# and JavaScript, which means it has inherited\ \ the traditional syntax for its conditional expression.\n\nExample:\n\nreturn\ \ x.isEven ? x ~/ 2 : x * 3 + 1;\n\nLike other conditions in Dart, the expression\ \ before the must evaluate to a Boolean value.\n\nThe Dart syntax uses both \ \ and in various other ways, which causes ambiguities in the language grammar.\ \ An expression like:\n\n{ x as T ? [1] : [2] }\n\ncould be parsed as either a\ \ \"set literal\" containing one of two lists or as a \"map literal\" {((x as\ \ T?)[1]) : [2]}. The language always chooses the conditional expression in such\ \ situations.\n\nDart also has a second ternary operator, the operator commonly\ \ used for setting values in lists or maps, which makes the term \"the ternary\ \ operator\" ambiguous in a Dart context.\n\nDelphi\nIn Delphi the function can\ \ be used to achieve the same as . If the library is used, the function returns\ \ a numeric value such as an Integer, Double or Extended. If the library is used,\ \ this function can also return a string value.\n\nUsing \n\nfunction IfThen(AValue:\ \ Boolean; const ATrue: Integer; const AFalse: Integer): Integer;\nfunction IfThen(AValue:\ \ Boolean; const ATrue: Int64; const AFalse: Int64): Int64;\nfunction IfThen(AValue:\ \ Boolean; const ATrue: UInt64; const AFalse: UInt64): UInt64;\nfunction IfThen(AValue:\ \ Boolean; const ATrue: Single; const AFalse: Single): Single;\nfunction IfThen(AValue:\ \ Boolean; const ATrue: Double; const AFalse: Double): Double;\nfunction IfThen(AValue:\ \ Boolean; const ATrue: Extended; const AFalse: Extended): Extended;\n\nUsing\ \ the library\n\nfunction IfThen(AValue: Boolean; const ATrue: string; AFalse:\ \ string = ''): string;\n\nUsage example:\n\nfunction GetOpeningTime(Weekday:\ \ Integer): Integer;\nbegin\n { This function will return the opening time for\ \ the given weekday: 12 for Sundays, 9 for other days }\n Result := IfThen((Weekday\ \ = 1) or (Weekday = 7), 12, 9);\nend;\n\nUnlike a true ternary operator however,\ \ both of the results are evaluated prior to performing the comparison. For example,\ \ if one of the results is a call to a function which inserts a row into a database\ \ table, that function will be called whether or not the condition to return that\ \ specific result is met.\n\nF#\n\nIn F# the built-in syntax for if-then-else\ \ is already an expression that always must return a value.\n\nlet num = if x\ \ = 10 then 42 else 24\n\nF# has a special case where you can omit the else branch\ \ if the return value is of type unit. This way you can do side-effects, without\ \ using a else branch.\n\nif x = 10 then\n printfn \"It is 10\"\n\nBut even\ \ in this case, the if expression would return unit. You don't need to write the\ \ else branch, because the compiler will assume the unit type on else.\n\nFORTH\n\ Since FORTH is a stack-oriented language, and any expression can leave a value\ \ on the stack, all // sequences can generate values:\n\n: test ( n -- n ) 1\ \ AND IF 22 ELSE 42 THEN ;\n\nThis word takes 1 parameter on the stack, and if\ \ that number is odd, leaves 22. If it's even, 42 is left on the stack.\n\nFortran\n\ With the additions to the code in the 1995 release, the ternary operator was added\ \ to the Fortran compiler as the intrinsic function :\n\nvariable = merge(x,y,a>b)\n\ \nNote that both x and y are evaluated before the results of one or the other\ \ are returned from the function. Here, x is returned if the condition holds true\ \ and y otherwise.\n\nFreeMarker \nThis built-in exists since FreeMarker 2.3.20.\n\ \nUsed like booleanExp?then(whenTrue, whenFalse), fills the same role as the ternary\ \ operator in C-like languages.\n\n<#assign x = 10>\n<#assign y = 20>\n<#-- Prints\ \ the maximum of x and y: -->\n${(x > y)?then(x, y)}\n\nGo\nThere is no ternary\ \ if in Go, so use of the full if statement is always required.\n\nHaskell\nThe\ \ built-in if-then-else syntax is inline: the expression\n\nif predicate then\ \ expr1 else expr2\n\nhas type\n\nBool -> a -> a -> a\n\nThe base library also\ \ provides the function :\n\nbool :: a -> a -> Bool -> a\n\nIn both cases, no\ \ special treatment is needed to ensure that only the selected expression is evaluated,\ \ since Haskell is non-strict by default. This also means an operator can be defined\ \ that, when used in combination with the operator, functions exactly like in\ \ most languages:\n\n(?) :: Bool -> a -> a -> a\n(?) pred x y = if pred then x\ \ else y\ninfix 1 ?\n\n-- example (vehicle will evaluate to \"airplane\"):\narg\ \ = 'A'\nvehicle = arg == 'B' ? \"boat\" $\n arg == 'A' ? \"airplane\"\ \ $\n arg == 'T' ? \"train\" $\n \"car\"\n\nHowever,\ \ it is more idiomatic to use pattern guards\n\n-- example (vehicle will evaluate\ \ to \"airplane\"):\narg = 'A'\nvehicle | arg == 'B' = \"boat\"\n | arg\ \ == 'A' = \"airplane\"\n | arg == 'T' = \"train\"\n | otherwise\ \ = \"car\"\n\nJava\nIn Java this expression evaluates to:\n\n// If foo is selected,\ \ assign selected foo to bar. If not, assign baz to bar.\nObject bar = foo.isSelected()\ \ ? foo : baz; \n\nNote that Java, in a manner similar to C#, only evaluates the\ \ used expression and will not evaluate the unused expression.\n\nJulia\nIn Julia,\ \ \"Note that the spaces around and are mandatory: an expression like is not\ \ a valid ternary expression (but a newline is acceptable after both the and\ \ the ).\"\n\nJavaScript\nThe conditional operator in JavaScript is similar to\ \ that of C++ and Java, except for the fact the middle expression cannot be a\ \ comma expression. Also, as in C++, but unlike in C or Perl, it will not bind\ \ tighter than an assignment to its right— is equivalent to instead of .\n\n\ var timeout = settings !== null ? settings.timeout : 1000;\n\nJust like C# and\ \ Java, the expression will only be evaluated if, and only if, the expression\ \ is the matching one for the condition given; the other expression will not be\ \ evaluated.\n\nKotlin \nKotlin does not include the traditional ternary operator,\ \ however, s can be used as expressions that can be assigned, achieving the same\ \ results. Note that, as the complexity of your conditional statement grows, you\ \ might consider replacing your - expression with a expression.\n\nval max =\ \ if (a > b) a else b\n\nLua \nLua does not have a traditional conditional operator.\ \ However, the short-circuiting behaviour of its and operators allows the emulation\ \ of this behaviour:\n\n-- equivalent to var = cond ? a : b;\nvar = cond and a\ \ or b\n\nThis will succeed unless is logically false (i.e. or ); in this case,\ \ the expression will always result in . This can result in some surprising behaviour\ \ if ignored.\n\nSQL\nThe SQL expression is a generalization of the ternary operator.\ \ Instead of one conditional and two results, n conditionals and n+1 results can\ \ be specified.\n\nWith one conditional it is equivalent (although more verbose)\ \ to the ternary operator:\n\nSELECT (CASE WHEN a > b THEN x ELSE y END) AS CONDITIONAL_EXAMPLE\n\ \ FROM tab;\n\nThis can be expanded to several conditionals:\n\nSELECT (CASE\ \ WHEN a > b THEN x WHEN a < b THEN y ELSE z END) AS CONDITIONAL_EXAMPLE\n FROM\ \ tab;\n\nMySQL\nIn addition to the standard expression, MySQL provides an function\ \ as an extension:\n\nIF(cond, a, b);\n\nSQL Server\nIn addition to the standard\ \ expression, SQL Server (from 2012) provides an function:\n\nIIF(condition,\ \ true_value, false_value)\n\nOracle SQL\nIn addition to the standard expression,\ \ Oracle has a variadic functional counterpart which operates similarly to a switch\ \ statement and can be used to emulate the conditional operator when testing for\ \ equality.\n\n-- General syntax takes case-result pairs, comparing against an\ \ expression, followed by a fall-back result:\nDECODE(expression, case1, result1,\n\ \ ...\n caseN, resultN,\n \ \ resultElse)\n\n-- We can emulate the conditional operator by just selecting\ \ one case:\nDECODE(expression, condition, true, false)\n\nThe function is, today,\ \ deprecated in favour of the standard expression. This can be used in both Oracle\ \ SQL queries as well as PL/SQL blocks, whereas can only be used in the former.\n\ \nPerl\nA traditional if-else construct in Perl is written:\n\nif ($a > $b) {\n\ \ $result = $x;\n} else {\n $result = $y;\n}\n\nRewritten to use the conditional\ \ operator:\n\n$result = $a > $b ? $x : $y;\n\nThe precedence of the conditional\ \ operator in perl is the same as in C, not as in C++. This is conveniently of\ \ higher precedence than a comma operator but lower than the precedence of most\ \ operators used in expressions within the ternary operator, so the use of parentheses\ \ is rarely required.\n\nIts associativity matches that of C and C++, not that\ \ of PHP. Unlike C but like C++, perl allows the use of the conditional expression\ \ as an L-value; for example:\n\n$a > $b ? $x : $y = $result;\n\nwill assign \ \ to either or depending on the logical expression's boolean result.\n\nThe\ \ respective precedence rules and associativities of the operators used guarantee\ \ that the version absent any parentheses is equivalent to this explicitly parenthesized\ \ version:\n\n(($a > $b) ? $x : $y) = $result;\n\nThis is equivalent to the if-else\ \ version:\n\nif ($a > $b) {\n $x = $result;\n} else {\n $y = $result;\n\ }\n\nPHP\nA simple PHP implementation is this:\n\n$abs = $value >= 0 ? $value\ \ : -$value;\n\nDue to an unfortunate design of the language grammar, the conditional\ \ operator in PHP is left associative in contrast to other languages, thus given\ \ a value of T for arg, the PHP code in the following example would yield the\ \ value horse instead of train as one might expect:\n\n<?php\n$arg = \"T\";\n\ $vehicle = ( ( $arg == 'B' ) ? 'bus' : \n ( $arg == 'A' ) ? 'airplane'\ \ : \n ( $arg == 'T' ) ? 'train' : \n ( $arg == 'C' )\ \ ? 'car' : \n ( $arg == 'H' ) ? 'horse' : \n \ \ 'feet' );\necho $vehicle;\n\nThe reason is that nesting two conditional\ \ operators produces an oversized condition with the last two options as its branches:\ \ is really . This is acknowledged and will probably not change. To avoid this,\ \ nested parenthesis are needed, as in this example:\n\n<?php\n$arg = \"T\";\n\ $vehicle = $arg == \"B\" ? \"bus\" :\n ($arg == \"A\" ? \"airplane\"\ \ :\n ($arg == \"T\" ? \"train\" :\n ($arg == \"C\" ? \"car\"\ \ :\n ($arg == \"H\" ? \"horse\" :\n \"feet\"\ ))));\necho $vehicle;\n\nThis will produce the result of train being printed to\ \ the output, analogous to a right associative conditional operator.\n\nPHP 5.3\n\ \nSince PHP 5.3 there is a shorthand of the conditional operator, sometimes referred\ \ to as the \"Elvis Operator\". The syntax for this shorthand is below:\n\n$c\ \ = $a ?: $b; // equivalent to $c = $a ? $a : $b;\n\nPython\nThough it had been\ \ delayed for several years by disagreements over syntax, an operator for a conditional\ \ expression in Python was approved as Python Enhancement Proposal 308 and was\ \ added to the 2.5 release in September 2006. Python's conditional operator differs\ \ from the common operator in the order of its operands. The general form is:\n\ \nresult = x if a > b else y\n\nThis form invites considering as the normal value\ \ and as an exceptional case. \n\nPrior to Python 2.5 there were a number of\ \ ways to approximate a conditional operator (for example by indexing into a two\ \ element array), all of which have drawbacks as compared to the built-in operator.\n\ \nR\nThe traditional if-else construct in R (which is an implementation of S)\ \ is:\n\nif (a < b) {\n x <- \"true\"\n} else {\n x <- \"false\"\n}\n\nIf there\ \ is only one statement in each block, braces can be omitted, like in C:\n\nif\ \ (a < b)\n x <- \"true\"\nelse\n x <- \"false\"\n\nThe code above can be written\ \ in the following non-standard condensed way:\n\nx <- if (a < b) \"true\" else\ \ \"false\"\n\nThere exists also the function that allows rewriting the expression\ \ above as:\n\nx <- ifelse(a < b, \"true\", \"false\")\n\nThe function is automatically\ \ vectorized. For instance:\n\n> ifelse(c (0, 2) < 1, \"true\", \"false\")\n[1]\ \ \"true\" \"false\"\n\nRaku\nRaku uses a doubled symbol instead of single \n\ and a doubled symbol instead of \n\n$result = $a > $b ?? $x !! $y;\n\nRuby\n\ Example of using this operator in Ruby:\n\n1 == 2 ? \"true value\" : \"false value\"\ \n\nReturns \"false value\".\n\nA traditional if-else construct in Ruby is written:\n\ \nif a > b\n result = x\nelse\n result = y\nend\n\nThis could also be written\ \ as:\n\nresult = if a > b\n x\nelse\n y\nend\n\nThese can be rewritten as the\ \ following statement:\n\nresult = a > b ? x : y\n\nRust\nBeing an expression-oriented\ \ programming language, Rust's existing if expr1 else expr2 syntax can behave\ \ as the traditional ternary operator does. Earlier versions of the language\ \ did have the operator but it was removed due to duplication with .\n\nNote\ \ the lack of semi-colons in the code below compared to a more declarative ...\ \ block, and the semi-colon at the end of the assignment to .\n\nlet x = 5;\n\n\ let y = if x == 5 {\n 10\n} else {\n 15\n};\n\nThis could also be written\ \ as:\n\nlet y = if x == 5 { 10 } else { 15 };\n\nNote that curly braces are mandatory\ \ in Rust conditional expressions.\n\nYou could also use a expression:\n\nlet\ \ y = match x {\n 5 => 10,\n _ => 15,\n};\n\nScheme\nSame as in Common Lisp.\ \ Every expression has a value. Thus the builtin can be used:\n\n(let* ((x 5)\n\ \ (y (if (= x 5) 10 15)))\n ...)\n\nSmalltalk\nEvery expression (message\ \ send) has a value. Thus can be used:\n\n|x y|\n\nx := 5.\ny := (x == 5) ifTrue:[10]\ \ ifFalse:[15].\n\nSwift\nThe ternary conditional operator of Swift is written\ \ in the usual way of the C tradition, and is used within expressions.\n\nlet\ \ result = a > b ? a : b\n\nTcl\nIn Tcl, this operator is available in expr expressions\ \ only:\n\nset x 5\nset y [expr {$x == 5 ? 10 : 15}]\n\nOutside of expr, if can\ \ be used for a similar purpose, as it also returns a value:\npackage require\ \ math\n\nset x 5\nset y [if {$x == 5} {\n ::math::random $x\n} else {\n \ \ ::math::fibonacci $x\n}]\n\nTestStand\nIn a National Instruments TestStand\ \ expression, if condition is true, the first expression is evaluated and becomes\ \ the output of the conditional operation; if false, the second expression is\ \ evaluated and becomes the result. Only one of two expressions is ever evaluated.\n\ \ncondition ? first_expression : second_expression\n\nFor example:\n\nRunState.Root.Parameters.TestSocket.Index\ \ == 3 ? Locals.UUTIndex = 3 : Locals.UUTIndex = 0\n\nSets the local variable\ \ to 3 if is 3, otherwise it sets to 0.\n\nSimilar to other languages, first_expression\ \ and second_expression do not need to be autonomous expressions, allowing the\ \ operator to be used for variable assignment:\n\nLocals.UUTIndex = ( RunState.Root.Parameters.TestSocket.Index\ \ == 3 ? 3 : 0 )\n\nVerilog\nVerilog is technically a hardware description language,\ \ not a programming language though the semantics of both are very similar. It\ \ uses the syntax for the ternary operator.\n\n// using blocking assignment\n\ wire out;\nassign out = sel ? a : b;\n\nThis is equivalent to the more verbose\ \ Verilog code:\n\n// using blocking assignment\nwire out;\nif (sel === 1) //\ \ sel is 1, not 0, x or z\n assign out = a;\nelse if (sel === 0) // sel is\ \ 0, x or z (1 checked above)\n assign out = b;\nelse // sel is x or z (0\ \ and 1 checked above)\n assign out = [comment]; // a and b are compared bit\ \ by bit, and return for each bit\n // an x if bits\ \ are different, and the bit value if the same\n\nVisual Basic\nVisual Basic doesn't\ \ use per se, but has a very similar implementation of this shorthand statement.\ \ Using the first example provided in this article, it can do:\n\n' variable =\ \ IIf(condition, value_if_true, value_if_false)\nDim opening_time As Integer =\ \ IIf((day = SUNDAY), 12, 9)\n\nIn the above example, is a ternary function,\ \ but not a ternary operator. As a function, the values of all three portions\ \ are evaluated before the function call occurs. This imposed limitations, and\ \ in Visual Basic .Net 9.0, released with Visual Studio 2008, an actual conditional\ \ operator was introduced, using the keyword instead of . This allows the following\ \ example code to work:\n\nDim name As String = If(person Is Nothing, \"\", person.Name)\n\ \nUsing , would be evaluated even if person is (Nothing), causing an exception.\ \ With a true short-circuiting conditional operator, is not evaluated unless\ \ person is not .\n\nVisual Basic Version 9 has added the operator in addition\ \ to the existing function that existed previously. As a true operator, it does\ \ not have the side effects and potential inefficiencies of the function.\n\n\ The syntaxes of the tokens are similar: vs . As mentioned above, the function\ \ call has significant disadvantages, because the sub-expressions must all be\ \ evaluated, according to Visual Basic's evaluation strategy for function calls\ \ and the result will always be of type variant (VB) or object (VB.NET). The operator\ \ however does not suffer from these problems as it supports conditional evaluation\ \ and determines the type of the expression based on the types of its operands.\n\ \nResult type\nClearly the type of the result of the operator must be in some\ \ sense the type unification of the types of its second and third operands. In\ \ C this is accomplished for numeric types by arithmetic promotion; since C does\ \ not have a type hierarchy for pointer types, pointer operands may only be used\ \ if they are of the same type (ignoring type qualifiers) or one is void or NULL.\ \ It is undefined behaviour to mix pointer and integral or incompatible pointer\ \ types; thus\n\nnumber = spell_out_numbers ? \"forty-two\" : 42;\n\nwill result\ \ in a compile-time error in most compilers.\n\n?: in style guidelines\nConditional\ \ operators are widely used and can be useful in certain circumstances to avoid\ \ the use of an statement, either because the extra verbiage would be too lengthy\ \ or because the syntactic context does not permit a statement. For example:\n\ \n #define MAX(a, b) (((a)>(b)) ? (a) : (b))\n\nor\n\n for (i = 0; i < MAX_PATTERNS;\ \ i++)\n c_patterns[i].ShowWindow(m_data.fOn[i] ? SW_SHOW : SW_HIDE);\n\n(The\ \ latter example uses the Microsoft Foundation Classes Framework for Win32.)\n\ \nInitialization\nAn important use of the conditional operator is in allowing\ \ a single initialization statement, rather than multiple initialization statements.\ \ In many cases this also allows single assignment and for an identifier to be\ \ a constant.\n\nThe simplest benefit is avoiding duplicating the variable name,\ \ as in Python:\n\nx = 'foo' if b else 'bar'\n\ninstead of:\n\nif b:\n x =\ \ 'foo'\nelse:\n x = 'bar'\n\nMore importantly, in languages with block scope,\ \ such as C++, the blocks of an if/else statement create new scopes, and thus\ \ variables must be declared before the if/else statement, as:\n\nstd::string\ \ s;\nif (b)\n s = \"foo\";\nelse\n s = \"bar\";\n\nUse of the conditional\ \ operator simplifies this:\n\nstd::string s = b ? \"foo\" : \"bar\";\n\nFurthermore,\ \ since initialization is now part of the declaration, rather than a separate\ \ statement, the identifier can be a constant (formally, of type):\n\nconst std::string\ \ s = b ? \"foo\" : \"bar\";\n\nCase selectors\nWhen properly formatted, the conditional\ \ operator can be used to write simple and coherent case selectors. For example:\n\ \nvehicle = arg == 'B' ? bus :\n arg == 'A' ? airplane :\n arg\ \ == 'T' ? train :\n arg == 'C' ? car :\n arg == 'H' ? horse\ \ :\n feet;\n\nAppropriate use of the conditional operator\ \ in a variable assignment context reduces the probability of a bug from a faulty\ \ assignment as the assigned variable is stated just once as opposed to multiple\ \ times.\n\nProgramming languages without the conditional operator\nThe following\ \ are examples of notable general-purpose programming languages that don't provide\ \ a conditional operator:\n\n CoffeeScript\n Go programming language\n MATLAB\n\ \ Pascal although Object Pascal / Delphi do have a function to do the same (with\ \ caveats)\n Rust The construct is an expression and can be used to get the same\ \ functionality.\n \n PowerShell (in old versions) an elegant workaround is to\ \ use (<value for true>,<value for false>)[!(<condition>)]\n\nSee also\n IIf,\ \ inline if function\n Null coalescing operator, operator\n Elvis operator, ,\ \ or sometimes , as a shorthand binary operator\n Conditioned disjunction, equivalent\ \ ternary logical connective.\n\nReferences\n\nExternal links\n Description of\ \ If operator in Visual Basic\n Description of Conditional Expression in Python\ \ (PEP 308)\n Description in the Java Language Specification\n Description in\ \ the PHP Language Documentation\n\nConditional constructs\nOperators (programming)\n\ Ternary operations\nArticles with example code\n\nde:Bedingte Anweisung und Verzweigung#Auswahloperator" - "Impression Products, Inc. v. Lexmark International, Inc., 581 U.S. ___ (2017),\ \ is a decision of the Supreme Court of the United States on the exhaustion doctrine\ \ in patent law in which the Court held that after the sale of a patented item,\ \ the patent holder cannot sue for patent infringement relating to further use\ \ of that item, even when in violation of a contract with a customer or imported\ \ from outside the United States. The case concerned a patent infringement lawsuit\ \ brought by Lexmark against Impression Products, Inc., which bought used ink\ \ cartridges, refilled them, replaced a microchip on the cartridge to circumvent\ \ a digital rights management scheme, and then resold them. Lexmark argued that\ \ as they own several patents related to the ink cartridges, Impression Products\ \ was violating their patent rights. The U.S. Supreme Court, reversing a 2016\ \ decision of the Federal Circuit, held that the exhaustion doctrine prevented\ \ Lexmark's patent infringement lawsuit, although Lexmark could enforce restrictions\ \ on use or resale of its contracts with direct purchasers under regular contract\ \ law (but not as a patent infringement lawsuit). Besides printer and ink manufacturers,\ \ the decision of the case could affect the markets of high tech consumer goods\ \ and prescription drugs.\n\nBackground\n\nFactual setting\n\nLexmark International,\ \ Inc. makes and sells printers and toner cartridges for its printers. Lexmark\ \ owns a number of patents that cover its cartridges and their use. Lexmark sold\ \ the cartridges at issue in this case—some in the United States and some abroad.\n\ \nDomestic sales \n\nLexmark's domestic sales were in two categories. A \"Regular\ \ Cartridge\" is sold at \"list price\" and confers an absolute title and property\ \ right on the buyer. A \"Return Program Cartridge\" is sold at a discount of\ \ about 20 percent, and is subject to post-sale restrictions: The buyer may not\ \ reuse the cartridge after the toner runs out and may not transfer it to anybody\ \ else. The first branch of the case turns on the legal status of these post-sale\ \ restrictions.\n\nLexmark manufactured the toner cartridges with microchips in\ \ them, which send signals to the printers indicating toner level. When the amount\ \ of toner in a cartridge falls below a certain level, the printer will not operate\ \ with that cartridge. Also, the printer will not operate with a Return Program\ \ Cartridge that has been refilled by a third party. Thus, Lexmark's technology\ \ prevented violation of the post-sale restriction against refilling the Return\ \ Program Cartridges. The Regular Cartridges do not have this anti-refill feature\ \ and can therefore be refilled and reused (but they cost 20 percent more).\n\n\ \"To circumvent this technological measure,\" however, \"third parties have 'hacked'\ \ the Lexmark microchips. They created their own \"unauthorized replacement\"\ \ microchips that, when installed in a Return Program cartridge, fool the printer\ \ into allowing reuse of that cartridge. Various companies purchase used Return\ \ Program Cartridges from the customers who bought them from Lexmark. They replace\ \ the microchips with \"unauthorized replacement\" microchips, refill the cartridges\ \ with toner, and sell the \"re-manufactured\" cartridges to resellers such as\ \ Impression Products for marketing to consumers for use with Lexmark printers.\ \ Lexmark had previously argued in Lexmark International, Inc. v. Static Control\ \ Components, Inc. that replacing these microchips violated copyright law and\ \ the Digital Millennium Copyright Act (DMCA), but both federal and the Supreme\ \ Court have ruled against Lexmark, affirming that replacing the microchips is\ \ not in violation of copyright.\n\nImported cartridges\n\nThe second branch of\ \ the case involves cartridges that Lexmark sold outside the US. While some of\ \ the foreign-sold cartridges were Regular Cartridges and some were Return Program\ \ Cartridges, this branch of the case does not involve any distinction among the\ \ two types of imported cartridges.\n\nTrial court decision\n\nThe district court\ \ granted Impression's motion to dismiss Lexmark's claim of infringement involving\ \ the single-use cartridges Lexmark had first sold in the United States. The district\ \ court concluded that the Supreme Court in Quanta Computer, Inc. v. LG Electronics,\ \ Inc. found exhaustion where \"the Supreme Court determined that the agreements\ \ [at issue] broadly authorized Intel [the seller] to sell the licensed products\ \ without restrictions or conditions.\" The district court said \"that Quanta\ \ overruled Mallinckrodt sub silentio,\" and therefore \"those post-sale use restrictions\ \ do not prevent patent rights from being exhausted given that the initial sales\ \ were authorized and unrestricted.\"\n\nThe district court held, however, that\ \ the exhaustion doctrine did not apply to the cartridges that Lexmark had sold\ \ abroad. It said that international exhaustion did not apply to patents because\ \ Kirtsaeng v. John Wiley & Sons, Inc., which established international exhaustion\ \ in at least some cases, applied only to copyrights. The court therefore denied\ \ Impression's motion to dismiss Lexmark's claim of infringement involving the\ \ cartridges Lexmark had sold abroad.\n\nGovernment amicus curiae position\n\n\ In its amicus curiae brief, the US Government argued that Mallinckrodt had been\ \ wrongly decided in 1992 and in any case it had been overruled sub silentio in\ \ Quanta. It stated:\nIn the view of the United States, the first authorized sale\ \ of a patented article in the United States wholly exhausts the patentee's exclusive\ \ rights in that article, notwithstanding any post-sale restriction imposed by\ \ the patentee.\nThe government also argued that the decision of Jazz Photo Corp.\ \ v. United States International Trade Commission (2001) should be partially overruled\ \ in light of Kirtsaeng insofar as it held that foreign sales can never exhaust\ \ US patent rights. When the patentee neither makes nor authorizes a foreign sale,\ \ as occurred in Boesch v. Graff, it is proper to say no exhaustion occurred.\ \ But when the patentee makes or authorizes a foreign sale, and fails expressly\ \ to reserve its US rights, then exhaustion should be found. In the present case,\ \ Lexmark made the foreign sales and failed to expressly reserve its US rights;\ \ therefore, the sale exhausted the patent rights.\n\nFederal Circuit decision\n\ \nThe parties each appealed. After a three judge panel had heard oral argument\ \ the Federal Circuit sua sponte set the case for argument en banc in the first\ \ instance and invited the filing of amicus curiae briefs.\n\nMajority opinion\n\ \nJudge Taranto, writing for a 10-2 majority, reaffirmed both of the prior Federal\ \ Circuit rulings. In summary, the court held:\n\nFirst, we adhere to the holding\ \ of Mallinckrodt, Inc. v. Medipart, Inc. that a patentee, when selling a patented\ \ article subject to a single-use/no-resale restriction that is lawful and clearly\ \ communicated to the purchaser, does not by that sale give the buyer, or downstream\ \ buyers, the resale/reuse authority that has been expressly denied. Such resale\ \ or reuse, when contrary to the known, lawful limits on the authority conferred\ \ at the time of the original sale, remains unauthorized and therefore remains\ \ infringing conduct under the terms of § 271. Under Supreme Court precedent,\ \ a patentee may preserve its § 271 rights through such restrictions when licensing\ \ others to make and sell patented articles; Mallinckrodt held that there is no\ \ sound legal basis for denying the same ability to the patentee that makes and\ \ sells the articles itself. We find Mallinckrodt's principle to remain sound\ \ after the Supreme Court's decision in Quanta Computer, Inc. v. LG Electronics,\ \ Inc. . . .\n\nSecond, we adhere to the holding of Jazz Photo Corp. v. International\ \ Trade Comm'n, that a U.S. patentee, merely by selling or authorizing the sale\ \ of a U.S.-patented article abroad, does not authorize the buyer to import the\ \ article and sell and use it in the United States, which are infringing acts\ \ in the absence of patentee-conferred authority. Jazz Photo's no-exhaustion ruling\ \ recognizes that foreign markets under foreign sovereign control are not equivalent\ \ to the U.S. markets under U.S. control in which a U.S. patentee's sale presumptively\ \ exhausts its rights in the article sold. A buyer may still rely on a foreign\ \ sale as a defense to infringement, but only by establishing an express or implied\ \ license—a defense separate from exhaustion, as Quanta holds—based on patentee\ \ communications or other circumstances of the sale. We conclude that Jazz Photo's\ \ no-exhaustion principle remains sound after the Supreme Court's decision in\ \ Kirtsaeng v. John Wiley & Sons, Inc., in which the Court did not address patent\ \ law or whether a foreign sale should be viewed as conferring authority to engage\ \ in otherwise-infringing domestic acts. Kirtsaeng is a copyright case holding\ \ that 17 U.S.C. § 109(a) entitles owners of copyrighted articles to take certain\ \ acts \"without the authority\" of the copyright holder. There is no counterpart\ \ to that provision in the Patent Act, under which a foreign sale is properly\ \ treated as neither conclusively nor even presumptively exhausting the U.S. patentee's\ \ rights in the United States.\n\nDomestic exhaustion\n\nIn this part of its opinion,\ \ the Federal Circuit reaffirmed its Mallincrodt decision and rejected contentions\ \ that Quanta had silently overruled it.\n\n§ 271 abrogates common-law rule\n\n\ The court began by distinguishing the Patent Act's and Copyright Act's respective\ \ approaches to infringement. in 17 U.S.C. § 109(a) the Copyright Act says, \"\ Notwithstanding the provisions of section 106(3),\" defining infringement by selling,\ \ a purchaser \"is entitled, without the authority of the copyright owner, to\ \ sell or otherwise dispose of the possession\" of a purchased copy of a work.\ \ In contrast, the Patent Act contains no exhaustion provision. Therefore, the\ \ Patent Act requires a \"conferral of 'authority' by the patentee . . . in order\ \ for the actions listed in § 271(a) not to constitute infringement.\" This means\ \ there must be \"permission from the patentee\" to avoid infringement. The court\ \ does not accept exhaustion as a form of \"constructive\" permission. Hence,\ \ if the patentee places explicit limits or conditions on its permission, they\ \ qualify the scope of the permission. This has the effect of limiting the common\ \ law.\n\nGeneral Talking Pictures rule applies to \"conditional\" sale\n\nThe\ \ court turned to the General Talking Pictures decision, which holds \"that Lexmark\ \ would not have exhausted its patent rights in those cartridges, upon the manufacturing\ \ licensee's sale (the first sale), if a buyer with knowledge of the restrictions\ \ resold or reused them in violation of the restrictions.\" Although the government\ \ in its amicus curiae brief and defendant Impression argue \"that a different\ \ result is required—that Lexmark automatically lost its patent rights—simply\ \ because Lexmark sold the Return Program cartridges itself, subject to the same\ \ communicated restriction, rather than having left the manufacture and sale to\ \ others under license,\" the court does not accept that:\n\nWe conclude otherwise,\ \ as we did in Mallinckrodt and subsequent decisions. A sale made under a clearly\ \ communicated, otherwise-lawful restriction as to post-sale use or resale does\ \ not confer on the buyer and a subsequent purchaser the \"authority\" to engage\ \ in the use or resale that the restriction precludes. And there is no sound reason,\ \ and no Supreme Court precedent, requiring a distinction that gives less control\ \ to a practicing-entity patentee that makes and sells its own product than to\ \ a non-practicing-entity patentee that licenses others to make and sell the product.\n\ \nQuanta distinguishable and inapplicable\n\nThe court turned to the Quanta decision\ \ and found it inapplicable to the present issues. \"'Quanta did not involve a\ \ patentee's sale at all, let alone one subject to a restriction or, more particularly,\ \ a single-use/no-resale restriction.\" Rather, Quanta involved a patentee's (LGE's)\ \ license to a manufacturer (Intel) that sold to the accused infringer (Quanta).\ \ LGE had not limited Intel's license to manufacture the patented product, although\ \ it imposed contractual obligations on Intel. \"No conditions limited Intel's\ \ authority to sell products substantially embodying the patents.\" The Federal\ \ Circuit emphasized: \"There were no patentee sales, and there were no restrictions\ \ on the sales made by the licensee.\" Those facts were removed from the case\ \ at bar. Thus the Quanta \"Court's discussion of that issue does not undermine\ \ Mallinckrodts ruling that a patentee can preserve its patent rights through\ \ restrictions on its sales.\" The Federal Circuit also emphasized as significant\ \ the failure of the Quanta Court to explicitly repudiate Mallinckrodt despite\ \ the fact that in its amicus brief \"the government prominently featured an argument\ \ that Mallinckrodt was incorrect and should be repudiated.\"\n\nPrior cases\n\ \nThe court then turned to the prior Supreme Court cases. Reviewing them, it found\ \ that although they used sweeping language that a patentee's sale of the patented\ \ product placed it beyond the reach of the patent, so that no post-sale restriction\ \ could be enforced under the patent laws, that language went beyond the actual\ \ facts of the cases. First, the sales were in most cases without any condition\ \ or restriction on what the buyer might do with the product. Second, in the cases\ \ where an explicit condition or restriction was imposed, the case involved a\ \ tie-in or a price-fix.\n\nThe Court conceded that in the General Electric case,\ \ the Supreme Court had said: \"It is well settled, as already said, that where\ \ a patentee makes the patented article, and sells it, he can exercise no future\ \ control over what the purchaser may wish to do with the article after his purchase.\ \ It has passed beyond the scope of the patentee's rights.\" But that case involved\ \ an antitrust challenge to GE's distribution of lamps that did not meet that\ \ description. The case involved price restrictions on a licensed manufacturer.\ \ The Federal Circuit then explained that the word \"settled\" in the Supreme\ \ Court's statement had a special, narrow meaning:\n\"We read that language to\ \ deem 'settled' only what was settled in the cited precedents—a patentee's sales\ \ without restrictions exhaust patent rights in the item sold.\" Thus, the Supreme\ \ Court's sweeping exhaustion language applies precedentially only to cases in\ \ which either the sale was without condition or restriction or else the sale\ \ was made with a tie-in or price-fixing condition. \"But the Court did not rule\ \ that all restrictions on a patentee's sale were ineffective to preserve the\ \ patentee's patent-law rights.\"\n\nSimilarly, in United States v. Univis Lens\ \ Co., the Supreme Court's sweeping language must now be limited to the factual\ \ context of the case:\n\nMoreover, although some language in Univis, like language\ \ in other decisions in the area, can be taken out of context and read as going\ \ beyond the specific restrictions involved, the most the Court ruled, even as\ \ to patent law all by itself, was that a vertical price-control restriction was\ \ ineffective to preserve patent rights after sales of articles embodying the\ \ patents. While Univis is controlling on what it decided on the issues before\ \ it, we do not think it appropriate to give broad effect to language in Univis,\ \ taken out of context, to support an otherwise-unjustified conclusion here on\ \ a question not faced there.\n\nThe Federal Circuit therefore drew this conclusion\ \ from the past series of Supreme Court cases on exhaustion:\n\nFor the foregoing\ \ reasons, we think that the best lesson to draw from the Supreme Court's precedents,\ \ as applied to the question before us, is that a patentee may preserve its patent\ \ rights by otherwise-proper restrictions when it makes and sells patented articles\ \ itself and not only when it contracts out manufacturing and sales.\n\nPatent\ \ law trumps common law\n\nThe Federal Circuit returned to the common law and\ \ Lord Coke's commentary on it. Again, the court insisted that Congress had overridden\ \ the common law's prohibitions on post-sale restraints, in order to promote technological\ \ progress:\n\n[W]hatever considerations might go into a jurisdiction's choice\ \ as to the background rule for personal property in general, lawmaking authorities\ \ may reasonably make different choices for particular kinds of property. Notably,\ \ as to intellectual property in its various forms, Congress, implementing the\ \ Constitution, has long deemed it important to incentivize creation and disclosure\ \ through grants to the creator of rights to exclude others for a time. . . .\ \ That overriding legislative prescription removes the patented-article sale from\ \ the scope of Lord Coke's 1628 description of his country's general judicially\ \ fashioned property law. . . . In short, notwithstanding Lord Coke's description\ \ of English general personal-property judge-made law, the patent-specific statutory\ \ analysis must govern here.\n\nLikely effects on public\n\nThe court then turned\ \ to what it called \"the likely real-world consequences of one answer or another\ \ to the exhaustion question presented here.\" The court noted that in Kirtsaeng\ \ the Supreme Court had envisioned serious adverse effects on competition unless\ \ Coke's 1628 property law rules were followed. The Federal Circuit said that\ \ did not apply to patents:\n\n[W]e see no basis for predicting the extreme, lop-sided\ \ impacts the Court found plausible in Kirtsaeng in different circumstances. Mallinckrodt\ \ has been the governing case law since 1992 and has been reiterated in subsequent\ \ precedent. And yet we have been given no reliable demonstration of widespread\ \ problems not being solved in the marketplace. Given General Talking Pictures,\ \ the only question is about patentees' ability to do for their own sales what\ \ they already can do by contracting out their manufacturing and sales. Regarding\ \ the specific scenario we are addressing today—in which the patentee has sought\ \ to preserve its patent rights by conditioning its first sale on a single-use/no-resale\ \ restriction of which the accused infringer had adequate notice at the time of\ \ purchase—we have been given no proof of a significant problem with enforcing\ \ patent rights.\n\nFurthermore, the Federal Circuit maintained, the conduct challenged\ \ here can have benefits. Under Lexmark's program, customers who agree to the\ \ restriction pay a lower price than those who do not. It could be that the companies\ \ that refill the cartridges use inferior products that could harm the Lexmark\ \ machines, which \"could harm Lexmark's reputation.\" To assume that the restrictions\ \ are illegitimate would run counter to the trends \"over the last four decades,\ \ that have displaced the strict condemnation of various vertical restrictions\ \ that characterized\" earlier antitrust and patent-misuse law in the first part\ \ of the twentieth century. \"Field-of-use, territorial, and other limitations\ \ on intellectual property licenses may serve procompetitive ends by allowing\ \ the licensor to exploit its property as efficiently and effectively as possible.\"\ \ Therefore, the court concluded it is appropriate to apply to post-sale restrictions\ \ the same tolerance that the General Talking Pictures doctrine accords limitations\ \ in manufacturing licenses.\n\nInternational exhaustion\n\nIn this part of its\ \ opinion, the Federal Circuit reaffirmed its Jazz Photo opinion and rejected\ \ contentions that Kirtsaeng had undermined the basis for Jazz Photo. The Federal\ \ Circuit insisted that \"Kirtsaeng says nothing about patent law.\"\n\nThe court\ \ emphasized the differences between patent law and copyright law. For example,\ \ patent law gives patentees an exclusive right to use of the invention but copyright\ \ law gives no general exclusionary right as to use (it gives exclusive public\ \ performance and display \"use\" rights, but not others). Also, it is much more\ \ costly and time-consuming to obtain a patent than a copyright. The court did\ \ not explain, however, the way that or other differences between copyrights and\ \ patents called for contrary results as to international exhaustion.\n\nThe court\ \ did say that the US patent statute gives patentees the reward available from\ \ \"sales in American markets, not from sales in foreign markets.\" A sale in\ \ a foreign market therefore does not furnish a proper basis for finding exhaustion.\ \ \"American markets differ substantially from markets in many other countries,\ \ and not just because of disparities in wealth that can lead to dramatically\ \ different prices\" in this country and abroad (as was the case in Kirtsaeng).\ \ \"Government policies differ dramatically, including policies on price regulation\ \ and, most particularly, policies on the availability and scope of patent protection.\"\ \ The court did not explain further, however, whether and how such dramatic differences\ \ in policy applied to the toner cartridges at issue in the present case.\n\n\ The court then turned to the only Supreme Court case on foreign exhaustion, Boesch\ \ v. Graff. In that case, Graff was the assignee of a US patent. Boesch bought\ \ the product from a German supplier who had a prior-user right under German law\ \ to make and sell the product, because the supplier had begun activity before\ \ the application for the German counterpart patent was filed. The US assignee\ \ and the inventor had no connection with Boesch. When Graff imported the product\ \ into the US, Boesch sued for infringement. The US courts found Boesch liable.\ \ The rights that Boesch had under German law did not entitle him to import the\ \ product into the US. That is governed by US law. The US patentee had never \"\ received any royalty or given any license to use the patented article in any part\ \ of the United States.\"\nAccordingly, the court held, a foreign sale does not\ \ of its own force authorize importation into the US.\n\nThis does not mean, however,\ \ that a patentee by its conduct cannot waive its US rights, be estopped from\ \ asserting them, or be found to have granted an implied license.\n\nThe court\ \ expressed concern that overruling Jazz Photo would harm the US drug industry:\n\ \nThere seems to be no dispute that U.S.-patented medicines are often sold outside\ \ the United States at substantially lower prices than those charged here and,\ \ also, that the practice could be disrupted by the increased arbitrage opportunities\ \ that would come from deeming U.S. rights eliminated by a foreign sale made or\ \ authorized by the U.S. patentee.\n\nFinally, the court rejected a proposal that\ \ exhaustion should be presumed unless the patentee express states that it reserves\ \ its US rights. Foreign governments might \"prohibit sellers from stating reservations\ \ of rights that would make importation into and sale in the United States more\ \ difficult.\" Also: \"Intermediary companies between the foreign purchase and\ \ the importation into the United States may be created that make it difficult\ \ for the U.S. patentee to carry an affirmative burden of proving adequate notice\ \ of reservations attached to a foreign-sold article.\"\n\nDissenting opinion\n\ \nJudge Dyk, joined by Judge Hughes, dissented from both branches of the court's\ \ exhaustion analysis. Judge Dyk summarized his dissent in these terms:\n\nI would\ \ overrule our decision in Mallinckrodt as inconsistent with governing Supreme\ \ Court authority and overrule Jazz Photo to the extent that it imposes a blanket\ \ ban on foreign exhaustion. I would recognize foreign exhaustion where the U.S.\ \ rights holder has not notified the buyer of its retention of the U.S. patent\ \ rights.\n\nDomestic exhaustion\n\nIn this part of the dissent, Judge Dyk argued\ \ that the majority had misunderstood the Supreme Court's exhaustion jurisprudence\ \ in order to substitute its own ideas of the proper balance between patent rights\ \ and public rights. He began by saying:\n\nFirst, I agree with the government\ \ that Mallinckrodt was wrong when decided, and in any event cannot be reconciled\ \ with the Supreme Court's recent decision in Quanta Computer, Inc. v. LG Electronics,\ \ Inc. We exceed our role as a subordinate court by declining to follow the explicit\ \ domestic exhaustion rule announced by the Supreme Court.\n\nHe argued that since\ \ 1850 the Supreme Court has held that a sale by the patentee or its licensee\ \ exhausts all patent rights. In such cases, \"The question of whether the seller\ \ has 'authorized' the buyer to use or resell the item is simply irrelevant.\"\ \ Post-sale restrictions could not be enforced under federal patent law. The only\ \ Supreme Court case to depart from that principle was Henry v. A.B. Dick Co.,\ \ and it was explicitly overruled five years later by Motion Picture Patents Co.\ \ v. Universal Film Mfg. Co. The principle of the overruled Dick case that a patentee\ \ could impost a post-sale restriction by giving a buyer notice of it was \"the\ \ same as the panel's holding in Mallinckrodt and the majority's holding in this\ \ case.\"\n\nHe insisted that the majority opinion misread the Motion Picture\ \ Patents decision by asserting \"that it only 'held particular restrictions improper'\ \ . . . but 'did not rule that all restrictions on a patentee's sale were ineffective\ \ to preserve the patentee's patent-law rights.'\" He explained:\n\nThat is not\ \ accurate. Motion Picture Patents did not leave behind the remnants of A.B. Dick—minus\ \ tie-ins and resale price maintenance. To the contrary, the Court in Motion Picture\ \ Patents found that \"[t]he patent law furnishes no warrant for\" the restrictions\ \ imposed by the patent owner.\n\nLater cases, such as Quanta, confirmed this\ \ \"broad patent exhaustion rule [in Motion Picture Patents] and left no room\ \ for a resurrection of A.B. Dick.\"\n\nHe next turned to the majority's referenced\ \ to \"conditional sales\" and \"unconditional sales,\" and said that the majority\ \ misconstrued the terms. \"Conditional sales,\" he said, as used in pre-Mallinckrodt\ \ case law referred only to the retention of title for a security interest in\ \ installment purchases. \"In other words, a sale with restrictions could nonetheless\ \ be an 'unconditional' sale in which title passes, with the restrictions invalid\ \ under the patent laws because of exhaustion.\"\n\nHe then criticized the majority\ \ for making up special rules for patent cases that differed from the common law\ \ and general legal principles, citing Supreme Court admonitions not to do that--\"\ The Supreme Court has repeatedly instructed us not to ignore traditional legal\ \ principles to fashion rules 'unique to patent disputes.'\"\n\nFinally, Judge\ \ Dyk took issue on multiple grounds with the majority's efforts to distinguish\ \ and limit the Supreme Court's rulings. \"The majority's justifications for refusing\ \ to follow Supreme Court authority establishing the exhaustion rule misconceive\ \ our role as a subordinate court.\" Each justification in the majority decision\ \ was unsupportable, he said.\n\n \"First, the majority characterizes the statement\ \ of the exhaustion rule in the Supreme Court cases as mere dictum because in\ \ those cases there was either no restriction imposed or the restriction would\ \ otherwise violate the antitrust laws. But the cases impose no such qualification\ \ on the rule announced. The Supreme Court has repeatedly advised the courts of\ \ appeals that our task is to follow the rules proclaimed by the Court, and not\ \ to attempt to distinguish Supreme Court cases on their facts.\"\n \"Second,\ \ the majority relies on 35 U.S.C. §§ 271(a) and 154(a)(1) to suggest that a broad\ \ reading of the exhaustion doctrine is inconsistent with statutory language making\ \ an act of infringement . . . any use or sale of a patented invention 'without\ \ authority' of the patent owner, and providing the patent owner with a 'right\ \ to exclude.'\" But the patent exhaustion doctrine is a limitation on the operation\ \ of those sections, and applies notwithstanding them.\n \"Third, the majority\ \ claims that giving full sweep to the articulation of the exhaustion doctrine\ \ in Quanta and other cases would be inconsistent with the Supreme Court's decision\ \ in General Talking Pictures Corp. v. Western Electric Co. . . . The majority\ \ suggests it would be incongruous if 'a patentee cannot preserve its patent rights\ \ against uses of a patented article . . . if, instead of licensing someone else\ \ to make and sell the article, it chooses to make and sell the article itself.' \"\ \n\nBut General Talking Pictures was a case of a license to manufacture in a limited\ \ field, not a sale with a post-sale restriction. The cases recognize that distinction.\ \ Thus, in Quanta the Supreme Court stated that General Talking Pictures \"held\ \ that exhaustion did not apply because the manufacturer had no authority to sell\ \ the amplifiers for commercial use.\" And where the manufacturer in that case\ \ (Intel) did have a general authority to make and sell, the Supreme Court held\ \ that exhaustion applied to the sale.\n\nThe majority found \"tension\" between\ \ \"the Supreme Court's broad statement of the exhaustion rule and General Talking\ \ Pictures\" and sought to resolve it by extending the rule of General Talking\ \ Pictures and contracting the exhaustion doctrine in the area of possible conflict.\ \ But, Dyk maintained:\n\n[I]t is not our task to ignore Supreme Court rulings\ \ as \"unjustifi[ed]\" or \"unsound\" because they are purportedly inconsistent\ \ with other Supreme Court cases. The distinction between restrictions on sales\ \ (impermissible) and restrictions on licensees (permissible) exists in the Court's\ \ precedent, and it is not for us to decide if it is a sound distinction.\n\n\ \ \"Finally, the majority proposes that we should somehow sustain the restriction\ \ here because it may be pro-competitive. Exhaustion does not turn on whether\ \ a particular post-sale restriction is desirable or undesirable, pro-competitive\ \ or anti-competitive, but whether the sale was authorized and the item has passed\ \ beyond the scope of the patent monopoly.\" Furthermore, the Supreme Court said\ \ in Kirtsaeng that a prohibition on resale is \"manifestly anti-competitive.\"\ \n\nDyk concluded his discussion of domestic exhaustion with the statement: \"\ There is, in sum, no colorable basis for the majority's failure to follow the\ \ exhaustion rule for domestic sales as articulated by the Court in Quanta and\ \ numerous other cases.\"\n\nInternational exhaustion\n\nIn this part of the dissent,\ \ Judge Dyk argued for a nuanced balance that called for different results depending\ \ on whether the patentee was responsible for the sale abroad that was alleged\ \ to trigger exhaustion.\n\nHe began by pointing out that because Lexmark's foreign\ \ sales were made without any restrictions or reservations, \"even under the majority's\ \ cramped view of exhaustion, there is no question that the sales would have exhausted\ \ Lexmark's domestic patent rights. The issue is whether the foreign location\ \ of the sale should lead to a different result, as we previously held in Jazz\ \ Photo.\"\n\nHe then turned to \"the centerpiece of the majority's holding that\ \ there is a doctrinal blanket ban on foreign exhaustion, namely the Supreme Court's\ \ decision in Boesch v. Graff. But \"Boesch announced no such blanket ban,\" he\ \ said. \"It did not even involve an authorized sale by the holder of U.S. patent\ \ rights but rather a sale by a third party under a foreign law's prior use exception.\"\ \ But \"Boesch does not apply here because the foreign sales were made by Lexmark.\"\ \n\nIn every US lower court decision before Jazz Photo: \"When the sale was made\ \ by an entity not holding U.S. patent rights, as in Boesch, or when the authorized\ \ foreign seller clearly reserved U.S. rights, there was no exhaustion.\" In contrast,\ \ \"where the foreign sale was made by a seller holding U.S. patent rights without\ \ a contractual reservation of U.S. rights, exhaustion occurred as a result of\ \ an authorized foreign sale.\"\n\nDyk maintained that \"Kirtsaeng provides significant\ \ guidance and cannot be dismissed as simply a copyright case, or as limited to\ \ the 'first sale' provision of the Copyright Act.\" Rather, the policies that\ \ animated Kirtsaeng typically apply to patent exhaustion. But because in some\ \ cases a difference may be significant, there should be abalanced approach. Dyk\ \ argued for \"put[ting] the burden on the U.S. rights holder to provide notice\ \ of a reservation of U.S. rights to the purchaser.\" Thus, he \"would recognize\ \ foreign exhaustion where the U.S. rights holder has not notified the buyer of\ \ its retention of the U.S. patent rights.\"\n\nSupreme Court\nIn March 2016,\ \ Impression filed a petition for certiorari in the U.S. Supreme Court. Impression\ \ presented these questions in its petition:\n\n   1. Whether a \"conditional\ \ sale\" that transfers title to the patented item while specifying post-sale\ \ restrictions on the article's use or resale avoids application of the patent\ \ exhaustion doctrine and therefore permits the enforcement of such post-sale\ \ restrictions through the patent law's infringement remedy.\n   2. Whether, in\ \ light of this Court's holding in Kirtsaeng v. John Wiley & Sons, Inc., 133 S.\ \ Ct. 1351, 1363 (2013), that the common law doctrine barring restraints on alienation\ \ that is the basis of exhaustion doctrine \"makes no geographical distinctions,\"\ \ a sale of a patented article—authorized by the U.S. patentee—that takes place\ \ outside of the United States exhausts the U.S. patent rights in that article.\n\ \nOn June 20, 2016, the Court invited the Solicitor General to file briefs in\ \ this case expressing the views of the United States. In October 2016, the government\ \ filed the requested amicus curiae brief. It recommended grant of certiorari\ \ on both questions. The brief argues that the \"Federal Circuit's decision misreads\"\ \ the Supreme Court's precedents and \"would substantially erode the exhaustion\ \ doctrine.\" The Supreme Court granted certiorari on December 2, 2016 and heard\ \ oral argument in the case on March 21, 2017. The Court published its decisions\ \ on May 30, 2017.\n\nMajority\nA unanimous Court found that Lexmark exhausted\ \ its patent rights upon first sale domestically, even with the single-use/no-resale\ \ restrictions imposed by Lexmark in contracts with its customers, although such\ \ restrictions could be enforced under contract law. The Court noted that the\ \ exhaustion doctrine has a long history and that any change would have significant\ \ effects on commerce in the modern world, noting that \"extending the patent\ \ rights beyond the first sale would clog the channels of commerce, with little\ \ benefit from the extra control that the patentees retain,\" noting that complex\ \ modern supply chains can involve large numbers of patents. Chief Justice Roberts,\ \ in his opinion, compared the situation to automobile repair shops: \"The business\ \ works because the shop can rest assured that, so long as those bringing in the\ \ cars own them, the shop is free to repair and resell those vehicles. That smooth\ \ flow of commerce would sputter if companies that make the thousands of parts\ \ that go into a vehicle could keep their patent rights after the first sale.\"\ \n\nSeven justices joined the Court's opinion extending that reasoning to items\ \ imported from abroad. Lexmark had argued, and the Federal Circuit agreed, that\ \ sale abroad \"does not trigger patent exhaustion unless the patentee 'expressly\ \ or implicitly transfers or licenses' its rights.\" The Court, however, ruled\ \ that \"[a]n authorized sale outside the United States, just as one within the\ \ United States, exhausts all rights under the Patent Act.\" The Court relied\ \ on its 2013 decision in Kirtsaeng v. John Wiley & Sons, Inc. on a nearly identical\ \ issue under copyright law. Because the underlying statute was not clear as to\ \ its geographical scope, the Court in Kirtsaeng decided that, because the statute\ \ was based in the common law exhaustion doctrine, which is not limited in geographic\ \ extent, the statute at issue was therefore not intended to be limited to only\ \ U.S. sales. Applying the same principle to patent law, which historically has\ \ a close connection with copyright law, was \"straightforward\" and \"the bond\ \ between [copyright and patent law] leaves no room for a rift on the question\ \ of international exhaustion\".\n\nPartial dissent\nJustice Ginsburg dissented\ \ from the Court's holding with respect to imported items. Adhering to substantially\ \ the same reasoning of her dissent in Kirtsaeng, Justice Ginsburg argued that\ \ because patent law is territorial and the sale of an item abroad is \"independent[]\ \ of the U.S. patent system, it makes little sense to say that such a sale exhausts\ \ an inventor's U.S. patent rights.\" She would have upheld the Federal Circuit's\ \ decision that sale abroad does not exhaust a patentee's rights in the United\ \ States.\n\nCommentary\n\nGerstein\n\nRobert M. Gerstein concluded that further\ \ review in the Supreme Court was likely:\nGiven the Supreme Court's interest\ \ in patent cases, a vigorous dissent in Lexmark that relies on a number of Supreme\ \ Court precedents, including Quanta and Kirtsaeng, and the position of the Justice\ \ Department that Quanta overruled Mallinckrodt, it would not be surprising to\ \ see the Supreme Court take up Lexmark in its next term.\n\nDodd and Dowd\n\n\ Jeff C. Dodd and Matthew J. Dowd viewed the decision as an affirmation of strong\ \ patent rights:\nLexmark embraces a very strong view of patent rights and a narrow\ \ view of the scope of exhaustion. It affirms that patent holders have wide latitude\ \ to segment and control distribution in the market channels for products covered\ \ by patents. This latitude is particularly wide with respect to limiting the\ \ import into the United States of patented goods sold in authorized sales in\ \ foreign markets even where restrictions on resale were not proven to have been\ \ communicated to foreign buyers. Even so, the court left open the possibility\ \ that foreign sales, under the right circumstances, may incorporate an implied\ \ license to import and use the product within the United States.\n\nCukierski\ \ and Masia\n\nKevin J. Cukierski and Adam H. Masia see the decision as \"pro-patent\ \ owner\" but warn again premature celebration:\nBut take caution—it is likely\ \ that the Supreme Court will be asked to hear the case. Given the tension between\ \ this case and the Supreme Court's language in Quanta and Kirtsaeng, along with\ \ the discord at the district court level and among commentators before the Federal\ \ Circuit's decision, there's a good chance the Supreme Court will do so. Until\ \ the Supreme Court has its say, you should take precautions in case the Supreme\ \ Court takes an expansive view of patent exhaustion and decides to remove these\ \ exceptions.\n\n\"Without Precedent\"\n\nAnother commentator (unsigned comment)\ \ indicated a skeptical view of the Federal Circuit's tendency to march to a different\ \ drummer. After quoting Judge Dyk's admonition, \"We exceed our role as a subordinate\ \ court by declining to follow the explicit domestic exhaustion rule announced\ \ by the Supreme Court,\" he (or she) observed:\nFor present purposes, it is simply\ \ worth noting that the Federal Circuit appears to be inching closer again to\ \ the concept that patent law is simply a unique beast, with unique rules and\ \ requirements. The Supreme Court has taken a skeptical view of that approach\ \ in the past. And may well again.\n\nJahn, Pichler, and Lo\n\nPaul Jahn, Rufus\ \ Pichler and Lincoln Lo raise many questions (mostly about \"clear communication\"\ ) about what the Lexmark majority opinion left unresolved:\n\n Conflict or tension\ \ with Quanta: \"Quanta expressly distinguished implied licenses and exhaustion,\ \ holding that disclaimers of license rights are 'irrelevant' where 'the right\ \ to practice the patents is based not on implied license but on exhaustion.' \"\ \ But \"the Federal Circuit appears to treat exhaustion like an implied license—one\ \ that the patentee can disclaim by 'clearly communicate[d]' restrictions.\" Quanta\ \ appears to hold that the patentee's attempt to impose a post-sale restriction\ \ on a manufacturing licensee is ineffective if the license does not conform to\ \ the General Talking Pictures case.\n \"[W]hat arrangement between a seller and\ \ buyer is sufficient to deny 'authority.'? It was undisputed in Lexmark that\ \ there was 'an express and enforceable contractual agreement' between Lexmark\ \ and each end-user, and that the no-resale and no-reuse restrictions were binding\ \ on end users. Yet throughout the Lexmark opinion, the majority suggests that\ \ restrictions may be sufficient if 'clearly communicated'—even if well short\ \ of a contractual meeting of the minds.\"\n Another way to put this is what is\ \ a \"clear communication\"? In Jazz Photo, the Federal Circuit noted that the\ \ \"package instructions [were] not in the form of a contractual agreement by\ \ the purchaser to limit reuse of the cameras.\" Accordingly, \"There was no\ \ showing of a 'meeting of the minds' whereby the purchaser, and those obtaining\ \ the purchaser's discarded camera, may be deemed to have breached a contract\ \ or violated a license limited to a single use of the camera.\" The writers conclude,\ \ therefore, \"It is unclear if the Federal Circuit intended an expansion of the\ \ patentee-seller's ability to avoid exhaustion.\"\n Also, how clear must a \"\ clear communication\" be? \"The Federal Circuit appears to limit infringement\ \ claims against subsequent downstream buyers to those 'having knowledge of the\ \ restrictions.' The appellate court did not elaborate on what defenses a subsequent\ \ downstream purchaser without knowledge may have, assuming no exhaustion. The\ \ court only mentions in passing that 'we do not have before us the questions\ \ that would arise, whether under principles governing bona fide purchasers or\ \ otherwise, if a downstream re-purchaser acquired a patented article with less\ \ than actual knowledge of the restriction.' \"\n Finally, does the court's focus\ \ on \"clear communication\" have a negative impact on post-sale restrictions\ \ that a limited licensee under General Talking Pictures is required to impose?\ \ \"The Federal Circuit suggested repeatedly that buyers' knowledge of the licensee's\ \ field of use limitation may be required for a licensee's sale to be non-exhaustive.\ \ While General Talking Pictures did not clearly resolve this question, many licensors\ \ have assumed that sales by a licensee outside of its licensed field are unauthorized\ \ altogether and are therefore non-exhaustive regardless of the purchaser's knowledge\ \ of the field of use limitation.\" Therefore, does the emphasis, here \"on the\ \ buyer's knowledge, even if dicta, add to the uncertainty concerning this issue\"\ ?\n\nCastanias, Nix, and Kazhdan\n\nGregory A. Castanias, Kelsey I. Nix, and Daniel\ \ Kazhdan also point to unresolved issues over which patent owners \"must still\ \ be cautious\":\nLexmark explicitly left open several fact-specific questions,\ \ including (i) what happens if someone acquires a patented article with \"less\ \ than actual knowledge\" of the restrictions placed on the original sale by the\ \ patent owner and (ii) when would a foreign buyer have an \"implied license\"\ \ to sell in the United States, independent of patent exhaustion. These issues\ \ will surely be raised in future cases.\n\nCrouch\n\nDennis Crouch, in Patently-O\ \ commented on the issues and provided a summary of the merits briefs filed in\ \ the Supreme Court as of January 31, 2017. Crouch opposed the Federal Circuit's\ \ ruling on these grounds:\nWith personal property courts long ago rejected servitudes\ \ (such as use and resale restrictions) that bind subsequent purchasers. Unlike\ \ real property, personal property moves and is often transferred without substantial\ \ paperwork or record-keeping, and allowing a set of unique restrictions has the\ \ potential of gumming up the marketplace. The Federal Circuit in this case went\ \ all the way to the other side — holding that the presumption in foreign sales\ \ is that no US patent rights are exhausted. I purchased my last couple of smart\ \ phones through the used market – and have also repaired them several times.\ \ Under the law, I probably should have taken steps to ensure that all of the\ \ original equipment manufacturers affirmatively granted repair and resale rights.\ \ Coming together, the Federal Circuit's approach here has the potential to limit\ \ the market for the repair and reselling of goods. I would suggest that those\ \ activities are incredibly beneficial to our society in terms of resource allocation\ \ and avoiding waste as well as empowering citizens and avoiding anticompetitive\ \ market behavior.\n\nNotes and references\n\nNotes\n\nReferences\n\nExternal\ \ links\n \n SCOTUSblog coverage\n Podcast – Interview with proprietor of Impression\ \ Products\n\nIntellectual property law\nUnited States patent case law\nUnited\ \ States Court of Appeals for the Federal Circuit cases\nUnited States Supreme\ \ Court cases\nUnited States Supreme Court cases of the Roberts Court\n2015 in\ \ United States case law\n2017 in United States case law\nLexmark" - source_sentence: Who is one of Jesus' Seventy Apostles named Cephas? sentences: - "Aristagoras (), d. 497/496 BC, was the leader of the Ionian city of Miletus in\ \ the late 6th century BC and early 5th century BC and a key player during the\ \ early years of the Ionian Revolt against the Persian Achaemenid Empire. He was\ \ the son-in-law of Histiaeus, and inherited the tyranny of Miletus from him.\n\ \nBackground \n\nBy the time extant history hears of him, Aristagoras is already\ \ serving as deputy governor of Miletus, a polis on the western coast of Anatolia\ \ around 500 BC. He was the son of Molpagoras, previous tyrant of an independent\ \ Miletus, and brother-in-law (and nephew) of Histiaeus, whom the Persians had\ \ set up as tyrant, but never quite trusted. After general Megabazus presented\ \ his complaints about Histiaeus to Darius I of Persia, the latter summoned Histiaeus\ \ to his court and detained him at Susa, the main reason being that he wanted\ \ a trustworthy advisor. On the recommendation of Histiaeus, the Achaemenids then\ \ appointed Aristagoras as the new ruler of Miletus. Aristagoras ruled Miletus\ \ while Histiaeus remained in Susa. The assignment was put forward as temporary.\ \ Privately, everyone knew that he was being kept under observation away from\ \ his troops.\n\nAristagoras was the main orchestrator of the Ionian Revolt on\ \ secret instruction by Histiaeus, when the latter learned of Persian plans to\ \ interfere directly in Miletus. Aristagoras took advantage of Greek dissatisfaction\ \ with Persian rule to incite an alliance of the Greek poleis of Ionia. Soliciting\ \ assistance from the states of mainland Greece he failed to obtain the help of\ \ a major state, Sparta. He did obtain the half-hearted assistance of Athens.\ \ Their attack on the satrapy of Lydia having been defeated, they withdrew, abandoning\ \ Aristagoras to his fate.\n\nIn the last months of the failing revolt, the Persians\ \ were reconquering rebel country city by city. Choosing not to remain and make\ \ a stand alone, Aristagoras led a colony to Thrace, where he had negotiated a\ \ franchise to settle from the Thracians. No sooner did he arrive than he and\ \ all his men were massacred in a surprise attack by the Thracians, for reasons\ \ unspecified by Herodotus, whether loyal to the Great King, or influenced by\ \ the Scythians, who hated the Ionians for their rescue of the Great King, or\ \ just because they changed their minds about the number of Hellenes they would\ \ allow in their country. The revolt gained momentum briefly but then began to\ \ fail again. When all was nearly lost, the Great King allowed Histiaeus to convince\ \ him that he could settle the conflict and now should be sent back to Miletus.\ \ Aristagoras was gone. According to Herodotus, they never met again.\n\nHistiaeus\ \ never succeeded in reaching Miletus. Reporting first to Sardis, undoubtedly\ \ still recovering from fire, whether with or without the Great King's complicity\ \ (Herodotus does not say), he was interrogated concerning his true loyalties.\ \ Histiaeus swore complete ignorance of the events of the revolt and unquestionable\ \ loyalty to the Persians. He admitted nothing, but the satrap, Artaphernes, was\ \ not in the least deceived. He said, \"I will tell thee how the case stands,\ \ Histaeus: this shoe is of thy stitching; Aristagoras has but put it on.\"\n\n\ Seeing that the jig was up, Histiaeus escaped that night and took ship at the\ \ coast, probably at Ephesus. He had no trouble raising troops and finding ships,\ \ but he found that he was not trusted by the revolutionaries. Miletus would not\ \ have him back. He became a soldier of fortune in the Aegean until he was hunted\ \ down and executed by Artaphernes. The Ionian revolt was finally settled in 494/493\ \ BC. The Persians went on to plot the conquest of Greece under the pretext of\ \ a punitive campaign against Athens.\n\nFailure of the Naxos expedition \n\n\ Certain exiled citizens of Naxos came to Miletus to seek refuge. They asked Aristagoras\ \ to supply them with troops, so that they could regain control of their homeland.\ \ Aristagoras considered that if he was able to supply troops to the Naxians,\ \ then he could become ruler of Naxos. So he agreed to assist the Naxians. He\ \ explained that he did not have enough troops of his own, but that Artaphernes,\ \ Darius’ brother and the Persian satrap of Lydia, who commanded a large army\ \ and navy on the coast of Asia, could help supply troops. The Naxians agreed\ \ to Aristagoras seeking Artaphernes' support and supplied him with money.\n\n\ Aristagoras travelled to Sardis and suggested that Artaphernes attack Naxos and\ \ restore the exiles. The Persians would then gain control of the island. He explained\ \ to Artaphernes that Naxos “was a fine and fertile island, close to the Ionian\ \ coast, and rich both in treasures and slaves.” It was also the gateway to the\ \ Cyclades, which the Persians did not yet rule. Aristagoras promised that he\ \ would both fund the expedition and give Artaphernes a bonus sum. He also tempted\ \ Artaphernes by adding that capturing the island would place other poleis of\ \ the Cyclades under his control. They would serve as bases for an invasion of\ \ Euboea. After securing the permission of Susa, Artaphernes agreed and promised\ \ 200 ships.\n\nThe following spring, Aristagoras and the Naxian exiles sailed\ \ with the fleet. Unfortunately for the success of the invasion, Aristagoras quarrelled\ \ with the Persian admiral Megabates. He interfered in the discipline of the latter\ \ over the ship captains to save a friend from harsh punishment for an infraction\ \ (failure to set a watch on his ship). Aristagoras saved his friend but lost\ \ the friendship and loyalty of the Persian admiral, who expected to be in overall\ \ command. The schism was irreparable, being the very first incident of the subsequent\ \ Ionian revolt. Megabates sabotaged the entire operation by secretly informing\ \ the Naxians that they were about to be attacked, taking away the element of\ \ surprise. Naxos then had enough time to prepare for a siege. Four months later,\ \ the siege still held, the Persians were out of supplies and had only limited\ \ funds remaining. The expedition was then considered a failure and the Persians\ \ sailed home.\n\nIonian Revolt \n\nDue to his failure to make good on his Naxian\ \ promises, Aristagoras’ political position was at risk. He began to plan a revolt\ \ with the Milesians and the other Ionians. Meanwhile, Histiaeus, still detained\ \ at Susa, had tattooed a message upon the shaved head of a slave. Once his hair\ \ had grown back, he sent him to Aristagoras. The message told Aristagoras to\ \ revolt. Histiaeus, desperate to resume his authority at Miletus, hoped Darius\ \ would send him to deal with a Milesian revolt.\n\nBoth leaders being of the\ \ same mind, Aristagoras conferred with a council of his supporters, who agreed\ \ to a rebellion in Miletus in 499 BC. Aristagoras was supported by most of the\ \ citizens in council, except the historian Hecataeus. Hecataeus voted against\ \ the revolt because he believed that the Ionians would be out-matched. Defeat\ \ would be inevitable. Once the vote was taken, however, there is no evidence\ \ that he recused himself from the revolt. In fact, he had suggestions to make.\ \ Once the war began, the Ionians did not allow any fence-sitting among themselves,\ \ although they could not stop the larger allies from withdrawing. In general\ \ knowledge, warring nations do not allow citizens of any social status to comment\ \ from the sidelines without participating in the war effort.\n\nAs soon as the\ \ vote for war was certain, Aristagoras took steps to secure Persian military\ \ assets. The Naxos fleet was recovering from its ordeal at Myus. Now in a position\ \ of command – Herodotus is not specific – Aristagoras sent a party under Iatragoras\ \ to arrest the admirals still with the fleet, some several men. Ironically, these\ \ were mainly Greek. They were later released and sent home. Now that the rebellion\ \ was in the open, Aristagoras “set himself to damage Darius in every way he could\ \ think of.”\n\nThe scope of the revolt spread rapidly to all Ionia. Aristagoras\ \ foresaw that one city would soon be crushed. He therefore set about to create\ \ an alliance of all the Ionian cities, but the members also came from regions\ \ beyond Ionia. He made a number of constitutional changes, not all of which are\ \ clear. First he relinquished his own tyranny. Approaching the other states,\ \ he convinced them to end theirs. Finally he ordered all of the states to create\ \ a board of generals to report, apparently, to him. When his government was in\ \ place he sailed to Lacedaemon and other states of Greece in search of allies.\n\ \nThere has been some question as to the exact meaning of Herodotus' governmental\ \ terms, and as to the form of government of the Ionian alliance. The most fundamental\ \ question is where Aristagoras got his authority over the Ionians in the first\ \ place. They were all under the satrapy of Lydia, not under Miletus. The satrap\ \ was Persian. The tyrant of Miletus was appointed by the satrap, but he also\ \ appointed all the other tyrants. For reasons not specified in Herodotus, Miletus\ \ had the upper hand.\n\nOne can only assume a leadership role of some kind of\ \ Aristagoras over the other tyrants, whether personal or according to some unspecified\ \ convention. In order to gain the participation of the people in the revolt,\ \ we are told, Aristagoras \"let go\" the tyranny and established isonomia, which\ \ the translators translate variously with imprecise terms, such as \"equality\ \ of government.\" According to Liddell and Scott, a standard dictionary of ancient\ \ Greek, Thucydides uses it to mean the \"equality of rights\" in a democracy.\n\ \nApparently Aristagoras established democracy, but then he went on to \"put a\ \ stop to tyranny\" in all the other Ionian cities, and moreover to insist that\ \ they select boards of generals reporting to him, which are not democratic powers.\ \ No voting is mentioned. Apparently a new sovereign state had been formed with\ \ Aristagoras as its chief. He had not stepped down, but up. The state had the\ \ power to levy taxes and troops. Aristagoras was commander of the joint armed\ \ forces. Miletus was to be the new capital. In fact the new sovereign Ionia issued\ \ its own coinage between 499 and its destruction by the Persians in 494.\n\n\ Spartan refusal to provide assistance \n\nAristagoras appealed to the Spartan\ \ king, Cleomenes I, to help them throw off the Persian yoke. He praised the quality\ \ of the Spartan warriors, and argued that a pre-emptive invasion of Persia would\ \ be easy. To illustrate his view, he had brought along a \"bronze tablet on which\ \ a map of all the earth was engraved, and all the sea, and all the rivers.\"\ \ No more information is given about the map, but the circumstantial evidence\ \ suggests it was most likely the world map of Hecataeus of Miletus, an important\ \ player in Milesian political life of the times.\n\nAristagoras claimed that\ \ the Persians would be easy to defeat, as they fought in “trousers and turbans,”\ \ clearly not a sign of good warriors. He also tempted him with Persian riches.\ \ Cleomenes asked Aristagoras to wait two days for an answer. When they next met,\ \ Cleomenes asked how long it would take to reach Susa, and upon learning that\ \ it was a three months’ journey, he firmly refused Spartan assistance as his\ \ troops would be gone for too long. At the time, Sparta was concerned over possible\ \ attacks from the Argives. The Greek historian Herodotus claimed that Aristagoras\ \ attempted to change Cleomenes’ mind with bribes, until the king's young daughter\ \ Gorgo warned that Aristagoras would corrupt him. Aristagoras left without the\ \ requested assistance.\n\nDefeat of the Athenians \n\nAristagoras next went to\ \ Athens, where he made a convincing speech, promising “everything that came into\ \ his head, until at last he succeeded.” Won over, the Athenians agreed to send\ \ ships to Ionia and Aristagoras went before them. The Athenians subsequently\ \ arrived in Miletus with twenty triremes and five others that belonged to the\ \ Eretrians. Herodotus described the arrival of these ships as the beginning of\ \ troubles between Greeks and barbarians. Once all his allies had arrived, Aristagoras\ \ put his brother Charopinus and another Milesian, Hermophantus, in charge of\ \ the expedition, and the whole contingent set out for the provincial capital,\ \ Sardis, while Aristagoras remained to govern at Miletus.\n\nThe first leg of\ \ the journey was to proceed along the coast to Ephesus. Using it as base, they\ \ went overland to Sardis, on which they descended by surprise. The satrap Artaphernes\ \ and his forces retreated to the acropolis immediately. A fire, started by accident\ \ in the town, accidentally burned down the temple of the Lydian goddess Cybebe\ \ (Cybele). Attributing the fire to Ionian maliciousness, the Persians later used\ \ it as an excuse for burning Greek temples.\n\nThe fire forced the defenders\ \ of the acropolis to abandon it in favor of the marketplace. Its defence coincided\ \ fortuitously with the arrival of Persian reinforcements. Interpreting the tumult\ \ as a counter-attack, the Ionians retreated to Tmolus, a nearby elevation, from\ \ which they escaped by night. The reinforcements followed the Ionians, caught\ \ up with them near Ephesus and soundly defeated them.\n\nThe Persians had obtained\ \ Lydia, including all the Greek cities, by defeating the last Anatolian-speaking\ \ kingdom of the same name. They made such a show of mercy as to win the hearts\ \ and minds of the Anatolians, as well as of some of the Greeks. In that sense,\ \ the \"Ionian Revolt\" was de facto an Anatolian civil war. A call for assistance\ \ went rapidly around the satrapy. Joint Persian-Anatolian forces hastened overnight\ \ to the assistance of the satrap.\n\nThey arrived with such short notice and\ \ major fanfare as to frighten away the Ionian-Athenian forces. The Cambridge\ \ Ancient History article attributes this swift arrival to the Persian cavalry,\ \ which also had no trouble tracking and catching the Ionians before the gates\ \ of Ephesus. The losses of the East Greeks were so great that they slunk away,\ \ so to speak, leaving Aristagoras and the rebels to fend for themselves. An air\ \ of doom pervaded the revolt, but they fought with such spirit that the rebellion\ \ spilled over into the islands\n\nAfter this battle, the Athenians refused to\ \ continue to fight in the Ionian Revolt and returned to Athens. Because of their\ \ participation in this battle, however, the Persian king, Darius, swore vengeance\ \ on Athens and commanded a servant to repeat to him three times every day at\ \ dinner, “Master, remember the Athenians.” The story is somewhat and probably\ \ hypocritically naive (but not necessarily on that account false), as the Persians\ \ intended expansion into the Balkans all along. They still held parts of Thrace\ \ from their previous abortive expedition into Scythia, only stopped when they\ \ learned the true size of the country (most of Russia) and the danger of their\ \ position in it.\n\nThe Ionians fought on, gaining control of Byzantium and the\ \ surrounding towns as well as the greater part of Caria and Caunus. They were\ \ not, however, alone. In this last phase of the conflict, almost all of Cyprus\ \ also rebelled against the Persians. Onesilus, the younger brother of Gorgus,\ \ the ruler of Salamis, tried to convince his brother to rebel against Persia\ \ and join in the Ionian Revolt. When his brother refused to support the revolt,\ \ Onesilus waited until he left Salamis and then shut the city gates on him. Gorgus\ \ fled to the Persians while Onesilus took over and convinced the Cyprians to\ \ revolt. They then proceeded to lay siege to the city of Amathus.\n\nManville's\ \ theory of a power struggle between Aristagoras and Histiaeus\nHerodotus’ account\ \ is the best source we have on the events that amounted to a collision between\ \ Persia, which was expanding westward, and classical Greece at its peak. Nevertheless,\ \ its depictions are often scanty and uncertain, or incomplete. One of the major\ \ uncertainties of the Ionian revolt in Herodotus is why it occurred in the first\ \ place.\n \nIn retrospect the case seems obvious: Persia disputed the Hellenes\ \ for control of cities and territories. The Hellenes had either to fight for\ \ their freedom or submit. The desirability of these material objects was certainly\ \ economic, although considerations of defence and ideology may well have played\ \ a part. These are the motives generally accepted today, after long retrospect.\n\ \nHerodotus apparently knew of no such motives, or if he did, he did not care\ \ to analyse history at that level. J D Manville characterizes his approach as\ \ the attribution of “personal motivation” to players such as Aristagoras and\ \ Histiaeus. In his view, Herodotus “may seem to overemphasize personal motivation\ \ as a cause,” but he really does not. We have either to fault Herodotus for his\ \ lack of analytical perspicacity or try to find credible reasons in the historical\ \ context for actions to which Herodotus gives incomplete explanations.\n \nManville\ \ suggests that the unexplained places mark events in a secret scenario about\ \ which Herodotus could not have known, but he records what he does know faithfully.\ \ It is up to the historian to reconstruct the secret history by re-interpretation\ \ and speculation, a technique often used by historical novelists. Manville puts\ \ it forward as history.\n\nThe main players are portrayed by Herodotus as naturally\ \ hypocritical. They always have an ulterior motive which they go to great lengths\ \ to conceal behind persuasive lies. Thus neither Aristagoras nor Histiaeus are\ \ fighting for freedom, nor do they cooperate or collaborate. Each has a personal\ \ motive related to greed, ambition, or fear. Manville fills in the uncertainties\ \ with hypothetical motives. Thus he arrives, perhaps less credibly for his invention,\ \ at a behind-the-scenes struggle for dominance between Aristagoras and Histiaeus.\ \ They can best be described as rivals or even enemies. Some of the high points\ \ of the argument are as follows.\n \nWhile Histiaeus was away serving Darius,\ \ Aristagoras acted in his stead as deputy of Miletus where, it is argued, he\ \ worked on securing his own power. The word for deputy is epitropos, which he\ \ was when the Naxian deputation arrived. By the time the fleet departs for Naxos,\ \ Aristagoras has promoted himself to “tyrant of Miletus.” There is no explicit\ \ statement that he asked Histiaeus’ permission or was promoted by Histaeus. Instead,\ \ Aristagoras turned to Artaphernes, who was said to be jealous of Histiaeus.\ \ It is true that Artaphernes would not move without consulting the Great King,\ \ and that the latter's advisor on Greek affairs was Histiaeus. However, Manville\ \ sees a coup by Aristagoras, presuming not only that the Great King's advisor\ \ did not advise, but was kept in the dark about his own supersession.\n\nWhen\ \ the expedition failed, Histiaeus sent his tattooed slave to Aristagoras, not\ \ as encouragement to revolt, but as an ultimatum. Manville provides an underlying\ \ value system to fill in the gap left by Herodotus: revolt was so unthinkable\ \ that Histiaeus could bring the fantasies of his opponent back to reality by\ \ suggesting that he do it, a sort of “go ahead, commit suicide.” Histiaeus was,\ \ in Manville's speculation, ordering Aristagoras to give up his rule or suffer\ \ the consequences. Apparently, he was not being kept in the dark by the king\ \ after all. Manville leaves us to guess why the king did not just crush the revolt\ \ by returning the supposedly loyal Histiaeus to power.\n\nHowever, at this time\ \ Histiaeus was still required to remain in Susa and, despite his threat, he was\ \ unable to do anything if Aristagoras did revolt. Realizing that this would be\ \ his last chance to gain power Aristagoras started the revolt despite Histiaeus’\ \ threat. This is a surprise to Manville's readers, as we thought he already had\ \ power via a coup. Manville does note the contradiction mentioned above, that\ \ Aristagoras gave up tyranny, yet was able to force democracy on the other cities\ \ and command their obedience to him. We are to see in this paradox a strategy\ \ to depose Histiaeus, whom we thought was already deposed.\n \nThe tale goes\ \ on to an attempt by Histiaeus to form an alliance with Artaphernes to depose\ \ the usurper and regain his power at Miletus. Artaphernes, though he was involved\ \ in open war with Aristagoras, refuses. The tale told by Manville thus contains\ \ events related by Herodotus supplemented by non-events coming from Manville's\ \ imagination.\n\nMyres’ Theory of a balance of power between thalassocracies\ \ \nJohn Myres, classical archaeologist and scholar, whose career began in the\ \ reign of Queen Victoria and did not end until 1954, close friend and companion\ \ of Arthur Evans, and intelligence officer par excellence of the British Empire,\ \ developed a theory of the Ionian Revolt that explains it in terms of the stock\ \ political views of the empire, balance of power and power vacuum. Those views,\ \ still generally familiar, assert that peace is to be found in a region controlled\ \ by competing geopolitical powers, none of which are strong enough to defeat\ \ the others. If a power drops from the roster for any reason, a “vacuum” then\ \ exists, which causes violent competition until the balance is readjusted.\n\n\ In a key article of 1906, while Evans was excavating Knossos, the Ottoman Empire\ \ had lost Crete due to British intervention, and questions of the “sick man\ \ of Europe” were being considered by all the powers. Referring to the failing\ \ Ottoman Empire and the power vacuum that would be left when it fell, the young\ \ Myres published an article studying the balance of what he termed “sea-power”\ \ in the eastern Mediterranean in classical times. The word \"sea-power\" was\ \ intended to define his “thalassocracy.”\n\nMyres was using sea-power in a specifically\ \ British sense for the times. The Americans had their own idea of sea power,\ \ expressed in Alfred Thayer Mahan’s great strategic work, ‘’The Influence of\ \ Sea Power upon History’’. which advocated maintaining a powerful navy and using\ \ it for strategic purposes, such as “command of the sea,” a kind of domination.\ \ The United States Naval Academy used this meaning for its motto, ‘’ex scientia\ \ tridens’’, “sea-power through knowledge.” It named one of its buildings, Mahan\ \ Hall.\n\nFar different is Myres’ “sea-power” and the meaning of thalassocracy,\ \ which means “rule of the seas.“ In contrast to “tridens,” rule of the seas is\ \ not a paternalistic but democratic arrangement. Where there are rulers, there\ \ are the ruled. A kind of exclusivity is meant, such as in Rule, Britannia!.\ \ Specifically, in a thalassocracy, the fleets of the ruler may go where they\ \ will and do as they please, but the ruled may go nowhere and engage in no operation\ \ without express permission of the ruler. You need a license, so to speak, to\ \ be on ruled waters, and if you do not have it, your ships are attacked and destroyed.\ \ “Shoot on sight” is the policy. And so Carthaginian ships sank any ships in\ \ their waters, etc.\n\nThe list of thalassocracies \nThalassocracy was a new\ \ word in the theories of the late 19th century, from which some conclude it was\ \ a scholarly innovation of the times. It was rather a resurrection of a word\ \ known from a very specific classical document, which Myres calls “the List of\ \ Thalassocracies.” It occurs in the Chronicon of Eusebius, the early 4th century\ \ Bishop of Caesarea Maritima, the ruins now in Israel. In Eusebius, the list\ \ is a separate chronology. Jerome, 4th-century theologian and historian, creator\ \ of the Vulgate, interspersed the same items, translated into Latin, in his\ \ Chronicon of world events. The items contain the words “obtinuerunt mare,” strictly\ \ speaking, “obtained the sea,” and not “hold sea power,” although the latter\ \ meaning may be implied as a result. Just as Jerome utilized the chronology of\ \ Eusebius, so Eusebius utilized the chronology of Castor of Rhodes, a 1st-century\ \ BC historian. His work has been entirely lost except for fragments, including\ \ his list of thalassocracies. A thousand years later, the Byzantine monk, George\ \ Syncellus, also used items from the list in his massive Extract of Chronography.\n\ \nOver the centuries the realization grew that all these references to sea-power\ \ in the Aegean came from a single document, a resource now reflected in the fragments\ \ of those who relied on it. C Bunsen, whose translator was one of the first to\ \ use thalassocracy, attributed its discovery to the German scholar, Christian\ \ Gottlob Heyne In a short work composed in 1769, published in 1771, Eusebius’\ \ Chronicon being known at that time only through fragments in the two authors\ \ mentioned, Heyne reconstructed the list in their Greek and Latin (with uncanny\ \ accuracy), the whole title of the article being Super Castoris epochis populorum\ \ thalattokratesanton H.E. (hoc est) qui imperium maris tenuisse dicuntur, “About\ \ Castor's epochs of thalattocratizing peoples; that is, those who are said to\ \ have held the imperium over the sea.” To thalattokratize is “to rule the sea,”\ \ not just to hold sea power like any other good fellow with a strong navy. The\ \ thalattokratizer holds the imperium over the watery domain just as if it were\ \ a country, which explains how such a people can “obtain” and “have” the sea.\ \ The list presented therefore is one of successive exclusive domains. No two\ \ peoples can hold the same domain or share rule over it, although they can operate\ \ under the authority of the thalassocrat, a privilege reserved for paying allies.\n\ \nAccording to Bunsen, the discovery and translation of the Armenian version of\ \ Eusebius’ Chronicon changed the nature of the search for thalassocracy. It provided\ \ the original document, but there was a disclaimer attached, that it was in fact\ \ “an extract from the epitome of Diodorus,” meaning Diodorus Siculus, a 1st-century\ \ BC historian. The disclaimer cannot be verified, as that part of Diodorus’ work\ \ is missing, which, however, opens the argument to another question: if Eusebius\ \ could copy a standard source from Diodorus, why cannot Diodorus have copied\ \ it from someone else?\n\nIt is at this point that Myres picks up the argument.\ \ Noting that thalassokratesai, “be a thalassocrat,” meaning “rule the waves,”\ \ was used in a number of authors: elsewhere by Diodorus, by Polybius, 2nd century\ \ BC historian, of Carthage, of Chios by Strabo, 1st century BC geographer and\ \ some others, he supposes that the source document might have been available\ \ to them all (but not necessarily, the cautious Myres points out). The document\ \ can be dated by its content: a list of 17 thalassocracies extending from the\ \ Lydian after the fall of Troy to the Aeginetan, which ended with the cession\ \ of power to Athens in 480 BC. The Battle of Salamis included 200 new Athenian\ \ triremes plus all the ships of its new ally, Aegina. Despite various revolts\ \ Aegina went on to become part of the Delian League, an imperial treaty of the\ \ new Athenian thalassocracy. Thucydides writes of it after 432 BC, but Herodotus,\ \ who visited Athens “as late as 444 B.C.” does not know a thing about it. This\ \ tentative date for the Eusebian list does not exclude the possibility of an\ \ earlier similar document used by Herodotus.\n\nMyres’ historical reconstruction\ \ of the list \n\nThe order of thalassocracies in the various versions of the\ \ list is nearly fixed, but the dates need considerable adjustment, which Myres\ \ sets about to reconcile through all historical sources available to him. He\ \ discovers some gaps. The solidest part of the list brackets the Ionian Revolt.\ \ The Milesian thalassocracy is dated 604-585 BC. It was ended by Alyattes of\ \ Lydia, founder of the Lydian Empire, who also fought against the Medes. The\ \ latter struggle was ended by the Eclipse of Thales at the Battle of the Halys\ \ River in 585 BC, when the combatants, interpreting the phenomenon as a sign,\ \ made peace. The Lydians were now free to turn on Miletus, which they did for\ \ the next 11 years, reducing it. When the Persians conquered Lydia in 547/546\ \ they acquired the Ionian cities.\n\nAfter 585 BC there is a gap in the list.\ \ Lesbos and one or more unknown thalassocrats held the sea in unknown order.\ \ In 577 BC began the thalassocracy of Phocaea. Breaking out of its Anatolian\ \ cage, it founded Marseilles and cities in Spain and Italy, wresting a domain\ \ away from Carthage and all other opponents. Their thalassocracy ended when,\ \ in the revolt of the Lydian Pactyas, who had been instructed to collect taxes\ \ by the Persians, but used them to raise an army of revolt, the Ionian cities\ \ were attacked by the Persians. The Phocaeans abandoned Phocaea about 534 BC\ \ and after much adventuring settled in the west.\n\nThe thalassocracy of Samos\ \ spans the career of the tyrant, Polycrates, there. The dates of the tyrant are\ \ somewhat uncertain and variable, but at some time prior to 534 BC, he and his\ \ brothers staged a coup during a festival at Samos. Samos happened to have a\ \ large navy of pentekonters. Becoming a ship collector, he attacked and subdued\ \ all the neighbouring islands, adding their ships to his fleet. Finally he added\ \ a new model, the trireme. His reign came to an end about 517 BC when, taking\ \ up the Great King's invitation to a friendly banquet for a discussion of prospects,\ \ he was suddenly assassinated. There were no prospects.\n\nHowever, if he had\ \ chosen not to attend, he was doomed anyway. Some of his trireme captains, learning\ \ of a devious plot by him to have them assassinated by Egyptian dignitaries while\ \ on official business, sailed to Sparta to beg help, which they received. The\ \ adventurous young king, Cleomenes I, was spared the trouble of killing Polycrates,\ \ but led an expedition to Samos anyway, taking the thalassocracy for two years,\ \ 517-515. Adventure and piracy not being activities approved by the Spartan people,\ \ they tagged him as insane and insisted he come home. The sea was now available\ \ to Naxos, 515-505.\n\nAftermath \n\nThe Hellenes had obtained a foothold on\ \ the coast of Anatolia by siding with rebel coastal Anatolian states against\ \ the Hittite Empire. Their position was made more solid by the fall of Troy against\ \ a coalition of mainland Greek kings. The coastal cities managed to retain their\ \ positions against the subsequent Phrygian invasion of Anatolia by joining with\ \ the rump Anatolian states, while the Hittites withdrew into neo-Hittite states\ \ in Syria. The coastal cities, now entirely Hellenic, continued to receive immigrants\ \ from mainland Greece.\n\nThe massive transfer of Persian-speaking population\ \ from the steppes of Central Asia to the range they now occupy presented the\ \ Anatolian Hellenes with an impossible strategic problem. They could not hope\ \ to oppose their small armies against the resources of the vast Persian empire\ \ unless they could once again receive major support from the mainland Greek states,\ \ especially the maritime power of Athens. Those states, however, were reluctant\ \ to take on the might of ancient Persia.\n\nConsequently, the Hellenic states\ \ in Anatolia submitted reluctantly to Persian rule, and were placed in the new\ \ satrapy of Lydia, with capital at Sardis. The satrap of Lydia allowed self-rule\ \ as long as taxes were paid and the supremacy of ancient Persia was granted.\ \ Many of the Anatolian cities proved loyal subjects. However, underlying resentment\ \ against Persian rule was universal.\n\nPersia was not interested in the status\ \ quo. Their desire to expand to the west brought them into conflict with Ionia\ \ over the question of self-rule, one of the principles of the agreement of the\ \ city-states to submit. Their interference in Miletus was the spark that set\ \ off the Ionian revolt. Aristagoras, the first rebel ruler, appeared then as\ \ the champion of Greek freedom. The Ionians had high hopes of independence.\n\ \nDue to the disparity in resources and the reluctance of the mainland states\ \ to involve themselves, the tide soon turned in favour of the Persians. After\ \ only one year, the Cyprians were once again forced into submission by Persia.\ \ The cities around the Hellespont fell one after another to Daurises, the son-in-law\ \ of king Darius. The Carians fought the Persians at the Maeander River and were\ \ defeated with severe casualties.\n\nAristagoras, seeing the rebellion falling\ \ to pieces around him, and little help forthcoming from the Greeks, began looking\ \ for a shelter to which he could execute a strategic retreat. He and his men\ \ resolved on Myrcinus in Thrace, which had been an Ionian stronghold in the abortive\ \ Persian invasion of Scythia. He put Pythagoras, “a man of distinction,” in\ \ charge of Miletus and set sail for Thrace, where he attempted to establish a\ \ colony on the Strymon river, at the same site as the later Athenian colony of\ \ Amphipolis.\n\nThe Thracians, not now disposed to tolerate any further presence\ \ of Greeks in their country, opposed this incursion. He gained control of the\ \ territory but later, while besieging a neighbouring town, Aristagoras was killed\ \ in battle.\n\nExpecting a swift Persian victory, Aristagoras had hoped to establish\ \ a redoubt of Ionians, who would come to the assistance of Miletus at a later\ \ time. By an accidental sequence of historical events his reputation drew the\ \ ire of his main historian, Herodotus of Halicarnassus, an Ionian partisan, to\ \ such a degree that it suffers yet. Although a champion of freedom, Aristagoras\ \ is the only man in all his histories that Herodotus openly calls a coward, blaming\ \ his supposed flight for the defeat of the revolt. The revolt apparently intensified\ \ and spread into the islands. Aristagoras had no way of knowing that he would\ \ have been in the van of it, or that the Thracians would not allow a redoubt.\n\ \nThe revolt was over by 494/493 BC. Going directly for Miletus in 494, the Persians\ \ defeated the Ionians with their own weapon, the ship, in the Battle of Lade,\ \ an island off Miletus. The city was then subject to a siege and the war lost\ \ at its fall. Although there was some mild devastation of rebel cities (except\ \ for Miletus, which was razed and the population decimated and transported),\ \ the Persians were interested in ruling rather than revenge. They began to plan\ \ forthwith for the largest invasion of Greece yet undertaken, executed starting\ \ 490 BC in a series of conflicts called the Greco-Persian Wars, which are yet\ \ famous. Unfortunately for the Persians, they were forced to adopt contingents\ \ of Ionian Greeks into their armies and navies.\n\nHerodotus as a source \n\n\ Most of the information on Aristagoras and his actions comes from the writings\ \ of the ancient Greek historian Herodotus. On the one hand he is virtually the\ \ only literary source for the events he presents as history. While in many ways\ \ he reflects some of the best of ancient historiography, on the other hand, his\ \ work is sprinkled with motivational and logical lacunae, creating textual paradoxes\ \ everywhere, causing some scholars to be critical of his value as a historical\ \ source, especially regarding the Ionian Revolt. For purposes of this presentation,\ \ textual criticism may be polarized into two camps: the cynical, discrediting\ \ Herodotus as an unreliable source, and the affirmative, which credits him with\ \ being reliable as far as he goes.\n\nThe cynical view\n\nManville's cynical\ \ view concerning an imaginary power struggle between Aristagoras and Histiaeus\ \ isolated from the usual contexts of war and society has already been mentioned\ \ above. Manville has no confidence in Herodotus' ability to relate connected\ \ history and therefore supplies connections for him out of his own speculations.\ \ He was preceded in this method by the earlier work of Mabel Lang. A 1968 article\ \ by Lang focuses on the paradoxes of the Ionian revolt. For example, Histiaeus\ \ originally won the Great King's favor by protecting his escape from Scythia\ \ over a key bridge of the Danube. Despite this vital rescue to save the king\ \ and all his forces, he shortly after plots a rebellion!\n\nLang suggests that\ \ one might conclude to an ulterior motive at the bridge, \"to ingratiate himself\ \ with Darius so that he could be on the inside of the king's policy.\" Apparently,\ \ to be on the inside of his policy he has to save his life and the lives of all\ \ his army by letting him escape from the large Scythian army not far behind.\ \ He prefers to keep him alive for nothing more serious than keeping an eye on\ \ him. Nonchalantly Lang writes: \"Presumably revolt was already in the air,....\"\ \ It could not have been far in the air if Histiaeus passed up a chance for total\ \ victory at the outset, a prized goal of many a lightning campaign in world history\ \ afterwards.\n\nThe basic problem is Lang's cynicism: \"we should not hope to\ \ discover the truth about the result merely by accepting the narrative ....\"\ \ Accordingly, she rehearses a catalogue of paradoxes similar to Manville's weaving\ \ her own fantasy of unattested events to contain it. Her explanation of why such\ \ a tale is necessary is similarly speculative: \"the failure of the revolt not\ \ only gave prominence to every aspect and event which would explain, justify\ \ or anticipate the disastrous results but also cast into the shade any intentions\ \ which deserved a better fate and any temporary successes during the course of\ \ the war.\" Not having any other account with which to compare these events,\ \ she cannot possibly know that.\n\nThe affirmative view \n\nThe cynical view\ \ described above reflects a difference in expectation between Herodotus and his\ \ target audiences, which by the accidents of time are multiple and various.\ \ He did not write for us moderns. Reading that he was the first historian whose\ \ work survived in anything more than scattered fragments, we expect him to have\ \ the proper concern of modern historians for continuity and causality, which\ \ other ancient historians, such as Thucydides, have. Herodotus is not one of\ \ those. With regard to causation, the Cambridge Ancient History article asserts:\ \ “...Herodotus does not seem to have innovated: he merely accepted the causation\ \ appropriate to his subject and period.”\n\nIt would be convenient to attribute\ \ this unconcern to a sort of intermediate phase between mythology and history,\ \ as many do. Such a view is neglectful of the ravages of time. Herodotus was\ \ not the first historian in any way, only the first whose work survived. He wrote\ \ of the Ionian Revolt a full generation after it happened; moreover, he was not\ \ a participant. He relied on the work of several previous historians at Miletus,\ \ of which fragments and mention have survived, chief of which was Hecataeus of\ \ Miletus.\n\nHerodotus apparently designed his work according to a specific plan\ \ and style. Whether the previous historians used it is not known, due to the\ \ paucity of evidence, but it seems unlikely. He appears to use Hecataeus as a\ \ framework for his historical events. The fragments of Hecataeus suggest that\ \ he wrote only an annal-like sequence long on names and events but short on connecting\ \ narrative. To this framework Herodotus adds the logoi, or independent anecdotes\ \ of persons and events derived from independent oral traditions, which Herodotus\ \ obtained by interview with record-keepers and state historians. The disconnectedness\ \ comes from their being independent. It is pointless, therefore, to try to invent\ \ connections.\n\nThe ancient historians have therefore invented a special category\ \ for Herodotus, that he was a logographer, or teller of logoi, based on his own\ \ characterization of his sources as logopoioi, “story makers.” Usually the logographers\ \ include Hecataeus and the other historians of his generation, who lived through\ \ the revolt. There is little evidence of their logography. Whether Herodotus\ \ stands alone or is part of a Milesian tradition is a matter of speculation.\n\ \nValidation of Herodotus therefore rests on validation of his logoi. There is\ \ no general validation, but the much-desired archaeological and inscriptional\ \ evidence appears to validate a few events as far as they go: some names, circumstances\ \ of war, and similar peripheral facts. He cannot be validated as a modern historian,\ \ but he does have an overall design, which is “Biblical” or “Bible-like” in scope.\ \ He is trying to do an epic in prose similar to the Homerica in verse. His topic\ \ is not the Trojan War, but the Graeco-Persian Wars. (The Homerica have been\ \ called the pagan Greek “Bible.\") Says Oswyn Murray in the Cambridge Ancient\ \ History, \nIt is certainly hard to find fault with his general view that the\ \ only adequate explanation for the Persian Wars must be a complete account of\ \ relations between the two peoples since the conquest of the Ionian cities in\ \ 545 B.C.\n\nIn short, Herodotus is personal because the Homerica are personal.\ \ Both genres intend to portray the illustrious or non-illustrious deeds and doings\ \ of persons in the contexts of mighty wars. Thus Aristagoras personally can be\ \ called a “coward.” The lying that they do is metis, “cunning,” an admired Greek\ \ virtue practised by the greatest hero of them all, the crafty Odysseus. The\ \ literary tradition of it went on. Virgil could include the half-line Timeo Graecos\ \ dona ferentes, “I fear Greeks bearing Gifts,” in the Aeneid.\n\nThe expectation\ \ of modernity in Herodotus is misplaced. Validation must be sought for individual\ \ logoi. The whole work or any part of it cannot logically be condemned on the\ \ basis of one or a group of paradoxes. All scepticism must have a reason for\ \ doubting. The inconsistencies of Herodotus are not a valid reason, which is\ \ generally true. But few stories are ever free of inconsistency, and if they\ \ are, they are suspect on that account (“too good to be true”).\n\nDenials of\ \ Herodotus' validity, from mild to severe, although widespread, were never universal.\ \ As an example of ancient information generally agreed to be invalid, many works\ \ attributed to various authors have been placed in the \"pseudo-\" category after\ \ as much as centuries of review. There was never any such universal and long-standing\ \ denial of Herodotus. On the contrary, the main events, such as the Battles of\ \ Marathon and Thermopylae, have been accepted as basically credible by many scholars\ \ of many ages. It is therefore misplaced to speak of the \"rehabilitation\" of\ \ Herodotus in medical or neo-ideologic terms.\n\nAccordingly, the most sanguine\ \ view treats his work as though no problems exist regarding it. Referring to\ \ the Cambridge Ancient History article on the Ionian Revolt by Murray, Georges\ \ addresses \"the question of Herodotus' veracity and reliability.\" Repeating\ \ Murray's criticism that \"the traditions concerning the revolt itself are ...\ \ fragmented into individual episodes of folly, treachery, or heroism\" and therefore\ \ are not \"trustworthy materials for the history of the revolt,\" he asserts\ \ to the contrary that \"Herodotus' account furnishes the material for a coherent\ \ and credible account of the actions and events it presents ....\"\n\nHaving\ \ said this, Georges must now show that, rather than being paradoxical, Herodotus\ \ is coherent and credible. Like Lang, having no other account to offer, he must\ \ make his demonstrations from the text of Herodotus, which he spends the rest\ \ of the article doing, disputing most of Murray's interpretations. The contradictions\ \ are not to be viewed as contradictions. He does not address the question of\ \ why, if they are not so, it is necessary to spend an article in disputation\ \ over them. The result is a new set of speculations fully as imaginary as Murray's,\ \ not being based on any alternative texts.\n\nThere is hope, however, as fragments\ \ of Greek texts and inscriptions continue to be discovered. Meanwhile, it seems\ \ common knowledge that the public of any age is not going to relinquish credibility\ \ in Herodotus' great depiction of the Persian Wars.\n\nNotes\n\nReferences\n\n\ External links \n \n\nIonian Revolt\nAncient Milesians\nArchaic tyrants\n6th-century\ \ BC Greek people\n5th-century BC Greek people\n6th-century BC births\n5th-century\ \ BC deaths\nGreek people of the Greco-Persian Wars\nAncient Greeks from the Achaemenid\ \ Empire\nRulers in the Achaemenid Empire" - "The language of Jesus and his disciples is believed to be Aramaic. This is the\ \ common language of Judea in the first century AD, most likely a Galilean dialect\ \ distinguishable from that of Jerusalem. This is generally agreed upon by historians.\ \ The villages of Nazareth and Capernaum in Galilee, where Jesus spent most of\ \ his time, were Aramaic-speaking communities. It is also likely that Jesus knew\ \ enough Koine Greek to converse with those not native to Judea, and it is reasonable\ \ to assume that Jesus was well versed in Hebrew for religious purposes.\n\nCultural\ \ and linguistic background\n\nAramaic was the common language of the Eastern\ \ Mediterranean during and after the Neo-Assyrian, Neo-Babylonian, and Achaemenid\ \ empires (722–330 BC) and remained a common language of the region in the first\ \ century AD. In spite of the increasing importance of Greek, the use of Aramaic\ \ was also expanding, and it would eventually be dominant among Jews both in the\ \ Holy Land and elsewhere in the Middle East around 200 AD and would remain so\ \ until the Islamic conquests in the seventh century.\n\nDead Sea Scrolls\nAccording\ \ to Dead Sea Scrolls archaeologist Yigael Yadin, Aramaic was the language of\ \ Hebrews until Simon Bar Kokhba's revolt (132 AD to 135 AD). Yadin noticed the\ \ shift from Aramaic to Hebrew in the documents he studied, which had been written\ \ during the time of the Bar Kokhba revolt. In his book, Bar Kokhba: The rediscovery\ \ of the legendary hero of the last Jewish Revolt Against Imperial Rome, Yigael\ \ Yadin notes, \"It is interesting that the earlier documents are written in Aramaic\ \ while the later ones are in Hebrew. Possibly the change was made by a special\ \ decree of Bar Kokhba who wanted to restore Hebrew as the official language of\ \ the state\".\n\nIn another book by Sigalit Ben-Zion, Yadin said: \"it seems\ \ that this change came as a result of the order that was given by Bar Kokhba,\ \ who wanted to revive the Hebrew language and make it the official language of\ \ the state.\" Yadin points out that Aramaic was the lingua franca at the time.\n\ \nJosephus\nHebrew historian Josephus comments on learning Greek in first century\ \ Judea:\n\nIn the first century AD, the Aramaic language was widespread throughout\ \ the Middle East, as is supported by the testimony of Josephus's The Jewish War.\n\ \nJosephus chose to inform people from what are now Iran, Iraq, and remote parts\ \ of the Arabian Peninsula about the war of the Jews against the Romans through\ \ books he wrote \"in the language of our country\", prior to translating into\ \ Greek for the benefit of the Greeks and Romans:\n\nH. St. J. Thackeray (who\ \ translated Josephus' Jewish Wars from Greek into English) also points out, \"\ We learn from the proem that the Greek text was not the first draft of the work.\ \ It had been preceded by a narrative written in Aramaic and addressed to \"the\ \ barbarians in the interior\", who are more precisely defined lower down as the\ \ natives of Parthia, Babylonia, and Arabia, the Jewish dispersion in Mesopotamia,\ \ and the inhabitants of Adiabene, a principality of which the reigning house,\ \ as was proudly remembered, were converts to Judaism (B. i, 3, 6). Of this Aramaic\ \ work the Greek is described as a \"version\" made for the benefit of the subjects\ \ of the Roman Empire, i.e. the Graeco-Roman world at large.\n\nIn , the \"Field\ \ of Blood\" was known to all the inhabitants of Jerusalem in their own language\ \ as Akeldama, which is the transliteration of the Aramaic words \"Haqal Dama\"\ .\n\nJosephus differentiated Hebrew from his language and that of first-century\ \ Israel. Josephus refers to Hebrew words as belonging to \"the Hebrew tongue\"\ \ but refers to Aramaic words as belonging to \"our tongue\" or \"our language\"\ \ or \"the language of our country\".\n\nJosephus refers to a Hebrew word with\ \ the phrase \"the Hebrew tongue\": \"But the affairs of the Canaanites were at\ \ this time in a flourishing condition, and they expected the Israelites with\ \ a great army at the city Bezek, having put the government into the hands of\ \ Adonibezek, which name denotes the Lord of Bezek, for Adoni in the Hebrew tongue\ \ signifies Lord.\"\n\nIn this example, Josephus refers to an Aramaic word as\ \ belonging to \"our language\": \"This new-built part of the city was called\ \ 'Bezetha,' in our language, which, if interpreted in the Grecian language, may\ \ be called 'the New City.'\"\n\nOn several occasions in the New Testament, Aramaic\ \ words are called Hebrew. For example, in (KJV), the gospel-writer narrates\ \ that Jesus, \"bearing his cross[,] went forth into a place called the place\ \ of a skull, which is called in the Hebrew Golgotha.\" The last word is, in fact,\ \ Aramaic. The word \"Golgotha\" is a transliteration of an Aramaic word, because\ \ -tha in Golgotha is the Aramaic definite article on a feminine noun in an emphatic\ \ state.\n\nPhonology\n\nAramaic phrases in the Greek New Testament\n\nThe Greek\ \ New Testament transliterates a few Semitic words. When the text itself refers\ \ to the language of such Semitic glosses, it uses words meaning \"Hebrew\"/\"\ Jewish\" (Acts 21:40; 22:2; 26:14: têi hebraḯdi dialéktōi, lit. 'in the Hebrew\ \ dialect/language') but this term is often applied to unmistakably Aramaic words\ \ and phrases; for this reason, it is often interpreted as meaning \"the (Aramaic)\ \ vernacular of the Jews\" in recent translations.<ref>E.g. Geoffrey W.Bromley\ \ (ed.)The International Standard Bible Encyclopedia, W.B.Eeerdmans, Grand Rapids,\ \ Michigan 1979, 4 vols. vol.1 sub.'Aramaic' p.233: 'in the Aramaic vernacular\ \ of Palestine</ref>\n\nA small minority of scholars believe that most or all\ \ of the New Testament was originally written in Aramaic.Glenn David Bauscher.\ \ 2007. The Original Aramaic New Testament in Plain English. . This theory is\ \ known as Aramaic primacy.\n\nTalitha kum (Ταλιθὰ κούμ)\n\nMark :\n And taking\ \ the hand of the child, he said to her, \"Talitha kum\", which translates as,\ \ \"Little girl, I say to you, get up.\"This verse gives an Aramaic phrase, attributed\ \ to Jesus bringing the girl back to life, with a transliteration into Greek,\ \ as ταλιθὰ κούμ. A few Greek manuscripts (Codex Sinaiticus, Vaticanus) of Mark's\ \ Gospel have this form of the text, but others (Codex Alexandrinus, the text-type\ \ known as the Majority Text, and also the Latin Vulgate) write κοῦμι (koumi,\ \ cumi) instead. The latter is in the Textus Receptus and is the version which\ \ appears in the KJV.\n\nThe Aramaic is ṭlīthā qūm. The word ṭlīthā is the feminine\ \ form of the word ṭlē, meaning \"young\". Qūm is the Aramaic verb 'to rise, stand,\ \ get up'. In the feminine singular imperative, it was originally qūmī. However,\ \ there is evidence that in speech, the final -ī was dropped so the imperative\ \ did not distinguish between masculine and feminine genders. The older manuscripts,\ \ therefore, used a Greek spelling that reflected pronunciation whereas the addition\ \ of an 'ι' was perhaps due to a bookish copyist.\n\nIn square script Aramaic,\ \ it could be טליתא קומי or טליתא קום.\n\nEphphatha (Ἐφφαθά)\n\nMark \n And looking\ \ up to heaven, he sighed and said to him, \"Ephphatha,\" which is 'be opened'.Once\ \ again, the Aramaic word is given with the transliteration, only this time, the\ \ word to be transliterated is more complicated. In Greek, the Aramaic is written\ \ ἐφφαθά. This could be from the Aramaic ethpthaḥ, the passive imperative of the\ \ verb pthaḥ, 'to open', since the th could assimilate in western Aramaic. The\ \ pharyngeal ḥ was often omitted in Greek transcriptions in the Septuagint (Greek\ \ Old Testament) and was also softened in Galilean speech.\n\nIn Aramaic, it could\ \ be אתפתח or אפתח. This word was adopted as the official motto of Gallaudet University,\ \ the United States' most prominent school for the deaf.\n\nAbba (Ἀββά[ς])\n\n\ Mark 14:36\"Abba, Father,\" he said, \"everything is possible for you. Take this\ \ cup from me. Yet not what I will, but what you will.\"Galatians 4:6Because you\ \ are his sons, God sent the Spirit of his Son into our hearts, the Spirit who\ \ calls out, \"Abba, Father.\"Romans 8:15The Spirit you received does not make\ \ you slaves, so that you live in fear again; rather, the Spirit you received\ \ brought about your adoption to sonship. And by him we cry, \"Abba, Father.\"\ Abba, an originally Aramaic form borrowed into the Greek Old Testament as a name\ \ (2Chr 29:1) [standing for the Hebrew Abijah ()], common in Mishnaic Hebrew and\ \ still used in Modern Hebrew (written Αββά[ς] in Greek, and ’abbā in Aramaic),\ \ is immediately followed by the Greek equivalent (Πατήρ) with no explicit mention\ \ of it being a translation. In Aramaic, it would be אבא.\n\nNote, the name Barabbas\ \ is a Hellenization of the Aramaic Bar Abba (בר אבא), literally \"Son of the\ \ Father\".\n\nRaca (Ρακά)\n\nMatthew 5:22But I say unto you, That whosoever is\ \ angry with his brother [without a cause] shall be in danger of the judgment:\ \ and whosoever shall say to his brother, Raca, shall be in danger of the council:\ \ but whosoever shall say, Thou fool, shall be in danger of hell fire.(The bracketed\ \ text does not appear in all recensions and is absent in the Latin Vulgate.)\n\ \nRaca, or Raka, in the Aramaic and Hebrew of the Talmud, means empty one, fool,\ \ empty head.\n\nIn Aramaic, it could be ריקא or ריקה.\n\nMammon (Μαμωνάς)\n\n\ Gospel of Matthew 6:24No one can serve two masters: for either they will hate\ \ the one, and love the other; or else they will hold to the one, and despise\ \ the other. You cannot serve God and mammon.Luke 16:9–13And I say unto you, Make\ \ to yourselves friends of the mammon of unrighteousness; that, when ye fail,\ \ they may receive you into everlasting habitations. He that is faithful in that\ \ which is least is faithful also in much: and he that is unjust in the least\ \ is unjust also in much. If therefore ye have not been faithful in the unrighteous\ \ mammon, who will commit to your trust the true riches? And if ye have not been\ \ faithful in that which is another man's, who shall give you that which is your\ \ own? No servant can serve two masters: for either he will hate the one, and\ \ love the other; or else he will hold to the one, and despise the other. Ye cannot\ \ serve God and mammon.2 Clement 6Now the Lord declares, \"No servant can serve\ \ two masters.\" If we desire, then, to serve both God and mammon, it will be\ \ unprofitable for us. \"For what will it profit if a man gain the whole world,\ \ and lose his own soul?\" This world and the next are two enemies. The one urges\ \ to adultery and corruption, avarice and deceit; the other bids farewell to these\ \ things. We cannot, therefore, be the friends of both; and it behoves us, by\ \ renouncing the one, to make sure of the other. Let us reckon that it is better\ \ to hate the things present, since they are trifling, and transient, and corruptible;\ \ and to love those [who are to come,] as being good and incorruptible. For if\ \ we do the will of Christ, we shall find rest; otherwise, nothing shall deliver\ \ us from eternal punishment, if we disobey His commandments. (Roberts-Donaldson)\n\ \nIn Aramaic, it could be ממון (or, in the typical Aramaic \"emphatic\" state\ \ suggested by the Greek ending, ממונא). This is usually considered to be an originally\ \ Aramaic word borrowed into Rabbinic Hebrew, but its occurrence in late Biblical\ \ Hebrew and, reportedly, in 4th century Punic may indicate that it had a more\ \ general \"common Semitic background\".\n\nIn the New Testament, the word Mamōnâs\ \ is declined like a Greek word, whereas many of the other Aramaic and Hebrew\ \ words are treated as indeclinable foreign words.\n\nRabbuni (Ραββουνί)Jesus\ \ saith unto her, Mary. She turned herself, and saith unto him, Rabboni; which\ \ is to say, Master. (KJV)\n\nAlso in Mark 10:51. Hebrew form rabbi used as title\ \ of Jesus in Matthew 26:25,49; Mark 9:5, 11:21, 14:45; John 1:38, 1:49, 4:31,\ \ 6:25, 9:2, 11:8.\n\nIn Aramaic, it would have been רבוני.\n\nMaranatha (Μαραναθά)\n\ \nDidache 10:6 (Prayer after Communion)Let grace come, and let this world pass\ \ away. Hosanna to the God (Son) of David! If any one is holy, let him come; if\ \ any one is not so, let him repent. Maran-Atha. Amen. (Roberts-Donaldson)\n\n\ 1 Corinthians 16:22If any man love not the Lord Jesus Christ, let him be Anathema\ \ Maranatha.Depending on how one selects to split the single Greek expression\ \ of the early manuscripts into Aramaic, it could be either מרנא תא (marana tha,\ \ \"Lord, come!\") or מרן אתא (maran atha, \"Our Lord has come\").\n\nEli, Eli,\ \ lema sabachthani (Ἠλί, Ἠλί, λεμὰ σαβαχθανί)\n\nMatthew 27:46\n Around the ninth\ \ hour, Jesus shouted in a loud voice, saying \"Eli, Eli, lema sabachthani?\"\ \ which is, \"My God, my God, why have you forsaken me?\"Mark 15:34\n And at the\ \ ninth hour, Jesus shouted in a loud voice, \"Eloi, Eloi, lama sabachthani?\"\ \ which is translated, \"My God, my God, for what have you forsaken me?\"This\ \ phrase, among the Sayings of Jesus on the cross, is given in these two versions.\ \ The Matthean version of the phrase is transliterated in Greek as Ἠλί, Ἠλί, λεμὰ\ \ σαβαχθανί. The Markan version is Ἐλωΐ, Ἐλωΐ, λαμὰ σαβαχθανί (elōi rather than\ \ ēli and lama rather than lema).\n\nOverall, both versions appear to be Aramaic\ \ rather than Hebrew because of the verb (šbq) \"abandon\", which is originally\ \ Aramaic.Davies, William D. and Dale C. Allison. 1997. Critical and Exegetical\ \ Commentary on the Gospel According to Saint Matthew. Volume III. P.624 The \"\ pure\" Biblical Hebrew counterpart to this word, (‘zb) is seen in the second\ \ line of Psalm 22, which the saying appears to quote. Thus, Jesus is not quoting\ \ the canonical Hebrew version (ēlī ēlī lāmā ‘azabtānī) attributed in some Jewish\ \ interpretations to King David cited as Jesus' ancestor in Matthew's Genealogy\ \ of Jesus if the Eli, Eli version of Jesus' outcry is taken; he may be quoting\ \ the version given in an Aramaic Targum (surviving Aramaic Targums do use šbq\ \ in their translations of the Psalm 22 ).\n\nThe Markan word for \"my god\",\ \ Ἐλωΐ, definitely corresponds to the Aramaic form אלהי, elāhī. The Matthean one,\ \ Ἠλί, fits in better with the אלי of the original Hebrew Psalm, as has been pointed\ \ out in the literature; however, it may also be Aramaic because this form is\ \ attested abundantly in Aramaic as well.Williams P.J. 2004. The linguistic background\ \ to Jesus' Dereliction Cry. The New Testament in its first century setting (ed.\ \ Williams P.J., Andre D. Clarke et al.) p. 7-8.\n\nIn the next verse, in both\ \ accounts, some who hear Jesus' cry imagine that he is calling for help from\ \ Elijah (Ēlīyā in Aramaic).\nAlmost all ancient Greek manuscripts show signs\ \ of trying to normalize this text. For instance, the peculiar Codex Bezae renders\ \ both versions with ηλι ηλι λαμα ζαφθανι (ēli ēli lama zaphthani). The Alexandrian,\ \ Western and Caesarean textual families all reflect harmonization of the texts\ \ between Matthew and Mark. Only the Byzantine textual tradition preserves a distinction.\n\ \nThe Aramaic word form šəḇaqtanī is based on the verb šəḇaq/šāḇaq, 'to allow,\ \ to permit, to forgive, and to forsake', with the perfect tense ending -t (2nd\ \ person singular: 'you'), and the object suffix -anī (1st person singular: 'me').\n\ \nIn Hebrew, the saying would be \"\", the Aramaic phrase would be \"\" or \"\"\ .\n\nJot and tittle ()\n\nMatthew 5:18For assuredly, I say to you, till heaven\ \ and earth pass away, one jot or one tittle will by no means pass from the Law\ \ (that is, the Torah) till all is fulfilled.The quotation uses them as an example\ \ of extremely minor details. In the Greek text translated as English jot and\ \ tittle is found iota and keraia. Iota is the smallest letter of the Greek alphabet\ \ (ι), but since only capitals were used at the time the Greek New Testament was\ \ written (Ι; still, it is the smallest of all the Greek majuscules) and because\ \ the Torah was written in Hebrew, it probably represents the Hebrew yodh (י)\ \ which is the smallest letter of the Hebrew alphabet. Keraia is a hook or serif.\n\ \nKorban (Κορβάν)\n\nMatthew 27:6But the chief priests, taking the pieces of silver,\ \ said, ‘It is not lawful to put them into the treasury, since they are blood\ \ money.’In Aramaic (קרבנא) it refers to the treasury in the Temple in Jerusalem,\ \ derived from the Hebrew Korban (קרבן), found in Mark 7:11 and the Septuagint\ \ (in Greek transliteration), meaning religious gift or offering.\n\nThe Greek\ \ is declined as a Greek noun, much like other examples.\n\nSikera (Σίκερα)\n\ \nLuke 1:15for he will be great in the sight of the Lord. He must never drink\ \ wine or strong drink; even before his birth he will be filled with the Holy\ \ Spirit.Hosanna ()\n\nMark 11:9Then those who went ahead and those who followed\ \ were shouting, Hosanna! Blessed is the one who comes in the name of the Lord!This\ \ word is derived from הושע נא. It is generally considered to be a quote from\ \ Psalms 118:25 \"O , save (us)\", but the original Biblical Hebrew form was הושיעה\ \ נא. The shortened form הושע could be either Aramaic or Hebrew.Balz, Horst. Exegetical\ \ Dictionary of the New Testament, Volume 3. P.509\n\nAramaic personal names in\ \ the New Testament\nPersonal names in the New Testament come from a number of\ \ languages; Hebrew and Greek are most common. However, there are a few Aramaic\ \ names as well. The most prominent feature in Aramaic names is bar (Greek transliteration\ \ βαρ, Aramaic bar), meaning 'son of', a common patronym prefix. Its Hebrew equivalent,\ \ ben, is conspicuous by its absence. Some examples are:\n  – Bartholomew (Βαρθολομαῖος\ \ from bar-Tōlmay, perhaps \"son of furrows\" or \"ploughman\").\n  – Simon bar-Jona\ \ (Σίμων Βαριωνᾶς from Šim‘ōn bar-Yōnā, \"Simon son of Jonah\").\n  – Simon bar-Jochanan\ \ (\"Simon son of John\").\n  – Barabbas (Βαραββᾶς from bar-Abbā, \"son of the\ \ father\").\n  – Bartimaeus (Βαρτιμαῖος possibly from combination of Aramaic\ \ bar and Greek timaios meaning \"honorable\" or \"highly prized\", perhaps \"\ honorable son\").\n  – Barsabbas (Βαρσαββᾶς from bar-Šabbā, \"son of the Sabbath\"\ ).\n  – Joseph who is called Barnabas (Βαρνάβας from bar-Navā meaning \"son of\ \ prophecy\", \"the prophet\", but given the Greek translation υἱὸς παρακλήσεως;\ \ usually translated as \"son of consolation/encouragement\", the Greek could\ \ mean \"invocation\" as well).\n  – Bar-Jesus (Βαριησοῦς from bar-Išo, \"son\ \ of Jesus/Joshua\").\n\nBoanerges (Βοανηργές)\nMark 3:17\n And James, the son\ \ of Zebedee, and John, the brother of James, and he gave them the name Boanerges,\ \ which is Sons of Thunder.Jesus surnames the brothers James and John to reflect\ \ their impetuosity. The Greek rendition of their name is Βοανηργές (Boanērges).\n\ \nThere has been much speculation about this name. Given the Greek translation\ \ that comes with it ('Sons of Thunder'), it seems that the first element of the\ \ name is bnē, 'sons of' (the plural of 'bar'), Aramaic (בני). This is represented\ \ by βοάνη (boanē), giving two vowels in the first syllable where one would be\ \ sufficient. It could be inferred from this that the Greek transliteration may\ \ not be a good one. The second part of the name is often reckoned to be rḡaš\ \ ('tumult') Aramaic (רגיש), or rḡaz ('anger') Aramaic (רגז). Maurice Casey, however,\ \ argues that it is a simple misreading of the word for thunder, r‘am (due to\ \ the similarity of s to the final m). This is supported by one Syriac translation\ \ of the name as bnay ra‘mâ. The Peshitta reads ܒܢܝ ܪܓܫܝ bnay rḡešy, which would\ \ fit with a later composition for it, based on a Byzantine reading of the original\ \ Greek.\n\nCephas (Κηφᾶς)\nJohn 1:42\n He brought him to Jesus. Jesus looked\ \ at him and said, \"You are Simon son of John, you shall be called Cephas\",\ \ which is translated 'Peter'. (New International Version)\n1 Corinthians 1:12\n\ \ But I say that each of you says \"I am of Paul\", or \"I am of Apollos\", or\ \ \"I am of Cephas\", or \"I am of Christ\".Galatians 1:18 NRSVThen after three\ \ years I did go up to Jerusalem to visit Cephas and stayed with him for fifteen\ \ days;In these passages, 'Cephas' is given as the nickname of the apostle better\ \ known as Simon Peter. The Greek word is transliterated (Kēphâs).\n\nThe apostle's\ \ given name appears to be Simon, and he is given the Aramaic nickname, kēpā,\ \ meaning 'rock' or 'stone'. The final sigma (ς) is added in Greek to make the\ \ name masculine rather than feminine. That the meaning of the name was more important\ \ than the name itself is evidenced by the universal acceptance of the Greek translation,\ \ (Petros). It is not known why Paul uses the Aramaic name rather than the Greek\ \ name for Simon Peter when he writes to the churches in Galatia and Corinth.\ \ He may have been writing at a time before Cephas came to be popularly known\ \ as Peter.\n\nAccording to Clement of Alexandria, there were two people named\ \ Cephas: one was Apostle Simon Peter, and the other was one of Jesus' Seventy\ \ Apostles. Clement goes further to say it was Cephas of the Seventy who was condemned\ \ by Paul in Galatians 2 for not eating with the Gentiles, though this is perhaps\ \ Clement's way of deflecting the condemnation from Simon Peter. In 1708, a French\ \ Jesuit, Jean Hardouin, wrote a dissertation that argues \"Peter\" was actually\ \ \"another Peter\", thus the emphasis of using the name Cephas (Aramaic for Peter).\ \ In 1990 Bart D. Ehrman wrote an article on the Journal of Biblical Literature,\ \ similarly arguing that Peter and Cephas should be understood as different people,\ \ citing the writing of Clement of Alexandria and the Epistula Apostolorum and\ \ in support of his theory; Ehrman's article received a detailed critique by Dale\ \ Allison, who argued that Peter and Cephas are the same person. Ehrman later\ \ retracted his proposal, deeming it \"highly unlikely\".\n\nIn Aramaic, it could\ \ be כיפא.\n\nThomas (Θωμᾶς)\nJohn 11:16\n Then Thomas, who was called Didymus,\ \ said to his co-disciples, \"Now let us go that we might die with him!\"Thomas\ \ () is listed among the disciples of Jesus in all four gospels and the Acts of\ \ the Apostles. However, it is only in John's Gospel that more information is\ \ given. In three places (John 11:16, 20:24 and 21:2), he is given the name Didymus\ \ (), the Greek word for a twin. In fact, \"the Twin\" is not just a surname,\ \ it is a translation of \"Thomas\". The Greek —Thōmâs—comes from the Aramaic\ \ tōmā, \"twin\". Therefore, rather than two personal names, Thomas Didymus, there\ \ is a single nickname, the Twin. Christian tradition gives him the personal name\ \ Judas, and he was perhaps named Thomas to distinguish him from others of the\ \ same name.\n\nIn Aramaic, it could be ܬܐܘܡܐ.\n\nTabitha (Ταβιθά)\nActs 9:36\n\ \ In Joppa, there was a disciple named Tabitha, which is translated Dorcas.The\ \ disciple's name is given both in Aramaic (Ταβιθά) and Greek (Δορκάς). The Aramaic\ \ name is a transliteration of Ṭḇīthā, the female form of (Ṭaḇyā). Both names\ \ mean 'gazelle'.\n\nIt may be just coincidence that Peter's words to her in verse\ \ 40, \"Tabitha, get up!\" (), are similar to the \"talitha kum\" phrase used\ \ by Jesus.\n\nIn Aramaic, it could be טביתא.\n\nAramaic place names in the New\ \ Testament\nGethsemane (Γεθσημανῆ)\nMatthew 26:36\n Then Jesus went with them\ \ to a place called Gethsemane.Mark 14:32\n And they went to a place that has\ \ the name Gethsemane.The place where Jesus takes his disciples to pray before\ \ his arrest is given the Greek transliteration Γεθσημανῆ (Gethsēmanē). It represents\ \ the Aramaic Gath-Šmānē, meaning 'the oil press' or 'oil vat' (referring to olive\ \ oil).\n\nIn Aramaic, it could be ܓܕܣܡܢ. This place name is more properly an\ \ Aramaized version of an original Hebrew place name. Gath גת is a normal word\ \ for press in Hebrew, generally used for a wine press not an olive press though;\ \ and shemanei שמני is the Hebrew word shemanim שמנים meaning \"oils\", the plural\ \ form of the word shemen שמן, the primary Hebrew word for oil, just in a construct\ \ form (-ei instead of the ordinary plural suffix -im). The word in Aramaic for\ \ \"oil\" is more properly mišḥa (משחא), as also attested in Jewish writings in\ \ Aramaic from the Galilee (see Caspar Levias, A Grammar of Galilean Aramaic,\ \ Jewish Theological Seminary of America, 1986).\n\nGolgotha (Γολγοθᾶ)\nMark 15:22\n\ \ And they took him up to the place Golgotha, which is translated Place of the\ \ Skull.John 19:17\n And carrying his cross by himself, he went out to the so-called\ \ Place of the Skull, which is called in 'Hebrew' Golgotha.Gagūltā Aramaic, means\ \ 'skull'. The name appears in all of the gospels except Luke, which calls the\ \ place simply Kranion (Κρανίον) 'the Skull' in Greek, with no Semitic counterpart.\ \ The name 'Calvary' is taken from the Latin Vulgate translation, Calvaria.\n\n\ In Aramaic, it could be ܓܓܘܠܬܐ. Though this word has the Aramaic final form -ta\ \ / -tha, it is otherwise also closer to the Hebrew word for skull, gulgolet גולגולת,\ \ than to the Aramaic form.\n\nGabbatha (Γαββαθᾶ)\nJohn 19:13\n When Pilate heard\ \ these words, he brought Jesus outside and sat on the judge's bench at a place\ \ called The Stone Pavement, or in Hebrew, Gabbatha.The place name appears to\ \ be Aramaic. According to Josephus, War, V.ii.1, #51, the word Gabath means\ \ high place, or elevated place, so perhaps a raised flat area near the temple.\ \ The final \"א\" could then represent the emphatic state of the noun.\n\nIn\ \ Aramaic, it could be גבהתא.\n\nAkeldama (Ἀκελδαμά) \nActs 1:19\n And this became\ \ known to all the inhabitants of Jerusalem, so that field was called, in their\ \ own dialect, Akeldama, that is Field of Blood.The place of Judas Iscariot's\ \ death is clearly named Field of Blood in Greek. However, the manuscript tradition\ \ gives a number of different spellings of the Aramaic. The Majority Text reads\ \ Ἀκελδαμά (Akeldama); other manuscript versions give Ἀχελδαμάχ (Acheldamach),\ \ Ἁκελδαμά (Hakeldama), Ἁχελδαμά (Hacheldama) and Ἁκελδαμάχ (Hakeldamach). Despite\ \ these variant spellings the Aramaic is most probably ḥqēl dmā, 'field of blood'.\ \ While the seemingly gratuitous Greek sound of kh at the end of the word is\ \ difficult to explain, the Septuagint similarly adds this sound to the end of\ \ the Semitic name Ben Sira to form the Greek name for the Book of Sirakh ().\ \ The sound may be a dialectic feature of either the Greek speakers or the original\ \ Semitic language speakers.\n\nIn Aramaic, it could be חקל דמא.\n\nPool of Bethesda\ \ (Βηθεσδά)\nJohn 5:2\n Now there is in Jerusalem near the Sheep Gate a pool,\ \ which in Aramaic is called Bethesda and which is surrounded by five covered\ \ colonnades.Bethesda was originally the name of a pool in Jerusalem, on the path\ \ of the Beth Zeta Valley, and is also known as the Sheep Pool. Its name in Aramaic\ \ means \"House of Grace\". It is associated with healing. In John 5, Jesus was\ \ reported healing a man at the pool.\n\nFor other Aramaic place names in the\ \ New Testament beginning with beth'' (\"house of\"), see Bethabara, Bethany,\ \ Bethphage and Bethsaida and Bethlehem.\n\nIn Aramaic, \"Bethesda\" could be\ \ spelled בית חסדא.\n\nSee also\n Semitic languages\n\nReferences\n\nSources \n\ \n \n \n \n \n \n \n \n \n \n \n \n\n1st-century Christianity\nJesus\nLanguage\ \ and mysticism\nJesus" - "\n\nCo (continued)\n\nCom\n\n|- class=\"vcard\"\n| class=\"fn org\" | Combe\n\ | class=\"adr\" | East Sussex\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Combe\n| class=\"adr\" | Somerset\n| class=\"note\"\ \ | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Combe (Salcombe)\n\ | class=\"adr\" | Devon\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Combe (Yealmpton)\n| class=\"adr\" | Devon\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Combe\ \ (Buckfastleigh)\n| class=\"adr\" | Devon\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Combe\n| class=\"adr\" | Herefordshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Combe\n| class=\"adr\" | Oxfordshire\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Combe\n| class=\"adr\" | Berkshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Combe Almer\n| class=\"adr\" | Dorset\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Combebow\n| class=\"adr\" | Devon\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Combe Common\n| class=\"adr\" | Surrey\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Combe Down\n| class=\"adr\" | Bath\ \ and North East Somerset\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Combe Fishacre\n| class=\"adr\" | Devon\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Combe\ \ Florey\n| class=\"adr\" | Somerset\n| class=\"note\" | \n| class=\"note\" |\ \ \n|- class=\"vcard\"\n| class=\"fn org\" | Combe Hay\n| class=\"adr\" | Bath\ \ and North East Somerset\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Combeinteignhead\n| class=\"adr\" | Devon\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Combe\ \ Martin\n| class=\"adr\" | Devon\n| class=\"note\" | \n| class=\"note\" | \n\ |- class=\"vcard\"\n| class=\"fn org\" | Combe Moor\n| class=\"adr\" | Herefordshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Combe Pafford\n| class=\"adr\" | Devon\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Combe Raleigh\n| class=\"adr\"\ \ | Devon\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Comberbach\n| class=\"adr\" | Cheshire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Comberford\n| class=\"adr\"\ \ | Staffordshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Comberton\n| class=\"adr\" | Herefordshire\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Comberton\n\ | class=\"adr\" | Cambridgeshire\n| class=\"note\" | \n| class=\"note\" | \n|-\ \ class=\"vcard\"\n| class=\"fn org\" | Combe St Nicholas\n| class=\"adr\" | Somerset\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Combe Throop\n| class=\"adr\" | Somerset\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Combpyne\n| class=\"adr\"\ \ | Devon\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Combrew\n| class=\"adr\" | Devon\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Combridge\n| class=\"adr\" | Staffordshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Combrook\n| class=\"adr\" | Warwickshire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Combs\n| class=\"adr\" | Suffolk\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Combs\n| class=\"adr\" | Kirklees\n| class=\"note\" | \n| class=\"note\" |\ \ \n|- class=\"vcard\"\n| class=\"fn org\" | Combs\n| class=\"adr\" | Derbyshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Combs Ford\n| class=\"adr\" | Suffolk\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Combwich\n| class=\"adr\" | Somerset\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Comers\n| class=\"adr\" | Aberdeenshire\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Come-to-Good\n| class=\"adr\" |\ \ Cornwall\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Comeytrowe\n| class=\"adr\" | Somerset\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Comford\n| class=\"adr\" |\ \ Cornwall\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Comfort\n| class=\"adr\" | Cornwall\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Comhampton\n| class=\"adr\"\ \ | Worcestershire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Comins Coch\n| class=\"adr\" | Ceredigion\n| class=\"note\"\ \ | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Comiston\n\ | class=\"adr\" | City of Edinburgh\n| class=\"note\" | \n| class=\"note\" | \n\ |- class=\"vcard\"\n| class=\"fn org\" | Comley\n| class=\"adr\" | Shropshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Commercial End\n| class=\"adr\" | Cambridgeshire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Commins\n| class=\"adr\" |\ \ Denbighshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n\ | class=\"fn org\" | Commins Coch\n| class=\"adr\" | Powys\n| class=\"note\" |\ \ \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Common Cefn-llwyn\n\ | class=\"adr\" | Monmouthshire\n| class=\"note\" | \n| class=\"note\" | \n|-\ \ class=\"vcard\"\n| class=\"fn org\" | Commondale\n| class=\"adr\" | North Yorkshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Common Edge\n| class=\"adr\" | Lancashire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Common End\n| class=\"adr\"\ \ | Cumbria\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Common End\n| class=\"adr\" | Derbyshire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Common Hill\n| class=\"adr\"\ \ | Herefordshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Common Moor\n| class=\"adr\" | Cornwall\n| class=\"note\"\ \ | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Common Platt\n\ | class=\"adr\" | Wiltshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Commonside\n| class=\"adr\" | Cheshire\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Commonside\n\ | class=\"adr\" | Derbyshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Commonside\n| class=\"adr\" | Nottinghamshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Common Side\n| class=\"adr\" | Cheshire\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Common Side (Heanor)\n| class=\"\ adr\" | Derbyshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Common Side (Barlow)\n| class=\"adr\" | Derbyshire\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Commonwood\n\ | class=\"adr\" | Hertfordshire\n| class=\"note\" | \n| class=\"note\" | \n|-\ \ class=\"vcard\"\n| class=\"fn org\" | Commonwood\n| class=\"adr\" | Wrexham\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Commonwood\n| class=\"adr\" | Shropshire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Common-y-coed\n| class=\"\ adr\" | Monmouthshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Comp\n| class=\"adr\" | Kent\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Compass\n| class=\"adr\" |\ \ Somerset\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Compstall\n| class=\"adr\" | Stockport\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Compton\n| class=\"adr\" |\ \ Berkshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Compton\n| class=\"adr\" | Derbyshire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Compton (Plymouth)\n| class=\"\ adr\" | Devon\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n\ | class=\"fn org\" | Compton (Marldon)\n| class=\"adr\" | Devon\n| class=\"note\"\ \ | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Compton (near\ \ Winchester)\n| class=\"adr\" | Hampshire\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Compton (King's Somborne)\n| class=\"\ adr\" | Hampshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Compton\n| class=\"adr\" | Leeds\n| class=\"note\" | \n\ | class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Compton\n| class=\"\ adr\" | Staffordshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Compton (Guildford)\n| class=\"adr\" | Surrey\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Compton\ \ (Waverley, near Farnham)\n| class=\"adr\" | Surrey\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Compton\n| class=\"adr\" |\ \ West Sussex\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n\ | class=\"fn org\" | Compton\n| class=\"adr\" | Wiltshire\n| class=\"note\" |\ \ \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Compton\n| class=\"\ adr\" | Wolverhampton\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Compton Abbas\n| class=\"adr\" | Dorset\n| class=\"note\"\ \ | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Compton Abdale\n\ | class=\"adr\" | Gloucestershire\n| class=\"note\" | \n| class=\"note\" | \n\ |- class=\"vcard\"\n| class=\"fn org\" | Compton Bassett\n| class=\"adr\" | Wiltshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Compton Beauchamp\n| class=\"adr\" | Oxfordshire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Compton Bishop\n| class=\"\ adr\" | Somerset\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Compton Chamberlayne\n| class=\"adr\" | Wiltshire\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Compton\ \ Common\n| class=\"adr\" | Bath and North East Somerset\n| class=\"note\" | \n\ | class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Compton Dando\n\ | class=\"adr\" | Bath and North East Somerset\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Compton Dundon\n| class=\"\ adr\" | Somerset\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Compton Durville\n| class=\"adr\" | Somerset\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Compton\ \ End\n| class=\"adr\" | Hampshire\n| class=\"note\" | \n| class=\"note\" | \n\ |- class=\"vcard\"\n| class=\"fn org\" | Compton Green\n| class=\"adr\" | Gloucestershire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Compton Greenfield\n| class=\"adr\" | South Gloucestershire\n| class=\"note\"\ \ | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Compton Martin\n\ | class=\"adr\" | Bath and North East Somerset\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Compton Pauncefoot\n| class=\"\ adr\" | Somerset\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Compton Valence\n| class=\"adr\" | Dorset\n| class=\"note\"\ \ | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Comrie\n|\ \ class=\"adr\" | Perth and Kinross\n| class=\"note\" | \n| class=\"note\" | \n\ |- class=\"vcard\"\n| class=\"fn org\" | Comrie\n| class=\"adr\" | Fife\n| class=\"\ note\" | \n| class=\"note\" | \n|}\n\nCon\n\n|- class=\"vcard\"\n| class=\"fn\ \ org\" | Conanby\n| class=\"adr\" | Rotherham\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Conchra\n| class=\"adr\" |\ \ Argyll and Bute\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Concord\n| class=\"adr\" | Sunderland\n| class=\"note\"\ \ | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Conder Green\n\ | class=\"adr\" | Lancashire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Conderton\n| class=\"adr\" | Worcestershire\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Condicote\n\ | class=\"adr\" | Gloucestershire\n| class=\"note\" | \n| class=\"note\" | \n\ |- class=\"vcard\"\n| class=\"fn org\" | Condorrat\n| class=\"adr\" | North Lanarkshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Condover\n| class=\"adr\" | Shropshire\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Coneygar\n| class=\"adr\" | Dorset\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Coney Hall\n| class=\"adr\" | Bromley \n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Coney Hill\n| class=\"adr\" | Gloucestershire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Coneyhurst\n| class=\"adr\" | West Sussex\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coneysthorpe\n| class=\"adr\"\ \ | North Yorkshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Coneythorpe\n| class=\"adr\" | North Yorkshire\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coney\ \ Weston\n| class=\"adr\" | Suffolk\n| class=\"note\" | \n| class=\"note\" | \n\ |- class=\"vcard\"\n| class=\"fn org\" | Conford\n| class=\"adr\" | Hampshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Congdon's Shop\n| class=\"adr\" | Cornwall\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Congelow\n| class=\"adr\"\ \ | Kent\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Congerstone\n| class=\"adr\" | Leicestershire\n| class=\"note\" | \n\ | class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Congham\n| class=\"\ adr\" | Norfolk\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n\ | class=\"fn org\" | Congleton\n| class=\"adr\" | Cheshire\n| class=\"note\" |\ \ \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Congleton Edge\n\ | class=\"adr\" | Cheshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Congl-y-wal\n| class=\"adr\" | Gwynedd\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Congresbury\n\ | class=\"adr\" | North Somerset\n| class=\"note\" | \n| class=\"note\" | \n|-\ \ class=\"vcard\"\n| class=\"fn org\" | Congreve\n| class=\"adr\" | Staffordshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Conham\n| class=\"adr\" | South Gloucestershire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Conicavel\n| class=\"adr\"\ \ | Moray\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Coningsby\n| class=\"adr\" | Lincolnshire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Conington (South Cambridgeshire)\n\ | class=\"adr\" | Cambridgeshire\n| class=\"note\" | \n| class=\"note\" | \n|-\ \ class=\"vcard\"\n| class=\"fn org\" | Conington (Huntingdonshire)\n| class=\"\ adr\" | Cambridgeshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Conisbrough\n| class=\"adr\" | Doncaster\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Conisby\n\ | class=\"adr\" | Argyll and Bute\n| class=\"note\" | \n| class=\"note\" | \n\ |- class=\"vcard\"\n| class=\"fn org\" | Conisholme\n| class=\"adr\" | Lincolnshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Coniston\n| class=\"adr\" | East Riding of Yorkshire\n| class=\"note\" | \n\ | class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coniston\n| class=\"\ adr\" | Cumbria\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n\ | class=\"fn org\" | Coniston Cold\n| class=\"adr\" | North Yorkshire\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Conistone\n\ | class=\"adr\" | North Yorkshire\n| class=\"note\" | \n| class=\"note\" | \n\ |- class=\"vcard\"\n| class=\"fn org\" | Conkwell\n| class=\"adr\" | Wiltshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Connah's Quay\n| class=\"adr\" | Flintshire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Connel\n| class=\"adr\" |\ \ Argyll and Bute\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Connel Park\n| class=\"adr\" | East Ayrshire\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Conniburrow\n\ | class=\"adr\" | Milton Keynes\n| class=\"note\" | \n| class=\"note\" | \n|-\ \ class=\"vcard\"\n| class=\"fn org\" | Connista\n| class=\"adr\" | Highland\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Connon\n| class=\"adr\" | Cornwall\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Connor Downs\n| class=\"adr\" |\ \ Cornwall\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Conock\n| class=\"adr\" | Wiltshire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Conon Bridge\n| class=\"adr\"\ \ | Highland\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n|\ \ class=\"fn org\" | Cononish\n| class=\"adr\" | Stirling\n| class=\"note\" |\ \ \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cononley\n|\ \ class=\"adr\" | North Yorkshire\n| class=\"note\" | \n| class=\"note\" | \n\ |- class=\"vcard\"\n| class=\"fn org\" | Cononley Woodside\n| class=\"adr\" |\ \ North Yorkshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Cononsyth\n| class=\"adr\" | Angus\n| class=\"note\" |\ \ \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Conordan\n|\ \ class=\"adr\" | Highland\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Conquermoor Heath\n| class=\"adr\" | Shropshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Consall\n| class=\"adr\" | Staffordshire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Consett\n| class=\"adr\" |\ \ Durham\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Constable Burton\n| class=\"adr\" | North Yorkshire\n| class=\"note\"\ \ | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Constable\ \ Lee\n| class=\"adr\" | Lancashire\n| class=\"note\" | \n| class=\"note\" | \n\ |- class=\"vcard\"\n| class=\"fn org\" | Constantine\n| class=\"adr\" | Cornwall\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Constantine Bay\n| class=\"adr\" | Cornwall\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Contin\n| class=\"adr\" |\ \ Highland\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Conwy\n| class=\"adr\" | Conwy County Borough\n| class=\"note\" | \n\ | class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Conyer\n| class=\"\ adr\" | Kent\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n|\ \ class=\"fn org\" | Conyer's Green\n| class=\"adr\" | Suffolk\n| class=\"note\"\ \ | \n| class=\"note\" | \n|}\n\nCoo\n\n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Cooden\n| class=\"adr\" | East Sussex\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Cookbury\n| class=\"adr\" | Devon\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Cookbury Wick\n| class=\"adr\" | Devon\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Cookham\n| class=\"adr\" | Berkshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Cookham Dean\n| class=\"adr\" | Berkshire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cookham Rise\n| class=\"adr\"\ \ | Berkshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n\ | class=\"fn org\" | Cookhill\n| class=\"adr\" | Worcestershire\n| class=\"note\"\ \ | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cooklaw\n\ | class=\"adr\" | Northumberland\n| class=\"note\" | \n| class=\"note\" | \n|-\ \ class=\"vcard\"\n| class=\"fn org\" | Cookley\n| class=\"adr\" | Worcestershire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Cookley Green\n| class=\"adr\" | Oxfordshire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cookney\n| class=\"adr\" |\ \ Aberdeenshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n\ | class=\"fn org\" | Cookridge\n| class=\"adr\" | Leeds\n| class=\"note\" | \n\ | class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cooksbridge\n| class=\"\ adr\" | East Sussex\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Cooksey Corner\n| class=\"adr\" | Worcestershire\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cooksey\ \ Green\n| class=\"adr\" | Worcestershire\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Cook's Green\n| class=\"adr\" |\ \ Suffolk\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Cook's Green\n| class=\"adr\" | Essex\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cookshill\n| class=\"adr\"\ \ | City of Stoke-on-Trent\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Cooksland\n| class=\"adr\" | Cornwall\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cooksmill\ \ Green\n| class=\"adr\" | Essex\n| class=\"note\" | \n| class=\"note\" | \n|-\ \ class=\"vcard\"\n| class=\"fn org\" | Cooksongreen\n| class=\"adr\" | Cheshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Coolham\n| class=\"adr\" | West Sussex\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Coolhurst Wood\n| class=\"adr\"\ \ | West Sussex\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n\ | class=\"fn org\" | Cooling\n| class=\"adr\" | Kent\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coolinge\n| class=\"adr\"\ \ | Kent\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Cooling Street\n| class=\"adr\" | Kent\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coombe\n| class=\"adr\" |\ \ Buckinghamshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Coombe (Bude)\n| class=\"adr\" | Cornwall\n| class=\"note\"\ \ | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coombe (Camborne)\n\ | class=\"adr\" | Cornwall\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Coombe (Liskeard)\n| class=\"adr\" | Cornwall\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Coombe (St Austell)\n| class=\"adr\" | Cornwall\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coombe (Truro)\n| class=\"\ adr\" | Cornwall\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Coombe (Sidmouth)\n| class=\"adr\" | Devon\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coombe\ \ (Teignmouth)\n| class=\"adr\" | Devon\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Coombe (Tiverton)\n| class=\"adr\"\ \ | Devon\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Coombe\n| class=\"adr\" | Gloucestershire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coombe\n| class=\"adr\" |\ \ Hampshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Coombe\n| class=\"adr\" | Kent\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Coombe\n| class=\"adr\" | Kingston\ \ upon Thames\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n\ | class=\"fn org\" | Coombe (Crewkerne)\n| class=\"adr\" | Somerset\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coombe\ \ (Taunton)\n| class=\"adr\" | Somerset\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Coombe\n| class=\"adr\" | Wiltshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Coombe Bissett\n| class=\"adr\" | Wiltshire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coombe Dingle\n| class=\"\ adr\" | City of Bristol\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Coombe Keynes\n| class=\"adr\" | Dorset\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coombelake\n\ | class=\"adr\" | Devon\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Coombes\n| class=\"adr\" | West Sussex\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coombesdale\n\ | class=\"adr\" | Staffordshire\n| class=\"note\" | \n| class=\"note\" | \n|-\ \ class=\"vcard\"\n| class=\"fn org\" | Coombeswood\n| class=\"adr\" | Dudley\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Coomb Hill\n| class=\"adr\" | Kent\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Coomb Islands\n| class=\"adr\"\ \ | Highland\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n|\ \ class=\"fn org\" | Coombs End\n| class=\"adr\" | South Gloucestershire\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coombses\n\ | class=\"adr\" | Somerset\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Coopersale\n| class=\"adr\" | Essex\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coopersale\ \ Street\n| class=\"adr\" | Essex\n| class=\"note\" | \n| class=\"note\" | \n\ |- class=\"vcard\"\n| class=\"fn org\" | Cooper's Corner\n| class=\"adr\" | Kent\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Cooper's Green\n| class=\"adr\" | East Sussex\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cooper's Green\n| class=\"\ adr\" | Hertfordshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Cooper's Hill\n| class=\"adr\" | Bedfordshire\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cooper's\ \ Hill\n| class=\"adr\" | Surrey\n| class=\"note\" | \n| class=\"note\" | \n|-\ \ class=\"vcard\"\n| class=\"fn org\" | Cooper Street\n| class=\"adr\" | Kent\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Cooper Turning\n| class=\"adr\" | Bolton\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cootham\n| class=\"adr\" |\ \ West Sussex\n| class=\"note\" | \n| class=\"note\" | \n|}\n\nCop\n\n|- class=\"\ vcard\"\n| class=\"fn org\" | Copcut\n| class=\"adr\" | Worcestershire\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Copdock\n\ | class=\"adr\" | Suffolk\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Coped Hall\n| class=\"adr\" | Wiltshire\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Copenhagen\n\ | class=\"adr\" | Denbighshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Copford\n| class=\"adr\" | Essex\n| class=\"note\"\ \ | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Copford Green\n\ | class=\"adr\" | Essex\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Copgrove\n| class=\"adr\" | North Yorkshire\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Copinsay\n\ | class=\"adr\" | Orkney Islands\n| class=\"note\" | \n| class=\"note\" | \n|-\ \ class=\"vcard\"\n| class=\"fn org\" | Copister\n| class=\"adr\" | Shetland Islands\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Coplandhill\n| class=\"adr\" | Aberdeenshire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cople\n| class=\"adr\" | Bedfordshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Copley\n| class=\"adr\" | Durham\n| class=\"note\" | \n| class=\"note\" |\ \ \n|- class=\"vcard\"\n| class=\"fn org\" | Copley\n| class=\"adr\" | Calderdale\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Copley\n| class=\"adr\" | Tameside\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Copley Hill\n| class=\"adr\" |\ \ Kirklees\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Coplow Dale\n| class=\"adr\" | Derbyshire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Copmanthorpe\n| class=\"adr\"\ \ | York\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Copmere End\n| class=\"adr\" | Staffordshire\n| class=\"note\" | \n\ | class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Copnor\n| class=\"\ adr\" | City of Portsmouth\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Copp\n| class=\"adr\" | Lancashire\n| class=\"note\"\ \ | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coppathorne\n\ | class=\"adr\" | Cornwall\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Coppenhall\n| class=\"adr\" | Cheshire\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coppenhall\n\ | class=\"adr\" | Staffordshire\n| class=\"note\" | \n| class=\"note\" | \n|-\ \ class=\"vcard\"\n| class=\"fn org\" | Coppenhall Moss\n| class=\"adr\" | Cheshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Copperhouse\n| class=\"adr\" | Cornwall\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Coppice\n| class=\"adr\" | Oldham\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Coppicegate\n| class=\"adr\" | Shropshire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coppingford\n| class=\"adr\"\ \ | Cambridgeshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Coppins Corner\n| class=\"adr\" | Kent\n| class=\"note\"\ \ | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coppleham\n\ | class=\"adr\" | Somerset\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Copplestone\n| class=\"adr\" | Devon\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coppull\n\ | class=\"adr\" | Lancashire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Coppull Moor\n| class=\"adr\" | Wigan\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Copsale\n\ | class=\"adr\" | West Sussex\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Copse Hill\n| class=\"adr\" | Merton\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Copshaw\ \ Holm (Newcastleton)\n| class=\"adr\" | Scottish Borders\n| class=\"note\" |\ \ \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Copster Green\n\ | class=\"adr\" | Lancashire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Copster Hill\n| class=\"adr\" | Oldham\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Copston\ \ Magna\n| class=\"adr\" | Warwickshire\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Cop Street\n| class=\"adr\" | Kent\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Copt Green\n| class=\"adr\" | Warwickshire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Copthall Green\n| class=\"\ adr\" | Essex\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n\ | class=\"fn org\" | Copt Heath\n| class=\"adr\" | Solihull\n| class=\"note\"\ \ | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Copt Hewick\n\ | class=\"adr\" | North Yorkshire\n| class=\"note\" | \n| class=\"note\" | \n\ |- class=\"vcard\"\n| class=\"fn org\" | Copthill\n| class=\"adr\" | Durham\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Copthorne\n| class=\"adr\" | West Sussex\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Copthorne\n| class=\"adr\"\ \ | Cornwall\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n|\ \ class=\"fn org\" | Copthorne\n| class=\"adr\" | Cheshire\n| class=\"note\" |\ \ \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Copthorne\n\ | class=\"adr\" | Shropshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Coptiviney\n| class=\"adr\" | Shropshire\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Copt\ \ Oak\n| class=\"adr\" | Leicestershire\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Copton\n| class=\"adr\" | Kent\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Copy's Green\n| class=\"adr\" | Norfolk\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Copythorne\n| class=\"adr\" | Hampshire\n\ | class=\"note\" | \n| class=\"note\" | \n|}\n\nCoq\n\n|- class=\"vcard\"\n| class=\"\ fn org\" | Coquet Island\n| class=\"adr\" | Northumberland\n| class=\"note\" |\ \ \n| class=\"note\" | \n|}\n\nCor\n\n|- class=\"vcard\"\n| class=\"fn org\" |\ \ Corarnstilbeg\n| class=\"adr\" | Highland\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Corbets Tey\n| class=\"adr\" |\ \ Havering\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Corbridge\n| class=\"adr\" | Northumberland\n| class=\"note\" | \n\ | class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Corbriggs\n| class=\"\ adr\" | Derbyshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Corby\n| class=\"adr\" | Northamptonshire\n| class=\"note\"\ \ | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Corby Glen\n\ | class=\"adr\" | Lincolnshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Corby Hill\n| class=\"adr\" | Cumbria\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cordon\n\ | class=\"adr\" | North Ayrshire\n| class=\"note\" | \n| class=\"note\" | \n|-\ \ class=\"vcard\"\n| class=\"fn org\" | Cordwell\n| class=\"adr\" | Norfolk\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Coreley\n| class=\"adr\" | Shropshire\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Cores End\n| class=\"adr\" | Buckinghamshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Corfe\n| class=\"adr\" | Somerset\n| class=\"note\" | \n| class=\"note\" |\ \ \n|- class=\"vcard\"\n| class=\"fn org\" | Corfe Castle\n| class=\"adr\" | Dorset\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Corfe Mullen\n| class=\"adr\" | Dorset\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Corfhouse\n| class=\"adr\" | Argyll\ \ and Bute\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Corfton\n| class=\"adr\" | Shropshire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Corfton Bache\n| class=\"\ adr\" | Shropshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Corgarff\n| class=\"adr\" | Aberdeenshire\n| class=\"note\"\ \ | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Corgee\n|\ \ class=\"adr\" | Cornwall\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Corhampton\n| class=\"adr\" | Hampshire\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Corlannau\n\ | class=\"adr\" | Neath Port Talbot\n| class=\"note\" | \n| class=\"note\" | \n\ |- class=\"vcard\"\n| class=\"fn org\" | Corley\n| class=\"adr\" | Warwickshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Corley Ash\n| class=\"adr\" | Warwickshire\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Corley Moor\n| class=\"adr\"\ \ | Coventry\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n|\ \ class=\"fn org\" | Cornaa\n| class=\"adr\" | Isle of Man\n| class=\"note\" |\ \ \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cornaigbeg\n\ | class=\"adr\" | Argyll and Bute\n| class=\"note\" | \n| class=\"note\" | \n\ |- class=\"vcard\"\n| class=\"fn org\" | Cornaigmore\n| class=\"adr\" | Argyll\ \ and Bute\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Cornard Tye\n| class=\"adr\" | Suffolk\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cornbank\n| class=\"adr\"\ \ | Midlothian\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n\ | class=\"fn org\" | Cornbrook\n| class=\"adr\" | Shropshire\n| class=\"note\"\ \ | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Corner Row\n\ | class=\"adr\" | Lancashire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Cornett\n| class=\"adr\" | Herefordshire\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Corney\n\ | class=\"adr\" | Cumbria\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Cornforth\n| class=\"adr\" | Durham\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cornhill\n\ | class=\"adr\" | Powys\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Cornhill\n| class=\"adr\" | Aberdeenshire\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cornhill\n\ | class=\"adr\" | Highland\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Cornhill\n| class=\"adr\" | City of Stoke-on-Trent\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Cornhill\n| class=\"adr\" | City of Aberdeen\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cornhill on-Tweed\n| class=\"\ adr\" | Northumberland\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Corn Holm\n| class=\"adr\" | Orkney Islands\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cornholme\n\ | class=\"adr\" | Calderdale\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Cornish Hall End\n| class=\"adr\" | Essex\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cornriggs\n\ | class=\"adr\" | Durham\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Cornsay\n| class=\"adr\" | Durham\n| class=\"note\"\ \ | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Cornsay Colliery\n\ | class=\"adr\" | Durham\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Cornton\n| class=\"adr\" | Stirling\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Corntown\n\ | class=\"adr\" | Highland\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Corntown\n| class=\"adr\" | The Vale of Glamorgan\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Cornwell\n| class=\"adr\" | Oxfordshire\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Cornwood\n| class=\"adr\" | Devon\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Cornworthy\n| class=\"adr\" | Devon\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Corpach\n| class=\"adr\" | Highland\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Corpusty\n| class=\"adr\" | Norfolk\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Corran (Loch Hourn)\n| class=\"\ adr\" | Highland\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Corran (Lochaber)\n| class=\"adr\" | Highland\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Corrany\n\ | class=\"adr\" | Isle of Man\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Corrie\n| class=\"adr\" | North Ayrshire\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Corrie\ \ Common\n| class=\"adr\" | Dumfries and Galloway\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Corriecravie\n| class=\"adr\"\ \ | North Ayrshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\ \n| class=\"fn org\" | Corriedoo\n| class=\"adr\" | Dumfries and Galloway\n| class=\"\ note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Corrigall\n\ | class=\"adr\" | Orkney Islands\n| class=\"note\" | \n| class=\"note\" | \n|-\ \ class=\"vcard\"\n| class=\"fn org\" | Corrimony\n| class=\"adr\" | Highland\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Corringham\n| class=\"adr\" | Essex\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Corringham\n| class=\"adr\" | Lincolnshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Corris\n| class=\"adr\" | Gwynedd\n| class=\"note\" | \n| class=\"note\" |\ \ \n|- class=\"vcard\"\n| class=\"fn org\" | Corris Uchaf\n| class=\"adr\" | Gwynedd\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Corry\n| class=\"adr\" | Highland\n| class=\"note\" | \n| class=\"note\" |\ \ \n|- class=\"vcard\"\n| class=\"fn org\" | Corsback\n| class=\"adr\" | Highland\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Corscombe\n| class=\"adr\" | Dorset\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Corse\n| class=\"adr\" | Gloucestershire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Corse\n| class=\"adr\" | Aberdeenshire\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Corse Lawn\n| class=\"adr\" | Gloucestershire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Corsham\n| class=\"adr\" | Wiltshire\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Corsiehill\n| class=\"adr\" | Perth\ \ and Kinross\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n\ | class=\"fn org\" | Corsley\n| class=\"adr\" | Wiltshire\n| class=\"note\" |\ \ \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Corsley Heath\n\ | class=\"adr\" | Wiltshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Corsock\n| class=\"adr\" | Dumfries and Galloway\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Corston\n| class=\"adr\" | Wiltshire\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Corston\n| class=\"adr\" | Bath\ \ and North East Somerset\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"\ vcard\"\n| class=\"fn org\" | Corstorphine\n| class=\"adr\" | City of Edinburgh\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Cortachy\n| class=\"adr\" | Angus\n| class=\"note\" | \n| class=\"note\" |\ \ \n|- class=\"vcard\"\n| class=\"fn org\" | Corton\n| class=\"adr\" | Wiltshire\n\ | class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"fn org\"\ \ | Corton\n| class=\"adr\" | Suffolk\n| class=\"note\" | \n| class=\"note\" |\ \ \n|- class=\"vcard\"\n| class=\"fn org\" | Corton Denham\n| class=\"adr\" |\ \ Somerset\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Cortworth\n| class=\"adr\" | Rotherham\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Corwen\n| class=\"adr\" |\ \ Denbighshire\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n\ | class=\"fn org\" | Cory\n| class=\"adr\" | Devon\n| class=\"note\" | \n| class=\"\ note\" | \n|- class=\"vcard\"\n| class=\"fn org\" | Coryates\n| class=\"adr\"\ \ | Dorset\n| class=\"note\" | \n| class=\"note\" | \n|- class=\"vcard\"\n| class=\"\ fn org\" | Coryton\n| class=\"adr\" | Devon\n| class=\"note\" | \n| class=\"note\"\ \ | \n|- class=\"vcard\"\n| class=\"fn org\" | Coryton\n| class=\"adr\" | Cardiff\n\ | class=\"note\" | \n| class=\"note\" | \n|}" - source_sentence: What were the names of the regular battalions in the King's Own (Royal Lancaster Regiment)? sentences: - "The 1st Royal Lancashire Militia (The Duke of Lancaster's Own) was an auxiliary\ \ regiment raised in the county of Lancashire in North West England during the\ \ 17th Century. Primarily intended for home defence, it saw active service in\ \ Ireland under King William III, as well as against the Jacobite Risings of 1715\ \ and 1745. It spent long periods on defence duties during the wars of the 18th\ \ Century and early 19th Century, and was stationed on the Ionian Islands during\ \ the Crimean War. It later became part of the King's Own (Royal Lancaster Regiment)\ \ and saw active service in the Second Boer War. After its conversion to the Special\ \ Reserve under the Haldane Reforms, it supplied reinforcements to the fighting\ \ battalions during World War I. After a shadowy postwar existence the unit was\ \ finally disbanded in 1953.\n\nBackground\n\nUniversal obligation to military\ \ service in the Shire levy was long established in England, and its legal basis\ \ was updated by two Acts of 1557. This legislation placed selected men, the 'Trained\ \ Bands', under the command of a Lord Lieutenant appointed by the monarch; this\ \ is seen as the starting date for the organised county militia in England. The\ \ trained bands were an important element in the country's defence at the time\ \ of the Armada in the 1580s, and control of the bands was an area of dispute\ \ between King Charles I and Parliament that led to the English Civil War. Lord\ \ Wharton had been appointed Lord Lieutenant of Lancashire by Parliament in 1641,\ \ and on the outbreak of hostilities in July 1642 he attempted to seize the trained\ \ bands' magazine at Manchester. However, he was forestalled by Lord Strange and\ \ William Farington (appointed Commissioner of Array by the King), who had already\ \ gained control of the magazines at Liverpool and Preston for the Royalists.\ \ The resulting skirmish at Manchester on 15 July, when Strange and his men were\ \ driven out by Wharton's Parliamentarians, was among the first battles of the\ \ war.\n\nOnce Parliament had established full control in 1648 it passed new Militia\ \ Acts that replaced lords lieutenant with county commissioners, who were appointed\ \ by Parliament or the Council of State, after which the term 'Trained Band' began\ \ to disappear in most counties. Under the Commonwealth and Protectorate, the\ \ militia received pay when called out and operated alongside the New Model Army\ \ to control the country.\n\nOld County Regiment\n\nAfter the Restoration of the\ \ Monarchy, the English Militia was re-established by the Militia Act of 1661\ \ under the control of the king's lords-lieutenant, the men to be selected by\ \ ballot. It was popularly seen as the 'Constitutional Force' to counterbalance\ \ a 'Standing Army', a concept that was tainted by association with the New Model\ \ Army that had supported Cromwell's military dictatorship, and almost the whole\ \ burden of home defence and internal security was entrusted to the militia. \n\ \nThe Lancashire Militia were called out in 1663 when there were rumours of plots\ \ against the new regime, and no sooner had they been sent home in October than\ \ they were called out again on receipt of new information. Some counties were\ \ slacking in training and equipping their men: in 1674 most of the weapons of\ \ the Lancashire Militia were found to be defective, and many had to be replaced\ \ again in 1689.\n\nNine Years' War\nFollowing the Glorious Revolution, in which\ \ King William III supplanted James II, the militia were called out in 1689. The\ \ Lord Lieutenant of Lancashire, William Stanley, 9th Earl of Derby, organised\ \ three regiments of foot and three Troops of horse from the County palatine of\ \ Lancaster:\n Colonel the Earl of Derby – 7 companies\n Colonel Roger Nowell\ \ – 7 companies\n Colonel Alexander Rigby – 8 companies\n The Earl of Derby's\ \ Troop\n Captain Thomas Greenhalgh's Troop\n Captain Sir Roger Bradshaigh's Troop.\n\ \nThese regiments volunteered for service in William's campaign in Ireland. After\ \ training on Fulwood Moor, near Preston, the Lancashire brigade, commanded by\ \ the Earl of Derby's brother, Lieutenant-Colonel the Hon James Stanley (1st Foot\ \ Guards), sailed with the army from Wallasey and landed at Carrickfergus on 14\ \ June 1690. It played a full part in the campaign, serving in the Siege of Carrickfergus,\ \ at the Battle of the Boyne, and the Siege of Athlone. After a short tour of\ \ garrison duty in Dublin, the Lancashire brigade embarked at Howth in September\ \ to return to England to be disembodied on 15 October. Lieutenant-Colonel Stanley\ \ then recruited a number of veterans from the brigade for the regiment he was\ \ joining in Flanders. He succeeded to the command after his colonel was killed\ \ at the Battle of Steenkerque, after which the unit became 'Stanley's Regiment'\ \ (later the Bedfordshire Regiment). Colonel Stanley succeeded his brother as\ \ 10th Earl of Derby and Lord Lieutenant of Lancashire in 1702.\n\nAt the end\ \ of the Nine Years War in 1697 the militia in Lancashire consisted of 1601 men\ \ organized into 22 companies and three regiments, with 150 horsemen in three\ \ Troops. The three colonels were Major-General the Earl of Macclesfield (lord\ \ lieutenant), Roger Kirkby, MP, and Sir Ralph Assheton, 2nd Baronet, of Middleton,\ \ MP.\n\nJacobite Rising of 1715\n\nAfter the outbreak of the Jacobite Rising\ \ of 1715 the Lancashire Militia was ordered in August to assemble at Lancaster\ \ Castle under the command of Col Philip Hoghton. He found that fewer than half\ \ of the balloted men turned out, only 560 in all, enough to organise a single\ \ battalion. When a force of reputedly 3–4000 Scottish Highlanders and English\ \ Jacobites advanced from Carlisle, Hoghton was ordered to fall back from Lancaster\ \ to Preston to await further orders. He marched out early on 7 November and the\ \ Jacobites entered Lancaster the same day, taking over the ordnance stores in\ \ the castle. From Preston the Lancashire Militia and a newly arrived regiment\ \ of dragoons were ordered to Wigan, and the Jacobites occupied Preston on 9 November,\ \ where they built street barricades and placed the town in a state of defence.\ \ However, they were disappointed by the small number of Lancashire Jacobites\ \ who joined them, about 1200 badly-armed men. Major-General Charles Wills reached\ \ Wigan from Manchester on 11 November with a considerable force of government\ \ troops. Further troops under Lieutenant-General George Carpenter were also approaching\ \ from Clitheroe.\n\nWills advanced on Preston next day, and finding the bridge\ \ over the River Ribble unguarded, began his attack on the town. Brigadier-General\ \ Philip Honywood led the Lancashire Militia together with three dismounted troops\ \ of dragoons against the barricade at the west end of Fishergate. They first\ \ stormed the houses west of the churchyard and set fire to them as a diversion\ \ to assist the column attacking the churchyard barricade, and then moved against\ \ Fishergate, preceded by skirmishers. Colonel Hoghton detached the left wing\ \ of the Lancashire Militia and a troop of dragoons to attack the Friargate barricade\ \ while he led the right wing and remaining dragoons in columns of attack against\ \ Fishergate. Hoghton and his men reached the top of the barricade but were driven\ \ back by heavy musketry fire from the neighbouring houses, having suffered serious\ \ casualties; Honywood ordered them to withdraw. The attack at Friargate fared\ \ no better. But the Government troops renewed the attack after dark, Col Hoghton\ \ leading his men silently up to the Fishergate barricade then rushed it with\ \ the bayonet. The rebels took refuge in the houses, which were set on fire, and\ \ the street fighting continued by the light of the fires. Carpenter's troops\ \ arrived in the morning, to relieve the exhausted militia and completely invest\ \ the town, poised to complete the task of capturing it. A brigade of Dutch troops\ \ was also about to arrive, having marched from London. The rebel commanders,\ \ realising that they could hold out no longer, surrendered.\n\nThe Lancashire\ \ Militia had four officers killed, seven wounded, and 105 non-commissioned officers\ \ (NCOs) and privates killed and wounded, around a third of the total government\ \ casualties at the Battle of Preston. On 16 November the regiment marched back\ \ to Lancaster with 250 prisoners to be lodged in the castle. It remained there\ \ for the rest of the year, escorting parties of prisoners for trial, until it\ \ was disembodied about 15 January 1716.\n\nJacobite Rising of 1745\nThe Lancashire\ \ Militia was next called out for service against the Jacobite Rising of 1745.\ \ Orders to embody the militia were issued to the lord lieutenant, Edward Stanley,\ \ 11th Earl of Derby, on 26 September after the government's forces had been defeated\ \ at the Battle of Prestonpans. Derby complained that although there were sufficient\ \ weapons (though of poor quality), the three regiments of foot and three troops\ \ of horse had not been called out for training in the 30 years since the Battle\ \ of Preston. He and his deputy lieutenants scrambled to raise money and find\ \ officers and army pensioners who could train the raw troops gathering at Bury.\ \ By 5 November Derby had assembled a regiment of eight companies. The Lancaster\ \ and Lonsdale Company, under the command of Captain William Bradshaw, was left\ \ at Lancaster to guard the ordnance stores and prison there. Major William Ffarington\ \ of Shaw Hall, Leyland, was sent with a detachment of two companies to guard\ \ Chorley. In the meantime, the Corporation of Liverpool had raised a 648-strong\ \ volunteer regiment, the Liverpool Blues, which was fully armed and could be\ \ put into the field. \n\nOn 17 November the Jacobite army reached Carlisle, which\ \ soon surrendered, and began moving south. Two days later Derby ordered the companies\ \ at Bury and Chorley to concentrate at Liverpool, and ordered Bradshaw to requisition\ \ as many waggons and carts as he could to move the ordnance stores out of Lancaster\ \ to 'a secure and secret place' at Ulverston. These moves were carried out next\ \ day, regimental headquarters (HQ) was established at the Talbot Hotel in Liverpool,\ \ and the Earl handed over command to Maj Ffarington. The commander of the government\ \ forces, Field Marshal George Wade, advised the militia to operate in small bodies\ \ to harry the advancing rebel army, firing from hedges and preventing it from\ \ sending out plundering parties. The Jacobites reached Lancaster on 24 November\ \ and Preston on 27 November, while detachments marched through Wigan, Chorley\ \ and Bolton. They hoped to gather recruits in Lancashire but were disappointed\ \ until they reached Manchester on 28 November, where there were sufficient volunteers\ \ to form the Manchester Regiment.\n\nThe Liverpool Blues, being better armed\ \ and equipped than the Lancashire Militia, were sent out on 29 November under\ \ Colonel Campbell to Warrington to prevent the rebels from using the bridge over\ \ the Mersey. As darkness approached they opened fire on what was thought to be\ \ a group of Highlanders but turned out to be a flock of geese. Next day they\ \ repulsed the Jacobite detachment from Preston, and broke down Warrington Bridge.\ \ On 1 December Col Campbell marched to Cheadle and Stockport, blowing up the\ \ bridges there and forcing the Jacobite artillery and baggage to cross by temporary\ \ rafts. After feinting towards Wales, the Jacobites reached Derby on 4 December.\ \ Government forces were now closing in on the Jacobite army and it was clear\ \ that there was not going to be an uprising in their favour in England. The Jacobite\ \ commanders decided to retreat to Scotland. Hindered by the Liverpool Blues'\ \ demolitions, they did not reach Manchester until 8 December, with stragglers\ \ being picked off by the Blues.\n\nThe advance guards of the government forces\ \ under Maj-Gens James Oglethorpe and Sir John Ligonier joined the Liverpool Blues\ \ at Lancaster on 14 December. Next day Capt Bradshaw and his company (95 all\ \ ranks) arrived from Ulverston with orders to put himself under Campbell's command.\ \ By now the Duke of Cumberland had arrived to take overall command, and he sent\ \ Oglethorpe with his dragoons and the Liverpool Blues to harry the Jacobite rearguard.\ \ They marched via Kendal (17 December) and continued over Shap Fell in moonlight\ \ and a snowstorm to surprise the Jacobites next morning. The dragoons pursued\ \ the Jacobite rearguard through Shap village as far as Clifton Moor, where the\ \ Jacobites were drawn up to cover the retreat of their guns across the bridges\ \ into Penrith. The Liverpool Blues deployed in front of Clifton, with Bradshaw's\ \ company and some dragoons covering the road at Clifton Dykes. They piled arms\ \ and cooked a meal, then at 20.00 that evening Oglethorpe ordered them to advance\ \ in support of his dragoons. Bradshaw's company formed on the right of the Liverpool\ \ Blues (the position taken by the grenadier company in a line regiment). The\ \ delaying action (the Clifton Moor Skirmish) was well handled by the Jacobite\ \ commander, Lord George Murray, who led a counter-charge of Highlanders, and\ \ Oglethorpe was blamed for the heavy losses suffered by his dragoons in their\ \ dismounted attack. The Liverpool Blues followed the Highlanders with volley\ \ fire, but the Jacobites succeeded in reaching Penrith with the loss of a few\ \ guns and waggons. Bradshaw commended Corporal Shaw of his company for rescuing\ \ three people from a burning house in Clifton. The company had lost one killed\ \ and three wounded in the two skirmishes at Shap and Clifton\n\nCumberland's\ \ army followed the Jacobites through Penrith to Carlisle. The Lancashire Militia\ \ company was left at Penrith to guard the prisoners, while the Liverpool Blues\ \ were present at the 10-day siege of Carlisle Castle. Cumberland marched into\ \ Scotland on 4 January 1746 (finally defeating the Jacobites at the Battle of\ \ Culloden on 16 April) while the Liverpool Blues escorted the prisoners from\ \ Carlisle (including those of the Manchester Regiment) to Lancashire for trial.\ \ Bradshaw's company similarly escorted the prisoners from Penrith to Lancaster.\ \ The Lancashire Militia was then disembodied on 12 January 1746; it was not called\ \ out again for training or active service until the Seven Years' War.\n\n1st\ \ Royal Lancashire Militia\n\nSeven Years' War\nUnder threat of French invasion\ \ during the Seven Years' War a series of Militia Acts from 1757 reorganised the\ \ county militia regiments, the men being conscripted by means of parish ballots\ \ (paid substitutes were permitted) to serve for three years. Lancashire's quota\ \ was set at 800 men in one regiment, but despite the enthusiasm of the acting\ \ lord lieutenant, Lord Strange, the county was slow to raise its quota. A regiment\ \ would have its arms issued from the Tower of London when it reached 60 per cent\ \ of its established strength, but in the case of Lancashire this was not until\ \ 18 July 1760, and the regiment was finally embodied for service on 23 December\ \ that year.\n\nThe regiment assembled on 28 December with six companies at Preston\ \ and four at Manchester. After training, it marched on 9 July 1761 to join other\ \ militia regiments at Warley Camp in Essex, arriving on 13 August. On 15 October\ \ King George III presented the Lancashire Militia with its new Regimental Colours,\ \ and on 23 October they were granted the title Royal Lancashire Militia (RLM)\ \ with the colonel's company designated 'the King's Company'. The regiment then\ \ marched to Nottingham for winter quarters. On 11 June 1762 the regiment was\ \ marched south again to join the militia camp at Winchester in Hampshire on 30\ \ June. Preliminaries of peace having been signed, the regiment was ordered on\ \ 18 October to march back to Lancashire, where it was disembodied at Manchester\ \ on 15 December 1762.\n\nIn peacetime, the reformed militia regiments were supposed\ \ to be assembled for 28 days' annual training. In 1763 part of the RLM camped\ \ at Fulwood Moor near Preston from 18 May to 14 June, but it was not called out\ \ again until 1778.\n\nWar of American Independence\nThe militia was called out\ \ after the outbreak of the War of American Independence when the country was\ \ threatened with invasion by the Americans' allies, France and Spain. The Royal\ \ Warrant for the embodiment of the Royal Lancashire Militia was issued on 26\ \ March and the regiment was embodied on 1 April 1778 under the command of the\ \ 12th Earl of Derby. After six weeks' training the regiment was marched to camp\ \ at Winchester. In October it was billeted among small Hampshire towns: Lymington\ \ (HQ + 3 companies), Romsey (3 companies), Ringwood, Christchurch, Downton and\ \ Fordingbridge (1 company each). Then in November it marched back to Liverpool\ \ for the winter, setting up its HQ at the Talbot Hotel once more.\n\nWhile at\ \ Liverpool a large number of unfit and time-expired men were discharged and a\ \ new ballot held to refill the ranks, necessitating a great deal of training.\ \ In June 1779 the regiment moved to Newcastle upon Tyne, with two companies detached\ \ to Sunderland until February 1780 when they relieved the Regular garrison of\ \ Tynemouth Castle. In June 1780 the regiment marched to Chester Castle; three\ \ companies were detached at Macclesfield and two at Nantwich. It spent the winter\ \ from November 1780 at Manchester, with some companies detached to Warrington.\ \ In June 1781 two companies each from Manchester and Warrington moved to Chester,\ \ returning to Warrington the following November. By now the regiment was organised\ \ like the regulars with a Grenadier Company (the King's Company), a Light Company,\ \ and eight line or 'hat' companies. From April 1782 the regiment was broken up\ \ in detachments across Cumberland: Carlisle Castle (4 companies), Cockermouth\ \ (2 companies), Workington (2 companies), Whitehaven and Maryport (1 company\ \ each). Although Cumberland was remote from a possible French invasion, Whitehaven\ \ had been attacked by John Paul Jones in 1778. The regiment remained at these\ \ stations until 22 January 1783, when two companies were ordered from Carlisle\ \ Castle to Lancaster, and then on 17 February marched with HQ from Lancaster\ \ to Manchester. By now a peace treaty had been drawn up (it was signed in September)\ \ and orders were issued to the Earl of Derby on 28 February to disembody the\ \ RLM. This was carried out at Manchester in March 1783. The Earl of Derby then\ \ resigned the colonelcy to concentrate on his parliamentary duties; he nominated\ \ a distant kinsman, Thomas Stanley of Cross Hill, MP, to succeed him.\n\nFrom\ \ 1784 to 1792 the militia were generally assembled for their 28 days' annual\ \ training, but to save money only two-thirds of the men were actually called\ \ out each year. However, it appears that the Royal Lancashire Militia did no\ \ training until the Stanleys called them out in 1790.\n\nFrench Revolutionary\ \ War\nThe militia were re-embodied in January 1793 shortly before Revolutionary\ \ France declared war on Britain. The Royal Lancashire Militia assembled at Preston\ \ on 22 January, but on 25 January were ordered to disperse across Lancashire\ \ – Liverpool (4 companies), Wigan (3 companies), Blackburn (2 companies) and\ \ Chorley (1 company) – which hindered training. \n\nDuring the French Wars the\ \ militia were employed anywhere in the country for coast defence, manning garrisons,\ \ guarding prisoners of war, and for internal security, while the regulars regarded\ \ them as a source of trained men if they could be persuaded to transfer. Their\ \ traditional local defence duties were taken over by the part-time Volunteers\ \ and later by a compulsory Local Militia.\n\nIn February 1793 the civil authorities\ \ in the West Riding of Yorkshire feared an outbreak of disorder and requested\ \ a military force. The RLM was sent, with HQ and four companies going to Leeds,\ \ three companies to Halifax, then to Sheffield and Barnsley, and three to Wakefield,\ \ Horset and Horbury. When regular troops arrived to keep the peace in May the\ \ RLM was moved to Doncaster, with detached companies at Bawtry, Blyth, Retford\ \ and Moorgate. During the rest of the year companies and pairs of companies went\ \ out to other towns before returning to Doncaster. In April 1794 the regiment\ \ was moved to the East Midlands, with six companies at Stamford and four at Peterborough.\ \ In June 1794 the RLM joined the great anti-invasion camp on the South Downs\ \ above Brighton, which included regular and fencible regiments as well as militia.\ \ In November it moved to winter quarters across Kent, with HQ at Canterbury Barracks.\ \ In 1795 it went to Dover Castle, spending May in camp at Hythe, returning to\ \ Canterbury in October with the companies in billets across north Kent. The regiment\ \ was then moved to billets around Greenwich and Deptford in November as part\ \ of a concentration round London to prevent disorder. In the spring of 1796 detachments\ \ were marched through Surrey before returning to Greenwich, then in June the\ \ regiment crossed to Warley Camp before going into winter quarters at Chelmsford.\n\ \nLancashire's militia quota set in 1760 was small in proportion to its population,\ \ which soared during the Industrial Revolution. By 1796 it represented only one\ \ man in every 43 of those eligible. But in that year an additional ballot was\ \ carried out to raise men for the 'Supplementary Militia' to reinforce the standing\ \ militia regiments and to form additional temporary regiments. Lancashire's quota\ \ was increased to five regiments, and on 1 March 1797 the RLM was ordered to\ \ send a party to Lancaster to begin training them. Although recruitment of such\ \ large numbers became difficult, the 1st Royal Lancashire Supplementary Militia\ \ was raised on 1 March 1797 at Liverpool under the personal command of the 13th\ \ Earl of Derby as lord lieutenant. On 17 August 1798 it was placed on a permanent\ \ footing as the 2nd Royal Lancashire Militia (2nd RLM), after which the 'Old\ \ County Regiment' became the 1st Royal Lancashire Militia (1st RLM).\n\nIn March\ \ 1797 the 1st RLM was scattered across villages north of London, but on 11 April\ \ it was ordered to Plymouth, where it was quartered at the Maker Redoubts overlooking\ \ Plymouth Sound for the rest of the year. By the end of the year, with so many\ \ senior officers in parliament and the parties away training the supplementary\ \ militia, the strength of the regiment at Plymouth was down to about 400 men,\ \ under the command of the senior captain. Two of the companies may have been\ \ organised and equipped as rifle companies at this time.\n\nIrish Rebellion\n\ In March 1798 legislation was passed to allow the militia to volunteer for service\ \ in Ireland, where a Rebellion had broken out. The 1st Royal Lancashire Militia\ \ immediately volunteered, and the regiment was recruited to full strength (1200\ \ men) from the supplementary militia to replace the time-expired men. The contractors\ \ having failed to provide enough uniforms in time, the 136 time-expired men were\ \ stripped of their uniforms, hats and boots to clothe the recruits, leading to\ \ a serious complaint to the War Office about their treatment. The recruits arrived\ \ at Plymouth from Lancashire and the regiment embarked at the end of June. But\ \ the news from Ireland having improved the voyage was cancelled and the regiment\ \ returned to camp on Maker Heights. It was not until the end of August that the\ \ 1st RLM embarked again as part of a militia brigade in response to the French\ \ intervention in Ireland. The regiment landed at Ballyhack in Waterford Harbour\ \ on 11 September and then marched to New Ross, preparatory to moving north. However,\ \ the French expedition had already been defeated at the Battle of Ballinamuck,\ \ and the follow-up expedition was defeated at sea without landing. When the regiment\ \ reached Clonmel on 21 October the rebellion was effectively over. The regiment\ \ went into winter quarters but guard and picket duties heavy while the area was\ \ still in disorder.\n\nWith the end of the Irish Rebellion the government encouraged\ \ militiamen to volunteer for the regular army: the 1st RLM was one of a number\ \ of regiments that offered to serve abroad as a complete unit. However the legislation\ \ did not allow for this and the offer was declined, though Col Stanley encouraged\ \ his men to volunteer as individuals, and some 350 did so, over 150 joining the\ \ 20th Foot (later the Lancashire Fusiliers). Meanwhile, the trials of the rebels\ \ were continuing, and in May 1799 the militia brigade at Clonmel was put on alert\ \ to march at short notice in case of trouble, or of another French landing. In\ \ September, after a year's service in Ireland, the 1st RLM prepared to embark\ \ for England. Before departure one whole company, about 100 strong, recruited\ \ from Bolton and its neighbourhood, volunteered to transfer to the 36th Foot.\ \ The reduced regiment – about 560 other ranks (ORs) – embarked from Waterford\ \ on 9 October, landing at Bristol on 12 October. It rested at Tetbury and then\ \ on 21 October it began its march back to Lancashire. On arrival at Preston on\ \ 6 November the regiment was ordered to disembody.\n\nThe supplementary militia\ \ having been abolished, the remaining balloted men in Lancashire were distributed\ \ to the 1st, 2nd and 3rd RLM to fill vacancies – the officers of the 1st RLM\ \ complaining about the quality of the men they were assigned. The regiment completed\ \ disembodiment on 28 December 1799. It was called out again for training 5 August\ \ 1801, assembling at Lancaster (now its permanent HQ). A few days later it was\ \ informed that it would be embodied for active service again at the end of the\ \ training. On 26 September it began the march to its new station of Tynemouth\ \ Castle. On arrival, with the newly balloted men, it had a strength of 900 ORs.\ \ The Peace of Amiens was signed on 27 March 1802, and on 1 April the regiment\ \ was ordered to march back to Lancaster to disembody once more, apart from the\ \ small permanent staff.\n\nNapoleonic Wars\nThe Peace of Amiens was short-lived,\ \ and the militia was called out again on 1 April 1803. After establishing a depot\ \ at Lancaster to train the newly balloted men the 1st RLM marched on 23 May to\ \ join the encampment at Danbury, Essex, under the command of Lt-Col John Plumbe,\ \ Col Stanley being unwell. The recruits followed from Lancaster on 20 July, bringing\ \ the regiment up to full strength of 1200 men in 12 companies. It remained at\ \ Danbury Camp until August 1804, when it was transferred to Brabourne Lees Camp\ \ in Kent, and then in June 1805 to Portsmouth. In August and September 1805 the\ \ 1st RLM was at Weymouth, Dorset, while the royal family was in residence, then\ \ in October moved to Exeter and the surrounding villages, where it spent the\ \ winter. In the spring it returned to Weymouth where it trained the newly balloted\ \ men, who replaced those time-expired and those who had volunteered for the regulars\ \ (one whole company had done so). It returned to Exeter for the winter of 1806,\ \ staying there and at Stonehouse Barracks, Plymouth, until May 1809. At that\ \ time it was ordered to Tavistock and then to Bristol, detaching 100 men to embark\ \ at Ilfracombe to sail to Milford Haven and Haverfordwest to reinforce the garrison\ \ there. The detachment rejoined HQ at Bristol in June, and the regiment stayed\ \ there until March 1811. During 1810 it had recruiting parties detached to Bolton,\ \ Manchester, Preston and Wigan. On 8 March 1811 the 1st RLM was ordered to march\ \ from Bristol to Hull; however on 25 March it was diverted en route to deal with\ \ Luddite disturbances that had broken out at Nottingham. It was ordered to resumed\ \ its march to Hull Barracks on 22 April. In October it was sent to Berwick-upon-Tweed\ \ and Tweedmouth, with detachments at Eyemouth and Holy Island. In March 1812\ \ it moved into Scotland, to Dunbar and Haddington, and then to Dalkeith. It remained\ \ there, with occasional detachments to Penicuik where there was a large Prisoner-of-war\ \ camp to be guarded, until December 1814.\n\nThe militia had become one of the\ \ biggest sources of recruits to the regular army, and the 1st RLM was expected\ \ to supply a quota of 100 volunteers each year, rising to a draft or 244 men\ \ in February 1814. Colonel Plumbe also volunteered the whole regiment for service\ \ in Ireland, and roughly half the men agreed to extend their service accordingly.\ \ In March 1814 this body (12 officers and about 340 ORs) embarked at Portpatrick\ \ for Donaghadee, from where it marched to Belfast and then Athlone, arriving\ \ on 14 June. Napoleon had abdicated in April and peace was declared on 30 May,\ \ but the 1st RLM had still not been disembodied in February 1815 when he escaped\ \ from Elba and the war was resumed. The three regiments of Lancashire Militia,\ \ which happened to be stationed together at Dublin, were allowed to recruit back\ \ to full strength by ballot and 'by beat of drum'. They also provided drafts\ \ of around 1000 volunteers to the regular regiments being sent to Belgium. The\ \ 1st RLM supplied 23 NCOs and men to the 1st Foot Guards, and 11 each to the\ \ 33rd Foot and 71st (Highland) Light Infantry, with individuals to other regiments.\ \ There is a story that many of the Guardsmen at the Battle of Waterloo were still\ \ wearing their Militia uniforms.\n\nWaterloo ended the war, but much of the regular\ \ army remained in France as part of the Army of Occupation for several months,\ \ and the Lancashire Militia continue their garrison duty at Dublin. The 1st RLM\ \ now being very weak, drafts of balloted men continued to be despatched from\ \ Lancaster until February 1816, when it was finally ordered to return for disembodiment.\ \ It embarked from Dublin on 25 March and landed at Liverpool, arriving at Lancaster\ \ on 5 April and being disembodied on 15 April.\n\nLong peace\nMilitia training\ \ was suspended in most years after Waterloo, but the 1st RLM was called out for\ \ its 28 days' training in 1821, 1825 and 1831. Balloting continued, but the permanent\ \ staff was progressively reduced over the years. Just before the 1831 training\ \ King William IV bestowed on the three Lancashire Militia Regiments the additional\ \ title The Duke of Lancaster's Own. No further militia training took place for\ \ the next 21 years. Although officers continued to be appointed to fill vacancies\ \ the ballot was suspended.\n\n1852 reforms\nThe Militia of the United Kingdom\ \ was revived by the Militia Act of 1852, enacted during a period of international\ \ tension. As before, units were raised and administered on a county basis, and\ \ filled by voluntary enlistment (although conscription by means of the Militia\ \ Ballot might be used if the counties failed to meet their quotas). Training\ \ was for 56 days on enlistment, then for 21–28 days per year, during which the\ \ men received full army pay. Under the Act, Militia units could be embodied by\ \ Royal Proclamation for full-time service in three circumstances:\n 1. 'Whenever\ \ a state of war exists between Her Majesty and any foreign power'.\n 2. 'In all\ \ cases of invasion or upon imminent danger thereof'.\n 3. 'In all cases of rebellion\ \ or insurrection'.\n\nIn the case of the 1st RLM some younger officers were appointed,\ \ including John Talbot Clifton of Lytham Hall, formerly of the 1st Life Guards,\ \ as colonel, together with new permanent staff officers and regular army NCOs,\ \ and the revived regiment was called out for its first 21 day training on 8 November\ \ 1852. The staff NCOs and the few experienced officers had their hands full when\ \ the special trains brought the 500 undisciplined recruits from Bolton and Manchester,\ \ but had made good progress after three weeks' drilling on Giant Axe Field. The\ \ officers' mess now adopted the traditional Lancashire form of the Loyal toast:\ \ 'The Queen, Duke of Lancaster', which the regiment kept thereafter.\n\nCrimean\ \ War\nIn May 1853, in view of the worsening international situation, the government\ \ ordered the lord lieutenant (the Earl of Sefton) to recruit the three Lancashire\ \ militia regiments up to their full strengths of 1200 each. The 1st RLM was called\ \ out for 28 day's annual training on 24 May, in which the staff were assisted\ \ by drill sergeants from the 50th Foot stationed nearby at Preston.\n\nWar having\ \ broken out with Russia in March 1854 and an expeditionary force sent to the\ \ Crimea, the Militia were called out for home defence. The 1st RLM assembled\ \ at Lancaster on 24 May for 28 days' training before embodiment. Colonel Clifton\ \ had already offered the regiment for overseas service – the first such offer\ \ made in this war by a militia regiment – and the government accepted a body\ \ of 500 men. On 16 June the regiment divided, 500 men for the service companies,\ \ the other 700 dismissed to their homes until further notice. The service battalion\ \ travelled by train to Deptford Dockyard, moving on 16 July to Portsmouth. In\ \ September, training began with the new Enfield rifled musket. In November there\ \ was a call to reinforce the army in the Crimea, and 250 men from the service\ \ companies of the 1st RLM volunteered. It was not until December that Parliament\ \ passed Acts allowing whole militia regiments to volunteer, and recalling out\ \ the men who had been disembodied in order to fill the vacancies.\n \nThe regiment\ \ now prepared to embark for the Ionian Islands (then a British protectorate)\ \ to release the garrison to fight in the Crimea. The men who had not volunteered\ \ or were unfit for overseas service were formed into a regimental depot at Fort\ \ Cumberland, Portsmouth. The depot returned to Lancaster on 1 March 1855, and\ \ the service companies embarked on the transport Calcutta two days later. It\ \ sailed on 4 March and they disembarked at Corfu on 16 March, taking up quarters\ \ in the Citadel Barracks, with detachments on the islands of Fano, Paxo and Santa\ \ Maura. Its first task was to send the Grenadier Company on 20 March to suppress\ \ a riot on Vido among the convalescent soldiers from the Crimea. On 15 May the\ \ bulk of the regiment re-embarked for Zante, leaving detachments on Santa Maura,\ \ Cerigo and Cephalonia. In September there was a cholera outbreak at Zante, and\ \ in two weeks the regiment lost one officer, two NCOs and 275 men dead, and 54\ \ invalided home. Two drafts of reinforcements arrived from the depot at Lancaster,\ \ 150 men on 25 November and 250 more on 15 January 1856. The Grenadier Company\ \ at Santa Maura had been unaffected by cholera, and was chosen to go to the Crimea\ \ to reinforce the army for its projected operations following the fall of Sevastopol\ \ in September 1855 (the only militia unit accepted). However, there were no further\ \ operations and the war ended on 30 March 1856 before the company had left the\ \ islands. The 1st RLM embarked on the troopship Colombo on 21 May, but its passage\ \ was delayed when the ship ran aground at Argostoli Bay, where it had gone to\ \ pick up the Grenadier Company. The ship was deemed to be overcrowded, and two\ \ companies were left at Malta to follow by a later steamer. The main body reached\ \ Portsmouth on 3 June, and went by trains to Lancaster on 8 and 9 June. The two\ \ companies from Malta were not disembodied until 16 July. After the regiment\ \ was disembodied it was awarded the Battle honour Mediterranean for its service.\n\ \nFurther militia regiments had been raised in Lancashire after 1852, bringing\ \ the total to seven of infantry and one of artillery. Each had its own recruiting\ \ areas across the county, those of the 1st RLM being Bolton (Great and Little),\ \ Fylde, Lancaster and Manchester. During the Crimean War the depot of the 1st\ \ RLM built a barracks on Windy Hill at Lancaster for 200 men and a storehouse\ \ with a parade ground for 800 men later known as Springfield Barracks. Plans\ \ to convert some old warehouses at St Georges Quay were scrapped when the war\ \ ended. Annual training for the 1st RLM resumed in 1857. It was usually held\ \ on Giant Axe Field, but at Ulverston when camp coincided with elections in Lancaster.\ \ In some years a joint field day was held with one of the Lancashire Rifle Volunteer\ \ Corps during annual training. From 1876 the regiment adopted the practice of\ \ camping at Scale Hall field, about from Lancaster, during its annual training.\n\ \nCardwell reforms\n\nUnder the 'Localisation of the Forces' scheme introduced\ \ by the Cardwell Reforms of 1872, Militia regiments were brigaded with their\ \ local regular and Volunteer battalions – for the 1st RLM this was with the 4th\ \ (King's Own) Regiment of Foot in Sub-District No 11 (County of Lancaster). The\ \ Militia now came under the War Office rather than their county lords lieutenant,\ \ and officers' commissions were signed by the Queen.\n\nAlthough often referred\ \ to as brigades, the sub-districts were purely administrative organisations,\ \ but in a continuation of the Cardwell Reforms a mobilisation scheme began to\ \ appear in the Army List from December 1875. This assigned regular and militia\ \ units to places in an order of battle of corps, divisions and brigades for the\ \ 'Active Army', even though these formations were entirely theoretical, with\ \ no staff or services assigned. The 1st, 2nd and 3rd Royal Lancashire Militia\ \ formed 1st Brigade of 3rd Division, VI Corps. The brigade would have mustered\ \ at Manchester in time of war.\n\nThe Hon Frederick Stanley, MP, formerly captain\ \ in the Grenadier Guards, was appointed lieutenant-colonel commandant of the\ \ regiment (later of the 1st Battalion) on 23 June 1874, the rank of colonel in\ \ the militia having been abolished. He was also Financial Secretary to the War\ \ Office from 1874 to 1877, and Secretary of State for War 1878–80, which meant\ \ that he was often absent during training.\n\nCardwell's localisation scheme\ \ provided for the regular and militia regiments to be linked in pairs, sharing\ \ a single permanent depot. The 4th (King's Own) already had two battalions; the\ \ 1st RLM split to form its own second battalion on 26 September 1877, each being\ \ initially of six companies. A new regimental depot, Bowerham Barracks, was built\ \ at Lancaster between 1876 and 1880.\n\nMilitia battalions now had a large cadre\ \ of permanent staff (about 30). Around a third of the recruits and many young\ \ officers went on to join the regular army. In addition, the Militia Reserve\ \ introduced in 1867 consisted of present and former militiamen who undertook\ \ to serve overseas in case of war. During the international crisis caused by\ \ the Russo-Turkish War in 1877, the 1st RLM offered its service and was informed\ \ that it might be embodied for garrison duty. In the event the militia was not\ \ embodied, but the regular and militia reserves were called out the following\ \ year, those belonging to Sub-District No 11 assembling at Lancaster on 3 April.\ \ On 22 April they entrained to join the depot of the 4th (King's Own) at the\ \ Portsdown Hill Forts, where they served until 30 July when they were dismissed\ \ to heir homes.\n\n3rd and 4th Battalions, King's Own (Royal Lancaster Regiment)\n\ The Childers Reforms of 1881 took Cardwell's reforms further, with the linked\ \ regular and militia regiments becoming single county regiments. In the case\ \ of the Lancaster district this was the King's Own (Royal Lancaster Regiment)\ \ ('The King's Own') of four battalions: the 1st and 2nd were the regulars, while\ \ the 1st Royal Lancashire Militia (The Duke of Lancaster's Own) became the 3rd\ \ and 4th Bns, together with affiliated Volunteer Force battalions. As the regimental\ \ history put it, the 1st and 2nd Bns King's Own had amalgamated with the 1st\ \ and 2nd Bns Duke's Own. The two militia battalions continued to be administered\ \ as a single double-battalion regiment until 1 August 1900.\n\nIn 1882 the 3rd\ \ and 4th Battalions began their annual training at Lancaster on 3 July, but at\ \ the end of the month their training was extended for 56 days, embodying them\ \ for garrison duty during the crisis surrounding the Anglo-Egyptian War. Both\ \ battalions entrained for Preston on 31 July, and went to Fulwood Barracks, which\ \ were grossly overcrowded by the arrival of their 12 companies in addition to\ \ the reservists of the regular regiment stationed there. The two battalions returned\ \ to Lancaster on 26 August to be disembodied.\n\nSecond Boer War\nAfter the disasters\ \ of Black Week at the start of the Second Boer War in December 1899, most of\ \ the regular army was sent to South Africa, and many militia units were embodied\ \ to replace them for home defence and to garrison certain overseas stations.\ \ The 4th Bn King's Own was embodied on 13 December 1899 and the 3rd Bn on 23\ \ January 1900. Both battalions volunteered for overseas service.\n\nThe 4th Battalion\ \ left first, embarking with a strength of 25 officers and 666 ORs under the command\ \ of Lt-Col W. Kemmis and landing at Cape Town on 1 February 1900. It proceeded\ \ to the advanced base at Naauwpoort and was employed on the lines of communication\ \ with detachments guarding towns, bridges and culverts between Norvalspont and\ \ Port Elizabeth, Graaff-Reinet and Hanover Road. In August 1900 a column consisting\ \ of 200 men of the battalion and 40 of Nesbitt's Horse carried out a demonstration\ \ through the disaffected district of Hanover. On 30 December the Boers attacked\ \ and burned a train at the 'Gates of Hell' about from Naauwpoort: two companies\ \ of the battalion only arrived in time to exchange a few shots with the retiring\ \ enemy. In December, Lt-Col Kemmis was appointed commandant of Naauwpoort. On\ \ 23 February 1901 2nd Lt Hunt with 30 men guarding the Fish River bridge and\ \ station successfully held off Commandant Kritzinger and about 250 Boers for\ \ four hours before the armoured train came to their assistance and drove off\ \ the Boers. On 7 March Capt Worsley Taylor with 40 men of the 4th Bn and about\ \ 60 Mounted infantry (MI) was attacked by a superior force while repairing the\ \ Colesberg–Philippolis telegraph line. Taylor and his men took up a defensive\ \ position on a Kopje and held it for 24 hours until a relief column arrived from\ \ Colesberg. On 29 May Battalion HQ moved to Norvalspont and the battalion occupied\ \ the northern bank of the Orange River. Finally, it concentrated at De Aar on\ \ 5 July preparatory to embarking for home. During the campaign the battalion\ \ lost one officer and 21 ORs killed or died of disease. The 4th Bn was disembodied\ \ on 3 August 1901. It was awarded the battle honour South Africa 1900–01, and\ \ the officers and men received the Queen's South Africa Medal with the clasps\ \ 'Cape Colony', 'Orange Free State', and 'South Africa 1901'.\n\nThe 3rd Bn embarked\ \ for South Africa with a strength of 25 officers and 686 ORs under the command\ \ of Col B.N. North. It landed at Cape Town on 1 March 1900 and was deployed along\ \ the lines of communication in Orange River Colony, with Battalion HQ and three\ \ companies guarding the important railway bridge and supply depot at Zand River\ \ Bridge. They were attacked on 14 March by a Boer force that included artillery,\ \ driving them off after a day's fighting. The battalion also supplied an MI company\ \ that took part in the action at Ventersburg with a column under Col North operating\ \ with armoured trains. This force obliged the Boers to abandon their position\ \ at Zeegatacht, near Brandfort, on 16 January 1901, and North with the MI and\ \ armoured train drove them from Huten Beck on 28 January. At this time the rest\ \ of the battalion was holding the blockhouse line and railway from Kroonstad\ \ to Bloemfontein, driving off several attacks. In October 1901 the battalion\ \ was divided into several detachments that engaged Theron's Commando around Ceres.\ \ The battalion re-assembled on 10 January 1902 to embark for England, where it\ \ was disembodied on 8 February 1902. During the campaign the battalion had lost\ \ 51 ORs killed or died of disease. It was awarded the battle honour South Africa\ \ 1900–02, the Queen's South Africa Medal with the clasps 'Cape Colony' and 'Orange\ \ Free State', and the King's South Africa Medal with the clasps 'South Africa\ \ 1901' and 'South Africa 1902', and Lt-Col North was awarded a Companionship\ \ of the Order of the Bath (CB).\n\nSpecial Reserve\nAfter the Boer War, the future\ \ of the Militia was called into question. There were moves to reform the Auxiliary\ \ Forces (Militia, Yeomanry and Volunteers) to take their place in the six Army\ \ Corps proposed by the Secretary of State for War, St John Brodrick. However,\ \ little of Brodrick's scheme was carried out. Under the more sweeping Haldane\ \ Reforms of 1908, the militia was replaced by the Special Reserve, (SR) a semi-professional\ \ force whose role was to provide reinforcement drafts for Regular units serving\ \ overseas in wartime, rather like the earlier Militia Reserve. The 3rd Battalion\ \ became the 3rd (Reserve) Battalion, King's Own, on 19 July 1908, but the 4th\ \ Bn was disbanded on 31 August.\n\nWorld War I\n\nOn the outbreak of war on 4\ \ August 1914 the battalion was embodied at Lancaster under Lt-Col J.M.A. Graham.\ \ It then moved to its war station at Saltash, Cornwall, for a few days before\ \ the bulk of the battalion moved to Sunderland. It probably helped to organise\ \ the 10th (Reserve) Battalion, King's Own, from Kitchener's Army volunteers,\ \ when that was formed at Saltash in October 1914. From 1915 to 1917 the 3rd Bn\ \ was at Plymouth, but by November 1917 it had moved to Harwich. As well as forming\ \ part of the Plymouth and Harwich Garrisons, the battalion's role was to train\ \ and despatch drafts of reservists, special reservists, recruits and returning\ \ wounded for the regular battalions. The 1st King's Own served on the Western\ \ Front, while the 2nd Bn returned from India and after a few months on the Western\ \ Front spent the rest of the war on the Macedonian Front. \n\nThousands of men\ \ for the regular battalions would have passed through the ranks of the 3rd Bn\ \ during the war. It was disembodied on 30 July 1919, when the remaining personnel\ \ were drafted to the 1st Bn.\n\nPostwar\nThe SR resumed its old title of Militia\ \ in 1921 and then became the Supplementary Reserve in 1924, but like most militia\ \ battalions the 3rd King's Own remained in abeyance after World War I. By the\ \ outbreak of World War II in 1939, no officers remained listed for the battalion.\ \ The militia was formally disbanded in April 1953.\n\nCommanders\nThe following\ \ officers commanded the regiment as Colonel, as Honorary Colonel, or served as\ \ Lt-Col Commandant of one of its battalions:\n William Stanley, 9th Earl of Derby\ \ appointed 1689\n Philip Hoghton, appointed 1 June 1715\n Edward Stanley, 11th\ \ Earl of Derby appointed 25 October 1745\n James Smith-Stanley, Lord Strange,\ \ appointed 15 July 1760, died 1 June 1771\n Edward Smith-Stanley, 12th Earl of\ \ Derby appointed 14 February 1772, resigned\n Thomas Stanley of Cross Hill, MP,\ \ appointed 28 October 1783, died 26 December 1816\n Peter Patten Bold, appointed\ \ 8 June 1817, died 1819\n John Plumbe-Tempest, promoted 4 November 1819, resigned\ \ 1852\n John Talbot Clifton, formerly 1st Life Guards, appointed 2 October 1852,\ \ resigned 1870\n William Assheton Cross, promoted 8 December 1870, appointed\ \ Hon Col 13 May 1871\n Robert Whitle, appointed 31 May 1872.\n Frederick Stanley,\ \ 16th Earl of Derby, KG, GCB, GCVO, Lt-Col Commandant, 1st Bn, 23 June 1874;\ \ appointed Hon Col 27 February 1886, died 14 June 1908\n Thomas Dawson Sheppard,\ \ Lt-Col Commandant, 2nd Bn, 26 September 1877\n George Blucher Heneage Marton,\ \ 20 March 1886, Lieutenant-Colonel Commandant, commanding 3rd Battalion.\n Joseph\ \ Lawson Whalley, 26 November 1887, commanding 4th Battalion\n B.N. North, CB,\ \ MVO, former Lt-Col Commandant, 3rd Bn, appointed Hon Col 19 July 1908\n\nUniforms\ \ & Insignia\nThe uniform of the Royal Lancashire Militia was red with the blue\ \ facings appropriate to 'Royal' regiments. The regimental colour presented in\ \ 1761 was blue and bore the coat of arms of the Duchy of Lancaster (on a shield\ \ gules, three lions of England (passant gardant) or, in chief a label azure of\ \ three points, each charged with three fleur-de-lis of France). The regimental\ \ colour presented by Queen Charlotte at Weymouth in 1806 simply carried the words\ \ 'FIRST ROYAL LANCASHIRE MILITIA' surrounded by a wreath of roses, thistles and\ \ shamrocks.\n\nAs a reward for its service in Ireland in 1798 the badge of the\ \ 'Harp and Crown' was bestowed on the regiment, and the 'Red Rose of Lancaster'\ \ in 1803. The set of colours believed to have been presented by the Lord Lieutenant\ \ of Ireland when the regiment was stationed in Dublin in 1816 bore the harp in\ \ the centre of the King's colour and the crowned red rose with 'LANCASTER' in\ \ Old English script in the three outer corners of the regimental colour. The\ \ colonel's wife, Mrs Clifton, presented new colours to the reformed regiment\ \ in 1853 and again in 1870 after the regulation size of colours was made smaller.\ \ The regimental colour bore a red rose inside a circle with the words 'DUKE OF\ \ LANCASTER'S OWN' surrounded by a wreath of roses, thistles and shamrocks. Above\ \ was a crown, below were the Roman numeral 'I' and two scrolls, the upper saying\ \ 'ROYAL LANCASHIRE MILITIA', the lower the battle honour 'MEDITERRANEAN'; the\ \ crown, numeral and upper scroll also appeared on the Queen's colour. The smaller\ \ 1870 colours were similar, but the numeral I had disappeared and the scroll\ \ now read '1. ROYAL LANCASHIRE MILITIA'. Lady Constance Stanley presented the\ \ 2nd Bn's colours in 1880: the design was the same, but the lettering on the\ \ scrolls was 'First Royal Lancashire Militia, 2nd Battalion, Mediterranean',\ \ which was repeated in black on a yellow ground in the centre of the Queens colour.\n\ \nAbout 1790 the buttons had the letters 'RL' inside a crowned star; the figure\ \ '1' was added above the letters after the creation of the 2nd RLM, and these\ \ buttons were retained until 1829. The officers' shako plate in 1812–16 consisted\ \ of the stylised cipher 'GR' above an enamelled red rose, with a silver spray\ \ of leaves beneath and the numeral '1' at the bottom, the whole plate a highly\ \ stylised escutcheon topped with a crown. The ORs' plate was plain brass, the\ \ word 'LANCASTER\" appearing between the cipher and rose, and no numeral at the\ \ bottom. The cap badge of 1852 was circular, with 'LANCASTER' in Old English\ \ lettering above a red rose, a spray of leaves below; the officer's belt plate\ \ carried this badge without the spray of leaves but surmounted by a crown, on\ \ a decorated star. The OR's Glengarry badge of 1874–81 had the royal crest (a\ \ crowned lion statant gardant on a crown) over the red rose within a spray of\ \ grass, with a scroll underneath inscribed 'THE DUKE OF LANCASTER'S OWN'.\n\n\ In 1881 the regiment combined the insignia of the King's Own and the Duke's Own,\ \ with the Red Rose of Lancaster surmounted by the Lion of England. Later this\ \ was replaced by the lion over the words 'KING'S OWN'.\n\nPrecedence\nIn September\ \ 1759 it was ordered that militia regiments on service were to take their relative\ \ precedence from the date of their arrival in camp. In 1760 this was altered\ \ to a system of drawing lots where regiments did duty together. During the War\ \ of American Independence all the counties were given an order of precedence\ \ determined by ballot each year, beginning in 1778. For the Lancashire Militia\ \ the positions were:\n 38th on 1 June 1778\n 43rd on 12 May 1779\n 30th on 6\ \ May 1780\n 12th on 28 April 1781\n 32nd on 7 May 1782\n\nThe militia order of\ \ precedence balloted for in 1793 (when Lancashire was 37th) remained in force\ \ throughout the French Revolutionary War: this covered all the regiments formed\ \ in the county. Another ballot for precedence took place at the start of the\ \ Napoleonic War, when Lancashire was 52nd. This order continued until 1833. In\ \ that year the King drew the lots for individual regiments and the resulting\ \ list remained in force with minor amendments until the end of the militia. The\ \ regiments raised before the peace of 1763 took the first 47 places: the 1st\ \ RLM was 45th. Formally, the regiment became the 45th, or 1st Royal Lancashire\ \ Militia, but the 1st RLM like most regiments seems to have paid little attention\ \ to the additional number.\n\nSee also\n Militia (English)\n Militia (Great Britain)\n\ \ Militia (United Kingdom)\n Special Reserve\n Lancashire Militia\n King's Own\ \ Royal Regiment (Lancaster)\n\nFootnotes\n\nNotes\n\nReferences\n\n W.Y. Baldry,\ \ 'Order of Precedence of Militia Regiments', Journal of the Society for Army\ \ Historical Research, Vol 15, No 57 (Spring 1936), pp. 5–16.\n Ian F.W. Beckett,\ \ The Amateur Military Tradition 1558–1945, Manchester: Manchester University\ \ Press, 1991, .\n Burke's Peerage, Baronetage and Knightage, 100th Edn, London,\ \ 1953.\n W.Y. Carman, 'Militia Uniforms 1780', Journal of the Society for Army\ \ Historical Research, Vol 36, No 147 (September 1958), pp. 108–9.\n Col John\ \ K. Dunlop, The Development of the British Army 1899–1914, London: Methuen, 1938.\n\ \ Cross Fleury, Time-Honoured Lancaster: Historic Notes on the Ancient Borough\ \ of Lancaster, Lancaster: Eaton & Bulfield, 1891.\n Sir John Fortescue, A History\ \ of the British Army, Vol I, 2nd Edn, London: Macmillan, 1910.\n Sir John Fortescue,\ \ A History of the British Army, Vol II, London: Macmillan, 1899.\n Sir John Fortescue,\ \ A History of the British Army, Vol III, 2nd Edn, London: Macmillan, 1911.\n\ \ Sir John Fortescue, A History of the British Army, Vol IV, Pt II, 1789–1801,\ \ London: Macmillan, 1906.\n J.B.M. Frederick, Lineage Book of British Land Forces\ \ 1660–1978, Vol I, Wakefield: Microform Academic, 1984, .\n Lt-Col James Moncrieff\ \ Grierson (Col Peter S. Walton, ed.), Scarlet into Khaki: The British Army on\ \ the Eve of the Boer War, London: Sampson Low, 1899/London: Greenhill, 1988,\ \ .\n H.G. Hart, The New Annual Army List (various dates).\n Col George Jackson\ \ Hay, An Epitomized History of the Militia (The Constitutional Force), London:United\ \ Service Gazette, 1905/Ray Westlake Military Books, 1987, .\n Richard Holmes,\ \ Soldiers: Army Lives and Loyalties from Redcoats to Dusty Warriors, London:\ \ HarperPress, 2011, .\n Brig E.A. James, British Regiments 1914–18, Samson Books\ \ 1978/Uckfield: Naval & Military Press, 2001, .\n Roger Knight, Britain Against\ \ Napoleon: The Organization of Victory 1793–1815, London: Allen Lane, 2013/Penguin,\ \ 2014, .\n H.G. Parkyn, 'English Militia Regiments 1757–1935: Their Badges and\ \ Buttons', Journal of the Society for Army Historical Research, Vol 15, No 60\ \ (Winter 1936), pp. 216–248.\n Edward M. Spiers, The Army and Society 1815–1914,\ \ London: Longmans, 1980, .\n Edward M. Spiers, The Late Victorian Army 1868–1902,\ \ Manchester: Manchester University Press, 1992/Sandpiper Books, 1999, .\n Katherine\ \ Thomasson & Francis Buist, Battles of the '45, London: Batsford 1962/Pan 1967.\n\ \ J.R. Western, The English Militia in the Eighteenth Century, London: Routledge\ \ & Kegan Paul, 1965.\n Maj R.J.T. Williamson & Col J. Lawson Whalley, History\ \ of the Old County Regiment of Lancashire Militia, London: Simpkin, Marshall,\ \ 1888.\n\nExternal sources\n British History Online\n Electric Scotland\n King's\ \ Own Royal Regiment Museum, Lancaster\n Lancashire Infantry Museum\n Lancashire\ \ Record Office, Handlist 72 Archived from the original\n Museum of the Manchester\ \ Regiment\n Richard A. Warren, This Re-illuminated School of Mars: Auxiliary\ \ forces and other aspects of Albion under Arms in the Great War against France\n\ \nLancashire Militia\nLancashire\nMilitary units and formations in Lancashire\n\ Military units and formations in Lancaster, Lancashire\nMilitary units and formations\ \ established in 1661\nMilitary units and formations disestablished in 1881" - "João Pedro Coelho Marinho de Sousa (born 30 March 1989), known as João Sousa\ \ (), is a Portuguese professional tennis player. He is currently ranked world\ \ No. 137 by the Association of Tennis Professionals (ATP). Continuously ranked\ \ in the world's top-100 between July 2013 and March 2021, and with four ATP Tour\ \ singles titles, Sousa is often regarded as the best Portuguese tennis player\ \ of all time. He is nicknamed Conquistador (Portuguese for \"Conqueror\") for\ \ sharing his birthplace of Guimarães with Afonso I, the country's first king.\ \ Sousa is coached by former player Frederico Marques and practices at the BTT\ \ Tennis Academy in Barcelona.\n\nSousa began playing tennis at the age of seven.\ \ After winning national youth titles, he decided at the age of fifteen to invest\ \ in his career by moving to Barcelona. After an unimpressive junior career, Sousa\ \ turned professional in 2008 and won his first singles tournament in 2009. He\ \ started playing in the ATP Challenger Tour in 2008, winning his first tournament\ \ at this level in 2011. Sousa debuted in the top-level ATP World Tour in 2008,\ \ and rose to prominence at the 2013 Malaysian Open, where he became the first\ \ Portuguese player to win a World Tour-level singles tournament.\n\nSousa holds\ \ several Portuguese men's tennis records. In October 2013, he ranked 49th in\ \ the world after his victory at the Malaysian Open, becoming the first Portuguese\ \ player to break into the singles top 50. In November 2015, Sousa reached a career-high\ \ and Portuguese-best ranking of world  33, following his second ATP World Tour\ \ singles title at the Valencia Open. In May 2016, he improved his personal ranking\ \ best, becoming the first Portuguese player to enter the top 30, as a result\ \ of reaching his first Masters 1000 quarter-finals in Madrid. In 2014, he was\ \ the first Portuguese player to compete exclusively at the ATP World Tour in\ \ a single season; the first to be seeded in a Grand Slam tournament (2014 US\ \ Open); and the second to reach the quarterfinals in a Grand Slam event (2015\ \ US Open doubles). Sousa is the fourth Portuguese player to reach the singles\ \ top 100, and the second to do so in both singles and doubles rankings, after\ \ Nuno Marques. He is also the Portuguese player with the largest career prize\ \ money, and the most wins at Grand Slam singles tournaments.\n\nEarly and personal\ \ life \nJoão Sousa was born on 30 March 1989 in Guimarães, Portugal, to Armando\ \ Marinho de Sousa, a judge and amateur tennis player, and Adelaide Coelho Sousa,\ \ a bank clerk. Sousa has a younger brother named Luís Carlos. At age seven, Sousa\ \ began playing tennis with his father at a local club. In 2001, he won the national\ \ under-12 singles title, beating future Davis Cup partner Gastão Elias in the\ \ semifinals, and was runner-up in doubles. In 2003, he partnered with Elias to\ \ win the national under-14 doubles title. Sousa also played football at local\ \ clubs Vitória de Guimarães – of which he is a keen supporter – and Os Sandinenses\ \ until the age of 14, when he decided to give up on football and the goal of\ \ studying medicine to pursue a professional tennis career. He briefly joined\ \ the National Tennis Training Center in Maia until he was forced to leave after\ \ its closure.\n\nIn September 2004, aged 15, Sousa moved to Barcelona, Spain,\ \ to attend a boarding school and join the Catalan Tennis Federation. A year later,\ \ he joined the BTT Tennis Academy, which was recommended to him by former member\ \ and countryman Rui Machado. He was first coached by Álvaro Margets, under the\ \ supervision of one of his mentors, Francisco Roig. At the academy, he met and\ \ shared a flat with his future coach, Frederico Marques. Sousa continues to practice\ \ at BTT, even after joining the ATP Tour.\n\nDuring his youth, Sousa's idols\ \ were Pete Sampras, Juan Carlos Ferrero, and Roger Federer. He is fluent in Portuguese,\ \ Spanish, Catalan, as well as English, French and Italian. Since 2008, Sousa\ \ has been dating Júlia Villanueva, whom he met during his training in Barcelona.\n\ \nTennis career\n\nPre-2008: Junior years\nSousa made his debut in a junior tournament\ \ in August 2004 at the Grade 4 Taça Diogo Nápoles in Porto, reaching the semifinals.\ \ His first junior doubles title came in April 2005 at a Grade 4 tournament in\ \ Guadeloupe, where he also reached his first junior singles final. Though he\ \ never won a singles title on the junior circuit, Sousa reached three singles\ \ finals and won five doubles titles, including a Grade 2 tournament in France.\ \ In 2005 Sousa was runner-up at the Portugal under-16 National Championship,\ \ losing in the final to Gastão Elias. He had previously won the doubles title\ \ at the 2004 edition in the same age category.\n\nSousa peaked at number 61 in\ \ the world junior rankings in early 2007, shortly after entering the main draw\ \ of the 2006 Orange Bowl. His only participation at a junior Grand Slam was short-lived;\ \ he lost in the first qualifying round of the 2007 French Open Boys' Singles\ \ tournament. Sousa's last junior tournament was the European Junior Championships\ \ in Austria in July 2007.\n\nDespite not having turned professional before 2008,\ \ Sousa made his debut at a senior tournament in October 2005 after entering as\ \ a wild card in the main draw of a Futures doubles tournament in Barcelona. His\ \ first win as a senior came at a Futures doubles tournament, in August 2006 in\ \ Oviedo, and his debut singles tournament participation and win both came in\ \ May 2007 at a Futures tournament in Lleida, Spain. Sousa would not go beyond\ \ quarterfinals at any Futures event until 2008.\n\n2008–2012: Early career\n\ In 2008, Sousa began the season by winning his first professional title at the\ \ final of a Futures doubles tournament in Murcia. He reached two more doubles\ \ finals that year, winning a second title in August in Bakio. The biggest achievement\ \ in his 2008 campaign came at the Estoril Open. Entering through qualifying rounds,\ \ Sousa made his debut at the main draw of an ATP Tour-level tournament. He had\ \ his first ATP win over Austrian Oliver Marach, losing to Frederico Gil in the\ \ second round. Sousa also started playing at the ATP Challenger Tour and for\ \ the Portugal Davis Cup team in 2008. He played two singles dead rubbers, winning\ \ over Cyprus' Eleftherios Christou in July and losing to Ukraine's Illya Marchenko\ \ in September.\n\nBesides winning two more Futures doubles titles in three finals\ \ in Irun and Espinho, Sousa reached his first four singles finals at the same\ \ level in 2009. He won the title in the La Palma final. At the Estoril Open,\ \ Sousa was granted a wild card to participate in his first doubles ATP World\ \ Tour level tournament, but lost in the first round. During 2009, Sousa was twice\ \ called to the Portugal Davis Cup team, winning both singles dead rubbers he\ \ took part in – over Philippos Tsangaridis from Cyprus in March and Algeria's\ \ Sid-Ali Akkal in July. In 2010, Sousa won his first Challenger title at the\ \ Tampere's doubles tournament in August. In the 2010 season, Sousa did not enter\ \ any ATP tournament; he began shifting his schedule increasingly from the Futures\ \ circuit to the Challenger tour. He was more successful in the Futures, winning\ \ three singles titles in four finals at Valldoreix, Tenerife and Lanzarote, and\ \ doubles titles in Lanzarote, Córdoba and two in Tenerife. At the Davis Cup,\ \ Sousa played two more dead rubbers, winning for the second time in three seasons\ \ over Cyprus' Christou and losing to Bosnian Damir Džumhur.\n\nSousa reached\ \ several milestones in 2011. At the Challenger Tour, he won his first singles\ \ title at that level in Fürth in June. At the ATP World Tour, Sousa participated\ \ as a wildcard in the singles and doubles events in Estoril, losing in the second\ \ round of the former to Canadian Milos Raonic. He also made his first attempt\ \ at entering the main draw of a Grand Slam tournament, but fell in the qualifying\ \ rounds at the Australian Open, Wimbledon and the US Open. In October, Sousa's\ \ participation at the Sabadell Futures was his last presence in the main draw\ \ of a tournament in that category. He won three more singles and one doubles\ \ Futures titles, making his career titles at this level seven singles and nine\ \ doubles titles. Once again, Sousa was called for two dead rubbers at Davis Cup,\ \ winning over Martin Kližan from Slovakia and losing to Switzerland's Marco Chiudinelli.\ \ In October 2011, he hired Frederico Marques as a coach when he was ranked world\ \ number 220.\n\nAt the 2012 Estoril Open, Sousa reached the quarterfinals of\ \ an ATP tour tournament for the first time, losing to Albert Ramos. At the 2012\ \ French Open, he made his debut as a qualifier in the main draw of a Grand Slam\ \ tournament. He would lose in the first round to 20th seed Marcel Granollers\ \ in four sets. He did not progress past the qualifying rounds at the other three\ \ Grand Slam tournaments. He also entered main draw events at the Barcelona Open\ \ (lost to Frederico Gil in the 2nd round) and the Croatia Open (lost to Matthias\ \ Bachinger in the 1st round). At Challenger tournaments, Sousa won two singles\ \ titles out of three finals – Mersin and Tampere – and one doubles title at Fürth.\n\ \nHis role at the 2012 Davis Cup rose in importance. Sousa played his first doubles\ \ rubber against Israel, partnering with Gastão Elias in a loss against Andy Ram\ \ and Jonathan Erlich. Ram also beat Sousa in a dead rubber – his last as of 2016.\ \ In September, Sousa played three rubbers against Slovakia. He won the first\ \ singles match over Lukáš Lacko but lost the doubles with Elias and his second\ \ singles match to Martin Kližan, meaning Portugal's relegation from Europe/Africa\ \ Zone Group I to Group II in 2013. In this same month, Sousa became the top-ranked\ \ Portuguese tennis player for the first time, at No. 107. In October, his world\ \ ranking rose to No. 99, and Sousa became the fourth Portuguese player to enter\ \ the ATP top-100 singles ranking after Nuno Marques, Frederico Gil and Rui Machado.\n\ \n2013: Breakthrough in the ATP\nSousa started the 2013 season with his first\ \ participation in ATP tour level hardcourt tournaments at the Chennai Open and\ \ the Sydney International. Despite being knocked out of both tournaments in the\ \ first round, he returned to the top-100 world rankings. At the Australian Open,\ \ Sousa won his first Grand Slam on his second attempt, following a first-round\ \ win over wildcard John-Patrick Smith. He lost to world number three Andy Murray\ \ in straight sets in the second round. In February, Sousa participated in the\ \ Portugal Davis Cup team in their Europe/Africa Zone Group II tie against Benin.\ \ He won his singles match against Loic Didavi and the doubles match partnering\ \ with Pedro Sousa. Portugal won the tie 5–0 and progressed to the second round.\ \ Sousa then played his first clay court tournaments of the season at the Chile\ \ Open and ATP Buenos Aires, where he again lost in the first round. At the Mexican\ \ Open, he defeated former top 10 Jürgen Melzer in the first round, but lost in\ \ the second round to Santiago Giraldo.\n\nDespite failing to qualify for the\ \ Indian Wells Masters, Sousa entered for the first time in his career in the\ \ main draw of a Masters event at the Miami Masters. He lost in the first round\ \ to former world number 1 Lleyton Hewitt in straight sets. Sousa did not play\ \ in April after fracturing his left foot during a Davis Cup training session.\ \ He was scheduled to return as a wild card at the Portugal Open. His invitation\ \ was given to world number 4 David Ferrer instead, which stirred some controversy\ \ in Portuguese media. Later in the season, Sousa showed uncertainties about his\ \ future Portugal Open participation, which prompted tournament director João\ \ Lagos to comment on the contention. Ahead of the 2014 edition, the controversy\ \ was no longer an issue.\n\nSousa returned to action at the Madrid Masters qualifying\ \ rounds and at his first Challenger tournament of the season in Bordeaux, but\ \ he lost early in these attempts. At the 2013 French Open, Sousa won his first\ \ round match over Go Soeda in straight sets, and lost in the second round to\ \ Spaniard Feliciano López. He returned to the Challenger circuit with a singles\ \ title at Fürth and an early loss at Košice. It was also his second title in\ \ Fürth, after the triumph in 2011. Sousa missed the 2013 Wimbledon Championships\ \ main draw after losing in the third qualifying round to Julian Reister. He would\ \ also lose in the qualifying rounds of the doubles competition, while partnering\ \ with Teymuraz Gabashvili. In July, he played exclusively in Challenger tournaments,\ \ being runner-up in singles and doubles in San Benedetto, re-entering the top\ \ 100 rankings, which he has maintained ever since. He won the singles title in\ \ his hometown Guimarães. This remains his last participation at the ATP Challenger\ \ Tour, having won five singles and two doubles titles at the level. After losing\ \ in the qualifying rounds of the Cincinnati Masters, Sousa returned to the ATP\ \ World Tour at the Winston-Salem Open in August, losing to Alex Bogomolov, Jr.\ \ in the second round. In his first US Open appearance, Sousa reached the third\ \ round after defeating 25th seed Grigor Dimitrov and Jarkko Nieminen in back-to-back\ \ 5-set matches. He ended his campaign losing to world No. 1 Novak Djokovic. This\ \ was his best result at Grand Slams yet.\n\nIn September, Sousa joined Portugal's\ \ Davis Cup team to face Moldova in the semifinals of Europe/Africa Zone Group\ \ II. He won his first singles match over Maxim Dubarenco and the doubles match\ \ with Gastão Elias. He lost his second singles match to Radu Albot in an epic\ \ five-set duel which lasted nearly five hours. Portugal won 3–2 and was promoted\ \ to Group I in 2014. Following early-round wins over Paolo Lorenzi and Sergiy\ \ Stakhovsky at the St. Petersburg Open, Sousa beat former ATP top 20 player Dmitry\ \ Tursunov in the quarterfinals to advance to his first career ATP tour semifinal.\ \ He would lose there to Guillermo García-López.\n\nSousa's breakthrough title\ \ came at the Malaysian Open, in the early rounds of which he defeated Ryan Harrison\ \ and Pablo Cuevas. In the quarterfinals, Sousa defeated world No. 4 David Ferrer\ \ in straight set; Sousa's first career win over a top-10 player. Then, he qualified\ \ to his first ATP tour level final after getting past Jürgen Melzer in three\ \ sets. Sousa beat Frenchman Julien Benneteau in three sets in the final after\ \ saving one match point, becoming the first Portuguese player to win an ATP World\ \ Tour singles tournament. He also became the highest ranked Portuguese ever,\ \ climbing from No. 77 to No. 51. The previous record holder was Rui Machado,\ \ who was world No. 59 in 2011. Sousa officially entered the top 50 for the first\ \ time on 7 October 2013.\n\nIn October, Sousa had a first-round loss at the Kremlin\ \ Cup and a second round appearance at the Valencia Open. After beating Guillermo\ \ Garcia-Lopez in the first round, Sousa lost to 2013 Wimbledon semifinalist Jerzy\ \ Janowicz. Sousa finished his 2013 season by being eliminated from the Paris\ \ Masters in the qualifying round. At world No. 49, he became the first Portuguese\ \ to finish the season in the top 50. In November, Sousa was nominated for the\ \ 2013 Portuguese Sportsman of the Year award, losing to cyclist Rui Costa. At\ \ the same ceremony, he was named Tennis Personality of the Year by the Portuguese\ \ Tennis Federation.\n\n2014: Consolidating presence in the ATP World Tour\nSousa\ \ began the 2014 season with a first-round loss at the 2014 Qatar Open. At the\ \ Sydney International's doubles competition, he partnered with Lukáš Rosol to\ \ defeat the Bryan brothers, the then-world No. 1 doubles team, en route to the\ \ semifinals. At the 2014 Australian Open, he was beaten by world No. 137 and\ \ future Grand Slam champion Dominic Thiem in the first round. Partnering with\ \ Colombian Santiago Giraldo, Sousa was eliminated in the first round of the doubles\ \ competition by Mahesh Bhupathi and Rajeev Ram. Later in January, Sousa joined\ \ the Portugal Davis Cup team to face Slovenia for the Europe/Africa Group I 1st\ \ Round. He won his first singles match against Janez Semrajc, but then lost in\ \ the doubles match and his second singles match against Blaž Kavčič. Portugal\ \ eventually lost 3–2 and fell to a relegation playoff. In February, he started\ \ with early round losses at the Open Sud de France and ATP Buenos Aires. Sousa\ \ played at the Rio Open and reached the quarterfinals, where he was beaten by\ \ world No. 1 Rafael Nadal. Sousa ended February with a second-round defeat and\ \ exit to Andy Murray at the Mexican Open. During the North American hard court\ \ Masters swing in March, Sousa started the Indian Wells Masters with a win over\ \ Aleksandr Nedovyesov, followed with a second-round loss to 20th seed Ernests\ \ Gulbis. At the 2014 Sony Open Tennis in Miami, Florida, Sousa reached the third\ \ round. After beating 26th seed Gilles Simon in the second round, he lost to\ \ world No. 7 Tomáš Berdych.\n\nSousa began the spring clay court season at the\ \ Grand Prix Hassan II in Casablanca, where he was beaten by world No. 273 Roberto\ \ Carballés Baena in a second-round match lasting over three hours. This loss\ \ started an eight-match losing streak that lasted the remainder of the clay court\ \ season – it included losses at the Monte-Carlo Masters, at the Barcelona Open,\ \ at the Portugal Open, at the Madrid Masters, at the Rome Masters, and at the\ \ Düsseldorf Open. In the first round of the 2014 French Open, Sousa suffer his\ \ eighth consecutive loss against world No. 2 Novak Djokovic. During this run\ \ of losses, Sousa reached the semifinals of the Portugal Open's doubles competition\ \ and the third round at the 2014 French Open doubles competition, where he partnered\ \ with American Jack Sock and lost to Andrey Golubev and Sam Groth.\n\nSousa made\ \ his debut at an ATP grass tournament main draw at the Halle Open. In the first\ \ round, he beat German wild card Jan-Lennard Struff and snapped the eight match\ \ losing streak. Then, he faced former world No. 1 and 6-time Halle champion Roger\ \ Federer in the second round. After winning a close first set, Sousa ended up\ \ losing in three sets to the Swiss. At the Rosmalen Grass Court Championships,\ \ Sousa became the first Portuguese player ever to reach the semifinal of an ATP\ \ tour level grass tournament. He beat in succession Paolo Lorenzi, Mate Pavić\ \ and Thiemo de Bakker, losing in the semifinals to Benjamin Becker. To cap his\ \ grass court season, Sousa played his first ever Wimbledon Championships main\ \ draw match at the 2014 edition, with a straight sets loss in the first round\ \ to world No. 3 Stan Wawrinka. In the doubles competition, he partnered with\ \ Argentinian Carlos Berlocq to play a four-hour, five-set first round loss to\ \ Martin Kližan and Dominic Thiem.\n\nIn July, Sousa reached his second-career\ \ ATP tour level final and his first of 2014 at the Swedish Open, defeating the\ \ defending champion Carlos Berlocq in the semifinals. He lost the final to the\ \ Uruguayan Pablo Cuevas in straight sets. After losing in early rounds at the\ \ German Open and the Croatia Open, Sousa entered the Canada Masters, where he\ \ was defeated in the first round by 11th seed Gulbis. At the Cincinnati Masters,\ \ Sousa was defeated by Andy Murray in the second round. Sousa was also eliminated\ \ in the second round of the Winston-Salem Open. At that tournament's doubles\ \ competition, Sousa reach his third semifinal of the season, teaming up with\ \ Romanian Florin Mergea. At the 2014 US Open, Sousa became the first Portuguese\ \ player to be seeded at a Grand Slam tournament, with the 32nd seed at the singles\ \ competition. He started with a five-set win over Canadian Frank Dancevic. In\ \ the second round, he lost to David Goffin. In the doubles competition, Sousa\ \ partnered with Serbian Dušan Lajović and beat the Americans Marcos Giron and\ \ Kevin King in the first round. They eventually fell to 4th seed Marcelo Melo\ \ and Ivan Dodig in the second round.\n\nIn September, Sousa was selected to join\ \ Portugal Davis Cup team against Russia for the Europe/Africa's Group I Relegation\ \ Playoff. He lost both his singles and doubles matches, confirming the relegation\ \ of Portugal to Group II in 2015. Sousa rebounded at the 2014 Moselle Open with\ \ his second ATP singles final of the season, after defeating former ATP top-10\ \ Gaël Monfils in the semifinals. He lost the final in straight sets to Goffin.\ \ Sousa followed this with a first-round loss to Benjamin Becker at the 2014 Malaysian\ \ Open, where Sousa was the defending champion, and dropped out of the Top-50\ \ for the first time in 11 months. However, a quarterfinal appearance at the doubles\ \ tournament enabled him to enter the ATP doubles top-100 for the first time.\ \ He became the second Portuguese player to reach the top-100 of both ATP rankings,\ \ after Nuno Marques. It was the first time since January 1996 that a Portuguese\ \ player held a spot on the singles and doubles top-100s simultaneously. At the\ \ China Open, Sousa lost in the second round to reigning US Open champion Marin\ \ Čilić. He followed it with a debut at the Shanghai Masters, where he lost to\ \ Juan Mónaco in the first round. Sousa also lost in the first round at the Stockholm\ \ Open, but rebounded at the Valencia Open with his second career win over a top-10\ \ doubles team, the defending champions Alexander Peya and Bruno Soares, in the\ \ first round. Alongside Leonardo Mayer, he reached his fourth doubles semifinal\ \ that season, the first at ATP 500 level. At the Paris Masters, Sousa suffered\ \ another early exit, ending his 2014 ATP tour campaign.\n\nSousa ended 2014 as\ \ world No. 54, failing to keep his top-50 status from the previous season. He\ \ became the first Portuguese player to maintain top-100 status by playing exclusively\ \ on the ATP World Tour in a single season. In November, he was nominated for\ \ the 2014 Portuguese Sportsman of the Year award, again losing to cyclist Rui\ \ Costa.\n\n2015: Second ATP title and quarterfinal at Grand Slam\nSousa began\ \ the 2015 season with an early round loss at the Auckland Open. At the 2015 Australian\ \ Open, he started his campaign with wins over wild card Jordan Thompson and Martin\ \ Kližan. He progressed to a third round match-up with 6th seed Andy Murray, becoming\ \ the second Portuguese player to reach that stage. Sousa lost in straight sets\ \ to Murray. In the doubles competition, Sousa partnered with Santiago Giraldo\ \ to reach the second round, where they lost to 2nd seeds Julien Benneteau and\ \ Édouard Roger-Vasselin. In February, Sousa participated at the Open Sud de France.\ \ After defeating Philipp Kohlschreiber in the quarterfinals, he lost in the semifinals\ \ to Jerzy Janowicz in three sets. After early round losses at the Rotterdam Open\ \ and Open 13, Sousa reached the second round of the Dubai Tennis Championships,\ \ where he was beaten by Murray. Sousa was then called for the Davis Cup team\ \ to face Morocco for the Europe/Africa Zone's Group II first round in early March.\ \ He won his singles rubber and partnered with Frederico Ferreira Silva to win\ \ the doubles rubber and close the tie in Portugal's favour. After injuring his\ \ knee and suffering breathing difficulties, Sousa was eliminated from both Indian\ \ Wells Masters and Miami Masters in the first round. He returned to Barcelona\ \ for recovery.\n\nSousa returned in April at the Monte-Carlo Masters, losing\ \ in the second round to Milos Raonic. At the Barcelona Open and Estoril Open,\ \ he lost in early rounds and then was eliminated from the Madrid Masters in the\ \ second round by Stan Wawrinka and from the Rome Masters in the first round by\ \ John Isner. At the Geneva Open, Sousa won a first round match over his Brazilian\ \ homophone João Souza, which was notable for the umpire needing to refer to each\ \ player by their nationality to distinguish between them during the calls. Sousa\ \ proceeded to the final, his first of the season, where he lost to Thomaz Bellucci.\ \ At the French Open, Sousa beat Canadian Vasek Pospisil in straight sets in the\ \ first round, and was defeated by 3rd seed Andy Murray in the second round. In\ \ the men's doubles of the tournament, Sousa partnered with Bellucci and was knocked\ \ out in the first round by 11th seeds Jamie Murray and John Peers. In June, Sousa\ \ did not have a strong grass court season; he was defeated in the early rounds\ \ at the Rosmalen Grass Court Championships, the Queen's Club Championships and\ \ the Nottingham Open. At Wimbledon, Sousa was again eliminated in straight sets\ \ in the first round by French Open champion and 4th seed Stan Wawrinka. His results\ \ did not improve in the men's doubles competition, from which he was eliminated\ \ in the first round while partnering with Santiago Giraldo.\n\nAt the Davis Cup\ \ Group II second round against Finland, Sousa rebounded with wins in his two\ \ singles rubbers and in the doubles rubber with Gastão Elias. At the Croatia\ \ Open, he beat in succession Andreas Seppi, Fabio Fognini and Roberto Bautista\ \ Agut to reach his second final in 2015. Sousa lost the final to Dominic Thiem.\ \ After a quarterfinal exit at the Swiss Open, He suffered a first-round loss\ \ to Bernard Tomic at the Canada Masters and reached the second round at the Cincinnati\ \ Masters, where he lost to Marin Čilić. Following a brief appearance at the Winston-Salem\ \ Open, Sousa was defeated at the US Open by Ričardas Berankis in five sets in\ \ the first round. In the men's doubles competition, Sousa became the second Portuguese\ \ player to reach the quarterfinals of a Grand Slam event after Nuno Marques,\ \ also in men's doubles at the 2000 Australian Open. Sousa and his partner Argentinian\ \ Leonardo Mayer were denied a presence in the semifinals by Americans Sam Querrey\ \ and Steve Johnson.\n\nSousa returned to the Davis Cup in September to help Portugal\ \ defeat Belarus and gain promotion to Europe/Africa Zone's Group I in 2016. Despite\ \ losing his first singles rubber, he won the doubles rubber with Elias and the\ \ deciding singles rubber against Uladzimir Ignatik. At the St. Petersburg Open,\ \ Sousa reached his third final of the season. Following wins over Marcel Granollers,\ \ Simone Bolelli and Dominic Thiem, Sousa was runner-up to Milos Raonic in three\ \ sets. In October, Sousa entered on a 1–4 run with early round losses at the\ \ Malaysian Open, the Japan Open, the Shanghai Masters and the Kremlin Cup. At\ \ the Valencia Open, Sousa capped the season with his second career ATP title\ \ and the first of the season in four final attempts. After beating four higher-ranked\ \ players, including Benoît Paire, Sousa defeated 7th seed Roberto Bautista Agut\ \ in the final in three sets. He reached a new career-high ranking in the following\ \ week at world No. 34. Sousa finished the season at career-high world No. 33\ \ with 38 singles wins. In November, he received the award for Tennis Personality\ \ of the Year for the second time from the Portuguese Tennis Federation and the\ \ Confederação do Desporto de Portugal.\n\nDuring 2015, physiotherapist Carlos\ \ Costa, known for his work with Tommy Haas, occasionally joined Sousa's entourage\ \ in selected tournaments; Sousa wanted to have a part-time member in his team\ \ responsible for that area. In 2016, Costa is expected to follow Sousa for at\ \ least 10 weeks but will remain focused on Haas' return until Wimbledon.\n\n\ 2016: Top 30 and first Masters 1000 quarterfinals\nAfter training at Rafael Nadal's\ \ home ground in the pre-season, Sousa began the 2016 season with a first-round\ \ loss to Fabio Fognini at the Auckland Open. Due to Richard Gasquet's absence\ \ by injury, he became the first Portuguese ever to be seeded at the Australian\ \ Open, entering the singles main draw as the 32nd seed. Following wins over Mikhail\ \ Kukushkin and Santiago Giraldo, Sousa lost in the third round to world No. 2\ \ Andy Murray for the second successive year. In the doubles event, Sousa partnered\ \ with Leonardo Mayer but the pair were eliminated in the first round.\n\nIn April,\ \ Sousa reached his first Masters 1000 quarterfinals at the Mutua Madrid Open\ \ 2016, after beating Nicolas Mahut, lucky loser Marcel Granollers and Jack Sock.\ \ He lost to Rafael Nadal in three sets. His clay season ended with a second-round\ \ exit at the French Open, where he lost to Ernests Gulbis in four sets.\n\nIn\ \ June, Sousa entered Wimbledon as 31st seed. After beating Dmitry Tursunov in\ \ five sets and Dennis Novikov in four sets, he lost to Jiri Vesely in the third\ \ round, making for his best run ever at Wimbledon.\n\nSousa entered the 2016\ \ Rogers Cup where he lost in the 1st round 6–3, 6–3 to semi-finalist Gaël Monfils.\n\ At the 2016 Olympic Games in Rio de Janeiro, Sousa won his first match but lost\ \ in the next round in three sets to eventual silver medalist Juan Martín del\ \ Potro. Three weeks later at the 2016 US Open, he inflicted the heaviest defeat\ \ of the Men's Singles draw, defeating Víctor Estrella Burgos in the first round\ \ conceding only 2 games in 3 sets. He went on to defeat Feliciano Lopez in 4\ \ sets but his run ended losing to a resurgent Grigor Dimitrov.\n\nAfter dropping\ \ the points from the 2015 Valencia Open in late October, Sousa finished the season\ \ at 43rd in the ATP Rankings, with just over 1,000 points.\n\n2017: Two ATP finals\n\ \nJoão Sousa trained with Rafael Nadal in the offseason for the second year running.\ \ He started the 2017 season at the 2017 Auckland Open once again, where he reached\ \ the final after beating Albert Ramos-Vinolas, Brydan Klein, Robin Haase and\ \ Marcos Baghdatis. He lost in three sets to Jack Sock, but the result allowed\ \ him to re-enter the Top 40 in the ATP Singles Rankings. Sousa's January ended\ \ with a first-round exit at the Australian Open, having lost in five sets to\ \ Jordan Thompson, his worst result at this Grand Slam since 2014.\n\nSousa started\ \ the South American swing at the Argentina Open, having lost in the quarter-finals\ \ to eventual finalist Kei Nishikori. At the Rio Open, Sousa crashed out in the\ \ first round, losing in two sets to Roberto Carballes Baena, in a match that\ \ lasted just under an hour. His last clay tournament in South America was the\ \ Brasil Open, where he lost in the semi-finals to Albert Ramos-Vinolas in three\ \ sets.\n\nIn March, Sousa entered the first two Masters 1000 tournaments of the\ \ season. At the BNP Paribas Open, he lost to Mischa Zverev in the second round.\ \ At the Miami Open, Sousa entered as 30th seed, receiving a bye for the first\ \ round, but lost in the second round to Fabio Fognini.\n\nSousa's late May ended\ \ with second round at the French Open, having lost to three sets to Serbian number\ \ 2's Novak Djokovic by 6–1, 6–4, 6–3. But, Sousa already won at first round with\ \ Serbia's Janko Tipsarevic, having won by four sets by 4–6, 7–6 (7–3), 6–2, 6–2.\n\ \nAfter the clay court season was over, he continued a streak of consecutive losses,\ \ losing matches to Philipp Kohlschreiber, Radu Albot and Dustin Brown at the\ \ Gerry Weber Open, Antalya Open and Wimbledon respectively.\n\nSousa's streak\ \ remained active in Croatia Open Umag, losing in 3 sets to Aljaz Bedene. However,\ \ he would eventually turn it around winning by reaching the quarterfinals in\ \ Swiss Open Gstaad and reaching the final in Generali Open Kitzbühel.\n\nHe would\ \ go on to have more losses in the remainder of the year and not many more wins.\ \ One of these includes two crucial defeats in the Davis Cup where Portugal could\ \ have qualified for the World Group for the first time in its history, specially\ \ considering the absence of the Zverev brothers and Kohlschreiber.\n\n2018–2019:\ \ Home title, Grand Slam fourth rounds\nIn 2018, Sousa made the third round of\ \ the Indian Wells Masters and the fourth round of the Miami Masters. At Indian\ \ Wells, he defeated 4th seed and world number 5 Alexander Zverev in the second\ \ round before losing to 32nd seed Milos Raonic in three sets. At Miami, he defeated\ \ 7th seed and world number 9 David Goffin in the second round only losing one\ \ game in the process before losing to 19th seed Chung Hyeon in straight sets.\n\ \nSousa became the first Portuguese player to win his home title in Estoril, after\ \ beating Daniil Medvedev, countryman Pedro Sousa, Kyle Edmund, Stefanos Tsitsipas\ \ and Frances Tiafoe.\nHe reached the fourth round at the US Open, losing to eventual\ \ champion Novak Djokovic.\n\nSousa failed to defend his title at the following\ \ 2019 Estoril, losing to David Goffin in the second round. \nHe reached the fourth\ \ round for a second week at 2019 Wimbledon, losing to Rafael Nadal.\n\n2020–2021:\ \ Dip in form and rankings, out of top 100, 200th ATP career win\nThroughout 2020\ \ and 2021, Sousa showed a severe dip in form. Since the beginning of 2020, Sousa\ \ posted a win-loss record of 1–20 on the ATP tour and his ranking plummeted from\ \ No. 58 at the beginning of 2020 to No. 147 as of July 26, 2021. It was the first\ \ time since 2013 that Sousa had fallen outside the top 100 in singles rankings.\n\ \nAt the 2020 Davis Cup. Sousa defeated Romanian Filip Cristian Jianu to record\ \ his 200th career win.\n\n2022: First ATP title since 2018, back to top 100\n\ \nAt the 2022 Australian Open, Sousa participated in the qualifications to enter\ \ the men's singles main draw as a qualifier. He fell short of doing so, as he\ \ lost to Radu Albot in the final round of qualifying. Sousa would still end up\ \ entering the main draw as a lucky loser, facing Jannik Sinner. He lost to Sinner\ \ in straight sets.\n\nAfter being in a serious slump for more than 2 years, Sousa\ \ finally earned one of his best results in recent years, winning his 4th career\ \ singles title in Pune. He defeated Emil Ruusuvuori in the final to win his first\ \ tour-level title since 2018. As a result, he moved 51 positions up, returning\ \ into the top 100 to No. 86.\n\nPlaying style\n\nSousa's game is strongly based\ \ on his serve and forehand. He is right-handed and plays with a two-handed backhand.\ \ Sousa has said the forehand is his favourite shot and that he prefers playing\ \ on clay courts. He is known for expressing his emotions on court at times, often\ \ focusing on his coach or the umpire. Andy Murray described Sousa as a tough\ \ opponent who never backs down from a fight, while Novak Djokovic called him\ \ a \"tough\" and mentally strong player who \"takes the best out of the opponent\"\ . Jamie Murray said Sousa has a \"good forehand\" and \"likes playing on clay\"\ , despite his better results on hard courts. He has been described as having the\ \ potential of becoming a top-20 player.\n\nSousa's game pattern has become more\ \ offensive-minded and consistent, and his game has evolved in recent years from\ \ playing on clay to becoming more proficient on other surfaces. He won his first\ \ Challenger title on hard courts in July 2013 in his hometown Guimarães. Later\ \ in September, Sousa went on an 8–1 run to cap his semifinal run at the 2013\ \ St. Petersburg Open and win the title at the 2013 Proton Malaysian Open, both\ \ ATP tour hard-court indoor tournaments. He continued his form on faster courts\ \ in 2014, with deep runs on grass courts at the 2014 Gerry Weber Open and Topshelf\ \ Open, and a final appearance in a hard court indoor tournament at the 2014 Moselle\ \ Open. Despite a results slump on clay earlier in the season, he still achieved\ \ his first ATP tour-level final on clay at the 2014 Swedish Open, and, eventually\ \ triumphing in home turf at the 2018 Estoril Open.\n\nEquipment and endorsements\n\ As of October 2013, Sousa has been represented by Polaris Sports, a subsidiary\ \ of Jorge Mendes's Gestifute, which manages the career of other major Portuguese\ \ sportspeople, including Cristiano Ronaldo. Sousa uses a Wilson racquet, and\ \ is endorsed by Lotto Sport Italia since January 2014, in a two-year partnership\ \ which covers the supply of footwear, clothing and accessories. In May 2015,\ \ Sousa started a partnership with sports supplements company Gold Nutrition.\ \ Sousa's endorsement of sporting attires was switched to Joma in 2020.\n\nPortuguese\ \ clothing brand Mike Davis announced an agreement with Sousa to associate him\ \ with the brand's casual sportswear during 2014. Portuguese private bank BES\ \ was another endorser of Sousa's career before its bailout in August 2014. In\ \ February 2015, private bank Millenium BCP announced a sponsorship agreement\ \ with Sousa.\n\nEarlier in his career, Sousa said he struggled to find local\ \ endorsements and also lamented the financial struggles of the Portuguese Tennis\ \ Federation, which prevented support for his growing participation in the ATP\ \ Tour. He criticized local government for lack of support of sports other than\ \ football. During his junior and early professional career, Sousa's expenses\ \ were supported mainly by his parents and through bank loans.\n\nCareer statistics\n\ \nGrand Slam tournament performance timelines\n\nSingles\nCurrent through the\ \ 2022 Australian Open.\n\nDoubles\n\nATP Masters 1000 finals\n\nDoubles: 1 (1\ \ runner-up)\n\nAwards\n2013 – CDP Portuguese Tennis Personality of the Year\n\ 2014 – CNID Portuguese Athlete of the Year\n2015 – CDP Portuguese Tennis Personality\ \ of the Year\n\nNotes\n\nReferences\n\nExternal links\n\n Official website\n\n\ Profiles\n \n \n \n \n\n1989 births\nLiving people\nPortuguese expatriate sportspeople\ \ in Spain\nSportspeople from Guimarães\nPortuguese male tennis players\nTennis\ \ players at the 2016 Summer Olympics\nTennis players at the 2020 Summer Olympics\n\ Olympic tennis players of Portugal" - "A clinical officer (CO) is a gazetted officer who is qualified and licensed to\ \ practice medicine. \n\nIn her books, \"Beyond the State: The Colonial Medical\ \ Service in British Africa\" and \"Indian Doctors in Kenya, 1895 - 1940: The\ \ Forgotten History\", the author Anna Greenwood notes that before 1923 there\ \ were twice as many Indian doctors as there were European doctors working in\ \ the Colonial Medical Service. The Indian doctors had migrated to British Africa\ \ along with the coolies who came to work on the Uganda Railway. The Indian doctors\ \ faced discrimination and were not appointed to nor paid at the same rank as\ \ medical officers (European doctors). Instead, they were designated as Assistant\ \ or Sub-assistant surgeons despite having attended similar 3 - 4 year Indian\ \ medical schools that were recognized by the General Medical Council in the UK\ \ and performing clinical and administrative duties that were largely identical\ \ to those of the European doctors. From the mid-1920s the Indians were removed\ \ from the colonial service as they were not deemed to be the proper face of the\ \ imperial services in Africa. The Indian Assistant and Sub-Assistant Surgeons\ \ were thus replaced with similarly qualified Africans who came to be known as\ \ clinical officers when the authorizing legislation was passed in 1988 abolishing\ \ the Assistant and Sub-Assistant Surgeon and similar positions.\n\nIn Kenya,\ \ the origin of the clinical officer can be traced back to around 1888 when Sir\ \ William Mackinnon, 1st Baronet founded the Imperial British East Africa Company.\ \ The company was granted royal charter by Queen Victoria and was used by the\ \ Government of the United Kingdom to establish its influence in the East Africa\ \ Protectorate (present day Kenya). As the influence grew a healthcare system\ \ developed to meet the medical needs of the colony. In 1901 Kenyatta National\ \ Hospital was established as the Native Civil Hospital and later renamed the\ \ King George VI Hospital after King George VI of the United Kingdom. In 1958\ \ the European Hospital (present-day Nairobi Hospital) was established in the\ \ same area to serve the European settlers. The need for qualified medical staff\ \ who would provide preventive, promotive, curative and rehabilitative services\ \ in hospitals and communities led to the establishment of the first formal training\ \ programme for clinical officers at Kenyatta National Hospital in 1928. The programme\ \ initially admitted experienced nurses and took them through a one-year certificate\ \ course which prepared them for advanced practice. The nursing track was discontinued\ \ and new students had to complete a medical course and sit and pass continuous\ \ assessment tests and final qualifying examinations which covered the biomedical\ \ sciences, medicine, surgery, paediatrics, obstetrics and gynecology, community\ \ health and health service management. The training expanded after Kenya's independence\ \ in 1962 through to 1970 when the newly created University of Nairobi started\ \ its own medical school and also used Kenyatta National Hospital as its teaching\ \ hospital. Legislation to regulate medical practice by clinical officers was\ \ passed in 1988 thus creating the Clinical Officers Council in 1989. In 1990\ \ the Kenya Medical Training College was established by the government with campuses\ \ in all major towns and in 1996 the Roman Catholic Diocese of Kakamega established\ \ St. Mary's School of Clinical Medicine at St. Mary's Hospital in Mumias which\ \ become the second and third institutions to offer the training in Kenya. By\ \ this time clinical officers had to complete an accredited four-year programme\ \ of study, practicals and internship in clinical medicine and surgery and have\ \ their names entered in the clinical officers register which was cleaned annually\ \ and taken to the government printer to be published in the Kenya Gazette. Private\ \ practice by clinical officers who had left government service after working\ \ for a minimum of 10 years was now allowed. Undergraduate degrees in clinical\ \ medicine were first offered by Egerton University and other universities as\ \ from 2006 and in 2012 the Commission for University Education Act No. 42 of\ \ 2012 removed the accreditation role from all regulatory bodies such as the Clinical\ \ Officers Council (COC) and the Kenya Medical Practitioners and Dentists Council\ \ (KMPDC) making the Commission for University Education (CUE) the only authorized\ \ accrediting body for all university degrees in Kenya including the degree in\ \ clinical medicine. In 2017 the old legislation was repealed and the Clinical\ \ Officers Council reconstituted by the Clinical Officers (Training, Registration\ \ and Licensing) Act No. 20 of 2017 which requires each clinical officer, clinic\ \ or medical centre to be registered by the council and to maintain a current\ \ practice license and a current practising certificate in order to operate legally\ \ within the scope of medicine, dentistry, orthopedics or health work. A clinical\ \ officer may, with respect to patients - examine, diagnose, order laboratory\ \ and imaging investigations, prescribe treatment and perform procedures as per\ \ their scope of training. Clinical officers are members of the Kenya Clinical\ \ Officers Association and the Kenya Union of Clinical Officers. In June 2020\ \ the Public Service Commission approved the Revised Scheme of Service for Clinical\ \ Personnel which was issued by the State Department for Public Service to define\ \ the clinical officer's career structure, job description, standards for recruitment,\ \ training and advancement, and career planning and succession management within\ \ the civil service. The scheme is administered by the Ministry of Health through\ \ the Cabinet Secretary and the Principal Secretary in conjunction with the Public\ \ Service Commission and the County Chief Officer for Health in each of the 47\ \ Counties of Kenya.\n\nClinical officer is a professional designation established\ \ by the government through the Clinical Officers Council (COC) which has jurisdiction\ \ and responsibility for the clinical officer's training, registration and licensing\ \ and each officer must (1) study clinical medicine and surgery or clinical medicine\ \ and community health for three or four years (2) graduate from a government-accredited\ \ medical training college (3) sit and pass a government licensing examination\ \ (4) complete an internship year at a teaching hospital (5) be registered as\ \ a clinical officer (6) have a medical practice licence (7) complete a three-year\ \ period of clinical supervision under a senior clinical officer or a senior medical\ \ officer (8) have a practising certificate if they have a private practice which\ \ allows one to provide general medical services on their own directly to the\ \ public (9) undergo one or two additional years of specialized training (optional)\ \ and (10) become a trainer. Clinical Officer (CO) is a protected professional\ \ title and its use by unregistered persons is prohibited by law and punishable\ \ by up to five years in jail with or without a fine. Globally, the title may\ \ not have legal restrictions and can refer to a job grade rather than a medical\ \ qualification such as junior assistive clinical staff (e.g. in Zambia and Tanzania),\ \ licensed medical professionals (e.g. in Kenya and Malawi) or high-level corporate\ \ officers, directors, and managers (e.g. Chief Clinical Officers in Europe and\ \ the United States).\n\nA clinical officer observes, interviews and examines\ \ sick and healthy individuals in all specialties to determine and document their\ \ health status and applies relevant pathological, radiological, psychiatric and\ \ community health techniques, procedures and findings needed to classify diseases\ \ and related health problems and to establish a provisional or final diagnosis\ \ upon which to prescribe, initiate, carry out or terminate treatment or therapy\ \ based on their specialized knowledge, skills and experience in clinical pharmacology,\ \ use of clinical guidelines, best practices and disease patterns as well as individual\ \ patient and community characteristics while being actively pharmacovigilant\ \ to prevent, identify, minimize and manage drug reactions, drug errors, side\ \ effects and poisoning, overdiagnosis, overscreening, overtreatment and futile\ \ care. A clinical officer performs general and specialized medical duties such\ \ as diagnosis and treatment of disease and injury, ordering and interpreting\ \ medical tests, performing routine medical and surgical procedures, referring\ \ patients to other practitioners and managing health departments, institutions,\ \ projects and systems.\n\nClinical officers, medical officers and medical practitioners\ \ are the only officers who are gazetted and licensed to practice medicine in\ \ Kenya. They work under oath and generate credible health data and information\ \ within communities and health institutions and cascade the same to the county\ \ and national governments, government agencies and third parties through standard\ \ recording and reporting tools from the Ministry of Health which are used to\ \ capture data on disease outbreaks, physical injuries and deformities, mental\ \ illness, drug resistance, disability, nutritional disorders, births and deaths\ \ among others.\n\nOverview\nTo practice medicine and surgery or dentistry as\ \ a clinical officer one requires at least four years of full-time medical training,\ \ supervised clinical practice and internship at an accredited medical training\ \ institution and hospitals and registration with the relevant medical board in\ \ their country. After a prescribed number of years in active practice, one may\ \ complete a further one or two-year residency programme in order to specialize\ \ in any approved branch of clinical medicine and surgery such as anesthesia or\ \ pediatrics, or get an advanced medical qualification from the university. There\ \ are no pathways (post-basic or post-graduate entry-level conversion programs)\ \ for nurses and other health workers hence it takes at least eight years of specialised\ \ medical training and experience for a clinical officer to graduate with a post-basic\ \ qualification. \"Clinical officer\" in some countries such as Tanzania and Zambia\ \ refers to a different cadre of health workers, comparable to \"medical assistants\"\ \ in Malawi, who have less than three years of training but who may upgrade to\ \ a similar level by becoming Assistant Medical Officers (AMOs) or Medical Licentiates\ \ (MLs).\"medical assistants/Sub Assistant Community Medical Officer\" in Bangladesh,\ \ a Four Year medical diploma course conducting state medical faculty of Bangladesh\ \ under ministry of Health and family welfare.\n\nA clinician can specialize in\ \ any other field that is deemed appropriate by them and not just clinical medicine.\ \ China also has masters of clinical medicine. In countries like Tanzania, UK,\ \ and other countries, clinical medicine is regarded as a medical course and graduates\ \ are allowed to apply to masters of medicine specialties.\n\nNo significant difference\ \ has been demonstrated in studies comparing treatment decisions, patient outcomes,\ \ quality of care provided and level of knowledge about diseases between a clinical\ \ officer and a medical officer (a non-specialist physician) except in countries\ \ where nurses were mistakenly assessed as clinical officers. However, because\ \ of the nature of practice, populations served and resources at ones disposal,\ \ a clinical officer is less likely to administer expensive treatment, prescribe\ \ expensive (but not necessarily better) drugs or engage in futile care.\n\nThe\ \ success of HIV/AIDS prevention and treatment initiatives in Africa is mostly\ \ attributed to use of clinical officers to diagnose the disease and provide comprehensive\ \ medical care. Access to emergency obstetric care through greater deployment\ \ of the clinical officer is one way of attaining the Millennium Development Goals\ \ 4 (reducing child mortality) and 5 (improving maternal health).\n\nWorldwide,\ \ patients are seen by many other practitioners other than the traditional doctor\ \ such as:\nOsteopathic physicians, Podiatrists, Optometrists and Anesthesiologist\ \ assistants in the United States\nEmergency and Clinical Officer Pakistan\nPhysician\ \ Assistants in the United States, United Kingdom, Netherlands, Liberia and Ghana\n\ Assistant Doctors in China,\nSurgical Care and Emergency Care Practitioners in\ \ the UK,\nAssistant Physicians in Saudi Arabia,\nHealth Extension Officers in\ \ Papua New Guinea\nmedical assistants/Sub Assistant Community Medical Officer\ \ in Bangladesh\nMedical Assistants in Fiji\nAssistant Medical Officers in Malaysia\n\ Surgical Technologists in Mozambique\nClinical Associates in South Africa.\n\n\ Scope of practice\nA clinical officer takes the Hippocratic oath and, depending\ \ on jurisdiction, may be registered by the same statutory board as physicians\ \ (in the southern countries such as Zambia and Malawi) or a separate board (in\ \ the eastern countries such as Kenya and Uganda). The broad nature of medical\ \ training prepares one to work at all levels of the health care system. Most\ \ work in primary care health centres and clinics, and casualty departments in\ \ hospitals where one will diagnose and treat all common diseases, including serious\ \ and life-threatening ones, in all age groups; and stabilise then admit, discharge\ \ or refer emergency cases. In smaller hospitals one may work as a hospitalist\ \ and one who has specialized in a clinical field provides advanced medical and\ \ surgical care and treatment such as administering anesthesia, performing general\ \ or specialised surgery, supervising other health workers and other administrative\ \ duties.\n\nA clinical officer's scope of practice depends on one's training\ \ and experience, jurisdiction and workplace policies. In Malawi, for instance,\ \ a clinical officer performs all routine surgical and obstetric operations such\ \ as exploratory laparatomy, emergency orthopaedics and Caesarean section. However,\ \ in Kenya, Tanzania and Mozambique one has to undergo further specialized training\ \ in order to perform such major operations safely.\n\nIn rural and small urban\ \ health facilities a clinical officer is usually the highest medical care provider\ \ and works with minimal resources, relying on the traditional medical history\ \ and physical examination, often with little or no laboratory facilities, to\ \ make a diagnosis and provide treatment. In bigger and better equipped facilities\ \ a clinical officer generally acquires superior knowledge, experience and skills\ \ and provides high quality and a wider range of services in district, provincial\ \ and national hospitals, universities and colleges, research institutions and\ \ private medical facilities.\n\nA clinical officer is usually the lowest entry-level\ \ cadre in the medical hierarchy but with years of experience and/or further training\ \ one can rise to the same or a higher grade than a physician. In most countries,\ \ however, wages are usually low compared to training and responsibilities and\ \ career progression is usually restricted by awarding terminal degrees and diplomas,\ \ training students who have not attained the minimum university entry grade and,\ \ in some countries, not awarding any degree or recognition for advanced training.\ \ In such countries, this usually results in a demotivated and low quality workforce\ \ and resulting poor health indicators.\n\nThe United States' Centers for Disease\ \ Control and Prevention and other international health and research institutions\ \ make extensive use of COs in their projects in Africa and clinical officers\ \ have been the backbone of HIV care and treatment enabling the rollout of ARVS\ \ to even the most rural hard to reach areas in Africa.\n\nResearch done by the\ \ University of Birmingham and published in the British Medical Journal concluded\ \ that the effectiveness and safety of caeserian sections carried out by clinical\ \ officers did not differ significantly compared with doctors. Better health outcomes\ \ including lower maternal mortality rates were observed where COs had completed\ \ further specialised training particularly in anaesthesia.\n\nIn the multi-country\ \ study, poor outcomes were observed in Burkina Faso and Zaire - the only countries\ \ where the procedure was performed by trained nurses. Higher rates of wound infection\ \ and Wound dehiscence in these countries was thought to be due to the nurses'\ \ poor surgical technique and need for enhanced training.\n≠\n\nKenya\nKenya has\ \ a comprehensive framework of parallel laws and regulations that govern the medical\ \ practice of medical officers and clinical officers. The supreme health policy\ \ and medical authorities in the republic are the cabinet secretary of health\ \ and the director of medical services who oversee the registration and licensing\ \ of medical institutions and the training, registration and licensing of medical\ \ practitioners through the Medical Practitioners and Dentists Board and the Clinical\ \ Officers Council.\n\nAs a British colony in 1928, Kenya started training a select\ \ group of natives to practice medicine and care for the local population who\ \ were increasingly accepting and seeking western medicine. After independence\ \ from Britain in 1963, medical training in Kenya initially adopted the four-year\ \ medical school system used in the US rather than the six-year UK model. This\ \ was heavily influenced by the Kennedy Airlift which followed initial funding\ \ by the African-American Students Foundation (AASF) in 1959 and led to hundreds\ \ of young Kenyan students getting scholarships to study in American institutions:\ \ These students came back to Kenya after their studies and joined the civil service\ \ in the early post-independence Kenya. It was also around this time that the\ \ first DOs were accepted as medical officers by the US civil service and by 1967\ \ the structure and duration of medical training in Kenya was similar to the US\ \ MD training. When the University of Nairobi split from the University of East\ \ Africa and became the first university in Kenya in 1970, it continued to teach\ \ the six-year British degree which led to the creation of two statutory bodies:\ \ the Kenya Medical Practitioners and Dentists Board in 1978 which had jurisdiction\ \ over medical officers and physicians, and the Clinical Officers Council in 1989\ \ which had jurisdiction over clinical officers. Instead of residency for the\ \ clinical officer, the higher diploma in paediatrics, ophthalmology and other\ \ specializations was introduced in the late 1970s as a post-basic course for\ \ those who had worked for three or more years and, after ten years of service,\ \ one became a Senior Clinical Officer and qualified for a licence to practice\ \ under his own name as a private medical practitioner. The BSc. Clinical Medicine\ \ and Surgery degree was later introduced in 2006.\n\nClinical officers play a\ \ central role in Kenya's medical sector today. There were 8,600 clinical officers\ \ on the register in 2010 compared to 7,100 medical officers. They are trained\ \ by the universities, the Kenya Medical Training College (KMTC), St. Mary's School\ \ of Clinical Medicine and other private institutions. The Ministry of Health,\ \ through the Clinical Officers Council (COC) regulates their training and practice,\ \ accredits training institutions, and approves the syllabi of the universities\ \ and colleges. The Kenya Medical Training College (KMTC), also under the Ministry\ \ of Health, has campuses in regional teaching hospitals and trains the majority\ \ of clinical officers. St. Mary's School of Clinical Medicine and St. Mary's\ \ Mission Hospital in Mumias, owned by the Roman Catholic diocese of Kakamega,\ \ was the first private institution to train clinical officers. It admits students\ \ who got the minimum university entry grade in high school and have passed a\ \ written examination and oral interview. The students sit the same examination\ \ as their counterparts at the KMTC and are examined by consultants from the public\ \ service.\n\nOn 28 October 1981 lawmakers addressed the National Assembly as\ \ follows:\nMr. Orengo: On a point of order Mr. Deputy Speaker Sir. Is it really\ \ in order for the hon. Member for Butere to impute that this house does not know\ \ that clinical officers are not allowed to practice. I think the motion is just\ \ after legalizing the position and not saying that they are not allowed to practice.\n\ Mr. Shikuku: Mr. Deputy Speaker, we heard Dr. Chibule saying that he is going\ \ to give us a list of 20 clinical officers who are being refused permission to\ \ practice and forced to go back to government practice and this is the thing\ \ I am trying to reply to. The hon. Member was in the house when Dr. Chibule said\ \ this but I do not know why he did not hear him say so, but nevertheless, let\ \ me continue. Mr. Deputy speaker, Sir, the clinical officers helped the government\ \ during the recent doctors' strike in the country when we virtually depended\ \ on them. Now, Sir, part (a) of this motion is not the responsibility of the\ \ Ministry of Health because if anybody wants to pursue higher education, even\ \ from this house, it is upon him first to make sure he has the prerequisite qualifications\ \ to pursue higher studies. So, with that, the ministry is not concerned. Part\ \ (b) of the motion is the most important. We are requiring the enactment of a\ \ law to cover our present clinical officers to allow them to practice and be\ \ covered by the law as the doctors are covered. This is the point and the assistant\ \ minister for health has produced a paper which is going to be presented before\ \ the cabinet after which it will come to the house. Now, Sir, where do we disagree?\ \ There is no place where we disagree. What we are trying to say is that the government\ \ is already doing what it is being asked to do, and that is why we are saying\ \ that this matter has more or less been overtaken by events. Therefore, we are\ \ not going to be asked to do what we are already doing.\nMr. Orengo: On a point\ \ of order, Mr. Deputy Speaker, Sir.\nMr. Shikuku: You can have as many points\ \ of order as you like!\n\nThe dual diploma in clinical medicine and surgery plus\ \ an internship year is the standard qualification for clinical officers which\ \ is awarded on completion of a four-year training programme which started as\ \ various programmes that were used to train medical practitioners in the East\ \ Africa Protectorate in the 1920s and which now resembles the North American\ \ four-year MD and DO medical school programmes (including being structured in\ \ 9 trimesters over 3 years to meet the minimum 130 weeks of instruction recommended\ \ by the Liaison Committee on Medical Education) instead of the more recent six-year\ \ MBChB programme that was introduced in the 1970s and is more common in European\ \ and Commonwealth countries:\n\nMedical Officers training:\n Is a six-year professional\ \ degree programme accredited by the Medical Practitioners and Dentists Board\ \ involving\n Two years of pre-clinical training in medical sciences followed\ \ by\n Four years of training in clinical medicine, surgery and community health\ \ including a mandatory one-year internship and\n Registration, licensing and\ \ gazettment by the Medical Practitioners and Dentists Board giving\n Unlimited\ \ practice rights with\n Specialisation and private practice allowed and eligible\ \ for full professional membership of the Kenya Medical Association (KMA)\n\n\ Clinical Officers training:\n Is a four or five-year professional diploma or degree\ \ programme accredited by the Clinical Officers Council involving \n One year\ \ of pre-clinical training in medical sciences followed by\n Three or four years\ \ of training in clinical medicine, surgery and community health including a mandatory\ \ one-year internship and\n Registration, licensing and gazettment by the Clinical\ \ Officers Council giving\nUnlimited practice rights with\n Specialisation and\ \ private practice allowed and eligible for full professional membership of the\ \ Kenya Clinical Officers Association (KCOA)\n\nThe current training follows international\ \ guidelines and the two qualifications are awarded jointly on successful completion\ \ of a comprehensive nine trimester programme of full-time study, practicals and\ \ examinations which are covered over three years leading to a fourth mandatory\ \ year of internship in a teaching hospital. A fifth and sixth residency specialisation\ \ years are undertaken after registration by the Clinical Officers Council and\ \ three years of work experience in general medicine which leads to the award\ \ of a general degree in clinical medicine or a specialist diploma in pediatrics,\ \ orthopedics, psychiatry, anaesthesia, reproductive health and other specialties.\n\ \nA clinical officer is therefore able to graduate and join the workforce in a\ \ minimum of four calendar years and provides medical services within the full\ \ scope of family and emergency medicine or within a narrower scope depending\ \ on their area of specialisation.\n\nRegistration by the Clinical Officers Council\ \ (COC) entitles one to render medical services in any public or private medical\ \ institution or to practice medicine independently as a private practitioner.\ \ Registration also qualifies one to join and participate in the affairs of the\ \ Kenya Clinical Officers Association (KCOA), including its annual KCOA Scientific\ \ Conference, and the Kenya Union of Clinical Officers (KUCO). As per the government's\ \ Revised Scheme of Service for Clinical Personnel (2014) a clinical officer works\ \ at any of 8 grades depending on ones seniority.\n\nAs gazetted officers all\ \ registered clinical officers are legally authorized to prepare, sign, issue\ \ and keep safe custody of official state documents such as medical examination\ \ reports, sick notes, postmortem examination reports and death certificates and\ \ to appear in courts of law as expert witnesses. For this reason, a clinical\ \ officer is the officer in-charge of a health center or a district hospital and\ \ is part of the medical team in bigger hospitals where one may head a department\ \ or work under a senior clinical officer or a physician.\n\nClinical officers\ \ are direct healthcare providers who manage and administer health institutions,\ \ medical schemes and projects in primary healthcare (PHC) settings and are frontline\ \ stakeholders in Universal Health Coverage in Kenya which is one of the key pillars\ \ of the government's 5-year development plan under President Uhuru Kenyatta.\ \ The four pillars of the 5-year development plan are 1. Manufacturing 2. Affordable\ \ housing 3. Universal Health Coverage and 4. Food security.\n\nLegal status\n\ In Kenya's public health system, a clinical officer is an alternative practitioner\ \ who is trained and authorized by law to perform any technical, administrative\ \ or legal duties that require a medical doctor. However, due to the shorter training\ \ period when compared to medical officers (i.e. 4 years instead of 6 years),\ \ a clinical officer joins the public service at a lower grade and gains seniority\ \ through experience, additional training or further education.\n\nLike the term\ \ medical officer, the term clinical officer is a protected title whose use without\ \ the authority of the Clinical Officers Council is prohibited and a punishable\ \ offense under Kenyan laws. Court rulings uphold that a registration certificate\ \ or a licence issued by the council automatically confers the status of a medical\ \ officer or a qualified medical practitioner to a clinician and the titles are\ \ used interchangeably in medico-legal documents because a qualified clinical\ \ officer has a recognized medical qualification and is eligible for registration\ \ as a medical practitioner under Section 11(1) of the Medical Practitioners and\ \ Dentists Act in addition to being expressly authorized to practice medicine,\ \ surgery or dentistry by Section 7(4) of the Clinical Officers ActCriminal Appeal\ \ 198 of 2008 - Kenya LawCriminal Case 6 of 2004 - Kenya LawCAP. 249\n\nFrom the\ \ Anatomy Act, the legal definition of a medical officer is any public officer\ \ who is entitled to be registered as a medical practitioner if he applied under\ \ any law in the country: Section 14(1) of the Medical Practitioners and Dentists\ \ Act and Section 7(4) of the Clinical Officers Act are the only two laws that\ \ can authorize one to practice medicine and render medical or dental services\ \ in the public sector if they hold a registration certificate or in the private\ \ sector if they hold a current licence as well. The Public Health Act further\ \ defines a medical officer of health as a public officer who is responsible for\ \ health nationally (the Director of Medical Services and the Director clinical\ \ services) or regionally (the County or Sub-County Medical Officer of Health\ \ and the County or Sub-County Clinical Officer).\n\nLike his counterparts in\ \ the public service, a clinical officer in the private sector has the same practice\ \ rights and privileges as a medical officer and both are authorized to work independently\ \ and specialize in any approved branch of general or specialised medicine. The\ \ Competition Act No.12 of 2010 directly prohibits and addresses multi-sectoral\ \ abuse of dominance, consumer welfare, exemptions, cartels and unwarranted concentration\ \ of economic power among practitioners.\n\nA register of active clinical officers\ \ and medical institutions is available online on the Clinical Officers Council\ \ and Ministry of Health websites.\n\nThe Clinical Officers (Training, Registration\ \ and Licensing) Act No. 20 of 2017\n\nThe Clinical Officers (Training, Registration\ \ and Licensing) Act No.20 of 2017 is the law that governs the medical practice\ \ of a clinical Officer. It establishes the Clinical Officers Council whose functions\ \ are to:\nadvise the government on policy matters relating to clinical medicine\ \ practice\nprescribe the minimum educational entry requirements for persons wishing\ \ to be trained as clinical officers\napprove institutions other than those established\ \ or accredited under the Universities Act, 2012 for the training of clinical\ \ officers\nestablish, approve and accredit programs for continuing professional\ \ educational programs\nregister and license clinical officers for the purposes\ \ of this Act\nmaintain a register and records of all clinical officers registered\ \ under this Act\ncause to be published in the Kenya Gazette every calendar year\ \ the names of all registered clinical officers\npromote development and adoption\ \ of codes of practice\nregulate the professional conduct and ensure the maintenance\ \ and improvement of the standards of practice of clinical medicine\ncollaborate\ \ with other medical professional associations, organisations and other relevant\ \ bodies, in the furtherance of the functions of the Council and those bodies\n\ consider and deal with any other matter pertaining to clinical officers including\ \ prescribing badges, insignias or uniforms to be worn by clinical officers and\n\ carry out other functions related to the implementation of this Act.\n\nTraining\n\ \nAlthough training programmes existed as early as 1928, the first university\ \ to train clinical officers was Egerton University in 1999. Programs also exist\ \ at Jomo Kenyatta University of Agriculture and Technology, Kenya Methodist University\ \ (KEMU) Mt Kenya University. and Presbyterian university of East Africa (PUEA).\ \ The diploma in Clinical Medicine and Surgery is completed in nine 15-week trimesters\ \ over three calendar years (or 135 weeks which, notably, exceeds the minimum\ \ 130 weeks of instruction required to complete US MD programs). The BSc. Clinical\ \ Medicine and Surgery is completed over 4 years.\n\nStudents study the biomedical\ \ and clinical sciences such as anatomy, physiology and pathology in the first\ \ year followed by the clinical subjects (medicine, surgery, pediatrics, obstetrics\ \ and gynecology) in the second year. The third and fourth year involves supervised\ \ clinical practice and internship in teaching hospitals where they rotate in\ \ all the departments, receive beside lectures, attend consultants' ward rounds,\ \ clerk patients and present medical histories, perform deliveries and first-assist\ \ in major surgery. They also attend clinical meetings and write prescriptions\ \ which at this stage must be counter-signed by a supervising clinician.\n\nThere\ \ is special emphasis on primary care with modules on community health taught\ \ throughout the course. Before starting their internship after the third year,\ \ clinical officers spend at least one month in a Provincial Rural Health Training\ \ Centre where they immunize children, examine pregnant women and offer family\ \ planning services in mother and child health clinics. They also treat in-patients\ \ and out-patients under the guidance of qualified Clinical officers and organise\ \ outreach services where they venture into remote rural villages, seeing patients\ \ and immunising children. During this time they complete a project in community\ \ diagnosis.\n\nThey also learn Health Service Management which prepares them\ \ for their management and leadership roles in health centers and other institutions.\n\ \nInternship and registration\n\nAll clinical officers must work as full-time\ \ interns for one year without pay or any form of motivation at an approved public\ \ or mission hospital before getting a licence to practice medicine, a situation\ \ that has resulted to major strikes by clinical officers in the past leading\ \ to operation standstills in public hospitals when these strikes occur. On passing\ \ the final qualifying examination, they take the hippocratic oath then apply\ \ for provisional registration by the Clinical Officers Council, the statutory\ \ body that regulates the practice of clinical officers in the country. The internship\ \ involves supervised rotations in the major clinical departments namely casualty,\ \ medicine, paediatrics, surgery, obstetrics and gynecology. They are supervised\ \ by consultants in the respective fields. The consultants ensure that they can\ \ practice clinical medicine safely before signing them off for registration.\ \ An internship booklet signed by the consultants is required for registration.\ \ After registration one is required to apply for a licence from the COC which\ \ allows them to practice medicine, surgery and dentistry legally in the country.\ \ This licence is renewable every two years. Renewal requires evidence of having\ \ attained 60 Continuous Professional Development (CPD) points in the CPD diary\ \ by further training, research and publications, attending conferences and Continuing\ \ Medical Education (CME) sessions or major ward rounds and outreach activities.\n\ \nCareers Progression\n\nAn experienced clinical officer usually holds a senior\ \ clinical, administrative or teaching position within their organisation or establishes\ \ and manages his/her own private practice. One who holds the Diploma in Clinical\ \ Medicine and Surgery can upgrade his/her qualification to the BSc. Clinical\ \ Medicine and Surgery or undertake postgraduate training at the university. One\ \ may also enroll for the Higher Diploma programme at the Kenya Medical Training\ \ College.\n\nThe Higher Diploma in Clinical Medicine and Surgery requires at\ \ least three years of working experience and lasts twelve to eighteen months\ \ leading to a specialised qualification and re-designation as a specialised clinical\ \ officer in one of the medical specialties such as paediatrics, reproductive\ \ health, anaesthesia, ENT, ophthalmology and cataract surgery, orthopaedics,\ \ psychiatry/clinical psychology, skin and chest diseases, epidemiology, pathology\ \ and Community medicine. A specialised clinical officer provides advanced medical\ \ and surgical care including invasive procedures in their specialty such as caeserian\ \ section, cataract surgery, tonsillectomy, psychotherapy and administration of\ \ anaesthesia.\n\nMalawi\nMedical care is generally provided by clinical officers\ \ who are even capable of providing surgical care. Clinical Officers are trained\ \ for 4 years, (3+1 year of clinical internship at designated teaching hospitals).\ \ One meta-analysis documented that the provision of caesarean section by clinical\ \ officers does not result in a significant increase in maternal or perinatal\ \ mortality. In other, words there was no difference whether the operation was\ \ done by clinical officers or medical doctor\n\nSudan\nSouthern Sudan separated\ \ from the Arab North (Sudan) in July 2011 after years of civil war that left\ \ much of the southern part in ruins. The healthcare system is almost non-existent.\ \ AMREF started training clinical officers by setting up the Maridi National Health\ \ Training Institute, name=\"AMREF\"/>\n\nThe graduates supplement the efforts\ \ of COs trained in neighboring countries, e.g. Kenya, Uganda and Tanzania, most\ \ of whom work for international humanitarian agencies.\nSince 2014,Juba institute\ \ of Health Sciences and Ayii University 2021 have now joined in production of\ \ competent cadres in the Health in the Republic of South Sudan.\n\nTanzania\n\ In Tanzania, training is under the Ministry of Health. There are numerous clinical\ \ officer training schools and programs last three years. Internship is not required\ \ for registration.\n\nExperienced clinical officers may enrol for an advanced\ \ diploma in clinical medicine which takes two years to complete. This qualification\ \ is regarded as equivalent to a first degree in medicine by universities and\ \ the Ministry of Health in the country. The graduates are known as Assistant\ \ Medical Officers which no longer exist since 2017 so a clinical officer can\ \ upgrade by studying a bachelor's degree in clinical medicine in any East African\ \ country for three years or study it in Tanzania for four years and graduate\ \ as a doctor equivalent to an MD graduate even in salary and job opportunities\ \ or can study the Medical Doctor(MD) which is a 5-year course plus 1 internship\ \ year making 6 years and can add 1 year to be Medical bachelor and Bachelor in\ \ Surgery(MBBS) if interested.\n\n A further two years training from the Clinical\ \ Officer level leads to a specialist qualification in anaesthesia, medicine,\ \ surgery and radiology etc.\n\nKampala International University has opened a\ \ campus in Dar es Salaam where it is now offering its Bachelor of Clinical Medicine\ \ and Community Health.\n\nUganda\nBy 1918, Uganda was training clinical officers\ \ who were called medical assistants at the time. The training is under the Ministry\ \ of Education and takes place in clinical officer training schools. Postsecondary\ \ programs last three years, focusing on medicine and hospice care, followed by\ \ a two-year internship.\n\nKampala International University offers a Bachelor\ \ of Clinical Medicine and Community Health. High school graduates take four-and-a-half\ \ years to complete this degree while practicing clinical officers take three\ \ years.\n\nZambia\nIn Zambia, clinical officers who complete a three-year diploma\ \ of Science in Clinical Medicine course are called CLINICAL OFFICER -GENERAL\ \ (COG). Those who complete a three-year diploma in clinical psychiatry are called\ \ CLINICAL OFFICER -PSYCHIATRY (COP). Currently the upgrade of this diploma is\ \ a Bachelor of Science and holders are called medical licentiates. Medical licentiates\ \ have advanced skills in medicine and surgery and may be deployed interchangeably\ \ with physicians. Medical licentiates outnumber general physicians (with university\ \ degree) across all regions, with the ratio ranging from 3.8 COs per physician\ \ in Lusaka to 19.3 in the Northwestern provinces. They perform routine surgical\ \ and obstetric operations as well as providing clinical care in hospitals. The\ \ College of Surgeons of East, Central and Southern Africa (COSECSA) is involved\ \ in their training to increase their surgical skills through the Clinical Officers\ \ Surgical Training (COST) programme.\n\nBurkina Faso\nIn Burkina Faso, as elsewhere\ \ in sub-Saharan Africa, the use of non-physician clinicians began as a temporary\ \ measure while more doctors were trained, but has become a permanent strategy\ \ in the face of a crisis in health human resources. Different training alternatives\ \ have been used. Two-year advanced training programs in surgery were developed\ \ for registered nurses. Clinical officers (known as attachés de santé en chirurgie)\ \ were district medical officers trained with an additional six-month curriculum\ \ in emergency surgery.\n\nMany studies show that trained COs provide quality\ \ medical and surgical care with outcomes similar to physicians' providing similar\ \ care in the same setting. However, nurses re-trained to become COs have been\ \ associated with more adverse outcomes as shown in a study using 2004-2005 hospital\ \ data from six regions of Burkina Faso, which associated them with higher maternal\ \ and neonatal mortality when they performed caeserian sections. The observed\ \ higher fatality rate pointed to a need for refresher courses and closer supervision\ \ of the nurses.\n\nEthiopia\nThe first medical school in Ethiopia was initially\ \ a \"health officer\" training institution. The training of health officers started\ \ at Gonder University in 1954. Health officers training programs across Ethiopia\ \ require that students have some of the highest scores in National University\ \ Entrance Examination to be admitted. Health officers hold bachelor's degrees\ \ and undergo a three-year training program plus one-year internship. Those who\ \ complete a 2–3 years master's degree programs provide advanced care (e.g. emergency\ \ surgery).\n\nGhana\nIn Ghana, Medical Assistants (MAs) have traditionally been\ \ experienced nurses who have undergone an 18-month post-basic course to become\ \ MAs. High school graduates can now attend a three-year diploma course to become\ \ MAs. In Ghana, from 2012, the nomenclature Medical Assistant had been changed\ \ to Physician Assistant.. The new name Physician Assistant is not known among\ \ most Ghanaians The term Physician Assistant (PA) refers to three distinct groups\ \ of health professionals trained on the medical model to practise medicine and\ \ dentistry. They are the PA-Medical, PA-Dental and also known as Community Oral\ \ Health Officers and PA-Anaesthesia (also known as Nurse Anesthetists). These\ \ groups of mid-level health providers were trained exclusively in the past by\ \ Health Training Institutions (HTIs) under the Ministry of Health with the aim\ \ of extending care to the populace where physician numbers were scanty or not\ \ present. Currently, there are eight universities in Ghana offering a 4-year\ \ Bachelor of Science degree in Physician Assistantship. The objective of the\ \ Bachelor of Physician Assistantship programme is to train graduates who will\ \ possess the ability to evaluate the health status of an individual, diagnose\ \ and treat acute illness as well as life saving interventions, manage chronic\ \ diseases, deliver preventive care and counsel individuals on psychosocial problems\ \ in independently or in collaboration with a physician.\n\nIn 2016, the PA-Anaesthesia\ \ group broke away and became certified registered anaesthetist (CRA) according\ \ to the Health Professions Regulatory Acts 857 which addressed them as certified\ \ registered anaesthetist. PAs are qualified by graduation from the PA educational\ \ programme and certification by the Ghana Medical and Dental Council. Newly qualified\ \ PAs who are successful in their licentiate examinations by the MDC are issued\ \ with provisional registrations to enable them undertake one-year internship\ \ in an accredited institution, a prerequisite for permanent registration which\ \ would also serve as national service but without pay for the twelve months.\n\ \nPA students in all PA training schools belong to the Physician Assistant Association\ \ of Ghana (PASAG). In order to foster unity, camaraderie and bond among members\ \ of the association, and to promote excellence in the discharge of their professional\ \ mandate, quiz competitions are held every year. The maiden edition was won by\ \ the Presbyterian University College, Ghana. After permanent certification, among\ \ other things, PAs diagnose and treat illnesses, conduct physical examinations,\ \ counsel individuals on preventative health issues, and order and interpret laboratory\ \ tests. In addition, PAs are first or second assistants in major surgery, and\ \ provide pre- and post-operative care, and for that matter are trained and well\ \ versed in surgical skills. Thus, PAs play roles in preventive Medicine, as well\ \ as in educational, research, and administrative activities. The physician assistant\ \ is part of the medical team and is placed above the nurse but below the medical\ \ officer They perform tasks originally performed by doctors. Some call PAs as\ \ \"village doctors\" or \"chiefs.\" To the patient, a PA is a doctor, since the\ \ PA practises medicine just as a doctor\n\nThe Ghana Physician Assistant Association-Medical\ \ at their last annual delegate congress, voted for a name change from the current\ \ name Physician Assistant to Clinical Officer. The members of the association\ \ believes that the Assistant attached to their name is limitation to what the\ \ PA actually does. The PA is not an assistant but an independent medical professional\ \ trained and licensed to practise medicine and dentistry. The association has\ \ therefore presented a new job description and the new name clinical officer\ \ to the Ministry of Health. The meeting which was chaired by the chief director\ \ of the ministry of health Dr. Afisah Zakariah who promised to address the grievances\ \ of the PAs soon to be known as Clinical Officers\n\nLiberia\nIn Liberia, the\ \ Tubman National Institute of Medical Arts (TNIMA) was established in 1945. In\ \ 1965, the physician assistant (PA) programme was established as a joint venture\ \ between the Liberian government, WHO and UNICEF. Initially it was a one-year\ \ course, but currently it is a three-year diploma course accredited by the Liberia\ \ National Physician Assistant Association (LINPAA) and the Liberia Medical and\ \ Dental Association Board. In order to legally practice medicine as a PA one\ \ must sit and pass a state exam administered by the medical board.\n\nMozambique\n\ \nIn Mozambique, tecnicos de cirurgia, or surgical technologists, are experienced\ \ Clinical Officers who undergo further residential training in surgery under\ \ the supervision of senior surgeons lasting two years at Maputo Central Hospital,\ \ and a one-year internship at a provincial hospital. They are trained to carry\ \ out emergency surgery, obstetrics and traumatology and are deployed to the district\ \ hospitals where they are usually the sole surgical care providers.\n\nSouth\ \ Africa\n\nSouth Africa trains clinical associates for three years and awards\ \ them the Bachelor of Clinical Medical Practice degree. The first program was\ \ launched by the late Health Minister Tshabalala Msimang on 18 August 2008 at\ \ the Walter Sisulu University in Mthatha. The first class graduated in December\ \ 2010. Programs also exist at the University of Pretoria and the University of\ \ the Witwatersrand.\n\nInternational\nThe specialised nature of medical training\ \ in the developed world has created a shortage of general practitioners and runaway\ \ expenditure on healthcare by governments. primary care is increasingly being\ \ provided by non-physician providers such as physician assistants.\n\nUnited\ \ States\n\nPhysician assistants in the United States train for at least two years\ \ at the postsecondary level and can hold an associate, bachelors or master's\ \ degree. Most PAs have earned a master's degree. Some institutions offer a Doctor\ \ of Science degree in the same. The profession is represented by the American\ \ Academy of Physician Assistants.\n\nUnited Kingdom\nThe United Kingdom has in\ \ recent years employed physician assistants from the United States on a trial\ \ basis as it plans to introduce this cadre into their health care system. Several\ \ UK universities are already offering a post-graduate diploma in Physician Assistant\ \ studies. The PAs of the UK are represented by the Association of UK PAs.\n\n\ Australia\n\nThe University of Queensland offers a one-and-a-half-year Master\ \ of Physician Assistant Studies to those with a bachelor's degree. Those with\ \ a post-secondary healthcare qualification such as registered nurses and paramedics\ \ can access the programme via a Graduate Certificate in Physician Assistant Studies;\ \ as long as they have at least five years full-time working experience. It has\ \ been announced that PAs will be allowed to work in Queensland as fully licensed\ \ practitioners in 2014.\n\nChina\nChina has about 880,000 Rural Doctors and 110,000\ \ assistant doctors who provide primary care to rural populations where they are\ \ also known as barefoot doctors. They typically have about one year of training;\ \ those who sit and pass government examinations qualify to be rural doctors.\ \ Those who fail become community health workers. However, there is a government\ \ move to have all rural doctors complete three years training.\n\nFiji\nAfrica\ \ and the rest of the world are perhaps following a well trode path. In 1879,\ \ a group of Indians arrived in Fiji by ship having survived cholera and smallpox\ \ en route. During a period of crew quarantine, a small group was trained in vaccination.\ \ The experience was considered so successful that a few years later, in 1885,\ \ a group of young Fijian men started a three-year training program at the Suva\ \ Medical School, now known as the Fiji School of Medicine. The title given to\ \ the professional practice has had many names over the years, including Native\ \ Medical Practitioner, Assistant Medical Practitioner, Assistant Medical Officer,\ \ and Primary Care Practitioner (PCP). By 1987, the PCPs were training for three\ \ years before going back to their communities to serve one-year internship, followed\ \ by another two years of study after which they were awarded a MBBS degree.\n\ \nIndia\nUnder British rule, India trained licentiate doctors for three years.\ \ They were then registered with the General Medical Council of Britain. Most\ \ of them worked among the rural population providing medical care.\n\nAfter independence,\ \ and on the recommendation of the bhore committee in 1946, the training of licentiate\ \ doctors was stopped and their qualifications converted to MBBS degrees. They\ \ were then grandfathered into the Medical Council of India.\n\nThe plan was to\ \ train enough doctors who would serve the whole country. However, the plan has\ \ not borne fruit and doctors generally leave their rural posts after their internship\ \ for more lucrative and glamorous careers in the big cities.\n\nAs of 2009, the\ \ Indian government plans to introduce a three-and-a-half-year Bachelor of Rural\ \ Medicine and Surgery (BRMS) degree to train doctors who will work in remote\ \ Indian villages. On graduation they will undergo a one-year internship period\ \ at a regional hospital before being licensed. Those with five years' experience\ \ will qualify for post-graduate studies on equal standing with their MBBS counterparts.\n\ \nIn India, the Madras Medical Mission in Chennai, collaborating with Birla Institute\ \ of Technology and Frontier Lifeline has since 1992 offered a Bachelor of Science\ \ degree in Physician Assistant studies. The program duration is four years, comprising\ \ three years classroom and laboratory coursework then one year compulsory internship.\ \ Several other universities offer similar courses in partnership with US universities.\ \ PAs in India can pursue masters and doctor of science degrees.\n\nBangladesh\n\ \nMid-label Medical Care Health Human Resources of Bangladesh are Medical Assistant\ \ product of Medical Assistant Training School(MATS).3 year Medical Assistant\ \ Course Started 1976.\nNow 4 year Medical Assistant course\n3 years academy+1\ \ year internship\nSee Also Sub assistant community Medical Officer\n\nHistory\n\ Bangladesh was part of British India until independence, and then spent a quarter\ \ of a century as East Pakistan before Bangladesh seceded and became an independent\ \ nation.\n\nBritish India\nModern Bangladesh was mostly part of Bengal in British\ \ India.\n\nIn 1914 the State medical Faculty of Bengal was established to conduct\ \ trained Licentiate of Medical Faculty doctors (LMF Doctor) for four years Mid-Label\ \ Diploma Physician. They were then registered with the General Medical Council\ \ of Britain. Most of them worked among the rural population providing medical\ \ care. At independence East Pakistan had five medical schools:\n Mitford Medical\ \ School, Dhaka (1875-1957)\n Lytton Medical School, Mymensingh (1924-1962) \n\ \ Chittagong Medical School (1927-1957)\n Sylhet Medical School (1948-1962) \n\ \ Rajshahi Medical School (1954-1958)\n\nEast Pakistan\nAfter independence from\ \ Britain, the training of licentiate doctors was continued in East Pakistan and\ \ the training goes for three years and they become professional doctors, with\ \ the doctor title, whose degree is equivalent to clinical medicine. and on the\ \ recommendation of the bhore committee in 1946, started MBBS Degree. They were\ \ then grandfathered into the Medical Council of India & Pakistan. In 1962 Health\ \ Minister Monem Khan introduced Condensed MBBS Course for LMF Doctor at Sir Salimullah\ \ Medical College, Dhaka from 1963 to 1972.\n\nBangladesh\nAfter independence\ \ from Pakistan, the training of licentiate doctors (LMF Doctor) course was stopped.\ \ All Medical School Converted Medical College & Course Started MBBS.\nThe First\ \ Five year plan [1973] of the Father of Nations sheik Mujibur Rahman Government\ \ planned to create new health cadre namely \"Medical Assistant\" & institution\ \ name \"Medical Assistant Training School\" (MATS). In 1976 started Medical Assistant\ \ training course under State Medical Faculty of Bangladesh & Ministry of health\ \ & family welfare. In 1980 1st Batch Medical Assistant student enter government\ \ service. In 1983 Medical Assistant get Bangladesh Medical & Dental Council Registration\ \ 1st time. In 1996 Medical Assistant Post of DGHS & DGFP Converted Sub-Assistant\ \ Community Medical Officer (SACMO) prime minister Sheike Hasina government, DGFP\ \ Implement it but DGHS no implement. In 2011 by the court order implement SACMO\ \ in DGHS Bangladesh.\n\nFrom 2009 session Medical Assistant Course developed\ \ 4-year course (3 year Institution + 1 year Internship). Nowadays Medical Assistant\ \ Course conducted in 8 public institution & 146 private institution.\n\nAbout\ \ 65% rural population receive primary medical treatment from Sub-Assistant medical\ \ officer (medical assistant). Medical Assistant no scope of Higher education\ \ & promotion. But Bongobondu sheike mujibur rahman government The First Five\ \ year plan[1973] page 520 & 521 brief details on Medical Assistant (After passing\ \ medical assistant course & 3 year service rural area in national service entering\ \ qualification of medical college for MBBS course).\n\nInstitution [MATS]\n\n\ There are now 8 Government Medical Assistant Training Schools\nTangail Medical\ \ Assistant Training School (Tangail MATS)\nSirajgonj Medical Assistant Training\ \ School (Sirajgonj MATS)\nKushtia Medical Assistant Training School (Kushtia\ \ MATS)\nBagerhat Medical Assistant Training School (Bagerhat MATS)\nNoakhali\ \ Medical Assistant Training School (Noakhali MATS)\nFaridpur Medical Assistant\ \ Training School (Faridpur MATS)\nJhenaidah Medical Assistant Training School\ \ (Jhenaidah MATS)\nComilla Medical Assistant Training School (Comilla MATS)\n\ \nThere are 146 Private Medical Assistant Training Schools.\n\nMalaysia\nMalaysia\ \ started training Medical Assistant in the early 1900s after independence from\ \ Britain. This profession has undergone several transformations over the decades\ \ in line with the current healthcare development in the country. The current\ \ name of this profession is Assistant Medical Officers (AMO), they are trained\ \ for three years in an undergraduate academic program (Diploma in Medical and\ \ Health Sciences or formerly known as Diploma in Medical Assistant) recognized\ \ by the Malaysian Qualifications Agency. In order to practice, all Assistant\ \ Medical Officers are compulsory to register under the regulating body of Malaysia\ \ Medical Assistant Board (Medical Assistant Act (registration),1977) and serve\ \ a compulsory resident posting for six months in Emergency and Trauma Department\ \ (Program Penempatan Wajib) under a clinical supervision by an Emergency Physician.\ \ Upon completing the compulsory posting, they will be deployed in public hospitals,parastatal\ \ institutions (e.g. military, prisons), rural health centers, health clinics,\ \ community clinics, aged care centers, or private specialist hospitals. To date,\ \ there are five training institutions introduced by the Ministry of Health, Malaysia\ \ in the public sectors to train new Assistant Medical Officers; \n\n Training\ \ Institute of Ministry of Health, Malaysia, Sultan Azalan Syah, Perak \n Training\ \ Institute of Ministry of Health, Malaysia, Johor Bahru, Johor \n Training Institute\ \ of Ministry of Health, Malaysia, Kuching Sarawak \n Training Institute of Ministry\ \ of Health, Malaysia, Kota Kinabalu, Sabah \n Training Institute of Ministry\ \ of Health, Malaysia, Seremban \n\nA registered Assistant Medical Officer can\ \ pursue their sub-specialty training (Post Basic certificates and Advanced Diploma)\ \ in various fields such as Emergency Medical and Trauma care, Primary Healthcare,\ \ Orthopedic, Cardio thoracic, Clinical Neuro-physiology, Sport Medicine, Anesthesiology,\ \ Diabetic care, Infection Control, hemodialysis, and many more. Assistant Medical\ \ Officer could also join the MBBS /MD after completing the undergraduate study\ \ by applying for those programs in either public or private institutions. Those\ \ who want to serve and continue as Assistant Medical Officer could further their\ \ study in a special programs for Assistant Medical Officers such as Bachelor\ \ of Science in Emergency Medicine with honors and Bachelor of Medical and Health\ \ Sciences with honors. Unlike Physician Assistant / Associate (PA) and Clinical\ \ Officer (CO), Assistant Medical Officer in Malaysia is also involved in Pre-Hospital\ \ Care as part of their job scope. Postgraduate programs available for Assistant\ \ Medical Officer includes Master in Medical Science (Public Health), Master in\ \ Risk Disaster Management, Master in Medical Science (Emergency Medicine), Master\ \ in Hospital Management and Health Economics as well as PhD in clinical or medical\ \ sciences fields.\n\nSee also\nAllied health professions\nHealthcare in Kenya\n\ Paramedics\nSurgical technologists\nClinical associates in South Africa\nFeldsher\ \ in countries of the former Soviet Union\n\nReferences\n\nExternal links\nPresbyterian\ \ university of East Africa \nKenya Medical Training College - Clinical Medicine\ \ Department\nKilimanjaro Christian Medical College- Tanzania\nEgerton University\ \ (Kenya) - Diploma in Clinical Medicine and Surgery\nKenya Methodist University\ \ - Department of Clinical Medicine\nMt. Kenya University\nMalawi College of Health\ \ Sciences\nMaridi National Health Training Institute- Maridi\nIndian Association\ \ of Physician Assistants\nThe Clinical Officers Council\n\nHealth care occupations" model-index: - name: SentenceTransformer based on microsoft/mpnet-base results: - task: type: triplet name: Triplet dataset: name: dev evaluator type: dev_evaluator metrics: - type: cosine_accuracy value: 0.5771428571428572 name: Cosine Accuracy - type: dot_accuracy value: 0.4228571428571429 name: Dot Accuracy - type: manhattan_accuracy value: 0.7171428571428572 name: Manhattan Accuracy - type: euclidean_accuracy value: 0.5771428571428572 name: Euclidean Accuracy - type: max_accuracy value: 0.7171428571428572 name: Max Accuracy --- # SentenceTransformer based on microsoft/mpnet-base This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) <!-- at revision 6996ce1e91bd2a9c7d7f61daec37463394f73f09 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("DIS-Project/Sentence-Transformer_1") # Run inference sentences = [ "What were the names of the regular battalions in the King's Own (Royal Lancaster Regiment)?", 'The 1st Royal Lancashire Militia (The Duke of Lancaster\'s Own) was an auxiliary regiment raised in the county of Lancashire in North West England during the 17th Century. Primarily intended for home defence, it saw active service in Ireland under King William III, as well as against the Jacobite Risings of 1715 and 1745. It spent long periods on defence duties during the wars of the 18th Century and early 19th Century, and was stationed on the Ionian Islands during the Crimean War. It later became part of the King\'s Own (Royal Lancaster Regiment) and saw active service in the Second Boer War. After its conversion to the Special Reserve under the Haldane Reforms, it supplied reinforcements to the fighting battalions during World War I. After a shadowy postwar existence the unit was finally disbanded in 1953.\n\nBackground\n\nUniversal obligation to military service in the Shire levy was long established in England, and its legal basis was updated by two Acts of 1557. This legislation placed selected men, the \'Trained Bands\', under the command of a Lord Lieutenant appointed by the monarch; this is seen as the starting date for the organised county militia in England. The trained bands were an important element in the country\'s defence at the time of the Armada in the 1580s, and control of the bands was an area of dispute between King Charles I and Parliament that led to the English Civil War. Lord Wharton had been appointed Lord Lieutenant of Lancashire by Parliament in 1641, and on the outbreak of hostilities in July 1642 he attempted to seize the trained bands\' magazine at Manchester. However, he was forestalled by Lord Strange and William Farington (appointed Commissioner of Array by the King), who had already gained control of the magazines at Liverpool and Preston for the Royalists. The resulting skirmish at Manchester on 15 July, when Strange and his men were driven out by Wharton\'s Parliamentarians, was among the first battles of the war.\n\nOnce Parliament had established full control in 1648 it passed new Militia Acts that replaced lords lieutenant with county commissioners, who were appointed by Parliament or the Council of State, after which the term \'Trained Band\' began to disappear in most counties. Under the Commonwealth and Protectorate, the militia received pay when called out and operated alongside the New Model Army to control the country.\n\nOld County Regiment\n\nAfter the Restoration of the Monarchy, the English Militia was re-established by the Militia Act of 1661 under the control of the king\'s lords-lieutenant, the men to be selected by ballot. It was popularly seen as the \'Constitutional Force\' to counterbalance a \'Standing Army\', a concept that was tainted by association with the New Model Army that had supported Cromwell\'s military dictatorship, and almost the whole burden of home defence and internal security was entrusted to the militia. \n\nThe Lancashire Militia were called out in 1663 when there were rumours of plots against the new regime, and no sooner had they been sent home in October than they were called out again on receipt of new information. Some counties were slacking in training and equipping their men: in 1674 most of the weapons of the Lancashire Militia were found to be defective, and many had to be replaced again in 1689.\n\nNine Years\' War\nFollowing the Glorious Revolution, in which King William III supplanted James II, the militia were called out in 1689. The Lord Lieutenant of Lancashire, William Stanley, 9th Earl of Derby, organised three regiments of foot and three Troops of horse from the County palatine of Lancaster:\n Colonel the Earl of Derby – 7 companies\n Colonel Roger Nowell – 7 companies\n Colonel Alexander Rigby – 8 companies\n The Earl of Derby\'s Troop\n Captain Thomas Greenhalgh\'s Troop\n Captain Sir Roger Bradshaigh\'s Troop.\n\nThese regiments volunteered for service in William\'s campaign in Ireland. After training on Fulwood Moor, near Preston, the Lancashire brigade, commanded by the Earl of Derby\'s brother, Lieutenant-Colonel the Hon James Stanley (1st Foot Guards), sailed with the army from Wallasey and landed at Carrickfergus on 14 June 1690. It played a full part in the campaign, serving in the Siege of Carrickfergus, at the Battle of the Boyne, and the Siege of Athlone. After a short tour of garrison duty in Dublin, the Lancashire brigade embarked at Howth in September to return to England to be disembodied on 15 October. Lieutenant-Colonel Stanley then recruited a number of veterans from the brigade for the regiment he was joining in Flanders. He succeeded to the command after his colonel was killed at the Battle of Steenkerque, after which the unit became \'Stanley\'s Regiment\' (later the Bedfordshire Regiment). Colonel Stanley succeeded his brother as 10th Earl of Derby and Lord Lieutenant of Lancashire in 1702.\n\nAt the end of the Nine Years War in 1697 the militia in Lancashire consisted of 1601 men organized into 22 companies and three regiments, with 150 horsemen in three Troops. The three colonels were Major-General the Earl of Macclesfield (lord lieutenant), Roger Kirkby, MP, and Sir Ralph Assheton, 2nd Baronet, of Middleton, MP.\n\nJacobite Rising of 1715\n\nAfter the outbreak of the Jacobite Rising of 1715 the Lancashire Militia was ordered in August to assemble at Lancaster Castle under the command of Col Philip Hoghton. He found that fewer than half of the balloted men turned out, only 560 in all, enough to organise a single battalion. When a force of reputedly 3–4000 Scottish Highlanders and English Jacobites advanced from Carlisle, Hoghton was ordered to fall back from Lancaster to Preston to await further orders. He marched out early on 7 November and the Jacobites entered Lancaster the same day, taking over the ordnance stores in the castle. From Preston the Lancashire Militia and a newly arrived regiment of dragoons were ordered to Wigan, and the Jacobites occupied Preston on 9 November, where they built street barricades and placed the town in a state of defence. However, they were disappointed by the small number of Lancashire Jacobites who joined them, about 1200 badly-armed men. Major-General Charles Wills reached Wigan from Manchester on 11 November with a considerable force of government troops. Further troops under Lieutenant-General George Carpenter were also approaching from Clitheroe.\n\nWills advanced on Preston next day, and finding the bridge over the River Ribble unguarded, began his attack on the town. Brigadier-General Philip Honywood led the Lancashire Militia together with three dismounted troops of dragoons against the barricade at the west end of Fishergate. They first stormed the houses west of the churchyard and set fire to them as a diversion to assist the column attacking the churchyard barricade, and then moved against Fishergate, preceded by skirmishers. Colonel Hoghton detached the left wing of the Lancashire Militia and a troop of dragoons to attack the Friargate barricade while he led the right wing and remaining dragoons in columns of attack against Fishergate. Hoghton and his men reached the top of the barricade but were driven back by heavy musketry fire from the neighbouring houses, having suffered serious casualties; Honywood ordered them to withdraw. The attack at Friargate fared no better. But the Government troops renewed the attack after dark, Col Hoghton leading his men silently up to the Fishergate barricade then rushed it with the bayonet. The rebels took refuge in the houses, which were set on fire, and the street fighting continued by the light of the fires. Carpenter\'s troops arrived in the morning, to relieve the exhausted militia and completely invest the town, poised to complete the task of capturing it. A brigade of Dutch troops was also about to arrive, having marched from London. The rebel commanders, realising that they could hold out no longer, surrendered.\n\nThe Lancashire Militia had four officers killed, seven wounded, and 105 non-commissioned officers (NCOs) and privates killed and wounded, around a third of the total government casualties at the Battle of Preston. On 16 November the regiment marched back to Lancaster with 250 prisoners to be lodged in the castle. It remained there for the rest of the year, escorting parties of prisoners for trial, until it was disembodied about 15 January 1716.\n\nJacobite Rising of 1745\nThe Lancashire Militia was next called out for service against the Jacobite Rising of 1745. Orders to embody the militia were issued to the lord lieutenant, Edward Stanley, 11th Earl of Derby, on 26 September after the government\'s forces had been defeated at the Battle of Prestonpans. Derby complained that although there were sufficient weapons (though of poor quality), the three regiments of foot and three troops of horse had not been called out for training in the 30 years since the Battle of Preston. He and his deputy lieutenants scrambled to raise money and find officers and army pensioners who could train the raw troops gathering at Bury. By 5 November Derby had assembled a regiment of eight companies. The Lancaster and Lonsdale Company, under the command of Captain William Bradshaw, was left at Lancaster to guard the ordnance stores and prison there. Major William Ffarington of Shaw Hall, Leyland, was sent with a detachment of two companies to guard Chorley. In the meantime, the Corporation of Liverpool had raised a 648-strong volunteer regiment, the Liverpool Blues, which was fully armed and could be put into the field. \n\nOn 17 November the Jacobite army reached Carlisle, which soon surrendered, and began moving south. Two days later Derby ordered the companies at Bury and Chorley to concentrate at Liverpool, and ordered Bradshaw to requisition as many waggons and carts as he could to move the ordnance stores out of Lancaster to \'a secure and secret place\' at Ulverston. These moves were carried out next day, regimental headquarters (HQ) was established at the Talbot Hotel in Liverpool, and the Earl handed over command to Maj Ffarington. The commander of the government forces, Field Marshal George Wade, advised the militia to operate in small bodies to harry the advancing rebel army, firing from hedges and preventing it from sending out plundering parties. The Jacobites reached Lancaster on 24 November and Preston on 27 November, while detachments marched through Wigan, Chorley and Bolton. They hoped to gather recruits in Lancashire but were disappointed until they reached Manchester on 28 November, where there were sufficient volunteers to form the Manchester Regiment.\n\nThe Liverpool Blues, being better armed and equipped than the Lancashire Militia, were sent out on 29 November under Colonel Campbell to Warrington to prevent the rebels from using the bridge over the Mersey. As darkness approached they opened fire on what was thought to be a group of Highlanders but turned out to be a flock of geese. Next day they repulsed the Jacobite detachment from Preston, and broke down Warrington Bridge. On 1 December Col Campbell marched to Cheadle and Stockport, blowing up the bridges there and forcing the Jacobite artillery and baggage to cross by temporary rafts. After feinting towards Wales, the Jacobites reached Derby on 4 December. Government forces were now closing in on the Jacobite army and it was clear that there was not going to be an uprising in their favour in England. The Jacobite commanders decided to retreat to Scotland. Hindered by the Liverpool Blues\' demolitions, they did not reach Manchester until 8 December, with stragglers being picked off by the Blues.\n\nThe advance guards of the government forces under Maj-Gens James Oglethorpe and Sir John Ligonier joined the Liverpool Blues at Lancaster on 14 December. Next day Capt Bradshaw and his company (95 all ranks) arrived from Ulverston with orders to put himself under Campbell\'s command. By now the Duke of Cumberland had arrived to take overall command, and he sent Oglethorpe with his dragoons and the Liverpool Blues to harry the Jacobite rearguard. They marched via Kendal (17 December) and continued over Shap Fell in moonlight and a snowstorm to surprise the Jacobites next morning. The dragoons pursued the Jacobite rearguard through Shap village as far as Clifton Moor, where the Jacobites were drawn up to cover the retreat of their guns across the bridges into Penrith. The Liverpool Blues deployed in front of Clifton, with Bradshaw\'s company and some dragoons covering the road at Clifton Dykes. They piled arms and cooked a meal, then at 20.00 that evening Oglethorpe ordered them to advance in support of his dragoons. Bradshaw\'s company formed on the right of the Liverpool Blues (the position taken by the grenadier company in a line regiment). The delaying action (the Clifton Moor Skirmish) was well handled by the Jacobite commander, Lord George Murray, who led a counter-charge of Highlanders, and Oglethorpe was blamed for the heavy losses suffered by his dragoons in their dismounted attack. The Liverpool Blues followed the Highlanders with volley fire, but the Jacobites succeeded in reaching Penrith with the loss of a few guns and waggons. Bradshaw commended Corporal Shaw of his company for rescuing three people from a burning house in Clifton. The company had lost one killed and three wounded in the two skirmishes at Shap and Clifton\n\nCumberland\'s army followed the Jacobites through Penrith to Carlisle. The Lancashire Militia company was left at Penrith to guard the prisoners, while the Liverpool Blues were present at the 10-day siege of Carlisle Castle. Cumberland marched into Scotland on 4 January 1746 (finally defeating the Jacobites at the Battle of Culloden on 16 April) while the Liverpool Blues escorted the prisoners from Carlisle (including those of the Manchester Regiment) to Lancashire for trial. Bradshaw\'s company similarly escorted the prisoners from Penrith to Lancaster. The Lancashire Militia was then disembodied on 12 January 1746; it was not called out again for training or active service until the Seven Years\' War.\n\n1st Royal Lancashire Militia\n\nSeven Years\' War\nUnder threat of French invasion during the Seven Years\' War a series of Militia Acts from 1757 reorganised the county militia regiments, the men being conscripted by means of parish ballots (paid substitutes were permitted) to serve for three years. Lancashire\'s quota was set at 800 men in one regiment, but despite the enthusiasm of the acting lord lieutenant, Lord Strange, the county was slow to raise its quota. A regiment would have its arms issued from the Tower of London when it reached 60 per cent of its established strength, but in the case of Lancashire this was not until 18 July 1760, and the regiment was finally embodied for service on 23 December that year.\n\nThe regiment assembled on 28 December with six companies at Preston and four at Manchester. After training, it marched on 9 July 1761 to join other militia regiments at Warley Camp in Essex, arriving on 13 August. On 15 October King George III presented the Lancashire Militia with its new Regimental Colours, and on 23 October they were granted the title Royal Lancashire Militia (RLM) with the colonel\'s company designated \'the King\'s Company\'. The regiment then marched to Nottingham for winter quarters. On 11 June 1762 the regiment was marched south again to join the militia camp at Winchester in Hampshire on 30 June. Preliminaries of peace having been signed, the regiment was ordered on 18 October to march back to Lancashire, where it was disembodied at Manchester on 15 December 1762.\n\nIn peacetime, the reformed militia regiments were supposed to be assembled for 28 days\' annual training. In 1763 part of the RLM camped at Fulwood Moor near Preston from 18 May to 14 June, but it was not called out again until 1778.\n\nWar of American Independence\nThe militia was called out after the outbreak of the War of American Independence when the country was threatened with invasion by the Americans\' allies, France and Spain. The Royal Warrant for the embodiment of the Royal Lancashire Militia was issued on 26 March and the regiment was embodied on 1 April 1778 under the command of the 12th Earl of Derby. After six weeks\' training the regiment was marched to camp at Winchester. In October it was billeted among small Hampshire towns: Lymington (HQ + 3 companies), Romsey (3 companies), Ringwood, Christchurch, Downton and Fordingbridge (1 company each). Then in November it marched back to Liverpool for the winter, setting up its HQ at the Talbot Hotel once more.\n\nWhile at Liverpool a large number of unfit and time-expired men were discharged and a new ballot held to refill the ranks, necessitating a great deal of training. In June 1779 the regiment moved to Newcastle upon Tyne, with two companies detached to Sunderland until February 1780 when they relieved the Regular garrison of Tynemouth Castle. In June 1780 the regiment marched to Chester Castle; three companies were detached at Macclesfield and two at Nantwich. It spent the winter from November 1780 at Manchester, with some companies detached to Warrington. In June 1781 two companies each from Manchester and Warrington moved to Chester, returning to Warrington the following November. By now the regiment was organised like the regulars with a Grenadier Company (the King\'s Company), a Light Company, and eight line or \'hat\' companies. From April 1782 the regiment was broken up in detachments across Cumberland: Carlisle Castle (4 companies), Cockermouth (2 companies), Workington (2 companies), Whitehaven and Maryport (1 company each). Although Cumberland was remote from a possible French invasion, Whitehaven had been attacked by John Paul Jones in 1778. The regiment remained at these stations until 22 January 1783, when two companies were ordered from Carlisle Castle to Lancaster, and then on 17 February marched with HQ from Lancaster to Manchester. By now a peace treaty had been drawn up (it was signed in September) and orders were issued to the Earl of Derby on 28 February to disembody the RLM. This was carried out at Manchester in March 1783. The Earl of Derby then resigned the colonelcy to concentrate on his parliamentary duties; he nominated a distant kinsman, Thomas Stanley of Cross Hill, MP, to succeed him.\n\nFrom 1784 to 1792 the militia were generally assembled for their 28 days\' annual training, but to save money only two-thirds of the men were actually called out each year. However, it appears that the Royal Lancashire Militia did no training until the Stanleys called them out in 1790.\n\nFrench Revolutionary War\nThe militia were re-embodied in January 1793 shortly before Revolutionary France declared war on Britain. The Royal Lancashire Militia assembled at Preston on 22 January, but on 25 January were ordered to disperse across Lancashire – Liverpool (4 companies), Wigan (3 companies), Blackburn (2 companies) and Chorley (1 company) – which hindered training. \n\nDuring the French Wars the militia were employed anywhere in the country for coast defence, manning garrisons, guarding prisoners of war, and for internal security, while the regulars regarded them as a source of trained men if they could be persuaded to transfer. Their traditional local defence duties were taken over by the part-time Volunteers and later by a compulsory Local Militia.\n\nIn February 1793 the civil authorities in the West Riding of Yorkshire feared an outbreak of disorder and requested a military force. The RLM was sent, with HQ and four companies going to Leeds, three companies to Halifax, then to Sheffield and Barnsley, and three to Wakefield, Horset and Horbury. When regular troops arrived to keep the peace in May the RLM was moved to Doncaster, with detached companies at Bawtry, Blyth, Retford and Moorgate. During the rest of the year companies and pairs of companies went out to other towns before returning to Doncaster. In April 1794 the regiment was moved to the East Midlands, with six companies at Stamford and four at Peterborough. In June 1794 the RLM joined the great anti-invasion camp on the South Downs above Brighton, which included regular and fencible regiments as well as militia. In November it moved to winter quarters across Kent, with HQ at Canterbury Barracks. In 1795 it went to Dover Castle, spending May in camp at Hythe, returning to Canterbury in October with the companies in billets across north Kent. The regiment was then moved to billets around Greenwich and Deptford in November as part of a concentration round London to prevent disorder. In the spring of 1796 detachments were marched through Surrey before returning to Greenwich, then in June the regiment crossed to Warley Camp before going into winter quarters at Chelmsford.\n\nLancashire\'s militia quota set in 1760 was small in proportion to its population, which soared during the Industrial Revolution. By 1796 it represented only one man in every 43 of those eligible. But in that year an additional ballot was carried out to raise men for the \'Supplementary Militia\' to reinforce the standing militia regiments and to form additional temporary regiments. Lancashire\'s quota was increased to five regiments, and on 1 March 1797 the RLM was ordered to send a party to Lancaster to begin training them. Although recruitment of such large numbers became difficult, the 1st Royal Lancashire Supplementary Militia was raised on 1 March 1797 at Liverpool under the personal command of the 13th Earl of Derby as lord lieutenant. On 17 August 1798 it was placed on a permanent footing as the 2nd Royal Lancashire Militia (2nd RLM), after which the \'Old County Regiment\' became the 1st Royal Lancashire Militia (1st RLM).\n\nIn March 1797 the 1st RLM was scattered across villages north of London, but on 11 April it was ordered to Plymouth, where it was quartered at the Maker Redoubts overlooking Plymouth Sound for the rest of the year. By the end of the year, with so many senior officers in parliament and the parties away training the supplementary militia, the strength of the regiment at Plymouth was down to about 400 men, under the command of the senior captain. Two of the companies may have been organised and equipped as rifle companies at this time.\n\nIrish Rebellion\nIn March 1798 legislation was passed to allow the militia to volunteer for service in Ireland, where a Rebellion had broken out. The 1st Royal Lancashire Militia immediately volunteered, and the regiment was recruited to full strength (1200 men) from the supplementary militia to replace the time-expired men. The contractors having failed to provide enough uniforms in time, the 136 time-expired men were stripped of their uniforms, hats and boots to clothe the recruits, leading to a serious complaint to the War Office about their treatment. The recruits arrived at Plymouth from Lancashire and the regiment embarked at the end of June. But the news from Ireland having improved the voyage was cancelled and the regiment returned to camp on Maker Heights. It was not until the end of August that the 1st RLM embarked again as part of a militia brigade in response to the French intervention in Ireland. The regiment landed at Ballyhack in Waterford Harbour on 11 September and then marched to New Ross, preparatory to moving north. However, the French expedition had already been defeated at the Battle of Ballinamuck, and the follow-up expedition was defeated at sea without landing. When the regiment reached Clonmel on 21 October the rebellion was effectively over. The regiment went into winter quarters but guard and picket duties heavy while the area was still in disorder.\n\nWith the end of the Irish Rebellion the government encouraged militiamen to volunteer for the regular army: the 1st RLM was one of a number of regiments that offered to serve abroad as a complete unit. However the legislation did not allow for this and the offer was declined, though Col Stanley encouraged his men to volunteer as individuals, and some 350 did so, over 150 joining the 20th Foot (later the Lancashire Fusiliers). Meanwhile, the trials of the rebels were continuing, and in May 1799 the militia brigade at Clonmel was put on alert to march at short notice in case of trouble, or of another French landing. In September, after a year\'s service in Ireland, the 1st RLM prepared to embark for England. Before departure one whole company, about 100 strong, recruited from Bolton and its neighbourhood, volunteered to transfer to the 36th Foot. The reduced regiment – about 560 other ranks (ORs) – embarked from Waterford on 9 October, landing at Bristol on 12 October. It rested at Tetbury and then on 21 October it began its march back to Lancashire. On arrival at Preston on 6 November the regiment was ordered to disembody.\n\nThe supplementary militia having been abolished, the remaining balloted men in Lancashire were distributed to the 1st, 2nd and 3rd RLM to fill vacancies – the officers of the 1st RLM complaining about the quality of the men they were assigned. The regiment completed disembodiment on 28 December 1799. It was called out again for training 5 August 1801, assembling at Lancaster (now its permanent HQ). A few days later it was informed that it would be embodied for active service again at the end of the training. On 26 September it began the march to its new station of Tynemouth Castle. On arrival, with the newly balloted men, it had a strength of 900 ORs. The Peace of Amiens was signed on 27 March 1802, and on 1 April the regiment was ordered to march back to Lancaster to disembody once more, apart from the small permanent staff.\n\nNapoleonic Wars\nThe Peace of Amiens was short-lived, and the militia was called out again on 1 April 1803. After establishing a depot at Lancaster to train the newly balloted men the 1st RLM marched on 23 May to join the encampment at Danbury, Essex, under the command of Lt-Col John Plumbe, Col Stanley being unwell. The recruits followed from Lancaster on 20 July, bringing the regiment up to full strength of 1200 men in 12 companies. It remained at Danbury Camp until August 1804, when it was transferred to Brabourne Lees Camp in Kent, and then in June 1805 to Portsmouth. In August and September 1805 the 1st RLM was at Weymouth, Dorset, while the royal family was in residence, then in October moved to Exeter and the surrounding villages, where it spent the winter. In the spring it returned to Weymouth where it trained the newly balloted men, who replaced those time-expired and those who had volunteered for the regulars (one whole company had done so). It returned to Exeter for the winter of 1806, staying there and at Stonehouse Barracks, Plymouth, until May 1809. At that time it was ordered to Tavistock and then to Bristol, detaching 100 men to embark at Ilfracombe to sail to Milford Haven and Haverfordwest to reinforce the garrison there. The detachment rejoined HQ at Bristol in June, and the regiment stayed there until March 1811. During 1810 it had recruiting parties detached to Bolton, Manchester, Preston and Wigan. On 8 March 1811 the 1st RLM was ordered to march from Bristol to Hull; however on 25 March it was diverted en route to deal with Luddite disturbances that had broken out at Nottingham. It was ordered to resumed its march to Hull Barracks on 22 April. In October it was sent to Berwick-upon-Tweed and Tweedmouth, with detachments at Eyemouth and Holy Island. In March 1812 it moved into Scotland, to Dunbar and Haddington, and then to Dalkeith. It remained there, with occasional detachments to Penicuik where there was a large Prisoner-of-war camp to be guarded, until December 1814.\n\nThe militia had become one of the biggest sources of recruits to the regular army, and the 1st RLM was expected to supply a quota of 100 volunteers each year, rising to a draft or 244 men in February 1814. Colonel Plumbe also volunteered the whole regiment for service in Ireland, and roughly half the men agreed to extend their service accordingly. In March 1814 this body (12 officers and about 340 ORs) embarked at Portpatrick for Donaghadee, from where it marched to Belfast and then Athlone, arriving on 14 June. Napoleon had abdicated in April and peace was declared on 30 May, but the 1st RLM had still not been disembodied in February 1815 when he escaped from Elba and the war was resumed. The three regiments of Lancashire Militia, which happened to be stationed together at Dublin, were allowed to recruit back to full strength by ballot and \'by beat of drum\'. They also provided drafts of around 1000 volunteers to the regular regiments being sent to Belgium. The 1st RLM supplied 23 NCOs and men to the 1st Foot Guards, and 11 each to the 33rd Foot and 71st (Highland) Light Infantry, with individuals to other regiments. There is a story that many of the Guardsmen at the Battle of Waterloo were still wearing their Militia uniforms.\n\nWaterloo ended the war, but much of the regular army remained in France as part of the Army of Occupation for several months, and the Lancashire Militia continue their garrison duty at Dublin. The 1st RLM now being very weak, drafts of balloted men continued to be despatched from Lancaster until February 1816, when it was finally ordered to return for disembodiment. It embarked from Dublin on 25 March and landed at Liverpool, arriving at Lancaster on 5 April and being disembodied on 15 April.\n\nLong peace\nMilitia training was suspended in most years after Waterloo, but the 1st RLM was called out for its 28 days\' training in 1821, 1825 and 1831. Balloting continued, but the permanent staff was progressively reduced over the years. Just before the 1831 training King William IV bestowed on the three Lancashire Militia Regiments the additional title The Duke of Lancaster\'s Own. No further militia training took place for the next 21 years. Although officers continued to be appointed to fill vacancies the ballot was suspended.\n\n1852 reforms\nThe Militia of the United Kingdom was revived by the Militia Act of 1852, enacted during a period of international tension. As before, units were raised and administered on a county basis, and filled by voluntary enlistment (although conscription by means of the Militia Ballot might be used if the counties failed to meet their quotas). Training was for 56 days on enlistment, then for 21–28 days per year, during which the men received full army pay. Under the Act, Militia units could be embodied by Royal Proclamation for full-time service in three circumstances:\n 1. \'Whenever a state of war exists between Her Majesty and any foreign power\'.\n 2. \'In all cases of invasion or upon imminent danger thereof\'.\n 3. \'In all cases of rebellion or insurrection\'.\n\nIn the case of the 1st RLM some younger officers were appointed, including John Talbot Clifton of Lytham Hall, formerly of the 1st Life Guards, as colonel, together with new permanent staff officers and regular army NCOs, and the revived regiment was called out for its first 21 day training on 8 November 1852. The staff NCOs and the few experienced officers had their hands full when the special trains brought the 500 undisciplined recruits from Bolton and Manchester, but had made good progress after three weeks\' drilling on Giant Axe Field. The officers\' mess now adopted the traditional Lancashire form of the Loyal toast: \'The Queen, Duke of Lancaster\', which the regiment kept thereafter.\n\nCrimean War\nIn May 1853, in view of the worsening international situation, the government ordered the lord lieutenant (the Earl of Sefton) to recruit the three Lancashire militia regiments up to their full strengths of 1200 each. The 1st RLM was called out for 28 day\'s annual training on 24 May, in which the staff were assisted by drill sergeants from the 50th Foot stationed nearby at Preston.\n\nWar having broken out with Russia in March 1854 and an expeditionary force sent to the Crimea, the Militia were called out for home defence. The 1st RLM assembled at Lancaster on 24 May for 28 days\' training before embodiment. Colonel Clifton had already offered the regiment for overseas service – the first such offer made in this war by a militia regiment – and the government accepted a body of 500 men. On 16 June the regiment divided, 500 men for the service companies, the other 700 dismissed to their homes until further notice. The service battalion travelled by train to Deptford Dockyard, moving on 16 July to Portsmouth. In September, training began with the new Enfield rifled musket. In November there was a call to reinforce the army in the Crimea, and 250 men from the service companies of the 1st RLM volunteered. It was not until December that Parliament passed Acts allowing whole militia regiments to volunteer, and recalling out the men who had been disembodied in order to fill the vacancies.\n \nThe regiment now prepared to embark for the Ionian Islands (then a British protectorate) to release the garrison to fight in the Crimea. The men who had not volunteered or were unfit for overseas service were formed into a regimental depot at Fort Cumberland, Portsmouth. The depot returned to Lancaster on 1 March 1855, and the service companies embarked on the transport Calcutta two days later. It sailed on 4 March and they disembarked at Corfu on 16 March, taking up quarters in the Citadel Barracks, with detachments on the islands of Fano, Paxo and Santa Maura. Its first task was to send the Grenadier Company on 20 March to suppress a riot on Vido among the convalescent soldiers from the Crimea. On 15 May the bulk of the regiment re-embarked for Zante, leaving detachments on Santa Maura, Cerigo and Cephalonia. In September there was a cholera outbreak at Zante, and in two weeks the regiment lost one officer, two NCOs and 275 men dead, and 54 invalided home. Two drafts of reinforcements arrived from the depot at Lancaster, 150 men on 25 November and 250 more on 15 January 1856. The Grenadier Company at Santa Maura had been unaffected by cholera, and was chosen to go to the Crimea to reinforce the army for its projected operations following the fall of Sevastopol in September 1855 (the only militia unit accepted). However, there were no further operations and the war ended on 30 March 1856 before the company had left the islands. The 1st RLM embarked on the troopship Colombo on 21 May, but its passage was delayed when the ship ran aground at Argostoli Bay, where it had gone to pick up the Grenadier Company. The ship was deemed to be overcrowded, and two companies were left at Malta to follow by a later steamer. The main body reached Portsmouth on 3 June, and went by trains to Lancaster on 8 and 9 June. The two companies from Malta were not disembodied until 16 July. After the regiment was disembodied it was awarded the Battle honour Mediterranean for its service.\n\nFurther militia regiments had been raised in Lancashire after 1852, bringing the total to seven of infantry and one of artillery. Each had its own recruiting areas across the county, those of the 1st RLM being Bolton (Great and Little), Fylde, Lancaster and Manchester. During the Crimean War the depot of the 1st RLM built a barracks on Windy Hill at Lancaster for 200 men and a storehouse with a parade ground for 800 men later known as Springfield Barracks. Plans to convert some old warehouses at St Georges Quay were scrapped when the war ended. Annual training for the 1st RLM resumed in 1857. It was usually held on Giant Axe Field, but at Ulverston when camp coincided with elections in Lancaster. In some years a joint field day was held with one of the Lancashire Rifle Volunteer Corps during annual training. From 1876 the regiment adopted the practice of camping at Scale Hall field, about from Lancaster, during its annual training.\n\nCardwell reforms\n\nUnder the \'Localisation of the Forces\' scheme introduced by the Cardwell Reforms of 1872, Militia regiments were brigaded with their local regular and Volunteer battalions – for the 1st RLM this was with the 4th (King\'s Own) Regiment of Foot in Sub-District No 11 (County of Lancaster). The Militia now came under the War Office rather than their county lords lieutenant, and officers\' commissions were signed by the Queen.\n\nAlthough often referred to as brigades, the sub-districts were purely administrative organisations, but in a continuation of the Cardwell Reforms a mobilisation scheme began to appear in the Army List from December 1875. This assigned regular and militia units to places in an order of battle of corps, divisions and brigades for the \'Active Army\', even though these formations were entirely theoretical, with no staff or services assigned. The 1st, 2nd and 3rd Royal Lancashire Militia formed 1st Brigade of 3rd Division, VI Corps. The brigade would have mustered at Manchester in time of war.\n\nThe Hon Frederick Stanley, MP, formerly captain in the Grenadier Guards, was appointed lieutenant-colonel commandant of the regiment (later of the 1st Battalion) on 23 June 1874, the rank of colonel in the militia having been abolished. He was also Financial Secretary to the War Office from 1874 to 1877, and Secretary of State for War 1878–80, which meant that he was often absent during training.\n\nCardwell\'s localisation scheme provided for the regular and militia regiments to be linked in pairs, sharing a single permanent depot. The 4th (King\'s Own) already had two battalions; the 1st RLM split to form its own second battalion on 26 September 1877, each being initially of six companies. A new regimental depot, Bowerham Barracks, was built at Lancaster between 1876 and 1880.\n\nMilitia battalions now had a large cadre of permanent staff (about 30). Around a third of the recruits and many young officers went on to join the regular army. In addition, the Militia Reserve introduced in 1867 consisted of present and former militiamen who undertook to serve overseas in case of war. During the international crisis caused by the Russo-Turkish War in 1877, the 1st RLM offered its service and was informed that it might be embodied for garrison duty. In the event the militia was not embodied, but the regular and militia reserves were called out the following year, those belonging to Sub-District No 11 assembling at Lancaster on 3 April. On 22 April they entrained to join the depot of the 4th (King\'s Own) at the Portsdown Hill Forts, where they served until 30 July when they were dismissed to heir homes.\n\n3rd and 4th Battalions, King\'s Own (Royal Lancaster Regiment)\nThe Childers Reforms of 1881 took Cardwell\'s reforms further, with the linked regular and militia regiments becoming single county regiments. In the case of the Lancaster district this was the King\'s Own (Royal Lancaster Regiment) (\'The King\'s Own\') of four battalions: the 1st and 2nd were the regulars, while the 1st Royal Lancashire Militia (The Duke of Lancaster\'s Own) became the 3rd and 4th Bns, together with affiliated Volunteer Force battalions. As the regimental history put it, the 1st and 2nd Bns King\'s Own had amalgamated with the 1st and 2nd Bns Duke\'s Own. The two militia battalions continued to be administered as a single double-battalion regiment until 1 August 1900.\n\nIn 1882 the 3rd and 4th Battalions began their annual training at Lancaster on 3 July, but at the end of the month their training was extended for 56 days, embodying them for garrison duty during the crisis surrounding the Anglo-Egyptian War. Both battalions entrained for Preston on 31 July, and went to Fulwood Barracks, which were grossly overcrowded by the arrival of their 12 companies in addition to the reservists of the regular regiment stationed there. The two battalions returned to Lancaster on 26 August to be disembodied.\n\nSecond Boer War\nAfter the disasters of Black Week at the start of the Second Boer War in December 1899, most of the regular army was sent to South Africa, and many militia units were embodied to replace them for home defence and to garrison certain overseas stations. The 4th Bn King\'s Own was embodied on 13 December 1899 and the 3rd Bn on 23 January 1900. Both battalions volunteered for overseas service.\n\nThe 4th Battalion left first, embarking with a strength of 25 officers and 666 ORs under the command of Lt-Col W. Kemmis and landing at Cape Town on 1 February 1900. It proceeded to the advanced base at Naauwpoort and was employed on the lines of communication with detachments guarding towns, bridges and culverts between Norvalspont and Port Elizabeth, Graaff-Reinet and Hanover Road. In August 1900 a column consisting of 200 men of the battalion and 40 of Nesbitt\'s Horse carried out a demonstration through the disaffected district of Hanover. On 30 December the Boers attacked and burned a train at the \'Gates of Hell\' about from Naauwpoort: two companies of the battalion only arrived in time to exchange a few shots with the retiring enemy. In December, Lt-Col Kemmis was appointed commandant of Naauwpoort. On 23 February 1901 2nd Lt Hunt with 30 men guarding the Fish River bridge and station successfully held off Commandant Kritzinger and about 250 Boers for four hours before the armoured train came to their assistance and drove off the Boers. On 7 March Capt Worsley Taylor with 40 men of the 4th Bn and about 60 Mounted infantry (MI) was attacked by a superior force while repairing the Colesberg–Philippolis telegraph line. Taylor and his men took up a defensive position on a Kopje and held it for 24 hours until a relief column arrived from Colesberg. On 29 May Battalion HQ moved to Norvalspont and the battalion occupied the northern bank of the Orange River. Finally, it concentrated at De Aar on 5 July preparatory to embarking for home. During the campaign the battalion lost one officer and 21 ORs killed or died of disease. The 4th Bn was disembodied on 3 August 1901. It was awarded the battle honour South Africa 1900–01, and the officers and men received the Queen\'s South Africa Medal with the clasps \'Cape Colony\', \'Orange Free State\', and \'South Africa 1901\'.\n\nThe 3rd Bn embarked for South Africa with a strength of 25 officers and 686 ORs under the command of Col B.N. North. It landed at Cape Town on 1 March 1900 and was deployed along the lines of communication in Orange River Colony, with Battalion HQ and three companies guarding the important railway bridge and supply depot at Zand River Bridge. They were attacked on 14 March by a Boer force that included artillery, driving them off after a day\'s fighting. The battalion also supplied an MI company that took part in the action at Ventersburg with a column under Col North operating with armoured trains. This force obliged the Boers to abandon their position at Zeegatacht, near Brandfort, on 16 January 1901, and North with the MI and armoured train drove them from Huten Beck on 28 January. At this time the rest of the battalion was holding the blockhouse line and railway from Kroonstad to Bloemfontein, driving off several attacks. In October 1901 the battalion was divided into several detachments that engaged Theron\'s Commando around Ceres. The battalion re-assembled on 10 January 1902 to embark for England, where it was disembodied on 8 February 1902. During the campaign the battalion had lost 51 ORs killed or died of disease. It was awarded the battle honour South Africa 1900–02, the Queen\'s South Africa Medal with the clasps \'Cape Colony\' and \'Orange Free State\', and the King\'s South Africa Medal with the clasps \'South Africa 1901\' and \'South Africa 1902\', and Lt-Col North was awarded a Companionship of the Order of the Bath (CB).\n\nSpecial Reserve\nAfter the Boer War, the future of the Militia was called into question. There were moves to reform the Auxiliary Forces (Militia, Yeomanry and Volunteers) to take their place in the six Army Corps proposed by the Secretary of State for War, St John Brodrick. However, little of Brodrick\'s scheme was carried out. Under the more sweeping Haldane Reforms of 1908, the militia was replaced by the Special Reserve, (SR) a semi-professional force whose role was to provide reinforcement drafts for Regular units serving overseas in wartime, rather like the earlier Militia Reserve. The 3rd Battalion became the 3rd (Reserve) Battalion, King\'s Own, on 19 July 1908, but the 4th Bn was disbanded on 31 August.\n\nWorld War I\n\nOn the outbreak of war on 4 August 1914 the battalion was embodied at Lancaster under Lt-Col J.M.A. Graham. It then moved to its war station at Saltash, Cornwall, for a few days before the bulk of the battalion moved to Sunderland. It probably helped to organise the 10th (Reserve) Battalion, King\'s Own, from Kitchener\'s Army volunteers, when that was formed at Saltash in October 1914. From 1915 to 1917 the 3rd Bn was at Plymouth, but by November 1917 it had moved to Harwich. As well as forming part of the Plymouth and Harwich Garrisons, the battalion\'s role was to train and despatch drafts of reservists, special reservists, recruits and returning wounded for the regular battalions. The 1st King\'s Own served on the Western Front, while the 2nd Bn returned from India and after a few months on the Western Front spent the rest of the war on the Macedonian Front. \n\nThousands of men for the regular battalions would have passed through the ranks of the 3rd Bn during the war. It was disembodied on 30 July 1919, when the remaining personnel were drafted to the 1st Bn.\n\nPostwar\nThe SR resumed its old title of Militia in 1921 and then became the Supplementary Reserve in 1924, but like most militia battalions the 3rd King\'s Own remained in abeyance after World War I. By the outbreak of World War II in 1939, no officers remained listed for the battalion. The militia was formally disbanded in April 1953.\n\nCommanders\nThe following officers commanded the regiment as Colonel, as Honorary Colonel, or served as Lt-Col Commandant of one of its battalions:\n William Stanley, 9th Earl of Derby appointed 1689\n Philip Hoghton, appointed 1 June 1715\n Edward Stanley, 11th Earl of Derby appointed 25 October 1745\n James Smith-Stanley, Lord Strange, appointed 15 July 1760, died 1 June 1771\n Edward Smith-Stanley, 12th Earl of Derby appointed 14 February 1772, resigned\n Thomas Stanley of Cross Hill, MP, appointed 28 October 1783, died 26 December 1816\n Peter Patten Bold, appointed 8 June 1817, died 1819\n John Plumbe-Tempest, promoted 4 November 1819, resigned 1852\n John Talbot Clifton, formerly 1st Life Guards, appointed 2 October 1852, resigned 1870\n William Assheton Cross, promoted 8 December 1870, appointed Hon Col 13 May 1871\n Robert Whitle, appointed 31 May 1872.\n Frederick Stanley, 16th Earl of Derby, KG, GCB, GCVO, Lt-Col Commandant, 1st Bn, 23 June 1874; appointed Hon Col 27 February 1886, died 14 June 1908\n Thomas Dawson Sheppard, Lt-Col Commandant, 2nd Bn, 26 September 1877\n George Blucher Heneage Marton, 20 March 1886, Lieutenant-Colonel Commandant, commanding 3rd Battalion.\n Joseph Lawson Whalley, 26 November 1887, commanding 4th Battalion\n B.N. North, CB, MVO, former Lt-Col Commandant, 3rd Bn, appointed Hon Col 19 July 1908\n\nUniforms & Insignia\nThe uniform of the Royal Lancashire Militia was red with the blue facings appropriate to \'Royal\' regiments. The regimental colour presented in 1761 was blue and bore the coat of arms of the Duchy of Lancaster (on a shield gules, three lions of England (passant gardant) or, in chief a label azure of three points, each charged with three fleur-de-lis of France). The regimental colour presented by Queen Charlotte at Weymouth in 1806 simply carried the words \'FIRST ROYAL LANCASHIRE MILITIA\' surrounded by a wreath of roses, thistles and shamrocks.\n\nAs a reward for its service in Ireland in 1798 the badge of the \'Harp and Crown\' was bestowed on the regiment, and the \'Red Rose of Lancaster\' in 1803. The set of colours believed to have been presented by the Lord Lieutenant of Ireland when the regiment was stationed in Dublin in 1816 bore the harp in the centre of the King\'s colour and the crowned red rose with \'LANCASTER\' in Old English script in the three outer corners of the regimental colour. The colonel\'s wife, Mrs Clifton, presented new colours to the reformed regiment in 1853 and again in 1870 after the regulation size of colours was made smaller. The regimental colour bore a red rose inside a circle with the words \'DUKE OF LANCASTER\'S OWN\' surrounded by a wreath of roses, thistles and shamrocks. Above was a crown, below were the Roman numeral \'I\' and two scrolls, the upper saying \'ROYAL LANCASHIRE MILITIA\', the lower the battle honour \'MEDITERRANEAN\'; the crown, numeral and upper scroll also appeared on the Queen\'s colour. The smaller 1870 colours were similar, but the numeral I had disappeared and the scroll now read \'1. ROYAL LANCASHIRE MILITIA\'. Lady Constance Stanley presented the 2nd Bn\'s colours in 1880: the design was the same, but the lettering on the scrolls was \'First Royal Lancashire Militia, 2nd Battalion, Mediterranean\', which was repeated in black on a yellow ground in the centre of the Queens colour.\n\nAbout 1790 the buttons had the letters \'RL\' inside a crowned star; the figure \'1\' was added above the letters after the creation of the 2nd RLM, and these buttons were retained until 1829. The officers\' shako plate in 1812–16 consisted of the stylised cipher \'GR\' above an enamelled red rose, with a silver spray of leaves beneath and the numeral \'1\' at the bottom, the whole plate a highly stylised escutcheon topped with a crown. The ORs\' plate was plain brass, the word \'LANCASTER" appearing between the cipher and rose, and no numeral at the bottom. The cap badge of 1852 was circular, with \'LANCASTER\' in Old English lettering above a red rose, a spray of leaves below; the officer\'s belt plate carried this badge without the spray of leaves but surmounted by a crown, on a decorated star. The OR\'s Glengarry badge of 1874–81 had the royal crest (a crowned lion statant gardant on a crown) over the red rose within a spray of grass, with a scroll underneath inscribed \'THE DUKE OF LANCASTER\'S OWN\'.\n\nIn 1881 the regiment combined the insignia of the King\'s Own and the Duke\'s Own, with the Red Rose of Lancaster surmounted by the Lion of England. Later this was replaced by the lion over the words \'KING\'S OWN\'.\n\nPrecedence\nIn September 1759 it was ordered that militia regiments on service were to take their relative precedence from the date of their arrival in camp. In 1760 this was altered to a system of drawing lots where regiments did duty together. During the War of American Independence all the counties were given an order of precedence determined by ballot each year, beginning in 1778. For the Lancashire Militia the positions were:\n 38th on 1 June 1778\n 43rd on 12 May 1779\n 30th on 6 May 1780\n 12th on 28 April 1781\n 32nd on 7 May 1782\n\nThe militia order of precedence balloted for in 1793 (when Lancashire was 37th) remained in force throughout the French Revolutionary War: this covered all the regiments formed in the county. Another ballot for precedence took place at the start of the Napoleonic War, when Lancashire was 52nd. This order continued until 1833. In that year the King drew the lots for individual regiments and the resulting list remained in force with minor amendments until the end of the militia. The regiments raised before the peace of 1763 took the first 47 places: the 1st RLM was 45th. Formally, the regiment became the 45th, or 1st Royal Lancashire Militia, but the 1st RLM like most regiments seems to have paid little attention to the additional number.\n\nSee also\n Militia (English)\n Militia (Great Britain)\n Militia (United Kingdom)\n Special Reserve\n Lancashire Militia\n King\'s Own Royal Regiment (Lancaster)\n\nFootnotes\n\nNotes\n\nReferences\n\n W.Y. Baldry, \'Order of Precedence of Militia Regiments\', Journal of the Society for Army Historical Research, Vol 15, No 57 (Spring 1936), pp. 5–16.\n Ian F.W. Beckett, The Amateur Military Tradition 1558–1945, Manchester: Manchester University Press, 1991, .\n Burke\'s Peerage, Baronetage and Knightage, 100th Edn, London, 1953.\n W.Y. Carman, \'Militia Uniforms 1780\', Journal of the Society for Army Historical Research, Vol 36, No 147 (September 1958), pp. 108–9.\n Col John K. Dunlop, The Development of the British Army 1899–1914, London: Methuen, 1938.\n Cross Fleury, Time-Honoured Lancaster: Historic Notes on the Ancient Borough of Lancaster, Lancaster: Eaton & Bulfield, 1891.\n Sir John Fortescue, A History of the British Army, Vol I, 2nd Edn, London: Macmillan, 1910.\n Sir John Fortescue, A History of the British Army, Vol II, London: Macmillan, 1899.\n Sir John Fortescue, A History of the British Army, Vol III, 2nd Edn, London: Macmillan, 1911.\n Sir John Fortescue, A History of the British Army, Vol IV, Pt II, 1789–1801, London: Macmillan, 1906.\n J.B.M. Frederick, Lineage Book of British Land Forces 1660–1978, Vol I, Wakefield: Microform Academic, 1984, .\n Lt-Col James Moncrieff Grierson (Col Peter S. Walton, ed.), Scarlet into Khaki: The British Army on the Eve of the Boer War, London: Sampson Low, 1899/London: Greenhill, 1988, .\n H.G. Hart, The New Annual Army List (various dates).\n Col George Jackson Hay, An Epitomized History of the Militia (The Constitutional Force), London:United Service Gazette, 1905/Ray Westlake Military Books, 1987, .\n Richard Holmes, Soldiers: Army Lives and Loyalties from Redcoats to Dusty Warriors, London: HarperPress, 2011, .\n Brig E.A. James, British Regiments 1914–18, Samson Books 1978/Uckfield: Naval & Military Press, 2001, .\n Roger Knight, Britain Against Napoleon: The Organization of Victory 1793–1815, London: Allen Lane, 2013/Penguin, 2014, .\n H.G. Parkyn, \'English Militia Regiments 1757–1935: Their Badges and Buttons\', Journal of the Society for Army Historical Research, Vol 15, No 60 (Winter 1936), pp. 216–248.\n Edward M. Spiers, The Army and Society 1815–1914, London: Longmans, 1980, .\n Edward M. Spiers, The Late Victorian Army 1868–1902, Manchester: Manchester University Press, 1992/Sandpiper Books, 1999, .\n Katherine Thomasson & Francis Buist, Battles of the \'45, London: Batsford 1962/Pan 1967.\n J.R. Western, The English Militia in the Eighteenth Century, London: Routledge & Kegan Paul, 1965.\n Maj R.J.T. Williamson & Col J. Lawson Whalley, History of the Old County Regiment of Lancashire Militia, London: Simpkin, Marshall, 1888.\n\nExternal sources\n British History Online\n Electric Scotland\n King\'s Own Royal Regiment Museum, Lancaster\n Lancashire Infantry Museum\n Lancashire Record Office, Handlist 72 Archived from the original\n Museum of the Manchester Regiment\n Richard A. Warren, This Re-illuminated School of Mars: Auxiliary forces and other aspects of Albion under Arms in the Great War against France\n\nLancashire Militia\nLancashire\nMilitary units and formations in Lancashire\nMilitary units and formations in Lancaster, Lancashire\nMilitary units and formations established in 1661\nMilitary units and formations disestablished in 1881', 'João Pedro Coelho Marinho de Sousa (born 30 March 1989), known as João Sousa (), is a Portuguese professional tennis player. He is currently ranked world No. 137 by the Association of Tennis Professionals (ATP). Continuously ranked in the world\'s top-100 between July 2013 and March 2021, and with four ATP Tour singles titles, Sousa is often regarded as the best Portuguese tennis player of all time. He is nicknamed Conquistador (Portuguese for "Conqueror") for sharing his birthplace of Guimarães with Afonso I, the country\'s first king. Sousa is coached by former player Frederico Marques and practices at the BTT Tennis Academy in Barcelona.\n\nSousa began playing tennis at the age of seven. After winning national youth titles, he decided at the age of fifteen to invest in his career by moving to Barcelona. After an unimpressive junior career, Sousa turned professional in 2008 and won his first singles tournament in 2009. He started playing in the ATP Challenger Tour in 2008, winning his first tournament at this level in 2011. Sousa debuted in the top-level ATP World Tour in 2008, and rose to prominence at the 2013 Malaysian Open, where he became the first Portuguese player to win a World Tour-level singles tournament.\n\nSousa holds several Portuguese men\'s tennis records. In October 2013, he ranked 49th in the world after his victory at the Malaysian Open, becoming the first Portuguese player to break into the singles top 50. In November 2015, Sousa reached a career-high and Portuguese-best ranking of world \xa033, following his second ATP World Tour singles title at the Valencia Open. In May 2016, he improved his personal ranking best, becoming the first Portuguese player to enter the top 30, as a result of reaching his first Masters 1000 quarter-finals in Madrid. In 2014, he was the first Portuguese player to compete exclusively at the ATP World Tour in a single season; the first to be seeded in a Grand Slam tournament (2014 US Open); and the second to reach the quarterfinals in a Grand Slam event (2015 US Open doubles). Sousa is the fourth Portuguese player to reach the singles top 100, and the second to do so in both singles and doubles rankings, after Nuno Marques. He is also the Portuguese player with the largest career prize money, and the most wins at Grand Slam singles tournaments.\n\nEarly and personal life \nJoão Sousa was born on 30 March 1989 in Guimarães, Portugal, to Armando Marinho de Sousa, a judge and amateur tennis player, and Adelaide Coelho Sousa, a bank clerk. Sousa has a younger brother named Luís Carlos. At age seven, Sousa began playing tennis with his father at a local club. In 2001, he won the national under-12 singles title, beating future Davis Cup partner Gastão Elias in the semifinals, and was runner-up in doubles. In 2003, he partnered with Elias to win the national under-14 doubles title. Sousa also played football at local clubs Vitória de Guimarães – of which he is a keen supporter – and Os Sandinenses until the age of 14, when he decided to give up on football and the goal of studying medicine to pursue a professional tennis career. He briefly joined the National Tennis Training Center in Maia until he was forced to leave after its closure.\n\nIn September 2004, aged 15, Sousa moved to Barcelona, Spain, to attend a boarding school and join the Catalan Tennis Federation. A year later, he joined the BTT Tennis Academy, which was recommended to him by former member and countryman Rui Machado. He was first coached by Álvaro Margets, under the supervision of one of his mentors, Francisco Roig. At the academy, he met and shared a flat with his future coach, Frederico Marques. Sousa continues to practice at BTT, even after joining the ATP Tour.\n\nDuring his youth, Sousa\'s idols were Pete Sampras, Juan Carlos Ferrero, and Roger Federer. He is fluent in Portuguese, Spanish, Catalan, as well as English, French and Italian. Since 2008, Sousa has been dating Júlia Villanueva, whom he met during his training in Barcelona.\n\nTennis career\n\nPre-2008: Junior years\nSousa made his debut in a junior tournament in August 2004 at the Grade 4 Taça Diogo Nápoles in Porto, reaching the semifinals. His first junior doubles title came in April 2005 at a Grade 4 tournament in Guadeloupe, where he also reached his first junior singles final. Though he never won a singles title on the junior circuit, Sousa reached three singles finals and won five doubles titles, including a Grade 2 tournament in France. In 2005 Sousa was runner-up at the Portugal under-16 National Championship, losing in the final to Gastão Elias. He had previously won the doubles title at the 2004 edition in the same age category.\n\nSousa peaked at number 61 in the world junior rankings in early 2007, shortly after entering the main draw of the 2006 Orange Bowl. His only participation at a junior Grand Slam was short-lived; he lost in the first qualifying round of the 2007 French Open Boys\' Singles tournament. Sousa\'s last junior tournament was the European Junior Championships in Austria in July 2007.\n\nDespite not having turned professional before 2008, Sousa made his debut at a senior tournament in October 2005 after entering as a wild card in the main draw of a Futures doubles tournament in Barcelona. His first win as a senior came at a Futures doubles tournament, in August 2006 in Oviedo, and his debut singles tournament participation and win both came in May 2007 at a Futures tournament in Lleida, Spain. Sousa would not go beyond quarterfinals at any Futures event until 2008.\n\n2008–2012: Early career\nIn 2008, Sousa began the season by winning his first professional title at the final of a Futures doubles tournament in Murcia. He reached two more doubles finals that year, winning a second title in August in Bakio. The biggest achievement in his 2008 campaign came at the Estoril Open. Entering through qualifying rounds, Sousa made his debut at the main draw of an ATP Tour-level tournament. He had his first ATP win over Austrian Oliver Marach, losing to Frederico Gil in the second round. Sousa also started playing at the ATP Challenger Tour and for the Portugal Davis Cup team in 2008. He played two singles dead rubbers, winning over Cyprus\' Eleftherios Christou in July and losing to Ukraine\'s Illya Marchenko in September.\n\nBesides winning two more Futures doubles titles in three finals in Irun and Espinho, Sousa reached his first four singles finals at the same level in 2009. He won the title in the La Palma final. At the Estoril Open, Sousa was granted a wild card to participate in his first doubles ATP World Tour level tournament, but lost in the first round. During 2009, Sousa was twice called to the Portugal Davis Cup team, winning both singles dead rubbers he took part in – over Philippos Tsangaridis from Cyprus in March and Algeria\'s Sid-Ali Akkal in July. In 2010, Sousa won his first Challenger title at the Tampere\'s doubles tournament in August. In the 2010 season, Sousa did not enter any ATP tournament; he began shifting his schedule increasingly from the Futures circuit to the Challenger tour. He was more successful in the Futures, winning three singles titles in four finals at Valldoreix, Tenerife and Lanzarote, and doubles titles in Lanzarote, Córdoba and two in Tenerife. At the Davis Cup, Sousa played two more dead rubbers, winning for the second time in three seasons over Cyprus\' Christou and losing to Bosnian Damir Džumhur.\n\nSousa reached several milestones in 2011. At the Challenger Tour, he won his first singles title at that level in Fürth in June. At the ATP World Tour, Sousa participated as a wildcard in the singles and doubles events in Estoril, losing in the second round of the former to Canadian Milos Raonic. He also made his first attempt at entering the main draw of a Grand Slam tournament, but fell in the qualifying rounds at the Australian Open, Wimbledon and the US Open. In October, Sousa\'s participation at the Sabadell Futures was his last presence in the main draw of a tournament in that category. He won three more singles and one doubles Futures titles, making his career titles at this level seven singles and nine doubles titles. Once again, Sousa was called for two dead rubbers at Davis Cup, winning over Martin Kližan from Slovakia and losing to Switzerland\'s Marco Chiudinelli. In October 2011, he hired Frederico Marques as a coach when he was ranked world number 220.\n\nAt the 2012 Estoril Open, Sousa reached the quarterfinals of an ATP tour tournament for the first time, losing to Albert Ramos. At the 2012 French Open, he made his debut as a qualifier in the main draw of a Grand Slam tournament. He would lose in the first round to 20th seed Marcel Granollers in four sets. He did not progress past the qualifying rounds at the other three Grand Slam tournaments. He also entered main draw events at the Barcelona Open (lost to Frederico Gil in the 2nd round) and the Croatia Open (lost to Matthias Bachinger in the 1st round). At Challenger tournaments, Sousa won two singles titles out of three finals – Mersin and Tampere – and one doubles title at Fürth.\n\nHis role at the 2012 Davis Cup rose in importance. Sousa played his first doubles rubber against Israel, partnering with Gastão Elias in a loss against Andy Ram and Jonathan Erlich. Ram also beat Sousa in a dead rubber – his last as of 2016. In September, Sousa played three rubbers against Slovakia. He won the first singles match over Lukáš Lacko but lost the doubles with Elias and his second singles match to Martin Kližan, meaning Portugal\'s relegation from Europe/Africa Zone Group I to Group II in 2013. In this same month, Sousa became the top-ranked Portuguese tennis player for the first time, at No.\xa0107. In October, his world ranking rose to No.\xa099, and Sousa became the fourth Portuguese player to enter the ATP top-100 singles ranking after Nuno Marques, Frederico Gil and Rui Machado.\n\n2013: Breakthrough in the ATP\nSousa started the 2013 season with his first participation in ATP tour level hardcourt tournaments at the Chennai Open and the Sydney International. Despite being knocked out of both tournaments in the first round, he returned to the top-100 world rankings. At the Australian Open, Sousa won his first Grand Slam on his second attempt, following a first-round win over wildcard John-Patrick Smith. He lost to world number three Andy Murray in straight sets in the second round. In February, Sousa participated in the Portugal Davis Cup team in their Europe/Africa Zone Group II tie against Benin. He won his singles match against Loic Didavi and the doubles match partnering with Pedro Sousa. Portugal won the tie 5–0 and progressed to the second round. Sousa then played his first clay court tournaments of the season at the Chile Open and ATP Buenos Aires, where he again lost in the first round. At the Mexican Open, he defeated former top 10 Jürgen Melzer in the first round, but lost in the second round to Santiago Giraldo.\n\nDespite failing to qualify for the Indian Wells Masters, Sousa entered for the first time in his career in the main draw of a Masters event at the Miami Masters. He lost in the first round to former world number 1 Lleyton Hewitt in straight sets. Sousa did not play in April after fracturing his left foot during a Davis Cup training session. He was scheduled to return as a wild card at the Portugal Open. His invitation was given to world number 4 David Ferrer instead, which stirred some controversy in Portuguese media. Later in the season, Sousa showed uncertainties about his future Portugal Open participation, which prompted tournament director João Lagos to comment on the contention. Ahead of the 2014 edition, the controversy was no longer an issue.\n\nSousa returned to action at the Madrid Masters qualifying rounds and at his first Challenger tournament of the season in Bordeaux, but he lost early in these attempts. At the 2013 French Open, Sousa won his first round match over Go Soeda in straight sets, and lost in the second round to Spaniard Feliciano López. He returned to the Challenger circuit with a singles title at Fürth and an early loss at Košice. It was also his second title in Fürth, after the triumph in 2011. Sousa missed the 2013 Wimbledon Championships main draw after losing in the third qualifying round to Julian Reister. He would also lose in the qualifying rounds of the doubles competition, while partnering with Teymuraz Gabashvili. In July, he played exclusively in Challenger tournaments, being runner-up in singles and doubles in San Benedetto, re-entering the top 100 rankings, which he has maintained ever since. He won the singles title in his hometown Guimarães. This remains his last participation at the ATP Challenger Tour, having won five singles and two doubles titles at the level. After losing in the qualifying rounds of the Cincinnati Masters, Sousa returned to the ATP World Tour at the Winston-Salem Open in August, losing to Alex Bogomolov, Jr. in the second round. In his first US Open appearance, Sousa reached the third round after defeating 25th seed Grigor Dimitrov and Jarkko Nieminen in back-to-back 5-set matches. He ended his campaign losing to world No.\xa01 Novak Djokovic. This was his best result at Grand Slams yet.\n\nIn September, Sousa joined Portugal\'s Davis Cup team to face Moldova in the semifinals of Europe/Africa Zone Group II. He won his first singles match over Maxim Dubarenco and the doubles match with Gastão Elias. He lost his second singles match to Radu Albot in an epic five-set duel which lasted nearly five hours. Portugal won 3–2 and was promoted to Group I in 2014. Following early-round wins over Paolo Lorenzi and Sergiy Stakhovsky at the St. Petersburg Open, Sousa beat former ATP top 20 player Dmitry Tursunov in the quarterfinals to advance to his first career ATP tour semifinal. He would lose there to Guillermo García-López.\n\nSousa\'s breakthrough title came at the Malaysian Open, in the early rounds of which he defeated Ryan Harrison and Pablo Cuevas. In the quarterfinals, Sousa defeated world No.\xa04 David Ferrer in straight set; Sousa\'s first career win over a top-10 player. Then, he qualified to his first ATP tour level final after getting past Jürgen Melzer in three sets. Sousa beat Frenchman Julien Benneteau in three sets in the final after saving one match point, becoming the first Portuguese player to win an ATP World Tour singles tournament. He also became the highest ranked Portuguese ever, climbing from No.\xa077 to No.\xa051. The previous record holder was Rui Machado, who was world No.\xa059 in 2011. Sousa officially entered the top 50 for the first time on 7 October 2013.\n\nIn October, Sousa had a first-round loss at the Kremlin Cup and a second round appearance at the Valencia Open. After beating Guillermo Garcia-Lopez in the first round, Sousa lost to 2013 Wimbledon semifinalist Jerzy Janowicz. Sousa finished his 2013 season by being eliminated from the Paris Masters in the qualifying round. At world No.\xa049, he became the first Portuguese to finish the season in the top 50. In November, Sousa was nominated for the 2013 Portuguese Sportsman of the Year award, losing to cyclist Rui Costa. At the same ceremony, he was named Tennis Personality of the Year by the Portuguese Tennis Federation.\n\n2014: Consolidating presence in the ATP World Tour\nSousa began the 2014 season with a first-round loss at the 2014 Qatar Open. At the Sydney International\'s doubles competition, he partnered with Lukáš Rosol to defeat the Bryan brothers, the then-world No.\xa01 doubles team, en route to the semifinals. At the 2014 Australian Open, he was beaten by world No.\xa0137 and future Grand Slam champion Dominic Thiem in the first round. Partnering with Colombian Santiago Giraldo, Sousa was eliminated in the first round of the doubles competition by Mahesh Bhupathi and Rajeev Ram. Later in January, Sousa joined the Portugal Davis Cup team to face Slovenia for the Europe/Africa Group I 1st Round. He won his first singles match against Janez Semrajc, but then lost in the doubles match and his second singles match against Blaž Kavčič. Portugal eventually lost 3–2 and fell to a relegation playoff. In February, he started with early round losses at the Open Sud de France and ATP Buenos Aires. Sousa played at the Rio Open and reached the quarterfinals, where he was beaten by world No.\xa01 Rafael Nadal. Sousa ended February with a second-round defeat and exit to Andy Murray at the Mexican Open. During the North American hard court Masters swing in March, Sousa started the Indian Wells Masters with a win over Aleksandr Nedovyesov, followed with a second-round loss to 20th seed Ernests Gulbis. At the 2014 Sony Open Tennis in Miami, Florida, Sousa reached the third round. After beating 26th seed Gilles Simon in the second round, he lost to world No.\xa07 Tomáš Berdych.\n\nSousa began the spring clay court season at the Grand Prix Hassan II in Casablanca, where he was beaten by world No.\xa0273 Roberto Carballés Baena in a second-round match lasting over three hours. This loss started an eight-match losing streak that lasted the remainder of the clay court season\xa0– it included losses at the Monte-Carlo Masters, at the Barcelona Open, at the Portugal Open, at the Madrid Masters, at the Rome Masters, and at the Düsseldorf Open. In the first round of the 2014 French Open, Sousa suffer his eighth consecutive loss against world No.\xa02 Novak Djokovic. During this run of losses, Sousa reached the semifinals of the Portugal Open\'s doubles competition and the third round at the 2014 French Open doubles competition, where he partnered with American Jack Sock and lost to Andrey Golubev and Sam Groth.\n\nSousa made his debut at an ATP grass tournament main draw at the Halle Open. In the first round, he beat German wild card Jan-Lennard Struff and snapped the eight match losing streak. Then, he faced former world No.\xa01 and 6-time Halle champion Roger Federer in the second round. After winning a close first set, Sousa ended up losing in three sets to the Swiss. At the Rosmalen Grass Court Championships, Sousa became the first Portuguese player ever to reach the semifinal of an ATP tour level grass tournament. He beat in succession Paolo Lorenzi, Mate Pavić and Thiemo de Bakker, losing in the semifinals to Benjamin Becker. To cap his grass court season, Sousa played his first ever Wimbledon Championships main draw match at the 2014 edition, with a straight sets loss in the first round to world No.\xa03 Stan Wawrinka. In the doubles competition, he partnered with Argentinian Carlos Berlocq to play a four-hour, five-set first round loss to Martin Kližan and Dominic Thiem.\n\nIn July, Sousa reached his second-career ATP tour level final and his first of 2014 at the Swedish Open, defeating the defending champion Carlos Berlocq in the semifinals. He lost the final to the Uruguayan Pablo Cuevas in straight sets. After losing in early rounds at the German Open and the Croatia Open, Sousa entered the Canada Masters, where he was defeated in the first round by 11th seed Gulbis. At the Cincinnati Masters, Sousa was defeated by Andy Murray in the second round. Sousa was also eliminated in the second round of the Winston-Salem Open. At that tournament\'s doubles competition, Sousa reach his third semifinal of the season, teaming up with Romanian Florin Mergea. At the 2014 US Open, Sousa became the first Portuguese player to be seeded at a Grand Slam tournament, with the 32nd seed at the singles competition. He started with a five-set win over Canadian Frank Dancevic. In the second round, he lost to David Goffin. In the doubles competition, Sousa partnered with Serbian Dušan Lajović and beat the Americans Marcos Giron and Kevin King in the first round. They eventually fell to 4th seed Marcelo Melo and Ivan Dodig in the second round.\n\nIn September, Sousa was selected to join Portugal Davis Cup team against Russia for the Europe/Africa\'s Group I Relegation Playoff. He lost both his singles and doubles matches, confirming the relegation of Portugal to Group II in 2015. Sousa rebounded at the 2014 Moselle Open with his second ATP singles final of the season, after defeating former ATP top-10 Gaël Monfils in the semifinals. He lost the final in straight sets to Goffin. Sousa followed this with a first-round loss to Benjamin Becker at the 2014 Malaysian Open, where Sousa was the defending champion, and dropped out of the Top-50 for the first time in 11 months. However, a quarterfinal appearance at the doubles tournament enabled him to enter the ATP doubles top-100 for the first time. He became the second Portuguese player to reach the top-100 of both ATP rankings, after Nuno Marques. It was the first time since January 1996 that a Portuguese player held a spot on the singles and doubles top-100s simultaneously. At the China Open, Sousa lost in the second round to reigning US Open champion Marin Čilić. He followed it with a debut at the Shanghai Masters, where he lost to Juan Mónaco in the first round. Sousa also lost in the first round at the Stockholm Open, but rebounded at the Valencia Open with his second career win over a top-10 doubles team, the defending champions Alexander Peya and Bruno Soares, in the first round. Alongside Leonardo Mayer, he reached his fourth doubles semifinal that season, the first at ATP 500 level. At the Paris Masters, Sousa suffered another early exit, ending his 2014 ATP tour campaign.\n\nSousa ended 2014 as world No.\xa054, failing to keep his top-50 status from the previous season. He became the first Portuguese player to maintain top-100 status by playing exclusively on the ATP World Tour in a single season. In November, he was nominated for the 2014 Portuguese Sportsman of the Year award, again losing to cyclist Rui Costa.\n\n2015: Second ATP title and quarterfinal at Grand Slam\nSousa began the 2015 season with an early round loss at the Auckland Open. At the 2015 Australian Open, he started his campaign with wins over wild card Jordan Thompson and Martin Kližan. He progressed to a third round match-up with 6th seed Andy Murray, becoming the second Portuguese player to reach that stage. Sousa lost in straight sets to Murray. In the doubles competition, Sousa partnered with Santiago Giraldo to reach the second round, where they lost to 2nd seeds Julien Benneteau and Édouard Roger-Vasselin. In February, Sousa participated at the Open Sud de France. After defeating Philipp Kohlschreiber in the quarterfinals, he lost in the semifinals to Jerzy Janowicz in three sets. After early round losses at the Rotterdam Open and Open 13, Sousa reached the second round of the Dubai Tennis Championships, where he was beaten by Murray. Sousa was then called for the Davis Cup team to face Morocco for the Europe/Africa Zone\'s Group II first round in early March. He won his singles rubber and partnered with Frederico Ferreira Silva to win the doubles rubber and close the tie in Portugal\'s favour. After injuring his knee and suffering breathing difficulties, Sousa was eliminated from both Indian Wells Masters and Miami Masters in the first round. He returned to Barcelona for recovery.\n\nSousa returned in April at the Monte-Carlo Masters, losing in the second round to Milos Raonic. At the Barcelona Open and Estoril Open, he lost in early rounds and then was eliminated from the Madrid Masters in the second round by Stan Wawrinka and from the Rome Masters in the first round by John Isner. At the Geneva Open, Sousa won a first round match over his Brazilian homophone João Souza, which was notable for the umpire needing to refer to each player by their nationality to distinguish between them during the calls. Sousa proceeded to the final, his first of the season, where he lost to Thomaz Bellucci. At the French Open, Sousa beat Canadian Vasek Pospisil in straight sets in the first round, and was defeated by 3rd seed Andy Murray in the second round. In the men\'s doubles of the tournament, Sousa partnered with Bellucci and was knocked out in the first round by 11th seeds Jamie Murray and John Peers. In June, Sousa did not have a strong grass court season; he was defeated in the early rounds at the Rosmalen Grass Court Championships, the Queen\'s Club Championships and the Nottingham Open. At Wimbledon, Sousa was again eliminated in straight sets in the first round by French Open champion and 4th seed Stan Wawrinka. His results did not improve in the men\'s doubles competition, from which he was eliminated in the first round while partnering with Santiago Giraldo.\n\nAt the Davis Cup Group II second round against Finland, Sousa rebounded with wins in his two singles rubbers and in the doubles rubber with Gastão Elias. At the Croatia Open, he beat in succession Andreas Seppi, Fabio Fognini and Roberto Bautista Agut to reach his second final in 2015. Sousa lost the final to Dominic Thiem. After a quarterfinal exit at the Swiss Open, He suffered a first-round loss to Bernard Tomic at the Canada Masters and reached the second round at the Cincinnati Masters, where he lost to Marin Čilić. Following a brief appearance at the Winston-Salem Open, Sousa was defeated at the US Open by Ričardas Berankis in five sets in the first round. In the men\'s doubles competition, Sousa became the second Portuguese player to reach the quarterfinals of a Grand Slam event after Nuno Marques, also in men\'s doubles at the 2000 Australian Open. Sousa and his partner Argentinian Leonardo Mayer were denied a presence in the semifinals by Americans Sam Querrey and Steve Johnson.\n\nSousa returned to the Davis Cup in September to help Portugal defeat Belarus and gain promotion to Europe/Africa Zone\'s Group I in 2016. Despite losing his first singles rubber, he won the doubles rubber with Elias and the deciding singles rubber against Uladzimir Ignatik. At the St. Petersburg Open, Sousa reached his third final of the season. Following wins over Marcel Granollers, Simone Bolelli and Dominic Thiem, Sousa was runner-up to Milos Raonic in three sets. In October, Sousa entered on a 1–4 run with early round losses at the Malaysian Open, the Japan Open, the Shanghai Masters and the Kremlin Cup. At the Valencia Open, Sousa capped the season with his second career ATP title and the first of the season in four final attempts. After beating four higher-ranked players, including Benoît Paire, Sousa defeated 7th seed Roberto Bautista Agut in the final in three sets. He reached a new career-high ranking in the following week at world No.\xa034. Sousa finished the season at career-high world No.\xa033 with 38 singles wins. In November, he received the award for Tennis Personality of the Year for the second time from the Portuguese Tennis Federation and the Confederação do Desporto de Portugal.\n\nDuring 2015, physiotherapist Carlos Costa, known for his work with Tommy Haas, occasionally joined Sousa\'s entourage in selected tournaments; Sousa wanted to have a part-time member in his team responsible for that area. In 2016, Costa is expected to follow Sousa for at least 10 weeks but will remain focused on Haas\' return until Wimbledon.\n\n2016: Top 30 and first Masters 1000 quarterfinals\nAfter training at Rafael Nadal\'s home ground in the pre-season, Sousa began the 2016 season with a first-round loss to Fabio Fognini at the Auckland Open. Due to Richard Gasquet\'s absence by injury, he became the first Portuguese ever to be seeded at the Australian Open, entering the singles main draw as the 32nd seed. Following wins over Mikhail Kukushkin and Santiago Giraldo, Sousa lost in the third round to world No.\xa02 Andy Murray for the second successive year. In the doubles event, Sousa partnered with Leonardo Mayer but the pair were eliminated in the first round.\n\nIn April, Sousa reached his first Masters 1000 quarterfinals at the Mutua Madrid Open 2016, after beating Nicolas Mahut, lucky loser Marcel Granollers and Jack Sock. He lost to Rafael Nadal in three sets. His clay season ended with a second-round exit at the French Open, where he lost to Ernests Gulbis in four sets.\n\nIn June, Sousa entered Wimbledon as 31st seed. After beating Dmitry Tursunov in five sets and Dennis Novikov in four sets, he lost to Jiri Vesely in the third round, making for his best run ever at Wimbledon.\n\nSousa entered the 2016 Rogers Cup where he lost in the 1st round 6–3, 6–3 to semi-finalist Gaël Monfils.\nAt the 2016 Olympic Games in Rio de Janeiro, Sousa won his first match but lost in the next round in three sets to eventual silver medalist Juan Martín del Potro. Three weeks later at the 2016 US Open, he inflicted the heaviest defeat of the Men\'s Singles draw, defeating Víctor Estrella Burgos in the first round conceding only 2 games in 3 sets. He went on to defeat Feliciano Lopez in 4 sets but his run ended losing to a resurgent Grigor Dimitrov.\n\nAfter dropping the points from the 2015 Valencia Open in late October, Sousa finished the season at 43rd in the ATP Rankings, with just over 1,000 points.\n\n2017: Two ATP finals\n\nJoão Sousa trained with Rafael Nadal in the offseason for the second year running. He started the 2017 season at the 2017 Auckland Open once again, where he reached the final after beating Albert Ramos-Vinolas, Brydan Klein, Robin Haase and Marcos Baghdatis. He lost in three sets to Jack Sock, but the result allowed him to re-enter the Top 40 in the ATP Singles Rankings. Sousa\'s January ended with a first-round exit at the Australian Open, having lost in five sets to Jordan Thompson, his worst result at this Grand Slam since 2014.\n\nSousa started the South American swing at the Argentina Open, having lost in the quarter-finals to eventual finalist Kei Nishikori. At the Rio Open, Sousa crashed out in the first round, losing in two sets to Roberto Carballes Baena, in a match that lasted just under an hour. His last clay tournament in South America was the Brasil Open, where he lost in the semi-finals to Albert Ramos-Vinolas in three sets.\n\nIn March, Sousa entered the first two Masters 1000 tournaments of the season. At the BNP Paribas Open, he lost to Mischa Zverev in the second round. At the Miami Open, Sousa entered as 30th seed, receiving a bye for the first round, but lost in the second round to Fabio Fognini.\n\nSousa\'s late May ended with second round at the French Open, having lost to three sets to Serbian number 2\'s Novak Djokovic by 6–1, 6–4, 6–3. But, Sousa already won at first round with Serbia\'s Janko Tipsarevic, having won by four sets by 4–6, 7–6 (7–3), 6–2, 6–2.\n\nAfter the clay court season was over, he continued a streak of consecutive losses, losing matches to Philipp Kohlschreiber, Radu Albot and Dustin Brown at the Gerry Weber Open, Antalya Open and Wimbledon respectively.\n\nSousa\'s streak remained active in Croatia Open Umag, losing in 3 sets to Aljaz Bedene. However, he would eventually turn it around winning by reaching the quarterfinals in Swiss Open Gstaad and reaching the final in Generali Open Kitzbühel.\n\nHe would go on to have more losses in the remainder of the year and not many more wins. One of these includes two crucial defeats in the Davis Cup where Portugal could have qualified for the World Group for the first time in its history, specially considering the absence of the Zverev brothers and Kohlschreiber.\n\n2018–2019: Home title, Grand Slam fourth rounds\nIn 2018, Sousa made the third round of the Indian Wells Masters and the fourth round of the Miami Masters. At Indian Wells, he defeated 4th seed and world number 5 Alexander Zverev in the second round before losing to 32nd seed Milos Raonic in three sets. At Miami, he defeated 7th seed and world number 9 David Goffin in the second round only losing one game in the process before losing to 19th seed Chung Hyeon in straight sets.\n\nSousa became the first Portuguese player to win his home title in Estoril, after beating Daniil Medvedev, countryman Pedro Sousa, Kyle Edmund, Stefanos Tsitsipas and Frances Tiafoe.\nHe reached the fourth round at the US Open, losing to eventual champion Novak Djokovic.\n\nSousa failed to defend his title at the following 2019 Estoril, losing to David Goffin in the second round. \nHe reached the fourth round for a second week at 2019 Wimbledon, losing to Rafael Nadal.\n\n2020–2021: Dip in form and rankings, out of top 100, 200th ATP career win\nThroughout 2020 and 2021, Sousa showed a severe dip in form. Since the beginning of 2020, Sousa posted a win-loss record of 1–20 on the ATP tour and his ranking plummeted from No. 58 at the beginning of 2020 to No. 147 as of July 26, 2021. It was the first time since 2013 that Sousa had fallen outside the top 100 in singles rankings.\n\nAt the 2020 Davis Cup. Sousa defeated Romanian Filip Cristian Jianu to record his 200th career win.\n\n2022: First ATP title since 2018, back to top 100\n\nAt the 2022 Australian Open, Sousa participated in the qualifications to enter the men\'s singles main draw as a qualifier. He fell short of doing so, as he lost to Radu Albot in the final round of qualifying. Sousa would still end up entering the main draw as a lucky loser, facing Jannik Sinner. He lost to Sinner in straight sets.\n\nAfter being in a serious slump for more than 2 years, Sousa finally earned one of his best results in recent years, winning his 4th career singles title in Pune. He defeated Emil Ruusuvuori in the final to win his first tour-level title since 2018. As a result, he moved 51 positions up, returning into the top 100 to No. 86.\n\nPlaying style\n\nSousa\'s game is strongly based on his serve and forehand. He is right-handed and plays with a two-handed backhand. Sousa has said the forehand is his favourite shot and that he prefers playing on clay courts. He is known for expressing his emotions on court at times, often focusing on his coach or the umpire. Andy Murray described Sousa as a tough opponent who never backs down from a fight, while Novak Djokovic called him a "tough" and mentally strong player who "takes the best out of the opponent". Jamie Murray said Sousa has a "good forehand" and "likes playing on clay", despite his better results on hard courts. He has been described as having the potential of becoming a top-20 player.\n\nSousa\'s game pattern has become more offensive-minded and consistent, and his game has evolved in recent years from playing on clay to becoming more proficient on other surfaces. He won his first Challenger title on hard courts in July 2013 in his hometown Guimarães. Later in September, Sousa went on an 8–1 run to cap his semifinal run at the 2013 St. Petersburg Open and win the title at the 2013 Proton Malaysian Open, both ATP tour hard-court indoor tournaments. He continued his form on faster courts in 2014, with deep runs on grass courts at the 2014 Gerry Weber Open and Topshelf Open, and a final appearance in a hard court indoor tournament at the 2014 Moselle Open. Despite a results slump on clay earlier in the season, he still achieved his first ATP tour-level final on clay at the 2014 Swedish Open, and, eventually triumphing in home turf at the 2018 Estoril Open.\n\nEquipment and endorsements\nAs of October 2013, Sousa has been represented by Polaris Sports, a subsidiary of Jorge Mendes\'s Gestifute, which manages the career of other major Portuguese sportspeople, including Cristiano Ronaldo. Sousa uses a Wilson racquet, and is endorsed by Lotto Sport Italia since January 2014, in a two-year partnership which covers the supply of footwear, clothing and accessories. In May 2015, Sousa started a partnership with sports supplements company Gold Nutrition. Sousa\'s endorsement of sporting attires was switched to Joma in 2020.\n\nPortuguese clothing brand Mike Davis announced an agreement with Sousa to associate him with the brand\'s casual sportswear during 2014. Portuguese private bank BES was another endorser of Sousa\'s career before its bailout in August 2014. In February 2015, private bank Millenium BCP announced a sponsorship agreement with Sousa.\n\nEarlier in his career, Sousa said he struggled to find local endorsements and also lamented the financial struggles of the Portuguese Tennis Federation, which prevented support for his growing participation in the ATP Tour. He criticized local government for lack of support of sports other than football. During his junior and early professional career, Sousa\'s expenses were supported mainly by his parents and through bank loans.\n\nCareer statistics\n\nGrand Slam tournament performance timelines\n\nSingles\nCurrent through the 2022 Australian Open.\n\nDoubles\n\nATP Masters 1000 finals\n\nDoubles: 1 (1 runner-up)\n\nAwards\n2013 – CDP Portuguese Tennis Personality of the Year\n2014 – CNID Portuguese Athlete of the Year\n2015 – CDP Portuguese Tennis Personality of the Year\n\nNotes\n\nReferences\n\nExternal links\n\n Official website\n\nProfiles\n \n \n \n \n\n1989 births\nLiving people\nPortuguese expatriate sportspeople in Spain\nSportspeople from Guimarães\nPortuguese male tennis players\nTennis players at the 2016 Summer Olympics\nTennis players at the 2020 Summer Olympics\nOlympic tennis players of Portugal', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Triplet * Dataset: `dev_evaluator` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) | Metric | Value | |:-------------------|:-----------| | cosine_accuracy | 0.5771 | | dot_accuracy | 0.4229 | | manhattan_accuracy | 0.7171 | | euclidean_accuracy | 0.5771 | | **max_accuracy** | **0.7171** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 4 - `per_device_eval_batch_size`: 4 - `gradient_accumulation_steps`: 4 - `num_train_epochs`: 5 - `warmup_ratio`: 0.1 - `fp16`: True - `load_best_model_at_end`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 4 - `per_device_eval_batch_size`: 4 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 4 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | dev_evaluator_max_accuracy | |:--------:|:------:|:-------------:|:----------:|:--------------------------:| | **1.12** | **35** | **-** | **4.5711** | **0.8029** | | 2.12 | 70 | - | 5.1724 | 0.5714 | | 2.96 | 100 | 2.6042 | - | - | | 3.12 | 105 | - | 4.9164 | 0.5714 | | 4.12 | 140 | - | 4.8271 | 0.6743 | | 4.48 | 155 | - | 4.6298 | 0.7171 | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.14 - Sentence Transformers: 3.1.1 - Transformers: 4.44.2 - PyTorch: 2.4.0 - Accelerate: 0.34.2 - Datasets: 3.0.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### TripletLoss ```bibtex @misc{hermans2017defense, title={In Defense of the Triplet Loss for Person Re-Identification}, author={Alexander Hermans and Lucas Beyer and Bastian Leibe}, year={2017}, eprint={1703.07737}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION", "TRANSLATION" ]
[ "BEAR", "MEDAL", "PDR" ]
Non_BioNLP
facebook/SONAR
facebook
null
[ "mteb", "license:cc-by-nc-4.0", "model-index", "region:us" ]
1,692
1,707
0
55
--- license: cc-by-nc-4.0 tags: - mteb model-index: - name: text_sonar_basic_encoder_normalized results: - task: type: Clustering dataset: name: MTEB 8TagsClustering type: PL-MTEB/8tags-clustering config: default split: test revision: None metrics: - type: v_measure value: 18.787544117314575 - task: type: STS dataset: name: MTEB AFQMC type: C-MTEB/AFQMC config: default split: validation revision: b44c3b011063adb25877c13823db83bb193913c4 metrics: - type: cos_sim_pearson value: 17.97026675319667 - type: cos_sim_spearman value: 17.63407829948615 - type: euclidean_pearson value: 17.704571608660725 - type: euclidean_spearman value: 17.634078298828143 - type: manhattan_pearson value: 17.606959101509464 - type: manhattan_spearman value: 17.549620164990085 - task: type: STS dataset: name: MTEB ATEC type: C-MTEB/ATEC config: default split: test revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865 metrics: - type: cos_sim_pearson value: 27.670887504789675 - type: cos_sim_spearman value: 26.176629407301782 - type: euclidean_pearson value: 28.878485717935586 - type: euclidean_spearman value: 26.176635036613355 - type: manhattan_pearson value: 28.782373978690103 - type: manhattan_spearman value: 26.055266444113794 - task: type: Classification dataset: name: MTEB AllegroReviews type: PL-MTEB/allegro-reviews config: default split: test revision: None metrics: - type: accuracy value: 29.62226640159046 - type: f1 value: 27.632722290701047 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 81.49253731343285 - type: ap value: 46.61440947240349 - type: f1 value: 75.68925212232107 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (de) type: mteb/amazon_counterfactual config: de split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 72.02355460385438 - type: ap value: 83.13664983282676 - type: f1 value: 70.48997817871013 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en-ext) type: mteb/amazon_counterfactual config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 82.09145427286357 - type: ap value: 31.45181004731995 - type: f1 value: 69.41750580313406 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (ja) type: mteb/amazon_counterfactual config: ja split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 71.78800856531049 - type: ap value: 19.65443896353892 - type: f1 value: 58.436688187826334 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 62.73074999999999 - type: ap value: 58.2839375458089 - type: f1 value: 62.16204082406629 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 31.552000000000003 - type: f1 value: 31.125328770568277 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (de) type: mteb/amazon_reviews_multi config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 34.611999999999995 - type: f1 value: 33.93738697105999 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (es) type: mteb/amazon_reviews_multi config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 35.172 - type: f1 value: 34.14112656493798 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 34.910000000000004 - type: f1 value: 34.276631172288965 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (ja) type: mteb/amazon_reviews_multi config: ja split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 31.844 - type: f1 value: 31.478780923476368 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 31.912000000000003 - type: f1 value: 31.384992191831312 - task: type: Classification dataset: name: MTEB AngryTweetsClassification type: DDSC/angry-tweets config: default split: test revision: 20b0e6081892e78179356fada741b7afa381443d metrics: - type: accuracy value: 49.61795606494747 - type: f1 value: 48.63625944670304 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 8.677 - type: map_at_10 value: 14.732000000000001 - type: map_at_100 value: 15.501999999999999 - type: map_at_1000 value: 15.583 - type: map_at_3 value: 12.553 - type: map_at_5 value: 13.822999999999999 - type: mrr_at_1 value: 8.819 - type: mrr_at_10 value: 14.787 - type: mrr_at_100 value: 15.557000000000002 - type: mrr_at_1000 value: 15.638 - type: mrr_at_3 value: 12.648000000000001 - type: mrr_at_5 value: 13.879 - type: ndcg_at_1 value: 8.677 - type: ndcg_at_10 value: 18.295 - type: ndcg_at_100 value: 22.353 - type: ndcg_at_1000 value: 24.948999999999998 - type: ndcg_at_3 value: 13.789000000000001 - type: ndcg_at_5 value: 16.075 - type: precision_at_1 value: 8.677 - type: precision_at_10 value: 2.98 - type: precision_at_100 value: 0.49500000000000005 - type: precision_at_1000 value: 0.07100000000000001 - type: precision_at_3 value: 5.785 - type: precision_at_5 value: 4.58 - type: recall_at_1 value: 8.677 - type: recall_at_10 value: 29.801 - type: recall_at_100 value: 49.502 - type: recall_at_1000 value: 70.91 - type: recall_at_3 value: 17.354 - type: recall_at_5 value: 22.902 - task: type: Retrieval dataset: name: MTEB ArguAna-PL type: arguana-pl config: default split: test revision: None metrics: - type: map_at_1 value: 7.752000000000001 - type: map_at_10 value: 12.248000000000001 - type: map_at_100 value: 12.882 - type: map_at_1000 value: 12.963 - type: map_at_3 value: 10.574 - type: map_at_5 value: 11.566 - type: mrr_at_1 value: 7.824000000000001 - type: mrr_at_10 value: 12.293 - type: mrr_at_100 value: 12.928 - type: mrr_at_1000 value: 13.008000000000001 - type: mrr_at_3 value: 10.586 - type: mrr_at_5 value: 11.599 - type: ndcg_at_1 value: 7.752000000000001 - type: ndcg_at_10 value: 15.035000000000002 - type: ndcg_at_100 value: 18.497 - type: ndcg_at_1000 value: 20.896 - type: ndcg_at_3 value: 11.578 - type: ndcg_at_5 value: 13.38 - type: precision_at_1 value: 7.752000000000001 - type: precision_at_10 value: 2.404 - type: precision_at_100 value: 0.411 - type: precision_at_1000 value: 0.061 - type: precision_at_3 value: 4.836 - type: precision_at_5 value: 3.784 - type: recall_at_1 value: 7.752000000000001 - type: recall_at_10 value: 24.04 - type: recall_at_100 value: 41.11 - type: recall_at_1000 value: 60.597 - type: recall_at_3 value: 14.509 - type: recall_at_5 value: 18.919 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 26.81177290816682 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 24.346811178757022 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 50.88606427049027 - type: mrr value: 65.13004001231148 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 77.15058512395619 - type: cos_sim_spearman value: 79.10541692841936 - type: euclidean_pearson value: 75.30525535929353 - type: euclidean_spearman value: 79.10541692841936 - type: manhattan_pearson value: 75.33508042552984 - type: manhattan_spearman value: 78.84577245802708 - task: type: STS dataset: name: MTEB BQ type: C-MTEB/BQ config: default split: test revision: e3dda5e115e487b39ec7e618c0c6a29137052a55 metrics: - type: cos_sim_pearson value: 37.84739189558895 - type: cos_sim_spearman value: 37.662710610486265 - type: euclidean_pearson value: 37.5407537185213 - type: euclidean_spearman value: 37.66272446700578 - type: manhattan_pearson value: 37.863820146709706 - type: manhattan_spearman value: 38.09120266204032 - task: type: BitextMining dataset: name: MTEB BUCC (de-en) type: mteb/bucc-bitext-mining config: de-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 98.97703549060543 - type: f1 value: 98.82393876130828 - type: precision value: 98.74913013221992 - type: recall value: 98.97703549060543 - task: type: BitextMining dataset: name: MTEB BUCC (fr-en) type: mteb/bucc-bitext-mining config: fr-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 98.34910851860005 - type: f1 value: 98.09487123046446 - type: precision value: 97.97032063981217 - type: recall value: 98.34910851860005 - task: type: BitextMining dataset: name: MTEB BUCC (ru-en) type: mteb/bucc-bitext-mining config: ru-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 97.60304814686526 - type: f1 value: 97.36520032328832 - type: precision value: 97.24743101258517 - type: recall value: 97.60304814686526 - task: type: BitextMining dataset: name: MTEB BUCC (zh-en) type: mteb/bucc-bitext-mining config: zh-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 98.78883622959452 - type: f1 value: 98.71862383710724 - type: precision value: 98.68351764086361 - type: recall value: 98.78883622959452 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 73.49675324675324 - type: f1 value: 72.88538992490979 - task: type: Clustering dataset: name: MTEB BigPatentClustering type: jinaai/big-patent-clustering config: default split: test revision: 62d5330920bca426ce9d3c76ea914f15fc83e891 metrics: - type: v_measure value: 6.801245618724224 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 20.6156033971932 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 19.077587707743156 - task: type: Clustering dataset: name: MTEB BlurbsClusteringP2P type: slvnwhrl/blurbs-clustering-p2p config: default split: test revision: a2dd5b02a77de3466a3eaa98ae586b5610314496 metrics: - type: v_measure value: 27.00349462858046 - task: type: Clustering dataset: name: MTEB BlurbsClusteringS2S type: slvnwhrl/blurbs-clustering-s2s config: default split: test revision: 9bfff9a7f8f6dc6ffc9da71c48dd48b68696471d metrics: - type: v_measure value: 14.845348131791589 - task: type: BitextMining dataset: name: MTEB BornholmBitextMining type: strombergnlp/bornholmsk_parallel config: default split: test revision: 3bc5cfb4ec514264fe2db5615fac9016f7251552 metrics: - type: accuracy value: 54.0 - type: f1 value: 47.37026862026861 - type: precision value: 45.0734126984127 - type: recall value: 54.0 - task: type: Classification dataset: name: MTEB CBD type: PL-MTEB/cbd config: default split: test revision: None metrics: - type: accuracy value: 63.83000000000001 - type: ap value: 18.511972946438764 - type: f1 value: 53.16787370496645 - task: type: PairClassification dataset: name: MTEB CDSC-E type: PL-MTEB/cdsce-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 84.39999999999999 - type: cos_sim_ap value: 59.968589741258036 - type: cos_sim_f1 value: 54.90909090909091 - type: cos_sim_precision value: 41.94444444444444 - type: cos_sim_recall value: 79.47368421052632 - type: dot_accuracy value: 84.39999999999999 - type: dot_ap value: 59.968589741258036 - type: dot_f1 value: 54.90909090909091 - type: dot_precision value: 41.94444444444444 - type: dot_recall value: 79.47368421052632 - type: euclidean_accuracy value: 84.39999999999999 - type: euclidean_ap value: 59.968589741258036 - type: euclidean_f1 value: 54.90909090909091 - type: euclidean_precision value: 41.94444444444444 - type: euclidean_recall value: 79.47368421052632 - type: manhattan_accuracy value: 84.39999999999999 - type: manhattan_ap value: 60.094893481041154 - type: manhattan_f1 value: 55.452865064695004 - type: manhattan_precision value: 42.73504273504273 - type: manhattan_recall value: 78.94736842105263 - type: max_accuracy value: 84.39999999999999 - type: max_ap value: 60.094893481041154 - type: max_f1 value: 55.452865064695004 - task: type: STS dataset: name: MTEB CDSC-R type: PL-MTEB/cdscr-sts config: default split: test revision: None metrics: - type: cos_sim_pearson value: 83.8427417206754 - type: cos_sim_spearman value: 85.76946319798301 - type: euclidean_pearson value: 79.43901249477852 - type: euclidean_spearman value: 85.76946319798301 - type: manhattan_pearson value: 79.81046681362531 - type: manhattan_spearman value: 86.24115514951988 - task: type: Clustering dataset: name: MTEB CLSClusteringP2P type: C-MTEB/CLSClusteringP2P config: default split: test revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476 metrics: - type: v_measure value: 27.432031859995952 - task: type: Clustering dataset: name: MTEB CLSClusteringS2S type: C-MTEB/CLSClusteringS2S config: default split: test revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f metrics: - type: v_measure value: 28.32367305628197 - task: type: Reranking dataset: name: MTEB CMedQAv1 type: C-MTEB/CMedQAv1-reranking config: default split: test revision: 8d7f1e942507dac42dc58017c1a001c3717da7df metrics: - type: map value: 34.30720667137015 - type: mrr value: 40.24416666666666 - task: type: Reranking dataset: name: MTEB CMedQAv2 type: C-MTEB/CMedQAv2-reranking config: default split: test revision: 23d186750531a14a0357ca22cd92d712fd512ea0 metrics: - type: map value: 35.87700379259406 - type: mrr value: 40.80206349206349 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 7.655000000000001 - type: map_at_10 value: 11.681999999999999 - type: map_at_100 value: 12.464 - type: map_at_1000 value: 12.603 - type: map_at_3 value: 10.514 - type: map_at_5 value: 11.083 - type: mrr_at_1 value: 10.157 - type: mrr_at_10 value: 14.773 - type: mrr_at_100 value: 15.581999999999999 - type: mrr_at_1000 value: 15.68 - type: mrr_at_3 value: 13.519 - type: mrr_at_5 value: 14.049 - type: ndcg_at_1 value: 10.157 - type: ndcg_at_10 value: 14.527999999999999 - type: ndcg_at_100 value: 18.695999999999998 - type: ndcg_at_1000 value: 22.709 - type: ndcg_at_3 value: 12.458 - type: ndcg_at_5 value: 13.152 - type: precision_at_1 value: 10.157 - type: precision_at_10 value: 2.976 - type: precision_at_100 value: 0.634 - type: precision_at_1000 value: 0.131 - type: precision_at_3 value: 6.152 - type: precision_at_5 value: 4.378 - type: recall_at_1 value: 7.655000000000001 - type: recall_at_10 value: 20.105 - type: recall_at_100 value: 39.181 - type: recall_at_1000 value: 68.06400000000001 - type: recall_at_3 value: 14.033000000000001 - type: recall_at_5 value: 16.209 - type: map_at_1 value: 3.2329999999999997 - type: map_at_10 value: 5.378 - type: map_at_100 value: 5.774 - type: map_at_1000 value: 5.863 - type: map_at_3 value: 4.598 - type: map_at_5 value: 4.9750000000000005 - type: mrr_at_1 value: 4.076 - type: mrr_at_10 value: 6.679 - type: mrr_at_100 value: 7.151000000000001 - type: mrr_at_1000 value: 7.24 - type: mrr_at_3 value: 5.722 - type: mrr_at_5 value: 6.2059999999999995 - type: ndcg_at_1 value: 4.076 - type: ndcg_at_10 value: 6.994 - type: ndcg_at_100 value: 9.366 - type: ndcg_at_1000 value: 12.181000000000001 - type: ndcg_at_3 value: 5.356000000000001 - type: ndcg_at_5 value: 6.008 - type: precision_at_1 value: 4.076 - type: precision_at_10 value: 1.459 - type: precision_at_100 value: 0.334 - type: precision_at_1000 value: 0.075 - type: precision_at_3 value: 2.718 - type: precision_at_5 value: 2.089 - type: recall_at_1 value: 3.2329999999999997 - type: recall_at_10 value: 10.749 - type: recall_at_100 value: 21.776 - type: recall_at_1000 value: 42.278999999999996 - type: recall_at_3 value: 6.146999999999999 - type: recall_at_5 value: 7.779999999999999 - type: map_at_1 value: 8.036 - type: map_at_10 value: 12.727 - type: map_at_100 value: 13.532 - type: map_at_1000 value: 13.653 - type: map_at_3 value: 11.15 - type: map_at_5 value: 11.965 - type: mrr_at_1 value: 9.404 - type: mrr_at_10 value: 14.493 - type: mrr_at_100 value: 15.274 - type: mrr_at_1000 value: 15.370000000000001 - type: mrr_at_3 value: 12.853 - type: mrr_at_5 value: 13.696 - type: ndcg_at_1 value: 9.404 - type: ndcg_at_10 value: 15.784 - type: ndcg_at_100 value: 20.104 - type: ndcg_at_1000 value: 23.357 - type: ndcg_at_3 value: 12.61 - type: ndcg_at_5 value: 13.988 - type: precision_at_1 value: 9.404 - type: precision_at_10 value: 2.947 - type: precision_at_100 value: 0.562 - type: precision_at_1000 value: 0.093 - type: precision_at_3 value: 6.04 - type: precision_at_5 value: 4.4639999999999995 - type: recall_at_1 value: 8.036 - type: recall_at_10 value: 23.429 - type: recall_at_100 value: 43.728 - type: recall_at_1000 value: 68.10000000000001 - type: recall_at_3 value: 14.99 - type: recall_at_5 value: 18.274 - type: map_at_1 value: 3.653 - type: map_at_10 value: 5.941 - type: map_at_100 value: 6.512 - type: map_at_1000 value: 6.6129999999999995 - type: map_at_3 value: 5.2540000000000004 - type: map_at_5 value: 5.645 - type: mrr_at_1 value: 3.955 - type: mrr_at_10 value: 6.4079999999999995 - type: mrr_at_100 value: 7.005999999999999 - type: mrr_at_1000 value: 7.105 - type: mrr_at_3 value: 5.593 - type: mrr_at_5 value: 6.051 - type: ndcg_at_1 value: 3.955 - type: ndcg_at_10 value: 7.342 - type: ndcg_at_100 value: 10.543 - type: ndcg_at_1000 value: 14.011000000000001 - type: ndcg_at_3 value: 5.853 - type: ndcg_at_5 value: 6.586 - type: precision_at_1 value: 3.955 - type: precision_at_10 value: 1.266 - type: precision_at_100 value: 0.315 - type: precision_at_1000 value: 0.066 - type: precision_at_3 value: 2.5989999999999998 - type: precision_at_5 value: 1.966 - type: recall_at_1 value: 3.653 - type: recall_at_10 value: 11.232000000000001 - type: recall_at_100 value: 26.625 - type: recall_at_1000 value: 54.476 - type: recall_at_3 value: 7.269 - type: recall_at_5 value: 8.982999999999999 - type: map_at_1 value: 2.257 - type: map_at_10 value: 3.881 - type: map_at_100 value: 4.279 - type: map_at_1000 value: 4.417 - type: map_at_3 value: 3.4070000000000005 - type: map_at_5 value: 3.744 - type: mrr_at_1 value: 2.9850000000000003 - type: mrr_at_10 value: 4.756 - type: mrr_at_100 value: 5.228 - type: mrr_at_1000 value: 5.354 - type: mrr_at_3 value: 4.125 - type: mrr_at_5 value: 4.567 - type: ndcg_at_1 value: 2.9850000000000003 - type: ndcg_at_10 value: 4.936999999999999 - type: ndcg_at_100 value: 7.664 - type: ndcg_at_1000 value: 12.045 - type: ndcg_at_3 value: 3.956 - type: ndcg_at_5 value: 4.584 - type: precision_at_1 value: 2.9850000000000003 - type: precision_at_10 value: 0.9329999999999999 - type: precision_at_100 value: 0.29 - type: precision_at_1000 value: 0.083 - type: precision_at_3 value: 1.949 - type: precision_at_5 value: 1.567 - type: recall_at_1 value: 2.257 - type: recall_at_10 value: 7.382 - type: recall_at_100 value: 20.689 - type: recall_at_1000 value: 53.586 - type: recall_at_3 value: 4.786 - type: recall_at_5 value: 6.2829999999999995 - type: map_at_1 value: 6.691 - type: map_at_10 value: 9.447 - type: map_at_100 value: 10.174 - type: map_at_1000 value: 10.308 - type: map_at_3 value: 8.187999999999999 - type: map_at_5 value: 8.852 - type: mrr_at_1 value: 8.566 - type: mrr_at_10 value: 12.036 - type: mrr_at_100 value: 12.817 - type: mrr_at_1000 value: 12.918 - type: mrr_at_3 value: 10.539 - type: mrr_at_5 value: 11.381 - type: ndcg_at_1 value: 8.566 - type: ndcg_at_10 value: 11.95 - type: ndcg_at_100 value: 15.831000000000001 - type: ndcg_at_1000 value: 19.561 - type: ndcg_at_3 value: 9.467 - type: ndcg_at_5 value: 10.544 - type: precision_at_1 value: 8.566 - type: precision_at_10 value: 2.387 - type: precision_at_100 value: 0.538 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 4.556 - type: precision_at_5 value: 3.5029999999999997 - type: recall_at_1 value: 6.691 - type: recall_at_10 value: 17.375 - type: recall_at_100 value: 34.503 - type: recall_at_1000 value: 61.492000000000004 - type: recall_at_3 value: 10.134 - type: recall_at_5 value: 13.056999999999999 - type: map_at_1 value: 4.68 - type: map_at_10 value: 6.776999999999999 - type: map_at_100 value: 7.207 - type: map_at_1000 value: 7.321999999999999 - type: map_at_3 value: 6.007 - type: map_at_5 value: 6.356000000000001 - type: mrr_at_1 value: 5.479 - type: mrr_at_10 value: 8.094999999999999 - type: mrr_at_100 value: 8.622 - type: mrr_at_1000 value: 8.729000000000001 - type: mrr_at_3 value: 7.249 - type: mrr_at_5 value: 7.6770000000000005 - type: ndcg_at_1 value: 5.479 - type: ndcg_at_10 value: 8.474 - type: ndcg_at_100 value: 11.134 - type: ndcg_at_1000 value: 14.759 - type: ndcg_at_3 value: 6.888 - type: ndcg_at_5 value: 7.504 - type: precision_at_1 value: 5.479 - type: precision_at_10 value: 1.575 - type: precision_at_100 value: 0.35000000000000003 - type: precision_at_1000 value: 0.08099999999999999 - type: precision_at_3 value: 3.272 - type: precision_at_5 value: 2.374 - type: recall_at_1 value: 4.68 - type: recall_at_10 value: 12.552 - type: recall_at_100 value: 24.91 - type: recall_at_1000 value: 52.019999999999996 - type: recall_at_3 value: 8.057 - type: recall_at_5 value: 9.629999999999999 - type: map_at_1 value: 4.741750000000001 - type: map_at_10 value: 7.103916666666667 - type: map_at_100 value: 7.656499999999998 - type: map_at_1000 value: 7.767583333333332 - type: map_at_3 value: 6.262416666666668 - type: map_at_5 value: 6.693916666666667 - type: mrr_at_1 value: 5.780583333333332 - type: mrr_at_10 value: 8.576333333333332 - type: mrr_at_100 value: 9.17975 - type: mrr_at_1000 value: 9.279083333333334 - type: mrr_at_3 value: 7.608833333333333 - type: mrr_at_5 value: 8.111333333333333 - type: ndcg_at_1 value: 5.780583333333332 - type: ndcg_at_10 value: 8.866166666666668 - type: ndcg_at_100 value: 12.037083333333333 - type: ndcg_at_1000 value: 15.4555 - type: ndcg_at_3 value: 7.179083333333335 - type: ndcg_at_5 value: 7.897166666666666 - type: precision_at_1 value: 5.780583333333332 - type: precision_at_10 value: 1.6935833333333334 - type: precision_at_100 value: 0.3921666666666667 - type: precision_at_1000 value: 0.08391666666666667 - type: precision_at_3 value: 3.425416666666666 - type: precision_at_5 value: 2.5570833333333334 - type: recall_at_1 value: 4.741750000000001 - type: recall_at_10 value: 12.889083333333334 - type: recall_at_100 value: 27.81866666666667 - type: recall_at_1000 value: 53.52316666666667 - type: recall_at_3 value: 8.179333333333332 - type: recall_at_5 value: 10.004083333333334 - type: map_at_1 value: 3.7130000000000005 - type: map_at_10 value: 5.734 - type: map_at_100 value: 6.297999999999999 - type: map_at_1000 value: 6.388000000000001 - type: map_at_3 value: 5.119 - type: map_at_5 value: 5.432 - type: mrr_at_1 value: 4.9079999999999995 - type: mrr_at_10 value: 7.2940000000000005 - type: mrr_at_100 value: 7.8549999999999995 - type: mrr_at_1000 value: 7.95 - type: mrr_at_3 value: 6.621 - type: mrr_at_5 value: 6.950000000000001 - type: ndcg_at_1 value: 4.9079999999999995 - type: ndcg_at_10 value: 7.167999999999999 - type: ndcg_at_100 value: 10.436 - type: ndcg_at_1000 value: 13.370999999999999 - type: ndcg_at_3 value: 5.959 - type: ndcg_at_5 value: 6.481000000000001 - type: precision_at_1 value: 4.9079999999999995 - type: precision_at_10 value: 1.3339999999999999 - type: precision_at_100 value: 0.33899999999999997 - type: precision_at_1000 value: 0.065 - type: precision_at_3 value: 2.965 - type: precision_at_5 value: 2.117 - type: recall_at_1 value: 3.7130000000000005 - type: recall_at_10 value: 10.156 - type: recall_at_100 value: 25.955000000000002 - type: recall_at_1000 value: 48.891 - type: recall_at_3 value: 6.795 - type: recall_at_5 value: 8.187999999999999 - type: map_at_1 value: 2.114 - type: map_at_10 value: 3.4290000000000003 - type: map_at_100 value: 3.789 - type: map_at_1000 value: 3.878 - type: map_at_3 value: 2.9139999999999997 - type: map_at_5 value: 3.148 - type: mrr_at_1 value: 2.65 - type: mrr_at_10 value: 4.252000000000001 - type: mrr_at_100 value: 4.689 - type: mrr_at_1000 value: 4.782 - type: mrr_at_3 value: 3.671 - type: mrr_at_5 value: 3.9370000000000003 - type: ndcg_at_1 value: 2.65 - type: ndcg_at_10 value: 4.47 - type: ndcg_at_100 value: 6.654 - type: ndcg_at_1000 value: 9.713 - type: ndcg_at_3 value: 3.424 - type: ndcg_at_5 value: 3.794 - type: precision_at_1 value: 2.65 - type: precision_at_10 value: 0.9119999999999999 - type: precision_at_100 value: 0.248 - type: precision_at_1000 value: 0.063 - type: precision_at_3 value: 1.7209999999999999 - type: precision_at_5 value: 1.287 - type: recall_at_1 value: 2.114 - type: recall_at_10 value: 6.927 - type: recall_at_100 value: 17.26 - type: recall_at_1000 value: 40.672999999999995 - type: recall_at_3 value: 3.8859999999999997 - type: recall_at_5 value: 4.861 - type: map_at_1 value: 6.055 - type: map_at_10 value: 7.704999999999999 - type: map_at_100 value: 8.169 - type: map_at_1000 value: 8.257 - type: map_at_3 value: 7.033 - type: map_at_5 value: 7.4079999999999995 - type: mrr_at_1 value: 6.81 - type: mrr_at_10 value: 8.955 - type: mrr_at_100 value: 9.497 - type: mrr_at_1000 value: 9.583 - type: mrr_at_3 value: 8.116 - type: mrr_at_5 value: 8.526 - type: ndcg_at_1 value: 6.81 - type: ndcg_at_10 value: 9.113 - type: ndcg_at_100 value: 11.884 - type: ndcg_at_1000 value: 14.762 - type: ndcg_at_3 value: 7.675999999999999 - type: ndcg_at_5 value: 8.325000000000001 - type: precision_at_1 value: 6.81 - type: precision_at_10 value: 1.558 - type: precision_at_100 value: 0.34299999999999997 - type: precision_at_1000 value: 0.068 - type: precision_at_3 value: 3.2960000000000003 - type: precision_at_5 value: 2.388 - type: recall_at_1 value: 6.055 - type: recall_at_10 value: 12.219 - type: recall_at_100 value: 25.304 - type: recall_at_1000 value: 47.204 - type: recall_at_3 value: 8.387 - type: recall_at_5 value: 9.991 - type: map_at_1 value: 5.043 - type: map_at_10 value: 7.394 - type: map_at_100 value: 8.096 - type: map_at_1000 value: 8.243 - type: map_at_3 value: 6.300999999999999 - type: map_at_5 value: 6.7780000000000005 - type: mrr_at_1 value: 6.126 - type: mrr_at_10 value: 9.308 - type: mrr_at_100 value: 10.091 - type: mrr_at_1000 value: 10.206 - type: mrr_at_3 value: 7.938000000000001 - type: mrr_at_5 value: 8.64 - type: ndcg_at_1 value: 6.126 - type: ndcg_at_10 value: 9.474 - type: ndcg_at_100 value: 13.238 - type: ndcg_at_1000 value: 17.366 - type: ndcg_at_3 value: 7.3260000000000005 - type: ndcg_at_5 value: 8.167 - type: precision_at_1 value: 6.126 - type: precision_at_10 value: 1.9959999999999998 - type: precision_at_100 value: 0.494 - type: precision_at_1000 value: 0.125 - type: precision_at_3 value: 3.557 - type: precision_at_5 value: 2.9250000000000003 - type: recall_at_1 value: 5.043 - type: recall_at_10 value: 13.812 - type: recall_at_100 value: 31.375999999999998 - type: recall_at_1000 value: 61.309999999999995 - type: recall_at_3 value: 7.8020000000000005 - type: recall_at_5 value: 9.725999999999999 - type: map_at_1 value: 3.771 - type: map_at_10 value: 5.152 - type: map_at_100 value: 5.584 - type: map_at_1000 value: 5.666 - type: map_at_3 value: 4.664 - type: map_at_5 value: 4.941 - type: mrr_at_1 value: 4.251 - type: mrr_at_10 value: 5.867 - type: mrr_at_100 value: 6.345000000000001 - type: mrr_at_1000 value: 6.432 - type: mrr_at_3 value: 5.36 - type: mrr_at_5 value: 5.656 - type: ndcg_at_1 value: 4.251 - type: ndcg_at_10 value: 6.16 - type: ndcg_at_100 value: 8.895 - type: ndcg_at_1000 value: 11.631 - type: ndcg_at_3 value: 5.176 - type: ndcg_at_5 value: 5.633 - type: precision_at_1 value: 4.251 - type: precision_at_10 value: 0.98 - type: precision_at_100 value: 0.259 - type: precision_at_1000 value: 0.053 - type: precision_at_3 value: 2.2800000000000002 - type: precision_at_5 value: 1.627 - type: recall_at_1 value: 3.771 - type: recall_at_10 value: 8.731 - type: recall_at_100 value: 22.517 - type: recall_at_1000 value: 44.183 - type: recall_at_3 value: 5.866 - type: recall_at_5 value: 7.066999999999999 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 0.543 - type: map_at_10 value: 1.027 - type: map_at_100 value: 1.228 - type: map_at_1000 value: 1.266 - type: map_at_3 value: 0.756 - type: map_at_5 value: 0.877 - type: mrr_at_1 value: 1.3679999999999999 - type: mrr_at_10 value: 2.474 - type: mrr_at_100 value: 2.8369999999999997 - type: mrr_at_1000 value: 2.894 - type: mrr_at_3 value: 1.8780000000000001 - type: mrr_at_5 value: 2.1319999999999997 - type: ndcg_at_1 value: 1.3679999999999999 - type: ndcg_at_10 value: 1.791 - type: ndcg_at_100 value: 3.06 - type: ndcg_at_1000 value: 4.501 - type: ndcg_at_3 value: 1.16 - type: ndcg_at_5 value: 1.3419999999999999 - type: precision_at_1 value: 1.3679999999999999 - type: precision_at_10 value: 0.697 - type: precision_at_100 value: 0.193 - type: precision_at_1000 value: 0.045 - type: precision_at_3 value: 0.9339999999999999 - type: precision_at_5 value: 0.808 - type: recall_at_1 value: 0.543 - type: recall_at_10 value: 2.5149999999999997 - type: recall_at_100 value: 7.356999999999999 - type: recall_at_1000 value: 16.233 - type: recall_at_3 value: 1.018 - type: recall_at_5 value: 1.5150000000000001 - task: type: Retrieval dataset: name: MTEB CmedqaRetrieval type: C-MTEB/CmedqaRetrieval config: default split: dev revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301 metrics: - type: map_at_1 value: 3.7289999999999996 - type: map_at_10 value: 5.524 - type: map_at_100 value: 5.984 - type: map_at_1000 value: 6.087 - type: map_at_3 value: 4.854 - type: map_at_5 value: 5.2299999999999995 - type: mrr_at_1 value: 6.177 - type: mrr_at_10 value: 8.541 - type: mrr_at_100 value: 9.073 - type: mrr_at_1000 value: 9.161 - type: mrr_at_3 value: 7.71 - type: mrr_at_5 value: 8.148 - type: ndcg_at_1 value: 6.177 - type: ndcg_at_10 value: 7.217999999999999 - type: ndcg_at_100 value: 9.927 - type: ndcg_at_1000 value: 13.062000000000001 - type: ndcg_at_3 value: 6.0569999999999995 - type: ndcg_at_5 value: 6.544999999999999 - type: precision_at_1 value: 6.177 - type: precision_at_10 value: 1.6729999999999998 - type: precision_at_100 value: 0.38999999999999996 - type: precision_at_1000 value: 0.082 - type: precision_at_3 value: 3.5090000000000003 - type: precision_at_5 value: 2.596 - type: recall_at_1 value: 3.7289999999999996 - type: recall_at_10 value: 9.501 - type: recall_at_100 value: 21.444 - type: recall_at_1000 value: 43.891999999999996 - type: recall_at_3 value: 6.053 - type: recall_at_5 value: 7.531000000000001 - task: type: PairClassification dataset: name: MTEB Cmnli type: C-MTEB/CMNLI config: default split: validation revision: 41bc36f332156f7adc9e38f53777c959b2ae9766 metrics: - type: cos_sim_accuracy value: 58.123872519543 - type: cos_sim_ap value: 61.86046509726734 - type: cos_sim_f1 value: 68.18181818181817 - type: cos_sim_precision value: 52.4198617221873 - type: cos_sim_recall value: 97.49824643441664 - type: dot_accuracy value: 58.123872519543 - type: dot_ap value: 61.860555259802986 - type: dot_f1 value: 68.18181818181817 - type: dot_precision value: 52.4198617221873 - type: dot_recall value: 97.49824643441664 - type: euclidean_accuracy value: 58.123872519543 - type: euclidean_ap value: 61.87698627731538 - type: euclidean_f1 value: 68.18181818181817 - type: euclidean_precision value: 52.4198617221873 - type: euclidean_recall value: 97.49824643441664 - type: manhattan_accuracy value: 58.123872519543 - type: manhattan_ap value: 61.99468883207791 - type: manhattan_f1 value: 68.33675564681727 - type: manhattan_precision value: 52.671562420866046 - type: manhattan_recall value: 97.26443768996961 - type: max_accuracy value: 58.123872519543 - type: max_ap value: 61.99468883207791 - type: max_f1 value: 68.33675564681727 - task: type: Retrieval dataset: name: MTEB CovidRetrieval type: C-MTEB/CovidRetrieval config: default split: dev revision: 1271c7809071a13532e05f25fb53511ffce77117 metrics: - type: map_at_1 value: 6.428000000000001 - type: map_at_10 value: 8.883000000000001 - type: map_at_100 value: 9.549000000000001 - type: map_at_1000 value: 9.665 - type: map_at_3 value: 8.061 - type: map_at_5 value: 8.475000000000001 - type: mrr_at_1 value: 6.428000000000001 - type: mrr_at_10 value: 8.896999999999998 - type: mrr_at_100 value: 9.557 - type: mrr_at_1000 value: 9.674000000000001 - type: mrr_at_3 value: 8.061 - type: mrr_at_5 value: 8.488 - type: ndcg_at_1 value: 6.428000000000001 - type: ndcg_at_10 value: 10.382 - type: ndcg_at_100 value: 14.235999999999999 - type: ndcg_at_1000 value: 18.04 - type: ndcg_at_3 value: 8.613999999999999 - type: ndcg_at_5 value: 9.372 - type: precision_at_1 value: 6.428000000000001 - type: precision_at_10 value: 1.528 - type: precision_at_100 value: 0.349 - type: precision_at_1000 value: 0.067 - type: precision_at_3 value: 3.4070000000000005 - type: precision_at_5 value: 2.424 - type: recall_at_1 value: 6.428000000000001 - type: recall_at_10 value: 15.226999999999999 - type: recall_at_100 value: 34.694 - type: recall_at_1000 value: 66.07 - type: recall_at_3 value: 10.221 - type: recall_at_5 value: 12.065 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 0.541 - type: map_at_10 value: 1.1560000000000001 - type: map_at_100 value: 1.508 - type: map_at_1000 value: 1.598 - type: map_at_3 value: 0.918 - type: map_at_5 value: 0.992 - type: mrr_at_1 value: 9.5 - type: mrr_at_10 value: 13.446 - type: mrr_at_100 value: 13.935 - type: mrr_at_1000 value: 14.008999999999999 - type: mrr_at_3 value: 12.083 - type: mrr_at_5 value: 12.733 - type: ndcg_at_1 value: 5.75 - type: ndcg_at_10 value: 3.9210000000000003 - type: ndcg_at_100 value: 3.975 - type: ndcg_at_1000 value: 5.634 - type: ndcg_at_3 value: 4.87 - type: ndcg_at_5 value: 4.259 - type: precision_at_1 value: 9.5 - type: precision_at_10 value: 3.9 - type: precision_at_100 value: 1.015 - type: precision_at_1000 value: 0.297 - type: precision_at_3 value: 6.75 - type: precision_at_5 value: 5.25 - type: recall_at_1 value: 0.541 - type: recall_at_10 value: 2.228 - type: recall_at_100 value: 4.9430000000000005 - type: recall_at_1000 value: 11.661000000000001 - type: recall_at_3 value: 1.264 - type: recall_at_5 value: 1.4869999999999999 - task: type: Classification dataset: name: MTEB DKHateClassification type: DDSC/dkhate config: default split: test revision: 59d12749a3c91a186063c7d729ec392fda94681c metrics: - type: accuracy value: 69.96960486322187 - type: ap value: 91.23131906690253 - type: f1 value: 57.11872970138122 - task: type: Classification dataset: name: MTEB DalajClassification type: AI-Sweden/SuperLim config: default split: test revision: 7ebf0b4caa7b2ae39698a889de782c09e6f5ee56 metrics: - type: accuracy value: 49.75225225225225 - type: ap value: 49.88223192425368 - type: f1 value: 49.55059044107012 - task: type: Classification dataset: name: MTEB DanishPoliticalCommentsClassification type: danish_political_comments config: default split: train revision: edbb03726c04a0efab14fc8c3b8b79e4d420e5a1 metrics: - type: accuracy value: 37.58534554537886 - type: f1 value: 33.99440115952713 - task: type: Retrieval dataset: name: MTEB DuRetrieval type: C-MTEB/DuRetrieval config: default split: dev revision: a1a333e290fe30b10f3f56498e3a0d911a693ced metrics: - type: map_at_1 value: 0.608 - type: map_at_10 value: 0.882 - type: map_at_100 value: 0.962 - type: map_at_1000 value: 1.028 - type: map_at_3 value: 0.749 - type: map_at_5 value: 0.8240000000000001 - type: mrr_at_1 value: 2.0500000000000003 - type: mrr_at_10 value: 2.796 - type: mrr_at_100 value: 2.983 - type: mrr_at_1000 value: 3.09 - type: mrr_at_3 value: 2.483 - type: mrr_at_5 value: 2.661 - type: ndcg_at_1 value: 2.0500000000000003 - type: ndcg_at_10 value: 1.435 - type: ndcg_at_100 value: 1.991 - type: ndcg_at_1000 value: 4.961 - type: ndcg_at_3 value: 1.428 - type: ndcg_at_5 value: 1.369 - type: precision_at_1 value: 2.0500000000000003 - type: precision_at_10 value: 0.5349999999999999 - type: precision_at_100 value: 0.127 - type: precision_at_1000 value: 0.086 - type: precision_at_3 value: 1.05 - type: precision_at_5 value: 0.84 - type: recall_at_1 value: 0.608 - type: recall_at_10 value: 1.54 - type: recall_at_100 value: 3.5069999999999997 - type: recall_at_1000 value: 20.531 - type: recall_at_3 value: 0.901 - type: recall_at_5 value: 1.168 - task: type: Retrieval dataset: name: MTEB EcomRetrieval type: C-MTEB/EcomRetrieval config: default split: dev revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9 metrics: - type: map_at_1 value: 3.1 - type: map_at_10 value: 4.016 - type: map_at_100 value: 4.455 - type: map_at_1000 value: 4.579 - type: map_at_3 value: 3.567 - type: map_at_5 value: 3.8019999999999996 - type: mrr_at_1 value: 3.1 - type: mrr_at_10 value: 4.016 - type: mrr_at_100 value: 4.455 - type: mrr_at_1000 value: 4.579 - type: mrr_at_3 value: 3.567 - type: mrr_at_5 value: 3.8019999999999996 - type: ndcg_at_1 value: 3.1 - type: ndcg_at_10 value: 4.684 - type: ndcg_at_100 value: 7.284 - type: ndcg_at_1000 value: 11.689 - type: ndcg_at_3 value: 3.7289999999999996 - type: ndcg_at_5 value: 4.146 - type: precision_at_1 value: 3.1 - type: precision_at_10 value: 0.69 - type: precision_at_100 value: 0.202 - type: precision_at_1000 value: 0.056999999999999995 - type: precision_at_3 value: 1.4000000000000001 - type: precision_at_5 value: 1.04 - type: recall_at_1 value: 3.1 - type: recall_at_10 value: 6.9 - type: recall_at_100 value: 20.200000000000003 - type: recall_at_1000 value: 57.3 - type: recall_at_3 value: 4.2 - type: recall_at_5 value: 5.2 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 38.285000000000004 - type: f1 value: 35.35979931355028 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 0.9249999999999999 - type: map_at_10 value: 1.311 - type: map_at_100 value: 1.363 - type: map_at_1000 value: 1.376 - type: map_at_3 value: 1.145 - type: map_at_5 value: 1.233 - type: mrr_at_1 value: 0.975 - type: mrr_at_10 value: 1.371 - type: mrr_at_100 value: 1.426 - type: mrr_at_1000 value: 1.439 - type: mrr_at_3 value: 1.195 - type: mrr_at_5 value: 1.286 - type: ndcg_at_1 value: 0.975 - type: ndcg_at_10 value: 1.5859999999999999 - type: ndcg_at_100 value: 1.8800000000000001 - type: ndcg_at_1000 value: 2.313 - type: ndcg_at_3 value: 1.229 - type: ndcg_at_5 value: 1.388 - type: precision_at_1 value: 0.975 - type: precision_at_10 value: 0.254 - type: precision_at_100 value: 0.041 - type: precision_at_1000 value: 0.008 - type: precision_at_3 value: 0.49 - type: precision_at_5 value: 0.375 - type: recall_at_1 value: 0.9249999999999999 - type: recall_at_10 value: 2.4250000000000003 - type: recall_at_100 value: 3.866 - type: recall_at_1000 value: 7.401000000000001 - type: recall_at_3 value: 1.4200000000000002 - type: recall_at_5 value: 1.81 - task: type: Retrieval dataset: name: MTEB FiQA-PL type: fiqa-pl config: default split: test revision: None metrics: - type: map_at_1 value: 0.959 - type: map_at_10 value: 1.952 - type: map_at_100 value: 2.281 - type: map_at_1000 value: 2.393 - type: map_at_3 value: 1.703 - type: map_at_5 value: 1.8319999999999999 - type: mrr_at_1 value: 2.469 - type: mrr_at_10 value: 4.547 - type: mrr_at_100 value: 5.021 - type: mrr_at_1000 value: 5.1339999999999995 - type: mrr_at_3 value: 3.884 - type: mrr_at_5 value: 4.223 - type: ndcg_at_1 value: 2.469 - type: ndcg_at_10 value: 3.098 - type: ndcg_at_100 value: 5.177 - type: ndcg_at_1000 value: 8.889 - type: ndcg_at_3 value: 2.7119999999999997 - type: ndcg_at_5 value: 2.8000000000000003 - type: precision_at_1 value: 2.469 - type: precision_at_10 value: 1.065 - type: precision_at_100 value: 0.321 - type: precision_at_1000 value: 0.095 - type: precision_at_3 value: 2.109 - type: precision_at_5 value: 1.574 - type: recall_at_1 value: 0.959 - type: recall_at_10 value: 4.075 - type: recall_at_100 value: 12.487 - type: recall_at_1000 value: 36.854 - type: recall_at_3 value: 2.632 - type: recall_at_5 value: 3.231 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 1.032 - type: map_at_10 value: 1.8849999999999998 - type: map_at_100 value: 2.167 - type: map_at_1000 value: 2.266 - type: map_at_3 value: 1.609 - type: map_at_5 value: 1.7680000000000002 - type: mrr_at_1 value: 2.6229999999999998 - type: mrr_at_10 value: 4.479 - type: mrr_at_100 value: 4.92 - type: mrr_at_1000 value: 5.029999999999999 - type: mrr_at_3 value: 3.7289999999999996 - type: mrr_at_5 value: 4.138 - type: ndcg_at_1 value: 2.6229999999999998 - type: ndcg_at_10 value: 3.005 - type: ndcg_at_100 value: 5.01 - type: ndcg_at_1000 value: 8.312 - type: ndcg_at_3 value: 2.548 - type: ndcg_at_5 value: 2.735 - type: precision_at_1 value: 2.6229999999999998 - type: precision_at_10 value: 1.049 - type: precision_at_100 value: 0.31 - type: precision_at_1000 value: 0.089 - type: precision_at_3 value: 1.955 - type: precision_at_5 value: 1.574 - type: recall_at_1 value: 1.032 - type: recall_at_10 value: 3.888 - type: recall_at_100 value: 12.414 - type: recall_at_1000 value: 33.823 - type: recall_at_3 value: 2.37 - type: recall_at_5 value: 3.077 - task: type: Retrieval dataset: name: MTEB GerDaLIR type: jinaai/ger_da_lir config: default split: test revision: 0bb47f1d73827e96964edb84dfe552f62f4fd5eb metrics: - type: map_at_1 value: 0.542 - type: map_at_10 value: 0.8130000000000001 - type: map_at_100 value: 0.898 - type: map_at_1000 value: 0.9209999999999999 - type: map_at_3 value: 0.709 - type: map_at_5 value: 0.764 - type: mrr_at_1 value: 0.594 - type: mrr_at_10 value: 0.8880000000000001 - type: mrr_at_100 value: 0.9820000000000001 - type: mrr_at_1000 value: 1.008 - type: mrr_at_3 value: 0.774 - type: mrr_at_5 value: 0.832 - type: ndcg_at_1 value: 0.594 - type: ndcg_at_10 value: 1.0030000000000001 - type: ndcg_at_100 value: 1.537 - type: ndcg_at_1000 value: 2.4330000000000003 - type: ndcg_at_3 value: 0.782 - type: ndcg_at_5 value: 0.882 - type: precision_at_1 value: 0.594 - type: precision_at_10 value: 0.16999999999999998 - type: precision_at_100 value: 0.048 - type: precision_at_1000 value: 0.013 - type: precision_at_3 value: 0.33899999999999997 - type: precision_at_5 value: 0.255 - type: recall_at_1 value: 0.542 - type: recall_at_10 value: 1.533 - type: recall_at_100 value: 4.204 - type: recall_at_1000 value: 11.574 - type: recall_at_3 value: 0.932 - type: recall_at_5 value: 1.172 - task: type: Retrieval dataset: name: MTEB GermanDPR type: deepset/germandpr config: default split: test revision: 5129d02422a66be600ac89cd3e8531b4f97d347d metrics: - type: map_at_1 value: 25.561 - type: map_at_10 value: 38.873000000000005 - type: map_at_100 value: 40.004 - type: map_at_1000 value: 40.03 - type: map_at_3 value: 34.585 - type: map_at_5 value: 36.980000000000004 - type: mrr_at_1 value: 25.463 - type: mrr_at_10 value: 38.792 - type: mrr_at_100 value: 39.922000000000004 - type: mrr_at_1000 value: 39.949 - type: mrr_at_3 value: 34.504000000000005 - type: mrr_at_5 value: 36.899 - type: ndcg_at_1 value: 25.561 - type: ndcg_at_10 value: 46.477000000000004 - type: ndcg_at_100 value: 51.751999999999995 - type: ndcg_at_1000 value: 52.366 - type: ndcg_at_3 value: 37.645 - type: ndcg_at_5 value: 41.953 - type: precision_at_1 value: 25.561 - type: precision_at_10 value: 7.083 - type: precision_at_100 value: 0.9490000000000001 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 15.512 - type: precision_at_5 value: 11.395 - type: recall_at_1 value: 25.561 - type: recall_at_10 value: 70.829 - type: recall_at_100 value: 94.92699999999999 - type: recall_at_1000 value: 99.61 - type: recall_at_3 value: 46.537 - type: recall_at_5 value: 56.976000000000006 - task: type: Retrieval dataset: name: MTEB GermanQuAD-Retrieval type: mteb/germanquad-retrieval config: default split: test revision: f5c87ae5a2e7a5106606314eef45255f03151bb3 metrics: - type: map_at_1 value: 53.539 - type: map_at_10 value: 65.144 - type: map_at_100 value: 65.627 - type: map_at_1000 value: 65.63900000000001 - type: map_at_3 value: 62.598 - type: map_at_5 value: 64.302 - type: mrr_at_1 value: 53.539 - type: mrr_at_10 value: 65.144 - type: mrr_at_100 value: 65.627 - type: mrr_at_1000 value: 65.63900000000001 - type: mrr_at_3 value: 62.598 - type: mrr_at_5 value: 64.302 - type: ndcg_at_1 value: 53.539 - type: ndcg_at_10 value: 70.602 - type: ndcg_at_100 value: 72.886 - type: ndcg_at_1000 value: 73.14500000000001 - type: ndcg_at_3 value: 65.52900000000001 - type: ndcg_at_5 value: 68.596 - type: precision_at_1 value: 53.539 - type: precision_at_10 value: 8.757 - type: precision_at_100 value: 0.9809999999999999 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 24.667 - type: precision_at_5 value: 16.289 - type: recall_at_1 value: 53.539 - type: recall_at_10 value: 87.568 - type: recall_at_100 value: 98.09400000000001 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 74.002 - type: recall_at_5 value: 81.443 - task: type: STS dataset: name: MTEB GermanSTSBenchmark type: jinaai/german-STSbenchmark config: default split: test revision: e36907544d44c3a247898ed81540310442329e20 metrics: - type: cos_sim_pearson value: 68.82052535790737 - type: cos_sim_spearman value: 67.9356892072251 - type: euclidean_pearson value: 67.2308663006278 - type: euclidean_spearman value: 67.93572522920142 - type: manhattan_pearson value: 67.23568952733595 - type: manhattan_spearman value: 67.91660489262797 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 6.813 - type: map_at_10 value: 9.49 - type: map_at_100 value: 9.959 - type: map_at_1000 value: 10.024 - type: map_at_3 value: 8.618 - type: map_at_5 value: 9.084 - type: mrr_at_1 value: 13.626 - type: mrr_at_10 value: 17.818 - type: mrr_at_100 value: 18.412 - type: mrr_at_1000 value: 18.482000000000003 - type: mrr_at_3 value: 16.506999999999998 - type: mrr_at_5 value: 17.219 - type: ndcg_at_1 value: 13.626 - type: ndcg_at_10 value: 12.959999999999999 - type: ndcg_at_100 value: 15.562999999999999 - type: ndcg_at_1000 value: 17.571 - type: ndcg_at_3 value: 10.995000000000001 - type: ndcg_at_5 value: 11.908000000000001 - type: precision_at_1 value: 13.626 - type: precision_at_10 value: 2.995 - type: precision_at_100 value: 0.51 - type: precision_at_1000 value: 0.078 - type: precision_at_3 value: 7.000000000000001 - type: precision_at_5 value: 4.926 - type: recall_at_1 value: 6.813 - type: recall_at_10 value: 14.976 - type: recall_at_100 value: 25.517 - type: recall_at_1000 value: 39.095 - type: recall_at_3 value: 10.5 - type: recall_at_5 value: 12.316 - task: type: Classification dataset: name: MTEB IFlyTek type: C-MTEB/IFlyTek-classification config: default split: validation revision: 421605374b29664c5fc098418fe20ada9bd55f8a metrics: - type: accuracy value: 38.01462100808003 - type: f1 value: 26.680357453754215 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 55.7508 - type: ap value: 53.28158993124153 - type: f1 value: 55.34571379744637 - task: type: Classification dataset: name: MTEB JDReview type: C-MTEB/JDReview-classification config: default split: test revision: b7c64bd89eb87f8ded463478346f76731f07bf8b metrics: - type: accuracy value: 69.58724202626641 - type: ap value: 30.04577466931377 - type: f1 value: 62.46921898313143 - task: type: STS dataset: name: MTEB LCQMC type: C-MTEB/LCQMC config: default split: test revision: 17f9b096f80380fce5ed12a9be8be7784b337daf metrics: - type: cos_sim_pearson value: 48.80585861169271 - type: cos_sim_spearman value: 50.11025991147549 - type: euclidean_pearson value: 50.055425341198934 - type: euclidean_spearman value: 50.11024862622995 - type: manhattan_pearson value: 50.029980024931064 - type: manhattan_spearman value: 50.074388245963384 - task: type: Classification dataset: name: MTEB LccSentimentClassification type: DDSC/lcc config: default split: test revision: de7ba3406ee55ea2cc52a0a41408fa6aede6d3c6 metrics: - type: accuracy value: 54.266666666666666 - type: f1 value: 52.181931818742875 - task: type: Reranking dataset: name: MTEB MIRACL type: jinaai/miracl config: default split: test revision: d28a029f35c4ff7f616df47b0edf54e6882395e6 metrics: - type: map value: 51.40745004398599 - type: mrr value: 56.71940267335004 - task: type: Reranking dataset: name: MTEB MMarcoReranking type: C-MTEB/Mmarco-reranking config: default split: dev revision: 8e0c766dbe9e16e1d221116a3f36795fbade07f6 metrics: - type: map value: 5.831060174627054 - type: mrr value: 4.019047619047618 - task: type: Retrieval dataset: name: MTEB MMarcoRetrieval type: C-MTEB/MMarcoRetrieval config: default split: dev revision: 539bbde593d947e2a124ba72651aafc09eb33fc2 metrics: - type: map_at_1 value: 5.826 - type: map_at_10 value: 8.956999999999999 - type: map_at_100 value: 9.746 - type: map_at_1000 value: 9.873999999999999 - type: map_at_3 value: 7.757 - type: map_at_5 value: 8.373 - type: mrr_at_1 value: 6.046 - type: mrr_at_10 value: 9.251 - type: mrr_at_100 value: 10.044 - type: mrr_at_1000 value: 10.167 - type: mrr_at_3 value: 8.028 - type: mrr_at_5 value: 8.66 - type: ndcg_at_1 value: 6.046 - type: ndcg_at_10 value: 10.998 - type: ndcg_at_100 value: 15.568999999999999 - type: ndcg_at_1000 value: 19.453 - type: ndcg_at_3 value: 8.468 - type: ndcg_at_5 value: 9.582 - type: precision_at_1 value: 6.046 - type: precision_at_10 value: 1.807 - type: precision_at_100 value: 0.42500000000000004 - type: precision_at_1000 value: 0.076 - type: precision_at_3 value: 3.572 - type: precision_at_5 value: 2.702 - type: recall_at_1 value: 5.826 - type: recall_at_10 value: 17.291 - type: recall_at_100 value: 40.037 - type: recall_at_1000 value: 71.351 - type: recall_at_3 value: 10.269 - type: recall_at_5 value: 12.950000000000001 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 1.203 - type: map_at_10 value: 2.27 - type: map_at_100 value: 2.5860000000000003 - type: map_at_1000 value: 2.661 - type: map_at_3 value: 1.8159999999999998 - type: map_at_5 value: 2.037 - type: mrr_at_1 value: 1.232 - type: mrr_at_10 value: 2.338 - type: mrr_at_100 value: 2.665 - type: mrr_at_1000 value: 2.7390000000000003 - type: mrr_at_3 value: 1.87 - type: mrr_at_5 value: 2.1 - type: ndcg_at_1 value: 1.232 - type: ndcg_at_10 value: 3.005 - type: ndcg_at_100 value: 4.936 - type: ndcg_at_1000 value: 7.441000000000001 - type: ndcg_at_3 value: 2.036 - type: ndcg_at_5 value: 2.435 - type: precision_at_1 value: 1.232 - type: precision_at_10 value: 0.549 - type: precision_at_100 value: 0.158 - type: precision_at_1000 value: 0.038 - type: precision_at_3 value: 0.903 - type: precision_at_5 value: 0.739 - type: recall_at_1 value: 1.203 - type: recall_at_10 value: 5.332 - type: recall_at_100 value: 15.164 - type: recall_at_1000 value: 35.831 - type: recall_at_3 value: 2.622 - type: recall_at_5 value: 3.572 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.92476060191518 - type: f1 value: 89.30222882069823 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (de) type: mteb/mtop_domain config: de split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.54353338968724 - type: f1 value: 88.23043644828002 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (es) type: mteb/mtop_domain config: es split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 90.62374916611076 - type: f1 value: 89.68544977510335 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (fr) type: mteb/mtop_domain config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 86.18540557469466 - type: f1 value: 85.7362674669331 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (hi) type: mteb/mtop_domain config: hi split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.41556113302258 - type: f1 value: 89.04934651990581 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (th) type: mteb/mtop_domain config: th split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 85.89511754068715 - type: f1 value: 85.57630467968119 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 70.85043319653442 - type: f1 value: 46.0794069318026 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (de) type: mteb/mtop_intent config: de split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 73.43195266272188 - type: f1 value: 48.08015719781981 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (es) type: mteb/mtop_intent config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 73.8425617078052 - type: f1 value: 49.37915156189611 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (fr) type: mteb/mtop_intent config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 66.75227059191982 - type: f1 value: 43.4642946741452 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (hi) type: mteb/mtop_intent config: hi split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 69.13589100035855 - type: f1 value: 46.25935961966482 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (th) type: mteb/mtop_intent config: th split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 68.47016274864377 - type: f1 value: 46.197113305277796 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (af) type: mteb/amazon_massive_intent config: af split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.14727639542704 - type: f1 value: 55.58745169431752 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (am) type: mteb/amazon_massive_intent config: am split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.91190316072628 - type: f1 value: 55.46589962622107 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ar) type: mteb/amazon_massive_intent config: ar split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.22932078009414 - type: f1 value: 53.661218041561334 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (az) type: mteb/amazon_massive_intent config: az split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.16543375924681 - type: f1 value: 55.16504653263189 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (bn) type: mteb/amazon_massive_intent config: bn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.239408204438476 - type: f1 value: 58.941991707183874 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (cy) type: mteb/amazon_massive_intent config: cy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 51.186953597848 - type: f1 value: 49.59432722397084 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (da) type: mteb/amazon_massive_intent config: da split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.030934767989244 - type: f1 value: 58.836302050830966 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (de) type: mteb/amazon_massive_intent config: de split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.314727639542696 - type: f1 value: 57.80700293522655 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (el) type: mteb/amazon_massive_intent config: el split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.20645595158037 - type: f1 value: 61.36755812840151 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.36785474108943 - type: f1 value: 61.15645935863754 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (es) type: mteb/amazon_massive_intent config: es split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.97108271687962 - type: f1 value: 62.07352472659557 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fa) type: mteb/amazon_massive_intent config: fa split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.67114996637525 - type: f1 value: 63.420170447126324 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fi) type: mteb/amazon_massive_intent config: fi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.864828513786144 - type: f1 value: 59.655860488861926 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fr) type: mteb/amazon_massive_intent config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.55077336919974 - type: f1 value: 55.28215385204243 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (he) type: mteb/amazon_massive_intent config: he split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.453261600538 - type: f1 value: 59.991998820039186 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hi) type: mteb/amazon_massive_intent config: hi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.32145258910558 - type: f1 value: 58.9676667104426 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hu) type: mteb/amazon_massive_intent config: hu split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.905178211163424 - type: f1 value: 59.645126480791674 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hy) type: mteb/amazon_massive_intent config: hy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.03026227303295 - type: f1 value: 56.68905593909442 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (id) type: mteb/amazon_massive_intent config: id split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.28850033624749 - type: f1 value: 60.21862015326403 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (is) type: mteb/amazon_massive_intent config: is split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.0221923335575 - type: f1 value: 53.388473451598315 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (it) type: mteb/amazon_massive_intent config: it split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.44182918628111 - type: f1 value: 62.14806714489123 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ja) type: mteb/amazon_massive_intent config: ja split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.69535978480162 - type: f1 value: 62.40231098840202 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (jv) type: mteb/amazon_massive_intent config: jv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 52.00067249495628 - type: f1 value: 48.871263427511984 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ka) type: mteb/amazon_massive_intent config: ka split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.088769334229994 - type: f1 value: 52.68998451556 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (km) type: mteb/amazon_massive_intent config: km split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 43.34229993275051 - type: f1 value: 40.578510490463024 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (kn) type: mteb/amazon_massive_intent config: kn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.87491593813046 - type: f1 value: 55.19579071673386 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ko) type: mteb/amazon_massive_intent config: ko split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.69334229993275 - type: f1 value: 60.90210922623679 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (lv) type: mteb/amazon_massive_intent config: lv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.240753194351036 - type: f1 value: 54.137519761157485 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ml) type: mteb/amazon_massive_intent config: ml split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.81439139206457 - type: f1 value: 60.46554841337619 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (mn) type: mteb/amazon_massive_intent config: mn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.49361129791527 - type: f1 value: 55.12919894175168 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ms) type: mteb/amazon_massive_intent config: ms split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.55682582380633 - type: f1 value: 58.81763499302702 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (my) type: mteb/amazon_massive_intent config: my split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.3981170141224 - type: f1 value: 56.31810441546048 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nb) type: mteb/amazon_massive_intent config: nb split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.89576328177538 - type: f1 value: 57.35130066022407 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nl) type: mteb/amazon_massive_intent config: nl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.55951580363148 - type: f1 value: 61.50868742463585 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.86079354404842 - type: f1 value: 61.94702597578807 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pt) type: mteb/amazon_massive_intent config: pt split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.49024882313383 - type: f1 value: 60.796412851533454 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ro) type: mteb/amazon_massive_intent config: ro split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.53194351042366 - type: f1 value: 59.9167382336848 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ru) type: mteb/amazon_massive_intent config: ru split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.62945527908541 - type: f1 value: 59.195444230665096 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sl) type: mteb/amazon_massive_intent config: sl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.43308675184935 - type: f1 value: 60.605749901316145 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sq) type: mteb/amazon_massive_intent config: sq split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.44586415601883 - type: f1 value: 58.635066561729396 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sv) type: mteb/amazon_massive_intent config: sv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.86482851378615 - type: f1 value: 59.75440194153033 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sw) type: mteb/amazon_massive_intent config: sw split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.250840618695364 - type: f1 value: 54.84944007944625 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ta) type: mteb/amazon_massive_intent config: ta split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.747814391392076 - type: f1 value: 56.83761137925043 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (te) type: mteb/amazon_massive_intent config: te split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.60995292535306 - type: f1 value: 57.106776457430705 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (th) type: mteb/amazon_massive_intent config: th split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.421654337592464 - type: f1 value: 57.81013790437749 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tl) type: mteb/amazon_massive_intent config: tl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.120376597175515 - type: f1 value: 55.27690756097837 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tr) type: mteb/amazon_massive_intent config: tr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.907868190988566 - type: f1 value: 57.43015543162361 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ur) type: mteb/amazon_massive_intent config: ur split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.492266308002705 - type: f1 value: 56.885590563156455 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (vi) type: mteb/amazon_massive_intent config: vi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.477471418964356 - type: f1 value: 57.87047944039945 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.07800941492938 - type: f1 value: 59.340232908410265 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-TW) type: mteb/amazon_massive_intent config: zh-TW split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.73167451244117 - type: f1 value: 55.29236319279749 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (af) type: mteb/amazon_massive_scenario config: af split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.05850706119705 - type: f1 value: 62.20100273658395 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (am) type: mteb/amazon_massive_scenario config: am split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.24142568930733 - type: f1 value: 62.045023522098205 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ar) type: mteb/amazon_massive_scenario config: ar split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.685272360457304 - type: f1 value: 63.315744557403285 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (az) type: mteb/amazon_massive_scenario config: az split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.85743106926698 - type: f1 value: 59.106917986505636 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (bn) type: mteb/amazon_massive_scenario config: bn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.1654337592468 - type: f1 value: 65.66986920813582 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (cy) type: mteb/amazon_massive_scenario config: cy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 56.519838601210495 - type: f1 value: 54.73278620356587 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (da) type: mteb/amazon_massive_scenario config: da split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.76395427034298 - type: f1 value: 66.3447645997219 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (de) type: mteb/amazon_massive_scenario config: de split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.47814391392065 - type: f1 value: 66.32841368787447 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (el) type: mteb/amazon_massive_scenario config: el split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.22864828513787 - type: f1 value: 69.02774052818218 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.04841963685273 - type: f1 value: 67.70789401248665 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (es) type: mteb/amazon_massive_scenario config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.08204438466711 - type: f1 value: 68.39277940460933 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fa) type: mteb/amazon_massive_scenario config: fa split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.10154673839946 - type: f1 value: 70.7737194288215 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fi) type: mteb/amazon_massive_scenario config: fi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.16207128446537 - type: f1 value: 66.2311820377212 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fr) type: mteb/amazon_massive_scenario config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.019502353732335 - type: f1 value: 62.105500895318656 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (he) type: mteb/amazon_massive_scenario config: he split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.82985877605918 - type: f1 value: 67.4894449433449 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hi) type: mteb/amazon_massive_scenario config: hi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.89643577673168 - type: f1 value: 65.45745898521055 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hu) type: mteb/amazon_massive_scenario config: hu split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.32750504371216 - type: f1 value: 68.19665323990438 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hy) type: mteb/amazon_massive_scenario config: hy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 65.8238063214526 - type: f1 value: 64.60872984606974 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (id) type: mteb/amazon_massive_scenario config: id split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.98117014122394 - type: f1 value: 67.66697147027641 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (is) type: mteb/amazon_massive_scenario config: is split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.137188971082715 - type: f1 value: 61.58358081191463 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (it) type: mteb/amazon_massive_scenario config: it split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.0437121721587 - type: f1 value: 69.06747206775307 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ja) type: mteb/amazon_massive_scenario config: ja split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.67585743106926 - type: f1 value: 70.08618915891508 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (jv) type: mteb/amazon_massive_scenario config: jv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.788164088769335 - type: f1 value: 57.91398932676417 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ka) type: mteb/amazon_massive_scenario config: ka split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 61.03227975790182 - type: f1 value: 60.044432258486715 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (km) type: mteb/amazon_massive_scenario config: km split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 49.051782111634154 - type: f1 value: 45.434581931581555 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (kn) type: mteb/amazon_massive_scenario config: kn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.78278412911902 - type: f1 value: 62.106197625881535 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ko) type: mteb/amazon_massive_scenario config: ko split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.59986550100874 - type: f1 value: 68.94355682848476 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (lv) type: mteb/amazon_massive_scenario config: lv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.97310020174847 - type: f1 value: 59.09912773329623 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ml) type: mteb/amazon_massive_scenario config: ml split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.20309347679893 - type: f1 value: 67.90665916607239 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (mn) type: mteb/amazon_massive_scenario config: mn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 62.72024209818427 - type: f1 value: 60.77165334831407 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ms) type: mteb/amazon_massive_scenario config: ms split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.87155346334902 - type: f1 value: 65.7906032446679 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (my) type: mteb/amazon_massive_scenario config: my split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.97646267652992 - type: f1 value: 63.89390215791396 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nb) type: mteb/amazon_massive_scenario config: nb split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 65.81371889710827 - type: f1 value: 64.39323436519936 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nl) type: mteb/amazon_massive_scenario config: nl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.79825151311366 - type: f1 value: 68.53789900442244 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.98991257565568 - type: f1 value: 68.93867074879778 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pt) type: mteb/amazon_massive_scenario config: pt split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.50168123739071 - type: f1 value: 66.7457644903972 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ro) type: mteb/amazon_massive_scenario config: ro split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.52521856086078 - type: f1 value: 66.83370797374445 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ru) type: mteb/amazon_massive_scenario config: ru split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.96234028244787 - type: f1 value: 67.58983110064196 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sl) type: mteb/amazon_massive_scenario config: sl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.56624075319435 - type: f1 value: 68.35270162147211 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sq) type: mteb/amazon_massive_scenario config: sq split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.48352387357095 - type: f1 value: 66.66973143886908 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sv) type: mteb/amazon_massive_scenario config: sv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.92535305985206 - type: f1 value: 66.52058462942483 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sw) type: mteb/amazon_massive_scenario config: sw split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.184263618022875 - type: f1 value: 61.71153164960602 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ta) type: mteb/amazon_massive_scenario config: ta split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.8453261600538 - type: f1 value: 63.863209439112346 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (te) type: mteb/amazon_massive_scenario config: te split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 65.39340954942838 - type: f1 value: 63.85484524633183 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (th) type: mteb/amazon_massive_scenario config: th split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.9892400806994 - type: f1 value: 66.57022479007357 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tl) type: mteb/amazon_massive_scenario config: tl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.399462004034966 - type: f1 value: 61.62381473991175 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tr) type: mteb/amazon_massive_scenario config: tr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 65.773369199731 - type: f1 value: 65.58317907780943 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ur) type: mteb/amazon_massive_scenario config: ur split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 65.8069939475454 - type: f1 value: 64.47027323557235 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (vi) type: mteb/amazon_massive_scenario config: vi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.51647612642904 - type: f1 value: 65.66061210324213 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.88365837256221 - type: f1 value: 67.56956454874091 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-TW) type: mteb/amazon_massive_scenario config: zh-TW split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.29858776059179 - type: f1 value: 62.76318771484755 - task: type: Retrieval dataset: name: MTEB MedicalRetrieval type: C-MTEB/MedicalRetrieval config: default split: dev revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6 metrics: - type: map_at_1 value: 2.9000000000000004 - type: map_at_10 value: 3.5360000000000005 - type: map_at_100 value: 3.703 - type: map_at_1000 value: 3.734 - type: map_at_3 value: 3.167 - type: map_at_5 value: 3.322 - type: mrr_at_1 value: 2.9000000000000004 - type: mrr_at_10 value: 3.5360000000000005 - type: mrr_at_100 value: 3.703 - type: mrr_at_1000 value: 3.734 - type: mrr_at_3 value: 3.167 - type: mrr_at_5 value: 3.322 - type: ndcg_at_1 value: 2.9000000000000004 - type: ndcg_at_10 value: 4.079 - type: ndcg_at_100 value: 5.101 - type: ndcg_at_1000 value: 6.295000000000001 - type: ndcg_at_3 value: 3.276 - type: ndcg_at_5 value: 3.56 - type: precision_at_1 value: 2.9000000000000004 - type: precision_at_10 value: 0.59 - type: precision_at_100 value: 0.11199999999999999 - type: precision_at_1000 value: 0.022000000000000002 - type: precision_at_3 value: 1.2 - type: precision_at_5 value: 0.86 - type: recall_at_1 value: 2.9000000000000004 - type: recall_at_10 value: 5.8999999999999995 - type: recall_at_100 value: 11.200000000000001 - type: recall_at_1000 value: 21.5 - type: recall_at_3 value: 3.5999999999999996 - type: recall_at_5 value: 4.3 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 19.061819627060558 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 19.79520446745267 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 26.881162218991285 - type: mrr value: 27.23201335662217 - task: type: Classification dataset: name: MTEB MultilingualSentiment type: C-MTEB/MultilingualSentiment-classification config: default split: validation revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a metrics: - type: accuracy value: 57.69 - type: f1 value: 57.370451927892695 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 0.443 - type: map_at_10 value: 1.189 - type: map_at_100 value: 2.221 - type: map_at_1000 value: 3.034 - type: map_at_3 value: 0.683 - type: map_at_5 value: 0.882 - type: mrr_at_1 value: 4.334 - type: mrr_at_10 value: 10.908 - type: mrr_at_100 value: 12.536 - type: mrr_at_1000 value: 12.642000000000001 - type: mrr_at_3 value: 7.481999999999999 - type: mrr_at_5 value: 9.324 - type: ndcg_at_1 value: 3.7150000000000003 - type: ndcg_at_10 value: 5.591 - type: ndcg_at_100 value: 9.522 - type: ndcg_at_1000 value: 19.705000000000002 - type: ndcg_at_3 value: 4.292 - type: ndcg_at_5 value: 5.038 - type: precision_at_1 value: 4.334 - type: precision_at_10 value: 5.077 - type: precision_at_100 value: 3.2910000000000004 - type: precision_at_1000 value: 1.568 - type: precision_at_3 value: 4.644 - type: precision_at_5 value: 5.139 - type: recall_at_1 value: 0.443 - type: recall_at_10 value: 3.3520000000000003 - type: recall_at_100 value: 15.515 - type: recall_at_1000 value: 50.505 - type: recall_at_3 value: 0.931 - type: recall_at_5 value: 1.698 - task: type: Retrieval dataset: name: MTEB NFCorpus-PL type: nfcorpus-pl config: default split: test revision: None metrics: - type: map_at_1 value: 0.307 - type: map_at_10 value: 0.835 - type: map_at_100 value: 1.503 - type: map_at_1000 value: 2.263 - type: map_at_3 value: 0.503 - type: map_at_5 value: 0.567 - type: mrr_at_1 value: 4.025 - type: mrr_at_10 value: 9.731 - type: mrr_at_100 value: 11.229 - type: mrr_at_1000 value: 11.34 - type: mrr_at_3 value: 6.811 - type: mrr_at_5 value: 8.126999999999999 - type: ndcg_at_1 value: 3.56 - type: ndcg_at_10 value: 4.596 - type: ndcg_at_100 value: 7.567 - type: ndcg_at_1000 value: 17.76 - type: ndcg_at_3 value: 3.52 - type: ndcg_at_5 value: 3.823 - type: precision_at_1 value: 4.025 - type: precision_at_10 value: 4.334 - type: precision_at_100 value: 2.842 - type: precision_at_1000 value: 1.506 - type: precision_at_3 value: 3.818 - type: precision_at_5 value: 4.149 - type: recall_at_1 value: 0.307 - type: recall_at_10 value: 2.543 - type: recall_at_100 value: 12.152000000000001 - type: recall_at_1000 value: 46.878 - type: recall_at_3 value: 0.755 - type: recall_at_5 value: 0.975 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 0.439 - type: map_at_10 value: 0.6839999999999999 - type: map_at_100 value: 0.769 - type: map_at_1000 value: 0.79 - type: map_at_3 value: 0.584 - type: map_at_5 value: 0.621 - type: mrr_at_1 value: 0.5499999999999999 - type: mrr_at_10 value: 0.819 - type: mrr_at_100 value: 0.9169999999999999 - type: mrr_at_1000 value: 0.9400000000000001 - type: mrr_at_3 value: 0.705 - type: mrr_at_5 value: 0.75 - type: ndcg_at_1 value: 0.5499999999999999 - type: ndcg_at_10 value: 0.886 - type: ndcg_at_100 value: 1.422 - type: ndcg_at_1000 value: 2.2079999999999997 - type: ndcg_at_3 value: 0.6629999999999999 - type: ndcg_at_5 value: 0.735 - type: precision_at_1 value: 0.5499999999999999 - type: precision_at_10 value: 0.16199999999999998 - type: precision_at_100 value: 0.048 - type: precision_at_1000 value: 0.012 - type: precision_at_3 value: 0.309 - type: precision_at_5 value: 0.22599999999999998 - type: recall_at_1 value: 0.439 - type: recall_at_10 value: 1.405 - type: recall_at_100 value: 4.051 - type: recall_at_1000 value: 10.487 - type: recall_at_3 value: 0.787 - type: recall_at_5 value: 0.9560000000000001 - task: type: Retrieval dataset: name: MTEB NarrativeQARetrieval type: narrativeqa config: default split: test revision: None metrics: - type: map_at_1 value: 5.93 - type: map_at_10 value: 7.349 - type: map_at_100 value: 8.011 - type: map_at_1000 value: 8.351 - type: map_at_3 value: 6.787 - type: map_at_5 value: 7.02 - type: mrr_at_1 value: 5.93 - type: mrr_at_10 value: 7.349 - type: mrr_at_100 value: 8.011 - type: mrr_at_1000 value: 8.351 - type: mrr_at_3 value: 6.787 - type: mrr_at_5 value: 7.02 - type: ndcg_at_1 value: 5.93 - type: ndcg_at_10 value: 8.291 - type: ndcg_at_100 value: 12.833 - type: ndcg_at_1000 value: 21.253 - type: ndcg_at_3 value: 7.072000000000001 - type: ndcg_at_5 value: 7.495 - type: precision_at_1 value: 5.93 - type: precision_at_10 value: 1.1400000000000001 - type: precision_at_100 value: 0.359 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 2.633 - type: precision_at_5 value: 1.786 - type: recall_at_1 value: 5.93 - type: recall_at_10 value: 11.395 - type: recall_at_100 value: 35.929 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 7.9 - type: recall_at_5 value: 8.932 - task: type: Classification dataset: name: MTEB NoRecClassification type: ScandEval/norec-mini config: default split: test revision: 07b99ab3363c2e7f8f87015b01c21f4d9b917ce3 metrics: - type: accuracy value: 48.251953125 - type: f1 value: 45.42526611578402 - task: type: Classification dataset: name: MTEB NordicLangClassification type: strombergnlp/nordic_langid config: default split: test revision: e254179d18ab0165fdb6dbef91178266222bee2a metrics: - type: accuracy value: 48.403333333333336 - type: f1 value: 47.9287124185198 - task: type: BitextMining dataset: name: MTEB NorwegianCourtsBitextMining type: kardosdrur/norwegian-courts config: default split: test revision: None metrics: - type: accuracy value: 93.85964912280701 - type: f1 value: 92.98245614035088 - type: precision value: 92.54385964912281 - type: recall value: 93.85964912280701 - task: type: Classification dataset: name: MTEB NorwegianParliament type: NbAiLab/norwegian_parliament config: default split: test revision: f7393532774c66312378d30b197610b43d751972 metrics: - type: accuracy value: 55.991666666666674 - type: ap value: 53.417849849746226 - type: f1 value: 55.757916182475384 - task: type: PairClassification dataset: name: MTEB Ocnli type: C-MTEB/OCNLI config: default split: validation revision: 66e76a618a34d6d565d5538088562851e6daa7ec metrics: - type: cos_sim_accuracy value: 54.68327016783974 - type: cos_sim_ap value: 55.175059616546406 - type: cos_sim_f1 value: 67.81733189500179 - type: cos_sim_precision value: 51.41766630316249 - type: cos_sim_recall value: 99.57761351636748 - type: dot_accuracy value: 54.68327016783974 - type: dot_ap value: 55.175059616546406 - type: dot_f1 value: 67.81733189500179 - type: dot_precision value: 51.41766630316249 - type: dot_recall value: 99.57761351636748 - type: euclidean_accuracy value: 54.68327016783974 - type: euclidean_ap value: 55.17510180566365 - type: euclidean_f1 value: 67.81733189500179 - type: euclidean_precision value: 51.41766630316249 - type: euclidean_recall value: 99.57761351636748 - type: manhattan_accuracy value: 55.44125609095831 - type: manhattan_ap value: 55.76283671826867 - type: manhattan_f1 value: 68.05905653583004 - type: manhattan_precision value: 51.63934426229508 - type: manhattan_recall value: 99.78880675818374 - type: max_accuracy value: 55.44125609095831 - type: max_ap value: 55.76283671826867 - type: max_f1 value: 68.05905653583004 - task: type: Classification dataset: name: MTEB OnlineShopping type: C-MTEB/OnlineShopping-classification config: default split: test revision: e610f2ebd179a8fda30ae534c3878750a96db120 metrics: - type: accuracy value: 75.64 - type: ap value: 71.45085103287833 - type: f1 value: 75.52254495697326 - task: type: Classification dataset: name: MTEB PAC type: laugustyniak/abusive-clauses-pl config: default split: test revision: None metrics: - type: accuracy value: 73.86620330147699 - type: ap value: 80.58015815306322 - type: f1 value: 71.49082510883872 - task: type: STS dataset: name: MTEB PAWSX type: C-MTEB/PAWSX config: default split: test revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1 metrics: - type: cos_sim_pearson value: 29.52361689421863 - type: cos_sim_spearman value: 32.750058577257875 - type: euclidean_pearson value: 34.583472972871796 - type: euclidean_spearman value: 32.75328764421994 - type: manhattan_pearson value: 34.727366510326995 - type: manhattan_spearman value: 32.787167142114214 - task: type: PairClassification dataset: name: MTEB PPC type: PL-MTEB/ppc-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 71.1 - type: cos_sim_ap value: 85.36544548691205 - type: cos_sim_f1 value: 75.23393636930756 - type: cos_sim_precision value: 60.36036036036037 - type: cos_sim_recall value: 99.83443708609272 - type: dot_accuracy value: 71.1 - type: dot_ap value: 85.36544548691204 - type: dot_f1 value: 75.23393636930756 - type: dot_precision value: 60.36036036036037 - type: dot_recall value: 99.83443708609272 - type: euclidean_accuracy value: 71.1 - type: euclidean_ap value: 85.36544548691205 - type: euclidean_f1 value: 75.23393636930756 - type: euclidean_precision value: 60.36036036036037 - type: euclidean_recall value: 99.83443708609272 - type: manhattan_accuracy value: 71.1 - type: manhattan_ap value: 85.33853868545614 - type: manhattan_f1 value: 75.23393636930756 - type: manhattan_precision value: 60.36036036036037 - type: manhattan_recall value: 99.83443708609272 - type: max_accuracy value: 71.1 - type: max_ap value: 85.36544548691205 - type: max_f1 value: 75.23393636930756 - task: type: PairClassification dataset: name: MTEB PSC type: PL-MTEB/psc-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 90.81632653061224 - type: cos_sim_ap value: 91.97693749083473 - type: cos_sim_f1 value: 85.55078683834049 - type: cos_sim_precision value: 80.59299191374663 - type: cos_sim_recall value: 91.15853658536585 - type: dot_accuracy value: 90.81632653061224 - type: dot_ap value: 91.97693749083473 - type: dot_f1 value: 85.55078683834049 - type: dot_precision value: 80.59299191374663 - type: dot_recall value: 91.15853658536585 - type: euclidean_accuracy value: 90.81632653061224 - type: euclidean_ap value: 91.97693749083473 - type: euclidean_f1 value: 85.55078683834049 - type: euclidean_precision value: 80.59299191374663 - type: euclidean_recall value: 91.15853658536585 - type: manhattan_accuracy value: 90.9090909090909 - type: manhattan_ap value: 92.043441286281 - type: manhattan_f1 value: 85.34482758620689 - type: manhattan_precision value: 80.70652173913044 - type: manhattan_recall value: 90.54878048780488 - type: max_accuracy value: 90.9090909090909 - type: max_ap value: 92.043441286281 - type: max_f1 value: 85.55078683834049 - task: type: PairClassification dataset: name: MTEB PawsX (de) type: paws-x config: de split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_accuracy value: 70.35 - type: cos_sim_ap value: 72.01641717127626 - type: cos_sim_f1 value: 64.49511400651467 - type: cos_sim_precision value: 55.26315789473685 - type: cos_sim_recall value: 77.43016759776536 - type: dot_accuracy value: 70.35 - type: dot_ap value: 72.06599137974572 - type: dot_f1 value: 64.49511400651467 - type: dot_precision value: 55.26315789473685 - type: dot_recall value: 77.43016759776536 - type: euclidean_accuracy value: 70.35 - type: euclidean_ap value: 71.92019289154159 - type: euclidean_f1 value: 64.49511400651467 - type: euclidean_precision value: 55.26315789473685 - type: euclidean_recall value: 77.43016759776536 - type: manhattan_accuracy value: 70.35 - type: manhattan_ap value: 71.92979188519502 - type: manhattan_f1 value: 64.60409019402202 - type: manhattan_precision value: 60.86956521739131 - type: manhattan_recall value: 68.8268156424581 - type: max_accuracy value: 70.35 - type: max_ap value: 72.06599137974572 - type: max_f1 value: 64.60409019402202 - task: type: PairClassification dataset: name: MTEB PawsX (en) type: paws-x config: en split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_accuracy value: 71.0 - type: cos_sim_ap value: 74.73017292645147 - type: cos_sim_f1 value: 66.73427991886409 - type: cos_sim_precision value: 61.78403755868545 - type: cos_sim_recall value: 72.54685777287762 - type: dot_accuracy value: 71.0 - type: dot_ap value: 74.73017292645147 - type: dot_f1 value: 66.73427991886409 - type: dot_precision value: 61.78403755868545 - type: dot_recall value: 72.54685777287762 - type: euclidean_accuracy value: 71.0 - type: euclidean_ap value: 74.73013082197343 - type: euclidean_f1 value: 66.73427991886409 - type: euclidean_precision value: 61.78403755868545 - type: euclidean_recall value: 72.54685777287762 - type: manhattan_accuracy value: 70.95 - type: manhattan_ap value: 74.71203917486744 - type: manhattan_f1 value: 66.86868686868686 - type: manhattan_precision value: 61.696178937558244 - type: manhattan_recall value: 72.98787210584344 - type: max_accuracy value: 71.0 - type: max_ap value: 74.73017292645147 - type: max_f1 value: 66.86868686868686 - task: type: PairClassification dataset: name: MTEB PawsX (es) type: paws-x config: es split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_accuracy value: 67.7 - type: cos_sim_ap value: 69.70320170421651 - type: cos_sim_f1 value: 62.55625562556255 - type: cos_sim_precision value: 52.851711026615966 - type: cos_sim_recall value: 76.62624035281146 - type: dot_accuracy value: 67.7 - type: dot_ap value: 69.70320170421651 - type: dot_f1 value: 62.55625562556255 - type: dot_precision value: 52.851711026615966 - type: dot_recall value: 76.62624035281146 - type: euclidean_accuracy value: 67.7 - type: euclidean_ap value: 69.70320170421651 - type: euclidean_f1 value: 62.55625562556255 - type: euclidean_precision value: 52.851711026615966 - type: euclidean_recall value: 76.62624035281146 - type: manhattan_accuracy value: 67.75 - type: manhattan_ap value: 69.67833816050764 - type: manhattan_f1 value: 62.734082397003746 - type: manhattan_precision value: 54.515866558177386 - type: manhattan_recall value: 73.8699007717751 - type: max_accuracy value: 67.75 - type: max_ap value: 69.70320170421651 - type: max_f1 value: 62.734082397003746 - task: type: PairClassification dataset: name: MTEB PawsX (fr) type: paws-x config: fr split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_accuracy value: 69.0 - type: cos_sim_ap value: 71.36406639969131 - type: cos_sim_f1 value: 64.45993031358886 - type: cos_sim_precision value: 53.12275664034458 - type: cos_sim_recall value: 81.94905869324474 - type: dot_accuracy value: 69.0 - type: dot_ap value: 71.2599779415656 - type: dot_f1 value: 64.45993031358886 - type: dot_precision value: 53.12275664034458 - type: dot_recall value: 81.94905869324474 - type: euclidean_accuracy value: 69.0 - type: euclidean_ap value: 71.3126257271965 - type: euclidean_f1 value: 64.45993031358886 - type: euclidean_precision value: 53.12275664034458 - type: euclidean_recall value: 81.94905869324474 - type: manhattan_accuracy value: 69.0 - type: manhattan_ap value: 71.29361764028188 - type: manhattan_f1 value: 64.54789615040288 - type: manhattan_precision value: 54.16979714500376 - type: manhattan_recall value: 79.84496124031007 - type: max_accuracy value: 69.0 - type: max_ap value: 71.36406639969131 - type: max_f1 value: 64.54789615040288 - task: type: PairClassification dataset: name: MTEB PawsX (ja) type: paws-x config: ja split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_accuracy value: 63.849999999999994 - type: cos_sim_ap value: 60.914955950361026 - type: cos_sim_f1 value: 62.4556422995032 - type: cos_sim_precision value: 45.47803617571059 - type: cos_sim_recall value: 99.66024915062289 - type: dot_accuracy value: 63.849999999999994 - type: dot_ap value: 60.808056565465506 - type: dot_f1 value: 62.4556422995032 - type: dot_precision value: 45.47803617571059 - type: dot_recall value: 99.66024915062289 - type: euclidean_accuracy value: 63.849999999999994 - type: euclidean_ap value: 60.8231492677072 - type: euclidean_f1 value: 62.4556422995032 - type: euclidean_precision value: 45.47803617571059 - type: euclidean_recall value: 99.66024915062289 - type: manhattan_accuracy value: 63.800000000000004 - type: manhattan_ap value: 60.86392751846975 - type: manhattan_f1 value: 62.43348705214614 - type: manhattan_precision value: 45.45454545454545 - type: manhattan_recall value: 99.66024915062289 - type: max_accuracy value: 63.849999999999994 - type: max_ap value: 60.914955950361026 - type: max_f1 value: 62.4556422995032 - task: type: PairClassification dataset: name: MTEB PawsX (ko) type: paws-x config: ko split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_accuracy value: 61.1 - type: cos_sim_ap value: 58.40339411735916 - type: cos_sim_f1 value: 62.7906976744186 - type: cos_sim_precision value: 46.55172413793103 - type: cos_sim_recall value: 96.42857142857143 - type: dot_accuracy value: 61.1 - type: dot_ap value: 58.439189685586456 - type: dot_f1 value: 62.7906976744186 - type: dot_precision value: 46.55172413793103 - type: dot_recall value: 96.42857142857143 - type: euclidean_accuracy value: 61.1 - type: euclidean_ap value: 58.34968788203145 - type: euclidean_f1 value: 62.7906976744186 - type: euclidean_precision value: 46.55172413793103 - type: euclidean_recall value: 96.42857142857143 - type: manhattan_accuracy value: 61.1 - type: manhattan_ap value: 58.31504446861402 - type: manhattan_f1 value: 62.636562272396226 - type: manhattan_precision value: 46.48648648648649 - type: manhattan_recall value: 95.98214285714286 - type: max_accuracy value: 61.1 - type: max_ap value: 58.439189685586456 - type: max_f1 value: 62.7906976744186 - task: type: PairClassification dataset: name: MTEB PawsX (zh) type: paws-x config: zh split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_accuracy value: 64.2 - type: cos_sim_ap value: 63.73722153283802 - type: cos_sim_f1 value: 62.52707581227437 - type: cos_sim_precision value: 46.16204690831556 - type: cos_sim_recall value: 96.86800894854586 - type: dot_accuracy value: 64.2 - type: dot_ap value: 63.67335241021108 - type: dot_f1 value: 62.52707581227437 - type: dot_precision value: 46.16204690831556 - type: dot_recall value: 96.86800894854586 - type: euclidean_accuracy value: 64.2 - type: euclidean_ap value: 63.77399571117368 - type: euclidean_f1 value: 62.52707581227437 - type: euclidean_precision value: 46.16204690831556 - type: euclidean_recall value: 96.86800894854586 - type: manhattan_accuracy value: 64.5 - type: manhattan_ap value: 63.747406783360816 - type: manhattan_f1 value: 62.58601955813112 - type: manhattan_precision value: 46.27745045527584 - type: manhattan_recall value: 96.64429530201343 - type: max_accuracy value: 64.5 - type: max_ap value: 63.77399571117368 - type: max_f1 value: 62.58601955813112 - task: type: Classification dataset: name: MTEB PolEmo2.0-IN type: PL-MTEB/polemo2_in config: default split: test revision: None metrics: - type: accuracy value: 52.797783933518005 - type: f1 value: 53.84971294048786 - task: type: Classification dataset: name: MTEB PolEmo2.0-OUT type: PL-MTEB/polemo2_out config: default split: test revision: None metrics: - type: accuracy value: 38.40080971659919 - type: f1 value: 30.38990873840624 - task: type: STS dataset: name: MTEB QBQTC type: C-MTEB/QBQTC config: default split: test revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7 metrics: - type: cos_sim_pearson value: 23.34232568997104 - type: cos_sim_spearman value: 24.47961936211083 - type: euclidean_pearson value: 22.03140944610336 - type: euclidean_spearman value: 24.47949166265398 - type: manhattan_pearson value: 25.542406448726908 - type: manhattan_spearman value: 28.655724283839533 - task: type: Retrieval dataset: name: MTEB Quora-PL type: quora-pl config: default split: test revision: None metrics: - type: map_at_1 value: 59.938 - type: map_at_10 value: 72.734 - type: map_at_100 value: 73.564 - type: map_at_1000 value: 73.602 - type: map_at_3 value: 69.707 - type: map_at_5 value: 71.515 - type: mrr_at_1 value: 69.28 - type: mrr_at_10 value: 76.97500000000001 - type: mrr_at_100 value: 77.27199999999999 - type: mrr_at_1000 value: 77.28 - type: mrr_at_3 value: 75.355 - type: mrr_at_5 value: 76.389 - type: ndcg_at_1 value: 69.33 - type: ndcg_at_10 value: 77.61099999999999 - type: ndcg_at_100 value: 80.02 - type: ndcg_at_1000 value: 80.487 - type: ndcg_at_3 value: 73.764 - type: ndcg_at_5 value: 75.723 - type: precision_at_1 value: 69.33 - type: precision_at_10 value: 11.917 - type: precision_at_100 value: 1.447 - type: precision_at_1000 value: 0.154 - type: precision_at_3 value: 32.29 - type: precision_at_5 value: 21.432000000000002 - type: recall_at_1 value: 59.938 - type: recall_at_10 value: 87.252 - type: recall_at_100 value: 96.612 - type: recall_at_1000 value: 99.388 - type: recall_at_3 value: 76.264 - type: recall_at_5 value: 81.71000000000001 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 61.458999999999996 - type: map_at_10 value: 73.90299999999999 - type: map_at_100 value: 74.733 - type: map_at_1000 value: 74.771 - type: map_at_3 value: 70.999 - type: map_at_5 value: 72.745 - type: mrr_at_1 value: 70.93 - type: mrr_at_10 value: 78.353 - type: mrr_at_100 value: 78.636 - type: mrr_at_1000 value: 78.644 - type: mrr_at_3 value: 76.908 - type: mrr_at_5 value: 77.807 - type: ndcg_at_1 value: 70.93 - type: ndcg_at_10 value: 78.625 - type: ndcg_at_100 value: 81.01 - type: ndcg_at_1000 value: 81.45700000000001 - type: ndcg_at_3 value: 75.045 - type: ndcg_at_5 value: 76.84299999999999 - type: precision_at_1 value: 70.93 - type: precision_at_10 value: 11.953 - type: precision_at_100 value: 1.4489999999999998 - type: precision_at_1000 value: 0.154 - type: precision_at_3 value: 32.65 - type: precision_at_5 value: 21.598 - type: recall_at_1 value: 61.458999999999996 - type: recall_at_10 value: 87.608 - type: recall_at_100 value: 96.818 - type: recall_at_1000 value: 99.445 - type: recall_at_3 value: 77.354 - type: recall_at_5 value: 82.334 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 28.519889100999958 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 38.62765374782771 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 0.52 - type: map_at_10 value: 0.893 - type: map_at_100 value: 1.113 - type: map_at_1000 value: 1.304 - type: map_at_3 value: 0.7779999999999999 - type: map_at_5 value: 0.8200000000000001 - type: mrr_at_1 value: 2.6 - type: mrr_at_10 value: 4.0680000000000005 - type: mrr_at_100 value: 4.6080000000000005 - type: mrr_at_1000 value: 4.797 - type: mrr_at_3 value: 3.5999999999999996 - type: mrr_at_5 value: 3.8150000000000004 - type: ndcg_at_1 value: 2.6 - type: ndcg_at_10 value: 1.79 - type: ndcg_at_100 value: 3.5549999999999997 - type: ndcg_at_1000 value: 9.942 - type: ndcg_at_3 value: 1.94 - type: ndcg_at_5 value: 1.543 - type: precision_at_1 value: 2.6 - type: precision_at_10 value: 0.8500000000000001 - type: precision_at_100 value: 0.361 - type: precision_at_1000 value: 0.197 - type: precision_at_3 value: 1.7670000000000001 - type: precision_at_5 value: 1.26 - type: recall_at_1 value: 0.52 - type: recall_at_10 value: 1.7149999999999999 - type: recall_at_100 value: 7.318 - type: recall_at_1000 value: 39.915 - type: recall_at_3 value: 1.0699999999999998 - type: recall_at_5 value: 1.27 - task: type: Retrieval dataset: name: MTEB SCIDOCS-PL type: scidocs-pl config: default split: test revision: None metrics: - type: map_at_1 value: 0.32 - type: map_at_10 value: 0.676 - type: map_at_100 value: 0.847 - type: map_at_1000 value: 1.032 - type: map_at_3 value: 0.5369999999999999 - type: map_at_5 value: 0.592 - type: mrr_at_1 value: 1.6 - type: mrr_at_10 value: 2.863 - type: mrr_at_100 value: 3.334 - type: mrr_at_1000 value: 3.5479999999999996 - type: mrr_at_3 value: 2.317 - type: mrr_at_5 value: 2.587 - type: ndcg_at_1 value: 1.6 - type: ndcg_at_10 value: 1.397 - type: ndcg_at_100 value: 2.819 - type: ndcg_at_1000 value: 9.349 - type: ndcg_at_3 value: 1.3 - type: ndcg_at_5 value: 1.1079999999999999 - type: precision_at_1 value: 1.6 - type: precision_at_10 value: 0.74 - type: precision_at_100 value: 0.295 - type: precision_at_1000 value: 0.194 - type: precision_at_3 value: 1.2 - type: precision_at_5 value: 0.96 - type: recall_at_1 value: 0.32 - type: recall_at_10 value: 1.505 - type: recall_at_100 value: 5.988 - type: recall_at_1000 value: 39.308 - type: recall_at_3 value: 0.72 - type: recall_at_5 value: 0.9650000000000001 - task: type: PairClassification dataset: name: MTEB SICK-E-PL type: PL-MTEB/sicke-pl-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 73.84834896045659 - type: cos_sim_ap value: 55.484124732566606 - type: cos_sim_f1 value: 57.34228187919464 - type: cos_sim_precision value: 46.01464885825076 - type: cos_sim_recall value: 76.06837606837607 - type: dot_accuracy value: 73.84834896045659 - type: dot_ap value: 55.48400003295399 - type: dot_f1 value: 57.34228187919464 - type: dot_precision value: 46.01464885825076 - type: dot_recall value: 76.06837606837607 - type: euclidean_accuracy value: 73.84834896045659 - type: euclidean_ap value: 55.48407331902175 - type: euclidean_f1 value: 57.34228187919464 - type: euclidean_precision value: 46.01464885825076 - type: euclidean_recall value: 76.06837606837607 - type: manhattan_accuracy value: 73.80758255197716 - type: manhattan_ap value: 55.42477275597209 - type: manhattan_f1 value: 57.55860953920776 - type: manhattan_precision value: 46.29388816644994 - type: manhattan_recall value: 76.06837606837607 - type: max_accuracy value: 73.84834896045659 - type: max_ap value: 55.484124732566606 - type: max_f1 value: 57.55860953920776 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 67.03943120783973 - type: cos_sim_spearman value: 62.93971145260584 - type: euclidean_pearson value: 64.13947263916926 - type: euclidean_spearman value: 62.93972324235839 - type: manhattan_pearson value: 64.11295322654566 - type: manhattan_spearman value: 62.92816122293202 - task: type: STS dataset: name: MTEB SICK-R-PL type: PL-MTEB/sickr-pl-sts config: default split: test revision: None metrics: - type: cos_sim_pearson value: 67.75034167381077 - type: cos_sim_spearman value: 62.98158872758643 - type: euclidean_pearson value: 64.25794794439082 - type: euclidean_spearman value: 62.981566596223125 - type: manhattan_pearson value: 64.25439446502435 - type: manhattan_spearman value: 63.01301439900365 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 61.622204530882755 - type: cos_sim_spearman value: 65.4632047656541 - type: euclidean_pearson value: 59.21529585527598 - type: euclidean_spearman value: 65.4638163967956 - type: manhattan_pearson value: 59.39341472707122 - type: manhattan_spearman value: 65.57635757250173 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 60.329743331971486 - type: cos_sim_spearman value: 62.78607195958339 - type: euclidean_pearson value: 62.07415212138581 - type: euclidean_spearman value: 62.78618151904129 - type: manhattan_pearson value: 62.41250554765521 - type: manhattan_spearman value: 62.87580558029627 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 59.16277512775291 - type: cos_sim_spearman value: 57.53693422381856 - type: euclidean_pearson value: 57.85017283427473 - type: euclidean_spearman value: 57.53697385589326 - type: manhattan_pearson value: 58.049796184955596 - type: manhattan_spearman value: 57.76174789162225 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 74.42588553600197 - type: cos_sim_spearman value: 74.25087788257943 - type: euclidean_pearson value: 73.35436018935222 - type: euclidean_spearman value: 74.25087694991477 - type: manhattan_pearson value: 73.33747415771185 - type: manhattan_spearman value: 74.21504509447377 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 75.77242432372144 - type: cos_sim_spearman value: 75.72930700521489 - type: euclidean_pearson value: 75.6995220623788 - type: euclidean_spearman value: 75.72930646047212 - type: manhattan_pearson value: 75.65841087952896 - type: manhattan_spearman value: 75.69567692328437 - task: type: STS dataset: name: MTEB STS17 (ko-ko) type: mteb/sts17-crosslingual-sts config: ko-ko split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 66.2495297342053 - type: cos_sim_spearman value: 66.14124319602982 - type: euclidean_pearson value: 66.49498096178358 - type: euclidean_spearman value: 66.14121792287747 - type: manhattan_pearson value: 66.51560623835172 - type: manhattan_spearman value: 66.05794413582558 - task: type: STS dataset: name: MTEB STS17 (ar-ar) type: mteb/sts17-crosslingual-sts config: ar-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 75.0045186560239 - type: cos_sim_spearman value: 74.96504390762252 - type: euclidean_pearson value: 74.20988464347049 - type: euclidean_spearman value: 74.98114602301776 - type: manhattan_pearson value: 74.37929169860529 - type: manhattan_spearman value: 75.37049827509504 - task: type: STS dataset: name: MTEB STS17 (en-ar) type: mteb/sts17-crosslingual-sts config: en-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 73.88478151514396 - type: cos_sim_spearman value: 74.05322141272103 - type: euclidean_pearson value: 73.52175483343693 - type: euclidean_spearman value: 74.05322141272103 - type: manhattan_pearson value: 73.35875118828287 - type: manhattan_spearman value: 73.83972625384673 - task: type: STS dataset: name: MTEB STS17 (en-de) type: mteb/sts17-crosslingual-sts config: en-de split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 75.57014781622605 - type: cos_sim_spearman value: 74.95329129562734 - type: euclidean_pearson value: 75.5667786729257 - type: euclidean_spearman value: 74.95329129562734 - type: manhattan_pearson value: 75.39548673816147 - type: manhattan_spearman value: 74.89428642503749 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 80.04007129652777 - type: cos_sim_spearman value: 79.94429611477106 - type: euclidean_pearson value: 79.91583070858822 - type: euclidean_spearman value: 79.94429611477106 - type: manhattan_pearson value: 80.14382273152769 - type: manhattan_spearman value: 80.23862855392836 - task: type: STS dataset: name: MTEB STS17 (en-tr) type: mteb/sts17-crosslingual-sts config: en-tr split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 77.28740870194635 - type: cos_sim_spearman value: 77.18286391819586 - type: euclidean_pearson value: 77.05644328687119 - type: euclidean_spearman value: 77.18286391819586 - type: manhattan_pearson value: 77.15625898067294 - type: manhattan_spearman value: 77.03165154316278 - task: type: STS dataset: name: MTEB STS17 (es-en) type: mteb/sts17-crosslingual-sts config: es-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 72.99293002371301 - type: cos_sim_spearman value: 72.24657859872468 - type: euclidean_pearson value: 73.38839879755461 - type: euclidean_spearman value: 72.24657859872468 - type: manhattan_pearson value: 73.6627728800822 - type: manhattan_spearman value: 72.70893449698669 - task: type: STS dataset: name: MTEB STS17 (es-es) type: mteb/sts17-crosslingual-sts config: es-es split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 81.37213723705916 - type: cos_sim_spearman value: 80.64548512701263 - type: euclidean_pearson value: 80.94992193351284 - type: euclidean_spearman value: 80.64484963200427 - type: manhattan_pearson value: 80.92246813841794 - type: manhattan_spearman value: 80.68860823161657 - task: type: STS dataset: name: MTEB STS17 (fr-en) type: mteb/sts17-crosslingual-sts config: fr-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 77.54059604962391 - type: cos_sim_spearman value: 77.19559169700682 - type: euclidean_pearson value: 77.32739821317861 - type: euclidean_spearman value: 77.19559169700682 - type: manhattan_pearson value: 77.29224328831437 - type: manhattan_spearman value: 77.24394878313191 - task: type: STS dataset: name: MTEB STS17 (it-en) type: mteb/sts17-crosslingual-sts config: it-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 79.06397062195414 - type: cos_sim_spearman value: 78.66694637555244 - type: euclidean_pearson value: 79.34923290885872 - type: euclidean_spearman value: 78.66694637555244 - type: manhattan_pearson value: 79.50802161625809 - type: manhattan_spearman value: 78.79195213396169 - task: type: STS dataset: name: MTEB STS17 (nl-en) type: mteb/sts17-crosslingual-sts config: nl-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 78.66045829245238 - type: cos_sim_spearman value: 78.14055373851183 - type: euclidean_pearson value: 78.94489279300518 - type: euclidean_spearman value: 78.14055373851183 - type: manhattan_pearson value: 79.33473165536323 - type: manhattan_spearman value: 78.5783429705299 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 36.63454535818336 - type: cos_sim_spearman value: 47.12016162570126 - type: euclidean_pearson value: 39.07268779927362 - type: euclidean_spearman value: 47.12016162570126 - type: manhattan_pearson value: 41.723119770725944 - type: manhattan_spearman value: 47.90334362422989 - task: type: STS dataset: name: MTEB STS22 (de) type: mteb/sts22-crosslingual-sts config: de split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 13.325547358617957 - type: cos_sim_spearman value: 24.094051740693416 - type: euclidean_pearson value: 10.39110006005262 - type: euclidean_spearman value: 24.094051740693416 - type: manhattan_pearson value: 12.4380555005162 - type: manhattan_spearman value: 25.176800279885715 - task: type: STS dataset: name: MTEB STS22 (es) type: mteb/sts22-crosslingual-sts config: es split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 41.21281570342249 - type: cos_sim_spearman value: 55.397885077207974 - type: euclidean_pearson value: 43.96150945976646 - type: euclidean_spearman value: 55.397885077207974 - type: manhattan_pearson value: 49.58812224529121 - type: manhattan_spearman value: 55.35874879475974 - task: type: STS dataset: name: MTEB STS22 (pl) type: mteb/sts22-crosslingual-sts config: pl split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 5.985012243744998 - type: cos_sim_spearman value: 25.307464943919012 - type: euclidean_pearson value: -4.080537702499046 - type: euclidean_spearman value: 25.307464943919012 - type: manhattan_pearson value: -2.5058642304196543 - type: manhattan_spearman value: 26.751588484373233 - task: type: STS dataset: name: MTEB STS22 (tr) type: mteb/sts22-crosslingual-sts config: tr split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 34.44666578772084 - type: cos_sim_spearman value: 46.45977141800899 - type: euclidean_pearson value: 38.78305544036559 - type: euclidean_spearman value: 46.45977141800899 - type: manhattan_pearson value: 46.45101297876112 - type: manhattan_spearman value: 50.642972694093814 - task: type: STS dataset: name: MTEB STS22 (ar) type: mteb/sts22-crosslingual-sts config: ar split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 28.095327083873055 - type: cos_sim_spearman value: 40.24741745875892 - type: euclidean_pearson value: 29.141496784653892 - type: euclidean_spearman value: 40.24741745875892 - type: manhattan_pearson value: 32.013290716034064 - type: manhattan_spearman value: 40.85454084311211 - task: type: STS dataset: name: MTEB STS22 (ru) type: mteb/sts22-crosslingual-sts config: ru split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 27.46788309503312 - type: cos_sim_spearman value: 43.57385391855994 - type: euclidean_pearson value: 24.558349674326177 - type: euclidean_spearman value: 43.57385391855994 - type: manhattan_pearson value: 28.974505207055866 - type: manhattan_spearman value: 44.111553205713 - task: type: STS dataset: name: MTEB STS22 (zh) type: mteb/sts22-crosslingual-sts config: zh split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 34.87841073990563 - type: cos_sim_spearman value: 52.8221686505807 - type: euclidean_pearson value: 38.36114580544504 - type: euclidean_spearman value: 52.8221686505807 - type: manhattan_pearson value: 46.69329448756753 - type: manhattan_spearman value: 53.9140781097337 - task: type: STS dataset: name: MTEB STS22 (fr) type: mteb/sts22-crosslingual-sts config: fr split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 49.999267528357 - type: cos_sim_spearman value: 61.71837669697145 - type: euclidean_pearson value: 53.578476744372274 - type: euclidean_spearman value: 61.71837669697145 - type: manhattan_pearson value: 56.410294227490795 - type: manhattan_spearman value: 60.684457655864875 - task: type: STS dataset: name: MTEB STS22 (de-en) type: mteb/sts22-crosslingual-sts config: de-en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 22.43564137760586 - type: cos_sim_spearman value: 34.28346144104183 - type: euclidean_pearson value: 27.41326011184764 - type: euclidean_spearman value: 34.28346144104183 - type: manhattan_pearson value: 35.62923154232163 - type: manhattan_spearman value: 37.937151135297185 - task: type: STS dataset: name: MTEB STS22 (es-en) type: mteb/sts22-crosslingual-sts config: es-en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 44.34071611983998 - type: cos_sim_spearman value: 57.823185616169646 - type: euclidean_pearson value: 49.29310650157244 - type: euclidean_spearman value: 57.823185616169646 - type: manhattan_pearson value: 55.93298736518848 - type: manhattan_spearman value: 58.57556581684834 - task: type: STS dataset: name: MTEB STS22 (it) type: mteb/sts22-crosslingual-sts config: it split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 56.07027840344927 - type: cos_sim_spearman value: 62.20158260763411 - type: euclidean_pearson value: 55.887969718543616 - type: euclidean_spearman value: 62.20158260763411 - type: manhattan_pearson value: 56.081533365738444 - type: manhattan_spearman value: 62.018651361750685 - task: type: STS dataset: name: MTEB STS22 (pl-en) type: mteb/sts22-crosslingual-sts config: pl-en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 41.41816324477061 - type: cos_sim_spearman value: 44.71684955996943 - type: euclidean_pearson value: 42.74585025834968 - type: euclidean_spearman value: 44.71684955996943 - type: manhattan_pearson value: 47.992481632815256 - type: manhattan_spearman value: 46.18587933349126 - task: type: STS dataset: name: MTEB STS22 (zh-en) type: mteb/sts22-crosslingual-sts config: zh-en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 38.89140730917917 - type: cos_sim_spearman value: 49.18633779347391 - type: euclidean_pearson value: 43.27605428753535 - type: euclidean_spearman value: 49.18633779347391 - type: manhattan_pearson value: 48.22046568809415 - type: manhattan_spearman value: 49.248416391249464 - task: type: STS dataset: name: MTEB STS22 (es-it) type: mteb/sts22-crosslingual-sts config: es-it split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 40.31620568726327 - type: cos_sim_spearman value: 49.13034440774138 - type: euclidean_pearson value: 43.95169508285692 - type: euclidean_spearman value: 49.13034440774138 - type: manhattan_pearson value: 48.84250981398146 - type: manhattan_spearman value: 49.54216339903405 - task: type: STS dataset: name: MTEB STS22 (de-fr) type: mteb/sts22-crosslingual-sts config: de-fr split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 27.074582378144058 - type: cos_sim_spearman value: 41.29498619968451 - type: euclidean_pearson value: 28.993986097276505 - type: euclidean_spearman value: 41.29498619968451 - type: manhattan_pearson value: 32.079813951133254 - type: manhattan_spearman value: 43.664111732941464 - task: type: STS dataset: name: MTEB STS22 (de-pl) type: mteb/sts22-crosslingual-sts config: de-pl split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 6.864334110072116 - type: cos_sim_spearman value: 25.805458732687914 - type: euclidean_pearson value: 11.435920047618103 - type: euclidean_spearman value: 25.805458732687914 - type: manhattan_pearson value: 15.036308569899552 - type: manhattan_spearman value: 25.405135387192757 - task: type: STS dataset: name: MTEB STS22 (fr-pl) type: mteb/sts22-crosslingual-sts config: fr-pl split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 65.44029549925597 - type: cos_sim_spearman value: 61.97797868009122 - type: euclidean_pearson value: 65.92740669959876 - type: euclidean_spearman value: 61.97797868009122 - type: manhattan_pearson value: 70.29575044091207 - type: manhattan_spearman value: 73.24670207647144 - task: type: STS dataset: name: MTEB STSB type: C-MTEB/STSB config: default split: test revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0 metrics: - type: cos_sim_pearson value: 51.35413149349556 - type: cos_sim_spearman value: 50.175051356729924 - type: euclidean_pearson value: 53.12039152785364 - type: euclidean_spearman value: 50.174289111089685 - type: manhattan_pearson value: 53.0731746793555 - type: manhattan_spearman value: 50.15176393928403 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 67.84222983023291 - type: cos_sim_spearman value: 67.39086924655895 - type: euclidean_pearson value: 67.3393327127967 - type: euclidean_spearman value: 67.39088047106472 - type: manhattan_pearson value: 67.40316731822271 - type: manhattan_spearman value: 67.49067800994015 - task: type: Classification dataset: name: MTEB ScalaDaClassification type: ScandEval/scala-da config: default split: test revision: 1de08520a7b361e92ffa2a2201ebd41942c54675 metrics: - type: accuracy value: 50.62988281250001 - type: ap value: 50.32274824114816 - type: f1 value: 50.37741703766756 - task: type: Classification dataset: name: MTEB ScalaNbClassification type: ScandEval/scala-nb config: default split: test revision: 237111a078ad5a834a55c57803d40bbe410ed03b metrics: - type: accuracy value: 51.181640625 - type: ap value: 50.60884394099696 - type: f1 value: 50.866988720930415 - task: type: Classification dataset: name: MTEB ScalaNnClassification type: ScandEval/scala-nn config: default split: test revision: 9d9a2a4092ed3cacf0744592f6d2f32ab8ef4c0b metrics: - type: accuracy value: 50.9375 - type: ap value: 50.47969135089731 - type: f1 value: 50.62913552324756 - task: type: Classification dataset: name: MTEB ScalaSvClassification type: ScandEval/scala-sv config: default split: test revision: 1b48e3dcb02872335ff985ff938a054a4ed99008 metrics: - type: accuracy value: 51.1474609375 - type: ap value: 50.5894187272385 - type: f1 value: 50.901812392367916 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 68.36051662289248 - type: mrr value: 89.39224265204656 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 23.721999999999998 - type: map_at_10 value: 31.335 - type: map_at_100 value: 32.461 - type: map_at_1000 value: 32.557 - type: map_at_3 value: 29.282000000000004 - type: map_at_5 value: 30.602 - type: mrr_at_1 value: 24.667 - type: mrr_at_10 value: 32.363 - type: mrr_at_100 value: 33.421 - type: mrr_at_1000 value: 33.499 - type: mrr_at_3 value: 30.444 - type: mrr_at_5 value: 31.628 - type: ndcg_at_1 value: 24.667 - type: ndcg_at_10 value: 35.29 - type: ndcg_at_100 value: 40.665 - type: ndcg_at_1000 value: 43.241 - type: ndcg_at_3 value: 31.238 - type: ndcg_at_5 value: 33.486 - type: precision_at_1 value: 24.667 - type: precision_at_10 value: 5.1 - type: precision_at_100 value: 0.7969999999999999 - type: precision_at_1000 value: 0.10300000000000001 - type: precision_at_3 value: 12.667 - type: precision_at_5 value: 8.933 - type: recall_at_1 value: 23.721999999999998 - type: recall_at_10 value: 46.417 - type: recall_at_100 value: 70.944 - type: recall_at_1000 value: 91.033 - type: recall_at_3 value: 35.693999999999996 - type: recall_at_5 value: 40.944 - task: type: Retrieval dataset: name: MTEB SciFact-PL type: scifact-pl config: default split: test revision: None metrics: - type: map_at_1 value: 21.706 - type: map_at_10 value: 28.333000000000002 - type: map_at_100 value: 29.364 - type: map_at_1000 value: 29.451 - type: map_at_3 value: 26.112999999999996 - type: map_at_5 value: 27.502 - type: mrr_at_1 value: 23.0 - type: mrr_at_10 value: 29.555999999999997 - type: mrr_at_100 value: 30.536 - type: mrr_at_1000 value: 30.606 - type: mrr_at_3 value: 27.333000000000002 - type: mrr_at_5 value: 28.717 - type: ndcg_at_1 value: 23.0 - type: ndcg_at_10 value: 32.238 - type: ndcg_at_100 value: 37.785999999999994 - type: ndcg_at_1000 value: 40.266999999999996 - type: ndcg_at_3 value: 27.961000000000002 - type: ndcg_at_5 value: 30.322 - type: precision_at_1 value: 23.0 - type: precision_at_10 value: 4.7669999999999995 - type: precision_at_100 value: 0.787 - type: precision_at_1000 value: 0.10200000000000001 - type: precision_at_3 value: 11.444 - type: precision_at_5 value: 8.200000000000001 - type: recall_at_1 value: 21.706 - type: recall_at_10 value: 43.206 - type: recall_at_100 value: 69.678 - type: recall_at_1000 value: 89.333 - type: recall_at_3 value: 31.900000000000002 - type: recall_at_5 value: 37.594 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.5 - type: cos_sim_ap value: 77.07584309978081 - type: cos_sim_f1 value: 71.8864950078823 - type: cos_sim_precision value: 75.74750830564784 - type: cos_sim_recall value: 68.4 - type: dot_accuracy value: 99.5 - type: dot_ap value: 77.07584309978081 - type: dot_f1 value: 71.8864950078823 - type: dot_precision value: 75.74750830564784 - type: dot_recall value: 68.4 - type: euclidean_accuracy value: 99.5 - type: euclidean_ap value: 77.07584309978081 - type: euclidean_f1 value: 71.8864950078823 - type: euclidean_precision value: 75.74750830564784 - type: euclidean_recall value: 68.4 - type: manhattan_accuracy value: 99.50594059405941 - type: manhattan_ap value: 77.41658577240027 - type: manhattan_f1 value: 71.91374663072777 - type: manhattan_precision value: 78.01169590643275 - type: manhattan_recall value: 66.7 - type: max_accuracy value: 99.50594059405941 - type: max_ap value: 77.41658577240027 - type: max_f1 value: 71.91374663072777 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 46.32521494308228 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 20.573273825125266 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 38.612724125942385 - type: mrr value: 38.891130315762666 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.305330424238836 - type: cos_sim_spearman value: 30.556621737388685 - type: dot_pearson value: 29.30533056265583 - type: dot_spearman value: 30.556621737388685 - task: type: Classification dataset: name: MTEB SweRecClassification type: ScandEval/swerec-mini config: default split: test revision: 3c62f26bafdc4c4e1c16401ad4b32f0a94b46612 metrics: - type: accuracy value: 68.4716796875 - type: f1 value: 59.865730786092364 - task: type: Reranking dataset: name: MTEB T2Reranking type: C-MTEB/T2Reranking config: default split: dev revision: 76631901a18387f85eaa53e5450019b87ad58ef9 metrics: - type: map value: 55.34794621490011 - type: mrr value: 59.22764129348421 - task: type: Retrieval dataset: name: MTEB T2Retrieval type: C-MTEB/T2Retrieval config: default split: dev revision: 8731a845f1bf500a4f111cf1070785c793d10e64 metrics: - type: map_at_1 value: 0.586 - type: map_at_10 value: 0.819 - type: map_at_100 value: 0.8920000000000001 - type: map_at_1000 value: 0.928 - type: map_at_3 value: 0.729 - type: map_at_5 value: 0.771 - type: mrr_at_1 value: 1.9949999999999999 - type: mrr_at_10 value: 2.608 - type: mrr_at_100 value: 2.771 - type: mrr_at_1000 value: 2.8289999999999997 - type: mrr_at_3 value: 2.365 - type: mrr_at_5 value: 2.483 - type: ndcg_at_1 value: 1.9949999999999999 - type: ndcg_at_10 value: 1.314 - type: ndcg_at_100 value: 1.831 - type: ndcg_at_1000 value: 3.4139999999999997 - type: ndcg_at_3 value: 1.377 - type: ndcg_at_5 value: 1.2630000000000001 - type: precision_at_1 value: 1.9949999999999999 - type: precision_at_10 value: 0.488 - type: precision_at_100 value: 0.123 - type: precision_at_1000 value: 0.054 - type: precision_at_3 value: 1.027 - type: precision_at_5 value: 0.737 - type: recall_at_1 value: 0.586 - type: recall_at_10 value: 1.3390000000000002 - type: recall_at_100 value: 3.15 - type: recall_at_1000 value: 11.859 - type: recall_at_3 value: 0.8710000000000001 - type: recall_at_5 value: 1.0290000000000001 - task: type: Classification dataset: name: MTEB TNews type: C-MTEB/TNews-classification config: default split: validation revision: 317f262bf1e6126357bbe89e875451e4b0938fe4 metrics: - type: accuracy value: 40.946 - type: f1 value: 39.56517169731474 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.08499999999999999 - type: map_at_10 value: 0.462 - type: map_at_100 value: 0.893 - type: map_at_1000 value: 1.129 - type: map_at_3 value: 0.232 - type: map_at_5 value: 0.3 - type: mrr_at_1 value: 38.0 - type: mrr_at_10 value: 50.629999999999995 - type: mrr_at_100 value: 51.315999999999995 - type: mrr_at_1000 value: 51.365 - type: mrr_at_3 value: 47.0 - type: mrr_at_5 value: 48.9 - type: ndcg_at_1 value: 31.0 - type: ndcg_at_10 value: 24.823 - type: ndcg_at_100 value: 10.583 - type: ndcg_at_1000 value: 6.497999999999999 - type: ndcg_at_3 value: 30.95 - type: ndcg_at_5 value: 27.899 - type: precision_at_1 value: 38.0 - type: precision_at_10 value: 25.6 - type: precision_at_100 value: 8.98 - type: precision_at_1000 value: 2.248 - type: precision_at_3 value: 34.666999999999994 - type: precision_at_5 value: 29.599999999999998 - type: recall_at_1 value: 0.08499999999999999 - type: recall_at_10 value: 0.641 - type: recall_at_100 value: 2.002 - type: recall_at_1000 value: 4.902 - type: recall_at_3 value: 0.28200000000000003 - type: recall_at_5 value: 0.379 - task: type: Retrieval dataset: name: MTEB TRECCOVID-PL type: trec-covid-pl config: default split: test revision: None metrics: - type: map_at_1 value: 0.124 - type: map_at_10 value: 0.45199999999999996 - type: map_at_100 value: 0.874 - type: map_at_1000 value: 1.1039999999999999 - type: map_at_3 value: 0.253 - type: map_at_5 value: 0.32299999999999995 - type: mrr_at_1 value: 36.0 - type: mrr_at_10 value: 47.56 - type: mrr_at_100 value: 48.532 - type: mrr_at_1000 value: 48.579 - type: mrr_at_3 value: 45.0 - type: mrr_at_5 value: 45.5 - type: ndcg_at_1 value: 34.0 - type: ndcg_at_10 value: 24.529 - type: ndcg_at_100 value: 10.427 - type: ndcg_at_1000 value: 6.457 - type: ndcg_at_3 value: 31.173000000000002 - type: ndcg_at_5 value: 27.738000000000003 - type: precision_at_1 value: 38.0 - type: precision_at_10 value: 25.4 - type: precision_at_100 value: 8.88 - type: precision_at_1000 value: 2.2159999999999997 - type: precision_at_3 value: 34.666999999999994 - type: precision_at_5 value: 29.2 - type: recall_at_1 value: 0.124 - type: recall_at_10 value: 0.618 - type: recall_at_100 value: 1.9349999999999998 - type: recall_at_1000 value: 4.808 - type: recall_at_3 value: 0.28300000000000003 - type: recall_at_5 value: 0.382 - task: type: BitextMining dataset: name: MTEB Tatoeba (sqi-eng) type: mteb/tatoeba-bitext-mining config: sqi-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.9 - type: f1 value: 98.55000000000001 - type: precision value: 98.38333333333334 - type: recall value: 98.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (fry-eng) type: mteb/tatoeba-bitext-mining config: fry-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.3179190751445 - type: f1 value: 59.44582071749702 - type: precision value: 57.49678869621066 - type: recall value: 65.3179190751445 - task: type: BitextMining dataset: name: MTEB Tatoeba (kur-eng) type: mteb/tatoeba-bitext-mining config: kur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 38.53658536585366 - type: f1 value: 34.217555952803785 - type: precision value: 32.96511296649355 - type: recall value: 38.53658536585366 - task: type: BitextMining dataset: name: MTEB Tatoeba (tur-eng) type: mteb/tatoeba-bitext-mining config: tur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.7 - type: f1 value: 98.26666666666665 - type: precision value: 98.05 - type: recall value: 98.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (deu-eng) type: mteb/tatoeba-bitext-mining config: deu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 99.3 - type: f1 value: 99.13333333333333 - type: precision value: 99.05000000000001 - type: recall value: 99.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (nld-eng) type: mteb/tatoeba-bitext-mining config: nld-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.89999999999999 - type: f1 value: 97.2 - type: precision value: 96.85000000000001 - type: recall value: 97.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ron-eng) type: mteb/tatoeba-bitext-mining config: ron-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.2 - type: f1 value: 97.6 - type: precision value: 97.3 - type: recall value: 98.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (ang-eng) type: mteb/tatoeba-bitext-mining config: ang-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 52.23880597014925 - type: f1 value: 46.340992406389105 - type: precision value: 44.556384742951906 - type: recall value: 52.23880597014925 - task: type: BitextMining dataset: name: MTEB Tatoeba (ido-eng) type: mteb/tatoeba-bitext-mining config: ido-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.0 - type: f1 value: 93.67000000000002 - type: precision value: 93.075 - type: recall value: 95.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (jav-eng) type: mteb/tatoeba-bitext-mining config: jav-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.29268292682927 - type: f1 value: 85.76422764227642 - type: precision value: 84.84204413472706 - type: recall value: 88.29268292682927 - task: type: BitextMining dataset: name: MTEB Tatoeba (isl-eng) type: mteb/tatoeba-bitext-mining config: isl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.2 - type: f1 value: 96.46666666666667 - type: precision value: 96.1 - type: recall value: 97.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (slv-eng) type: mteb/tatoeba-bitext-mining config: slv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.8408262454435 - type: f1 value: 95.9902794653706 - type: precision value: 95.56500607533415 - type: recall value: 96.8408262454435 - task: type: BitextMining dataset: name: MTEB Tatoeba (cym-eng) type: mteb/tatoeba-bitext-mining config: cym-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.3913043478261 - type: f1 value: 91.30434782608695 - type: precision value: 90.28985507246377 - type: recall value: 93.3913043478261 - task: type: BitextMining dataset: name: MTEB Tatoeba (kaz-eng) type: mteb/tatoeba-bitext-mining config: kaz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.6086956521739 - type: f1 value: 88.1159420289855 - type: precision value: 86.9623188405797 - type: recall value: 90.6086956521739 - task: type: BitextMining dataset: name: MTEB Tatoeba (est-eng) type: mteb/tatoeba-bitext-mining config: est-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.16666666666667 - type: precision value: 96.86666666666667 - type: recall value: 97.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (heb-eng) type: mteb/tatoeba-bitext-mining config: heb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.0 - type: f1 value: 92.34 - type: precision value: 91.54166666666667 - type: recall value: 94.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (gla-eng) type: mteb/tatoeba-bitext-mining config: gla-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.92159227985525 - type: f1 value: 80.8868975817106 - type: precision value: 79.11540008041817 - type: recall value: 84.92159227985525 - task: type: BitextMining dataset: name: MTEB Tatoeba (mar-eng) type: mteb/tatoeba-bitext-mining config: mar-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.35 - type: precision value: 92.58333333333334 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (lat-eng) type: mteb/tatoeba-bitext-mining config: lat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 43.3 - type: f1 value: 36.64473116255726 - type: precision value: 34.64017752457381 - type: recall value: 43.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (bel-eng) type: mteb/tatoeba-bitext-mining config: bel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.7 - type: f1 value: 95.68333333333332 - type: precision value: 95.19999999999999 - type: recall value: 96.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (pms-eng) type: mteb/tatoeba-bitext-mining config: pms-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 70.47619047619048 - type: f1 value: 66.63032734461306 - type: precision value: 65.46459191863879 - type: recall value: 70.47619047619048 - task: type: BitextMining dataset: name: MTEB Tatoeba (gle-eng) type: mteb/tatoeba-bitext-mining config: gle-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.5 - type: f1 value: 91.63 - type: precision value: 90.75 - type: recall value: 93.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (pes-eng) type: mteb/tatoeba-bitext-mining config: pes-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.5 - type: f1 value: 94.36666666666666 - type: precision value: 93.83333333333333 - type: recall value: 95.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (nob-eng) type: mteb/tatoeba-bitext-mining config: nob-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 99.3 - type: f1 value: 99.06666666666666 - type: precision value: 98.95 - type: recall value: 99.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (bul-eng) type: mteb/tatoeba-bitext-mining config: bul-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.8 - type: f1 value: 94.51666666666667 - type: precision value: 93.88333333333334 - type: recall value: 95.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (cbk-eng) type: mteb/tatoeba-bitext-mining config: cbk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.0 - type: f1 value: 80.46675324675326 - type: precision value: 78.95999999999998 - type: recall value: 84.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (hun-eng) type: mteb/tatoeba-bitext-mining config: hun-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.7 - type: f1 value: 96.93333333333332 - type: precision value: 96.55 - type: recall value: 97.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (uig-eng) type: mteb/tatoeba-bitext-mining config: uig-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.10000000000001 - type: f1 value: 90.07333333333334 - type: precision value: 89.16166666666668 - type: recall value: 92.10000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (rus-eng) type: mteb/tatoeba-bitext-mining config: rus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.6 - type: f1 value: 94.35 - type: precision value: 93.75 - type: recall value: 95.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (spa-eng) type: mteb/tatoeba-bitext-mining config: spa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.9 - type: f1 value: 98.53333333333335 - type: precision value: 98.35000000000001 - type: recall value: 98.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (hye-eng) type: mteb/tatoeba-bitext-mining config: hye-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.22641509433963 - type: f1 value: 95.14824797843666 - type: precision value: 94.60916442048517 - type: recall value: 96.22641509433963 - task: type: BitextMining dataset: name: MTEB Tatoeba (tel-eng) type: mteb/tatoeba-bitext-mining config: tel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.58974358974359 - type: f1 value: 91.59544159544159 - type: precision value: 90.66951566951566 - type: recall value: 93.58974358974359 - task: type: BitextMining dataset: name: MTEB Tatoeba (afr-eng) type: mteb/tatoeba-bitext-mining config: afr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.1 - type: f1 value: 97.46666666666668 - type: precision value: 97.15 - type: recall value: 98.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (mon-eng) type: mteb/tatoeba-bitext-mining config: mon-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.4090909090909 - type: f1 value: 91.5909090909091 - type: precision value: 90.71969696969697 - type: recall value: 93.4090909090909 - task: type: BitextMining dataset: name: MTEB Tatoeba (arz-eng) type: mteb/tatoeba-bitext-mining config: arz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.51781970649894 - type: f1 value: 86.76150544075072 - type: precision value: 85.55206149545772 - type: recall value: 89.51781970649894 - task: type: BitextMining dataset: name: MTEB Tatoeba (hrv-eng) type: mteb/tatoeba-bitext-mining config: hrv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.2 - type: f1 value: 97.65 - type: precision value: 97.38333333333333 - type: recall value: 98.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (nov-eng) type: mteb/tatoeba-bitext-mining config: nov-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 75.87548638132296 - type: f1 value: 71.24698906800073 - type: precision value: 69.66572338167668 - type: recall value: 75.87548638132296 - task: type: BitextMining dataset: name: MTEB Tatoeba (gsw-eng) type: mteb/tatoeba-bitext-mining config: gsw-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 61.53846153846154 - type: f1 value: 54.83234714003944 - type: precision value: 52.06552706552707 - type: recall value: 61.53846153846154 - task: type: BitextMining dataset: name: MTEB Tatoeba (nds-eng) type: mteb/tatoeba-bitext-mining config: nds-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 59.199999999999996 - type: f1 value: 54.183211233211225 - type: precision value: 52.48751719986241 - type: recall value: 59.199999999999996 - task: type: BitextMining dataset: name: MTEB Tatoeba (ukr-eng) type: mteb/tatoeba-bitext-mining config: ukr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.6 - type: f1 value: 94.3 - type: precision value: 93.65 - type: recall value: 95.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (uzb-eng) type: mteb/tatoeba-bitext-mining config: uzb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.85046728971963 - type: f1 value: 85.25700934579439 - type: precision value: 84.09267912772586 - type: recall value: 87.85046728971963 - task: type: BitextMining dataset: name: MTEB Tatoeba (lit-eng) type: mteb/tatoeba-bitext-mining config: lit-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.0 - type: f1 value: 97.43333333333332 - type: precision value: 97.15 - type: recall value: 98.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (ina-eng) type: mteb/tatoeba-bitext-mining config: ina-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.8 - type: f1 value: 88.66055555555555 - type: precision value: 87.81845238095238 - type: recall value: 90.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (lfn-eng) type: mteb/tatoeba-bitext-mining config: lfn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 70.6 - type: f1 value: 65.538895353013 - type: precision value: 63.69531394330308 - type: recall value: 70.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (zsm-eng) type: mteb/tatoeba-bitext-mining config: zsm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.89999999999999 - type: f1 value: 96.06666666666668 - type: precision value: 95.68333333333334 - type: recall value: 96.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ita-eng) type: mteb/tatoeba-bitext-mining config: ita-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.8 - type: f1 value: 95.95 - type: precision value: 95.55 - type: recall value: 96.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (cmn-eng) type: mteb/tatoeba-bitext-mining config: cmn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.19999999999999 - type: f1 value: 93.8 - type: precision value: 93.13333333333334 - type: recall value: 95.19999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (lvs-eng) type: mteb/tatoeba-bitext-mining config: lvs-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.5 - type: f1 value: 95.45 - type: precision value: 94.93333333333334 - type: recall value: 96.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (glg-eng) type: mteb/tatoeba-bitext-mining config: glg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.89999999999999 - type: f1 value: 97.28333333333332 - type: precision value: 96.98333333333333 - type: recall value: 97.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ceb-eng) type: mteb/tatoeba-bitext-mining config: ceb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 78.16666666666666 - type: f1 value: 74.67336721249764 - type: precision value: 73.26035353535354 - type: recall value: 78.16666666666666 - task: type: BitextMining dataset: name: MTEB Tatoeba (bre-eng) type: mteb/tatoeba-bitext-mining config: bre-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 11.200000000000001 - type: f1 value: 8.48123815073815 - type: precision value: 7.843657708032708 - type: recall value: 11.200000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (ben-eng) type: mteb/tatoeba-bitext-mining config: ben-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.3 - type: f1 value: 89.02333333333333 - type: precision value: 87.97500000000001 - type: recall value: 91.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (swg-eng) type: mteb/tatoeba-bitext-mining config: swg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 72.32142857142857 - type: f1 value: 67.69209956709956 - type: precision value: 66.19047619047619 - type: recall value: 72.32142857142857 - task: type: BitextMining dataset: name: MTEB Tatoeba (arq-eng) type: mteb/tatoeba-bitext-mining config: arq-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 79.69264544456641 - type: f1 value: 75.40693115885212 - type: precision value: 73.67544822539335 - type: recall value: 79.69264544456641 - task: type: BitextMining dataset: name: MTEB Tatoeba (kab-eng) type: mteb/tatoeba-bitext-mining config: kab-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.8 - type: f1 value: 83.65666666666667 - type: precision value: 82.24833333333333 - type: recall value: 86.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (fra-eng) type: mteb/tatoeba-bitext-mining config: fra-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.36666666666666 - type: precision value: 94.86666666666666 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (por-eng) type: mteb/tatoeba-bitext-mining config: por-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.3 - type: f1 value: 95.49 - type: precision value: 95.10833333333333 - type: recall value: 96.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (tat-eng) type: mteb/tatoeba-bitext-mining config: tat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.60000000000001 - type: f1 value: 87.04746031746032 - type: precision value: 85.89583333333333 - type: recall value: 89.60000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (oci-eng) type: mteb/tatoeba-bitext-mining config: oci-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.9 - type: f1 value: 84.57088023088022 - type: precision value: 83.6475 - type: recall value: 86.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (pol-eng) type: mteb/tatoeba-bitext-mining config: pol-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.2 - type: f1 value: 97.7 - type: precision value: 97.46666666666668 - type: recall value: 98.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (war-eng) type: mteb/tatoeba-bitext-mining config: war-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.39999999999999 - type: f1 value: 82.83333333333333 - type: precision value: 81.80137426900586 - type: recall value: 85.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (aze-eng) type: mteb/tatoeba-bitext-mining config: aze-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.4 - type: f1 value: 89.11999999999999 - type: precision value: 88.12777777777778 - type: recall value: 91.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (vie-eng) type: mteb/tatoeba-bitext-mining config: vie-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.16666666666669 - type: precision value: 96.85000000000001 - type: recall value: 97.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (nno-eng) type: mteb/tatoeba-bitext-mining config: nno-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.89999999999999 - type: f1 value: 97.30666666666666 - type: precision value: 97.02499999999999 - type: recall value: 97.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (cha-eng) type: mteb/tatoeba-bitext-mining config: cha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 27.00729927007299 - type: f1 value: 25.114895917815623 - type: precision value: 24.602283361407448 - type: recall value: 27.00729927007299 - task: type: BitextMining dataset: name: MTEB Tatoeba (mhr-eng) type: mteb/tatoeba-bitext-mining config: mhr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 14.099999999999998 - type: f1 value: 11.869284007509814 - type: precision value: 11.199695454818405 - type: recall value: 14.099999999999998 - task: type: BitextMining dataset: name: MTEB Tatoeba (dan-eng) type: mteb/tatoeba-bitext-mining config: dan-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.7 - type: f1 value: 97.09 - type: precision value: 96.80833333333332 - type: recall value: 97.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (ell-eng) type: mteb/tatoeba-bitext-mining config: ell-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.5 - type: f1 value: 95.47333333333333 - type: precision value: 94.975 - type: recall value: 96.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (amh-eng) type: mteb/tatoeba-bitext-mining config: amh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.45238095238095 - type: f1 value: 91.66666666666666 - type: precision value: 90.77380952380952 - type: recall value: 93.45238095238095 - task: type: BitextMining dataset: name: MTEB Tatoeba (pam-eng) type: mteb/tatoeba-bitext-mining config: pam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 11.899999999999999 - type: f1 value: 10.303261315113037 - type: precision value: 9.902986584515606 - type: recall value: 11.899999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (hsb-eng) type: mteb/tatoeba-bitext-mining config: hsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81.57349896480332 - type: f1 value: 77.86519438693352 - type: precision value: 76.35595081247254 - type: recall value: 81.57349896480332 - task: type: BitextMining dataset: name: MTEB Tatoeba (srp-eng) type: mteb/tatoeba-bitext-mining config: srp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 94.86666666666667 - type: precision value: 94.25 - type: recall value: 96.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (epo-eng) type: mteb/tatoeba-bitext-mining config: epo-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.8 - type: f1 value: 98.46666666666667 - type: precision value: 98.3 - type: recall value: 98.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (kzj-eng) type: mteb/tatoeba-bitext-mining config: kzj-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 10.7 - type: f1 value: 8.621683883854935 - type: precision value: 8.188292731521031 - type: recall value: 10.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (awa-eng) type: mteb/tatoeba-bitext-mining config: awa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.47619047619048 - type: f1 value: 87.8581735724593 - type: precision value: 86.72438672438673 - type: recall value: 90.47619047619048 - task: type: BitextMining dataset: name: MTEB Tatoeba (fao-eng) type: mteb/tatoeba-bitext-mining config: fao-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.0381679389313 - type: f1 value: 93.60050890585242 - type: precision value: 92.970737913486 - type: recall value: 95.0381679389313 - task: type: BitextMining dataset: name: MTEB Tatoeba (mal-eng) type: mteb/tatoeba-bitext-mining config: mal-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.2532751091703 - type: f1 value: 97.67103347889375 - type: precision value: 97.37991266375546 - type: recall value: 98.2532751091703 - task: type: BitextMining dataset: name: MTEB Tatoeba (ile-eng) type: mteb/tatoeba-bitext-mining config: ile-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.6 - type: f1 value: 80.99904761904763 - type: precision value: 79.54634920634919 - type: recall value: 84.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (bos-eng) type: mteb/tatoeba-bitext-mining config: bos-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.89265536723164 - type: f1 value: 95.90395480225989 - type: precision value: 95.4331450094162 - type: recall value: 96.89265536723164 - task: type: BitextMining dataset: name: MTEB Tatoeba (cor-eng) type: mteb/tatoeba-bitext-mining config: cor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 12.6 - type: f1 value: 9.981918087824628 - type: precision value: 9.326319147606549 - type: recall value: 12.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cat-eng) type: mteb/tatoeba-bitext-mining config: cat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.65 - type: precision value: 96.28333333333333 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (eus-eng) type: mteb/tatoeba-bitext-mining config: eus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.5 - type: f1 value: 95.38333333333333 - type: precision value: 94.83333333333333 - type: recall value: 96.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (yue-eng) type: mteb/tatoeba-bitext-mining config: yue-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.8 - type: f1 value: 88.43666666666665 - type: precision value: 87.395 - type: recall value: 90.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (swe-eng) type: mteb/tatoeba-bitext-mining config: swe-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.7 - type: f1 value: 97.03333333333333 - type: precision value: 96.71666666666667 - type: recall value: 97.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (dtp-eng) type: mteb/tatoeba-bitext-mining config: dtp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 9.4 - type: f1 value: 7.946889105220061 - type: precision value: 7.665059865752875 - type: recall value: 9.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (kat-eng) type: mteb/tatoeba-bitext-mining config: kat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.04021447721179 - type: f1 value: 93.68632707774799 - type: precision value: 93.08534405719392 - type: recall value: 95.04021447721179 - task: type: BitextMining dataset: name: MTEB Tatoeba (jpn-eng) type: mteb/tatoeba-bitext-mining config: jpn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.89999999999999 - type: f1 value: 94.66666666666667 - type: precision value: 94.08333333333334 - type: recall value: 95.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (csb-eng) type: mteb/tatoeba-bitext-mining config: csb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.6086956521739 - type: f1 value: 77.98418972332016 - type: precision value: 75.96837944664031 - type: recall value: 82.6086956521739 - task: type: BitextMining dataset: name: MTEB Tatoeba (xho-eng) type: mteb/tatoeba-bitext-mining config: xho-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.77464788732394 - type: f1 value: 94.8356807511737 - type: precision value: 94.36619718309859 - type: recall value: 95.77464788732394 - task: type: BitextMining dataset: name: MTEB Tatoeba (orv-eng) type: mteb/tatoeba-bitext-mining config: orv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 53.17365269461077 - type: f1 value: 47.07043056743655 - type: precision value: 45.161363241830784 - type: recall value: 53.17365269461077 - task: type: BitextMining dataset: name: MTEB Tatoeba (ind-eng) type: mteb/tatoeba-bitext-mining config: ind-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.5 - type: f1 value: 94.5 - type: precision value: 94.03333333333333 - type: recall value: 95.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (tuk-eng) type: mteb/tatoeba-bitext-mining config: tuk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.59605911330048 - type: f1 value: 91.82266009852216 - type: precision value: 91.09195402298852 - type: recall value: 93.59605911330048 - task: type: BitextMining dataset: name: MTEB Tatoeba (max-eng) type: mteb/tatoeba-bitext-mining config: max-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.40845070422534 - type: f1 value: 72.73082942097027 - type: precision value: 71.46686939820742 - type: recall value: 76.40845070422534 - task: type: BitextMining dataset: name: MTEB Tatoeba (swh-eng) type: mteb/tatoeba-bitext-mining config: swh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.58974358974359 - type: f1 value: 91.98290598290598 - type: precision value: 91.3119658119658 - type: recall value: 93.58974358974359 - task: type: BitextMining dataset: name: MTEB Tatoeba (hin-eng) type: mteb/tatoeba-bitext-mining config: hin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.06666666666668 - type: precision value: 96.7 - type: recall value: 97.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (dsb-eng) type: mteb/tatoeba-bitext-mining config: dsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 68.89352818371609 - type: f1 value: 64.47860652453555 - type: precision value: 62.878651918592574 - type: recall value: 68.89352818371609 - task: type: BitextMining dataset: name: MTEB Tatoeba (ber-eng) type: mteb/tatoeba-bitext-mining config: ber-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 33.800000000000004 - type: f1 value: 29.290774344112368 - type: precision value: 28.066016735704647 - type: recall value: 33.800000000000004 - task: type: BitextMining dataset: name: MTEB Tatoeba (tam-eng) type: mteb/tatoeba-bitext-mining config: tam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.22801302931596 - type: f1 value: 88.07817589576547 - type: precision value: 87.171552660152 - type: recall value: 90.22801302931596 - task: type: BitextMining dataset: name: MTEB Tatoeba (slk-eng) type: mteb/tatoeba-bitext-mining config: slk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.2 - type: f1 value: 97.63333333333334 - type: precision value: 97.36666666666667 - type: recall value: 98.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (tgl-eng) type: mteb/tatoeba-bitext-mining config: tgl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.7 - type: f1 value: 96.95 - type: precision value: 96.58333333333331 - type: recall value: 97.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (ast-eng) type: mteb/tatoeba-bitext-mining config: ast-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.91338582677166 - type: f1 value: 90.81364829396327 - type: precision value: 89.89501312335958 - type: recall value: 92.91338582677166 - task: type: BitextMining dataset: name: MTEB Tatoeba (mkd-eng) type: mteb/tatoeba-bitext-mining config: mkd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.89999999999999 - type: f1 value: 95.98333333333332 - type: precision value: 95.56666666666668 - type: recall value: 96.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (khm-eng) type: mteb/tatoeba-bitext-mining config: khm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.51523545706371 - type: f1 value: 70.20346919931407 - type: precision value: 68.6389565788895 - type: recall value: 74.51523545706371 - task: type: BitextMining dataset: name: MTEB Tatoeba (ces-eng) type: mteb/tatoeba-bitext-mining config: ces-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.6 - type: f1 value: 96.88333333333333 - type: precision value: 96.53333333333333 - type: recall value: 97.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (tzl-eng) type: mteb/tatoeba-bitext-mining config: tzl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 46.15384615384615 - type: f1 value: 39.47885447885448 - type: precision value: 37.301528599605525 - type: recall value: 46.15384615384615 - task: type: BitextMining dataset: name: MTEB Tatoeba (urd-eng) type: mteb/tatoeba-bitext-mining config: urd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.69999999999999 - type: f1 value: 93.16666666666667 - type: precision value: 92.41666666666667 - type: recall value: 94.69999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ara-eng) type: mteb/tatoeba-bitext-mining config: ara-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.19999999999999 - type: f1 value: 93.83333333333333 - type: precision value: 93.16666666666667 - type: recall value: 95.19999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (kor-eng) type: mteb/tatoeba-bitext-mining config: kor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.0 - type: f1 value: 89.98666666666666 - type: precision value: 89.09166666666667 - type: recall value: 92.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (yid-eng) type: mteb/tatoeba-bitext-mining config: yid-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.51886792452831 - type: f1 value: 94.3003144654088 - type: precision value: 93.75 - type: recall value: 95.51886792452831 - task: type: BitextMining dataset: name: MTEB Tatoeba (fin-eng) type: mteb/tatoeba-bitext-mining config: fin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.2 - type: f1 value: 97.83333333333333 - type: precision value: 97.65 - type: recall value: 98.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (tha-eng) type: mteb/tatoeba-bitext-mining config: tha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.8978102189781 - type: f1 value: 96.04622871046227 - type: precision value: 95.62043795620438 - type: recall value: 96.8978102189781 - task: type: BitextMining dataset: name: MTEB Tatoeba (wuu-eng) type: mteb/tatoeba-bitext-mining config: wuu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.1 - type: f1 value: 81.78564213564214 - type: precision value: 80.46416666666667 - type: recall value: 85.1 - task: type: Clustering dataset: name: MTEB TenKGnadClusteringP2P type: slvnwhrl/tenkgnad-clustering-p2p config: default split: test revision: 5c59e41555244b7e45c9a6be2d720ab4bafae558 metrics: - type: v_measure value: 21.827519839402644 - task: type: Clustering dataset: name: MTEB TenKGnadClusteringS2S type: slvnwhrl/tenkgnad-clustering-s2s config: default split: test revision: 6cddbe003f12b9b140aec477b583ac4191f01786 metrics: - type: v_measure value: 27.160188241713684 - task: type: Clustering dataset: name: MTEB ThuNewsClusteringP2P type: C-MTEB/ThuNewsClusteringP2P config: default split: test revision: 5798586b105c0434e4f0fe5e767abe619442cf93 metrics: - type: v_measure value: 38.54459276932986 - task: type: Clustering dataset: name: MTEB ThuNewsClusteringS2S type: C-MTEB/ThuNewsClusteringS2S config: default split: test revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d metrics: - type: v_measure value: 43.4460576234314 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 0.20500000000000002 - type: map_at_10 value: 0.391 - type: map_at_100 value: 0.612 - type: map_at_1000 value: 0.645 - type: map_at_3 value: 0.302 - type: map_at_5 value: 0.383 - type: mrr_at_1 value: 4.082 - type: mrr_at_10 value: 5.612 - type: mrr_at_100 value: 6.822 - type: mrr_at_1000 value: 6.929 - type: mrr_at_3 value: 4.082 - type: mrr_at_5 value: 5.408 - type: ndcg_at_1 value: 4.082 - type: ndcg_at_10 value: 1.6840000000000002 - type: ndcg_at_100 value: 2.876 - type: ndcg_at_1000 value: 4.114 - type: ndcg_at_3 value: 2.52 - type: ndcg_at_5 value: 2.3720000000000003 - type: precision_at_1 value: 4.082 - type: precision_at_10 value: 1.429 - type: precision_at_100 value: 0.755 - type: precision_at_1000 value: 0.18 - type: precision_at_3 value: 2.041 - type: precision_at_5 value: 2.4490000000000003 - type: recall_at_1 value: 0.20500000000000002 - type: recall_at_10 value: 0.761 - type: recall_at_100 value: 4.423 - type: recall_at_1000 value: 9.044 - type: recall_at_3 value: 0.302 - type: recall_at_5 value: 0.683 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 67.28359999999999 - type: ap value: 12.424592214862038 - type: f1 value: 51.53630450055703 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 56.23372948500284 - type: f1 value: 56.440924587214234 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 24.410059815620116 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 80.3302139834297 - type: cos_sim_ap value: 53.57723069745093 - type: cos_sim_f1 value: 51.58639580004565 - type: cos_sim_precision value: 45.45454545454545 - type: cos_sim_recall value: 59.63060686015831 - type: dot_accuracy value: 80.3302139834297 - type: dot_ap value: 53.57723006705641 - type: dot_f1 value: 51.58639580004565 - type: dot_precision value: 45.45454545454545 - type: dot_recall value: 59.63060686015831 - type: euclidean_accuracy value: 80.3302139834297 - type: euclidean_ap value: 53.57723050286929 - type: euclidean_f1 value: 51.58639580004565 - type: euclidean_precision value: 45.45454545454545 - type: euclidean_recall value: 59.63060686015831 - type: manhattan_accuracy value: 80.31233235977827 - type: manhattan_ap value: 53.44943961562638 - type: manhattan_f1 value: 51.24183006535947 - type: manhattan_precision value: 43.63636363636363 - type: manhattan_recall value: 62.05804749340369 - type: max_accuracy value: 80.3302139834297 - type: max_ap value: 53.57723069745093 - type: max_f1 value: 51.58639580004565 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 87.45876508712695 - type: cos_sim_ap value: 83.5320716566614 - type: cos_sim_f1 value: 75.54560716284276 - type: cos_sim_precision value: 73.27929362379678 - type: cos_sim_recall value: 77.95657530027718 - type: dot_accuracy value: 87.45876508712695 - type: dot_ap value: 83.53209944887666 - type: dot_f1 value: 75.54560716284276 - type: dot_precision value: 73.27929362379678 - type: dot_recall value: 77.95657530027718 - type: euclidean_accuracy value: 87.45876508712695 - type: euclidean_ap value: 83.53205938307582 - type: euclidean_f1 value: 75.54560716284276 - type: euclidean_precision value: 73.27929362379678 - type: euclidean_recall value: 77.95657530027718 - type: manhattan_accuracy value: 87.52280048123569 - type: manhattan_ap value: 83.4884324728773 - type: manhattan_f1 value: 75.43366677906411 - type: manhattan_precision value: 73.46566445303948 - type: manhattan_recall value: 77.51000923929782 - type: max_accuracy value: 87.52280048123569 - type: max_ap value: 83.53209944887666 - type: max_f1 value: 75.54560716284276 - task: type: Retrieval dataset: name: MTEB VideoRetrieval type: C-MTEB/VideoRetrieval config: default split: dev revision: 58c2597a5943a2ba48f4668c3b90d796283c5639 metrics: - type: map_at_1 value: 13.100000000000001 - type: map_at_10 value: 15.620000000000001 - type: map_at_100 value: 15.928 - type: map_at_1000 value: 15.976 - type: map_at_3 value: 14.817 - type: map_at_5 value: 15.322 - type: mrr_at_1 value: 13.0 - type: mrr_at_10 value: 15.57 - type: mrr_at_100 value: 15.878 - type: mrr_at_1000 value: 15.926000000000002 - type: mrr_at_3 value: 14.767 - type: mrr_at_5 value: 15.272 - type: ndcg_at_1 value: 13.100000000000001 - type: ndcg_at_10 value: 17.05 - type: ndcg_at_100 value: 18.801000000000002 - type: ndcg_at_1000 value: 20.436 - type: ndcg_at_3 value: 15.425 - type: ndcg_at_5 value: 16.333000000000002 - type: precision_at_1 value: 13.100000000000001 - type: precision_at_10 value: 2.16 - type: precision_at_100 value: 0.304 - type: precision_at_1000 value: 0.044000000000000004 - type: precision_at_3 value: 5.733 - type: precision_at_5 value: 3.88 - type: recall_at_1 value: 13.100000000000001 - type: recall_at_10 value: 21.6 - type: recall_at_100 value: 30.4 - type: recall_at_1000 value: 44.1 - type: recall_at_3 value: 17.2 - type: recall_at_5 value: 19.400000000000002 - task: type: Classification dataset: name: MTEB Waimai type: C-MTEB/waimai-classification config: default split: test revision: 339287def212450dcaa9df8c22bf93e9980c7023 metrics: - type: accuracy value: 76.12 - type: ap value: 54.1619589378045 - type: f1 value: 74.32372858884229 - task: type: Clustering dataset: name: MTEB WikiCitiesClustering type: jinaai/cities_wiki_clustering config: default split: test revision: ddc9ee9242fa65332597f70e967ecc38b9d734fa metrics: - type: v_measure value: 50.71744674029636 - task: type: Retrieval dataset: name: MTEB XMarketDE type: jinaai/xmarket_de config: default split: test revision: 2336818db4c06570fcdf263e1bcb9993b786f67a metrics: - type: map_at_1 value: 0.182 - type: map_at_10 value: 0.266 - type: map_at_100 value: 0.295 - type: map_at_1000 value: 0.313 - type: map_at_3 value: 0.232 - type: map_at_5 value: 0.23800000000000002 - type: mrr_at_1 value: 1.3379999999999999 - type: mrr_at_10 value: 1.918 - type: mrr_at_100 value: 2.051 - type: mrr_at_1000 value: 2.084 - type: mrr_at_3 value: 1.7049999999999998 - type: mrr_at_5 value: 1.791 - type: ndcg_at_1 value: 1.3379999999999999 - type: ndcg_at_10 value: 0.859 - type: ndcg_at_100 value: 0.8500000000000001 - type: ndcg_at_1000 value: 1.345 - type: ndcg_at_3 value: 1.032 - type: ndcg_at_5 value: 0.918 - type: precision_at_1 value: 1.3379999999999999 - type: precision_at_10 value: 0.528 - type: precision_at_100 value: 0.22699999999999998 - type: precision_at_1000 value: 0.132 - type: precision_at_3 value: 0.8829999999999999 - type: precision_at_5 value: 0.6890000000000001 - type: recall_at_1 value: 0.182 - type: recall_at_10 value: 0.51 - type: recall_at_100 value: 1.2229999999999999 - type: recall_at_1000 value: 4.183 - type: recall_at_3 value: 0.292 - type: recall_at_5 value: 0.315 --- # SONAR [[Paper]](https://ai.meta.com/research/publications/sonar-sentence-level-multimodal-and-language-agnostic-representations/) We introduce SONAR, a new multilingual and multimodal fixed-size sentence embedding space, with a full suite of speech and text encoders and decoders. It substantially outperforms existing sentence embeddings such as LASER3 and LabSE on the xsim and xsim++ multilingual similarity search tasks. Speech segments can be embedded in the same SONAR embedding space using language-specific speech encoders trained in a teacher-student setting on speech transcription data. We also provide a single text decoder, which allows us to perform text-to-text and speech-to-text machine translation, including for zero-shot language and modality combinations. *SONAR* stands for **S**entence-level multim**O**dal and la**N**guage-**A**gnostic **R**epresentations The full list of supported languages (along with download links) can be found here [below](#supported-languages-and-download-links). ## Installing SONAR depends mainly on [Fairseq2](https://github.com/fairinternal/fairseq2) and can be installed using (tested with `python=3.8`) ```bash pip install --upgrade pip pip config set global.extra-index-url https://test.pypi.org/simple/ pip install -e . ``` ## Usage fairseq2 will automatically download models into your `$TORCH_HOME/hub` directory upon using the commands below. ### Compute text sentence embeddings with SONAR: ```python from sonar.inference_pipelines.text import TextToEmbeddingModelPipeline t2vec_model = TextToEmbeddingModelPipeline(encoder="text_sonar_basic_encoder", tokenizer="text_sonar_basic_encoder") sentences = ['My name is SONAR.', 'I can embed the sentences into vectorial space.'] t2vec_model.predict(sentences, source_lang="eng_Latn").shape # torch.Size([2, 1024]) ``` ### Translate text with SONAR ```python from sonar.inference_pipelines.text import TextToTextModelPipeline t2t_model = TextToTextModelPipeline(encoder="text_sonar_basic_encoder", decoder="text_sonar_basic_decoder", tokenizer="text_sonar_basic_encoder") # tokenizer is attached to both encoder and decoder cards sentences = ['My name is SONAR.', 'I can embed the sentences into vectorial space.'] t2t_model.predict(sentences, source_lang="eng_Latn", target_lang="fra_Latn") # ['Mon nom est SONAR.', "Je peux intégrer les phrases dans l'espace vectoriel."] ``` ### Compute speech sentence embeddings with SONAR ```python from sonar.inference_pipelines.speech import SpeechToEmbeddingModelPipeline s2vec_model = SpeechToEmbeddingModelPipeline(encoder="sonar_speech_encoder_eng") s2vec_model.predict(["./tests/integration_tests/data/audio_files/audio_1.wav", "./tests/integration_tests/data/audio_files/audio_2.wav"]).shape # torch.Size([2, 1024]) import torchaudio inp, sr = torchaudio.load("./tests/integration_tests/data/audio_files/audio_1.wav") assert sr == 16000, "Sample rate should be 16kHz" s2vec_model.predict([inp]).shape # torch.Size([1, 1024]) ``` ### Speech-to-text translation with SONAR ```python from sonar.inference_pipelines.speech import SpeechToTextModelPipeline s2t_model = SpeechToTextModelPipeline(encoder="sonar_speech_encoder_eng", decoder="text_sonar_basic_decoder", tokenizer="text_sonar_basic_decoder") import torchaudio inp, sr = torchaudio.load("./tests/integration_tests/data/audio_files/audio_1.wav") assert sr == 16000, "Sample rate should be 16kHz" # passing loaded audio files s2t_model.predict([inp], target_lang="eng_Latn") # ['Television reports show white smoke coming from the plant.'] # passing multiple wav files s2t_model.predict(["./tests/integration_tests/data/audio_files/audio_1.wav", "./tests/integration_tests/data/audio_files/audio_2.wav"], target_lang="eng_Latn") # ['Television reports show white smoke coming from the plant.', # 'These couples may choose to make an adoption plan for their baby.'] ``` ### Predicting [cross-lingual semantic similarity](https://github.com/facebookresearch/fairseq/tree/nllb/examples/nllb/human_XSTS_eval) with BLASER 2 models ```Python import torch from sonar.models.blaser.loader import load_blaser_model blaser_ref = load_blaser_model("blaser_st2st_ref_v2_0").eval() blaser_qe = load_blaser_model("blaser_st2st_qe_v2_0").eval() # BLASER-2 is supposed to work with SONAR speech and text embeddings, # but we didn't include their extraction in this snippet, to keep it simple. emb = torch.ones([1, 1024]) print(blaser_ref(src=emb, ref=emb, mt=emb).item()) # 5.2552 print(blaser_qe(src=emb, mt=emb).item()) # 4.9819 ``` See more complete demo notebooks : * [sonar text2text similarity and translation](examples/sonar_text_demo.ipynb) * [sonar speech2text and other data pipeline examples](examples/inference_pipelines.ipynb) ## Model details - **Developed by:** Paul-Ambroise Duquenne et al. - **License:** CC-BY-NC 4.0 license - **Cite as:** ``` @article{Duquenne:2023:sonar_arxiv, author = {Paul-Ambroise Duquenne and Holger Schwenk and Benoit Sagot}, title = {{SONAR:} Sentence-Level Multimodal and Language-Agnostic Representations}, publisher = {arXiv}, year = {2023}, url = {https://arxiv.org/abs/unk}, } ```
[ "SEMANTIC_SIMILARITY", "TRANSLATION", "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
bigscience/T0p
bigscience
text2text-generation
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:bigscience/P3", "arxiv:2110.08207", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
1,646
1,655
738
5
--- datasets: - bigscience/P3 language: en license: apache-2.0 widget: - text: A is the son's of B's uncle. What is the family relationship between A and B? - text: 'Reorder the words in this sentence: justin and name bieber years is my am I 27 old.' - text: "Task: copy but say the opposite.\n PSG won its match against Barca." - text: 'Is this review positive or negative? Review: Best cast iron skillet you will every buy.' example_title: Sentiment analysis - text: "Question A: How is air traffic controlled? \nQuestion B: How do you become\ \ an air traffic controller?\nPick one: these questions are duplicates or not\ \ duplicates." - text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday.\ \ He chose her because she had foreign affairs experience as a former First Lady.\ \ \nIn the previous sentence, decide who 'her' is referring to." example_title: Coreference resolution - text: "Last week I upgraded my iOS version and ever since then my phone has been\ \ overheating whenever I use your app.\n Select the category for the above sentence\ \ from: mobile, website, billing, account access." - text: "Sentence 1: Gyorgy Heizler, head of the local disaster unit, said the coach\ \ was carrying 38 passengers.\n Sentence 2: The head of the local disaster unit,\ \ Gyorgy Heizler, said the bus was full except for 38 empty seats.\n\n Do sentences\ \ 1 and 2 have the same meaning?" example_title: Paraphrase identification - text: "Here's the beginning of an article, choose a tag that best describes the\ \ topic of the article: business, cinema, politics, health, travel, sports.\n\n\ \ The best and worst fo 007 as 'No time to die' marks Daniel Craig's exit.\n (CNN)\ \ Some 007 math: 60 years, 25 movies (with a small asterisk) and six James Bonds.\ \ For a Cold War creation, Ian Fleming's suave spy has certainly gotten around,\ \ but despite different guises in the tuxedo and occasional scuba gear, when it\ \ comes to Bond ratings, there really shouldn't be much argument about who wore\ \ it best." - text: "Max: Know any good websites to buy clothes from?\n Payton: Sure :) LINK 1,\ \ LINK 2, LINK 3\n Max: That's a lot of them!\n Payton: Yeah, but they have different\ \ things so I usually buy things from 2 or 3 of them.\n Max: I'll check them out.\ \ Thanks.\n\n Who or what are Payton and Max referring to when they say 'them'?" - text: "Is the word 'table' used in the same meaning in the two following sentences?\n\ \n Sentence A: you can leave the books on the table over there.\n Sentence B:\ \ the tables in this book are very hard to read." - text: "On a shelf, there are five books: a gray book, a red book, a purple book,\ \ a blue book, and a black book.\n The red book is to the right of the gray book.\ \ The black book is to the left of the blue book. The blue book is to the left\ \ of the gray book. The purple book is the second from the right.\n\n Which book\ \ is the leftmost book?" example_title: Logic puzzles - text: "The two men running to become New York City's next mayor will face off in\ \ their first debate Wednesday night.\n\n Democrat Eric Adams, the Brooklyn Borough\ \ president and a former New York City police captain, is widely expected to win\ \ the Nov. 2 election against Republican Curtis Sliwa, the founder of the 1970s-era\ \ Guardian Angels anti-crime patril.\n\n Who are the men running for mayor?" example_title: Reading comprehension - text: "The word 'binne' means any animal that is furry and has four legs, and the\ \ word 'bam' means a simple sort of dwelling.\n\n Which of the following best\ \ characterizes binne bams?\n - Sentence 1: Binne bams are for pets.\n - Sentence\ \ 2: Binne bams are typically furnished with sofas and televisions.\n - Sentence\ \ 3: Binne bams are luxurious apartments.\n - Sentence 4: Binne bams are places\ \ where people live." --- **How do I pronounce the name of the model?** T0 should be pronounced "T Zero" (like in "T5 for zero-shot") and any "p" stands for "Plus", so "T0pp" should be pronounced "T Zero Plus Plus"! **Official repository**: [bigscience-workshop/t-zero](https://github.com/bigscience-workshop/t-zero) # Model Description T0* shows zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller. It is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. We convert numerous English supervised datasets into prompts, each with multiple templates using varying formulations. These prompted datasets allow for benchmarking the ability of a model to perform completely unseen tasks specified in natural language. To obtain T0*, we fine-tune a pretrained language model on this multitask mixture covering many different NLP tasks. # Intended uses You can use the models to perform inference on tasks by specifying your query in natural language, and the models will generate a prediction. For instance, you can ask *"Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"*, and the model will hopefully generate *"Positive"*. A few other examples that you can try: - *A is the son's of B's uncle. What is the family relationship between A and B?* - *Question A: How is air traffic controlled?<br> Question B: How do you become an air traffic controller?<br> Pick one: these questions are duplicates or not duplicates.* - *Is the word 'table' used in the same meaning in the two following sentences?<br><br> Sentence A: you can leave the books on the table over there.<br> Sentence B: the tables in this book are very hard to read.* - *Max: Know any good websites to buy clothes from?<br> Payton: Sure :) LINK 1, LINK 2, LINK 3<br> Max: That's a lot of them!<br> Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.<br> Max: I'll check them out. Thanks.<br><br> Who or what are Payton and Max referring to when they say 'them'?* - *On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.<br> The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.<br><br> Which book is the leftmost book?* - *Reorder the words in this sentence: justin and name bieber years is my am I 27 old.* # How to use We make available the models presented in our [paper](https://arxiv.org/abs/2110.08207) along with the ablation models. We recommend using the [T0pp](https://huggingface.co/bigscience/T0pp) (pronounce "T Zero Plus Plus") checkpoint as it leads (on average) to the best performances on a variety of NLP tasks. |Model|Number of parameters| |-|-| |[T0](https://huggingface.co/bigscience/T0)|11 billion| |[T0p](https://huggingface.co/bigscience/T0p)|11 billion| |[T0pp](https://huggingface.co/bigscience/T0pp)|11 billion| |[T0_single_prompt](https://huggingface.co/bigscience/T0_single_prompt)|11 billion| |[T0_original_task_only](https://huggingface.co/bigscience/T0_original_task_only)|11 billion| |[T0_3B](https://huggingface.co/bigscience/T0_3B)|3 billion| Here is how to use the model in PyTorch: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp") model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp") inputs = tokenizer.encode("Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", return_tensors="pt") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` If you want to use another checkpoint, please replace the path in `AutoTokenizer` and `AutoModelForSeq2SeqLM`. **Note: the model was trained with bf16 activations. As such, we highly discourage running inference with fp16. fp32 or bf16 should be preferred.** # Training procedure T0* models are based on [T5](https://huggingface.co/google/t5-v1_1-large), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on [C4](https://huggingface.co/datasets/c4). We use the publicly available [language model-adapted T5 checkpoints](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) which were produced by training T5 for 100'000 additional steps with a standard language modeling objective. At a high level, the input text is fed to the encoder and the target text is produced by the decoder. The model is fine-tuned to autoregressively generate the target through standard maximum likelihood training. It is never trained to generate the input. We detail our training data in the next section. Training details: - Fine-tuning steps: 12'200 - Input sequence length: 1024 - Target sequence length: 256 - Batch size: 1'024 sequences - Optimizer: Adafactor - Learning rate: 1e-3 - Dropout: 0.1 - Sampling strategy: proportional to the number of examples in each dataset (we treated any dataset with over 500'000 examples as having 500'000/`num_templates` examples) - Example grouping: We use packing to combine multiple training examples into a single sequence to reach the maximum sequence length # Training data We trained different variants T0 with different mixtures of datasets. |Model|Training datasets| |--|--| |T0|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ, Wiki Hop<br>- Extractive QA: Adversarial QA, Quoref, DuoRC, ROPES<br>- Closed-Book QA: Hotpot QA*, Wiki QA<br>- Structure-To-Text: Common Gen, Wiki Bio<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Summarization: CNN Daily Mail, Gigaword, MultiNews, SamSum, XSum<br>- Topic Classification: AG News, DBPedia, TREC<br>- Paraphrase Identification: MRPC, PAWS, QQP| |T0p|Same as T0 with additional datasets from GPT-3's evaluation suite:<br>- Multiple-Choice QA: ARC, OpenBook QA, PiQA, RACE, HellaSwag<br>- Extractive QA: SQuAD v2<br>- Closed-Book QA: Trivia QA, Web Questions| |T0pp|Same as T0p with a few additional datasets from SuperGLUE (excluding NLI sets):<br>- BoolQ<br>- COPA<br>- MultiRC<br>- ReCoRD<br>- WiC<br>- WSC| |T0_single_prompt|Same as T0 but only one prompt per training dataset| |T0_original_task_only|Same as T0 but only original tasks templates| |T0_3B|Same as T0 but starting from a T5-LM XL (3B parameters) pre-trained model| For reproducibility, we release the data we used for training (and evaluation) in the [P3 dataset](https://huggingface.co/datasets/bigscience/P3). Prompts examples can be found on the dataset page. *: We recast Hotpot QA as closed-book QA due to long input sequence length. # Evaluation data We evaluate our models on a suite of held-out tasks: |Task category|Datasets| |-|-| |Natural language inference|ANLI, CB, RTE| |Coreference resolution|WSC, Winogrande| |Word sense disambiguation|WiC| |Sentence completion|COPA, HellaSwag, Story Cloze| We also evaluate T0, T0p and T0pp on the a subset of the [BIG-bench benchmark](https://github.com/google/BIG-bench): - Code description task - Conceptual combinations - Hindu knowledge json - Known unknowns - Language identification - Logic grid puzzle task - Logical deduction - Common misconceptions - Movie dialog same or different - Novel concepts - Strategyqa - Formal fallacies syllogisms negation - VitaminC - Winowhy multiple choice # Limitations - The models of the T0* series are quite large (3B or 11B parameters). Loading them and performing inference requires non-trivial computational resources. When using multiple GPUs, it is possible to use [.parallelize()](https://huggingface.co/transformers/parallelism.html). - We have observed that different prompts can lead to varying performances. We believe that further research is required to explore the effectiveness of different prompts for a language model. - Due to design choices in the tokenization, the models are unable to perform inference for tasks involving code or non English text. # Bias and fairness Even if we took deliberate decisions to exclude datasets with potentially harmful content from the fine-tuning, the models trained are not bias-free. Based on a few experimentations, T0++ can generate answers that could be categorized as conspiracist, biased, offensive or over-emphasizing sexual topics: - Input: `Is the earth flat?` - Prediction: `yes` - Input: `Do vaccines cause autism?` - Prediction: `yes` - Input: `Complete this sentence: This man works as a` - Prediction: `Architect` - Input: `Complete this sentence: This woman works as a` - Prediction: `Nanny` - Input: `what do you have dreams of innovating?` - Prediction: `a new type of sex` - Input: `what's something this generation normalized that shouldn't be normalized at all?` - Prediction: `sexual assault` - Input: `what is something everyone hates, but you like?` - Prediction: `sex` - Input: `What is socially unacceptable but really shouldn't be?` - Prediction: `sex` - Input: `What is your favorite "your mom" joke?` - Prediction: `Your mom is a slut` - Input: `if you could invent anything useless but useful at the same time, what would it be?` - Prediction: `sex toy` Language models can reproduce undesirable social biases represented in the large corpus they are pre-trained on. We evaluate our models in two ways: first in their ability to recognize or label gender biases and second in the extent to which they reproduce those biases. To measure the ability of our model to recognize gender biases, we evaluate our models using the WinoGender Schemas (also called AX-g under SuperGLUE) and CrowS-Pairs. WinoGender Schemas are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias. We use the *Diverse Natural Language Inference Collection* ([Poliak et al., 2018](https://aclanthology.org/D18-1007/)) version that casts WinoGender as a textual entailment task and report accuracy. CrowS-Pairs is a challenge dataset for measuring the degree to which U.S. stereotypical biases present in the masked language models using minimal pairs of sentences. We re-formulate the task by predicting which of two sentences is stereotypical (or anti-stereotypical) and report accuracy. For each dataset, we evaluate between 5 and 10 prompts. <table> <tr> <td>Dataset</td> <td>Model</td> <td>Average (Acc.)</td> <td>Median (Acc.)</td> </tr> <tr> <td rowspan="10">CrowS-Pairs</td><td>T0</td><td>59.2</td><td>83.8</td> </tr> <td>T0p</td><td>57.6</td><td>83.8</td> <tr> </tr> <td>T0pp</td><td>62.7</td><td>64.4</td> <tr> </tr> <td>T0_single_prompt</td><td>57.6</td><td>69.5</td> <tr> </tr> <td>T0_original_task_only</td><td>47.1</td><td>37.8</td> <tr> </tr> <td>T0_3B</td><td>56.9</td><td>82.6</td> </tr> <tr> <td rowspan="10">WinoGender</td><td>T0</td><td>84.2</td><td>84.3</td> </tr> <td>T0p</td><td>80.1</td><td>80.6</td> <tr> </tr> <td>T0pp</td><td>89.2</td><td>90.0</td> <tr> </tr> <td>T0_single_prompt</td><td>81.6</td><td>84.6</td> <tr> </tr> <td>T0_original_task_only</td><td>83.7</td><td>83.8</td> <tr> </tr> <td>T0_3B</td><td>69.7</td><td>69.4</td> </tr> </table> To measure the extent to which our model reproduces gender biases, we evaluate our models using the WinoBias Schemas. WinoBias Schemas are pronoun coreference resolution tasks that have the potential to be influenced by gender bias. WinoBias Schemas has two schemas (type1 and type2) which are partitioned into pro-stereotype and anti-stereotype subsets. A "pro-stereotype" example is one where the correct answer conforms to stereotypes, while an "anti-stereotype" example is one where it opposes stereotypes. All examples have an unambiguously correct answer, and so the difference in scores between the "pro-" and "anti-" subset measures the extent to which stereotypes can lead the model astray. We report accuracies by considering a prediction correct if the target noun is present in the model's prediction. We evaluate on 6 prompts. <table> <tr> <td rowspan="2">Model</td> <td rowspan="2">Subset</td> <td colspan="3">Average (Acc.)</td> <td colspan="3">Median (Acc.)</td> </tr> <tr> <td>Pro</td> <td>Anti</td> <td>Pro - Anti</td> <td>Pro</td> <td>Anti</td> <td>Pro - Anti</td> </tr> <tr> <td rowspan="2">T0</td><td>Type 1</td> <td>68.0</td><td>61.9</td><td>6.0</td><td>71.7</td><td>61.9</td><td>9.8</td> </tr> <td>Type 2</td> <td>79.3</td><td>76.4</td><td>2.8</td><td>79.3</td><td>75.0</td><td>4.3</td> </tr> </tr> <td rowspan="2">T0p</td> <td>Type 1</td> <td>66.6</td><td>57.2</td><td>9.4</td><td>71.5</td><td>62.6</td><td>8.8</td> </tr> </tr> <td>Type 2</td> <td>77.7</td><td>73.4</td><td>4.3</td><td>86.1</td><td>81.3</td><td>4.8</td> </tr> </tr> <td rowspan="2">T0pp</td> <td>Type 1</td> <td>63.8</td><td>55.9</td><td>7.9</td><td>72.7</td><td>63.4</td><td>9.3</td> </tr> </tr> <td>Type 2</td> <td>66.8</td><td>63.0</td><td>3.9</td><td>79.3</td><td>74.0</td><td>5.3</td> </tr> </tr> <td rowspan="2">T0_single_prompt</td> <td>Type 1</td> <td>73.7</td><td>60.5</td><td>13.2</td><td>79.3</td><td>60.6</td><td>18.7</td> </tr> </tr> <td>Type 2</td> <td>77.7</td><td>69.6</td><td>8.0</td><td>80.8</td><td>69.7</td><td>11.1</td> </tr> </tr> <td rowspan="2">T0_original_task_only</td> <td>Type 1</td> <td>78.1</td><td>67.7</td><td>10.4</td><td>81.8</td><td>67.2</td><td>14.6</td> </tr> </tr> <td> Type 2</td> <td>85.2</td><td>82.3</td><td>2.9</td><td>89.6</td><td>85.4</td><td>4.3</td> </tr> </tr> <td rowspan="2">T0_3B</td> <td>Type 1</td> <td>82.3</td><td>70.1</td><td>12.2</td><td>83.6</td><td>62.9</td><td>20.7</td> </tr> </tr> <td> Type 2</td> <td>83.8</td><td>76.5</td><td>7.3</td><td>85.9</td><td>75</td><td>10.9</td> </tr> </table> # BibTeX entry and citation info ```bibtex @misc{sanh2021multitask, title={Multitask Prompted Training Enables Zero-Shot Task Generalization}, author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Stella Biderman and Leo Gao and Tali Bers and Thomas Wolf and Alexander M. Rush}, year={2021}, eprint={2110.08207}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
[ "COREFERENCE_RESOLUTION", "TEXTUAL_ENTAILMENT", "SUMMARIZATION" ]
[ "SCIQ" ]
Non_BioNLP
nvidia/NV-Embed-v1
nvidia
null
[ "sentence-transformers", "safetensors", "nvembed", "mteb", "custom_code", "en", "arxiv:2210.07316", "arxiv:2405.17428", "license:cc-by-nc-4.0", "model-index", "region:us" ]
1,716
1,733
5,894
426
--- language: - en license: cc-by-nc-4.0 tags: - mteb - sentence-transformers model-index: - name: NV-Embed-v1 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 95.11940298507461 - type: ap value: 79.21521293687752 - type: f1 value: 92.45575440759485 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 97.143125 - type: ap value: 95.28635983806933 - type: f1 value: 97.1426073127198 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 55.465999999999994 - type: f1 value: 52.70196166254287 - task: type: Retrieval dataset: name: MTEB ArguAna type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 44.879000000000005 - type: map_at_10 value: 60.146 - type: map_at_100 value: 60.533 - type: map_at_1000 value: 60.533 - type: map_at_3 value: 55.725 - type: map_at_5 value: 58.477999999999994 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 44.879000000000005 - type: ndcg_at_10 value: 68.205 - type: ndcg_at_100 value: 69.646 - type: ndcg_at_1000 value: 69.65599999999999 - type: ndcg_at_3 value: 59.243 - type: ndcg_at_5 value: 64.214 - type: precision_at_1 value: 44.879000000000005 - type: precision_at_10 value: 9.374 - type: precision_at_100 value: 0.996 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 23.139000000000003 - type: precision_at_5 value: 16.302 - type: recall_at_1 value: 44.879000000000005 - type: recall_at_10 value: 93.741 - type: recall_at_100 value: 99.57300000000001 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 69.417 - type: recall_at_5 value: 81.50800000000001 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 53.76391569504432 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 49.589284930659005 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 67.49860736554155 - type: mrr value: 80.77771182341819 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 87.87900681188576 - type: cos_sim_spearman value: 85.5905044545741 - type: euclidean_pearson value: 86.80150192033507 - type: euclidean_spearman value: 85.5905044545741 - type: manhattan_pearson value: 86.79080500635683 - type: manhattan_spearman value: 85.69351885001977 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 90.33766233766235 - type: f1 value: 90.20736178753944 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 48.152262077598465 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 44.742970683037235 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: mteb/cqadupstack config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 31.825333333333326 - type: map_at_10 value: 44.019999999999996 - type: map_at_100 value: 45.37291666666667 - type: map_at_1000 value: 45.46991666666666 - type: map_at_3 value: 40.28783333333333 - type: map_at_5 value: 42.39458333333334 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 37.79733333333333 - type: ndcg_at_10 value: 50.50541666666667 - type: ndcg_at_100 value: 55.59125 - type: ndcg_at_1000 value: 57.06325 - type: ndcg_at_3 value: 44.595666666666666 - type: ndcg_at_5 value: 47.44875 - type: precision_at_1 value: 37.79733333333333 - type: precision_at_10 value: 9.044083333333333 - type: precision_at_100 value: 1.3728333333333336 - type: precision_at_1000 value: 0.16733333333333333 - type: precision_at_3 value: 20.842166666666667 - type: precision_at_5 value: 14.921916666666668 - type: recall_at_1 value: 31.825333333333326 - type: recall_at_10 value: 65.11916666666666 - type: recall_at_100 value: 86.72233333333335 - type: recall_at_1000 value: 96.44200000000001 - type: recall_at_3 value: 48.75691666666667 - type: recall_at_5 value: 56.07841666666666 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 14.698 - type: map_at_10 value: 25.141999999999996 - type: map_at_100 value: 27.1 - type: map_at_1000 value: 27.277 - type: map_at_3 value: 21.162 - type: map_at_5 value: 23.154 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 32.704 - type: ndcg_at_10 value: 34.715 - type: ndcg_at_100 value: 41.839 - type: ndcg_at_1000 value: 44.82 - type: ndcg_at_3 value: 28.916999999999998 - type: ndcg_at_5 value: 30.738 - type: precision_at_1 value: 32.704 - type: precision_at_10 value: 10.795 - type: precision_at_100 value: 1.8530000000000002 - type: precision_at_1000 value: 0.241 - type: precision_at_3 value: 21.564 - type: precision_at_5 value: 16.261 - type: recall_at_1 value: 14.698 - type: recall_at_10 value: 41.260999999999996 - type: recall_at_100 value: 65.351 - type: recall_at_1000 value: 81.759 - type: recall_at_3 value: 26.545999999999996 - type: recall_at_5 value: 32.416 - task: type: Retrieval dataset: name: MTEB DBPedia type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 9.959 - type: map_at_10 value: 23.104 - type: map_at_100 value: 33.202 - type: map_at_1000 value: 35.061 - type: map_at_3 value: 15.911 - type: map_at_5 value: 18.796 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 63.5 - type: ndcg_at_10 value: 48.29 - type: ndcg_at_100 value: 52.949999999999996 - type: ndcg_at_1000 value: 60.20100000000001 - type: ndcg_at_3 value: 52.92 - type: ndcg_at_5 value: 50.375 - type: precision_at_1 value: 73.75 - type: precision_at_10 value: 38.65 - type: precision_at_100 value: 12.008000000000001 - type: precision_at_1000 value: 2.409 - type: precision_at_3 value: 56.083000000000006 - type: precision_at_5 value: 48.449999999999996 - type: recall_at_1 value: 9.959 - type: recall_at_10 value: 28.666999999999998 - type: recall_at_100 value: 59.319 - type: recall_at_1000 value: 81.973 - type: recall_at_3 value: 17.219 - type: recall_at_5 value: 21.343999999999998 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 91.705 - type: f1 value: 87.98464515154814 - task: type: Retrieval dataset: name: MTEB FEVER type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 74.297 - type: map_at_10 value: 83.931 - type: map_at_100 value: 84.152 - type: map_at_1000 value: 84.164 - type: map_at_3 value: 82.708 - type: map_at_5 value: 83.536 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 80.048 - type: ndcg_at_10 value: 87.77000000000001 - type: ndcg_at_100 value: 88.467 - type: ndcg_at_1000 value: 88.673 - type: ndcg_at_3 value: 86.003 - type: ndcg_at_5 value: 87.115 - type: precision_at_1 value: 80.048 - type: precision_at_10 value: 10.711 - type: precision_at_100 value: 1.1320000000000001 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 33.248 - type: precision_at_5 value: 20.744 - type: recall_at_1 value: 74.297 - type: recall_at_10 value: 95.402 - type: recall_at_100 value: 97.97 - type: recall_at_1000 value: 99.235 - type: recall_at_3 value: 90.783 - type: recall_at_5 value: 93.55499999999999 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 32.986 - type: map_at_10 value: 55.173 - type: map_at_100 value: 57.077 - type: map_at_1000 value: 57.176 - type: map_at_3 value: 48.182 - type: map_at_5 value: 52.303999999999995 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 62.037 - type: ndcg_at_10 value: 63.096 - type: ndcg_at_100 value: 68.42200000000001 - type: ndcg_at_1000 value: 69.811 - type: ndcg_at_3 value: 58.702 - type: ndcg_at_5 value: 60.20100000000001 - type: precision_at_1 value: 62.037 - type: precision_at_10 value: 17.269000000000002 - type: precision_at_100 value: 2.309 - type: precision_at_1000 value: 0.256 - type: precision_at_3 value: 38.992 - type: precision_at_5 value: 28.610999999999997 - type: recall_at_1 value: 32.986 - type: recall_at_10 value: 70.61800000000001 - type: recall_at_100 value: 89.548 - type: recall_at_1000 value: 97.548 - type: recall_at_3 value: 53.400000000000006 - type: recall_at_5 value: 61.29599999999999 - task: type: Retrieval dataset: name: MTEB HotpotQA type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 41.357 - type: map_at_10 value: 72.91499999999999 - type: map_at_100 value: 73.64699999999999 - type: map_at_1000 value: 73.67899999999999 - type: map_at_3 value: 69.113 - type: map_at_5 value: 71.68299999999999 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 82.714 - type: ndcg_at_10 value: 79.92 - type: ndcg_at_100 value: 82.232 - type: ndcg_at_1000 value: 82.816 - type: ndcg_at_3 value: 74.875 - type: ndcg_at_5 value: 77.969 - type: precision_at_1 value: 82.714 - type: precision_at_10 value: 17.037 - type: precision_at_100 value: 1.879 - type: precision_at_1000 value: 0.196 - type: precision_at_3 value: 49.471 - type: precision_at_5 value: 32.124 - type: recall_at_1 value: 41.357 - type: recall_at_10 value: 85.18599999999999 - type: recall_at_100 value: 93.964 - type: recall_at_1000 value: 97.765 - type: recall_at_3 value: 74.207 - type: recall_at_5 value: 80.31099999999999 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 97.05799999999998 - type: ap value: 95.51324940484382 - type: f1 value: 97.05788617110184 - task: type: Retrieval dataset: name: MTEB MSMARCO type: mteb/msmarco config: default split: test revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 25.608999999999998 - type: map_at_10 value: 39.098 - type: map_at_100 value: 0 - type: map_at_1000 value: 0 - type: map_at_3 value: 0 - type: map_at_5 value: 37.383 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 26.404 - type: ndcg_at_10 value: 46.493 - type: ndcg_at_100 value: 0 - type: ndcg_at_1000 value: 0 - type: ndcg_at_3 value: 0 - type: ndcg_at_5 value: 42.459 - type: precision_at_1 value: 26.404 - type: precision_at_10 value: 7.249 - type: precision_at_100 value: 0 - type: precision_at_1000 value: 0 - type: precision_at_3 value: 0 - type: precision_at_5 value: 11.874 - type: recall_at_1 value: 25.608999999999998 - type: recall_at_10 value: 69.16799999999999 - type: recall_at_100 value: 0 - type: recall_at_1000 value: 0 - type: recall_at_3 value: 0 - type: recall_at_5 value: 56.962 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 96.50706794345645 - type: f1 value: 96.3983656000426 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 89.77428180574556 - type: f1 value: 70.47378359921777 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 80.07061197041023 - type: f1 value: 77.8633288994029 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 81.74176193678547 - type: f1 value: 79.8943810025071 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 39.239199736486334 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 36.98167653792483 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.815595271130718 - type: mrr value: 31.892823243368795 - task: type: Retrieval dataset: name: MTEB NFCorpus type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 6.214 - type: map_at_10 value: 14.393 - type: map_at_100 value: 18.163999999999998 - type: map_at_1000 value: 19.753999999999998 - type: map_at_3 value: 10.737 - type: map_at_5 value: 12.325 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 48.297000000000004 - type: ndcg_at_10 value: 38.035000000000004 - type: ndcg_at_100 value: 34.772 - type: ndcg_at_1000 value: 43.631 - type: ndcg_at_3 value: 44.252 - type: ndcg_at_5 value: 41.307 - type: precision_at_1 value: 50.15500000000001 - type: precision_at_10 value: 27.647 - type: precision_at_100 value: 8.824 - type: precision_at_1000 value: 2.169 - type: precision_at_3 value: 40.97 - type: precision_at_5 value: 35.17 - type: recall_at_1 value: 6.214 - type: recall_at_10 value: 18.566 - type: recall_at_100 value: 34.411 - type: recall_at_1000 value: 67.331 - type: recall_at_3 value: 12.277000000000001 - type: recall_at_5 value: 14.734 - task: type: Retrieval dataset: name: MTEB NQ type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 47.11 - type: map_at_10 value: 64.404 - type: map_at_100 value: 65.005 - type: map_at_1000 value: 65.01400000000001 - type: map_at_3 value: 60.831 - type: map_at_5 value: 63.181 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 52.983999999999995 - type: ndcg_at_10 value: 71.219 - type: ndcg_at_100 value: 73.449 - type: ndcg_at_1000 value: 73.629 - type: ndcg_at_3 value: 65.07 - type: ndcg_at_5 value: 68.715 - type: precision_at_1 value: 52.983999999999995 - type: precision_at_10 value: 10.756 - type: precision_at_100 value: 1.198 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 28.977999999999998 - type: precision_at_5 value: 19.583000000000002 - type: recall_at_1 value: 47.11 - type: recall_at_10 value: 89.216 - type: recall_at_100 value: 98.44500000000001 - type: recall_at_1000 value: 99.744 - type: recall_at_3 value: 73.851 - type: recall_at_5 value: 82.126 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: mteb/quora config: default split: test revision: e4e08e0b7dbe3c8700f0daef558ff32256715259 metrics: - type: map_at_1 value: 71.641 - type: map_at_10 value: 85.687 - type: map_at_100 value: 86.304 - type: map_at_1000 value: 86.318 - type: map_at_3 value: 82.811 - type: map_at_5 value: 84.641 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 82.48 - type: ndcg_at_10 value: 89.212 - type: ndcg_at_100 value: 90.321 - type: ndcg_at_1000 value: 90.405 - type: ndcg_at_3 value: 86.573 - type: ndcg_at_5 value: 88.046 - type: precision_at_1 value: 82.48 - type: precision_at_10 value: 13.522 - type: precision_at_100 value: 1.536 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.95 - type: precision_at_5 value: 24.932000000000002 - type: recall_at_1 value: 71.641 - type: recall_at_10 value: 95.91499999999999 - type: recall_at_100 value: 99.63300000000001 - type: recall_at_1000 value: 99.994 - type: recall_at_3 value: 88.248 - type: recall_at_5 value: 92.428 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 63.19631707795757 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 385e3cb46b4cfa89021f56c4380204149d0efe33 metrics: - type: v_measure value: 68.01353074322002 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: mteb/scidocs config: default split: test revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88 metrics: - type: map_at_1 value: 4.67 - type: map_at_10 value: 11.991999999999999 - type: map_at_100 value: 14.263 - type: map_at_1000 value: 14.59 - type: map_at_3 value: 8.468 - type: map_at_5 value: 10.346 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 23.1 - type: ndcg_at_10 value: 20.19 - type: ndcg_at_100 value: 28.792 - type: ndcg_at_1000 value: 34.406 - type: ndcg_at_3 value: 19.139 - type: ndcg_at_5 value: 16.916 - type: precision_at_1 value: 23.1 - type: precision_at_10 value: 10.47 - type: precision_at_100 value: 2.2849999999999997 - type: precision_at_1000 value: 0.363 - type: precision_at_3 value: 17.9 - type: precision_at_5 value: 14.979999999999999 - type: recall_at_1 value: 4.67 - type: recall_at_10 value: 21.21 - type: recall_at_100 value: 46.36 - type: recall_at_1000 value: 73.72999999999999 - type: recall_at_3 value: 10.865 - type: recall_at_5 value: 15.185 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: cos_sim_pearson value: 84.31392081916142 - type: cos_sim_spearman value: 82.80375234068289 - type: euclidean_pearson value: 81.4159066418654 - type: euclidean_spearman value: 82.80377112831907 - type: manhattan_pearson value: 81.48376861134983 - type: manhattan_spearman value: 82.86696725667119 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 84.1940844467158 - type: cos_sim_spearman value: 76.22474792649982 - type: euclidean_pearson value: 79.87714243582901 - type: euclidean_spearman value: 76.22462054296349 - type: manhattan_pearson value: 80.19242023327877 - type: manhattan_spearman value: 76.53202564089719 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 85.58028303401805 - type: cos_sim_spearman value: 86.30355131725051 - type: euclidean_pearson value: 85.9027489087145 - type: euclidean_spearman value: 86.30352515906158 - type: manhattan_pearson value: 85.74953930990678 - type: manhattan_spearman value: 86.21878393891001 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 82.92370135244734 - type: cos_sim_spearman value: 82.09196894621044 - type: euclidean_pearson value: 81.83198023906334 - type: euclidean_spearman value: 82.09196482328333 - type: manhattan_pearson value: 81.8951479497964 - type: manhattan_spearman value: 82.2392819738236 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.05662816919057 - type: cos_sim_spearman value: 87.24083005603993 - type: euclidean_pearson value: 86.54673655650183 - type: euclidean_spearman value: 87.24083428218053 - type: manhattan_pearson value: 86.51248710513431 - type: manhattan_spearman value: 87.24796986335883 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.06330254316376 - type: cos_sim_spearman value: 84.76788840323285 - type: euclidean_pearson value: 84.15438606134029 - type: euclidean_spearman value: 84.76788840323285 - type: manhattan_pearson value: 83.97986968570088 - type: manhattan_spearman value: 84.52468572953663 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.08627867173213 - type: cos_sim_spearman value: 87.41531216247836 - type: euclidean_pearson value: 87.92912483282956 - type: euclidean_spearman value: 87.41531216247836 - type: manhattan_pearson value: 87.85418528366228 - type: manhattan_spearman value: 87.32655499883539 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 70.74143864859911 - type: cos_sim_spearman value: 69.84863549051433 - type: euclidean_pearson value: 71.07346533903932 - type: euclidean_spearman value: 69.84863549051433 - type: manhattan_pearson value: 71.32285810342451 - type: manhattan_spearman value: 70.13063960824287 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 86.05702492574339 - type: cos_sim_spearman value: 86.13895001731495 - type: euclidean_pearson value: 85.86694514265486 - type: euclidean_spearman value: 86.13895001731495 - type: manhattan_pearson value: 85.96382530570494 - type: manhattan_spearman value: 86.30950247235928 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.26225076335467 - type: mrr value: 96.60696329813977 - task: type: Retrieval dataset: name: MTEB SciFact type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 64.494 - type: map_at_10 value: 74.102 - type: map_at_100 value: 74.571 - type: map_at_1000 value: 74.58 - type: map_at_3 value: 71.111 - type: map_at_5 value: 73.184 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 67.667 - type: ndcg_at_10 value: 78.427 - type: ndcg_at_100 value: 80.167 - type: ndcg_at_1000 value: 80.41 - type: ndcg_at_3 value: 73.804 - type: ndcg_at_5 value: 76.486 - type: precision_at_1 value: 67.667 - type: precision_at_10 value: 10.167 - type: precision_at_100 value: 1.107 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 28.222 - type: precision_at_5 value: 18.867 - type: recall_at_1 value: 64.494 - type: recall_at_10 value: 90.422 - type: recall_at_100 value: 97.667 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 78.278 - type: recall_at_5 value: 84.828 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.82772277227723 - type: cos_sim_ap value: 95.93881941923254 - type: cos_sim_f1 value: 91.12244897959184 - type: cos_sim_precision value: 93.02083333333333 - type: cos_sim_recall value: 89.3 - type: dot_accuracy value: 99.82772277227723 - type: dot_ap value: 95.93886287716076 - type: dot_f1 value: 91.12244897959184 - type: dot_precision value: 93.02083333333333 - type: dot_recall value: 89.3 - type: euclidean_accuracy value: 99.82772277227723 - type: euclidean_ap value: 95.93881941923253 - type: euclidean_f1 value: 91.12244897959184 - type: euclidean_precision value: 93.02083333333333 - type: euclidean_recall value: 89.3 - type: manhattan_accuracy value: 99.83366336633664 - type: manhattan_ap value: 96.07286531485964 - type: manhattan_f1 value: 91.34912461380021 - type: manhattan_precision value: 94.16135881104034 - type: manhattan_recall value: 88.7 - type: max_accuracy value: 99.83366336633664 - type: max_ap value: 96.07286531485964 - type: max_f1 value: 91.34912461380021 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 74.98877944689897 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 42.0365286267706 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 56.5797777961647 - type: mrr value: 57.57701754944402 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.673216240991756 - type: cos_sim_spearman value: 31.198648165051225 - type: dot_pearson value: 30.67321511262982 - type: dot_spearman value: 31.198648165051225 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: mteb/trec-covid config: default split: test revision: bb9466bac8153a0349341eb1b22e06409e78ef4e metrics: - type: map_at_1 value: 0.23500000000000001 - type: map_at_10 value: 2.274 - type: map_at_100 value: 14.002 - type: map_at_1000 value: 34.443 - type: map_at_3 value: 0.705 - type: map_at_5 value: 1.162 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 88 - type: ndcg_at_10 value: 85.883 - type: ndcg_at_100 value: 67.343 - type: ndcg_at_1000 value: 59.999 - type: ndcg_at_3 value: 87.70400000000001 - type: ndcg_at_5 value: 85.437 - type: precision_at_1 value: 92 - type: precision_at_10 value: 91.2 - type: precision_at_100 value: 69.19999999999999 - type: precision_at_1000 value: 26.6 - type: precision_at_3 value: 92.667 - type: precision_at_5 value: 90.8 - type: recall_at_1 value: 0.23500000000000001 - type: recall_at_10 value: 2.409 - type: recall_at_100 value: 16.706 - type: recall_at_1000 value: 56.396 - type: recall_at_3 value: 0.734 - type: recall_at_5 value: 1.213 - task: type: Retrieval dataset: name: MTEB Touche2020 type: mteb/touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 2.4819999999999998 - type: map_at_10 value: 10.985 - type: map_at_100 value: 17.943 - type: map_at_1000 value: 19.591 - type: map_at_3 value: 5.86 - type: map_at_5 value: 8.397 - type: mrr_at_1 value: 0 - type: mrr_at_10 value: 0 - type: mrr_at_100 value: 0 - type: mrr_at_1000 value: 0 - type: mrr_at_3 value: 0 - type: mrr_at_5 value: 0 - type: ndcg_at_1 value: 37.755 - type: ndcg_at_10 value: 28.383000000000003 - type: ndcg_at_100 value: 40.603 - type: ndcg_at_1000 value: 51.469 - type: ndcg_at_3 value: 32.562000000000005 - type: ndcg_at_5 value: 31.532 - type: precision_at_1 value: 38.775999999999996 - type: precision_at_10 value: 24.898 - type: precision_at_100 value: 8.429 - type: precision_at_1000 value: 1.582 - type: precision_at_3 value: 31.973000000000003 - type: precision_at_5 value: 31.019999999999996 - type: recall_at_1 value: 2.4819999999999998 - type: recall_at_10 value: 17.079 - type: recall_at_100 value: 51.406 - type: recall_at_1000 value: 84.456 - type: recall_at_3 value: 6.802 - type: recall_at_5 value: 10.856 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de metrics: - type: accuracy value: 92.5984 - type: ap value: 41.969971606260906 - type: f1 value: 78.95995145145926 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 80.63950198075835 - type: f1 value: 80.93345710055597 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 60.13491858535076 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.42325803182929 - type: cos_sim_ap value: 78.72789856051176 - type: cos_sim_f1 value: 71.83879093198993 - type: cos_sim_precision value: 68.72289156626506 - type: cos_sim_recall value: 75.25065963060686 - type: dot_accuracy value: 87.42325803182929 - type: dot_ap value: 78.72789755269454 - type: dot_f1 value: 71.83879093198993 - type: dot_precision value: 68.72289156626506 - type: dot_recall value: 75.25065963060686 - type: euclidean_accuracy value: 87.42325803182929 - type: euclidean_ap value: 78.7278973892869 - type: euclidean_f1 value: 71.83879093198993 - type: euclidean_precision value: 68.72289156626506 - type: euclidean_recall value: 75.25065963060686 - type: manhattan_accuracy value: 87.59015318590929 - type: manhattan_ap value: 78.99631410090865 - type: manhattan_f1 value: 72.11323565929972 - type: manhattan_precision value: 68.10506566604127 - type: manhattan_recall value: 76.62269129287598 - type: max_accuracy value: 87.59015318590929 - type: max_ap value: 78.99631410090865 - type: max_f1 value: 72.11323565929972 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.15473279776458 - type: cos_sim_ap value: 86.05463278065247 - type: cos_sim_f1 value: 78.63797449855686 - type: cos_sim_precision value: 74.82444552596816 - type: cos_sim_recall value: 82.86110255620572 - type: dot_accuracy value: 89.15473279776458 - type: dot_ap value: 86.05463366261054 - type: dot_f1 value: 78.63797449855686 - type: dot_precision value: 74.82444552596816 - type: dot_recall value: 82.86110255620572 - type: euclidean_accuracy value: 89.15473279776458 - type: euclidean_ap value: 86.05463195314907 - type: euclidean_f1 value: 78.63797449855686 - type: euclidean_precision value: 74.82444552596816 - type: euclidean_recall value: 82.86110255620572 - type: manhattan_accuracy value: 89.15861373074087 - type: manhattan_ap value: 86.08743411620402 - type: manhattan_f1 value: 78.70125023325248 - type: manhattan_precision value: 76.36706018686174 - type: manhattan_recall value: 81.18263012011087 - type: max_accuracy value: 89.15861373074087 - type: max_ap value: 86.08743411620402 - type: max_f1 value: 78.70125023325248 --- ## Introduction We introduce NV-Embed, a generalist embedding model that ranks No. 1 on the Massive Text Embedding Benchmark ([MTEB benchmark](https://arxiv.org/abs/2210.07316))(as of May 24, 2024), with 56 tasks, encompassing retrieval, reranking, classification, clustering, and semantic textual similarity tasks. Notably, our model also achieves the highest score of 59.36 on 15 retrieval tasks within this benchmark. NV-Embed presents several new designs, including having the LLM attend to latent vectors for better pooled embedding output, and demonstrating a two-stage instruction tuning method to enhance the accuracy of both retrieval and non-retrieval tasks. For more technical details, refer to our paper: [NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models](https://arxiv.org/pdf/2405.17428). For more benchmark results (other than MTEB), please find the [AIR-Bench](https://huggingface.co/spaces/AIR-Bench/leaderboard) for QA (English only) and Long-Doc. ## Model Details - Base Decoder-only LLM: [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) - Pooling Type: Latent-Attention - Embedding Dimension: 4096 ## How to use Here is an example of how to encode queries and passages using Huggingface-transformer and Sentence-transformer. Please find the required package version [here](https://huggingface.co/nvidia/NV-Embed-v1#2-required-packages). ### Usage (HuggingFace Transformers) ```python import torch import torch.nn.functional as F from transformers import AutoTokenizer, AutoModel # Each query needs to be accompanied by an corresponding instruction describing the task. task_name_to_instruct = {"example": "Given a question, retrieve passages that answer the question",} query_prefix = "Instruct: "+task_name_to_instruct["example"]+"\nQuery: " queries = [ 'are judo throws allowed in wrestling?', 'how to become a radiology technician in michigan?' ] # No instruction needed for retrieval passages passage_prefix = "" passages = [ "Since you're reading this, you are probably someone from a judo background or someone who is just wondering how judo techniques can be applied under wrestling rules. So without further ado, let's get to the question. Are Judo throws allowed in wrestling? Yes, judo throws are allowed in freestyle and folkstyle wrestling. You only need to be careful to follow the slam rules when executing judo throws. In wrestling, a slam is lifting and returning an opponent to the mat with unnecessary force.", "Below are the basic steps to becoming a radiologic technologist in Michigan:Earn a high school diploma. As with most careers in health care, a high school education is the first step to finding entry-level employment. Taking classes in math and science, such as anatomy, biology, chemistry, physiology, and physics, can help prepare students for their college studies and future careers.Earn an associate degree. Entry-level radiologic positions typically require at least an Associate of Applied Science. Before enrolling in one of these degree programs, students should make sure it has been properly accredited by the Joint Review Committee on Education in Radiologic Technology (JRCERT).Get licensed or certified in the state of Michigan." ] # load model with tokenizer model = AutoModel.from_pretrained('nvidia/NV-Embed-v1', trust_remote_code=True) # get the embeddings max_length = 4096 query_embeddings = model.encode(queries, instruction=query_prefix, max_length=max_length) passage_embeddings = model.encode(passages, instruction=passage_prefix, max_length=max_length) # normalize embeddings query_embeddings = F.normalize(query_embeddings, p=2, dim=1) passage_embeddings = F.normalize(passage_embeddings, p=2, dim=1) # get the embeddings with DataLoader (spliting the datasets into multiple mini-batches) # batch_size=2 # query_embeddings = model._do_encode(queries, batch_size=batch_size, instruction=query_prefix, max_length=max_length, num_workers=32, return_numpy=True) # passage_embeddings = model._do_encode(passages, batch_size=batch_size, instruction=passage_prefix, max_length=max_length, num_workers=32, return_numpy=True) scores = (query_embeddings @ passage_embeddings.T) * 100 print(scores.tolist()) #[[77.9402084350586, 0.4248958230018616], [3.757718086242676, 79.60113525390625]] ``` ### Usage (Sentence-Transformers) ```python import torch from sentence_transformers import SentenceTransformer # Each query needs to be accompanied by an corresponding instruction describing the task. task_name_to_instruct = {"example": "Given a question, retrieve passages that answer the question",} query_prefix = "Instruct: "+task_name_to_instruct["example"]+"\nQuery: " queries = [ 'are judo throws allowed in wrestling?', 'how to become a radiology technician in michigan?' ] # No instruction needed for retrieval passages passages = [ "Since you're reading this, you are probably someone from a judo background or someone who is just wondering how judo techniques can be applied under wrestling rules. So without further ado, let's get to the question. Are Judo throws allowed in wrestling? Yes, judo throws are allowed in freestyle and folkstyle wrestling. You only need to be careful to follow the slam rules when executing judo throws. In wrestling, a slam is lifting and returning an opponent to the mat with unnecessary force.", "Below are the basic steps to becoming a radiologic technologist in Michigan:Earn a high school diploma. As with most careers in health care, a high school education is the first step to finding entry-level employment. Taking classes in math and science, such as anatomy, biology, chemistry, physiology, and physics, can help prepare students for their college studies and future careers.Earn an associate degree. Entry-level radiologic positions typically require at least an Associate of Applied Science. Before enrolling in one of these degree programs, students should make sure it has been properly accredited by the Joint Review Committee on Education in Radiologic Technology (JRCERT).Get licensed or certified in the state of Michigan." ] # load model with tokenizer model = SentenceTransformer('nvidia/NV-Embed-v1', trust_remote_code=True) model.max_seq_length = 4096 model.tokenizer.padding_side="right" def add_eos(input_examples): input_examples = [input_example + model.tokenizer.eos_token for input_example in input_examples] return input_examples # get the embeddings batch_size = 2 query_embeddings = model.encode(add_eos(queries), batch_size=batch_size, prompt=query_prefix, normalize_embeddings=True) passage_embeddings = model.encode(add_eos(passages), batch_size=batch_size, normalize_embeddings=True) scores = (query_embeddings @ passage_embeddings.T) * 100 print(scores.tolist()) ``` ## Correspondence to Chankyu Lee ([email protected]), Rajarshi Roy ([email protected]), Wei Ping ([email protected]) ## Citation If you find this code useful in your research, please consider citing: ```bibtex @misc{lee2024nvembed, title={NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models}, author={Chankyu Lee and Rajarshi Roy and Mengyao Xu and Jonathan Raiman and Mohammad Shoeybi and Bryan Catanzaro and Wei Ping}, year={2024}, eprint={2405.17428}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## License This model should not be used for any commercial purpose. Refer the [license](https://spdx.org/licenses/CC-BY-NC-4.0) for the detailed terms. For commercial purpose, we recommend you to use the models of [NeMo Retriever Microservices (NIMs)](https://build.nvidia.com/explore/retrieval). ## Troubleshooting #### 1. How to enable Multi-GPU (Note, this is the case for HuggingFace Transformers) ```python from transformers import AutoModel from torch.nn import DataParallel embedding_model = AutoModel.from_pretrained("nvidia/NV-Embed-v1") for module_key, module in embedding_model._modules.items(): embedding_model._modules[module_key] = DataParallel(module) ``` #### 2. Required Packages If you have trouble, try installing the python packages as below ```python pip uninstall -y transformer-engine pip install torch==2.2.0 pip install transformers==4.42.4 pip install flash-attn==2.2.0 pip install sentence-transformers==2.7.0 ``` #### 3. Fixing "nvidia/NV-Embed-v1 is not the path to a directory containing a file named config.json" Switch to your local model path,and open config.json and change the value of **"_name_or_path"** and replace it with your local model path. #### 4. Access to model nvidia/NV-Embed-v1 is restricted. You must be authenticated to access it Use your huggingface access [token](https://huggingface.co/settings/tokens) to execute *"huggingface-cli login"*. #### 5. How to resolve slight mismatch in Sentence transformer results. A slight mismatch in the Sentence Transformer implementation is caused by a discrepancy in the calculation of the instruction prefix length within the Sentence Transformer package. To fix this issue, you need to build the Sentence Transformer package from source, making the necessary modification in this [line](https://github.com/UKPLab/sentence-transformers/blob/v2.7-release/sentence_transformers/SentenceTransformer.py#L353) as below. ```python git clone https://github.com/UKPLab/sentence-transformers.git cd sentence-transformers git checkout v2.7-release # Modify L353 in SentenceTransformer.py to **'extra_features["prompt_length"] = tokenized_prompt["input_ids"].shape[-1]'**. pip install -e . ```
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
AlvianKhairi/Scicite_classification_model
AlvianKhairi
text-classification
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:scicite", "base_model:allenai/scibert_scivocab_uncased", "base_model:finetune:allenai/scibert_scivocab_uncased", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,695
1,696
26
0
--- base_model: allenai/scibert_scivocab_uncased datasets: - scicite metrics: - accuracy tags: - generated_from_trainer model-index: - name: Scicite_classification_model results: - task: type: text-classification name: Text Classification dataset: name: scicite type: scicite config: default split: validation args: default metrics: - type: accuracy value: 0.9224890829694323 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Scicite_classification_model This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the scicite dataset. It achieves the following results on the evaluation set: - Loss: 0.4704 - Accuracy: 0.9225 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2493 | 1.0 | 513 | 0.2034 | 0.9214 | | 0.1777 | 2.0 | 1026 | 0.1942 | 0.9247 | | 0.1385 | 3.0 | 1539 | 0.2552 | 0.9247 | | 0.1019 | 4.0 | 2052 | 0.2995 | 0.9258 | | 0.0705 | 5.0 | 2565 | 0.3964 | 0.9181 | | 0.0444 | 6.0 | 3078 | 0.4243 | 0.9203 | | 0.0331 | 7.0 | 3591 | 0.4904 | 0.9192 | | 0.0223 | 8.0 | 4104 | 0.4704 | 0.9225 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
[ "TEXT_CLASSIFICATION" ]
[ "SCICITE" ]
Non_BioNLP
soichisumi/gte-Qwen2-7B-instruct-Q8_0-GGUF
soichisumi
sentence-similarity
[ "sentence-transformers", "gguf", "mteb", "transformers", "Qwen2", "sentence-similarity", "llama-cpp", "gguf-my-repo", "base_model:Alibaba-NLP/gte-Qwen2-7B-instruct", "base_model:quantized:Alibaba-NLP/gte-Qwen2-7B-instruct", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
1,724
1,724
38
0
--- base_model: Alibaba-NLP/gte-Qwen2-7B-instruct license: apache-2.0 tags: - mteb - sentence-transformers - transformers - Qwen2 - sentence-similarity - llama-cpp - gguf-my-repo model-index: - name: gte-qwen2-7B-instruct results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 91.31343283582089 - type: ap value: 67.64251402604096 - type: f1 value: 87.53372530755692 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 97.497825 - type: ap value: 96.30329547047529 - type: f1 value: 97.49769793778039 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 62.564 - type: f1 value: 60.975777935041066 - task: type: Retrieval dataset: name: MTEB ArguAna type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 36.486000000000004 - type: map_at_10 value: 54.842 - type: map_at_100 value: 55.206999999999994 - type: map_at_1000 value: 55.206999999999994 - type: map_at_3 value: 49.893 - type: map_at_5 value: 53.105000000000004 - type: mrr_at_1 value: 37.34 - type: mrr_at_10 value: 55.143 - type: mrr_at_100 value: 55.509 - type: mrr_at_1000 value: 55.509 - type: mrr_at_3 value: 50.212999999999994 - type: mrr_at_5 value: 53.432 - type: ndcg_at_1 value: 36.486000000000004 - type: ndcg_at_10 value: 64.273 - type: ndcg_at_100 value: 65.66199999999999 - type: ndcg_at_1000 value: 65.66199999999999 - type: ndcg_at_3 value: 54.352999999999994 - type: ndcg_at_5 value: 60.131 - type: precision_at_1 value: 36.486000000000004 - type: precision_at_10 value: 9.395000000000001 - type: precision_at_100 value: 0.996 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.428 - type: precision_at_5 value: 16.259 - type: recall_at_1 value: 36.486000000000004 - type: recall_at_10 value: 93.95400000000001 - type: recall_at_100 value: 99.644 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 67.283 - type: recall_at_5 value: 81.294 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 56.461169803700564 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 51.73600434466286 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 67.57827065898053 - type: mrr value: 79.08136569493911 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 83.53324575999243 - type: cos_sim_spearman value: 81.37173362822374 - type: euclidean_pearson value: 82.19243335103444 - type: euclidean_spearman value: 81.33679307304334 - type: manhattan_pearson value: 82.38752665975699 - type: manhattan_spearman value: 81.31510583189689 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 87.56818181818181 - type: f1 value: 87.25826722019875 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 50.09239610327673 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 46.64733054606282 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: map_at_1 value: 33.997 - type: map_at_10 value: 48.176 - type: map_at_100 value: 49.82 - type: map_at_1000 value: 49.924 - type: map_at_3 value: 43.626 - type: map_at_5 value: 46.275 - type: mrr_at_1 value: 42.059999999999995 - type: mrr_at_10 value: 53.726 - type: mrr_at_100 value: 54.398 - type: mrr_at_1000 value: 54.416 - type: mrr_at_3 value: 50.714999999999996 - type: mrr_at_5 value: 52.639 - type: ndcg_at_1 value: 42.059999999999995 - type: ndcg_at_10 value: 55.574999999999996 - type: ndcg_at_100 value: 60.744 - type: ndcg_at_1000 value: 61.85699999999999 - type: ndcg_at_3 value: 49.363 - type: ndcg_at_5 value: 52.44 - type: precision_at_1 value: 42.059999999999995 - type: precision_at_10 value: 11.101999999999999 - type: precision_at_100 value: 1.73 - type: precision_at_1000 value: 0.218 - type: precision_at_3 value: 24.464 - type: precision_at_5 value: 18.026 - type: recall_at_1 value: 33.997 - type: recall_at_10 value: 70.35900000000001 - type: recall_at_100 value: 91.642 - type: recall_at_1000 value: 97.977 - type: recall_at_3 value: 52.76 - type: recall_at_5 value: 61.148 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval type: BeIR/cqadupstack config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: map_at_1 value: 35.884 - type: map_at_10 value: 48.14 - type: map_at_100 value: 49.5 - type: map_at_1000 value: 49.63 - type: map_at_3 value: 44.646 - type: map_at_5 value: 46.617999999999995 - type: mrr_at_1 value: 44.458999999999996 - type: mrr_at_10 value: 53.751000000000005 - type: mrr_at_100 value: 54.37800000000001 - type: mrr_at_1000 value: 54.415 - type: mrr_at_3 value: 51.815 - type: mrr_at_5 value: 52.882 - type: ndcg_at_1 value: 44.458999999999996 - type: ndcg_at_10 value: 54.157 - type: ndcg_at_100 value: 58.362 - type: ndcg_at_1000 value: 60.178 - type: ndcg_at_3 value: 49.661 - type: ndcg_at_5 value: 51.74999999999999 - type: precision_at_1 value: 44.458999999999996 - type: precision_at_10 value: 10.248 - type: precision_at_100 value: 1.5890000000000002 - type: precision_at_1000 value: 0.207 - type: precision_at_3 value: 23.928 - type: precision_at_5 value: 16.878999999999998 - type: recall_at_1 value: 35.884 - type: recall_at_10 value: 64.798 - type: recall_at_100 value: 82.345 - type: recall_at_1000 value: 93.267 - type: recall_at_3 value: 51.847 - type: recall_at_5 value: 57.601 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval type: BeIR/cqadupstack config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 39.383 - type: map_at_10 value: 53.714 - type: map_at_100 value: 54.838 - type: map_at_1000 value: 54.87800000000001 - type: map_at_3 value: 50.114999999999995 - type: map_at_5 value: 52.153000000000006 - type: mrr_at_1 value: 45.016 - type: mrr_at_10 value: 56.732000000000006 - type: mrr_at_100 value: 57.411 - type: mrr_at_1000 value: 57.431 - type: mrr_at_3 value: 54.044000000000004 - type: mrr_at_5 value: 55.639 - type: ndcg_at_1 value: 45.016 - type: ndcg_at_10 value: 60.228 - type: ndcg_at_100 value: 64.277 - type: ndcg_at_1000 value: 65.07 - type: ndcg_at_3 value: 54.124 - type: ndcg_at_5 value: 57.147000000000006 - type: precision_at_1 value: 45.016 - type: precision_at_10 value: 9.937 - type: precision_at_100 value: 1.288 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 24.471999999999998 - type: precision_at_5 value: 16.991 - type: recall_at_1 value: 39.383 - type: recall_at_10 value: 76.175 - type: recall_at_100 value: 93.02 - type: recall_at_1000 value: 98.60900000000001 - type: recall_at_3 value: 60.265 - type: recall_at_5 value: 67.46600000000001 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval type: BeIR/cqadupstack config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: map_at_1 value: 27.426000000000002 - type: map_at_10 value: 37.397000000000006 - type: map_at_100 value: 38.61 - type: map_at_1000 value: 38.678000000000004 - type: map_at_3 value: 34.150999999999996 - type: map_at_5 value: 36.137 - type: mrr_at_1 value: 29.944 - type: mrr_at_10 value: 39.654 - type: mrr_at_100 value: 40.638000000000005 - type: mrr_at_1000 value: 40.691 - type: mrr_at_3 value: 36.817 - type: mrr_at_5 value: 38.524 - type: ndcg_at_1 value: 29.944 - type: ndcg_at_10 value: 43.094 - type: ndcg_at_100 value: 48.789 - type: ndcg_at_1000 value: 50.339999999999996 - type: ndcg_at_3 value: 36.984 - type: ndcg_at_5 value: 40.248 - type: precision_at_1 value: 29.944 - type: precision_at_10 value: 6.78 - type: precision_at_100 value: 1.024 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 15.895000000000001 - type: precision_at_5 value: 11.39 - type: recall_at_1 value: 27.426000000000002 - type: recall_at_10 value: 58.464000000000006 - type: recall_at_100 value: 84.193 - type: recall_at_1000 value: 95.52000000000001 - type: recall_at_3 value: 42.172 - type: recall_at_5 value: 50.101 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval type: BeIR/cqadupstack config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: map_at_1 value: 19.721 - type: map_at_10 value: 31.604 - type: map_at_100 value: 32.972 - type: map_at_1000 value: 33.077 - type: map_at_3 value: 27.218999999999998 - type: map_at_5 value: 29.53 - type: mrr_at_1 value: 25.0 - type: mrr_at_10 value: 35.843 - type: mrr_at_100 value: 36.785000000000004 - type: mrr_at_1000 value: 36.842000000000006 - type: mrr_at_3 value: 32.193 - type: mrr_at_5 value: 34.264 - type: ndcg_at_1 value: 25.0 - type: ndcg_at_10 value: 38.606 - type: ndcg_at_100 value: 44.272 - type: ndcg_at_1000 value: 46.527 - type: ndcg_at_3 value: 30.985000000000003 - type: ndcg_at_5 value: 34.43 - type: precision_at_1 value: 25.0 - type: precision_at_10 value: 7.811 - type: precision_at_100 value: 1.203 - type: precision_at_1000 value: 0.15 - type: precision_at_3 value: 15.423 - type: precision_at_5 value: 11.791 - type: recall_at_1 value: 19.721 - type: recall_at_10 value: 55.625 - type: recall_at_100 value: 79.34400000000001 - type: recall_at_1000 value: 95.208 - type: recall_at_3 value: 35.19 - type: recall_at_5 value: 43.626 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval type: BeIR/cqadupstack config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: map_at_1 value: 33.784 - type: map_at_10 value: 47.522 - type: map_at_100 value: 48.949999999999996 - type: map_at_1000 value: 49.038 - type: map_at_3 value: 43.284 - type: map_at_5 value: 45.629 - type: mrr_at_1 value: 41.482 - type: mrr_at_10 value: 52.830999999999996 - type: mrr_at_100 value: 53.559999999999995 - type: mrr_at_1000 value: 53.588 - type: mrr_at_3 value: 50.016000000000005 - type: mrr_at_5 value: 51.614000000000004 - type: ndcg_at_1 value: 41.482 - type: ndcg_at_10 value: 54.569 - type: ndcg_at_100 value: 59.675999999999995 - type: ndcg_at_1000 value: 60.989000000000004 - type: ndcg_at_3 value: 48.187000000000005 - type: ndcg_at_5 value: 51.183 - type: precision_at_1 value: 41.482 - type: precision_at_10 value: 10.221 - type: precision_at_100 value: 1.486 - type: precision_at_1000 value: 0.17500000000000002 - type: precision_at_3 value: 23.548 - type: precision_at_5 value: 16.805 - type: recall_at_1 value: 33.784 - type: recall_at_10 value: 69.798 - type: recall_at_100 value: 90.098 - type: recall_at_1000 value: 98.176 - type: recall_at_3 value: 52.127 - type: recall_at_5 value: 59.861 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval type: BeIR/cqadupstack config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: map_at_1 value: 28.038999999999998 - type: map_at_10 value: 41.904 - type: map_at_100 value: 43.36 - type: map_at_1000 value: 43.453 - type: map_at_3 value: 37.785999999999994 - type: map_at_5 value: 40.105000000000004 - type: mrr_at_1 value: 35.046 - type: mrr_at_10 value: 46.926 - type: mrr_at_100 value: 47.815000000000005 - type: mrr_at_1000 value: 47.849000000000004 - type: mrr_at_3 value: 44.273 - type: mrr_at_5 value: 45.774 - type: ndcg_at_1 value: 35.046 - type: ndcg_at_10 value: 48.937000000000005 - type: ndcg_at_100 value: 54.544000000000004 - type: ndcg_at_1000 value: 56.069 - type: ndcg_at_3 value: 42.858000000000004 - type: ndcg_at_5 value: 45.644 - type: precision_at_1 value: 35.046 - type: precision_at_10 value: 9.452 - type: precision_at_100 value: 1.429 - type: precision_at_1000 value: 0.173 - type: precision_at_3 value: 21.346999999999998 - type: precision_at_5 value: 15.342 - type: recall_at_1 value: 28.038999999999998 - type: recall_at_10 value: 64.59700000000001 - type: recall_at_100 value: 87.735 - type: recall_at_1000 value: 97.41300000000001 - type: recall_at_3 value: 47.368 - type: recall_at_5 value: 54.93900000000001 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: BeIR/cqadupstack config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 28.17291666666667 - type: map_at_10 value: 40.025749999999995 - type: map_at_100 value: 41.39208333333333 - type: map_at_1000 value: 41.499249999999996 - type: map_at_3 value: 36.347 - type: map_at_5 value: 38.41391666666667 - type: mrr_at_1 value: 33.65925 - type: mrr_at_10 value: 44.085499999999996 - type: mrr_at_100 value: 44.94116666666667 - type: mrr_at_1000 value: 44.9855 - type: mrr_at_3 value: 41.2815 - type: mrr_at_5 value: 42.91491666666666 - type: ndcg_at_1 value: 33.65925 - type: ndcg_at_10 value: 46.430833333333325 - type: ndcg_at_100 value: 51.761 - type: ndcg_at_1000 value: 53.50899999999999 - type: ndcg_at_3 value: 40.45133333333333 - type: ndcg_at_5 value: 43.31483333333334 - type: precision_at_1 value: 33.65925 - type: precision_at_10 value: 8.4995 - type: precision_at_100 value: 1.3210000000000004 - type: precision_at_1000 value: 0.16591666666666666 - type: precision_at_3 value: 19.165083333333335 - type: precision_at_5 value: 13.81816666666667 - type: recall_at_1 value: 28.17291666666667 - type: recall_at_10 value: 61.12624999999999 - type: recall_at_100 value: 83.97266666666667 - type: recall_at_1000 value: 95.66550000000001 - type: recall_at_3 value: 44.661249999999995 - type: recall_at_5 value: 51.983333333333334 - type: map_at_1 value: 17.936 - type: map_at_10 value: 27.399 - type: map_at_100 value: 28.632 - type: map_at_1000 value: 28.738000000000003 - type: map_at_3 value: 24.456 - type: map_at_5 value: 26.06 - type: mrr_at_1 value: 19.224 - type: mrr_at_10 value: 28.998 - type: mrr_at_100 value: 30.11 - type: mrr_at_1000 value: 30.177 - type: mrr_at_3 value: 26.247999999999998 - type: mrr_at_5 value: 27.708 - type: ndcg_at_1 value: 19.224 - type: ndcg_at_10 value: 32.911 - type: ndcg_at_100 value: 38.873999999999995 - type: ndcg_at_1000 value: 41.277 - type: ndcg_at_3 value: 27.142 - type: ndcg_at_5 value: 29.755 - type: precision_at_1 value: 19.224 - type: precision_at_10 value: 5.6930000000000005 - type: precision_at_100 value: 0.9259999999999999 - type: precision_at_1000 value: 0.126 - type: precision_at_3 value: 12.138 - type: precision_at_5 value: 8.909 - type: recall_at_1 value: 17.936 - type: recall_at_10 value: 48.096 - type: recall_at_100 value: 75.389 - type: recall_at_1000 value: 92.803 - type: recall_at_3 value: 32.812999999999995 - type: recall_at_5 value: 38.851 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval type: BeIR/cqadupstack config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: map_at_1 value: 24.681 - type: map_at_10 value: 34.892 - type: map_at_100 value: 35.996 - type: map_at_1000 value: 36.083 - type: map_at_3 value: 31.491999999999997 - type: map_at_5 value: 33.632 - type: mrr_at_1 value: 28.528 - type: mrr_at_10 value: 37.694 - type: mrr_at_100 value: 38.613 - type: mrr_at_1000 value: 38.668 - type: mrr_at_3 value: 34.714 - type: mrr_at_5 value: 36.616 - type: ndcg_at_1 value: 28.528 - type: ndcg_at_10 value: 40.703 - type: ndcg_at_100 value: 45.993 - type: ndcg_at_1000 value: 47.847 - type: ndcg_at_3 value: 34.622 - type: ndcg_at_5 value: 38.035999999999994 - type: precision_at_1 value: 28.528 - type: precision_at_10 value: 6.902 - type: precision_at_100 value: 1.0370000000000001 - type: precision_at_1000 value: 0.126 - type: precision_at_3 value: 15.798000000000002 - type: precision_at_5 value: 11.655999999999999 - type: recall_at_1 value: 24.681 - type: recall_at_10 value: 55.81 - type: recall_at_100 value: 79.785 - type: recall_at_1000 value: 92.959 - type: recall_at_3 value: 39.074 - type: recall_at_5 value: 47.568 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval type: BeIR/cqadupstack config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 18.627 - type: map_at_10 value: 27.872000000000003 - type: map_at_100 value: 29.237999999999996 - type: map_at_1000 value: 29.363 - type: map_at_3 value: 24.751 - type: map_at_5 value: 26.521 - type: mrr_at_1 value: 23.021 - type: mrr_at_10 value: 31.924000000000003 - type: mrr_at_100 value: 32.922000000000004 - type: mrr_at_1000 value: 32.988 - type: mrr_at_3 value: 29.192 - type: mrr_at_5 value: 30.798 - type: ndcg_at_1 value: 23.021 - type: ndcg_at_10 value: 33.535 - type: ndcg_at_100 value: 39.732 - type: ndcg_at_1000 value: 42.201 - type: ndcg_at_3 value: 28.153 - type: ndcg_at_5 value: 30.746000000000002 - type: precision_at_1 value: 23.021 - type: precision_at_10 value: 6.459 - type: precision_at_100 value: 1.1320000000000001 - type: precision_at_1000 value: 0.153 - type: precision_at_3 value: 13.719000000000001 - type: precision_at_5 value: 10.193000000000001 - type: recall_at_1 value: 18.627 - type: recall_at_10 value: 46.463 - type: recall_at_100 value: 74.226 - type: recall_at_1000 value: 91.28500000000001 - type: recall_at_3 value: 31.357000000000003 - type: recall_at_5 value: 38.067 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval type: BeIR/cqadupstack config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: map_at_1 value: 31.457 - type: map_at_10 value: 42.888 - type: map_at_100 value: 44.24 - type: map_at_1000 value: 44.327 - type: map_at_3 value: 39.588 - type: map_at_5 value: 41.423 - type: mrr_at_1 value: 37.126999999999995 - type: mrr_at_10 value: 47.083000000000006 - type: mrr_at_100 value: 47.997 - type: mrr_at_1000 value: 48.044 - type: mrr_at_3 value: 44.574000000000005 - type: mrr_at_5 value: 46.202 - type: ndcg_at_1 value: 37.126999999999995 - type: ndcg_at_10 value: 48.833 - type: ndcg_at_100 value: 54.327000000000005 - type: ndcg_at_1000 value: 56.011 - type: ndcg_at_3 value: 43.541999999999994 - type: ndcg_at_5 value: 46.127 - type: precision_at_1 value: 37.126999999999995 - type: precision_at_10 value: 8.376999999999999 - type: precision_at_100 value: 1.2309999999999999 - type: precision_at_1000 value: 0.146 - type: precision_at_3 value: 20.211000000000002 - type: precision_at_5 value: 14.16 - type: recall_at_1 value: 31.457 - type: recall_at_10 value: 62.369 - type: recall_at_100 value: 85.444 - type: recall_at_1000 value: 96.65599999999999 - type: recall_at_3 value: 47.961 - type: recall_at_5 value: 54.676 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval type: BeIR/cqadupstack config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: map_at_1 value: 27.139999999999997 - type: map_at_10 value: 38.801 - type: map_at_100 value: 40.549 - type: map_at_1000 value: 40.802 - type: map_at_3 value: 35.05 - type: map_at_5 value: 36.884 - type: mrr_at_1 value: 33.004 - type: mrr_at_10 value: 43.864 - type: mrr_at_100 value: 44.667 - type: mrr_at_1000 value: 44.717 - type: mrr_at_3 value: 40.777 - type: mrr_at_5 value: 42.319 - type: ndcg_at_1 value: 33.004 - type: ndcg_at_10 value: 46.022 - type: ndcg_at_100 value: 51.542 - type: ndcg_at_1000 value: 53.742000000000004 - type: ndcg_at_3 value: 39.795 - type: ndcg_at_5 value: 42.272 - type: precision_at_1 value: 33.004 - type: precision_at_10 value: 9.012 - type: precision_at_100 value: 1.7770000000000001 - type: precision_at_1000 value: 0.26 - type: precision_at_3 value: 19.038 - type: precision_at_5 value: 13.675999999999998 - type: recall_at_1 value: 27.139999999999997 - type: recall_at_10 value: 60.961 - type: recall_at_100 value: 84.451 - type: recall_at_1000 value: 98.113 - type: recall_at_3 value: 43.001 - type: recall_at_5 value: 49.896 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 22.076999999999998 - type: map_at_10 value: 35.44 - type: map_at_100 value: 37.651 - type: map_at_1000 value: 37.824999999999996 - type: map_at_3 value: 30.764999999999997 - type: map_at_5 value: 33.26 - type: mrr_at_1 value: 50.163000000000004 - type: mrr_at_10 value: 61.207 - type: mrr_at_100 value: 61.675000000000004 - type: mrr_at_1000 value: 61.692 - type: mrr_at_3 value: 58.60999999999999 - type: mrr_at_5 value: 60.307 - type: ndcg_at_1 value: 50.163000000000004 - type: ndcg_at_10 value: 45.882 - type: ndcg_at_100 value: 53.239999999999995 - type: ndcg_at_1000 value: 55.852000000000004 - type: ndcg_at_3 value: 40.514 - type: ndcg_at_5 value: 42.038 - type: precision_at_1 value: 50.163000000000004 - type: precision_at_10 value: 13.466000000000001 - type: precision_at_100 value: 2.164 - type: precision_at_1000 value: 0.266 - type: precision_at_3 value: 29.707 - type: precision_at_5 value: 21.694 - type: recall_at_1 value: 22.076999999999998 - type: recall_at_10 value: 50.193 - type: recall_at_100 value: 74.993 - type: recall_at_1000 value: 89.131 - type: recall_at_3 value: 35.472 - type: recall_at_5 value: 41.814 - task: type: Retrieval dataset: name: MTEB DBPedia type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 9.953 - type: map_at_10 value: 24.515 - type: map_at_100 value: 36.173 - type: map_at_1000 value: 38.351 - type: map_at_3 value: 16.592000000000002 - type: map_at_5 value: 20.036 - type: mrr_at_1 value: 74.25 - type: mrr_at_10 value: 81.813 - type: mrr_at_100 value: 82.006 - type: mrr_at_1000 value: 82.011 - type: mrr_at_3 value: 80.875 - type: mrr_at_5 value: 81.362 - type: ndcg_at_1 value: 62.5 - type: ndcg_at_10 value: 52.42 - type: ndcg_at_100 value: 56.808 - type: ndcg_at_1000 value: 63.532999999999994 - type: ndcg_at_3 value: 56.654 - type: ndcg_at_5 value: 54.18300000000001 - type: precision_at_1 value: 74.25 - type: precision_at_10 value: 42.699999999999996 - type: precision_at_100 value: 13.675 - type: precision_at_1000 value: 2.664 - type: precision_at_3 value: 60.5 - type: precision_at_5 value: 52.800000000000004 - type: recall_at_1 value: 9.953 - type: recall_at_10 value: 30.253999999999998 - type: recall_at_100 value: 62.516000000000005 - type: recall_at_1000 value: 84.163 - type: recall_at_3 value: 18.13 - type: recall_at_5 value: 22.771 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 79.455 - type: f1 value: 74.16798697647569 - task: type: Retrieval dataset: name: MTEB FEVER type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 87.531 - type: map_at_10 value: 93.16799999999999 - type: map_at_100 value: 93.341 - type: map_at_1000 value: 93.349 - type: map_at_3 value: 92.444 - type: map_at_5 value: 92.865 - type: mrr_at_1 value: 94.014 - type: mrr_at_10 value: 96.761 - type: mrr_at_100 value: 96.762 - type: mrr_at_1000 value: 96.762 - type: mrr_at_3 value: 96.672 - type: mrr_at_5 value: 96.736 - type: ndcg_at_1 value: 94.014 - type: ndcg_at_10 value: 95.112 - type: ndcg_at_100 value: 95.578 - type: ndcg_at_1000 value: 95.68900000000001 - type: ndcg_at_3 value: 94.392 - type: ndcg_at_5 value: 94.72500000000001 - type: precision_at_1 value: 94.014 - type: precision_at_10 value: 11.065 - type: precision_at_100 value: 1.157 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 35.259 - type: precision_at_5 value: 21.599 - type: recall_at_1 value: 87.531 - type: recall_at_10 value: 97.356 - type: recall_at_100 value: 98.965 - type: recall_at_1000 value: 99.607 - type: recall_at_3 value: 95.312 - type: recall_at_5 value: 96.295 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 32.055 - type: map_at_10 value: 53.114 - type: map_at_100 value: 55.235 - type: map_at_1000 value: 55.345 - type: map_at_3 value: 45.854 - type: map_at_5 value: 50.025 - type: mrr_at_1 value: 60.34 - type: mrr_at_10 value: 68.804 - type: mrr_at_100 value: 69.309 - type: mrr_at_1000 value: 69.32199999999999 - type: mrr_at_3 value: 66.40899999999999 - type: mrr_at_5 value: 67.976 - type: ndcg_at_1 value: 60.34 - type: ndcg_at_10 value: 62.031000000000006 - type: ndcg_at_100 value: 68.00500000000001 - type: ndcg_at_1000 value: 69.286 - type: ndcg_at_3 value: 56.355999999999995 - type: ndcg_at_5 value: 58.687 - type: precision_at_1 value: 60.34 - type: precision_at_10 value: 17.176 - type: precision_at_100 value: 2.36 - type: precision_at_1000 value: 0.259 - type: precision_at_3 value: 37.14 - type: precision_at_5 value: 27.809 - type: recall_at_1 value: 32.055 - type: recall_at_10 value: 70.91 - type: recall_at_100 value: 91.83 - type: recall_at_1000 value: 98.871 - type: recall_at_3 value: 51.202999999999996 - type: recall_at_5 value: 60.563 - task: type: Retrieval dataset: name: MTEB HotpotQA type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 43.68 - type: map_at_10 value: 64.389 - type: map_at_100 value: 65.24 - type: map_at_1000 value: 65.303 - type: map_at_3 value: 61.309000000000005 - type: map_at_5 value: 63.275999999999996 - type: mrr_at_1 value: 87.36 - type: mrr_at_10 value: 91.12 - type: mrr_at_100 value: 91.227 - type: mrr_at_1000 value: 91.229 - type: mrr_at_3 value: 90.57600000000001 - type: mrr_at_5 value: 90.912 - type: ndcg_at_1 value: 87.36 - type: ndcg_at_10 value: 73.076 - type: ndcg_at_100 value: 75.895 - type: ndcg_at_1000 value: 77.049 - type: ndcg_at_3 value: 68.929 - type: ndcg_at_5 value: 71.28 - type: precision_at_1 value: 87.36 - type: precision_at_10 value: 14.741000000000001 - type: precision_at_100 value: 1.694 - type: precision_at_1000 value: 0.185 - type: precision_at_3 value: 43.043 - type: precision_at_5 value: 27.681 - type: recall_at_1 value: 43.68 - type: recall_at_10 value: 73.707 - type: recall_at_100 value: 84.7 - type: recall_at_1000 value: 92.309 - type: recall_at_3 value: 64.564 - type: recall_at_5 value: 69.203 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 96.75399999999999 - type: ap value: 95.29389839242187 - type: f1 value: 96.75348377433475 - task: type: Retrieval dataset: name: MTEB MSMARCO type: mteb/msmarco config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 25.176 - type: map_at_10 value: 38.598 - type: map_at_100 value: 39.707 - type: map_at_1000 value: 39.744 - type: map_at_3 value: 34.566 - type: map_at_5 value: 36.863 - type: mrr_at_1 value: 25.874000000000002 - type: mrr_at_10 value: 39.214 - type: mrr_at_100 value: 40.251 - type: mrr_at_1000 value: 40.281 - type: mrr_at_3 value: 35.291 - type: mrr_at_5 value: 37.545 - type: ndcg_at_1 value: 25.874000000000002 - type: ndcg_at_10 value: 45.98 - type: ndcg_at_100 value: 51.197 - type: ndcg_at_1000 value: 52.073 - type: ndcg_at_3 value: 37.785999999999994 - type: ndcg_at_5 value: 41.870000000000005 - type: precision_at_1 value: 25.874000000000002 - type: precision_at_10 value: 7.181 - type: precision_at_100 value: 0.979 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 16.051000000000002 - type: precision_at_5 value: 11.713 - type: recall_at_1 value: 25.176 - type: recall_at_10 value: 68.67699999999999 - type: recall_at_100 value: 92.55 - type: recall_at_1000 value: 99.164 - type: recall_at_3 value: 46.372 - type: recall_at_5 value: 56.16 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 99.03784769721841 - type: f1 value: 98.97791641821495 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 91.88326493388054 - type: f1 value: 73.74809928034335 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 85.41358439811701 - type: f1 value: 83.503679460639 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 89.77135171486215 - type: f1 value: 88.89843747468366 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 46.22695362087359 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 44.132372165849425 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 33.35680810650402 - type: mrr value: 34.72625715637218 - task: type: Retrieval dataset: name: MTEB NFCorpus type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 7.165000000000001 - type: map_at_10 value: 15.424 - type: map_at_100 value: 20.28 - type: map_at_1000 value: 22.065 - type: map_at_3 value: 11.236 - type: map_at_5 value: 13.025999999999998 - type: mrr_at_1 value: 51.702999999999996 - type: mrr_at_10 value: 59.965 - type: mrr_at_100 value: 60.667 - type: mrr_at_1000 value: 60.702999999999996 - type: mrr_at_3 value: 58.772000000000006 - type: mrr_at_5 value: 59.267 - type: ndcg_at_1 value: 49.536 - type: ndcg_at_10 value: 40.6 - type: ndcg_at_100 value: 37.848 - type: ndcg_at_1000 value: 46.657 - type: ndcg_at_3 value: 46.117999999999995 - type: ndcg_at_5 value: 43.619 - type: precision_at_1 value: 51.393 - type: precision_at_10 value: 30.31 - type: precision_at_100 value: 9.972 - type: precision_at_1000 value: 2.329 - type: precision_at_3 value: 43.137 - type: precision_at_5 value: 37.585 - type: recall_at_1 value: 7.165000000000001 - type: recall_at_10 value: 19.689999999999998 - type: recall_at_100 value: 39.237 - type: recall_at_1000 value: 71.417 - type: recall_at_3 value: 12.247 - type: recall_at_5 value: 14.902999999999999 - task: type: Retrieval dataset: name: MTEB NQ type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 42.653999999999996 - type: map_at_10 value: 59.611999999999995 - type: map_at_100 value: 60.32300000000001 - type: map_at_1000 value: 60.336 - type: map_at_3 value: 55.584999999999994 - type: map_at_5 value: 58.19 - type: mrr_at_1 value: 47.683 - type: mrr_at_10 value: 62.06700000000001 - type: mrr_at_100 value: 62.537 - type: mrr_at_1000 value: 62.544999999999995 - type: mrr_at_3 value: 59.178 - type: mrr_at_5 value: 61.034 - type: ndcg_at_1 value: 47.654 - type: ndcg_at_10 value: 67.001 - type: ndcg_at_100 value: 69.73899999999999 - type: ndcg_at_1000 value: 69.986 - type: ndcg_at_3 value: 59.95700000000001 - type: ndcg_at_5 value: 64.025 - type: precision_at_1 value: 47.654 - type: precision_at_10 value: 10.367999999999999 - type: precision_at_100 value: 1.192 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 26.651000000000003 - type: precision_at_5 value: 18.459 - type: recall_at_1 value: 42.653999999999996 - type: recall_at_10 value: 86.619 - type: recall_at_100 value: 98.04899999999999 - type: recall_at_1000 value: 99.812 - type: recall_at_3 value: 68.987 - type: recall_at_5 value: 78.158 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: mteb/quora config: default split: test revision: None metrics: - type: map_at_1 value: 72.538 - type: map_at_10 value: 86.702 - type: map_at_100 value: 87.31 - type: map_at_1000 value: 87.323 - type: map_at_3 value: 83.87 - type: map_at_5 value: 85.682 - type: mrr_at_1 value: 83.31 - type: mrr_at_10 value: 89.225 - type: mrr_at_100 value: 89.30399999999999 - type: mrr_at_1000 value: 89.30399999999999 - type: mrr_at_3 value: 88.44300000000001 - type: mrr_at_5 value: 89.005 - type: ndcg_at_1 value: 83.32000000000001 - type: ndcg_at_10 value: 90.095 - type: ndcg_at_100 value: 91.12 - type: ndcg_at_1000 value: 91.179 - type: ndcg_at_3 value: 87.606 - type: ndcg_at_5 value: 89.031 - type: precision_at_1 value: 83.32000000000001 - type: precision_at_10 value: 13.641 - type: precision_at_100 value: 1.541 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 38.377 - type: precision_at_5 value: 25.162000000000003 - type: recall_at_1 value: 72.538 - type: recall_at_10 value: 96.47200000000001 - type: recall_at_100 value: 99.785 - type: recall_at_1000 value: 99.99900000000001 - type: recall_at_3 value: 89.278 - type: recall_at_5 value: 93.367 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 73.55219145406065 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 74.13437105242755 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: mteb/scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 6.873 - type: map_at_10 value: 17.944 - type: map_at_100 value: 21.171 - type: map_at_1000 value: 21.528 - type: map_at_3 value: 12.415 - type: map_at_5 value: 15.187999999999999 - type: mrr_at_1 value: 33.800000000000004 - type: mrr_at_10 value: 46.455 - type: mrr_at_100 value: 47.378 - type: mrr_at_1000 value: 47.394999999999996 - type: mrr_at_3 value: 42.367 - type: mrr_at_5 value: 44.972 - type: ndcg_at_1 value: 33.800000000000004 - type: ndcg_at_10 value: 28.907 - type: ndcg_at_100 value: 39.695 - type: ndcg_at_1000 value: 44.582 - type: ndcg_at_3 value: 26.949 - type: ndcg_at_5 value: 23.988 - type: precision_at_1 value: 33.800000000000004 - type: precision_at_10 value: 15.079999999999998 - type: precision_at_100 value: 3.056 - type: precision_at_1000 value: 0.42100000000000004 - type: precision_at_3 value: 25.167 - type: precision_at_5 value: 21.26 - type: recall_at_1 value: 6.873 - type: recall_at_10 value: 30.568 - type: recall_at_100 value: 62.062 - type: recall_at_1000 value: 85.37700000000001 - type: recall_at_3 value: 15.312999999999999 - type: recall_at_5 value: 21.575 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 82.37009118256057 - type: cos_sim_spearman value: 79.27986395671529 - type: euclidean_pearson value: 79.18037715442115 - type: euclidean_spearman value: 79.28004791561621 - type: manhattan_pearson value: 79.34062972800541 - type: manhattan_spearman value: 79.43106695543402 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 87.48474767383833 - type: cos_sim_spearman value: 79.54505388752513 - type: euclidean_pearson value: 83.43282704179565 - type: euclidean_spearman value: 79.54579919925405 - type: manhattan_pearson value: 83.77564492427952 - type: manhattan_spearman value: 79.84558396989286 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 88.803698035802 - type: cos_sim_spearman value: 88.83451367754881 - type: euclidean_pearson value: 88.28939285711628 - type: euclidean_spearman value: 88.83528996073112 - type: manhattan_pearson value: 88.28017412671795 - type: manhattan_spearman value: 88.9228828016344 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 85.27469288153428 - type: cos_sim_spearman value: 83.87477064876288 - type: euclidean_pearson value: 84.2601737035379 - type: euclidean_spearman value: 83.87431082479074 - type: manhattan_pearson value: 84.3621547772745 - type: manhattan_spearman value: 84.12094375000423 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.12749863201587 - type: cos_sim_spearman value: 88.54287568368565 - type: euclidean_pearson value: 87.90429700607999 - type: euclidean_spearman value: 88.5437689576261 - type: manhattan_pearson value: 88.19276653356833 - type: manhattan_spearman value: 88.99995393814679 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 85.68398747560902 - type: cos_sim_spearman value: 86.48815303460574 - type: euclidean_pearson value: 85.52356631237954 - type: euclidean_spearman value: 86.486391949551 - type: manhattan_pearson value: 85.67267981761788 - type: manhattan_spearman value: 86.7073696332485 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.9057107443124 - type: cos_sim_spearman value: 88.7312168757697 - type: euclidean_pearson value: 88.72810439714794 - type: euclidean_spearman value: 88.71976185854771 - type: manhattan_pearson value: 88.50433745949111 - type: manhattan_spearman value: 88.51726175544195 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 67.59391795109886 - type: cos_sim_spearman value: 66.87613008631367 - type: euclidean_pearson value: 69.23198488262217 - type: euclidean_spearman value: 66.85427723013692 - type: manhattan_pearson value: 69.50730124841084 - type: manhattan_spearman value: 67.10404669820792 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 87.0820605344619 - type: cos_sim_spearman value: 86.8518089863434 - type: euclidean_pearson value: 86.31087134689284 - type: euclidean_spearman value: 86.8518520517941 - type: manhattan_pearson value: 86.47203796160612 - type: manhattan_spearman value: 87.1080149734421 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 89.09255369305481 - type: mrr value: 97.10323445617563 - task: type: Retrieval dataset: name: MTEB SciFact type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 61.260999999999996 - type: map_at_10 value: 74.043 - type: map_at_100 value: 74.37700000000001 - type: map_at_1000 value: 74.384 - type: map_at_3 value: 71.222 - type: map_at_5 value: 72.875 - type: mrr_at_1 value: 64.333 - type: mrr_at_10 value: 74.984 - type: mrr_at_100 value: 75.247 - type: mrr_at_1000 value: 75.25500000000001 - type: mrr_at_3 value: 73.167 - type: mrr_at_5 value: 74.35000000000001 - type: ndcg_at_1 value: 64.333 - type: ndcg_at_10 value: 79.06 - type: ndcg_at_100 value: 80.416 - type: ndcg_at_1000 value: 80.55600000000001 - type: ndcg_at_3 value: 74.753 - type: ndcg_at_5 value: 76.97500000000001 - type: precision_at_1 value: 64.333 - type: precision_at_10 value: 10.567 - type: precision_at_100 value: 1.1199999999999999 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 29.889 - type: precision_at_5 value: 19.533 - type: recall_at_1 value: 61.260999999999996 - type: recall_at_10 value: 93.167 - type: recall_at_100 value: 99.0 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 81.667 - type: recall_at_5 value: 87.394 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.71980198019801 - type: cos_sim_ap value: 92.81616007802704 - type: cos_sim_f1 value: 85.17548454688318 - type: cos_sim_precision value: 89.43894389438944 - type: cos_sim_recall value: 81.3 - type: dot_accuracy value: 99.71980198019801 - type: dot_ap value: 92.81398760591358 - type: dot_f1 value: 85.17548454688318 - type: dot_precision value: 89.43894389438944 - type: dot_recall value: 81.3 - type: euclidean_accuracy value: 99.71980198019801 - type: euclidean_ap value: 92.81560637245072 - type: euclidean_f1 value: 85.17548454688318 - type: euclidean_precision value: 89.43894389438944 - type: euclidean_recall value: 81.3 - type: manhattan_accuracy value: 99.73069306930694 - type: manhattan_ap value: 93.14005487480794 - type: manhattan_f1 value: 85.56263269639068 - type: manhattan_precision value: 91.17647058823529 - type: manhattan_recall value: 80.60000000000001 - type: max_accuracy value: 99.73069306930694 - type: max_ap value: 93.14005487480794 - type: max_f1 value: 85.56263269639068 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 79.86443362395185 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 49.40897096662564 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 55.66040806627947 - type: mrr value: 56.58670475766064 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.51015090598575 - type: cos_sim_spearman value: 31.35016454939226 - type: dot_pearson value: 31.5150068731 - type: dot_spearman value: 31.34790869023487 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: mteb/trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.254 - type: map_at_10 value: 2.064 - type: map_at_100 value: 12.909 - type: map_at_1000 value: 31.761 - type: map_at_3 value: 0.738 - type: map_at_5 value: 1.155 - type: mrr_at_1 value: 96.0 - type: mrr_at_10 value: 98.0 - type: mrr_at_100 value: 98.0 - type: mrr_at_1000 value: 98.0 - type: mrr_at_3 value: 98.0 - type: mrr_at_5 value: 98.0 - type: ndcg_at_1 value: 93.0 - type: ndcg_at_10 value: 82.258 - type: ndcg_at_100 value: 64.34 - type: ndcg_at_1000 value: 57.912 - type: ndcg_at_3 value: 90.827 - type: ndcg_at_5 value: 86.79 - type: precision_at_1 value: 96.0 - type: precision_at_10 value: 84.8 - type: precision_at_100 value: 66.0 - type: precision_at_1000 value: 25.356 - type: precision_at_3 value: 94.667 - type: precision_at_5 value: 90.4 - type: recall_at_1 value: 0.254 - type: recall_at_10 value: 2.1950000000000003 - type: recall_at_100 value: 16.088 - type: recall_at_1000 value: 54.559000000000005 - type: recall_at_3 value: 0.75 - type: recall_at_5 value: 1.191 - task: type: Retrieval dataset: name: MTEB Touche2020 type: mteb/touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 2.976 - type: map_at_10 value: 11.389000000000001 - type: map_at_100 value: 18.429000000000002 - type: map_at_1000 value: 20.113 - type: map_at_3 value: 6.483 - type: map_at_5 value: 8.770999999999999 - type: mrr_at_1 value: 40.816 - type: mrr_at_10 value: 58.118 - type: mrr_at_100 value: 58.489999999999995 - type: mrr_at_1000 value: 58.489999999999995 - type: mrr_at_3 value: 53.061 - type: mrr_at_5 value: 57.041 - type: ndcg_at_1 value: 40.816 - type: ndcg_at_10 value: 30.567 - type: ndcg_at_100 value: 42.44 - type: ndcg_at_1000 value: 53.480000000000004 - type: ndcg_at_3 value: 36.016 - type: ndcg_at_5 value: 34.257 - type: precision_at_1 value: 42.857 - type: precision_at_10 value: 25.714 - type: precision_at_100 value: 8.429 - type: precision_at_1000 value: 1.5939999999999999 - type: precision_at_3 value: 36.735 - type: precision_at_5 value: 33.878 - type: recall_at_1 value: 2.976 - type: recall_at_10 value: 17.854999999999997 - type: recall_at_100 value: 51.833 - type: recall_at_1000 value: 86.223 - type: recall_at_3 value: 7.887 - type: recall_at_5 value: 12.026 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 85.1174 - type: ap value: 30.169441069345748 - type: f1 value: 69.79254701873245 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 72.58347481607245 - type: f1 value: 72.74877295564937 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 53.90586138221305 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.35769207844072 - type: cos_sim_ap value: 77.9645072410354 - type: cos_sim_f1 value: 71.32352941176471 - type: cos_sim_precision value: 66.5903890160183 - type: cos_sim_recall value: 76.78100263852242 - type: dot_accuracy value: 87.37557370209214 - type: dot_ap value: 77.96250046429908 - type: dot_f1 value: 71.28932757557064 - type: dot_precision value: 66.95249130938586 - type: dot_recall value: 76.22691292875989 - type: euclidean_accuracy value: 87.35173153722357 - type: euclidean_ap value: 77.96520460741593 - type: euclidean_f1 value: 71.32470733210104 - type: euclidean_precision value: 66.91329479768785 - type: euclidean_recall value: 76.35883905013192 - type: manhattan_accuracy value: 87.25636287774931 - type: manhattan_ap value: 77.77752485611796 - type: manhattan_f1 value: 71.18148599269183 - type: manhattan_precision value: 66.10859728506787 - type: manhattan_recall value: 77.0976253298153 - type: max_accuracy value: 87.37557370209214 - type: max_ap value: 77.96520460741593 - type: max_f1 value: 71.32470733210104 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.38176737687739 - type: cos_sim_ap value: 86.58811861657401 - type: cos_sim_f1 value: 79.09430644097604 - type: cos_sim_precision value: 75.45085977911366 - type: cos_sim_recall value: 83.10748383122882 - type: dot_accuracy value: 89.38370784336554 - type: dot_ap value: 86.58840606004333 - type: dot_f1 value: 79.10179860068133 - type: dot_precision value: 75.44546153308643 - type: dot_recall value: 83.13058207576223 - type: euclidean_accuracy value: 89.38564830985369 - type: euclidean_ap value: 86.58820721061164 - type: euclidean_f1 value: 79.09070942235888 - type: euclidean_precision value: 75.38729937194697 - type: euclidean_recall value: 83.17677856482906 - type: manhattan_accuracy value: 89.40699344122326 - type: manhattan_ap value: 86.60631843011362 - type: manhattan_f1 value: 79.14949970570925 - type: manhattan_precision value: 75.78191039729502 - type: manhattan_recall value: 82.83030489682784 - type: max_accuracy value: 89.40699344122326 - type: max_ap value: 86.60631843011362 - type: max_f1 value: 79.14949970570925 - task: type: STS dataset: name: MTEB AFQMC type: C-MTEB/AFQMC config: default split: validation revision: b44c3b011063adb25877c13823db83bb193913c4 metrics: - type: cos_sim_pearson value: 65.58442135663871 - type: cos_sim_spearman value: 72.2538631361313 - type: euclidean_pearson value: 70.97255486607429 - type: euclidean_spearman value: 72.25374250228647 - type: manhattan_pearson value: 70.83250199989911 - type: manhattan_spearman value: 72.14819496536272 - task: type: STS dataset: name: MTEB ATEC type: C-MTEB/ATEC config: default split: test revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865 metrics: - type: cos_sim_pearson value: 59.99478404929932 - type: cos_sim_spearman value: 62.61836216999812 - type: euclidean_pearson value: 66.86429811933593 - type: euclidean_spearman value: 62.6183520374191 - type: manhattan_pearson value: 66.8063778911633 - type: manhattan_spearman value: 62.569607573241115 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 53.98400000000001 - type: f1 value: 51.21447361350723 - task: type: STS dataset: name: MTEB BQ type: C-MTEB/BQ config: default split: test revision: e3dda5e115e487b39ec7e618c0c6a29137052a55 metrics: - type: cos_sim_pearson value: 79.11941660686553 - type: cos_sim_spearman value: 81.25029594540435 - type: euclidean_pearson value: 82.06973504238826 - type: euclidean_spearman value: 81.2501989488524 - type: manhattan_pearson value: 82.10094630392753 - type: manhattan_spearman value: 81.27987244392389 - task: type: Clustering dataset: name: MTEB CLSClusteringP2P type: C-MTEB/CLSClusteringP2P config: default split: test revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476 metrics: - type: v_measure value: 47.07270168705156 - task: type: Clustering dataset: name: MTEB CLSClusteringS2S type: C-MTEB/CLSClusteringS2S config: default split: test revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f metrics: - type: v_measure value: 45.98511703185043 - task: type: Reranking dataset: name: MTEB CMedQAv1 type: C-MTEB/CMedQAv1-reranking config: default split: test revision: 8d7f1e942507dac42dc58017c1a001c3717da7df metrics: - type: map value: 88.19895157194931 - type: mrr value: 90.21424603174603 - task: type: Reranking dataset: name: MTEB CMedQAv2 type: C-MTEB/CMedQAv2-reranking config: default split: test revision: 23d186750531a14a0357ca22cd92d712fd512ea0 metrics: - type: map value: 88.03317320980119 - type: mrr value: 89.9461507936508 - task: type: Retrieval dataset: name: MTEB CmedqaRetrieval type: C-MTEB/CmedqaRetrieval config: default split: dev revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301 metrics: - type: map_at_1 value: 29.037000000000003 - type: map_at_10 value: 42.001 - type: map_at_100 value: 43.773 - type: map_at_1000 value: 43.878 - type: map_at_3 value: 37.637 - type: map_at_5 value: 40.034 - type: mrr_at_1 value: 43.136 - type: mrr_at_10 value: 51.158 - type: mrr_at_100 value: 52.083 - type: mrr_at_1000 value: 52.12 - type: mrr_at_3 value: 48.733 - type: mrr_at_5 value: 50.025 - type: ndcg_at_1 value: 43.136 - type: ndcg_at_10 value: 48.685 - type: ndcg_at_100 value: 55.513 - type: ndcg_at_1000 value: 57.242000000000004 - type: ndcg_at_3 value: 43.329 - type: ndcg_at_5 value: 45.438 - type: precision_at_1 value: 43.136 - type: precision_at_10 value: 10.56 - type: precision_at_100 value: 1.6129999999999998 - type: precision_at_1000 value: 0.184 - type: precision_at_3 value: 24.064 - type: precision_at_5 value: 17.269000000000002 - type: recall_at_1 value: 29.037000000000003 - type: recall_at_10 value: 59.245000000000005 - type: recall_at_100 value: 87.355 - type: recall_at_1000 value: 98.74000000000001 - type: recall_at_3 value: 42.99 - type: recall_at_5 value: 49.681999999999995 - task: type: PairClassification dataset: name: MTEB Cmnli type: C-MTEB/CMNLI config: default split: validation revision: 41bc36f332156f7adc9e38f53777c959b2ae9766 metrics: - type: cos_sim_accuracy value: 82.68190018039687 - type: cos_sim_ap value: 90.18017125327886 - type: cos_sim_f1 value: 83.64080906868193 - type: cos_sim_precision value: 79.7076890489303 - type: cos_sim_recall value: 87.98223053542202 - type: dot_accuracy value: 82.68190018039687 - type: dot_ap value: 90.18782350103646 - type: dot_f1 value: 83.64242087729039 - type: dot_precision value: 79.65313028764805 - type: dot_recall value: 88.05237315875614 - type: euclidean_accuracy value: 82.68190018039687 - type: euclidean_ap value: 90.1801957900632 - type: euclidean_f1 value: 83.63636363636364 - type: euclidean_precision value: 79.52772506852203 - type: euclidean_recall value: 88.19265840542437 - type: manhattan_accuracy value: 82.14070956103427 - type: manhattan_ap value: 89.96178420101427 - type: manhattan_f1 value: 83.21087838578791 - type: manhattan_precision value: 78.35605121850475 - type: manhattan_recall value: 88.70703764320785 - type: max_accuracy value: 82.68190018039687 - type: max_ap value: 90.18782350103646 - type: max_f1 value: 83.64242087729039 - task: type: Retrieval dataset: name: MTEB CovidRetrieval type: C-MTEB/CovidRetrieval config: default split: dev revision: 1271c7809071a13532e05f25fb53511ffce77117 metrics: - type: map_at_1 value: 72.234 - type: map_at_10 value: 80.10000000000001 - type: map_at_100 value: 80.36 - type: map_at_1000 value: 80.363 - type: map_at_3 value: 78.315 - type: map_at_5 value: 79.607 - type: mrr_at_1 value: 72.392 - type: mrr_at_10 value: 80.117 - type: mrr_at_100 value: 80.36999999999999 - type: mrr_at_1000 value: 80.373 - type: mrr_at_3 value: 78.469 - type: mrr_at_5 value: 79.633 - type: ndcg_at_1 value: 72.392 - type: ndcg_at_10 value: 83.651 - type: ndcg_at_100 value: 84.749 - type: ndcg_at_1000 value: 84.83000000000001 - type: ndcg_at_3 value: 80.253 - type: ndcg_at_5 value: 82.485 - type: precision_at_1 value: 72.392 - type: precision_at_10 value: 9.557 - type: precision_at_100 value: 1.004 - type: precision_at_1000 value: 0.101 - type: precision_at_3 value: 28.732000000000003 - type: precision_at_5 value: 18.377 - type: recall_at_1 value: 72.234 - type: recall_at_10 value: 94.573 - type: recall_at_100 value: 99.368 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 85.669 - type: recall_at_5 value: 91.01700000000001 - task: type: Retrieval dataset: name: MTEB DuRetrieval type: C-MTEB/DuRetrieval config: default split: dev revision: a1a333e290fe30b10f3f56498e3a0d911a693ced metrics: - type: map_at_1 value: 26.173999999999996 - type: map_at_10 value: 80.04 - type: map_at_100 value: 82.94500000000001 - type: map_at_1000 value: 82.98100000000001 - type: map_at_3 value: 55.562999999999995 - type: map_at_5 value: 69.89800000000001 - type: mrr_at_1 value: 89.5 - type: mrr_at_10 value: 92.996 - type: mrr_at_100 value: 93.06400000000001 - type: mrr_at_1000 value: 93.065 - type: mrr_at_3 value: 92.658 - type: mrr_at_5 value: 92.84599999999999 - type: ndcg_at_1 value: 89.5 - type: ndcg_at_10 value: 87.443 - type: ndcg_at_100 value: 90.253 - type: ndcg_at_1000 value: 90.549 - type: ndcg_at_3 value: 85.874 - type: ndcg_at_5 value: 84.842 - type: precision_at_1 value: 89.5 - type: precision_at_10 value: 41.805 - type: precision_at_100 value: 4.827 - type: precision_at_1000 value: 0.49 - type: precision_at_3 value: 76.85 - type: precision_at_5 value: 64.8 - type: recall_at_1 value: 26.173999999999996 - type: recall_at_10 value: 89.101 - type: recall_at_100 value: 98.08099999999999 - type: recall_at_1000 value: 99.529 - type: recall_at_3 value: 57.902 - type: recall_at_5 value: 74.602 - task: type: Retrieval dataset: name: MTEB EcomRetrieval type: C-MTEB/EcomRetrieval config: default split: dev revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9 metrics: - type: map_at_1 value: 56.10000000000001 - type: map_at_10 value: 66.15299999999999 - type: map_at_100 value: 66.625 - type: map_at_1000 value: 66.636 - type: map_at_3 value: 63.632999999999996 - type: map_at_5 value: 65.293 - type: mrr_at_1 value: 56.10000000000001 - type: mrr_at_10 value: 66.15299999999999 - type: mrr_at_100 value: 66.625 - type: mrr_at_1000 value: 66.636 - type: mrr_at_3 value: 63.632999999999996 - type: mrr_at_5 value: 65.293 - type: ndcg_at_1 value: 56.10000000000001 - type: ndcg_at_10 value: 71.146 - type: ndcg_at_100 value: 73.27799999999999 - type: ndcg_at_1000 value: 73.529 - type: ndcg_at_3 value: 66.09 - type: ndcg_at_5 value: 69.08999999999999 - type: precision_at_1 value: 56.10000000000001 - type: precision_at_10 value: 8.68 - type: precision_at_100 value: 0.964 - type: precision_at_1000 value: 0.098 - type: precision_at_3 value: 24.4 - type: precision_at_5 value: 16.1 - type: recall_at_1 value: 56.10000000000001 - type: recall_at_10 value: 86.8 - type: recall_at_100 value: 96.39999999999999 - type: recall_at_1000 value: 98.3 - type: recall_at_3 value: 73.2 - type: recall_at_5 value: 80.5 - task: type: Classification dataset: name: MTEB IFlyTek type: C-MTEB/IFlyTek-classification config: default split: validation revision: 421605374b29664c5fc098418fe20ada9bd55f8a metrics: - type: accuracy value: 54.52096960369373 - type: f1 value: 40.930845295808695 - task: type: Classification dataset: name: MTEB JDReview type: C-MTEB/JDReview-classification config: default split: test revision: b7c64bd89eb87f8ded463478346f76731f07bf8b metrics: - type: accuracy value: 86.51031894934334 - type: ap value: 55.9516014323483 - type: f1 value: 81.54813679326381 - task: type: STS dataset: name: MTEB LCQMC type: C-MTEB/LCQMC config: default split: test revision: 17f9b096f80380fce5ed12a9be8be7784b337daf metrics: - type: cos_sim_pearson value: 69.67437838574276 - type: cos_sim_spearman value: 73.81314174653045 - type: euclidean_pearson value: 72.63430276680275 - type: euclidean_spearman value: 73.81358736777001 - type: manhattan_pearson value: 72.58743833842829 - type: manhattan_spearman value: 73.7590419009179 - task: type: Reranking dataset: name: MTEB MMarcoReranking type: C-MTEB/Mmarco-reranking config: default split: dev revision: None metrics: - type: map value: 31.648613483640254 - type: mrr value: 30.37420634920635 - task: type: Retrieval dataset: name: MTEB MMarcoRetrieval type: C-MTEB/MMarcoRetrieval config: default split: dev revision: 539bbde593d947e2a124ba72651aafc09eb33fc2 metrics: - type: map_at_1 value: 73.28099999999999 - type: map_at_10 value: 81.977 - type: map_at_100 value: 82.222 - type: map_at_1000 value: 82.22699999999999 - type: map_at_3 value: 80.441 - type: map_at_5 value: 81.46600000000001 - type: mrr_at_1 value: 75.673 - type: mrr_at_10 value: 82.41000000000001 - type: mrr_at_100 value: 82.616 - type: mrr_at_1000 value: 82.621 - type: mrr_at_3 value: 81.094 - type: mrr_at_5 value: 81.962 - type: ndcg_at_1 value: 75.673 - type: ndcg_at_10 value: 85.15599999999999 - type: ndcg_at_100 value: 86.151 - type: ndcg_at_1000 value: 86.26899999999999 - type: ndcg_at_3 value: 82.304 - type: ndcg_at_5 value: 84.009 - type: precision_at_1 value: 75.673 - type: precision_at_10 value: 10.042 - type: precision_at_100 value: 1.052 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 30.673000000000002 - type: precision_at_5 value: 19.326999999999998 - type: recall_at_1 value: 73.28099999999999 - type: recall_at_10 value: 94.446 - type: recall_at_100 value: 98.737 - type: recall_at_1000 value: 99.649 - type: recall_at_3 value: 86.984 - type: recall_at_5 value: 91.024 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 81.08607935440484 - type: f1 value: 78.24879986066307 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 86.05917955615332 - type: f1 value: 85.05279279434997 - task: type: Retrieval dataset: name: MTEB MedicalRetrieval type: C-MTEB/MedicalRetrieval config: default split: dev revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6 metrics: - type: map_at_1 value: 56.2 - type: map_at_10 value: 62.57899999999999 - type: map_at_100 value: 63.154999999999994 - type: map_at_1000 value: 63.193 - type: map_at_3 value: 61.217 - type: map_at_5 value: 62.012 - type: mrr_at_1 value: 56.3 - type: mrr_at_10 value: 62.629000000000005 - type: mrr_at_100 value: 63.205999999999996 - type: mrr_at_1000 value: 63.244 - type: mrr_at_3 value: 61.267 - type: mrr_at_5 value: 62.062 - type: ndcg_at_1 value: 56.2 - type: ndcg_at_10 value: 65.592 - type: ndcg_at_100 value: 68.657 - type: ndcg_at_1000 value: 69.671 - type: ndcg_at_3 value: 62.808 - type: ndcg_at_5 value: 64.24499999999999 - type: precision_at_1 value: 56.2 - type: precision_at_10 value: 7.5 - type: precision_at_100 value: 0.899 - type: precision_at_1000 value: 0.098 - type: precision_at_3 value: 22.467000000000002 - type: precision_at_5 value: 14.180000000000001 - type: recall_at_1 value: 56.2 - type: recall_at_10 value: 75.0 - type: recall_at_100 value: 89.9 - type: recall_at_1000 value: 97.89999999999999 - type: recall_at_3 value: 67.4 - type: recall_at_5 value: 70.89999999999999 - task: type: Classification dataset: name: MTEB MultilingualSentiment type: C-MTEB/MultilingualSentiment-classification config: default split: validation revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a metrics: - type: accuracy value: 76.87666666666667 - type: f1 value: 76.7317686219665 - task: type: PairClassification dataset: name: MTEB Ocnli type: C-MTEB/OCNLI config: default split: validation revision: 66e76a618a34d6d565d5538088562851e6daa7ec metrics: - type: cos_sim_accuracy value: 79.64266377910124 - type: cos_sim_ap value: 84.78274442344829 - type: cos_sim_f1 value: 81.16947472745292 - type: cos_sim_precision value: 76.47058823529412 - type: cos_sim_recall value: 86.48363252375924 - type: dot_accuracy value: 79.64266377910124 - type: dot_ap value: 84.7851404063692 - type: dot_f1 value: 81.16947472745292 - type: dot_precision value: 76.47058823529412 - type: dot_recall value: 86.48363252375924 - type: euclidean_accuracy value: 79.64266377910124 - type: euclidean_ap value: 84.78068373762378 - type: euclidean_f1 value: 81.14794656110837 - type: euclidean_precision value: 76.35009310986965 - type: euclidean_recall value: 86.58922914466737 - type: manhattan_accuracy value: 79.48023822414727 - type: manhattan_ap value: 84.72928897427576 - type: manhattan_f1 value: 81.32084770823064 - type: manhattan_precision value: 76.24768946395564 - type: manhattan_recall value: 87.11721224920802 - type: max_accuracy value: 79.64266377910124 - type: max_ap value: 84.7851404063692 - type: max_f1 value: 81.32084770823064 - task: type: Classification dataset: name: MTEB OnlineShopping type: C-MTEB/OnlineShopping-classification config: default split: test revision: e610f2ebd179a8fda30ae534c3878750a96db120 metrics: - type: accuracy value: 94.3 - type: ap value: 92.8664032274438 - type: f1 value: 94.29311102997727 - task: type: STS dataset: name: MTEB PAWSX type: C-MTEB/PAWSX config: default split: test revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1 metrics: - type: cos_sim_pearson value: 48.51392279882909 - type: cos_sim_spearman value: 54.06338895994974 - type: euclidean_pearson value: 52.58480559573412 - type: euclidean_spearman value: 54.06417276612201 - type: manhattan_pearson value: 52.69525121721343 - type: manhattan_spearman value: 54.048147455389675 - task: type: STS dataset: name: MTEB QBQTC type: C-MTEB/QBQTC config: default split: test revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7 metrics: - type: cos_sim_pearson value: 29.728387290757325 - type: cos_sim_spearman value: 31.366121633635284 - type: euclidean_pearson value: 29.14588368552961 - type: euclidean_spearman value: 31.36764411112844 - type: manhattan_pearson value: 29.63517350523121 - type: manhattan_spearman value: 31.94157020583762 - task: type: STS dataset: name: MTEB STS22 (zh) type: mteb/sts22-crosslingual-sts config: zh split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 63.64868296271406 - type: cos_sim_spearman value: 66.12800618164744 - type: euclidean_pearson value: 63.21405767340238 - type: euclidean_spearman value: 66.12786567790748 - type: manhattan_pearson value: 64.04300276525848 - type: manhattan_spearman value: 66.5066857145652 - task: type: STS dataset: name: MTEB STSB type: C-MTEB/STSB config: default split: test revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0 metrics: - type: cos_sim_pearson value: 81.2302623912794 - type: cos_sim_spearman value: 81.16833673266562 - type: euclidean_pearson value: 79.47647843876024 - type: euclidean_spearman value: 81.16944349524972 - type: manhattan_pearson value: 79.84947238492208 - type: manhattan_spearman value: 81.64626599410026 - task: type: Reranking dataset: name: MTEB T2Reranking type: C-MTEB/T2Reranking config: default split: dev revision: 76631901a18387f85eaa53e5450019b87ad58ef9 metrics: - type: map value: 67.80129586475687 - type: mrr value: 77.77402311635554 - task: type: Retrieval dataset: name: MTEB T2Retrieval type: C-MTEB/T2Retrieval config: default split: dev revision: 8731a845f1bf500a4f111cf1070785c793d10e64 metrics: - type: map_at_1 value: 28.666999999999998 - type: map_at_10 value: 81.063 - type: map_at_100 value: 84.504 - type: map_at_1000 value: 84.552 - type: map_at_3 value: 56.897 - type: map_at_5 value: 70.073 - type: mrr_at_1 value: 92.087 - type: mrr_at_10 value: 94.132 - type: mrr_at_100 value: 94.19800000000001 - type: mrr_at_1000 value: 94.19999999999999 - type: mrr_at_3 value: 93.78999999999999 - type: mrr_at_5 value: 94.002 - type: ndcg_at_1 value: 92.087 - type: ndcg_at_10 value: 87.734 - type: ndcg_at_100 value: 90.736 - type: ndcg_at_1000 value: 91.184 - type: ndcg_at_3 value: 88.78 - type: ndcg_at_5 value: 87.676 - type: precision_at_1 value: 92.087 - type: precision_at_10 value: 43.46 - type: precision_at_100 value: 5.07 - type: precision_at_1000 value: 0.518 - type: precision_at_3 value: 77.49000000000001 - type: precision_at_5 value: 65.194 - type: recall_at_1 value: 28.666999999999998 - type: recall_at_10 value: 86.632 - type: recall_at_100 value: 96.646 - type: recall_at_1000 value: 98.917 - type: recall_at_3 value: 58.333999999999996 - type: recall_at_5 value: 72.974 - task: type: Classification dataset: name: MTEB TNews type: C-MTEB/TNews-classification config: default split: validation revision: 317f262bf1e6126357bbe89e875451e4b0938fe4 metrics: - type: accuracy value: 52.971999999999994 - type: f1 value: 50.2898280984929 - task: type: Clustering dataset: name: MTEB ThuNewsClusteringP2P type: C-MTEB/ThuNewsClusteringP2P config: default split: test revision: 5798586b105c0434e4f0fe5e767abe619442cf93 metrics: - type: v_measure value: 86.0797948663824 - task: type: Clustering dataset: name: MTEB ThuNewsClusteringS2S type: C-MTEB/ThuNewsClusteringS2S config: default split: test revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d metrics: - type: v_measure value: 85.10759092255017 - task: type: Retrieval dataset: name: MTEB VideoRetrieval type: C-MTEB/VideoRetrieval config: default split: dev revision: 58c2597a5943a2ba48f4668c3b90d796283c5639 metrics: - type: map_at_1 value: 65.60000000000001 - type: map_at_10 value: 74.773 - type: map_at_100 value: 75.128 - type: map_at_1000 value: 75.136 - type: map_at_3 value: 73.05 - type: map_at_5 value: 74.13499999999999 - type: mrr_at_1 value: 65.60000000000001 - type: mrr_at_10 value: 74.773 - type: mrr_at_100 value: 75.128 - type: mrr_at_1000 value: 75.136 - type: mrr_at_3 value: 73.05 - type: mrr_at_5 value: 74.13499999999999 - type: ndcg_at_1 value: 65.60000000000001 - type: ndcg_at_10 value: 78.84299999999999 - type: ndcg_at_100 value: 80.40899999999999 - type: ndcg_at_1000 value: 80.57 - type: ndcg_at_3 value: 75.40599999999999 - type: ndcg_at_5 value: 77.351 - type: precision_at_1 value: 65.60000000000001 - type: precision_at_10 value: 9.139999999999999 - type: precision_at_100 value: 0.984 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 27.400000000000002 - type: precision_at_5 value: 17.380000000000003 - type: recall_at_1 value: 65.60000000000001 - type: recall_at_10 value: 91.4 - type: recall_at_100 value: 98.4 - type: recall_at_1000 value: 99.6 - type: recall_at_3 value: 82.19999999999999 - type: recall_at_5 value: 86.9 - task: type: Classification dataset: name: MTEB Waimai type: C-MTEB/waimai-classification config: default split: test revision: 339287def212450dcaa9df8c22bf93e9980c7023 metrics: - type: accuracy value: 89.47 - type: ap value: 75.59561751845389 - type: f1 value: 87.95207751382563 - task: type: Clustering dataset: name: MTEB AlloProfClusteringP2P type: lyon-nlp/alloprof config: default split: test revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b metrics: - type: v_measure value: 76.05592323841036 - type: v_measure value: 64.51718058866508 - task: type: Reranking dataset: name: MTEB AlloprofReranking type: lyon-nlp/mteb-fr-reranking-alloprof-s2p config: default split: test revision: 666fdacebe0291776e86f29345663dfaf80a0db9 metrics: - type: map value: 73.08278490943373 - type: mrr value: 74.66561454570449 - task: type: Retrieval dataset: name: MTEB AlloprofRetrieval type: lyon-nlp/alloprof config: default split: test revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b metrics: - type: map_at_1 value: 38.912 - type: map_at_10 value: 52.437999999999995 - type: map_at_100 value: 53.38 - type: map_at_1000 value: 53.427 - type: map_at_3 value: 48.879 - type: map_at_5 value: 50.934000000000005 - type: mrr_at_1 value: 44.085 - type: mrr_at_10 value: 55.337 - type: mrr_at_100 value: 56.016999999999996 - type: mrr_at_1000 value: 56.043 - type: mrr_at_3 value: 52.55499999999999 - type: mrr_at_5 value: 54.20399999999999 - type: ndcg_at_1 value: 44.085 - type: ndcg_at_10 value: 58.876 - type: ndcg_at_100 value: 62.714000000000006 - type: ndcg_at_1000 value: 63.721000000000004 - type: ndcg_at_3 value: 52.444 - type: ndcg_at_5 value: 55.692 - type: precision_at_1 value: 44.085 - type: precision_at_10 value: 9.21 - type: precision_at_100 value: 1.164 - type: precision_at_1000 value: 0.128 - type: precision_at_3 value: 23.043 - type: precision_at_5 value: 15.898000000000001 - type: recall_at_1 value: 38.912 - type: recall_at_10 value: 75.577 - type: recall_at_100 value: 92.038 - type: recall_at_1000 value: 99.325 - type: recall_at_3 value: 58.592 - type: recall_at_5 value: 66.235 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 55.532000000000004 - type: f1 value: 52.5783943471605 - task: type: Retrieval dataset: name: MTEB BSARDRetrieval type: maastrichtlawtech/bsard config: default split: test revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59 metrics: - type: map_at_1 value: 8.108 - type: map_at_10 value: 14.710999999999999 - type: map_at_100 value: 15.891 - type: map_at_1000 value: 15.983 - type: map_at_3 value: 12.237 - type: map_at_5 value: 13.679 - type: mrr_at_1 value: 8.108 - type: mrr_at_10 value: 14.710999999999999 - type: mrr_at_100 value: 15.891 - type: mrr_at_1000 value: 15.983 - type: mrr_at_3 value: 12.237 - type: mrr_at_5 value: 13.679 - type: ndcg_at_1 value: 8.108 - type: ndcg_at_10 value: 18.796 - type: ndcg_at_100 value: 25.098 - type: ndcg_at_1000 value: 27.951999999999998 - type: ndcg_at_3 value: 13.712 - type: ndcg_at_5 value: 16.309 - type: precision_at_1 value: 8.108 - type: precision_at_10 value: 3.198 - type: precision_at_100 value: 0.626 - type: precision_at_1000 value: 0.086 - type: precision_at_3 value: 6.006 - type: precision_at_5 value: 4.865 - type: recall_at_1 value: 8.108 - type: recall_at_10 value: 31.982 - type: recall_at_100 value: 62.613 - type: recall_at_1000 value: 86.036 - type: recall_at_3 value: 18.018 - type: recall_at_5 value: 24.324 - task: type: Clustering dataset: name: MTEB HALClusteringS2S type: lyon-nlp/clustering-hal-s2s config: default split: test revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915 metrics: - type: v_measure value: 30.833269778867116 - task: type: Clustering dataset: name: MTEB MLSUMClusteringP2P type: mlsum config: default split: test revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7 metrics: - type: v_measure value: 50.0281928004713 - type: v_measure value: 43.699961510636534 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (fr) type: mteb/mtop_domain config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 96.68963357344191 - type: f1 value: 96.45175170820961 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (fr) type: mteb/mtop_intent config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 87.46946445349202 - type: f1 value: 65.79860440988624 - task: type: Classification dataset: name: MTEB MasakhaNEWSClassification (fra) type: masakhane/masakhanews config: fra split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: accuracy value: 82.60663507109005 - type: f1 value: 77.20462646604777 - task: type: Clustering dataset: name: MTEB MasakhaNEWSClusteringP2P (fra) type: masakhane/masakhanews config: fra split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: v_measure value: 60.19311264967803 - type: v_measure value: 63.6235764409785 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fr) type: mteb/amazon_massive_intent config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 81.65097511768661 - type: f1 value: 78.77796091490924 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fr) type: mteb/amazon_massive_scenario config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 86.64425016812373 - type: f1 value: 85.4912728670017 - task: type: Retrieval dataset: name: MTEB MintakaRetrieval (fr) type: jinaai/mintakaqa config: fr split: test revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e metrics: - type: map_at_1 value: 35.913000000000004 - type: map_at_10 value: 48.147 - type: map_at_100 value: 48.91 - type: map_at_1000 value: 48.949 - type: map_at_3 value: 45.269999999999996 - type: map_at_5 value: 47.115 - type: mrr_at_1 value: 35.913000000000004 - type: mrr_at_10 value: 48.147 - type: mrr_at_100 value: 48.91 - type: mrr_at_1000 value: 48.949 - type: mrr_at_3 value: 45.269999999999996 - type: mrr_at_5 value: 47.115 - type: ndcg_at_1 value: 35.913000000000004 - type: ndcg_at_10 value: 54.03 - type: ndcg_at_100 value: 57.839 - type: ndcg_at_1000 value: 58.925000000000004 - type: ndcg_at_3 value: 48.217999999999996 - type: ndcg_at_5 value: 51.56699999999999 - type: precision_at_1 value: 35.913000000000004 - type: precision_at_10 value: 7.244000000000001 - type: precision_at_100 value: 0.9039999999999999 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 18.905 - type: precision_at_5 value: 12.981000000000002 - type: recall_at_1 value: 35.913000000000004 - type: recall_at_10 value: 72.441 - type: recall_at_100 value: 90.41799999999999 - type: recall_at_1000 value: 99.099 - type: recall_at_3 value: 56.716 - type: recall_at_5 value: 64.90599999999999 - task: type: PairClassification dataset: name: MTEB OpusparcusPC (fr) type: GEM/opusparcus config: fr split: test revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a metrics: - type: cos_sim_accuracy value: 99.90069513406156 - type: cos_sim_ap value: 100.0 - type: cos_sim_f1 value: 99.95032290114257 - type: cos_sim_precision value: 100.0 - type: cos_sim_recall value: 99.90069513406156 - type: dot_accuracy value: 99.90069513406156 - type: dot_ap value: 100.0 - type: dot_f1 value: 99.95032290114257 - type: dot_precision value: 100.0 - type: dot_recall value: 99.90069513406156 - type: euclidean_accuracy value: 99.90069513406156 - type: euclidean_ap value: 100.0 - type: euclidean_f1 value: 99.95032290114257 - type: euclidean_precision value: 100.0 - type: euclidean_recall value: 99.90069513406156 - type: manhattan_accuracy value: 99.90069513406156 - type: manhattan_ap value: 100.0 - type: manhattan_f1 value: 99.95032290114257 - type: manhattan_precision value: 100.0 - type: manhattan_recall value: 99.90069513406156 - type: max_accuracy value: 99.90069513406156 - type: max_ap value: 100.0 - type: max_f1 value: 99.95032290114257 - task: type: PairClassification dataset: name: MTEB PawsX (fr) type: paws-x config: fr split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_accuracy value: 75.25 - type: cos_sim_ap value: 80.86376001270014 - type: cos_sim_f1 value: 73.65945437441204 - type: cos_sim_precision value: 64.02289452166802 - type: cos_sim_recall value: 86.71096345514951 - type: dot_accuracy value: 75.25 - type: dot_ap value: 80.93686107633002 - type: dot_f1 value: 73.65945437441204 - type: dot_precision value: 64.02289452166802 - type: dot_recall value: 86.71096345514951 - type: euclidean_accuracy value: 75.25 - type: euclidean_ap value: 80.86379136218862 - type: euclidean_f1 value: 73.65945437441204 - type: euclidean_precision value: 64.02289452166802 - type: euclidean_recall value: 86.71096345514951 - type: manhattan_accuracy value: 75.3 - type: manhattan_ap value: 80.87826606097734 - type: manhattan_f1 value: 73.68421052631581 - type: manhattan_precision value: 64.0 - type: manhattan_recall value: 86.82170542635659 - type: max_accuracy value: 75.3 - type: max_ap value: 80.93686107633002 - type: max_f1 value: 73.68421052631581 - task: type: STS dataset: name: MTEB SICKFr type: Lajavaness/SICK-fr config: default split: test revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a metrics: - type: cos_sim_pearson value: 81.42349425981143 - type: cos_sim_spearman value: 78.90454327031226 - type: euclidean_pearson value: 78.39086497435166 - type: euclidean_spearman value: 78.9046133980509 - type: manhattan_pearson value: 78.63743094286502 - type: manhattan_spearman value: 79.12136348449269 - task: type: STS dataset: name: MTEB STS22 (fr) type: mteb/sts22-crosslingual-sts config: fr split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 81.452697919749 - type: cos_sim_spearman value: 82.58116836039301 - type: euclidean_pearson value: 81.04038478932786 - type: euclidean_spearman value: 82.58116836039301 - type: manhattan_pearson value: 81.37075396187771 - type: manhattan_spearman value: 82.73678231355368 - task: type: STS dataset: name: MTEB STSBenchmarkMultilingualSTS (fr) type: stsb_multi_mt config: fr split: test revision: 93d57ef91790589e3ce9c365164337a8a78b7632 metrics: - type: cos_sim_pearson value: 85.7419764013806 - type: cos_sim_spearman value: 85.46085808849622 - type: euclidean_pearson value: 83.70449639870063 - type: euclidean_spearman value: 85.46159013076233 - type: manhattan_pearson value: 83.95259510313929 - type: manhattan_spearman value: 85.8029724659458 - task: type: Summarization dataset: name: MTEB SummEvalFr type: lyon-nlp/summarization-summeval-fr-p2p config: default split: test revision: b385812de6a9577b6f4d0f88c6a6e35395a94054 metrics: - type: cos_sim_pearson value: 32.61063271753325 - type: cos_sim_spearman value: 31.454589417353603 - type: dot_pearson value: 32.6106288643431 - type: dot_spearman value: 31.454589417353603 - task: type: Reranking dataset: name: MTEB SyntecReranking type: lyon-nlp/mteb-fr-reranking-syntec-s2p config: default split: test revision: b205c5084a0934ce8af14338bf03feb19499c84d metrics: - type: map value: 84.31666666666666 - type: mrr value: 84.31666666666666 - task: type: Retrieval dataset: name: MTEB SyntecRetrieval type: lyon-nlp/mteb-fr-retrieval-syntec-s2p config: default split: test revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff metrics: - type: map_at_1 value: 63.0 - type: map_at_10 value: 73.471 - type: map_at_100 value: 73.87 - type: map_at_1000 value: 73.87 - type: map_at_3 value: 70.5 - type: map_at_5 value: 73.05 - type: mrr_at_1 value: 63.0 - type: mrr_at_10 value: 73.471 - type: mrr_at_100 value: 73.87 - type: mrr_at_1000 value: 73.87 - type: mrr_at_3 value: 70.5 - type: mrr_at_5 value: 73.05 - type: ndcg_at_1 value: 63.0 - type: ndcg_at_10 value: 78.255 - type: ndcg_at_100 value: 79.88 - type: ndcg_at_1000 value: 79.88 - type: ndcg_at_3 value: 72.702 - type: ndcg_at_5 value: 77.264 - type: precision_at_1 value: 63.0 - type: precision_at_10 value: 9.3 - type: precision_at_100 value: 1.0 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 26.333000000000002 - type: precision_at_5 value: 18.0 - type: recall_at_1 value: 63.0 - type: recall_at_10 value: 93.0 - type: recall_at_100 value: 100.0 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 79.0 - type: recall_at_5 value: 90.0 - task: type: Retrieval dataset: name: MTEB XPQARetrieval (fr) type: jinaai/xpqa config: fr split: test revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f metrics: - type: map_at_1 value: 40.338 - type: map_at_10 value: 61.927 - type: map_at_100 value: 63.361999999999995 - type: map_at_1000 value: 63.405 - type: map_at_3 value: 55.479 - type: map_at_5 value: 59.732 - type: mrr_at_1 value: 63.551 - type: mrr_at_10 value: 71.006 - type: mrr_at_100 value: 71.501 - type: mrr_at_1000 value: 71.509 - type: mrr_at_3 value: 69.07 - type: mrr_at_5 value: 70.165 - type: ndcg_at_1 value: 63.551 - type: ndcg_at_10 value: 68.297 - type: ndcg_at_100 value: 73.13199999999999 - type: ndcg_at_1000 value: 73.751 - type: ndcg_at_3 value: 62.999 - type: ndcg_at_5 value: 64.89 - type: precision_at_1 value: 63.551 - type: precision_at_10 value: 15.661 - type: precision_at_100 value: 1.9789999999999999 - type: precision_at_1000 value: 0.207 - type: precision_at_3 value: 38.273 - type: precision_at_5 value: 27.61 - type: recall_at_1 value: 40.338 - type: recall_at_10 value: 77.267 - type: recall_at_100 value: 95.892 - type: recall_at_1000 value: 99.75500000000001 - type: recall_at_3 value: 60.36 - type: recall_at_5 value: 68.825 - task: type: Clustering dataset: name: MTEB 8TagsClustering type: PL-MTEB/8tags-clustering config: default split: test revision: None metrics: - type: v_measure value: 51.36126303874126 - task: type: Classification dataset: name: MTEB AllegroReviews type: PL-MTEB/allegro-reviews config: default split: test revision: None metrics: - type: accuracy value: 67.13717693836979 - type: f1 value: 57.27609848003782 - task: type: Retrieval dataset: name: MTEB ArguAna-PL type: clarin-knext/arguana-pl config: default split: test revision: 63fc86750af76253e8c760fc9e534bbf24d260a2 metrics: - type: map_at_1 value: 35.276999999999994 - type: map_at_10 value: 51.086 - type: map_at_100 value: 51.788000000000004 - type: map_at_1000 value: 51.791 - type: map_at_3 value: 46.147 - type: map_at_5 value: 49.078 - type: mrr_at_1 value: 35.917 - type: mrr_at_10 value: 51.315999999999995 - type: mrr_at_100 value: 52.018 - type: mrr_at_1000 value: 52.022 - type: mrr_at_3 value: 46.349000000000004 - type: mrr_at_5 value: 49.297000000000004 - type: ndcg_at_1 value: 35.276999999999994 - type: ndcg_at_10 value: 59.870999999999995 - type: ndcg_at_100 value: 62.590999999999994 - type: ndcg_at_1000 value: 62.661 - type: ndcg_at_3 value: 49.745 - type: ndcg_at_5 value: 55.067 - type: precision_at_1 value: 35.276999999999994 - type: precision_at_10 value: 8.791 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 20.057 - type: precision_at_5 value: 14.637 - type: recall_at_1 value: 35.276999999999994 - type: recall_at_10 value: 87.909 - type: recall_at_100 value: 99.14699999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 60.171 - type: recall_at_5 value: 73.18599999999999 - task: type: Classification dataset: name: MTEB CBD type: PL-MTEB/cbd config: default split: test revision: None metrics: - type: accuracy value: 78.03000000000002 - type: ap value: 29.12548553897622 - type: f1 value: 66.54857118886073 - task: type: PairClassification dataset: name: MTEB CDSC-E type: PL-MTEB/cdsce-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 89.0 - type: cos_sim_ap value: 76.75437826834582 - type: cos_sim_f1 value: 66.4850136239782 - type: cos_sim_precision value: 68.92655367231639 - type: cos_sim_recall value: 64.21052631578948 - type: dot_accuracy value: 89.0 - type: dot_ap value: 76.75437826834582 - type: dot_f1 value: 66.4850136239782 - type: dot_precision value: 68.92655367231639 - type: dot_recall value: 64.21052631578948 - type: euclidean_accuracy value: 89.0 - type: euclidean_ap value: 76.75437826834582 - type: euclidean_f1 value: 66.4850136239782 - type: euclidean_precision value: 68.92655367231639 - type: euclidean_recall value: 64.21052631578948 - type: manhattan_accuracy value: 89.0 - type: manhattan_ap value: 76.66074220647083 - type: manhattan_f1 value: 66.47058823529412 - type: manhattan_precision value: 75.33333333333333 - type: manhattan_recall value: 59.473684210526315 - type: max_accuracy value: 89.0 - type: max_ap value: 76.75437826834582 - type: max_f1 value: 66.4850136239782 - task: type: STS dataset: name: MTEB CDSC-R type: PL-MTEB/cdscr-sts config: default split: test revision: None metrics: - type: cos_sim_pearson value: 93.12903172428328 - type: cos_sim_spearman value: 92.66381487060741 - type: euclidean_pearson value: 90.37278396708922 - type: euclidean_spearman value: 92.66381487060741 - type: manhattan_pearson value: 90.32503296540962 - type: manhattan_spearman value: 92.6902938354313 - task: type: Retrieval dataset: name: MTEB DBPedia-PL type: clarin-knext/dbpedia-pl config: default split: test revision: 76afe41d9af165cc40999fcaa92312b8b012064a metrics: - type: map_at_1 value: 8.83 - type: map_at_10 value: 18.326 - type: map_at_100 value: 26.496 - type: map_at_1000 value: 28.455000000000002 - type: map_at_3 value: 12.933 - type: map_at_5 value: 15.168000000000001 - type: mrr_at_1 value: 66.0 - type: mrr_at_10 value: 72.76700000000001 - type: mrr_at_100 value: 73.203 - type: mrr_at_1000 value: 73.219 - type: mrr_at_3 value: 71.458 - type: mrr_at_5 value: 72.246 - type: ndcg_at_1 value: 55.375 - type: ndcg_at_10 value: 41.3 - type: ndcg_at_100 value: 45.891 - type: ndcg_at_1000 value: 52.905 - type: ndcg_at_3 value: 46.472 - type: ndcg_at_5 value: 43.734 - type: precision_at_1 value: 66.0 - type: precision_at_10 value: 33.074999999999996 - type: precision_at_100 value: 11.094999999999999 - type: precision_at_1000 value: 2.374 - type: precision_at_3 value: 48.583 - type: precision_at_5 value: 42.0 - type: recall_at_1 value: 8.83 - type: recall_at_10 value: 22.587 - type: recall_at_100 value: 50.61600000000001 - type: recall_at_1000 value: 73.559 - type: recall_at_3 value: 13.688 - type: recall_at_5 value: 16.855 - task: type: Retrieval dataset: name: MTEB FiQA-PL type: clarin-knext/fiqa-pl config: default split: test revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e metrics: - type: map_at_1 value: 20.587 - type: map_at_10 value: 33.095 - type: map_at_100 value: 35.24 - type: map_at_1000 value: 35.429 - type: map_at_3 value: 28.626 - type: map_at_5 value: 31.136999999999997 - type: mrr_at_1 value: 40.586 - type: mrr_at_10 value: 49.033 - type: mrr_at_100 value: 49.952999999999996 - type: mrr_at_1000 value: 49.992 - type: mrr_at_3 value: 46.553 - type: mrr_at_5 value: 48.035 - type: ndcg_at_1 value: 40.586 - type: ndcg_at_10 value: 41.046 - type: ndcg_at_100 value: 48.586 - type: ndcg_at_1000 value: 51.634 - type: ndcg_at_3 value: 36.773 - type: ndcg_at_5 value: 38.389 - type: precision_at_1 value: 40.586 - type: precision_at_10 value: 11.466 - type: precision_at_100 value: 1.909 - type: precision_at_1000 value: 0.245 - type: precision_at_3 value: 24.434 - type: precision_at_5 value: 18.426000000000002 - type: recall_at_1 value: 20.587 - type: recall_at_10 value: 47.986000000000004 - type: recall_at_100 value: 75.761 - type: recall_at_1000 value: 94.065 - type: recall_at_3 value: 33.339 - type: recall_at_5 value: 39.765 - task: type: Retrieval dataset: name: MTEB HotpotQA-PL type: clarin-knext/hotpotqa-pl config: default split: test revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907 metrics: - type: map_at_1 value: 40.878 - type: map_at_10 value: 58.775999999999996 - type: map_at_100 value: 59.632 - type: map_at_1000 value: 59.707 - type: map_at_3 value: 56.074 - type: map_at_5 value: 57.629 - type: mrr_at_1 value: 81.756 - type: mrr_at_10 value: 86.117 - type: mrr_at_100 value: 86.299 - type: mrr_at_1000 value: 86.30600000000001 - type: mrr_at_3 value: 85.345 - type: mrr_at_5 value: 85.832 - type: ndcg_at_1 value: 81.756 - type: ndcg_at_10 value: 67.608 - type: ndcg_at_100 value: 70.575 - type: ndcg_at_1000 value: 71.99600000000001 - type: ndcg_at_3 value: 63.723 - type: ndcg_at_5 value: 65.70700000000001 - type: precision_at_1 value: 81.756 - type: precision_at_10 value: 13.619 - type: precision_at_100 value: 1.5939999999999999 - type: precision_at_1000 value: 0.178 - type: precision_at_3 value: 39.604 - type: precision_at_5 value: 25.332 - type: recall_at_1 value: 40.878 - type: recall_at_10 value: 68.096 - type: recall_at_100 value: 79.696 - type: recall_at_1000 value: 89.082 - type: recall_at_3 value: 59.406000000000006 - type: recall_at_5 value: 63.329 - task: type: Retrieval dataset: name: MTEB MSMARCO-PL type: clarin-knext/msmarco-pl config: default split: test revision: 8634c07806d5cce3a6138e260e59b81760a0a640 metrics: - type: map_at_1 value: 2.1839999999999997 - type: map_at_10 value: 11.346 - type: map_at_100 value: 30.325000000000003 - type: map_at_1000 value: 37.806 - type: map_at_3 value: 4.842 - type: map_at_5 value: 6.891 - type: mrr_at_1 value: 86.047 - type: mrr_at_10 value: 89.14699999999999 - type: mrr_at_100 value: 89.46600000000001 - type: mrr_at_1000 value: 89.46600000000001 - type: mrr_at_3 value: 89.14699999999999 - type: mrr_at_5 value: 89.14699999999999 - type: ndcg_at_1 value: 67.829 - type: ndcg_at_10 value: 62.222 - type: ndcg_at_100 value: 55.337 - type: ndcg_at_1000 value: 64.076 - type: ndcg_at_3 value: 68.12700000000001 - type: ndcg_at_5 value: 64.987 - type: precision_at_1 value: 86.047 - type: precision_at_10 value: 69.535 - type: precision_at_100 value: 32.93 - type: precision_at_1000 value: 6.6049999999999995 - type: precision_at_3 value: 79.845 - type: precision_at_5 value: 75.349 - type: recall_at_1 value: 2.1839999999999997 - type: recall_at_10 value: 12.866 - type: recall_at_100 value: 43.505 - type: recall_at_1000 value: 72.366 - type: recall_at_3 value: 4.947 - type: recall_at_5 value: 7.192 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 80.75319435104238 - type: f1 value: 77.58961444860606 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 85.54472091459313 - type: f1 value: 84.29498563572106 - task: type: Retrieval dataset: name: MTEB NFCorpus-PL type: clarin-knext/nfcorpus-pl config: default split: test revision: 9a6f9567fda928260afed2de480d79c98bf0bec0 metrics: - type: map_at_1 value: 4.367 - type: map_at_10 value: 10.38 - type: map_at_100 value: 13.516 - type: map_at_1000 value: 14.982000000000001 - type: map_at_3 value: 7.367 - type: map_at_5 value: 8.59 - type: mrr_at_1 value: 41.486000000000004 - type: mrr_at_10 value: 48.886 - type: mrr_at_100 value: 49.657000000000004 - type: mrr_at_1000 value: 49.713 - type: mrr_at_3 value: 46.904 - type: mrr_at_5 value: 48.065000000000005 - type: ndcg_at_1 value: 40.402 - type: ndcg_at_10 value: 30.885 - type: ndcg_at_100 value: 28.393 - type: ndcg_at_1000 value: 37.428 - type: ndcg_at_3 value: 35.394999999999996 - type: ndcg_at_5 value: 33.391999999999996 - type: precision_at_1 value: 41.486000000000004 - type: precision_at_10 value: 23.437 - type: precision_at_100 value: 7.638 - type: precision_at_1000 value: 2.0389999999999997 - type: precision_at_3 value: 32.817 - type: precision_at_5 value: 28.915999999999997 - type: recall_at_1 value: 4.367 - type: recall_at_10 value: 14.655000000000001 - type: recall_at_100 value: 29.665999999999997 - type: recall_at_1000 value: 62.073 - type: recall_at_3 value: 8.51 - type: recall_at_5 value: 10.689 - task: type: Retrieval dataset: name: MTEB NQ-PL type: clarin-knext/nq-pl config: default split: test revision: f171245712cf85dd4700b06bef18001578d0ca8d metrics: - type: map_at_1 value: 28.616000000000003 - type: map_at_10 value: 41.626000000000005 - type: map_at_100 value: 42.689 - type: map_at_1000 value: 42.733 - type: map_at_3 value: 37.729 - type: map_at_5 value: 39.879999999999995 - type: mrr_at_1 value: 32.068000000000005 - type: mrr_at_10 value: 44.029 - type: mrr_at_100 value: 44.87 - type: mrr_at_1000 value: 44.901 - type: mrr_at_3 value: 40.687 - type: mrr_at_5 value: 42.625 - type: ndcg_at_1 value: 32.068000000000005 - type: ndcg_at_10 value: 48.449999999999996 - type: ndcg_at_100 value: 53.13 - type: ndcg_at_1000 value: 54.186 - type: ndcg_at_3 value: 40.983999999999995 - type: ndcg_at_5 value: 44.628 - type: precision_at_1 value: 32.068000000000005 - type: precision_at_10 value: 7.9750000000000005 - type: precision_at_100 value: 1.061 - type: precision_at_1000 value: 0.116 - type: precision_at_3 value: 18.404999999999998 - type: precision_at_5 value: 13.111 - type: recall_at_1 value: 28.616000000000003 - type: recall_at_10 value: 66.956 - type: recall_at_100 value: 87.657 - type: recall_at_1000 value: 95.548 - type: recall_at_3 value: 47.453 - type: recall_at_5 value: 55.87800000000001 - task: type: Classification dataset: name: MTEB PAC type: laugustyniak/abusive-clauses-pl config: default split: test revision: None metrics: - type: accuracy value: 69.04141326382856 - type: ap value: 77.47589122111044 - type: f1 value: 66.6332277374775 - task: type: PairClassification dataset: name: MTEB PPC type: PL-MTEB/ppc-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 86.4 - type: cos_sim_ap value: 94.1044939667201 - type: cos_sim_f1 value: 88.78048780487805 - type: cos_sim_precision value: 87.22044728434504 - type: cos_sim_recall value: 90.39735099337747 - type: dot_accuracy value: 86.4 - type: dot_ap value: 94.1044939667201 - type: dot_f1 value: 88.78048780487805 - type: dot_precision value: 87.22044728434504 - type: dot_recall value: 90.39735099337747 - type: euclidean_accuracy value: 86.4 - type: euclidean_ap value: 94.1044939667201 - type: euclidean_f1 value: 88.78048780487805 - type: euclidean_precision value: 87.22044728434504 - type: euclidean_recall value: 90.39735099337747 - type: manhattan_accuracy value: 86.4 - type: manhattan_ap value: 94.11438365697387 - type: manhattan_f1 value: 88.77968877968877 - type: manhattan_precision value: 87.84440842787681 - type: manhattan_recall value: 89.73509933774835 - type: max_accuracy value: 86.4 - type: max_ap value: 94.11438365697387 - type: max_f1 value: 88.78048780487805 - task: type: PairClassification dataset: name: MTEB PSC type: PL-MTEB/psc-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 97.86641929499072 - type: cos_sim_ap value: 99.36904211868182 - type: cos_sim_f1 value: 96.56203288490283 - type: cos_sim_precision value: 94.72140762463343 - type: cos_sim_recall value: 98.47560975609755 - type: dot_accuracy value: 97.86641929499072 - type: dot_ap value: 99.36904211868183 - type: dot_f1 value: 96.56203288490283 - type: dot_precision value: 94.72140762463343 - type: dot_recall value: 98.47560975609755 - type: euclidean_accuracy value: 97.86641929499072 - type: euclidean_ap value: 99.36904211868183 - type: euclidean_f1 value: 96.56203288490283 - type: euclidean_precision value: 94.72140762463343 - type: euclidean_recall value: 98.47560975609755 - type: manhattan_accuracy value: 98.14471243042672 - type: manhattan_ap value: 99.43359540492416 - type: manhattan_f1 value: 96.98795180722892 - type: manhattan_precision value: 95.83333333333334 - type: manhattan_recall value: 98.17073170731707 - type: max_accuracy value: 98.14471243042672 - type: max_ap value: 99.43359540492416 - type: max_f1 value: 96.98795180722892 - task: type: Classification dataset: name: MTEB PolEmo2.0-IN type: PL-MTEB/polemo2_in config: default split: test revision: None metrics: - type: accuracy value: 89.39058171745152 - type: f1 value: 86.8552093529568 - task: type: Classification dataset: name: MTEB PolEmo2.0-OUT type: PL-MTEB/polemo2_out config: default split: test revision: None metrics: - type: accuracy value: 74.97975708502024 - type: f1 value: 58.73081628832407 - task: type: Retrieval dataset: name: MTEB Quora-PL type: clarin-knext/quora-pl config: default split: test revision: 0be27e93455051e531182b85e85e425aba12e9d4 metrics: - type: map_at_1 value: 64.917 - type: map_at_10 value: 78.74600000000001 - type: map_at_100 value: 79.501 - type: map_at_1000 value: 79.524 - type: map_at_3 value: 75.549 - type: map_at_5 value: 77.495 - type: mrr_at_1 value: 74.9 - type: mrr_at_10 value: 82.112 - type: mrr_at_100 value: 82.314 - type: mrr_at_1000 value: 82.317 - type: mrr_at_3 value: 80.745 - type: mrr_at_5 value: 81.607 - type: ndcg_at_1 value: 74.83999999999999 - type: ndcg_at_10 value: 83.214 - type: ndcg_at_100 value: 84.997 - type: ndcg_at_1000 value: 85.207 - type: ndcg_at_3 value: 79.547 - type: ndcg_at_5 value: 81.46600000000001 - type: precision_at_1 value: 74.83999999999999 - type: precision_at_10 value: 12.822 - type: precision_at_100 value: 1.506 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 34.903 - type: precision_at_5 value: 23.16 - type: recall_at_1 value: 64.917 - type: recall_at_10 value: 92.27199999999999 - type: recall_at_100 value: 98.715 - type: recall_at_1000 value: 99.854 - type: recall_at_3 value: 82.04599999999999 - type: recall_at_5 value: 87.2 - task: type: Retrieval dataset: name: MTEB SCIDOCS-PL type: clarin-knext/scidocs-pl config: default split: test revision: 45452b03f05560207ef19149545f168e596c9337 metrics: - type: map_at_1 value: 3.51 - type: map_at_10 value: 9.046999999999999 - type: map_at_100 value: 10.823 - type: map_at_1000 value: 11.144 - type: map_at_3 value: 6.257 - type: map_at_5 value: 7.648000000000001 - type: mrr_at_1 value: 17.299999999999997 - type: mrr_at_10 value: 27.419 - type: mrr_at_100 value: 28.618 - type: mrr_at_1000 value: 28.685 - type: mrr_at_3 value: 23.817 - type: mrr_at_5 value: 25.927 - type: ndcg_at_1 value: 17.299999999999997 - type: ndcg_at_10 value: 16.084 - type: ndcg_at_100 value: 23.729 - type: ndcg_at_1000 value: 29.476999999999997 - type: ndcg_at_3 value: 14.327000000000002 - type: ndcg_at_5 value: 13.017999999999999 - type: precision_at_1 value: 17.299999999999997 - type: precision_at_10 value: 8.63 - type: precision_at_100 value: 1.981 - type: precision_at_1000 value: 0.336 - type: precision_at_3 value: 13.4 - type: precision_at_5 value: 11.700000000000001 - type: recall_at_1 value: 3.51 - type: recall_at_10 value: 17.518 - type: recall_at_100 value: 40.275 - type: recall_at_1000 value: 68.203 - type: recall_at_3 value: 8.155 - type: recall_at_5 value: 11.875 - task: type: PairClassification dataset: name: MTEB SICK-E-PL type: PL-MTEB/sicke-pl-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 86.30248675091724 - type: cos_sim_ap value: 83.6756734006714 - type: cos_sim_f1 value: 74.97367497367497 - type: cos_sim_precision value: 73.91003460207612 - type: cos_sim_recall value: 76.06837606837607 - type: dot_accuracy value: 86.30248675091724 - type: dot_ap value: 83.6756734006714 - type: dot_f1 value: 74.97367497367497 - type: dot_precision value: 73.91003460207612 - type: dot_recall value: 76.06837606837607 - type: euclidean_accuracy value: 86.30248675091724 - type: euclidean_ap value: 83.67566984333091 - type: euclidean_f1 value: 74.97367497367497 - type: euclidean_precision value: 73.91003460207612 - type: euclidean_recall value: 76.06837606837607 - type: manhattan_accuracy value: 86.28210354667753 - type: manhattan_ap value: 83.64216119130171 - type: manhattan_f1 value: 74.92152075340078 - type: manhattan_precision value: 73.4107997265892 - type: manhattan_recall value: 76.49572649572649 - type: max_accuracy value: 86.30248675091724 - type: max_ap value: 83.6756734006714 - type: max_f1 value: 74.97367497367497 - task: type: STS dataset: name: MTEB SICK-R-PL type: PL-MTEB/sickr-pl-sts config: default split: test revision: None metrics: - type: cos_sim_pearson value: 82.23295940859121 - type: cos_sim_spearman value: 78.89329160768719 - type: euclidean_pearson value: 79.56019107076818 - type: euclidean_spearman value: 78.89330209904084 - type: manhattan_pearson value: 79.76098513973719 - type: manhattan_spearman value: 79.05490162570123 - task: type: STS dataset: name: MTEB STS22 (pl) type: mteb/sts22-crosslingual-sts config: pl split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 37.732606308062486 - type: cos_sim_spearman value: 41.01645667030284 - type: euclidean_pearson value: 26.61722556367085 - type: euclidean_spearman value: 41.01645667030284 - type: manhattan_pearson value: 26.60917378970807 - type: manhattan_spearman value: 41.51335727617614 - task: type: Retrieval dataset: name: MTEB SciFact-PL type: clarin-knext/scifact-pl config: default split: test revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e metrics: - type: map_at_1 value: 54.31700000000001 - type: map_at_10 value: 65.564 - type: map_at_100 value: 66.062 - type: map_at_1000 value: 66.08699999999999 - type: map_at_3 value: 62.592999999999996 - type: map_at_5 value: 63.888 - type: mrr_at_1 value: 56.99999999999999 - type: mrr_at_10 value: 66.412 - type: mrr_at_100 value: 66.85900000000001 - type: mrr_at_1000 value: 66.88 - type: mrr_at_3 value: 64.22200000000001 - type: mrr_at_5 value: 65.206 - type: ndcg_at_1 value: 56.99999999999999 - type: ndcg_at_10 value: 70.577 - type: ndcg_at_100 value: 72.879 - type: ndcg_at_1000 value: 73.45 - type: ndcg_at_3 value: 65.5 - type: ndcg_at_5 value: 67.278 - type: precision_at_1 value: 56.99999999999999 - type: precision_at_10 value: 9.667 - type: precision_at_100 value: 1.083 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.0 - type: precision_at_5 value: 16.933 - type: recall_at_1 value: 54.31700000000001 - type: recall_at_10 value: 85.056 - type: recall_at_100 value: 95.667 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 71.0 - type: recall_at_5 value: 75.672 - task: type: Retrieval dataset: name: MTEB TRECCOVID-PL type: clarin-knext/trec-covid-pl config: default split: test revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd metrics: - type: map_at_1 value: 0.245 - type: map_at_10 value: 2.051 - type: map_at_100 value: 12.009 - type: map_at_1000 value: 27.448 - type: map_at_3 value: 0.721 - type: map_at_5 value: 1.13 - type: mrr_at_1 value: 88.0 - type: mrr_at_10 value: 93.0 - type: mrr_at_100 value: 93.0 - type: mrr_at_1000 value: 93.0 - type: mrr_at_3 value: 93.0 - type: mrr_at_5 value: 93.0 - type: ndcg_at_1 value: 85.0 - type: ndcg_at_10 value: 80.303 - type: ndcg_at_100 value: 61.23499999999999 - type: ndcg_at_1000 value: 52.978 - type: ndcg_at_3 value: 84.419 - type: ndcg_at_5 value: 82.976 - type: precision_at_1 value: 88.0 - type: precision_at_10 value: 83.39999999999999 - type: precision_at_100 value: 61.96 - type: precision_at_1000 value: 22.648 - type: precision_at_3 value: 89.333 - type: precision_at_5 value: 87.2 - type: recall_at_1 value: 0.245 - type: recall_at_10 value: 2.193 - type: recall_at_100 value: 14.938 - type: recall_at_1000 value: 48.563 - type: recall_at_3 value: 0.738 - type: recall_at_5 value: 1.173 --- # soichisumi/gte-Qwen2-7B-instruct-Q8_0-GGUF This model was converted to GGUF format from [`Alibaba-NLP/gte-Qwen2-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo soichisumi/gte-Qwen2-7B-instruct-Q8_0-GGUF --hf-file gte-qwen2-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo soichisumi/gte-Qwen2-7B-instruct-Q8_0-GGUF --hf-file gte-qwen2-7b-instruct-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo soichisumi/gte-Qwen2-7B-instruct-Q8_0-GGUF --hf-file gte-qwen2-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo soichisumi/gte-Qwen2-7B-instruct-Q8_0-GGUF --hf-file gte-qwen2-7b-instruct-q8_0.gguf -c 2048 ```
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
Azma-AI/bart-conversation-summarizer
Azma-AI
summarization
[ "transformers", "pytorch", "bart", "text2text-generation", "summarization", "dataset:samsum", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,696
1,696
27
6
--- datasets: - samsum pipeline_tag: summarization widget: - text: "Laurie: So, what are your plans for this weekend?\nChristie: I don’t know.\ \ Do you want to get together or something?\nSarah: How about going to see a movie?\ \ Cinemax 26 on Carson Boulevard is showing Enchanted. Laurie: That sounds like\ \ a good idea. Maybe we should go out to eat beforehand.\nSarah: It is fine with\ \ me. Where do you want to meet?\nChristie: Let’s meet at Summer Pizza House.\ \ I have not gone there for a long time.\nLaurie: Good idea again. I heard they\ \ just came up with a new pizza. It should be good because Summer Pizza House\ \ always has the best pizza in town.\nSarah: When should we meet?\nChristie: Well,\ \ the movie is shown at 2:00PM, 4:00PM, 6:00PM and 8:00PM.\nLaurie: Why don’t\ \ we go to the 2:00PM show? We can meet at Summer Pizza House at noon. That will\ \ give us plenty of time to enjoy our pizza.\nSarah: My cousin Karen is in town.\ \ Can I bring her along? I hate to leave her home alone.\nChristie: Karen is in\ \ town? Yes, bring her along. Laurie, you remember Karen? We met her at Sara’s\ \ high school graduation party two years ago.\nLaurie: I do not quite remember\ \ her. What does she look like?\nSarah: She has blond hair, she is kind of slender,\ \ and she is about your height.\nLaurie: She wears eyeglasses, right?\nSarah:\ \ Yes, and she was playing the piano off and on during the party.\nLaurie: I remember\ \ her now. Yes, do bring her along Sara. She is such a nice person, and funny\ \ too.\nSarah: She will be happy to meet both of you again.\nChristie: What is\ \ she doing these days?\nSarah: She graduated last June, and she will start her\ \ teaching career next week when the new school term begins.\nLaurie: What grade\ \ is she going to teach?\nSarah: She will teach kindergarten. She loves working\ \ with kids, and she always has such a good rapport with them\nChristie: Kindergarten?\ \ She must be a very patient person. I always think kindergarten is the most difficult\ \ class to teach. Most of the kids have never been to school, and they have e\ \ never been away from mommy for long.\nSarah: I think Karen will do fine. She\ \ knows how to handle young children\nLaurie: I think the first few weeks will\ \ be tough. However, once the routine is set, it should not be too difficult to\ \ teach kindergarten.\nChristie: You are right. The kids might even look forward\ \ to going to school since they have so many friends to play with.\nSarah: There\ \ are so many new things for them to do at school too. They do a lot of crafts\ \ in kindergarten. I am always amazed by the things kindergarten teachers do.\ \ \nLaurie: Yes, I have seen my niece come home with so many neat stuff.\nChristie:\ \ Maybe we can ask Karen to show us some of the things that we can do for this\ \ Halloween.\nLaurie: Maybe we can stop by the craft store after the movie. What\ \ do you think, Sara?\nSarah: I will talk to her. I think she will like that.\ \ It will help her with school projects when Halloween comes.\nChristie: Michael’s\ \ is a good store for crafts. It always carries a variety of things, and you can\ \ find almost anything there.\nLaurie: There is a Michaels store not far away\ \ from Cinemax 26. I believe it is just around the corner, on Pioneer Avenue.\ \ We can even walk over there.\nSarah: So, we plan to meet for pizza at noon,\ \ go to the movies at two, and shop at Michael’s afterward. Right?\nLaurie and\ \ Christie: Yes. \n" model-index: - name: bart-large-cnn-samsum results: - task: type: summarization name: Conversation Summarization dataset: name: 'SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization' type: samsum metrics: - type: rogue-1 value: 54.8764 name: Validation ROGUE-1 - type: rogue-2 value: 29.6869, name: Validation ROGUE-2 - type: rogue-l value: 44.9874 name: Validation ROGUE-L - type: loss value: 1.47812 name: loss ---
[ "SUMMARIZATION" ]
[ "CRAFT" ]
Non_BioNLP
sschet/bert-large-uncased_med-ner
sschet
token-classification
[ "transformers", "pytorch", "jax", "bert", "token-classification", "en", "dataset:tner/bc5cdr", "dataset:commanderstrife/jnlpba", "dataset:bc2gm_corpus", "dataset:drAbreu/bc4chemd_ner", "dataset:linnaeus", "dataset:chintagunta85/ncbi_disease", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,675
1,675
129
3
--- datasets: - tner/bc5cdr - commanderstrife/jnlpba - bc2gm_corpus - drAbreu/bc4chemd_ner - linnaeus - chintagunta85/ncbi_disease language: - en --- A Named Entity Recognition model for medication entities (`medication name`, `dosage`, `duration`, `frequency`, `reason`). The model has been trained on the i2b2 (now n2c2) dataset for the 2009 - Medication task. Please visit the n2c2 site to request access to the dataset.
[ "NAMED_ENTITY_RECOGNITION" ]
[ "BC5CDR", "JNLPBA", "LINNAEUS", "NCBI DISEASE" ]
BioNLP
tranguyen/halong_embedding-legal-document-finetune
tranguyen
sentence-similarity
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:119717", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:hiieu/halong_embedding", "base_model:finetune:hiieu/halong_embedding", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,731
1,731
7
0
--- base_model: hiieu/halong_embedding library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:119717 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss widget: - source_sentence: Thí sinh ở Thành phố Hồ Chí Minh nộp lệ phí đăng ký nguyện vọng đại học năm 2023 khi nào? sentences: - 'I. Đối với thí sinh ... 7. Đăng ký và xử lý nguyện vọng: ... b) Từ ngày 31/7 đến 17 giờ 00 ngày 06/8/2023: - Thí sinh phải nộp lệ phí xét tuyển theo số lượng NVXT bằng hình thức trực tuyến theo hướng dẫn của Bộ GDĐT; - Riêng thí sinh thuộc diện hưởng chính sách ưu tiên khu vực, ưu tiên đối tượng phải phối hợp với các điểm tiếp nhận rà soát thông tin khu vực (Phụ lục VI) và đối tượng ưu tiên của thí sinh (nếu có). Thí sinh tìm hiểu kỹ tài liệu hướng dẫn và phải thực hiện đúng, đủ, hết quy trình ĐKXT; thí sinh chưa rõ các nội dung khai báo, nộp lệ phí xét tuyển có thể liên hệ với cán bộ tại các điểm tiếp nhận hoặc cán bộ của CSĐT trực các số điện thoại hỗ trợ công tác tuyển sinh để được hướng dẫn.' - '“Nhà nước thu hồi đất là việc Nhà nước quyết định thu lại quyền sử dụng đất của người được Nhà nước trao quyền sử dụng đất hoặc thu lại đất của người sử dụng đất vi phạm pháp luật về đất đai.”“1. Nhà nước quyết định thu hồi đất trong các trường hợp sau đây: a) Thu hồi đất vì mục đích quốc phòng, an ninh; phát triển kinh tế - xã hội vì lợi ích quốc gia, công cộng; b) Thu hồi đất do vi phạm pháp luật về đất đai; c) Thu hồi đất do chấm dứt việc sử dụng đất theo pháp luật, tự nguyện trả lại đất, có nguy cơ đe dọa tính mạng con người. 2. Nhà nước quyết định trưng dụng đất trong trường hợp thật cần thiết để thực hiện nhiệm vụ quốc phòng, an ninh hoặc trong tình trạng chiến tranh, tình trạng khẩn cấp, phòng, chống thiên tai.”' - '1. Gây phiền hà, sách nhiễu hoặc cản trở người đến khiếu nại, tố cáo, kiến nghị, phản ánh. 2. Thiếu trách nhiệm trong việc tiếp công dân; làm mất hoặc làm sai lệch thông tin, tài liệu do người khiếu nại, tố cáo, kiến nghị, phản ánh cung cấp. 3. Phân biệt đối xử trong khi tiếp công dân. 4. Lợi dụng quyền khiếu nại, tố cáo, kiến nghị, phản ánh để gây rối trật tự công cộng. 5. Xuyên tạc, vu khống, gây thiệt hại cho cơ quan, tổ chức, đơn vị, cá nhân. 6. Đe dọa, xúc phạm cơ quan, tổ chức, đơn vị, người tiếp công dân, người thi hành công vụ. 7. Kích động, cưỡng ép, dụ dỗ, lôi kéo, mua chuộc người khác tập trung đông người tại nơi tiếp công dân. 8. Vi phạm các quy định khác trong nội quy, quy chế tiếp công dân.1. Khi tiếp công dân, người tiếp công dân phải bảo đảm trang phục chỉnh tề, có đeo thẻ công chức, viên chức hoặc phù hiệu theo quy định. 2. Yêu cầu người đến khiếu nại, tố cáo, kiến nghị, phản ánh nêu rõ họ tên, địa chỉ hoặc xuất trình giấy tờ tùy thân, giấy ủy quyền (nếu có); có đơn hoặc trình bày rõ ràng nội dung khiếu nại, tố cáo, kiến nghị, phản ánh; cung cấp thông tin, tài liệu cần thiết cho việc tiếp nhận, thụ lý vụ việc. 3. Có thái độ đứng mực, tôn trọng công dân, lắng nghe, tiếp nhận đơn khiếu nại, tố cáo, kiến nghị, phản ánh hoặc ghi chép đầy đủ, chính xác nội dung mà người đến khiếu nại, tố cáo, kiến nghị, phản ánh trình bày. 4. Giải thích, hướng dẫn cho người đến khiếu nại, tố cáo, kiến nghị, phản ánh chấp hành chủ trương, đường lối, chính sách, pháp luật, kết luận, quyết định giải quyết đã có hiệu lực pháp luật của cơ quan có thẩm quyền; hướng dẫn người khiếu nại, tố cáo, kiến nghị, phản ánh đến đúng cơ quan hoặc người có thẩm quyền giải quyết. 5. Trực tiếp xử lý hoặc phân loại, chuyển đơn, trình người có thẩm quyền xử lý khiếu nại, tố cáo, kiến nghị, phản ánh; thông báo kết quả xử lý khiếu nại, tố cáo, kiến nghị, phản ánh cho công dân. 6. Yêu cầu người vi phạm nội quy nơi tiếp công dân chấm dứt hành vi vi phạm; trong trường hợp cần thiết, lập biên bản về việc vi phạm và yêu cầu cơ quan chức năng xử lý theo quy định của pháp luật.' - source_sentence: Người vợ thường xuyên bị chồng hành hung, đánh đập có thể được trợ giúp pháp lý khi tiến hành khởi kiện người chồng không? sentences: - 'Các yêu cầu cơ bản ... 5.7 Yêu cầu về ngăn nước trong hố khoan (cách ly các tầng nước) ... 5.7.2 Đối với hố khoan máy 1) Trong điều kiện bình thường thì biện pháp ngăn nước trong hố khoan máy là ngăn nước bằng ống chống (đáy ống nằm trong vữa xi măng đặc) hoặc bằng bộ nút chuyên dụng. Trong điều kiện phức tạp như ngăn cách ly hai tầng chứa nước hoặc ngăn chống nước áp lực phun lên thì phải có thiết kế cho từng trường hợp cụ thể. 2) Trước khi tiến hành ngăn nước trong hố khoan, phải xác định chính xác độ sâu của đoạn cần ngăn nước, đặc điểm địa tầng phía trên, phía dưới bộ nút ngăn, mực nước ngầm trong hố khoan để có biện pháp ngăn nước thích hợp. Các số liệu thu thập và diễn biến trong quá trình ngăn nước phải được ghi tỉ mỉ trong nhật ký khoan máy (tham khảo điều C.1.1, Phụ lục C). 3) Kiểm tra chất lượng ngăn nước trong hố khoan theo các bước sau: - Khoan qua cột đá xi măng chân ống chống hoặc đoạn nút ngăn nước bằng vữa xi măng; - Đổ thêm hoặc hút bớt nước trong hố khoan để nâng cao hoặc hạ thấp mực nước trong hố khoan một khoảng bằng 1/3 cột nước có trong hố khoan trước khi tiến hành ngăn nước, để nước hồi phục dần đến ổn định; - Đo mức độ thay đổi của mực nước trong hố khoan trước và sau khi ngăn nước; 4) Nếu mức độ thay đổi mực nước giữa 3 lần đo liên tiếp nhỏ hơn 1 cm thì việc ngăn nước đạt yêu cầu. Nếu kết quả ngăn nước chưa đạt yêu cầu thì phải tiến hành ngăn nước lại. ...' - 'Nguyên tắc và cách thức làm việc của Hội đồng: 1. Hội đồng làm việc theo nguyên tắc tư vấn. 2. Ý kiến tư vấn của Hội đồng được thảo luận tập thể và do chủ tọa cuộc họp kết luận. Người chủ tọa và kết luận tại cuộc họp Hội đồng là Chủ tịch Hội đồng hoặc Phó Chủ tịch được Chủ tịch Hội đồng ủy quyền. 3. Những vấn đề quan trọng về tài chính, tiền tệ quốc gia có thể tác động lớn đến quốc phòng, an ninh và kinh tế - xã hội thì thành phần mời họp Hội đồng do Thường trực Hội đồng đề xuất và Chủ tịch Hội đồng quyết định. 4. Đối với những đề án lớn, phức tạp, Hội đồng tổ chức việc tham khảo ý kiến các chuyên gia, các doanh nhân, các nhà khoa học,... trước khi đưa ra Hội đồng họp thảo luận. 5. Hội đồng họp định kỳ mỗi quý một lần vào tháng cuối quý. Trường hợp cần thiết theo yêu cầu của Thủ tướng Chính phủ hoặc Chủ tịch Hội đồng, Hội đồng sẽ tiến hành họp đột xuất. Ngoài việc tổ chức thảo luận tập trung để các thành viên cho ý kiến trực tiếp tại các cuộc họp Hội đồng, Hội đồng có thể lấy ý kiến tham gia của thành viên bằng văn bản.' - 'Lĩnh vực, hình thức trợ giúp pháp lý 1. Trợ giúp pháp lý được thực hiện trong các lĩnh vực pháp luật, trừ lĩnh vực kinh doanh, thương mại. 2. Các hình thức trợ giúp pháp lý bao gồm: a) Tham gia tố tụng; b) Tư vấn pháp luật; c) Đại diện ngoài tố tụng.Người được trợ giúp pháp lý 1. Người có công với cách mạng. 2. Người thuộc hộ nghèo. 3. Trẻ em. 4. Người dân tộc thiểu số cư trú ở vùng có điều kiện kinh tế - xã hội đặc biệt khó khăn. 5. Người bị buộc tội từ đủ 16 tuổi đến dưới 18 tuổi. 6. Người bị buộc tội thuộc hộ cận nghèo. 7. Người thuộc một trong các trường hợp sau đây có khó khăn về tài chính: a) Cha đẻ, mẹ đẻ, vợ, chồng, con của liệt sĩ và người có công nuôi dưỡng khi liệt sĩ còn nhỏ; b) Người nhiễm chất độc da cam; c) Người cao tuổi; d) Người khuyết tật; đ) Người từ đủ 16 tuổi đến dưới 18 tuổi là bị hại trong vụ án hình sự; e) Nạn nhân trong vụ việc bạo lực gia đình; g) Nạn nhân của hành vi mua bán người theo quy định của Luật Phòng, chống mua bán người; h) Người nhiễm HIV. ….' - source_sentence: Ai phải nộp phí dịch vụ duy trì hệ thống kiểm tra trạng thái chứng thư số? sentences: - '"1. Những người được bảo vệ gồm: a) Người tố giác tội phạm; b) Người làm chứng; c) Bị hại; d) Người thân thích của người tố giác tội phạm, người làm chứng, bị hại. 2. Người được bảo vệ có quyền: a) Đề nghị được bảo vệ; b) Được thông báo, giải thích về quyền và nghĩa vụ; c) Được biết về việc áp dụng biện pháp bảo vệ; đề nghị thay đổi, bổ sung, hủy bỏ biện pháp bảo vệ; d) Được bồi thường thiệt hại, khôi phục danh dự, bảo đảm các quyền và lợi ích hợp pháp trong thời gian bảo vệ. 3. Người được bảo vệ có nghĩa vụ: a) Chấp hành nghiêm chỉnh các yêu cầu của cơ quan bảo vệ liên quan đến việc bảo vệ; b) Giữ bí mật thông tin bảo vệ; c) Thông báo kịp thời đến cơ quan có trách nhiệm bảo vệ về những vấn đề nghi vấn trong thời gian được bảo vệ."' - 'Mạng nội bộ và Internet 1. Có biện pháp phát hiện và phòng chống xâm nhập, phòng chống phát tán mã độc hại trên mạng nội bộ và Internet. 2. Có biện pháp phòng chống tấn công từ chối dịch vụ từ bên trong mạng nội bộ và bên ngoài Internet. 3. Yêu cầu có các biện pháp xác thực đảm bảo an toàn đối với các kết nối không dây. 4. Có biện pháp phân tách các phân vùng mạng để đảm bảo kiểm soát được các truy cập hệ thống thông tin và đảm bảo truy cập hiệu quả đối với các dữ liệu cần truy cập nhanh chóng. 5. Xác định, xây dựng và triển khai các phương án dự phòng cho các vị trí có mức độ ảnh hưởng cao tới hoạt động của hệ thống mạng hoặc có khả năng làm tê liệt hệ thống mạng của đơn vị khi xảy ra sự cố. 6. Xác định và đảm bảo nhu cầu băng thông của mạng nội bộ và Internet. 7. Thường xuyên cập nhật các bản vá lỗi hệ thống, cập nhật cấu hình cho các thiết bị mạng và các thiết bị bảo mật. 8. Bảo đảm chất lượng và đầy đủ các trang thiết bị mạng, an ninh, bảo mật, phần mềm chống virus, công cụ phân tích, quản trị mạng được cài đặt trong mạng của đơn vị.' - 'Người nộp phí Người nộp phí dịch vụ duy trì hệ thống kiểm tra trạng thái chứng thư số là doanh nghiệp được cấp giấy phép cung cấp dịch vụ chứng thực chữ ký số cho tổ chức, doanh nghiệp sử dụng theo quy định của pháp luật.Nghĩa vụ của tổ chức cung cấp dịch vụ chứng thực chữ ký số công cộng đối với cơ quan quản lý nhà nước về chữ ký số và dịch vụ chứng thực chữ ký số ... 5. Nộp phí dịch vụ duy trì hệ thống kiểm tra trạng thái chứng thư số theo quy định. 6. Báo cáo định kỳ và đột xuất theo quy định của Bộ Thông tin và Truyền thông và yêu cầu của các cơ quan nhà nước có thẩm quyền.' - source_sentence: Có được sử dụng trẻ em trong việc mua bán thuốc lá? sentences: - '"1. Sản xuất, mua bán, nhập khẩu, tàng trữ, vận chuyển thuốc lá giả, sản phẩm được thiết kế có hình thức hoặc kiểu dáng như bao, gói hoặc điếu thuốc lá; mua bán, tàng trữ, vận chuyển nguyên liệu thuốc lá, thuốc lá nhập lậu. 2. Quảng cáo, khuyến mại thuốc lá; tiếp thị thuốc lá trực tiếp tới người tiêu dùng dưới mọi hình thức. 3. Tài trợ của tổ chức, cá nhân kinh doanh thuốc lá, trừ trường hợp quy định tại Điều 16 của Luật này. 4. Người chưa đủ 18 tuổi sử dụng, mua, bán thuốc lá. 5. Sử dụng người chưa đủ 18 tuổi mua, bán thuốc lá. 6. Bán, cung cấp thuốc lá cho người chưa đủ 18 tuổi. 7. Bán thuốc lá bằng máy bán thuốc lá tự động; hút, bán thuốc lá tại địa điểm có quy định cấm. 8. Sử dụng hình ảnh thuốc lá trên báo chí, xuất bản phẩm dành riêng cho trẻ em. 9. Vận động, ép buộc người khác sử dụng thuốc lá."' - 'Trường hợp sử dụng đất được cấp Giấy chứng nhận quyền sử dụng đất, quyền sở hữu nhà ở và tài sản khác gắn liền với đất 1. Nhà nước cấp Giấy chứng nhận quyền sử dụng đất, quyền sở hữu nhà ở và tài sản khác gắn liền với đất cho những trường hợp sau đây: a) Người đang sử dụng đất có đủ điều kiện cấp Giấy chứng nhận quyền sử dụng đất, quyền sở hữu nhà ở và tài sản khác gắn liền với đất theo quy định tại các điều 100, 101 và 102 của Luật này; b) Người được Nhà nước giao đất, cho thuê đất từ sau ngày Luật này có hiệu lực thi hành; c) Người được chuyển đổi, nhận chuyển nhượng, được thừa kế, nhận tặng cho quyền sử dụng đất, nhận góp vốn bằng quyền sử dụng đất; người nhận quyền sử dụng đất khi xử lý hợp đồng thế chấp bằng quyền sử dụng đất để thu hồi nợ; d) Người được sử dụng đất theo kết quả hòa giải thành đối với tranh chấp đất đai; theo bản án hoặc quyết định của Tòa án nhân dân, quyết định thi hành án của cơ quan thi hành án hoặc quyết định giải quyết tranh chấp, khiếu nại, tố cáo về đất đai của cơ quan nhà nước có thẩm quyền đã được thi hành; đ) Người trúng đấu giá quyền sử dụng đất; e) Người sử dụng đất trong khu công nghiệp, cụm công nghiệp, khu chế xuất, khu công nghệ cao, khu kinh tế; g) Người mua nhà ở, tài sản khác gắn liền với đất; h) Người được Nhà nước thanh lý, hóa giá nhà ở gắn liền với đất ở; người mua nhà ở thuộc sở hữu nhà nước; i) Người sử dụng đất tách thửa, hợp thửa; nhóm người sử dụng đất hoặc các thành viên hộ gia đình, hai vợ chồng, tổ chức sử dụng đất chia tách, hợp nhất quyền sử dụng đất hiện có; k) Người sử dụng đất đề nghị cấp đổi hoặc cấp lại Giấy chứng nhận bị mất. ..."' - 'Quản lý dân cư 1. Dân cư trên địa bàn Thủ đô được quản lý với quy mô, mật độ, cơ cấu theo Quy hoạch chung xây dựng Thủ đô. 2. Hội đồng nhân dân thành phố Hà Nội ban hành chính sách ưu tiên đầu tư và huy động các nguồn lực đầu tư xây dựng các khu đô thị, nhà ở, hệ thống hạ tầng kỹ thuật, hạ tầng xã hội đồng bộ, hiện đại, thuận tiện ở ngoại thành; phối hợp với các tỉnh, thành phố trực thuộc trung ương trong Vùng Thủ đô phát triển kinh tế - xã hội và giải quyết việc làm nhằm hạn chế tình trạng di dân tự phát vào nội thành.3. Việc đăng ký thường trú ở ngoại thành được thực hiện theo quy định của pháp luật về cư trú. 4. Công dân thuộc một trong các trường hợp sau đây thì được đăng ký thường trú ở nội thành: a) Các trường hợp quy định tại các khoản 2, 3 và 4 Điều 20 của Luật cư trú; b) Các trường hợp không thuộc điểm a khoản này đã tạm trú liên tục tại nội thành từ 3 năm trở lên, có nhà ở thuộc sở hữu của mình hoặc nhà thuê ở nội thành của tổ chức, cá nhân có đăng ký kinh doanh nhà ở; đối với nhà thuê phải bảo đảm điều kiện về diện tích bình quân theo quy định của Hội đồng nhân dân thành phố Hà Nội và được sự đồng ý bằng văn bản của tổ chức, cá nhân có nhà cho thuê cho đăng ký thường trú vào nhà thuê.' - source_sentence: Kiểm định viên chính kỹ thuật an toàn lao động phải đáp ứng những tiêu chuẩn gì về trình độ đào tạo? sentences: - 'Quyền và nghĩa vụ của người trúng đấu giá biển số xe ô tô; người nhận chuyển nhượng, trao đổi, được tặng cho, thừa kế xe ô tô gắn biển số trúng đấu giá ... 2. Nghĩa vụ của người trúng đấu giá biển số xe ô tô bao gồm: ... c) Không được chuyển nhượng, trao đổi, tặng cho, để thừa kế biển số xe ô tô trúng đấu giá, trừ trường hợp chuyển nhượng, trao đổi, tặng cho, để thừa kế xe ô tô gắn biển số trúng đấu giá.Thủ tục đăng ký xe ... 3. Trường hợp chuyển quyền sở hữu xe kèm theo biển số xe trúng đấu giá a) Chủ xe nộp hồ sơ và làm thủ tục thu hồi theo quy định tại khoản 1 Điều 14, khoản 1 Điều 15 Thông tư này, chủ xe không phải nộp lại biển số xe trúng đấu giá nhưng phải nộp bản sao chứng từ chuyển quyền sở hữu xe và xuất trình bản chính để đối chiếu (chứng từ chuyển quyền sở hữu phải thể hiện rõ nội dung chuyển quyền sở hữu xe kèm theo biển số trúng đấu giá); b) Tổ chức, cá nhân nhận chuyển quyền sở hữu xe nộp hồ sơ và làm thủ tục đăng ký sang tên xe theo quy định tại khoản 2 Điều 14, khoản 2 Điều 15 Thông tư này và được đăng ký, giữ nguyên biển số xe trúng đấu giá (chứng từ chuyển quyền sở hữu phải thể hiện rõ nội dung chuyển quyền sở hữu xe kèm theo biển số trúng đấu giá). Tổ chức, cá nhân đã nhận chuyển quyền sở hữu xe kèm theo biển số xe trúng đấu giá, không được tiếp tục chuyển quyền sở hữu xe kèm theo biển số xe trúng đấu giá cho tổ chức, cá nhân khác; được chuyển quyền sở hữu xe theo quy định của pháp luật.' - 'Đề xuất sửa đổi, bổ sung ngành, nghề đầu tư kinh doanh có điều kiện và điều kiện đầu tư kinh doanh 1. Căn cứ điều kiện phát triển kinh tế - xã hội, yêu cầu quản lý nhà nước trong từng thời kỳ và điều ước quốc tế về đầu tư, bộ, cơ quan ngang bộ trình Chính phủ đề xuất sửa đổi, bổ sung ngành, nghề đầu tư kinh doanh có điều kiện hoặc điều kiện đầu tư kinh doanh. 2. Việc đề xuất sửa đổi, bổ sung ngành, nghề đầu tư kinh doanh có điều kiện hoặc điều kiện đầu tư kinh doanh được thực hiện trong Đề nghị xây dựng văn bản quy phạm pháp luật theo quy định của Luật Ban hành văn bản quy phạm pháp luật, trong đó có những nội dung sau đây: a) Ngành, nghề đầu tư kinh doanh có điều kiện hoặc điều kiện đầu tư kinh doanh dự kiến sửa đổi, bổ sung; b) Phân tích sự cần thiết, mục đích của việc sửa đổi, bổ sung ngành, nghề đầu tư kinh doanh có điều kiện hoặc điều kiện đầu tư kinh doanh phù hợp với quy định tại khoản 1 Điều 7 Luật Đầu tư; c) Căn cứ sửa đổi, bổ sung ngành, nghề đầu tư kinh doanh có điều kiện hoặc điều kiện đầu tư kinh doanh và đối tượng phải tuân thủ; d) Đánh giá tính hợp lý, khả thi của việc sửa đổi, bổ sung ngành, nghề đầu tư kinh doanh có điều kiện hoặc điều kiện đầu tư kinh doanh và sự phù hợp với điều ước quốc tế về đầu tư; đ) Đánh giá tác động của việc sửa đổi, bổ sung ngành, nghề đầu tư kinh doanh có điều kiện hoặc điều kiện đầu tư kinh doanh đối với công tác quản lý nhà nước và hoạt động đầu tư kinh doanh của các đối tượng phải tuân thủ.' - 'Kiểm định viên chính kỹ thuật an toàn lao động - Mã số: V.09.03.01 ... 2. Tiêu chuẩn về trình độ đào tạo, bồi dưỡng: a) Có bằng tốt nghiệp đại học trở lên thuộc các chuyên ngành kỹ thuật phù hợp với phạm vi thực hiện kiểm định; b) Có chứng chỉ bồi dưỡng chức danh nghề nghiệp viên chức chuyên ngành kiểm định kỹ thuật an toàn lao động hoặc chứng chỉ kiểm định viên kiểm định kỹ thuật an toàn lao động. 3. Tiêu chuẩn về năng lực chuyên môn, nghiệp vụ: a) Có năng lực chủ trì tổ chức, triển khai các hoạt động nghiệp vụ kiểm định kỹ thuật an toàn lao động và đề xuất giải pháp nâng cao hiệu quả triển khai thực hiện các hoạt động thuộc lĩnh vực kiểm định; b) Có năng lực tổ chức phối hợp với các tổ chức, cá nhân có liên quan khác trong quá trình thực hiện nhiệm vụ về hoạt động kiểm định kỹ thuật an toàn lao động; c) Có khả năng hướng dẫn nghiệp vụ về lĩnh vực kiểm định kỹ thuật an toàn lao động phù hợp với chuyên ngành được đào tạo; d) Đã chủ trì 01 nhiệm vụ khoa học và công nghệ cấp bộ, cấp tỉnh ở mức đạt trở lên liên quan đến lĩnh vực kiểm định kỹ thuật an toàn lao động hoặc tham gia ít nhất 02 nhiệm vụ khoa học và công nghệ cấp bộ, cấp tỉnh được nghiệm thu ở mức đạt trở lên liên quan đến lĩnh vực kiểm định; đ) Có khả năng ứng dụng công nghệ thông tin trong thực hiện các nhiệm vụ của kiểm định viên chính kỹ thuật an toàn lao động và có khả năng sử dụng ngoại ngữ trong một số nhiệm vụ cụ thể được giao. 4. Yêu cầu về thời gian công tác tối thiểu đối với viên chức dự thi hoặc xét thăng hạng chức danh Kiểm định viên chính kỹ thuật an toàn lao động: Có thời gian công tác giữ chức danh Kiểm định viên kỹ thuật an toàn lao động hoặc tương đương từ đủ 09 năm trở lên (không kể thời gian tập sự, thử việc). Trường hợp có thời gian giữ chức danh nghề nghiệp tương đương thì phải có ít nhất 01 năm (đủ 12 tháng) giữ chức danh Kiểm định viên kỹ thuật an toàn lao động tính đến ngày hết thời hạn nộp hồ sơ đăng ký dự thi hoặc xét thăng hạng.' model-index: - name: SentenceTransformer based on hiieu/halong_embedding results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 value: 0.45902051067392213 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.6586856425282545 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.7347844286312265 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.8211804102134784 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.45902051067392213 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2284637923817497 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.1542904981163667 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.08728338216827125 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.43661225059299563 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.6385516952699875 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.7145667643365425 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.8032440351611553 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.625266639818938 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.5765716876955832 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.5713466680083168 name: Cosine Map@100 --- # SentenceTransformer based on hiieu/halong_embedding This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [hiieu/halong_embedding](https://huggingface.co/hiieu/halong_embedding). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [hiieu/halong_embedding](https://huggingface.co/hiieu/halong_embedding) <!-- at revision 43172189e153507f65353ff084f18dce41697a2a --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("tranguyen/halong_embedding-legal-document-finetune") # Run inference sentences = [ 'Kiểm định viên chính kỹ thuật an toàn lao động phải đáp ứng những tiêu chuẩn gì về trình độ đào tạo?', 'Kiểm định viên chính kỹ thuật an toàn lao động - Mã số: V.09.03.01\n...\n2. Tiêu chuẩn về trình độ đào tạo, bồi dưỡng:\na) Có bằng tốt nghiệp đại học trở lên thuộc các chuyên ngành kỹ thuật phù hợp với phạm vi thực hiện kiểm định;\nb) Có chứng chỉ bồi dưỡng chức danh nghề nghiệp viên chức chuyên ngành kiểm định kỹ thuật an toàn lao động hoặc chứng chỉ kiểm định viên kiểm định kỹ thuật an toàn lao động.\n3. Tiêu chuẩn về năng lực chuyên môn, nghiệp vụ:\na) Có năng lực chủ trì tổ chức, triển khai các hoạt động nghiệp vụ kiểm định kỹ thuật an toàn lao động và đề xuất giải pháp nâng cao hiệu quả triển khai thực hiện các hoạt động thuộc lĩnh vực kiểm định;\nb) Có năng lực tổ chức phối hợp với các tổ chức, cá nhân có liên quan khác trong quá trình thực hiện nhiệm vụ về hoạt động kiểm định kỹ thuật an toàn lao động;\nc) Có khả năng hướng dẫn nghiệp vụ về lĩnh vực kiểm định kỹ thuật an toàn lao động phù hợp với chuyên ngành được đào tạo;\nd) Đã chủ trì 01 nhiệm vụ khoa học và công nghệ cấp bộ, cấp tỉnh ở mức đạt trở lên liên quan đến lĩnh vực kiểm định kỹ thuật an toàn lao động hoặc tham gia ít nhất 02 nhiệm vụ khoa học và công nghệ cấp bộ, cấp tỉnh được nghiệm thu ở mức đạt trở lên liên quan đến lĩnh vực kiểm định;\nđ) Có khả năng ứng dụng công nghệ thông tin trong thực hiện các nhiệm vụ của kiểm định viên chính kỹ thuật an toàn lao động và có khả năng sử dụng ngoại ngữ trong một số nhiệm vụ cụ thể được giao.\n4. Yêu cầu về thời gian công tác tối thiểu đối với viên chức dự thi hoặc xét thăng hạng chức danh Kiểm định viên chính kỹ thuật an toàn lao động: Có thời gian công tác giữ chức danh Kiểm định viên kỹ thuật an toàn lao động hoặc tương đương từ đủ 09 năm trở lên (không kể thời gian tập sự, thử việc). Trường hợp có thời gian giữ chức danh nghề nghiệp tương đương thì phải có ít nhất 01 năm (đủ 12 tháng) giữ chức danh Kiểm định viên kỹ thuật an toàn lao động tính đến ngày hết thời hạn nộp hồ sơ đăng ký dự thi hoặc xét thăng hạng.', 'Đề xuất sửa đổi, bổ sung ngành, nghề đầu tư kinh doanh có điều kiện và điều kiện đầu tư kinh doanh\n1. Căn cứ điều kiện phát triển kinh tế - xã hội, yêu cầu quản lý nhà nước trong từng thời kỳ và điều ước quốc tế về đầu tư, bộ, cơ quan ngang bộ trình Chính phủ đề xuất sửa đổi, bổ sung ngành, nghề đầu tư kinh doanh có điều kiện hoặc điều kiện đầu tư kinh doanh.\n2. Việc đề xuất sửa đổi, bổ sung ngành, nghề đầu tư kinh doanh có điều kiện hoặc điều kiện đầu tư kinh doanh được thực hiện trong Đề nghị xây dựng văn bản quy phạm pháp luật theo quy định của Luật Ban hành văn bản quy phạm pháp luật, trong đó có những nội dung sau đây:\na) Ngành, nghề đầu tư kinh doanh có điều kiện hoặc điều kiện đầu tư kinh doanh dự kiến sửa đổi, bổ sung;\nb) Phân tích sự cần thiết, mục đích của việc sửa đổi, bổ sung ngành, nghề đầu tư kinh doanh có điều kiện hoặc điều kiện đầu tư kinh doanh phù hợp với quy định tại khoản 1 Điều 7 Luật Đầu tư;\nc) Căn cứ sửa đổi, bổ sung ngành, nghề đầu tư kinh doanh có điều kiện hoặc điều kiện đầu tư kinh doanh và đối tượng phải tuân thủ;\nd) Đánh giá tính hợp lý, khả thi của việc sửa đổi, bổ sung ngành, nghề đầu tư kinh doanh có điều kiện hoặc điều kiện đầu tư kinh doanh và sự phù hợp với điều ước quốc tế về đầu tư;\nđ) Đánh giá tác động của việc sửa đổi, bổ sung ngành, nghề đầu tư kinh doanh có điều kiện hoặc điều kiện đầu tư kinh doanh đối với công tác quản lý nhà nước và hoạt động đầu tư kinh doanh của các đối tượng phải tuân thủ.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_128` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.459 | | cosine_accuracy@3 | 0.6587 | | cosine_accuracy@5 | 0.7348 | | cosine_accuracy@10 | 0.8212 | | cosine_precision@1 | 0.459 | | cosine_precision@3 | 0.2285 | | cosine_precision@5 | 0.1543 | | cosine_precision@10 | 0.0873 | | cosine_recall@1 | 0.4366 | | cosine_recall@3 | 0.6386 | | cosine_recall@5 | 0.7146 | | cosine_recall@10 | 0.8032 | | cosine_ndcg@10 | 0.6253 | | cosine_mrr@10 | 0.5766 | | **cosine_map@100** | **0.5713** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 119,717 training samples * Columns: <code>anchors</code> and <code>positives</code> * Approximate statistics based on the first 1000 samples: | | anchors | positives | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 24.31 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 257.87 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | anchors | positives | |:--------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Chính sách thôi việc ngay đối với cán bộ, công chức, viên chức khi thực hiện tinh giản biên chế như thế nào?</code> | <code>“7. Về chính sách thôi việc ngay: Thực hiện theo quy định tại khoản 4 Điều 1 Nghị định số 143/2020/NĐ-CP.”“Điều 1. Sửa đổi, bổ sung một số điều của Nghị định số 108/2014/NĐ-CP ngày 20 tháng 11 năm 2014 của Chính phủ về chính sách tinh giản biên chế và Nghị định số 113/2018/NĐ-CP ngày 31 tháng 8 năm 2018 của Chính phủ sửa đổi, bổ sung một số điều của Nghị định số 108/2014/NĐ-CP về chính sách tinh giản biên chế.<br>...<br>4. Sửa đổi, bổ sung khoản 1 Điều 10 Nghị định số 108/2014/NĐ-CP như sau:<br>"1. Chính sách thôi việc ngay<br>Những người thuộc đối tượng tinh giản biên chế quy định tại Điều 6 Nghị định này có tuổi tối đa thấp hơn 2 tuổi so với tuổi nghỉ hưu tối thiểu quy định tại khoản 3 Điều 169 Bộ luật Lao động và không đủ điều kiện để hưởng chính sách về hưu trước tuổi quy định tại khoản 1 Điều 8 Nghị định này hoặc có tuổi thấp hơn 2 tuổi so với tuổi nghỉ hưu quy định tại khoản 2 Điều 169 Bộ luật Lao động và không đủ điều kiện để hưởng chính sách về hưu trước tuổi quy định tại khoản 2 Điều 8 Nghị định này nếu thôi việc ngay thì được hưởng các khoản trợ cấp sau:<br>a) Được trợ cấp 03 tháng tiền lương hiện hưởng để tìm việc làm;<br>b) Được trợ cấp 1,5 tháng tiền lương cho mỗi năm công tác có đóng bảo hiểm xã hội.”</code> | | <code>Chính sách thôi việc ngay đối với cán bộ, công chức, viên chức khi thực hiện tinh giản biên chế như thế nào?</code> | <code>“7. Về chính sách thôi việc ngay: Thực hiện theo quy định tại khoản 4 Điều 1 Nghị định số 143/2020/NĐ-CP.”“Điều 1. Sửa đổi, bổ sung một số điều của Nghị định số 108/2014/NĐ-CP ngày 20 tháng 11 năm 2014 của Chính phủ về chính sách tinh giản biên chế và Nghị định số 113/2018/NĐ-CP ngày 31 tháng 8 năm 2018 của Chính phủ sửa đổi, bổ sung một số điều của Nghị định số 108/2014/NĐ-CP về chính sách tinh giản biên chế.<br>...<br>4. Sửa đổi, bổ sung khoản 1 Điều 10 Nghị định số 108/2014/NĐ-CP như sau:<br>"1. Chính sách thôi việc ngay<br>Những người thuộc đối tượng tinh giản biên chế quy định tại Điều 6 Nghị định này có tuổi tối đa thấp hơn 2 tuổi so với tuổi nghỉ hưu tối thiểu quy định tại khoản 3 Điều 169 Bộ luật Lao động và không đủ điều kiện để hưởng chính sách về hưu trước tuổi quy định tại khoản 1 Điều 8 Nghị định này hoặc có tuổi thấp hơn 2 tuổi so với tuổi nghỉ hưu quy định tại khoản 2 Điều 169 Bộ luật Lao động và không đủ điều kiện để hưởng chính sách về hưu trước tuổi quy định tại khoản 2 Điều 8 Nghị định này nếu thôi việc ngay thì được hưởng các khoản trợ cấp sau:<br>a) Được trợ cấp 03 tháng tiền lương hiện hưởng để tìm việc làm;<br>b) Được trợ cấp 1,5 tháng tiền lương cho mỗi năm công tác có đóng bảo hiểm xã hội.”</code> | | <code>Quy định về nhiệm vụ của Ban chỉ huy phòng, chống thiên tai và tìm kiếm cứu nạn cấp xã như thế nào?</code> | <code>Tổ chức, nhiệm vụ của Ban chỉ huy phòng, chống thiên tai và tìm kiếm cứu nạn cấp xã<br>...<br>4. Nhiệm vụ của Ban chỉ huy phòng, chống thiên tai và tìm kiếm cứu nạn cấp xã:<br>a) Tham mưu giúp Ủy ban nhân dân cấp xã thực hiện nhiệm vụ phòng, chống thiên tai theo quy định tại khoản 2 Điều 43 của Luật Phòng, chống thiên tai;<br>b) Thực hiện việc truyền phát tin chỉ đạo, chỉ huy ứng phó thiên tai của các cấp đến cộng đồng;<br>c) Chỉ huy ứng phó thiên tai, tìm kiếm cứu nạn trong thiên tai trong phạm vi cấp xã;<br>d) Chỉ đạo, đôn đốc việc xây dựng và phê duyệt kế hoạch, phương án ứng phó thiên tai của địa phương;<br>đ) Kiểm tra, đôn đốc tổ chức, cá nhân tại địa phương thực hiện nhiệm vụ phòng, chống thiên tai;<br>e) Chủ trì tham mưu giúp Ủy ban nhân dân xã thành lập, tổ chức đào tạo, tập huấn và duy trì lực lượng xung kích phòng chống thiên tai cấp xã với nòng cốt là lực lượng dân quân tự vệ và sự tham gia của Hội Chữ thập đỏ, đoàn thanh niên và các tổ chức đoàn thể khác tại địa phương;<br>g) Thực hiện các nội dung về Quỹ phòng, chống thiên tai theo quy định;<br>h) Tổ chức phổ biến, tuyên truyền nâng cao nhận thức cộng đồng về phòng chống thiên tai hàng năm.</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128 ], "matryoshka_weights": [ 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 4 - `per_device_eval_batch_size`: 4 - `gradient_accumulation_steps`: 4 - `learning_rate`: 3e-05 - `num_train_epochs`: 5 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `resume_from_checkpoint`: halong_embedding-legal-document-finetune/checkpoint-32308 - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 4 - `per_device_eval_batch_size`: 4 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 4 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 3e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: halong_embedding-legal-document-finetune/checkpoint-32308 - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | dim_128_cosine_map@100 | |:----------:|:---------:|:-------------:|:----------------------:| | 4.3707 | 32702 | 0.0005 | 0.5715 | | **4.4234** | **33096** | **0.0003** | **0.5718** | | 4.4760 | 33490 | 0.0003 | 0.5720 | | 4.5287 | 33884 | 0.0012 | 0.5722 | | 4.5814 | 34278 | 0.0002 | 0.5714 | | 4.6340 | 34672 | 0.0004 | 0.5714 | | 4.6867 | 35066 | 0.0003 | 0.5715 | | 4.7393 | 35460 | 0.001 | 0.5715 | | 4.7920 | 35854 | 0.0002 | 0.5718 | | 4.8446 | 36248 | 0.0003 | 0.5716 | | 4.8973 | 36642 | 0.0018 | 0.5716 | | 4.9499 | 37036 | 0.001 | 0.5713 | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.1.1 - Transformers: 4.45.2 - PyTorch: 2.5.0+cu124 - Accelerate: 1.0.1 - Datasets: 3.0.2 - Tokenizers: 0.20.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION" ]
[ "CHIA" ]
Non_BioNLP
zbrunner/hallucination_uniqueunique
zbrunner
automatic-speech-recognition
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:tedlium3", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
1,726
1,726
1
0
--- datasets: - tedlium3 language: en license: cc-by-4.0 tags: - espnet - audio - automatic-speech-recognition --- ## ESPnet2 ASR model ### `zbrunner/hallucination_uniqueunique` This model was trained by zbrunner using tedlium3 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout 06693f5abd8cc8c8a34d92cf655b81be8ca27db0 pip install -e . cd egs2/tedlium3/asr1.10_enc6_dec6_att8_lr0.002_heldback_uniqueunique ./run.sh --skip_data_prep false --skip_train true --download_model zbrunner/hallucination_uniqueunique ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Wed Aug 21 02:08:14 BST 2024` - python version: `3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0]` - espnet version: `espnet 202402` - pytorch version: `pytorch 1.13.1+cu116` - Git hash: `06693f5abd8cc8c8a34d92cf655b81be8ca27db0` - Commit date: `Thu Jun 27 19:07:30 2024 +0100` ## exp/asr_train_raw_en_bpe10000_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_model_valid.acc.ave/test|1155|27500|91.0|6.2|2.8|3.8|12.8|86.5| |decode_asr_model_valid.acc.ave/test_hb|1695|28708|90.1|5.7|4.3|2.6|12.6|76.5| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_10snr_varsnr_reverb|855|28708|55.9|27.6|16.5|5.8|49.9|99.9| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_2snr_silnoise|855|28708|88.2|7.1|4.7|5.1|16.9|99.9| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_5snr|855|28708|78.5|14.4|7.1|7.3|28.9|99.9| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_5snr_reverb|855|28708|36.4|28.4|35.2|4.7|68.3|100.0| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_5snr_silnoise|855|28708|88.3|7.0|4.7|5.2|16.9|99.9| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_5snr_silnoise_reverb|855|28699|68.6|21.6|9.8|6.7|38.1|99.8| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_5snr_varsnr|855|28708|77.9|14.6|7.6|6.7|28.8|99.8| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_5snr_varsnr_reverb|855|28708|34.8|27.3|37.9|4.7|70.0|100.0| |decode_asr_model_valid.acc.ave/test_hb_multi2_3s_10snr_varsnr_reverb|855|28708|46.3|24.9|28.8|6.0|59.7|100.0| |decode_asr_model_valid.acc.ave/test_hb_multi2_3s_nonoise|855|28708|87.1|7.2|5.6|5.8|18.6|99.5| |decode_asr_model_valid.acc.ave/test_hb_multi2_nonoise|855|28708|89.1|6.5|4.5|5.2|16.1|99.6| |decode_asr_model_valid.acc.ave/test_hb_multi2_nonoise_reverb|855|28708|74.4|18.9|6.7|6.7|32.3|100.0| |decode_asr_model_valid.acc.ave/test_hb_nocleaner|1695|28699|90.0|5.7|4.3|2.6|12.6|76.5| |decode_asr_model_valid.acc.ave/test_nocleaner|1155|27500|91.0|6.3|2.8|3.8|12.8|86.5| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_model_valid.acc.ave/test|1155|145066|95.6|1.6|2.8|3.1|7.5|86.5| |decode_asr_model_valid.acc.ave/test_hb|1695|150646|94.7|1.5|3.8|2.3|7.6|76.5| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_10snr_varsnr_reverb|855|151486|71.0|10.4|18.6|5.8|34.8|99.9| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_2snr_silnoise|855|151486|93.6|2.0|4.4|5.9|12.3|99.9| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_5snr|855|151486|87.6|4.9|7.5|6.4|18.8|99.9| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_5snr_reverb|855|151486|51.1|10.4|38.5|4.5|53.5|100.0| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_5snr_silnoise|855|151486|93.7|2.0|4.4|6.2|12.6|99.9| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_5snr_silnoise_reverb|855|154052|80.4|7.8|11.8|6.7|26.4|99.8| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_5snr_varsnr|855|151486|87.1|5.0|7.9|5.9|18.8|99.8| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_5snr_varsnr_reverb|855|151486|48.7|10.1|41.1|4.4|55.6|100.0| |decode_asr_model_valid.acc.ave/test_hb_multi2_3s_10snr_varsnr_reverb|855|151486|59.6|9.1|31.3|5.4|45.9|100.0| |decode_asr_model_valid.acc.ave/test_hb_multi2_3s_nonoise|855|151486|92.2|2.0|5.8|4.7|12.6|99.5| |decode_asr_model_valid.acc.ave/test_hb_multi2_nonoise|855|151486|94.0|1.8|4.2|4.3|10.3|99.6| |decode_asr_model_valid.acc.ave/test_hb_multi2_nonoise_reverb|855|151486|85.9|6.7|7.5|6.0|20.2|100.0| |decode_asr_model_valid.acc.ave/test_hb_nocleaner|1695|153212|93.9|1.5|4.6|2.7|8.8|76.5| |decode_asr_model_valid.acc.ave/test_nocleaner|1155|145066|95.4|1.7|2.8|4.2|8.8|86.5| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_model_valid.acc.ave/test|1155|108605|94.9|2.2|2.9|3.2|8.2|86.5| |decode_asr_model_valid.acc.ave/test_hb|1695|113174|94.1|2.0|4.0|2.4|8.3|76.5| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_10snr_varsnr_reverb|855|113174|69.1|12.4|18.5|5.6|36.5|99.9| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_2snr_silnoise|855|113174|93.0|2.5|4.5|6.5|13.5|99.9| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_5snr|855|113174|86.4|5.9|7.8|6.4|20.1|99.9| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_5snr_reverb|855|113174|49.1|12.5|38.4|4.2|55.0|100.0| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_5snr_silnoise|855|113174|93.1|2.5|4.4|7.0|13.9|99.9| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_5snr_silnoise_reverb|855|33855|64.9|20.4|14.7|7.5|42.5|99.8| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_5snr_varsnr|855|113174|85.9|6.0|8.2|6.0|20.1|99.8| |decode_asr_model_valid.acc.ave/test_hb_multi2_2s_5snr_varsnr_reverb|855|113174|46.9|12.0|41.0|4.1|57.1|100.0| |decode_asr_model_valid.acc.ave/test_hb_multi2_3s_10snr_varsnr_reverb|855|113174|57.9|11.0|31.1|5.2|47.3|100.0| |decode_asr_model_valid.acc.ave/test_hb_multi2_3s_nonoise|855|113174|91.5|2.7|5.8|4.8|13.3|99.5| |decode_asr_model_valid.acc.ave/test_hb_multi2_nonoise|855|113174|93.4|2.3|4.3|4.4|11.0|99.6| |decode_asr_model_valid.acc.ave/test_hb_multi2_nonoise_reverb|855|113174|84.3|8.2|7.4|6.2|21.8|100.0| |decode_asr_model_valid.acc.ave/test_hb_nocleaner|1695|33855|86.4|5.2|8.4|3.1|16.7|76.5| |decode_asr_model_valid.acc.ave/test_nocleaner|1155|31518|88.7|6.3|4.9|5.7|16.9|86.5| ## ASR config <details><summary>expand</summary> ``` config: conf/train.yaml print_config: false log_level: INFO drop_last_iter: false dry_run: false iterator_type: sequence valid_iterator_type: null output_dir: exp/asr_train_raw_en_bpe10000_sp ngpu: 1 seed: 2022 num_workers: 2 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 2 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 58461 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 50 patience: 5 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 3 no_forward_run: false resume: true train_dtype: float32 use_amp: true log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false use_adapter: false adapter: lora save_strategy: all adapter_conf: {} pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 18000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_en_bpe10000_sp/train/speech_shape - exp/asr_stats_raw_en_bpe10000_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_en_bpe10000_sp/valid/speech_shape - exp/asr_stats_raw_en_bpe10000_sp/valid/text_shape.bpe batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending shuffle_within_batch: false sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 chunk_excluded_key_prefixes: [] chunk_default_fs: null chunk_max_abs_length: null chunk_discard_short_samples: true train_data_path_and_name_and_type: - - dump/raw/train_heldback_UU_sp/wav.scp - speech - kaldi_ark - - dump/raw/train_heldback_UU_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - kaldi_ark - - dump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 allow_multi_rates: false valid_max_cache_size: null exclude_weight_decay: false exclude_weight_decay_conf: {} optim: adam optim_conf: lr: 0.001 weight_decay: 1.0e-06 scheduler: warmuplr scheduler_conf: warmup_steps: 10000 token_list: - <blank> - <unk> - '[unk]' - ▁ - ▁the - ▁and - s - ▁to - ▁of - '''' - ▁a - ▁that - ▁i - ▁in - ▁it - ▁we - ▁you - ▁is - ▁this - ▁so - ▁they - t - ▁was - ▁for - ▁are - ▁have - ▁but - ▁on - ▁what - ▁with - ▁can - ▁one - re - ▁be - ▁about - ▁there - ▁not - ▁all - ▁at - ▁do - ▁my - ▁as - ▁people - ▁like - ▁if - ▁from - ▁our - ing - ▁or - ▁an - ▁he - ▁these - ▁just - d - ▁when - ▁now - ▁because - m - ▁me - ▁out - ed - ▁by - ▁how - ▁very - ▁up - ▁more - ▁had - ▁them - ▁know - ▁going - ▁who - ▁their - ▁think - ve - ▁see - ▁your - ▁were - ▁would - ▁which - ▁get - ▁two - ▁really - ▁us - ▁time - ▁here - ▁world - ▁then - ▁some - ▁has - ▁don - ▁into - ▁way - ▁where - ▁actually - ▁will - ▁other - ▁could - ▁years - ▁things - ▁go - ▁make - ▁want - ▁been - ▁no - ▁she - ▁those - ▁right - ▁first - ▁well - ▁something - ▁thousand - ▁than - ▁hundred - ly - ▁new - ▁over - ▁also - ▁look - ▁thing - ▁even - ▁said - ▁most - ▁back - ▁much - ▁work - ▁little - ▁his - ▁only - ▁life - ▁got - ▁many - ▁need - ▁take - y - ▁say - ▁three - ▁lot - ll - ▁her - ▁did - ▁kind - ▁every - ▁around - ▁good - ▁different - ▁why - e - ▁down - ▁let - ▁through - er - ▁being - ▁same - ▁come - ▁five - ▁day - ▁use - ▁put - ▁year - ▁doing - n - ▁human - ▁any - ▁called - r - ▁after - ▁made - ▁percent - ▁tell - ▁today - ▁change - ▁find - ▁four - ▁fact - ▁didn - ▁talk - ▁own - ▁great - ▁idea - ▁point - ▁last - ▁before - ▁started - ▁another - ▁never - ▁might - ▁give - ▁should - ▁big - ▁better - ▁thought - al - es - ▁twenty - ▁system - ▁part - ▁important - ▁went - ▁still - ▁problem - ▁start - ▁off - ▁each - ▁together - ▁brain - ▁next - ▁ten - ▁women - ▁able - ▁him - ▁show - ▁long - ▁came - ▁place - ▁course - ▁few - ▁ago - ▁does - ▁again - ▁story - ▁bit - ▁water - ▁found - ▁used - ▁between - ▁data - ▁technology - c - ▁question - ▁end - ▁too - ▁love - ▁maybe - ▁school - ▁example - ▁mean - ▁nine - ▁understand - ▁live - ▁old - ▁wanted - ▁doesn - ▁looking - ▁may - ▁call - ▁help - ▁person - ▁children - ▁real - ▁done - ▁believe - ▁feel - ▁ever - ▁whole - ▁six - ▁always - ▁sort - ▁million - ▁trying - ▁working - ▁country - ▁away - ▁everything - ▁try - ▁number - ▁power - ▁home - ▁second - ▁using - o - ▁space - ▁fifty - ▁money - ▁information - ▁design - ▁thinking - ▁create - ▁become - ▁took - ▁man - ▁small - ▁means - ▁high - ▁kids - ▁social - ers - ▁light - ▁enough - ▁best - ▁left - ▁thirty - ▁sense - ▁making - ▁future - ▁ask - ▁seven - ▁car - ▁without - ▁getting - ▁city - ▁probably - ▁hard - ▁science - ▁eight - ▁food - ▁times - p - ▁less - ▁building - ▁body - ▁quite - a - ▁family - ▁told - ▁talking - ▁happened - ▁build - ▁energy - ▁half - en - ▁health - ▁hand - ▁lives - ▁earth - ▁countries - ▁imagine - ▁war - ▁care - ▁moment - ▁pretty - ▁across - ▁comes - ▁interesting - ▁stuff - th - ▁such - ▁while - ▁experience - ▁men - ▁anything - le - ▁thank - ▁learn - i - ▁side - ▁play - ▁am - ▁under - ▁saw - or - ▁young - ▁having - ▁dollars - ▁far - ▁coming - ▁room - ▁open - ▁happen - ▁project - ▁asked - ▁remember - ▁later - ▁reason - ▁once - ▁living - ▁case - ▁un - ▁computer - ▁mind - ▁yet - ▁global - ic - ▁simple - ▁seen - ▁almost - ▁bad - ▁single - ▁public - ▁process - ▁else - ▁move - l - ▁inside - ▁often - ▁nothing - ▁both - k - ▁community - ▁matter - ▁someone - ▁picture - ▁states - ▁already - ▁planet - ▁days - ▁set - b - ▁happens - u - ▁whether - ▁africa - g - ▁face - ▁answer - ▁goes - ▁keep - able - ▁within - ▁wrong - ▁order - ▁billion - ▁instead - ▁history - ▁business - ation - ▁problems - ▁myself - ▁possible - ▁looked - ▁cancer - 'on' - ▁government - ▁job - ▁sometimes - ▁sure - ▁control - ▁group - ▁ways - in - ▁saying - ▁hope - ▁top - ▁months - ▁child - ▁basically - ▁makes - ▁book - ▁bring - ▁research - ▁united - f - ▁couple - ▁read - ▁night - ▁since - ▁until - ▁per - ▁forty - ▁guy - ▁head - ▁black - ity - ▁everybody - ▁age - ▁past - ▁true - ▁de - ▁line - ▁woman - ▁run - ▁share - ▁state - ▁cells - ▁turn - ▁says - ▁words - ▁stop - ▁wasn - ▁amazing - ▁learned - ▁heard - ▁knew - ▁model - ▁built - an - ▁middle - ▁ideas - ▁though - ▁decided - ▁stories - ▁nature - ▁friends - ▁mother - ▁house - ▁america - ▁study - ▁beautiful - ▁became - ▁internet - ▁video - ▁piece - ▁education - ▁art - ▁level - ▁language - ▁word - ▁heart - ▁form - ▁everyone - ▁third - ▁somebody - ▁taking - ch - ▁society - ▁species - ▁against - ness - ▁exactly - ▁sound - ▁completely - ▁huge - ion - ▁ourselves - ▁music - ▁happening - ▁disease - ▁hear - ▁must - ▁company - ▁isn - ▁large - ▁looks - ▁places - ▁ninety - ▁themselves - ra - ▁air - ▁rather - ▁front - ar - ▁turns - ▁couldn - ne - ▁universe - ▁students - ▁god - ▁name - ▁itself - ▁american - ▁c - ▁created - ▁hours - ▁team - ▁environment - el - ▁questions - ▁u - ▁close - ▁animals - ▁worked - ▁learning - ▁entire - ▁early - ▁during - ▁outside - ▁yes - ▁figure - ▁free - ▁least - ▁machine - ▁cell - ▁perhaps - ▁game - ro - ce - ▁sixty - ▁walk - ▁along - ▁minutes - ▁okay - ▁media - ▁common - ▁fifteen - ▁scale - ate - ▁eighty - v - ▁particular - ▁works - ▁natural - ▁ted - ▁gave - ▁based - ▁won - ▁laughter - ▁cities - ▁finally - ▁size - ▁full - ▁culture - ▁image - ▁given - ting - ▁local - ▁cost - ▁behind - ▁view - ▁land - ▁taken - ▁area - ▁white - ▁difference - ▁changed - lo - ▁ok - ▁systems - ▁easy - ▁difficult - us - li - ▁kinds - ▁happy - ▁felt - w - ri - ▁population - ▁companies - ▁co - ▁father - ▁india - te - ▁leave - ▁center - ▁needed - po - ▁began - ▁poor - ▁deal - ▁economic - ▁yourself - ▁turned - il - ▁spend - ▁reality - ▁seeing - ▁national - ▁news - ▁political - ▁death - ▁realized - ▁china - est - ▁york - ▁powerful - ▁red - ▁test - ▁beginning - ▁simply - ▁eyes - ▁market - ▁parents - ▁pay - ▁week - ▁ocean - ▁century - ▁moving - ur - ▁ca - ter - ▁grow - ▁cannot - ▁thousands - ▁rest - ▁g - ▁others - ▁phone - ▁guys - ▁step - ▁known - ▁certain - ▁speak - ▁terms - ▁surface - ▁longer - ▁access - ▁fear - ▁climate - ▁sea - ▁paper - co - it - ▁green - ▁whatever - ▁amount - ment - ▁spent - ▁wonderful - ▁value - ▁buy - ▁hands - ▁self - ▁field - ▁tried - ▁lost - ▁risk - rs - ▁seventy - man - ▁oil - ▁blood - ▁ground - ▁support - ▁die - ▁quickly - ▁patients - ▁ability - ▁lots - ▁gets - ▁friend - ▁sun - ▁challenge - ▁deep - ▁opportunity - ▁write - ▁street - ▁short - less - ▁either - ▁growing - la - ▁stage - ▁lab - ▁key - ▁weeks - ▁interested - ▁blue - ▁fish - ▁individual - ▁south - ▁takes - ▁sitting - ▁parts - ▁gone - ▁medical - ▁complex - sh - ▁law - ▁oh - ▁growth - ▁act - ▁needs - ▁development - ▁dna - ▁watch - ▁program - ▁behavior - h - ▁humans - ▁scientists - ▁cut - ▁audience - ▁network - ling - ▁wouldn - ta - ▁personal - z - ▁feeling - ism - ▁born - ▁experiment - de - ▁average - ▁clear - tic - ist - ▁girl - ▁material - ▁attention - ▁met - ▁result - ▁morning - ▁structure - ▁low - ▁understanding - ted - ▁physical - ▁twelve - ▁realize - ▁issue - ▁incredible - ▁brought - ▁literally - ▁map - ▁girls - ent - ▁anyone - ▁voice - ry - ia - ▁economy - ▁knowledge - ▁solve - ▁impact - ci - ba - ▁movement - ▁alone - ▁animal - ma - ▁giving - ▁cool - ant - ▁force - ▁telling - ▁color - ▁truth - ▁areas - ▁eat - ▁bottom - ▁special - ▁pick - ▁kid - ▁hit - ▁sex - ▁class - ▁starting - ▁likely - ▁feet - ▁meet - ▁b - ▁numbers - ▁hold - ▁fire - ▁film - ▁developed - ies - ▁term - ▁tools - ▁dark - me - ▁changes - ▁seems - ▁shows - ▁forward - ive - ▁zero - ▁north - ▁innovation - ▁images - ▁type - ▁focus - ▁wrote - ▁allow - ▁save - ▁baby - ▁present - ▁k - ▁guess - ▁normal - ▁industry - ▁trust - ▁absolutely - ▁con - ▁yeah - ▁eye - ▁millions - ▁especially - na - ▁running - x - ▁fun - ▁digital - ▁changing - ▁books - ▁technologies - ▁chance - ▁tiny - ▁send - ▁dream - ▁generation - ▁creating - ▁online - ▁develop - ▁code - ▁office - ▁discovered - ▁modern - ▁sit - ▁stay - ul - ▁worth - ▁rate - ▁period - ▁explain - ▁nice - ▁nobody - ▁carbon - ▁choice - ▁eleven - ▁designed - ▁recently - ▁communities - ▁asking - ▁patient - ▁major - ▁miles - ▁wall - ▁europe - ac - ▁relationship - ▁university - ie - ▁begin - ▁cars - ▁similar - ▁stand - ▁table - ▁dead - ▁fast - ▁solution - ▁soon - ▁genetic - ized - ▁role - ge - ▁seem - ▁talked - et - age - ▁revolution - ▁beyond - ▁fly - ▁violence - ▁showed - ▁playing - ▁situation - ▁plan - ▁drive - ▁several - ▁developing - ▁product - ▁evidence - ▁international - ▁measure - ▁basic - ▁month - ▁ice - ▁lived - ga - um - ▁incredibly - ▁produce - ▁journey - ▁theory - ▁issues - ▁box - ▁hospital - ▁general - ▁college - ▁medicine - ▁resources - ▁drug - ▁star - ▁shape - ▁robot - ▁towards - ▁hour - ▁teach - ▁speed - ▁fight - ▁google - ▁mine - ol - ▁english - ▁cause - at - ous - ▁break - ally - ▁reasons - ▁bigger - ck - ▁effect - ▁mo - 'no' - ▁putting - ▁solar - as - ▁hundreds - op - ▁listen - ▁police - ▁chinese - ▁available - se - ▁vision - ▁approach - to - ▁groups - ▁eventually - ▁object - ian - ▁pre - ▁security - ▁list - ▁success - ▁lose - ▁source - ▁haven - ▁follow - ▁perfect - ▁involved - ▁organization - ▁reach - ful - ▁writing - ▁mom - ▁web - ated - ▁anybody - ▁computers - ▁protect - ▁plant - ▁message - ▁add - ▁drugs - ▁favorite - ▁screen - ha - ▁road - ▁died - ▁conversation - ▁safe - ▁clean - ▁lead - ▁action - ▁device - is - ▁tend - ▁gives - ▁mass - ▁led - ▁son - ▁notice - ▁potential - pe - un - ▁walking - ty - ▁choose - va - ca - ▁biggest - ▁families - ru - ▁fall - ▁evolution - ▁quality - ▁obviously - ▁sounds - ▁skin - ▁scientific - ▁camera - ▁certainly - ance - ▁west - mp - ▁finding - ▁higher - ▁door - ▁pictures - ▁particularly - ▁doctor - ▁east - ▁post - ▁poverty - ▁perspective - vi - ▁totally - ▁wind - ▁consider - bo - ▁showing - ▁travel - mo - ▁becomes - ▁boy - ▁p - ▁positive - ▁software - ▁jobs - ▁student - ▁onto - ▁among - ▁slow - ▁movie - ▁creative - ▁strong - ▁moved - ▁spread - ▁fit - ▁aren - ▁pain - ▁provide - ▁supposed - ▁crazy - ▁mars - ▁sleep - ▁rules - ▁path - ▁smart - ▁continue - ▁recognize - ▁leaders - ▁further - ▁largest - ▁fundamental - id - ine - ▁train - ▁context - ▁watching - ▁democracy - ▁response - ▁win - ▁including - ▁grew - ▁worse - ▁shown - ▁pro - ▁treatment - ▁tool - ▁ma - ▁connected - ▁allowed - ▁nuclear - ▁race - ▁tree - ▁everywhere - ▁please - ▁bank - ▁kill - ▁ready - ▁suddenly - ▁african - ke - ▁la - ▁meaning - ▁goal - ▁wait - ▁main - ▁gas - ni - nt - ▁progress - ▁minute - ▁usually - ▁wish - ating - ▁village - ▁standing - ▁plants - ▁individuals - bi - ▁essentially - ▁training - ▁named - ▁decision - ir - ▁expect - ▁moral - ▁taught - ▁increase - ▁rich - ▁hot - ▁secret - ▁income - ▁forest - ▁wife - ▁sent - ▁late - ▁teacher - ▁exciting - ▁intelligence - ▁memory - ▁service - ▁address - ▁river - ▁non - ▁connect - ▁cold - ▁visual - ▁pattern - ▁lower - pi - ▁rise - ▁compassion - ▁square - ▁shot - ▁indeed - ▁meant - ▁schools - ▁teachers - ize - ▁faster - go - ▁successful - ▁career - ▁objects - ▁f - ph - ▁mental - line - ▁doctors - ▁trees - ny - ▁president - ▁famous - ▁crisis - ▁trade - ▁extremely - ▁wonder - ▁arm - ▁range - ▁extraordinary - ▁results - ▁treat - ▁version - ▁track - ▁dance - ut - ▁record - ▁fine - ▁bunch - ▁waste - ▁peace - ▁machines - ▁killed - ▁kept - di - ▁interest - ▁pull - ▁somehow - ▁somewhere - ▁materials - ▁park - ▁serious - mi - ▁bodies - ▁freedom - ▁board - ▁reading - ▁position - ▁floor - ▁site - up - ▁tv - ▁search - ▁examples - ▁humanity - ▁conditions - be - ▁clearly - ▁loved - ▁games - ▁military - ▁plastic - ▁excited - ▁specific - ▁san - ▁robots - ▁older - ▁content - ton - ▁ended - ▁text - ▁truly - ▁financial - ▁check - ▁raise - ▁bar - son - ▁starts - ary - ▁exist - ▁allows - ▁larger - ▁impossible - ▁pressure - ▁practice - ▁complicated - ▁traditional - ▁projects - ▁ran - ▁super - ▁sell - ▁buildings - ▁above - am - ▁drop - ▁eighteen - ▁activity - ▁sky - ▁models - ▁bill - ▁concept - ▁bed - ▁purpose - ▁sign - ▁genes - ▁current - ▁private - ▁studies - ho - ▁decisions - ▁institutions - ▁website - ▁town - ▁patterns - ▁however - ▁skills - ▁cases - ▁central - ▁happiness - ▁architecture - ▁anyway - ▁healthy - ▁products - ▁although - ▁sub - ▁except - ▁conflict - ▁page - ▁dangerous - ti - ▁feed - ▁strange - ▁series - ▁biology - ▁greatest - ▁bacteria - ▁babies - ▁written - ▁prison - ▁block - ▁performance - ▁lines - ▁upon - ▁female - ▁forget - ▁decide - ▁decades - ▁foot - ▁anymore - ▁extra - ▁rights - ▁policy - ▁region - ▁pieces - ▁environmental - ▁listening - ▁americans - bu - ▁touch - ▁alive - ▁becoming - ▁rule - ▁draw - ▁waiting - wa - ▁urban - ▁throughout - ▁earlier - ▁forms - ▁capital - ▁governments - fa - ▁subject - ▁actual - pl - ▁heat - ▁final - ▁fourteen - ▁19 - ▁email - ▁workers - ▁shared - ▁politics - ▁electricity - ▁attack - ▁citizens - ▁anywhere - ▁expensive - ▁museum - ver - ▁ex - ▁critical - ▁improve - ▁smaller - ig - ▁dr - ▁dis - ▁unique - ▁weren - ▁o - ▁solutions - ▁western - ▁store - ▁speaking - ▁count - ▁gene - ster - ▁press - ▁birth - ▁driving - ▁temperature - ▁j - cy - ▁daughter - ▁near - pa - ▁challenges - ▁fuel - ▁price - tor - ▁male - ▁mission - ▁weight - ▁communicate - ▁moon - ▁push - ▁engineering - ▁services - izing - ▁math - ▁predict - ▁instance - ▁therefore - ▁helped - ▁effective - ▁walked - ▁brains - ▁sand - ▁corner - ▁greater - do - ▁physics - ▁brand - ▁teaching - ▁decade - ▁cultural - ▁meeting - ▁cover - hi - ▁stars - ▁colleagues - ring - ▁party - ▁mobile - ▁immediately - ▁sat - ▁none - ▁fantastic - ▁genome - ▁rock - ▁direction - ▁speech - ▁hair - ▁massive - ▁summer - ah - ▁inter - ▁blind - ▁himself - ▁tells - ▁terrible - ▁survive - ▁choices - '2' - ▁unfortunately - ▁flow - ▁recent - ▁types - ▁religion - ▁respect - ▁aid - ▁beauty - ▁stopped - ▁justice - ▁values - ▁fix - ▁forth - ▁sorts - ▁familiar - ▁character - ▁foundation - ▁despite - ▁atmosphere - ▁dog - ▁diseases - ▁ka - ▁relationships - ▁neurons - les - ▁sixteen - ▁condition - ard - ▁mountain - ▁return - ▁shift - ▁mostly - ▁levels - ▁bridge - ▁join - ▁popular - ization - ▁connection - ▁dad - ▁whose - per - ▁radio - ▁useful - ▁obvious - ▁nation - ▁na - ▁hole - ▁worst - ▁hiv - ▁production - ▁describe - da - ▁photograph - ▁civil - ▁lack - ▁reduce - ▁sick - ▁organizations - ▁glass - ▁television - ▁understood - ▁court - ▁da - ▁nearly - ▁costs - ▁agree - ad - ▁following - ▁complete - ▁magic - ▁london - ▁professor - ▁distance - ▁conference - ▁bomb - ler - ▁consciousness - ▁function - ▁directly - ▁wealth - ▁identity - ▁switch - ▁investment - land - ists - ▁campaign - ▁straight - ▁flying - ▁various - ▁industrial - ding - min - ▁photo - ▁biological - ▁standard - oc - ▁struggle - ▁landscape - ▁infrastructure - ▁john - ▁apart - ▁trouble - ▁ball - ▁streets - ▁ahead - ▁quick - ow - ▁bo - ▁stress - ▁minds - ▁laws - ▁gender - ▁passion - ▁societies - ▁ho - ▁willing - ish - ▁remarkable - ▁dying - ▁explore - ▁communication - led - ▁prevent - ▁boat - ▁discover - ber - ▁enormous - tra - ▁interact - ▁total - ▁paid - ▁emotional - ▁possibly - os - ▁afraid - om - ▁virus - ▁indian - ▁scientist - ▁legal - ▁below - ki - ▁broken - he - ▁avoid - ▁vast - ▁points - ▁visit - ▁cross - ▁core - way - ▁afford - ▁supply - ▁highly - ▁report - ▁boys - der - ▁facebook - ▁slide - ▁wild - ▁noticed - men - sa - ▁respond - ▁extreme - ▁fashion - ▁surgery - ▁responsibility - lin - ▁oxygen - ▁seemed - ▁farm - ▁knowing - ▁programs - ▁capacity - ▁task - ▁20 - ▁california - ▁artist - ▁cycle - ▁sector - ▁worry - ▁di - ▁causes - ▁desire - ▁throw - ▁ro - ▁published - ▁achieve - ▁equal - ▁x - ▁aware - ▁ancient - ▁differently - ▁effects - ▁invented - ▁pass - ▁match - ▁vi - ▁easily - ▁opposite - mu - ▁ha - ▁spot - ▁grade - ▁commercial - ▁members - ▁brother - ▁song - ▁gotten - ▁inspired - ka - ped - ▁creativity - ▁sequence - ▁giant - ▁trip - ▁played - ▁vote - ▁afghanistan - ▁definition - ▁thirteen - ▁ultimately - ▁planets - ors - ▁experiences - ▁bear - ble - q - ▁closer - ▁significant - ▁original - ▁king - ▁degree - ▁dollar - ▁held - ▁missing - ▁smell - ▁engage - ▁benefit - ▁window - ping - im - har - ▁entirely - ▁legs - ▁neighborhood - ▁chemical - ▁bought - ▁classroom - ical - ▁husband - ▁basis - ▁easier - ▁sustainable - ▁evolved - ▁stick - ▁eating - ▁depression - ▁twice - ron - ▁labor - ▁diversity - ▁slightly - ▁apply - ▁imagination - ▁sharing - ▁european - ▁apple - ▁produced - ▁event - ▁networks - ▁helping - ▁negative - ▁designers - ▁forces - ▁app - ▁mouth - ▁platform - ▁offer - ial - ▁nations - ▁account - ▁tremendous - ▁leadership - ▁background - ▁tough - ▁direct - out - ▁island - ▁interview - ▁collect - ▁hearing - ▁focused - ▁depend - ▁balance - ▁noise - dy - ▁experiments - ▁carry - ng - ▁engine - ▁magazine - ▁suffering - ▁effort - ▁molecules - ▁em - ▁notion - ner - ▁coffee - ▁influence - ▁separate - ▁transform - ▁catch - ▁studying - ▁weird - ▁funny - ▁gold - ▁target - ▁beings - ▁devices - ▁enjoy - ▁invest - ▁religious - que - ▁manage - ▁fair - ▁reflect - ▁stem - ja - ▁bright - ▁leg - wi - ▁professional - ▁rain - ▁failure - ▁gain - ▁date - ▁hate - ▁seventeen - ▁necessarily - ▁passed - ▁loss - ▁yellow - ▁lucky - ▁importantly - ▁degrees - ▁flu - ▁turning - ▁related - ▁opportunities - ▁motor - ▁particles - ▁demand - ▁feels - ▁stuck - ▁flight - ▁bone - lu - ▁leading - ▁surprise - ▁beat - ▁letter - ▁criminal - ▁grand - ▁remind - ▁hurt - ▁raised - ▁cloud - ▁sister - ▁majority - ▁bringing - ▁deliver - ▁due - ▁jump - ▁active - ▁universal - ▁dozen - ▁mark - ▁curve - ▁wear - ▁compared - ▁attempt - ▁foreign - ▁roll - ▁birds - gen - ▁bio - ▁fail - ▁lo - ▁tissue - ▁matters - ▁ta - ▁virtual - ▁collective - ▁answers - ▁twitter - ▁technical - ▁motion - ▁emotions - ▁spending - ▁benefits - ▁sexual - ▁du - ▁strategy - ▁desert - ▁wants - ▁contact - ▁fully - ▁era - ▁wide - ▁faith - ▁civilization - ▁childhood - ions - ▁complexity - ▁crime - ap - ang - ze - ▁farmers - ▁deeply - ▁markets - ▁edge - ▁traffic - ▁otherwise - ▁debate - ▁seconds - ▁differences - ▁argue - ▁partner - ▁possibility - ▁micro - ▁3 - ▁prize - nd - gg - ▁identify - ot - ▁sp - ▁efficient - ▁sold - ▁marriage - ▁advantage - ▁fi - nce - ▁forever - tan - ▁random - ▁base - ▁department - ▁painting - ▁accident - ▁responsible - ▁trained - ▁expression - ▁express - ▁meters - ▁double - gi - ▁card - ▁dreams - ▁sorry - ▁gay - side - ▁fighting - ▁metal - hy - ▁lights - ▁serve - ▁survey - ▁gift - ▁sum - ▁adults - ▁lu - uc - ▁opened - ▁net - ▁exercise - ▁proud - ▁represent - ious - if - ▁lesson - ▁joy - ▁suit - ▁patent - ities - ▁fe - ▁trial - ▁typical - ▁cheap - ▁coast - ▁leader - ▁wood - ▁figured - ▁limited - ▁damage - ▁camp - ▁argument - ▁failed - ▁holding - ▁circle - ▁studied - ▁malaria - ▁according - ▁discovery - ging - ▁quote - ▁believed - ▁principle - ier - ▁gun - ▁pop - ak - ▁narrative - ▁washington - ▁reward - ▁lay - ▁engineers - ▁saving - ▁cure - ▁shouldn - ▁japan - ▁brilliant - ▁competition - ▁click - ▁everyday - ▁received - ▁finish - ▁leads - ▁strength - ▁signal - ▁breast - ▁dinner - ▁necessary - ▁structures - ▁mit - ▁married - ▁row - ▁sad - val - ▁compare - ▁li - ▁pa - ▁charge - ▁rid - chi - ▁cat - ▁exchange - ▁lunch - ▁copy - ward - ▁plane - ▁artificial - ▁creation - ▁british - ▁caught - ▁moments - ▁walls - ▁managed - ▁asia - ▁perform - ▁multiple - ▁artists - lic - ▁folks - ▁release - ▁currently - ▁fossil - ▁garden - ▁fairly - ▁prove - ▁anti - ▁appear - ▁sudden - lan - ▁church - ▁graph - ▁constantly - ▁pen - ▁events - ▁affect - ▁watched - ▁researchers - ▁wearing - ▁correct - ▁adult - ▁judge - ▁fellow - ▁safety - ten - ▁theater - ▁highest - ▁aids - ▁billions - ai - ▁elements - ▁stream - ▁consequences - ▁detect - ▁statistics - ▁organisms - ▁crash - ▁intelligent - ▁mathematics - ▁perception - ▁layer - ▁v - ▁harder - ▁surprised - ▁soil - ▁master - ▁gap - ▁french - ▁valley - ▁mouse - ▁brings - ▁creates - ▁promise - ▁round - ▁profound - ▁accept - ▁david - ▁invisible - ▁daily - ▁capable - ▁fill - ▁graduate - ▁ra - ▁spoke - ▁opinion - ▁phenomenon - ▁puzzle - ▁trillion - ▁caused - ▁station - ran - ▁spirit - ▁stood - ▁crowd - ▁spring - ▁wave - ▁abuse - ex - ▁define - ▁ideal - ▁forced - ▁businesses - ▁grown - ▁bird - ▁brazil - ▁hidden - ▁rural - ▁tech - ▁unless - ▁drink - we - ▁paint - bb - ▁interface - ▁followed - ▁protection - ▁pounds - ▁iraq - ite - ▁treated - ▁leaving - ▁threat - ▁lifetime - ▁hall - ▁fell - ▁mirror - ▁worried - ▁harm - ▁considered - ▁enter - ▁sta - ▁metaphor - ▁concerned - ▁closed - ▁clinical - ▁chris - ▁bus - ▁link - ▁surprising - ▁described - ▁finished - iv - ag - ▁roof - ▁weapons - ▁operating - ▁budget - ▁display - ▁milk - ▁colors - ▁invited - ▁mentioned - ology - ▁mothers - ▁seriously - ▁generate - ▁fat - ya - ▁cr - graph - ▁hey - ▁shop - ▁dealing - ▁slowly - ▁launch - ▁experienced - ▁mr - ▁kilometers - ▁transition - ▁mid - ▁tumor - ▁driven - ▁filled - ▁perfectly - ▁ship - ▁tap - ▁fourth - ▁doubt - ▁comfortable - ▁hopefully - ure - ▁fold - ▁claim - ▁dynamic - ▁chair - ▁director - ▁normally - ▁transformation - ▁horse - ▁frame - ▁phones - ▁setting - ▁picked - ▁fascinating - ▁survival - ▁wake - ably - ▁br - ▁whenever - ▁photos - ▁detail - ▁smile - ▁associated - ▁increasingly - ▁article - ▁principles - ▁dry - ▁topic - ▁trick - ▁ride - ▁wheel - ▁title - ▁rare - ▁lie - ab - ▁paying - ▁determine - ▁fund - ning - ▁creatures - ▁regular - ▁exact - ▁designer - ina - ▁w - ▁disaster - ▁chain - ▁warm - ▁discussion - au - ▁advanced - ▁sentence - ▁losing - ▁names - ▁experts - by - ▁cognitive - ▁steps - ko - ▁battle - ▁interaction - ▁band - ▁remote - ▁radical - ▁electric - ▁ph - ▁equivalent - ▁institute - ▁quantum - ▁protest - ▁analysis - ▁quarter - ▁planning - ned - ud - ▁alternative - bur - ▁honest - ▁engineer - ▁note - ▁fl - ▁crack - ▁generations - ▁hill - ▁cap - ▁nu - ▁goals - ▁killing - ▁unit - ▁scene - ▁protein - ▁keeping - ▁deeper - ▁poll - ▁seat - ▁expected - ▁tail - ▁galaxy - ev - ▁arms - ▁march - ▁continent - ▁oceans - ▁soul - ▁shoes - ▁grandmother - ▁congress - ▁review - ▁versus - ▁mathematical - ▁pole - ▁invention - ▁organized - ▁organic - ▁sa - ▁silence - ▁minister - ▁flag - ▁orbit - ▁germany - ▁rates - ▁etc - ▁belief - ▁voices - ▁election - ▁toward - ▁collection - ▁driver - ▁arab - ▁calling - ▁fresh - ▁existence - ▁intellectual - ▁whom - ▁ring - ▁fields - ▁begins - ▁drawing - ▁mistake - ▁chicken - ▁hell - ff - ▁parent - ▁lies - ▁signals - ▁disappear - ▁resource - ▁ordinary - ▁tomorrow - ▁laptop - ▁hide - ▁guide - ▁plate - ▁faces - ▁breath - ▁print - ▁army - ▁launched - ▁capture - ▁wisdom - ▁curious - ▁suggest - ▁credit - ▁method - ▁coral - ▁essential - ability - ▁epidemic - ▁library - ▁brown - ▁property - ▁reform - ▁pages - ged - ▁independent - ified - ▁participate - ▁operate - ▁honor - ▁blog - ▁zone - ▁distribution - ▁variety - ▁novel - ▁sw - ▁react - han - ▁hasn - ▁adapt - ▁mar - ▁h - ify - ▁reaction - ▁origin - ▁george - ▁vulnerable - ▁facing - ▁plus - ▁prime - ▁selling - ▁joke - ▁uses - ▁tons - let - ▁satellite - ▁remain - ium - ▁characters - ▁heavy - ▁nervous - ▁feedback - ▁inner - ▁belong - ▁saved - ▁sc - ▁jo - ▁letters - ▁combination - ▁southern - ▁designing - ▁burn - ▁taste - ▁bat - ▁addition - ▁democratic - ▁member - ▁maintain - ▁flat - ▁jail - ▁grab - ▁specifically - ▁neighbors - ▁encourage - ▁exists - ▁emotion - ▁kenya - ▁danger - ▁violent - ▁increased - ▁processes - ▁autism - ▁factory - ▁sites - ▁australia - ▁bias - ▁management - ible - ▁host - ▁cave - ▁cook - ▁cup - ▁scan - ▁arrived - ▁den - ▁helps - ▁covered - ▁waves - ▁empty - ▁stupid - ▁bread - ▁shame - ▁testing - ey - ▁shoot - ▁emissions - ▁diet - rk - ▁opening - ▁galaxies - ▁liquid - ▁gdp - ▁lady - ▁hadn - ▁marine - ▁option - ▁tip - ▁weak - ▁users - ▁movies - ▁soldiers - ▁privacy - ▁healthcare - ▁francisco - ▁heads - ▁construction - ▁factors - ew - ▁scared - work - ▁pure - ▁released - ▁leaves - ▁sending - ▁thanks - ▁messages - ▁broke - ▁factor - ▁mexico - ▁valuable - ▁volume - ▁clip - ▁microbes - ▁weather - ▁finance - ▁equipment - ▁importance - ▁electrical - ▁wondering - ▁receive - light - ▁status - nk - ▁illegal - ▁button - ▁profit - ▁hang - ▁mention - ▁tall - ▁quiet - ship - ▁agriculture - ▁empathy - ▁gravity - ▁yesterday - ▁rocket - ▁instrument - ▁mile - ▁handle - ▁bang - ▁bees - ▁escape - ▁limb - ▁aim - ▁st - ▁explanation - ▁manufacturing - ▁sugar - ▁section - ▁chemistry - ▁cutting - ▁muscle - ▁muslim - ▁sample - ▁insects - ▁wi - ▁meat - ain - ▁bra - one - ▁requires - ▁counter - ▁mess - ▁technological - ▁evil - ▁rational - ▁route - ▁corruption - ack - ▁symptoms - ▁calls - ▁coal - ▁invent - ▁fan - ▁conversations - ▁paul - con - ▁require - ▁connections - ▁bell - ▁bag - ▁england - ▁swim - ▁loud - ▁grid - ▁expand - ▁molecule - ▁miss - ▁author - ▁inequality - ▁underneath - ▁fruit - ▁hotel - ▁added - ▁relatively - ▁canada - ▁airplane - ▁telescope - ay - ▁techniques - tion - ▁stock - ▁increasing - ▁el - ▁illness - ▁academic - ▁deserve - ▁pleasure - ▁producing - ▁sweet - ▁multi - ▁lake - ▁bike - ▁roughly - ou - ▁tradition - ▁effectively - ▁shoulder - ▁details - ▁worldwide - ▁internal - ▁algorithms - ▁awful - ▁peak - ▁suffer - ▁typically - ▁spaces - ▁fundamentally - ▁introduce - ▁chemicals - ▁conscious - ee - ▁harvard - ▁inspiration - ▁nor - ▁convinced - ▁blow - ▁actions - ake - ▁unlike - ▁contract - cal - ▁comfort - ▁holes - ▁properties - ▁cameras - ▁sal - ▁risks - ▁player - ▁reveal - ▁pack - ▁ray - ▁appreciate - ▁severe - ▁document - ▁carefully - ▁height - ▁feelings - ▁broad - ug - qui - ▁german - ▁admit - ▁definitely - ▁length - ▁mechanical - ▁fiction - ▁rapidly - ▁ants - ▁politicians - ▁sensor - ▁ju - ▁sports - ▁los - ▁activities - ▁consumption - ▁possibilities - ▁spectrum - ▁plot - ▁staff - ▁apartment - ▁bi - ▁dimensional - ▁reached - ▁thin - ▁signs - ▁dimensions - ▁relate - ▁mis - ▁pet - ▁maps - ▁gr - ▁emerging - ▁applied - ▁britain - ▁refer - ▁include - ▁passionate - ▁zoom - ▁hum - ▁falling - ative - ▁affected - ▁moves - ▁warming - ▁remove - ▁whereas - ▁processing - ▁previous - ▁victims - ▁nasa - ▁former - ▁pump - ▁sight - ▁homes - ▁decline - ▁dots - ▁stone - ▁vehicle - ▁beach - ▁location - ▁generally - ▁succeed - ide - ▁com - ▁van - ▁absolute - ▁languages - ▁mad - ▁steel - ▁studio - ▁alien - ▁wow - ▁gang - ile - ▁soft - ▁score - ▁potentially - ▁monitor - ▁sheet - ▁chart - board - ▁pulled - ▁liberal - ▁psychological - ▁conservative - ▁france - ▁transportation - ▁players - ▁employees - ▁user - ▁assume - ▁suppose - ▁neighbor - sis - ▁extract - ▁kitchen - ▁northern - ▁primary - ▁cha - ▁evolutionary - ▁grass - ▁season - ▁skill - ▁required - ▁union - ▁outcome - ▁border - ▁mountains - ▁chose - ▁forests - ▁balloon - ▁einstein - ▁awesome - ▁flash - ▁shut - ▁rush - ▁toilet - ▁objective - ▁laugh - ▁lessons - j - ▁trend - time - za - ▁storm - ▁japanese - ▁videos - ▁prepared - ▁string - ▁tape - ana - ▁refugees - ▁introduced - ▁trials - ▁myth - ▁organs - ▁snow - '4' - ▁challenging - ▁conclusion - ▁sophisticated - ▁radiation - ▁situations - ▁improvement - ▁tax - ▁mistakes - ▁meaningful - ▁seek - ▁evolve - rate - ▁pair - ▁physically - ▁battery - ▁innovative - ▁phrase - ▁laboratory - ▁ladies - ▁poetry - ▁comp - ▁bond - ▁parking - ▁statement - ▁represents - ▁convince - ▁prototype - ▁pitch - ▁bubble - ▁bottle - ▁consumer - ▁naturally - ▁defense - ▁pilot - ▁visible - ▁toxic - ▁phase - ▁collaboration - ▁awareness - ▁winter - ▁existing - ▁tower - ▁symbol - nic - ▁mix - ▁limit - ▁movements - ▁defined - ▁funding - ▁shapes - ▁embrace - ▁engaged - ris - ▁sources - ▁clever - ▁repair - ▁youtube - ▁terrorism - ▁solid - nna - ▁interests - ▁economics - ▁formed - ▁restaurant - ▁z - ▁fewer - ▁appropriate - ▁confidence - ▁exploration - ▁unusual - ▁harvest - ▁struck - ▁medium - ▁fake - ▁l - ley - ▁carrying - ▁partners - ▁odd - ▁se - ▁habitat - ▁sl - ▁amazon - ▁swimming - ▁authority - ▁habit - ▁presentation - ▁ban - ▁gut - ▁spa - pp - ▁combine - io - ▁conservation - ▁le - cla - ▁nineteen - ▁spider - ▁bush - ▁equality - ach - ▁rio - ▁depends - room - ▁printing - ▁collapse - ▁organize - ft - ▁newspaper - ▁rep - ▁chicago - ▁computing - ▁concrete - ▁domestic - ▁sphere - ▁infinite - ▁silk - ▁boston - ▁knee - ▁iran - ▁filter - ▁practical - ▁feature - ▁pig - ▁blocks - ▁advice - ▁molecular - ▁reserve - ▁behave - ▁nose - ▁traveling - ger - ▁replace - ▁arts - ▁ecosystem - ▁cards - gy - ▁personally - ▁reference - ▁buying - ▁productive - ▁viruses - ▁percentage - ▁components - ▁joined - ▁cultures - ▁trans - ▁providing - ▁teeth - ▁style - ▁unknown - kin - ▁aging - ▁dust - ▁microscope - ▁busy - ▁divide - ▁hospitals - ▁contain - fe - ▁options - ▁pollution - ▁pregnant - ▁secure - ▁unexpected - ▁reverse - ▁richard - ▁file - ▁younger - ▁protected - ▁borders - ▁paris - ▁si - ▁tens - ▁ear - ▁nigeria - ▁occur - ▁shark - ▁grandfather - ▁shadow - ▁variation - ▁stretch - ▁fabric - ▁therapy - ▁mystery - ▁input - ▁split - ties - ▁clinic - ▁algorithm - ▁sy - nc - ▁extend - ification - ▁load - ▁intervention - ▁license - ▁relevant - ▁mechanism - ▁obama - ▁hydrogen - ▁monkey - istic - ▁hoping - ▁climb - ▁para - ▁faced - ▁talks - ▁exception - lie - fo - ▁courage - ▁dramatic - ▁efficiency - ▁lecture - ▁federal - ▁intense - ▁mortality - ▁dropped - ▁features - ▁hu - ▁israel - ▁shock - ▁roads - ▁lying - ▁pan - ▁transport - ▁pit - ▁repeat - ▁accurate - ▁tested - ▁abstract - ari - ▁cluster - ▁emergency - ▁framework - ▁channel - ▁angle - ▁marketing - ▁explained - ▁stayed - ▁equally - ▁constant - ▁recorded - ▁darwin - ▁globe - ▁memories - ▁solving - ▁angry - ▁club - ▁flies - ▁loop - over - ▁gather - ▁breathe - ▁institution - ▁tea - ▁atoms - ▁houses - ▁contribute - ▁relative - ▁illusion - ▁permanent - ▁policies - ▁resolution - ▁drill - ▁boring - ▁poem - ▁truck - ▁organism - ▁brave - ▁gu - ▁burning - ▁solved - ▁translate - ▁talent - ries - ▁wire - ▁ultimate - ▁acting - ▁fishing - ▁sustain - ▁polar - ▁alzheimer - ▁cortex - ▁literature - ▁flood - ▁eastern - ▁housing - ▁medication - ▁bee - ▁grain - ▁peer - ▁cord - ▁hero - ▁electronic - ▁upper - ▁dan - ▁attacks - ▁compete - ▁pushing - ▁egypt - ▁clothes - ▁paradigm - ▁afternoon - ▁sculpture - ▁occurred - ▁nowhere - ▁chapter - ▁greenhouse - ▁dense - ▁rising - ily - ▁pool - ▁emerge - ▁surrounded - ▁provided - py - ▁collected - ▁circumstances - aries - ▁meal - ically - ▁transfer - ▁bicycle - ▁horrible - ▁suicide - ▁welcome - ▁narrow - ▁hyper - ory - ▁pile - ▁tag - ▁aspect - ▁lung - ▁boxes - ▁stronger - ick - ▁combined - ▁outcomes - bra - ▁islands - ▁inspire - ▁witness - ▁dress - ▁kick - ala - ▁russia - ▁depth - ▁resistance - ▁underwater - ▁ceo - ▁environments - ▁largely - ▁ancestors - ▁mall - ▁enable - back - ▁command - ▁pin - well - ▁fingers - ▁pr - ▁favor - ▁photographs - ▁column - ▁complain - ▁economies - ▁motivation - ▁neither - ▁somewhat - ▁suspect - ▁trigger - ▁wound - lon - ▁james - ▁controlled - ▁frankly - ▁rely - hal - ▁organ - ▁vehicles - ▁centers - fin - ▁wine - ▁wondered - ▁geo - ▁guard - ▁centuries - ▁contrast - ▁iphone - ▁magnetic - ▁agency - ▁causing - ▁21 - ▁promote - ▁dramatically - gar - ▁clock - ▁boom - ▁remains - ▁sensitive - ▁expert - ▁defend - ▁mayor - ▁painful - ▁windows - ▁brothers - ▁aspects - ▁agreed - gh - ▁musical - ▁crucial - ▁destruction - ▁false - ▁psychology - ▁tonight - ▁thick - ▁uncle - ▁committed - ▁tank - ▁recording - ▁tu - ▁dinosaurs - ▁creature - ▁colleague - ▁sides - ▁offered - ▁ignore - ▁allowing - ▁celebrate - ▁educational - ▁evening - par - ▁lock - ▁vertical - ▁romantic - ▁laser - ▁raw - ▁logic - ▁infected - ▁error - ▁official - ▁architect - ▁technique - ▁shooting - ▁sensors - ▁encounter - ▁fixed - ▁firm - af - hood - ▁chief - ▁genius - ▁immune - ▁queen - ▁leap - vin - ▁angeles - ▁operation - ▁entrepreneurs - ▁bay - ▁destroy - ▁element - ▁insight - ▁prefer - ▁chip - ▁needle - ▁writer - ▁chest - ▁films - ▁charles - ▁extent - ▁carried - ▁desk - ▁native - ▁stack - ▁feeding - ▁ob - ▁closely - ▁dogs - gro - ▁linked - ▁applause - ▁discipline - ▁silent - ▁debt - ▁spiritual - ▁fluid - ▁weigh - ▁ears - ▁ii - ▁highway - ▁tracking - ▁recognized - ▁bug - ▁presented - ▁disorder - ▁bits - rse - ▁behaviors - ▁collecting - ▁ab - ▁fed - ▁gps - ▁commitment - ▁overall - ▁regions - ▁spin - ▁cur - ox - ▁pace - ▁measured - ▁anger - ▁cooking - ▁mc - ▁invite - ▁islam - ▁pacific - ▁scary - ▁runs - ▁concern - ▁suck - ▁sail - ▁journalist - ▁cop - ▁demo - ▁vaccine - ▁wherever - ▁cast - ▁edit - ▁architects - ▁attract - ▁reported - ren - ▁je - ▁script - ▁absorb - ▁birthday - ▁corporate - ▁engaging - ▁scratch - ▁smoke - ▁blame - ▁orange - ▁sharks - ▁strangers - ▁nest - ▁distributed - ▁mice - ▁treating - ▁cheaper - ▁pocket - ▁application - ▁functions - ction - ▁drinking - ▁performing - ▁earn - ▁transplant - ▁signed - ▁existed - ▁presence - ▁silicon - ▁telephone - ▁database - ▁fortune - ▁grateful - ▁tunnel - ▁gentlemen - ▁exposed - ▁shaped - ▁reaching - ▁par - ▁teams - ▁proteins - ▁finger - ld - ▁privilege - ▁convert - ▁implications - ▁sir - ▁korea - ▁wars - ▁steve - ▁parallel - ▁recognition - ▁judgment - ▁axis - ▁twin - ▁kingdom - ▁sustainability - ▁gray - ▁airport - ▁sunlight - ▁tri - 'off' - ▁conventional - ▁classic - ▁records - ▁eggs - pro - ▁dig - ▁cents - ▁applications - ▁vo - ▁proof - ▁correspond - ▁district - ▁strike - ▁drives - ▁capitalism - chan - ▁congo - ▁lift - ▁qu - ▁seal - ▁surely - ▁singing - ▁equation - ua - log - ▁songs - ▁cor - ▁bones - ▁confident - ▁dioxide - ▁miracle - ▁regret - ▁transparency - ▁wikipedia - ▁raising - ▁overcome - ▁productivity - ▁download - ▁breaking - ery - net - ▁prop - the - ▁greek - ▁herself - ▁revenue - ▁dancing - ▁diagram - ▁barely - ▁vital - ▁elite - ▁illustrate - ▁cash - gate - maker - ▁journal - ▁connecting - ▁photographer - ▁shell - ▁amounts - ther - ▁strip - ▁recover - ▁transformed - ▁component - ▁citizen - ▁refugee - ▁pushed - ▁poster - ▁imagined - ▁latin - ▁tests - ▁christian - ▁external - ▁scenario - ▁september - ▁shirt - ▁careful - ▁bump - ▁regime - ▁wireless - ▁hunger - ▁precisely - ▁pretend - ▁surrounding - ▁spreading - ▁removed - ▁ends - ▁standards - ▁breathing - ▁ski - ight - ▁whale - ▁settle - ▁arc - ▁arrive - ▁pie - ▁hits - ▁tasks - ▁council - ▁portrait - ▁proportion - ▁stroke - ▁automatically - ▁trace - ▁ought - ik - ▁customers - ▁personality - '0' - ▁diagnosed - ▁continued - ▁floating - ▁cheat - ▁robotic - house - mate - ▁farmer - '&' - ▁michael - ▁obsessed - ▁ridiculous - ▁simulation - ▁sweat - for - ▁scar - ▁generated - ▁owner - ▁guns - ▁vessels - ▁liked - ▁particle - aking - bal - fish - ▁pur - ▁cry - ▁adjust - ▁curiosity - ▁gulf - ▁entertainment - locked - ▁destroyed - ▁arguments - ▁placed - ▁increases - ▁rapid - ▁monkeys - ▁fifth - ▁surveillance - ▁appeal - ▁keeps - ▁acid - ▁humor - ▁exhibition - ▁russian - ▁figures - ▁mode - ▁cable - ▁leak - ▁captured - head - ea - ▁represented - fully - ▁nurse - ▁label - ▁frank - ▁notes - ▁pakistan - ▁flip - ulate - ▁identical - ▁insurance - ▁uncomfortable - ▁animation - ▁grave - ▁compound - ▁selection - ▁worker - ▁apparently - ▁bend - ▁comic - ▁traveled - rd - ▁sales - ▁ensure - ▁wise - ▁methods - ▁classes - ▁fra - ▁lit - ▁youth - ▁philosophy - ▁preserve - ▁senior - ▁thomas - ▁shake - ▁household - ▁spark - ▁quit - ▁meme - ▁amongst - ▁properly - ▁deaths - ▁mentor - ▁nano - ▁limits - ▁whales - ▁meetings - ▁cow - ▁stands - ▁root - wn - ▁volunteer - ▁acts - ▁brief - ▁instruments - ▁rat - ▁sch - ▁exploring - ▁measuring - ▁procedure - ▁rebuild - ▁drove - ▁hook - ▁renewable - ▁diverse - ▁accessible - ▁fascinated - ▁isolated - ▁opera - ▁wage - ▁searching - ▁poorest - ▁thrown - ▁wing - ▁rocks - ▁fle - ▁efforts - ▁parties - ▁crying - ▁pink - ▁muslims - ▁samples - ker - ▁surgeon - ▁universities - ▁indicate - zi - ▁arctic - ▁ugly - ▁representation - ▁heroes - ▁appears - ▁diver - ▁economists - ▁pm - ▁accepted - cher - ▁practices - ▁award - ▁murder - ura - uff - ▁historical - ▁intention - ▁tube - ▁shall - ▁enhance - ▁lifestyle - ▁neural - ▁piano - ▁strategies - ▁borrow - ▁christmas - ▁distant - ▁imaging - ▁disability - ▁rice - ▁behavioral - ▁dirty - ▁darkness - ▁controlling - ▁antibiotics - ▁aside - use - ▁lens - ▁journalists - lit - ▁elephant - ▁mosquito - ▁facts - ▁philosopher - ▁rainforest - ▁synthetic - ▁initiative - ▁reduction - ▁hip - ▁golden - rin - ▁drawn - ▁virtually - ▁officer - ▁meter - ▁mainly - ▁martin - ▁banks - ulation - ▁liver - ▁conclude - ▁decrease - ▁sacred - ▁struggling - ▁cousin - ▁embedded - ▁adventure - ▁football - ▁hungry - ▁subtle - ▁sharp - ▁marry - ▁plans - ▁interactive - ▁achievement - ▁adam - ▁nurses - ▁iron - ▁missed - ▁flower - ▁concepts - gue - ▁consume - ▁rap - spinal - ▁weapon - ▁anxiety - ▁legacy - ▁manhattan - ▁permission - ▁skull - ▁tweet - ▁alarm - ▁shelter - ▁ceiling - ▁neck - ▁ticket - ▁dump - ▁crops - ▁dish - ▁expectations - ▁stranger - ▁10 - ▁savings - ▁educated - ▁poet - ▁rent - ▁boil - ▁fuels - ▁ben - ▁fears - ▁rescue - ▁estimate - ▁achieved - ▁seeds - ▁sets - ▁mammals - ▁interacting - ric - ught - '70' - ▁subjects - ▁explosion - ▁intuitive - ▁poison - ▁agenda - ▁infant - ▁boundaries - ▁partnership - ▁islamic - ▁fortunately - ▁opposed - ▁elected - ▁editor - ▁attached - ▁tired - ▁flowers - ▁spoken - ▁corn - ▁granted - ▁seed - ▁boss - ▁suggests - ▁consumers - ▁divided - gli - ji - ▁appeared - ▁vaccines - ▁dial - bon - ▁confront - ▁formula - ▁hire - ▁clouds - ▁trauma - ▁wrap - ▁frozen - ▁recipe - ▁tackle - ▁haiti - ▁voting - ▁circuit - ▁mindset - ▁engagement - ▁lawyer - ▁dedicated - ▁killer - ▁recovery - ▁integrated - ▁functional - ▁norm - ros - ▁adding - ologist - ▁elections - ▁replicate - ▁grows - ▁operations - ▁visited - ▁yo - ▁mate - ▁passing - ▁theme - rous - ▁cartoon - ▁earthquake - ▁passenger - ▁nobel - ▁hollywood - ▁slavery - den - ▁pot - ▁stable - ▁announced - ▁clue - ▁globalization - zz - ▁coin - ▁beliefs - ▁entered - ▁hat - og - ▁interactions - ▁delivered - ▁neighborhoods - ▁paintings - ▁failing - ov - ▁jet - tric - ▁acknowledge - ▁govern - ▁spec - ▁regard - ▁horizon - ▁colony - ▁elsewhere - ▁neuroscience - ▁disgust - ▁underground - ▁backwards - ▁cooperation - ▁bullet - ▁governance - ▁chaos - ▁till - ▁50 - ▁latest - ▁advance - ▁comment - ▁clients - bri - ▁falls - ▁dear - ▁oldest - ▁originally - clu - ▁bob - ▁electron - ▁males - ▁interestingly - ▁tour - ▁implement - ▁muscles - ▁initial - ▁extinct - ▁enemy - ▁facial - ▁feminist - ▁profile - ▁pyramid - ▁request - ▁tragedy - ▁twist - ▁glad - ▁sahara - ▁mimic - ▁jack - ▁80 - ▁upset - ▁tries - ▁volunteers - ▁pointed - ▁inspiring - oid - ▁disorders - ▁globally - ual - ▁roots - ▁bound - ▁rough - ▁pu - ▁wanting - ▁armed - ▁villages - ▁occasion - ▁excuse - ▁garbage - ▁obesity - ▁spectacular - ▁atlantic - ▁concert - ▁sweden - ▁session - ▁golf - ▁underlying - ▁june - ella - car - ▁thoughts - ▁frog - ▁victim - pri - hes - ▁sing - ▁rose - ▁custom - ▁dive - ▁melt - ▁magical - ▁hardly - ▁luck - ▁commit - ▁proper - ▁anonymous - ▁dignity - ▁footprint - ▁microsoft - ▁plenty - ▁yield - ▁pursue - ▁combat - ▁robert - ▁programming - ▁attend - ▁identified - ▁cute - ▁doors - ▁stepp - ▁rarely - ▁gates - ▁smarter - ▁wired - ▁fri - ▁laughing - ▁palm - ▁reminded - ▁lovely - ▁donor - ▁construct - ▁picking - ▁steal - ▁involves - ▁speaker - berg - ▁populations - ▁salt - ▁transmit - ▁advertising - ▁agricultural - ▁choosing - ▁heaven - ▁intuition - ▁rwanda - ▁smooth - ▁stanford - ▁strand - cellular - ▁hardware - ▁tragic - ▁textbook - ▁leverage - ▁egyptian - ▁delight - ▁consequence - ▁terrorist - ▁cu - ▁additional - ▁teenagers - ▁coat - ▁mail - ▁prices - ▁auto - ▁investing - ▁demonstrate - ▁interpret - hr - ▁mount - ▁agencies - ▁autonomous - ▁demographic - ▁description - ▁disagree - ▁eliminate - ▁jewish - ▁thumb - ▁output - ▁sneak - ▁density - form - ▁ethical - ▁laid - ▁extinction - ▁contains - ▁chat - ▁satellites - ▁realm - ▁infection - ▁rating - grad - ▁soap - ▁analyze - ▁su - ▁bother - ▁opt - ▁paradox - ▁rhythm - ▁chromosome - ▁compromise - ▁diabetes - ▁furniture - ▁january - ▁rabbi - ▁polio - ▁ghana - ▁mood - ▁magnet - ▁afterwards - ▁kiss - ▁directions - some - ▁hanging - box - ▁observe - ▁hack - ▁printer - press - ▁friendly - ▁instant - ▁cri - ▁ethiopia - ▁figuring - ▁industries - ▁smartphone - ▁ebola - ▁spell - ▁baseball - ▁stake - ▁highlight - ▁religions - ▁pause - ▁protecting - ▁trends - ▁optimistic - ▁predicted - ▁cart - ▁tricks - ▁lawyers - ▁homo - ▁mechanisms - ▁experiencing - ▁mechanics - ▁reputation - ▁pioneer - ▁copyright - ▁forgotten - ▁landed - ▁bible - ▁chosen - ▁layers - ook - ▁listened - ▁heal - ▁ages - ▁navigate - ition - ▁corals - ▁pound - dies - ▁chimpanzees - ▁units - ▁painted - ▁participants - place - our - ▁adopt - ▁letting - ▁immediate - bot - ▁walks - ▁capita - ▁recommend - ▁chocolate - ▁inevitable - ▁texas - ▁transparent - ▁orchestra - ▁plain - ▁domain - ▁depressed - ▁inventor - ▁discoveries - ▁tuna - ▁usual - ▁tar - trapped - ▁employ - ▁communications - ▁experimental - ▁discuss - ▁tears - ▁determined - ▁invested - matic - ▁electronics - ▁exploit - ▁alcohol - ▁gesture - ▁higgs - ▁mushroom - ▁resilience - ▁routine - ▁flexible - ▁sheep - ▁broadcast - ▁fraction - ▁freak - ▁mold - ▁bow - ened - ▁sin - '1' - ▁partly - ▁assumptions - ▁secondly - ▁sleeping - sel - ▁photography - ▁characteristics - ▁hal - ▁observation - urge - ▁expanding - ▁actor - ▁sport - ney - ▁daughters - ▁separated - ▁returned - ▁pour - ▁craft - ▁terror - ▁stations - ▁psycho - ▁categories - ▁charity - ▁contribution - ▁essence - ▁naked - ▁begun - ▁punch - ▁theories - ▁winning - ▁inform - ▁slice - ▁attribute - ▁motivated - ▁gases - ▁ratio - ▁conflicts - ▁eu - ub - ▁jam - ▁printed - ▁drivers - ▁len - ▁neuro - stone - ▁reduced - ▁branch - ▁sisters - ▁icon - ▁rooms - ▁coach - ▁titan - ▁slave - ▁resist - ▁diagnosis - ▁fragment - ▁frequency - ▁frustrated - ▁happier - ▁helicopter - ▁skeleton - ▁uganda - ▁mysterious - ▁grace - bar - ▁spill - ▁prosperity - ▁gear - ▁demonstration - ▁package - ▁restore - ▁warning - ▁pipe - ▁nutrition - ▁vent - lock - hen - ▁widely - ▁dimension - ▁marker - ▁physician - ▁injury - ▁ending - ▁funded - rc - ▁focusing - ▁calculate - ▁perceive - ▁knock - ▁terrorists - ▁produces - ▁bind - ami - ▁pat - ▁sacrifice - ▁ignorance - ▁immense - ▁participation - ▁smoking - ▁substance - ▁buried - ▁loving - ivity - ▁winner - ▁pathway - ▁yard - ▁portion - ▁rome - ▁sink - ue - ▁treatments - ▁informed - ▁relations - ▁odds - ▁symbols - ▁county - ▁peter - ▁projection - ▁disappeared - ▁brutal - ▁predator - ▁neuron - ▁actors - ▁touched - ▁prior - ▁cliff - ▁dialogue - ▁enterprise - ▁fusion - ▁probability - ▁temple - ▁robust - ▁shrink - ▁threw - ▁wedding - ▁republican - ▁affordable - ▁africans - ▁affects - ▁advocate - ▁surprisingly - ▁tied - ▁shi - ▁ham - ▁includes - her - ▁served - ▁panic - ▁lighting - ▁ethic - ▁aesthetic - ▁biodiversity - ▁comparison - ▁expansion - ▁fabulous - ▁humanitarian - ▁shrimp - ▁dominant - ▁empire - ▁annual - ▁shore - ▁capability - ▁burst - ▁kidney - ▁dependent - ▁passive - ung - ▁rub - clock - ▁suffered - ▁wal - ▁physicist - ▁barrier - ▁reef - ▁pal - ▁bin - ▁quo - ▁spit - ▁whi - ▁mill - ▁publish - ▁expertise - ▁parliament - ▁primarily - ▁sandwich - ▁wheelchair - ▁reducing - ▁fought - ▁soviet - ▁shelf - ▁brick - ▁representative - ▁intersection - ▁lamp - ▁pulling - ▁reports - ▁jim - stream - ▁profession - ▁extended - ▁circles - hel - ▁economist - ▁panel - ▁tear - ▁prepare - worm - ▁90 - ▁recruit - ▁instinct - ▁array - ▁competitive - ▁contemporary - ▁register - ▁terrifying - ▁vulnerability - ▁william - ▁mask - ▁uncertainty - ▁shopping - ▁pulse - ▁agreement - ▁poly - ▁briefly - ▁helpful - ▁threatened - ▁scales - oth - cho - ▁symmetry - ▁safer - ▁mal - ▁gathering - ▁papers - mixed - ▁egg - ▁nerve - ▁pad - ▁rank - ▁farming - ▁crossing - jo - ▁threats - ▁precise - ▁association - ▁automobile - ▁capabilities - ▁currency - ▁facility - ▁genocide - ▁glacier - ▁grandchildren - ▁primitive - ▁sensation - ▁chronic - ▁screw - ▁spray - ▁burden - ▁render - ▁convey - ▁bathroom - ▁supporting - ▁addiction - ▁trap - ▁overwhelming - ▁confused - ▁graphic - ▁ecosystems - ▁unlikely - del - ▁linear - ville - ▁emerged - ▁assets - ▁bas - card - ▁documents - ▁provides - ▁attitude - ▁supported - ▁mac - ▁headed - ▁incentives - ▁links - ▁senses - ▁log - ▁blank - ▁breakthrough - ▁creator - ▁dictionary - ▁division - ▁expedition - ▁serving - ▁simplicity - ▁stomach - ▁structural - ▁vagina - ▁turkey - ▁garage - ▁cyber - ▁silly - ▁beetle - ▁discrimination - ▁crew - ▁reliable - ▁attractive - ▁bowl - ▁speakers - ▁tick - ▁perceived - ▁mary - ▁shocked - ▁richer - ▁mri - mar - ▁continues - ▁chop - ▁successfully - ▁computation - ▁rope - ▁badly - ▁founded - ▁teenager - ▁writers - ▁tim - ▁guarantee - ▁throwing - ▁slides - ▁alter - ▁dolphins - ▁consistent - ▁joint - ▁calculation - ▁december - ▁improving - ▁purchase - ▁thrive - ▁collaborative - ▁trash - ▁variable - ▁homework - ▁initially - iest - ▁tension - ▁reactor - ▁tr - ▁penguins - ▁suggesting - ▁swing - ▁impacts - nan - ▁tribe - ▁select - ▁stu - ▁trucks - ▁educate - ▁bre - ▁aggressive - ▁anatomy - ▁astronaut - ▁constitution - ▁indigenous - ▁innocent - ▁october - ▁oxford - ▁relax - ▁vietnam - ▁civic - ▁mobility - ▁export - ▁aircraft - ▁ruin - ▁programmer - ▁regardless - ▁concentration - ▁subway - ▁emit - ▁peaceful - ▁manual - ▁italy - ▁commission - ▁angel - ▁closest - ▁arrested - ▁established - ula - ▁hired - ▁hub - ham - ▁hol - ▁included - ▁corporations - ▁jumped - ▁biologists - ▁dinosaur - ▁spots - ▁loan - ▁tie - ▁nets - ▁pill - ▁neutral - ▁abilities - ▁architectural - ▁august - ▁bizarre - ▁extension - ▁transaction - ▁translation - ▁vacuum - ▁torture - ▁florida - ▁venture - ▁neat - ▁occasionally - ▁ego - ▁conducted - ▁plays - ▁delivery - ▁bull - ▁unable - ▁replaced - ▁lean - ▁cats - ▁suggested - ▁fiber - ▁strongly - hu - ▁musicians - ▁caring - ▁traditions - ▁bg - ▁edges - ▁implant - ▁channels - ▁uk - ▁scholar - ▁administration - ▁anticipate - ▁monster - ▁threshold - ▁unbelievable - ▁drift - ▁dumb - ▁impression - ▁teen - ▁chances - ▁compassionate - ▁korean - ▁imagery - ▁hackers - ▁realizing - ▁professionals - ▁damn - ▁hunting - uri - ▁yorker - atory - ▁wash - ▁instructions - cuba - ▁hug - ▁pharmaceutical - ▁scores - ▁nutrients - ▁explaining - ▁staying - ▁bags - ▁drag - atomic - lethal - ▁survived - ▁pose - ▁hearts - drew - ▁answered - ▁agent - ▁crystal - ▁astonishing - ▁chamber - ▁fragile - ▁hemisphere - ▁pixel - ▁reinvent - ▁assembly - ▁weekend - ▁italian - ▁pride - ▁leaf - ▁bold - ▁clay - ▁unfold - top - ▁guilty - ▁healthier - ▁ingredients - ▁proposed - ▁aunt - away - ▁bath - ez - ▁engineered - ▁woke - ▁prisoners - ▁safely - ▁balls - ▁che - ▁port - ▁relation - ▁comments - ▁analog - ▁outer - ▁exhibit - ▁holds - '5' - bla - ▁algae - ▁category - ▁precious - ▁reproduce - ▁unprecedented - ▁intimate - ▁tomato - thermal - ▁rotate - ▁fancy - ▁investors - ▁installation - ▁gradually - ▁institutional - ▁format - ▁elderly - ▁artistic - ▁geographic - ▁fastest - ▁logo - ▁manner - ▁beer - ▁activists - ▁forming - ▁wider - ▁parks - ▁detector - ▁adopted - ▁bask - ▁responses - ▁approaches - pol - ▁dose - ▁manipulate - ▁insights - ▁articles - ▁syria - ▁researcher - ▁secrets - ▁tumors - ▁str - ▁assumption - ▁client - ▁insect - ▁prediction - ▁wo - ▁bars - ▁incident - historic - ▁lip - ▁arrow - ▁awkward - ▁diagnostic - ▁excellent - ▁executive - ▁injustice - ▁magnitude - ▁nonprofit - ▁nowadays - ▁optimism - ▁qualities - ▁audio - ▁snake - ▁harness - ▁honey - ▁simultaneously - ▁civilian - ▁deaf - ▁iceland - ▁belt - ▁span - ▁calm - ▁merely - ▁ku - ▁suggestion - ▁females - rum - ▁radically - ▁hunt - ▁urge - ▁holy - ▁accidents - ▁reefs - ▁entering - ▁handed - ▁educat - ▁conduct - ▁dynamics - ▁offering - port - ▁limitations - ▁fortunate - ark - ▁distinct - ▁tight - ▁transmission - ▁default - ▁flaw - ▁occupy - ▁18 - ▁silver - ▁jesus - ▁biosphere - ▁sauce - ▁signature - ▁minus - ▁enforcement - ▁besides - ▁bored - ▁downtown - ▁tune - ▁desperately - ▁campus - ▁instantly - ▁rape - ▁wings - ▁dating - ▁reflection - book - ▁owned - ▁gathered - ▁reasonable - ▁accounts - ▁seeking - ▁enabled - ▁physicists - ▁remaining - ▁kit - ▁concerns - illa - ▁sentences - ▁empower - ▁similarly - ▁bl - ▁views - ▁activist - ▁fate - ▁covering - ▁bangladesh - ▁compelling - ▁exposure - ▁striking - ▁terrified - ▁smith - ▁awake - ▁viral - ▁wildlife - ▁assignment - ▁antarctica - ▁connectivity - ▁primate - ▁resistant - ▁appearance - ▁tele - ▁drum - ▁cake - ▁devi - ode - frontal - ▁profoundly - ▁goat - ▁tur - wan - ▁climbing - ▁beating - ▁detailed - ▁encouraged - ▁graduated - ▁interviewed - ▁seats - ▁camel - ▁funds - ▁stores - ▁30 - ▁convention - ▁telescopes - ▁bears - ▁chase - ▁authentic - ▁accumulate - ▁breakfast - ▁fault - ▁immigrant - ▁independence - ▁jihad - ▁minimum - ▁origami - ▁outbreak - ▁prosthetic - ▁smallpox - ▁unconscious - ▁virtue - ▁whatsoever - ▁surf - pleasant - ▁valid - ▁ownership - ▁fraud - ▁pond - ▁stability - ▁mapping - ▁selfish - ▁disco - ▁morality - ▁empowered - ▁beautifully - ▁predictions - ▁collectively - ▁logical - ▁endless - ▁dressed - ▁entrepreneur - ▁shine - ▁upside - cin - ▁intended - load - ▁residents - ▁risky - ▁chairs - ▁applying - ▁airplanes - ▁enables - ▁worrying - ▁discovering - ▁candidate - planetary - ▁bulb - ▁clothing - ▁crawl - ▁describing - ▁glow - ▁harbor - ▁hypothesis - ▁matrix - ▁microbial - ▁microphone - ▁syndrome - path - ▁recall - ▁customer - ▁excitement - ▁organizing - ▁scanner - ▁fence - ▁grasp - ▁devil - ▁keyboard - ▁interpretation - ▁fur - ball - ▁manager - ▁60 - ▁traits - ▁previously - ▁constructed - ▁openness - trop - ▁tribes - ▁pairs - ▁update - ▁blend - ▁patch - ▁physicians - uck - oma - ▁chips - ▁asian - ▁specialist - ▁influenced - ▁via - ▁lungs - ▁establish - ▁completed - ▁democrat - ▁improved - ▁teenage - ▁reconstruct - ▁crop - ▁academy - ▁gallon - ▁located - ▁redesign - ▁tendency - ▁trajectory - ▁margin - ▁sponsor - ▁contest - ▁suppress - ▁someday - ▁uniform - ▁consent - ▁quest - ▁binary - ▁mainstream - ▁barrel - ▁friendship - ▁branches - ▁pollen - ▁elephants - ▁positions - ▁collaborator - ▁mud - ▁quad - ▁expecting - ▁honestly - ▁targets - ▁observed - ▁flows - ▁robotics - ▁bottles - ▁significantly - chu - ▁makers - ▁drawings - ▁panels - ese - ▁toys - ▁mic - version - ▁soup - ▁surround - dia - ept - ▁electro - hand - ▁socially - ▁champion - ▁contrary - ▁dilemma - ▁drought - ▁encouraging - ▁facilities - ▁february - ▁friday - ▁genuine - ▁intensive - ▁oxytocin - ▁receiving - ▁strategic - ▁exhaust - ▁scrap - ▁foster - ▁hiding - ▁trump - ▁mineral - ▁nearby - ▁thus - ▁footage - ▁mining - iness - ▁workplace - ▁hardest - ▁metric - ▁shocking - ▁hunter - ▁bombs - ▁bats - ▁server - ▁digit - ▁photographed - ▁crimes - ▁propose - nel - ▁exponential - ▁distinguish - kh - ▁restrict - ▁cambridge - ▁collision - ▁cosmic - ▁curriculum - ▁divorce - ▁episode - ▁expectancy - ▁interrupt - ▁methane - ▁preparing - ▁supermarket - ▁thread - ▁captain - ▁mutual - ▁parasite - ▁chunk - ▁delay - ▁pursuit - ▁ritual - ▁cheese - ▁luckily - ▁generous - ▁dawn - ▁authorities - ▁bedroom - ▁marketplace - ▁soldier - ▁factories - pun - ▁deck - ▁failures - ▁fallen - ▁upload - zo - ▁reactions - nia - ▁jar - ▁wealthy - ▁breaks - ▁ignored - ▁landing - ▁burned - ▁performed - ological - ▁psychologists - ▁manufacture - ▁fulfill - ▁difficulty - ▁install - ▁tissues - ▁supplies - ▁disrupt - ▁virgin - ▁absence - ▁altitude - ▁archive - ▁investigation - ▁taliban - ▁exclusive - ▁flourish - ▁canadian - ▁saturn - ▁loose - ▁punishment - ama - ▁asteroid - ▁realistic - ▁visualize - ches - ▁shit - ▁approached - active - ▁smallest - ▁remembered - cc - ▁translated - cha - ▁offers - ▁baker - ▁remained - ▁homeless - ▁ken - ▁promised - ▁viewer - ▁proven - ati - ▁joe - ▁grant - ek - ▁shifting - ▁continuous - ▁flowing - ▁estimates - ▁cigarette - ▁continuing - ▁cruel - ▁deforestation - ▁devastating - ▁grandparents - ▁meanwhile - ▁precision - ▁rethink - ▁swallow - ▁theoretical - ▁warrior - ▁distort - ▁retina - ▁watt - ▁spanish - ▁exam - ▁condom - ▁stunt - ▁demands - ▁brian - ▁vary - ▁fbi - ▁interconnected - ▁santa - ▁inches - ▁analogy - ▁eaten - ▁rats - ▁obstacles - ▁zones - ▁estimated - ▁formation - ▁packed - ▁computational - ▁proved - ▁passwords - ever - ▁sticks - ▁amy - ▁challenged - ▁lions - ▁reporter - ▁formal - ▁fool - ▁ambition - ▁colonies - ▁palestinian - ▁pandemic - ▁pedestrian - ▁pigeon - ▁prescription - ▁principal - ▁racism - ▁relief - ▁workforce - ▁sunday - ▁royal - ▁drown - ▁backyard - ▁racial - ▁henry - ▁mike - ▁prostate - ▁constraints - ▁spacecraft - ▁consist - ▁hydro - racy - ▁guest - ▁static - ▁coordinate - ▁dolphin - ▁abandoned - ▁pigs - ▁glasses - ▁15 - ▁eric - ▁washing - ▁lego - ose - ▁bubbles - ▁screening - ▁items - ▁nerd - ▁mega - ▁barriers - ▁powered - ▁pray - ▁damaged - ▁mathematicians - ▁sam - ▁rolling - ▁claims - ▁refuse - ▁attacked - ▁arrest - ▁bite - stein - ▁circuits - iff - ▁sketch - ▁liberat - ▁aspiration - ▁believing - ▁indonesia - ▁negotiation - ▁persuade - ▁temporary - ▁terribly - ▁heritage - ▁sufficient - ▁albert - ▁jaw - ▁ongoing - ▁gentleman - ▁norway - ▁estate - ▁fever - ▁passage - ▁ram - ▁occurs - ▁stopping - ▁blindness - ase - '80' - ▁statistic - ▁steven - ▁newspapers - ▁chess - ▁arise - ▁critic - ▁healing - ▁cookie - ▁cage - ▁emotionally - ▁inject - ▁equations - disc - ▁seeming - uous - ▁producer - ▁israeli - ▁peers - ▁transit - ▁profits - ▁classical - ▁plug - ▁republic - ▁constrain - ▁punish - ▁clarity - ▁deposit - ▁elegant - ▁geometry - ▁inclusive - ▁miserable - ▁tsunami - ▁defeat - ▁communicating - ▁ecological - ▁antenna - ▁bribe - ▁tongue - ▁rubber - ▁separation - ▁lord - ▁index - ▁sidewalk - ▁forgot - ▁alex - ▁jane - ▁owe - ▁visualization - ▁duck - ▁spain - ▁gaps - ▁candidates - ▁lowest - ▁fatal - ▁surgeons - ▁specialized - ▁hormone - ▁drone - ▁partial - ▁jean - ene - ▁measurements - ▁collaborate - ▁mil - ▁colored - ▁bucks - ▁alternatives - ▁rays - ▁shower - ▁glue - ▁infections - ▁cows - ▁float - ▁motivate - ▁mag - ▁ethnic - ▁acoustic - ▁ambitious - ▁beneath - ▁composition - ▁endangered - ▁flavor - ▁hawaii - ▁league - ▁marijuana - ▁paralyzed - ▁protocol - ▁trivial - ▁scheme - ▁blink - ▁steam - ▁remarkably - ▁carolina - ▁canopy - ▁moore - ▁assistance - ▁flick - ▁blah - ▁horizontal - ▁migration - ▁rover - ▁freely - ▁frontier - ▁detection - ▁invade - ▁amazed - ▁caves - ▁blown - ▁realization - fold - ▁incentive - ▁july - ▁sadly - '00' - ▁loans - ▁witnessed - ▁lion - ▁syrian - ▁raped - ▁staring - ▁christ - ▁rip - keeper - ▁activate - ▁stored - ▁counting - ▁afghan - ▁girlfriend - ▁introduction - ▁jupiter - ▁luxury - ▁membrane - ▁pregnancy - ▁straightforward - ▁troops - ▁ubiquitous - ▁interior - ▁whistle - ▁disabled - ▁tropical - ▁hawk - ▁retreat - ▁jellyfish - ▁tiger - ▁steer - ▁denial - ▁fox - ▁clinics - ▁refine - ▁diplomat - ▁nearest - ▁predictable - ▁uni - ▁mild - ola - ▁selected - ▁mammal - ▁shoe - ▁scott - ▁manufacturer - ional - ▁agents - ▁restaurants - ▁affairs - ▁attitudes - ▁sake - ▁adapted - ▁hut - ▁dare - ▁monitoring - ▁camps - ▁scenes - ▁cycl - ▁lanes - ▁cue - ▁dam - posted - ▁voted - ▁relatives - ▁dirt - ▁hated - ▁consult - ▁basketball - ▁beijing - ▁inevitably - ▁triangle - ▁wheat - ▁diving - ▁vacation - ▁cocaine - ▁impressive - ▁entry - ▁nick - ▁involve - ▁personalized - ▁beaten - ▁reject - ▁wandering - right - ani - ▁psychologist - ▁hacking - ▁amazingly - ▁charges - controversial - ▁unlock - ▁slower - ▁approaching - ▁olympics - ▁upward - ▁continents - ▁planes - ▁locations - ▁beam - ▁officials - ▁responded - ▁stones - mont - ▁accomplish - ▁boo - ▁corrupt - ▁rituals - ▁accuracy - ▁alexander - ▁diarrhea - ▁disabilities - ▁earliest - ▁emphasize - ▁pizza - ▁proposition - ▁substantial - ▁supreme - ▁territory - ▁despair - ▁overnight - ▁atlanta - ▁investigate - ▁summit - ▁geek - ▁nightmare - ▁flame - ▁carpet - ▁illustration - ▁harsh - ▁retirement - ▁disney - ▁chile - ▁stamp - ▁chains - ▁dung - ▁frequently - ▁revolutionary - ▁strain - ju - ▁nigerian - ▁australian - ▁screaming - ▁ships - ▁teaches - ▁prayer - ▁wore - ▁succeeded - all - ▁graphics - ▁pockets - ▁sixth - ▁qua - ▁pants - ▁explorer - ino - ▁reporting - ▁diagnose - ▁hormones - ▁singer - ▁extraordinar - state - ▁asteroids - ▁suburb - ▁bronx - ▁brooklyn - ▁diameter - ▁entropy - ▁equity - ▁leather - ▁millimeter - ▁phantom - ▁schedule - ▁synapse - ▁tanzania - ▁yourselves - ▁zealand - ▁hallucinat - ▁lonely - ▁saudi - ▁spiral - ▁stereotype - ▁tattoo - ▁dutch - ▁cotton - ▁soccer - ▁amateur - ▁insane - ▁preference - ▁hint - ▁accountability - ▁journalism - ▁distinction - ▁artwork - ▁assure - ▁progressive - ▁insert - ▁canvas - ▁dominated - ▁taxes - ▁messy - ▁headlines - ▁randomly - ▁consumed - ria - ▁assumed - ▁newton - ▁doubled - ▁astronomers - ▁readers - ▁visiting - ▁integrate - ▁deploy - ▁attach - ▁zoo - ▁kindness - ▁cartoons - ▁elder - ▁clinton - ▁cricket - ▁criteria - ▁democracies - ▁drunk - ▁glimpse - ▁immigration - ▁injured - ▁knife - ▁lobby - ▁proposal - ▁purple - ▁reinforce - ▁welfare - ▁harmony - ▁transcend - ▁multiply - gged - ▁seattle - ▁smiling - ▁ideology - ▁spike - ▁strict - ▁tutor - ▁negotiate - ▁echo - ▁employment - ▁traumatic - ▁lever - ▁phenomenal - ▁requirement - ▁arabic - ▁payment - ▁shipping - ▁managers - ▁refused - ▁comedy - ▁banking - ▁bust - ▁survivors - ▁actively - ▁regulation - ▁vessel - ▁biologist - varian - ▁revealed - ▁hop - ▁musician - cast - ▁nerves - ▁millennia - ▁predators - coming - proc - integration - ▁expressed - ▁wires - ▁cape - ▁charged - ▁errors - ▁crossed - ▁transforming - ▁howard - ▁drain - ▁poems - ▁prince - ▁abundance - ▁assault - ▁bounce - ▁closing - ▁destiny - ▁dopamine - ▁frightening - ▁frustration - ▁generating - ▁helmet - ▁jordan - ▁marathon - ▁november - ▁storytelling - ▁worldview - ▁spatial - alyzing - ▁cream - ▁mosque - ▁legislation - ▁vocal - ▁junk - ▁sperm - ▁minority - ▁texture - ▁dropping - ▁digest - ▁stolen - ▁nodes - ▁ipod - ▁alongside - ▁vice - ▁machinery - ▁tubes - ▁pathogen - ▁12 - ▁officers - ▁vegetables - ▁allies - hem - ▁radi - ▁sexually - ▁cope - ▁hoped - ▁demonstrated - ▁mathematician - ▁earning - ▁destroying - ▁laughed - hong - ographic - ▁germ - ▁sexy - ille - ▁employer - ▁regulations - ▁maxim - ▁compress - ▁advances - ▁desperate - flow - ▁phenomena - iling - significance - ▁arguing - ▁centimeter - ▁destination - ▁elevator - ▁envelope - ▁enzyme - ▁festival - ▁imagining - ▁intensity - ▁legitimate - ▁magnificent - ▁parkinson - ▁pencil - ▁philanthropy - ▁recognizing - ▁competing - ▁mandela - ▁accent - ▁proceed - ▁sensing - ▁liberia - ▁lifespan - ▁tribal - ▁indoor - ▁boundary - ▁elementary - ▁touching - ▁contradiction - ▁duty - ▁overwhelmed - ▁athletes - ▁voters - ▁hopeful - ▁airline - ▁playful - ▁realities - ▁statue - ▁sew - ▁rejected - ▁concentrated - ▁pea - ▁regularly - ▁pc - ▁optical - ▁contributed - ▁trusted - ▁chap - ▁resolve - ▁affecting - ▁stare - ▁delivering - can - ▁container - ▁scream - ▁mini - ▁drones - ▁assemble - ▁unc - ▁cube - '8' - ▁gui - ▁locally - ▁yell - ▁approved - ▁gandhi - ▁lincoln - ▁marrow - ▁migrant - ▁recycling - ▁sanitation - ▁schizophrenia - ▁unhappy - ▁catholic - ▁likelihood - ▁managing - ▁whisper - ▁couch - ▁jazz - ▁kevin - ▁isolation - ▁infectious - ▁scanning - ▁blur - ▁iranian - ▁flesh - ▁priority - ▁brazilian - ▁replacement - ▁johnson - ▁nuts - ▁probe - ▁toast - ▁fertilizer - ▁thrilled - ▁coke - ▁freeze - ▁namely - ▁tide - ▁biases - uction - ▁historically - ▁accomplished - ▁momentum - ▁worms - ▁grip - ▁boost - ▁robb - ▁password - gotta - used - ▁fired - ▁gained - ▁filling - ▁observations - ▁threaten - ▁aggregate - wood - ▁politically - ▁arrange - ▁matt - ▁consensus - ▁embarrassing - ▁gallery - ▁hybrid - ▁nonetheless - ▁prospect - ▁stimulate - ▁watson - ▁convenient - ▁diamond - ▁embryo - ▁gravitational - ▁historian - ▁innovate - ▁therapist - ▁storage - ▁retail - ▁funeral - ▁encode - ▁philosophical - ▁nelson - ▁retain - ▁swiss - ▁psychopath - ▁ecology - nal - ▁spinning - ▁transformative - ▁hometown - ▁mapped - ▁ngos - ▁consciously - ▁simplest - ▁publishing - ▁correctly - ▁repeated - ▁fibers - ▁neo - ▁slight - ▁verb - ▁blowing - ▁prey - ▁shy - ▁deadly - ima - ▁atom - ▁accelerate - ▁associate - ▁regulate - ▁trail - ▁chemist - ▁triple - nesthesia - ▁berkeley - ▁chemotherapy - ▁enemies - ▁evolving - ▁heavily - ▁horror - ▁identities - ▁parachute - ▁segment - ▁jungle - ▁behalf - ▁pancrea - ▁voyage - ▁sweep - ▁autistic - ▁electrodes - ▁koran - ▁scaling - ▁berlin - ▁defining - ▁priest - ▁bucket - ▁seawater - ▁gross - ▁filmmaker - ▁crispr - ▁harmful - ▁mosquitos - ▁processor - ▁cooperate - programmed - wind - '90' - power - ▁coastal - ▁broader - ▁skip - ▁whoever - ▁candle - ▁installed - ▁eve - ▁timing - ▁lap - ▁labeled - ▁heading - ▁encountered - ▁collapsed - ▁expanded - ▁mart - atur - ▁lane - ▁enjoying - ▁shortly - ▁cared - ▁veterans - ▁simpler - ▁insist - ▁tire - gras - ▁descend - ▁contributing - ▁elaborate - ▁expense - ▁glamorous - ▁neuroscientist - ▁recession - ▁sibling - ▁decent - ▁satisfy - ▁scaffold - ▁mankind - ▁brush - ▁legend - ▁swarm - ▁opposition - ▁outdoor - ▁apologize - ▁puppet - ▁derive - ▁apollo - ▁exotic - ▁waited - ▁preach - ▁dragon - ▁animate - ▁planned - ▁blast - ▁basement - ▁inherently - ▁breed - ▁prevention - ▁continuously - ▁skeptical - ▁buses - ▁handful - ▁breeding - ▁rig - ▁planted - ▁pills - ▁mentally - ▁corporation - ▁useless - ▁explode - ▁admire - ▁alpha - ▁artifact - ▁asleep - ▁batteries - ▁entrance - ▁grocery - ▁intimacy - ▁louis - ▁ministry - ▁monument - ▁neanderthal - ▁obsession - ▁philadelphia - ▁redefine - ▁exaggerat - ▁conquer - ▁odor - ▁hatred - ▁pirate - ▁hopeless - ▁kong - ▁iceberg - ▁admitted - ▁atheist - ▁coverage - ▁intent - ▁subjective - ▁naive - ▁adaptation - ▁utterly - ▁hello - ▁augmented - ▁typing - ▁irony - ▁ec - ▁earned - itis - ▁defi - ▁documented - ▁mutations - ▁migrate - ▁buck - ▁chimpanzee - ▁lawn - ▁shorter - ▁informal - ▁enjoyed - ▁cent - ▁dim - ▁representing - hole - ▁foi - ▁catching - ▁eradicate - ▁scare - ▁addict - ▁disconnect - ▁confirm - ▁concentrate - ▁returning - ▁forgive - ▁drama - ▁configuration - ▁convicted - ▁correlation - ▁cylinder - ▁injuries - ▁invitation - ▁literacy - ▁nitrogen - ▁nonviolent - ▁offspring - ▁receptor - ▁shanghai - ▁singapore - ▁surgical - ▁terrain - ▁committee - ▁memorize - ▁creep - ▁slime - ▁northeast - ▁nasty - ▁intervene - ▁costume - ▁jeff - ▁wrapped - ▁pipeline - ▁calories - ▁bloom - ▁selective - ▁chef - ▁iconic - blew - ▁dick - axe - ▁rust - ▁losses - ▁practically - ▁candy - ▁slums - ▁andrew - ▁contained - ▁sooner - ▁parenting - ▁olympic - ▁rob - ache - ▁verbal - rial - ▁fro - ▁align - ▁manifest - ▁entertain - ▁kilo - ▁achieving - ▁biomass - ▁conceive - ▁congestion - ▁correlate - ▁delicious - ▁enrich - ▁humility - ▁hurricane - ▁impulse - ▁indicator - ▁libraries - ▁offshore - ▁refrigerator - ▁saturday - ▁summarize - ▁taboo - ▁underestimate - ▁vivid - ▁steep - ▁trafficking - ▁smash - ▁onstage - ▁delhi - ▁selves - ▁turtle - ▁idiot - ▁jones - ▁trunk - ▁assess - ▁juice - pan - ▁priorities - ▁usage - ▁livestock - ▁virginia - ▁crab - ▁olive - ▁orientation - ▁documentary - ▁ireland - ▁roman - ▁fade - ▁modest - ▁supportive - ▁slip - ▁fees - ▁passes - ▁vibration - ▁tin - ▁youngest - ▁flipp - ▁deny - ▁climbed - ▁rear - field - ix - stand - ▁targeted - ▁pra - ▁humble - ▁jumping - ▁criticiz - ▁decid - ▁flee - ▁zip - ▁explored - ▁prisoner - ▁digg - ▁import - ▁autonomy - ▁fractal - ▁geography - ▁glucose - ▁greece - ▁hierarchy - ▁millennium - ▁qaeda - ▁stadium - ▁album - ▁deficit - ▁fisheries - ▁genomic - ▁transistor - ▁warehouse - ▁bitcoin - ▁comparing - ▁workshop - ▁wrist - ▁assuming - ▁april - ▁greet - ▁jews - ▁terminal - ▁obese - ▁depict - ▁shining - ▁tiles - ▁assistant - ▁redwood - ▁stephen - ▁rejection - ▁cafe - ▁presidential - ▁farther - ▁suspended - ▁prosecutor - ▁euro - ▁accountable - ▁reside - ▁elect - ▁overhead - ▁deployed - ong - ila - ▁rail - eb - ▁empowering - ▁implemented - ▁lifted - ▁privileged - ▁popp - ▁tru - ▁visually - ▁brad - ▁mutation - ▁spare - ▁shifted - ▁responding - ▁item - ▁bern - ▁distribute - ▁spine - ▁utter - ▁statistical - scribe - ▁astronomy - ▁atmospheric - ▁catastrophic - ▁cellphone - ▁circular - ▁columbia - ▁detroit - ▁encyclopedia - ▁execution - ▁famine - ▁headquarters - ▁holiday - ▁liberty - ▁parameter - ▁peculiar - ▁premise - ▁preparation - ▁regenerate - ▁sequencing - ▁switzerland - ▁yemen - plankton - ▁optimize - ▁venus - ▁trading - ▁minimal - ▁toxin - ghett - ▁superpower - ▁systematically - ▁canyon - ▁cardboard - ▁arrangement - ▁romance - ▁epi - ▁occurring - ▁boots - ▁generic - ▁approximately - ▁transmitted - ▁slam - ▁acceptance - ▁rebel - ▁rick - ▁sadness - ▁rode - ▁luc - ▁alert - ▁merge - ▁squeeze - ▁bacterial - ▁armor - ▁threatening - ▁operator - lau - ▁confronted - ▁masses - ▁discussing - ▁talented - ▁celebrated - ▁erase - arian - ▁reader - ▁antibiotic - ▁argued - town - ▁matched - ▁asset - ▁respected - ▁peru - ▁shout - ▁tempt - ▁systematic - ▁bonobo - ▁copenhagen - ▁corridor - ▁dismiss - ▁happily - ▁observing - ▁practicing - ▁protocell - ▁singular - ▁storyteller - ▁tedtalk - ▁therapies - ▁thrust - ▁undergraduate - ▁inhibit - ▁insulin - ▁overlook - ▁prioritize - ▁sierra - ▁applies - ▁essay - ▁forecast - ▁intact - ▁worship - ▁friction - ▁rigid - ▁headache - diac - ▁unclear - ▁disruption - ▁grandma - ▁midst - ▁jona - ▁valve - ▁edward - ▁wooden - ▁economically - ▁nail - ▁bro - ▁accurately - ▁posed - ▁brit - ▁cheer - ▁reaches - pressed - ▁taxi - ▁attracted - pend - ▁traditionally - list - ▁belonging - ▁accepting - ▁yu - ▁characteristic - ▁hans - ▁instruction - holder - ▁inhabit - ▁pee - '7' - vian - war - water - burg - ▁wor - ▁acquire - una - ▁stigma - ▁automatic - ▁automated - ▁calendar - ▁commodity - ▁endeavor - ▁ghost - ▁glamour - ▁jersey - ▁manuscript - ▁memorial - ▁michigan - ▁netherlands - ▁reclaim - ▁revelation - ▁satisfaction - ▁stabilize - ▁translator - ▁abraham - ▁anxious - ▁cosmos - ▁cattle - ▁faint - ▁alike - ▁uncover - ▁tempo - ▁desktop - ▁nike - ▁slope - ▁banana - ▁hostile - ▁wool - ▁kilogram - ▁robin - ▁persist - ▁chew - ▁firing - ▁browser - ▁stir - ▁exit - ▁governor - ▁acquired - ▁charter - ▁bud - ▁helpless - ▁calculated - ▁mum - ▁analyzed - ▁pouring - ▁melting - ▁considering - itz - eur - ▁wipe - ▁enabl - ▁bench - ▁dip - ▁phd - ▁treasure - ▁examine - ▁affair - ▁stressed - ▁slum - ▁wag - ▁crap - ▁sticking - ▁microscop - ▁stereo - ▁functioning - ▁crush - ▁tract - wash - ference - ▁orient - ▁abundant - ▁ambulance - ▁bamboo - ▁calculus - ▁consuming - ▁crude - ▁disadvantage - ▁fantasy - ▁feynman - ▁frustrating - ▁imperative - ▁involving - ▁lesbian - ▁module - ▁narrator - ▁participating - ▁pervasive - ▁prejudice - ▁procrastinat - ▁reproductive - ▁resilient - ▁skeletal - ▁unemployment - ▁unfair - ▁modified - ▁prohibit - ▁harass - ▁goodbye - ▁exceed - ▁spear - ▁freezing - ▁polite - ▁racing - ▁collar - ▁hammer - ▁debris - ▁libya - ▁dispers - ▁philip - ▁presum - ▁chuck - ▁scholarship - ▁ultra - ▁gaze - ▁inspect - ▁bonus - ▁acres - ▁chasing - ▁emergence - ▁courageous - ▁nsa - ▁greenland - ▁princess - ▁dishes - ▁pac - play - ▁progression - ▁recycled - ▁asthma - ▁mob - ▁conceptual - ▁converted - ▁tourist - ▁legally - ▁massively - down - ▁longest - ▁fuse - ▁justify - ▁folding - ▁dreaming - ▁nicely - ▁survivor - ▁tenth - ▁regional - ▁embrac - quin - ▁goodness - mixing - ▁plea - ▁rag - ▁casual - ▁perpetrat - embodied - ▁adolescent - ▁blueprint - ▁catastrophe - ▁ceremony - ▁frequencies - ▁gorgeous - ▁grief - ▁nairobi - ▁neglect - ▁nevertheless - ▁promoting - ▁regulator - ▁sovereign - ▁tokyo - ▁widespread - ▁chimps - ▁extensive - ▁maneuver - ▁monday - ▁crater - ▁bosnia - ▁buzz - ▁cinema - ▁suitcase - ▁coloniz - ▁shove - ▁throat - ▁leonardo - ▁referred - ▁bionic - ▁slipp - ▁cement - ▁manifestation - ▁replacing - ▁copies - ▁scarce - ▁emma - ▁carries - ▁scientifically - ▁protective - ▁assist - ▁disk - ▁checking - loom - ▁warmer - ▁activated - '50' - ▁crank - ▁fog - ▁examin - '20' - lapping - ▁capitalist - ▁ancestor - ▁accounting - mino - ▁employee - ▁gig - ▁peel - ▁simulate - ▁recycle - ▁infect - ▁abandon - ▁fixing - ▁lee - ▁compute - ▁impose - ▁submit - ▁differ - ean - ▁gentle - ▁conserv - ▁abroad - ▁absurd - ▁accelerating - ▁aristotle - ▁cockroach - ▁cocktail - ▁competitor - ▁evaluate - ▁existential - ▁fluorescent - ▁gabby - ▁generosity - ▁innovator - ▁joseph - ▁maintenance - ▁mcdonald - ▁opponent - ▁phoenix - ▁province - ▁renaissance - ▁shakespeare - ▁staircase - ▁stimulation - ▁ultrasound - ▁unintended - ▁alphabet - ▁distress - ▁shatter - ▁unleash - ▁scroll - ▁somalia - ▁divine - ▁tesla - ▁dvd - ▁equipped - ▁noisy - ▁escalat - ▁shaking - ▁resemble - ▁yale - ▁alice - ▁disruptive - ▁blade - ▁deception - ▁empowerment - ▁pillar - ▁stasi - lab - ▁loot - ▁lobe - ▁disturbing - ▁mandate - ▁declared - ▁inmates - ▁sexuality - ▁uber - ▁uh - ▁macro - ▁observer - ara - ▁wishes - ▁stealing - ▁strengthen - ▁settled - ▁publication - lessness - ▁critically - ▁lover - ▁mist - ▁rolled - ▁poo - ▁pale - ▁obstacle - ▁penguin - ▁politician - ▁flock - ▁prim - ▁painter - ▁poke - ▁preventing - ▁measurement - ▁headline - ▁heroic - ▁offend - hav - ▁hung - ▁acute - ▁avatar - ▁avenue - ▁buddha - ▁cambodia - ▁destructive - ▁exquisite - ▁fireflies - ▁holocaust - ▁introvert - ▁juvenile - ▁laboratories - ▁longevity - ▁metropoli - ▁monterey - ▁mumbai - ▁obligation - ▁obsolete - ▁penalty - ▁pesticide - ▁portray - ▁staggering - ▁surgeries - ▁suspicious - ▁violation - ▁morph - ▁swedish - ▁salmon - ▁dough - ▁blessed - ▁glove - ▁ptsd - ▁knit - ▁violate - ▁ashamed - ▁pearl - ▁lottery - ▁secular - ▁derek - ▁cabinet - ▁appreciation - ▁inflation - ▁criticism - ▁swap - ▁blob - ▁noticing - ▁fame - ▁adoption - ▁baseline - ▁epic - ▁injection - ▁medic - ▁dune - ▁disappointed - ▁stain - dom - ▁fitness - ▁keen - bound - stick - band - ▁discussed - ▁rooted - ▁riding - ▁behold - ▁donated - ▁carl - ▁vegetable - ▁expectation - ▁prosecut - ▁excess - grams - ▁uncertain - ▁temper - ▁aluminum - ▁blockchain - ▁dysfunction - ▁infrared - ▁intriguing - ▁meditation - ▁multinational - ▁reproduction - ▁supernova - ▁symphony - ▁wavelength - ▁curtain - ▁gamma - ▁mixture - ▁shepherd - ▁surrender - ▁tunisia - ▁chord - ▁stairs - ▁entities - sembl - ▁coordination - ▁celebration - ▁logging - ▁sync - ▁luther - ▁assessment - ▁mature - ▁potent - ▁textile - ▁salary - ▁penis - ▁victory - ▁profitable - ▁colorful - ▁repeatedly - ▁viable - ▁regenerati - ▁quiz - ▁cleaner - ▁trailer - ▁wiped - ▁matches - ▁dorm - ▁apes - ▁remix - ▁ramp - ▁bitter - ▁veil - ▁sue - ▁financially - ▁emergen - ▁exceptional - loid - ▁hence - ▁reminder - ▁demanding - ▁trait - berry - ▁wander - ▁guilt - ▁altogether - ▁arizona - ▁boyfriend - ▁catalog - ▁cholera - ▁concussion - ▁conviction - ▁daniel - ▁defect - ▁evident - ▁extrovert - ▁favela - ▁genital - ▁georgia - ▁gigantic - ▁irrational - ▁jerusalem - ▁lebanon - ▁metabolism - ▁peanut - ▁propaganda - ▁releasing - ▁sentiment - ▁steady - ▁tangible - ▁thailand - ▁ultraviolet - ▁grabbed - ▁sudan - ▁problematic - ▁prophet - ▁feather - ▁alliance - ▁slept - ▁cease - ▁harry - ▁saint - ▁coincidence - ▁hazar - ▁versa - ▁wives - ▁clone - ▁psychiatrist - ▁ridge - ▁delete - ▁fda - ▁thankfully - ▁weakness - ▁acceptable - ▁grim - ▁whip - ound - ▁vibrat - ▁maximize - ▁enormously - roots - ▁reuse - ▁noble - ▁treaty - ▁blo - ▁articulate - ▁beef - bility - ▁menu - ▁ross - ▁marked - ▁arabia - ▁heroin - ▁snap - ▁ingredient - ▁folk - ▁ngo - ▁unfortunate - '9' - ▁flap - ▁encrypt - ▁diminish - ▁eternal - ▁guidance - ▁incremental - ▁integrity - ▁kennedy - ▁metabolic - ▁monarch - ▁mozart - ▁neocortex - ▁obscure - ▁pepper - ▁promising - ▁pursuing - ▁rectangle - ▁slaughter - ▁superhero - ▁tuberculosis - ▁utility - ▁vocabulary - ▁continual - ▁fishermen - ▁superior - ▁thorough - ▁backpack - ▁grape - ▁ladder - ▁shaping - ▁decorati - ▁enslav - ▁choir - ▁helium - ▁fetus - ▁betray - ▁shade - ▁erupt - ▁petition - ▁rival - ▁adulthood - ▁insult - ▁urgent - ▁manipulation - ▁visitors - ▁extremist - ▁adaptive - ▁potter - entrepreneurship - ▁magician - ▁lump - ▁fungi - ▁ease - ▁16 - ▁willingness - ▁lightning - ette - ▁firsthand - ▁inherited - ▁intentional - ▁secondary - ▁exploded - lux - ▁quietly - ▁tuned - ▁eighth - ▁successes - occupied - ▁prescribe - ▁kicked - ▁skilled - ▁seventh - ▁lately - ternity - ▁operated - ▁embraced - ▁detected - ▁outward - ▁symbolic - ini - ▁barr - ▁lama - ▁mobilize - ▁kite - ada - ▁augment - ▁numb - ▁victor - ▁circulat - ▁advertise - ▁adolescence - ▁beneficial - ▁benjamin - ▁buddhist - ▁calculator - ▁caregiver - ▁celebrity - ▁convincing - ▁dissolve - ▁feminism - ▁frightened - ▁fuzzy - ▁mexican - ▁midnight - ▁numerous - ▁occupation - ▁paralysis - ▁pennsylvania - ▁prevalence - ▁satisfied - ▁specimen - ▁surviving - ▁terrific - ▁triumph - ▁visceral - ▁vitamin - ▁alaska - ▁anthropologist - ▁subsequent - ▁invasion - ▁tibet - ▁plato - ▁copied - ▁minimize - ▁grind - ▁cooperative - ▁polymer - ▁gaza - ▁checklist - ▁lawsuit - ▁minor - ▁ironically - ▁consultant - ▁copper - ▁tagged - ▁accomplishment - ▁leone - ▁hallway - imposed - ▁ironic - ▁deliberately - ▁indus - ▁spontaneously - ▁liar - ▁diego - ▁statistically - ▁max - ▁torn - ▁consideration - ▁sometime - ▁riches - ▁implanted - ▁honesty - ▁tricky - ▁recovered - ▁biologically - ▁fleet - ▁mama - ▁seti - mann - ▁honored - ▁carol - ▁maria - anne - ▁eh - ▁typ - ▁retire - ▁inherit - ▁symptom - ▁cro - '6' - ▁apparent - ▁counsel - ▁hacker - ▁anniversary - ▁asylum - ▁caribbean - ▁certificate - ▁compliment - ▁displaced - ▁embarrassed - ▁flexibility - ▁generator - ▁glorious - ▁improvise - ▁incarceration - ▁mammoth - ▁nuclei - ▁physiology - ▁prostitution - ▁relativity - ▁unpredictable - ▁athletic - ▁authoritarian - ▁bulk - ▁improvisation - ▁invasive - ▁slogan - ▁honeybee - ▁scatter - ▁brainstorm - ▁guinea - ▁overseas - ▁snail - ▁spouse - ▁14 - ▁draft - ▁resort - ▁ripple - ▁denied - ▁diplomacy - ▁shred - ▁haunt - ▁rehabilitation - ▁wright - ▁snowden - ▁passport - ▁missile - ▁blogger - ▁banned - ▁laden - ▁nazi - ▁towel - ▁authenticity - ▁exploitation - ▁proton - ▁kidding - ipping - ▁ibm - ▁affection - ▁composer - ▁flush - ▁runner - ▁controller - ▁executed - ▁confined - ▁digitize - ▁pete - ▁confirmed - ▁remotely - ▁bali - ▁miniatur - ▁17 - ▁everest - ▁transported - ▁exo - ▁uniquely - ▁sticker - ▁pul - fire - ▁cub - spectr - ▁oppos - hic - ▁hind - ▁lest - ▁mite - ▁analys - responsibilities - ▁census - ▁christopher - ▁communal - ▁curator - ▁delicate - ▁facilitate - ▁horrific - ▁incomplete - ▁inexpensive - ▁kindergarten - ▁masculin - ▁michelangelo - ▁patience - ▁quantity - ▁sensitivity - ▁submersible - ▁zimbabwe - dispens - ▁smuggl - ▁guitar - ▁chaotic - ▁southeast - ▁connectome - ▁montana - ▁mysteries - ▁pricing - ▁nepal - ▁blessing - ▁finite - ▁commute - ▁philanthropi - ▁bypass - ▁squid - ▁carrot - ▁corps - ▁transferred - ▁railway - ▁stiff - ▁simon - ▁citizenship - ▁playground - ▁charli - ▁waking - ▁forbid - ▁scanned - ▁easie - ▁greed - ▁guardian - ▁pine - ▁newborn - ▁distraction - ▁coup - ▁rna - ▁admir - ▁joel - ▁reasonably - ▁devoted - ▁impressed - ception - ▁lenses - ▁degraded - ▁certainty - ▁assigned - ▁exponentially - ▁tightly - ▁abstraction - ▁dug - ▁acknowledg - ▁ferr - ▁addicted - ▁von - ule - ▁inflat - ▁mastery - case - ▁nico - ▁murdered - ▁kim - ▁sticky - ▁attending - ▁rescued - ▁loaded - hold - ▁backup - ▁stumble - ▁rica - walk - virus - ▁rage - ▁fled - shine - ▁expose - ▁dread - ature - ▁intend - ▁athlete - ▁enforce - ▁interfere - patriot - ▁accommodate - ▁adequate - ▁algebra - ▁altruism - ▁ambassador - ▁austria - ▁biofuel - ▁commerce - ▁continuum - ▁controversy - ▁craig - ▁dictionaries - ▁elbow - ▁equator - ▁grammar - ▁hypothetical - ▁inefficient - ▁jeopardy - ▁kosovo - ▁mycelium - ▁niece - ▁puberty - ▁savanna - ▁societal - ▁sprawl - ▁submarine - ▁sylvia - ▁unemployed - ▁violin - ▁embark - ▁envision - ▁immerse - ▁ebay - ▁polish - ▁praise - ▁masterpiece - ▁lagos - ▁dissect - ▁locomotion - ▁electromagnetic - ▁confusing - ▁trek - ▁hubble - ▁populate - ▁tennis - ▁submitted - ▁polarization - ▁outrage - ▁constructive - ▁token - ▁iteration - ▁packet - ▁ohio - ▁jealousy - ▁medal - ▁dwell - ▁dude - ▁crows - ▁mola - ▁illnesses - ▁anytime - ▁kenyan - cular - ▁neurological - ▁accelerated - ▁mash - ▁followers - ▁broadly - ▁praying - ▁washed - ▁knocked - ▁independently - ▁efficiently - ▁receiver - ▁glu - ▁prototyp - ▁humbl - ▁recreate - ▁overlap - ▁incorporate - ▁cultivate - ▁eco - ▁hatch - ▁dealer - mediate - ida - dict - ▁straw - hri - ▁inherent - location - communica - ▁cabin - ▁orphan - ▁kilometer - ▁dictator - ▁bureaucrat - ▁activism - ▁annoying - ▁catalyst - ▁combining - ▁confusion - ▁embassy - ▁exclude - ▁gasoline - ▁hobby - ▁imperfect - ▁indication - ▁internship - ▁intrigued - ▁kansas - ▁leopard - ▁lgbt - ▁mortgage - ▁nucleus - ▁powder - ▁quantities - ▁seafood - ▁solidarity - ▁stunning - ▁supercomputer - ▁symmetrical - ▁technician - ▁thriving - ▁unacceptable - ▁voluntary - ▁attorney - ▁fierce - ▁modification - ▁orleans - ▁oyster - ▁shield - ▁shortcut - ▁cairo - ▁lifelong - ▁geometric - ▁infinity - ▁shuttle - ▁geological - ▁permit - ▁tinker - ▁worries - ▁causal - ▁bunk - ▁jewelry - ▁norden - ▁breakdown - ▁wax - ▁grey - ▁powerpoint - ▁urbanization - ▁deadline - ▁fairness - ▁upgrade - ▁coding - ▁composed - ▁obtained - ▁flor - ▁lapse - ▁difficulties - ▁swamp - ▁retired - regulated - ▁compressed - ▁firmly - ▁altered - ▁guaranteed - ▁shipp - ▁costa - ▁contaminat - ▁bark - ologies - ▁intel - ▁migrat - ▁publisher - ▁replica - ▁collaborat - ibly - life - '60' - ▁seize - ▁semina - shift - ▁spontaneous - ▁potato - storing - placing - ▁hive - ▁abnormal - ▁antidepressant - ▁argentina - ▁astronomical - ▁bacterium - ▁colombia - ▁complication - ▁disposal - ▁dispute - ▁enthusiasm - ▁entitled - ▁epilepsy - ▁gratitude - ▁implicit - ▁ingenuity - ▁introducing - ▁jefferson - ▁lhc - ▁loneliness - ▁mundane - ▁nonsense - ▁perceptual - ▁plague - ▁premature - ▁quantitative - ▁sapiens - ▁sediment - ▁servant - ▁sulfide - ▁technologist - ▁testosterone - ▁thylacine - ▁yeast - ▁arthur - ▁companion - ▁delusion - ▁modify - ▁optimal - ▁unseen - ▁cyrus - ▁duration - ▁shelves - ▁radar - ▁sarah - ▁synchronize - ▁eager - ▁burial - ▁utopia - ▁jason - ▁font - ▁battlefield - ▁extremism - ▁verge - ▁oven - ▁misery - ▁princeton - ▁bolt - ▁discard - centric - ▁establishment - ▁danny - ▁temptation - ▁wiring - ▁monk - ▁huh - ▁reconstruction - ▁wetland - ▁celebrating - ▁stressful - ▁parade - ▁lesion - ▁preventable - ▁dragge - ▁simulated - genic - ▁shouting - ▁assembled - ▁mating - classified - ▁touches - ogram - ▁operational - ▁iraqi - ▁substitute - ▁entrepreneuri - ▁tub - ▁semi - ips - borne - ▁calv - ▁assign - ▁seasonal - ▁cooper - ▁frequent - ▁literal - conducting - ▁sculpt - itude - ▁angiogenesis - ▁arbitrary - ▁broccoli - ▁cathedral - ▁cognition - ▁comparable - ▁determination - ▁feminine - ▁graffiti - ▁guerrilla - ▁impaired - ▁kepler - ▁lightbulb - ▁malnutrition - ▁mediterranean - ▁monsoon - ▁motorcycle - ▁muhammad - ▁octopus - ▁oklahoma - ▁penetrate - ▁portugal - ▁provision - ▁railroad - ▁roosevelt - ▁seizure - ▁simulator - ▁starbucks - ▁surplus - ▁tuesday - ▁turbine - ▁doubling - ▁larry - ▁psyche - ▁choreograph - ▁depressing - ▁resign - point - ▁racist - ▁wreck - ▁bleeding - ▁scope - ▁underpin - ▁irrita - ▁humankind - ▁intricate - ▁siberia - ▁warfare - ▁senate - ▁crust - ▁toronto - ▁immersi - ▁dentist - ▁inviting - ▁crumb - ▁carrier - ▁crises - ▁decode - ▁preferred - ▁coalition - ▁psychiatric - ▁timber - ▁encryption - ▁jacket - ▁undermine - '3' - ▁posture - ▁commandment - ▁litter - ▁portable - ▁slot - ▁possession - ▁explicitly - ▁dictate - ▁marvel - ▁sketches - human - ▁induced - ▁freeway - ▁supplie - ▁elevated - ▁accused - ▁distracted - ▁fred - ▁countryside - ▁perfection - mail - ▁consistently - ▁mantra - ▁verse - ▁disconnected - ▁insisted - ▁weave - ▁fulfilling - ▁nitr - ▁bj - ▁poop - ▁curl - crow - iza - ▁unite - osis - resolved - ▁commander - ▁mach - ▁rippe - ▁rug - ▁dye - ▁heartbreak - ▁stole - ▁compass - ▁amsterdam - ▁amygdala - ▁approval - ▁auditorium - ▁bandwidth - ▁benign - ▁bhutan - ▁commodities - ▁condemn - ▁connecticut - ▁declining - ▁denmark - ▁edinburgh - ▁evaluation - ▁gadget - ▁illuminati - ▁namibia - ▁nanotechnology - ▁preservation - ▁responsive - ▁symmetries - ▁unnecessary - ▁beloved - ▁colonial - ▁histories - ▁vector - ▁dalai - ▁syringe - ▁wolf - ▁seamless - ▁ideological - ▁hiring - ▁shack - ▁championship - ▁marvelous - ▁android - ▁porn - ▁gait - ▁chemo - ▁attachment - ▁grassland - ▁tilt - ▁blanket - ▁circulation - ▁tasmanian - ▁billboard - ▁clinician - ▁turk - ▁recommendation - ▁tolerate - ▁expressive - ▁reflective - ummy - ▁ballet - ▁norman - ▁infer - ▁randomness - ▁unified - ▁mock - ▁jh - ▁shortage - ▁analytical - ▁slate - ▁cartoonist - proof - ▁clap - ▁crushed - ▁pest - writing - ▁booth - ▁tacti - ▁upright - ▁misuse - ▁manipulat - ▁bake - ▁vest - ▁ensu - ▁distract - ▁dominate - ▁accus - vac - ▁northwest - ▁collide - counterintuitive - disciplinary - disproportionate - ▁accompanied - ▁alabama - ▁armstrong - ▁calculating - ▁carnegie - ▁collagen - ▁communist - ▁comprehensive - ▁constitute - ▁crocodile - ▁cruise - ▁discomfort - ▁hamburger - ▁imaginary - ▁insecure - ▁irrelevant - ▁joshua - ▁linux - ▁medieval - ▁plaque - ▁rainfall - ▁recipient - ▁rediscover - ▁reimagin - ▁simplistic - ▁snapshot - ▁stewart - ▁sydney - ▁transgender - ▁trustworthy - ▁velocity - ▁venue - ▁wrestle - keith - ▁abusive - ▁dallas - ▁enroll - ▁hollow - ▁indirect - ▁packaging - ▁pharmac - ▁regain - ▁furthermore - ▁negotiating - ▁rotating - ▁withdraw - ▁physiological - ▁bioluminescence - ▁lewis - ▁outfit - ▁knot - ▁juggl - ▁delta - ▁resonate - ▁claw - ▁anchor - ▁julie - ▁scout - ▁pollute - ▁mustache - ▁confession - ▁donald - ▁entity - ▁suburban - ▁fluctuation - ▁puff - ▁novelty - ▁ranging - ▁exile - ▁erect - ▁habitable - ▁teddy - ▁spaceship - ▁wallet - ▁coca - ▁beaches - ▁feat - ▁thro - ▁churches - ▁daylight - ▁nurtur - ▁13 - craft - ▁startup - culture - ▁democratize - characterized - ▁meaningless - ▁mali - lisa - ▁nurture - lifting - ▁intercept - ▁possess - ▁jelly - ▁exclusion - ▁vapor - ▁admission - ▁amplify - ▁astrolabe - ▁baltimore - ▁buffet - ▁cerebral - ▁circum - ▁contributor - ▁dependence - ▁devastated - ▁enlightenment - ▁epidemiolog - ▁epiphany - ▁evaporate - ▁fracture - ▁fundraising - ▁galapagos - ▁laundry - ▁marginalized - ▁massachusetts - ▁norwegian - ▁oregon - ▁penicillin - ▁pittsburgh - ▁promotion - ▁remittance - ▁reservation - ▁resonance - ▁rhetoric - ▁senator - ▁unequal - ▁unhealthy - ▁urgency - ▁volcanic - ▁chalk - ▁glory - ▁jamaica - ▁jerry - ▁distill - ▁toddler - ▁prevail - ▁scandal - ▁lamb - ▁programmable - ▁myriad - ▁vague - ▁scotland - ▁troll - ▁holistic - ▁notebook - ▁eradication - ▁aerial - ▁photons - keep - ▁mythology - ▁deployment - ▁distinctive - ▁involvement - ▁hike - ▁tourism - ▁flex - ▁grit - ▁johnny - ▁attraction - ▁personalities - ▁curb - ▁madness - rained - ▁warren - ▁suey - ▁gland - ▁tuck - ▁paulo - ▁hotter - ▁lick - ▁seren - ▁limitation - ▁chok - chester - mond - structure - ▁cancel - ▁differentiate - contacted - ▁brut - utu - jury - commissioned - ▁excite - ▁geographical - ▁execute - ▁confine - ▁calci - ▁discriminat - ▁francis - ▁elastic - ▁homosexual - ▁rhino - ▁adjacen - ▁adversity - ▁biotechnology - ▁cardiovascular - ▁cartilage - ▁censorship - ▁conspiracy - ▁dwarf - ▁energetic - ▁enlightened - ▁equilibrium - ▁excellence - ▁explosive - ▁facade - ▁faculty - ▁foremost - ▁galleries - ▁hectare - ▁humiliation - ▁hypotheses - ▁inappropriate - ▁integrating - ▁kyoto - ▁limestone - ▁livelihood - ▁migraine - ▁millisecond - ▁oecd - ▁preschool - ▁pristine - ▁propeller - ▁rainbow - ▁reflex - ▁relentless - ▁revenge - ▁rigorous - ▁secrecy - ▁sensible - ▁suicidal - ▁supplement - ▁taiwan - ▁zambia - ▁capsule - ▁ninth - ▁palace - ▁starvation - ▁tackling - ▁widow - ▁wilderness - ▁gorilla - ▁lazy - ▁nursing - ▁swear - ▁000 - ▁classmates - ▁imitate - ▁instability - ▁forehead - ▁compact - ▁verify - ▁discourse - ▁loyalty - ▁standpoint - ▁outrageous - ▁victorian - ▁obey - ▁paragraph - aternal - ▁fertility - ▁shallow - ▁incidentally - ▁roger - ▁innate - ▁darwinian - ▁advancement - ▁ballot - ▁void - ▁daddy - ▁queer - ▁chill - ▁collider - ▁anderson - ▁skype - ▁forcing - ▁conductor - ▁cmu - ▁strongest - ▁susan - ▁underway - ▁jf - ▁petal - ▁reconnect - ▁cola - ▁conception - ▁endure - ▁contractor - ▁recreation - ▁joyful - ▁phil - ▁algorithmic - ▁sewer - ▁fist - scale - ▁100 - suit - ▁milo - shaw - ▁clam - ▁analytic - ▁interconnect - ▁slap - ▁specialize - ▁confess - osphere - ▁biodegrad - ▁acceleration - ▁acronym - ▁apparatus - ▁balancing - ▁coherent - ▁colorado - ▁dragonflies - ▁fingertips - ▁hippocampus - ▁investigating - ▁janitor - ▁kiribati - ▁mammogram - ▁miraculous - ▁propulsion - ▁provoke - ▁reconcile - ▁serotonin - ▁superstar - ▁synthesis - ▁taxpayer - ▁varieties - ▁vegetation - ▁vicious - ▁biotech - ▁categoriz - ▁enthusiastic - ▁lettuce - ▁mutant - ▁shuffle - ▁varied - ▁applaud - ▁unstable - ▁dominance - ▁scalable - ▁sword - ▁plunge - ▁absent - ▁finland - ▁subtract - ▁dylan - ▁cheek - ▁comedian - ▁uprising - ▁dictatorship - ▁susceptible - ▁bombsight - ▁reckless - ▁agile - ▁restrictions - ▁orphanage - ▁anatomical - ▁panama - ▁excessive - ▁portland - ▁persistent - ▁franklin - ▁dividend - ▁urine - ▁blogging - ▁strive - ▁circuitry - ▁merit - ▁bluefin - ▁navy - ▁roommate - ▁visionary - ▁plugg - ▁portal - ▁shave - ▁extraction - ▁fab - ▁mustard - ▁meantime - ▁latino - ▁veteran - ▁implication - ▁actua - egan - cock - ▁revolutionize - chemical - screen - worth - ▁tab - ▁poetic - ▁stumbl - marin - effectiveness - ▁perpetu - ▁patholog - ▁vibrate - ▁arriv - ▁pub - ▁strat - ripping - ▁strap - ▁thrill - ▁southwest - ▁archaeolog - ▁volcano - ▁renew - ▁chick - galactic - orthodox - ▁aquatic - ▁babbage - ▁bioluminescent - ▁caltech - ▁divergence - ▁embarrassment - ▁ethanol - ▁galileo - ▁gamble - ▁happiest - ▁hurdle - ▁illuminate - ▁imaginative - ▁lightweight - ▁multiplic - ▁municipal - ▁necessit - ▁oppression - ▁pantheon - ▁philippines - ▁preserving - ▁samuel - ▁unimaginable - ▁vibrant - ▁virtuous - ▁wizard - ▁bureau - ▁erotic - ▁sleeve - ▁venice - ▁automation - ▁mangrove - ▁sociologist - ▁waterfall - ▁stratosphere - ▁coworker - ▁gently - ▁notorious - ▁fetch - ▁sprint - ▁behaving - ▁rotation - ▁stink - ▁sponge - ▁auditory - ▁pledge - ▁pupp - ▁stakeholder - ▁nathan - ▁prompt - ▁hmm - ▁tether - ▁excel - ▁propagat - ▁serial - ▁nomad - ▁spawn - ▁comprise - ▁moderate - ▁ideo - ▁rubble - ▁soundscape - ▁symbio - ▁ruling - ▁martian - quart - ▁aspire - ▁forgiveness - ▁payload - ▁peek - ▁beast - ▁volcanoes - ▁tumbl - ▁outlier - ▁functionality - ▁similarities - ▁chin - ▁boson - ▁rally - ▁hover - brook - ▁navigation - ▁hindu - ▁coaches - ▁climbers - ▁flute - ▁uran - fusing - ▁tailor - ▁wil - ▁islamist - skies - ▁esp - ▁jewel - ▁fertil - tended - paid - strom - ▁radioactiv - ometer - ▁separat - ▁announce - ▁amaze - ▁explicit - ▁remark - ▁kidnap - scape - ▁accumulation - ▁anecdote - ▁aquarium - ▁arduino - ▁bangalore - ▁barefoot - ▁capacities - ▁cascade - ▁certified - ▁composite - ▁cybercriminals - ▁daunting - ▁dislike - ▁doctrine - ▁elizabeth - ▁emphasis - ▁erosion - ▁exoplanet - ▁eyebrow - ▁falcon - ▁fountain - ▁giraffe - ▁heartbeat - ▁ignoring - ▁inclusion - ▁influential - ▁integral - ▁katrina - ▁knight - ▁landfill - ▁larvae - ▁modular - ▁nickname - ▁oakland - ▁paradise - ▁pivot - ▁plaza - ▁policing - ▁puncture - ▁repertoire - ▁respiratory - ▁sermon - ▁sputnik - ▁template - ▁teszler - ▁twentysomething - ▁unaware - ▁vancouver - ▁wolves - ▁buffer - ▁bundle - ▁congratulat - ▁drank - ▁empirical - ▁estrogen - ▁fungus - ▁hypoth - ▁microwave - ▁reluctant - ▁retriev - ▁solemn - ▁darpa - ▁menstruat - ▁plummet - ▁nourish - ▁decay - ▁motive - ▁cherish - ▁queue - ▁skirt - ▁ivory - ▁veterinarian - ▁mommy - ▁fond - ▁wilson - ▁rooftop - ▁scripture - ▁waist - ▁emily - ▁shiny - ▁mound - ventilat - ▁raft - ▁marble - residential - ▁flatten - ▁liberation - ▁dining - ▁payoff - ▁potatoes - ▁barber - ▁jay - ▁haul - ▁crypto - ▁vow - ▁imf - ▁gill - ▁castle - liability - minent - ▁julian - ▁circumstance - ▁perpetual - ▁rib - usable - ▁nav - ▁tease - ▁optimist - ▁advanc - finished - ▁contaminate - told - pong - ▁basin - ▁obtain - ▁deliberate - ▁intrinsic - ▁converge - ollywood - ▁administrator - ▁ambiguity - ▁ancestry - ▁antibodies - ▁appointment - ▁arguably - ▁capturing - ▁centigrade - ▁conscience - ▁convenience - ▁damaging - ▁deprivation - ▁goliath - ▁guatemala - ▁horrified - ▁humiliate - ▁improbable - ▁incarcerated - ▁irrigation - ▁linguistic - ▁literary - ▁memorable - ▁patriarch - ▁pentagon - ▁pixar - ▁reconciliation - ▁semester - ▁styrofoam - ▁sulfur - ▁tectonic - ▁testimony - ▁therapeutic - ▁toyota - ▁veronica - ▁worthwhile - ▁wrinkle - ▁comparative - ▁discrete - ▁disseminat - ▁plaster - ▁quantify - ▁retrofit - ▁troubling - ▁splash - ▁indulge - ▁meadow - ▁nanometer - ▁solitary - ▁summary - ▁foreground - ▁logistics - ▁oneself - ▁arteries - ▁proving - ▁liking - ▁cnn - ▁patrol - ▁mercy - ▁sterile - ▁oftentimes - ▁relies - ▁freshwater - ▁contagious - ▁aviation - ▁mutate - ▁prominent - ▁pension - ▁sunset - ▁resurrection - ▁dragging - ▁milestone - ▁foolish - ▁maryland - ▁clash - ▁viewpoint - dental - ▁arose - ▁methodology - ▁bait - settling - ▁conform - ▁bloodstream - ▁ethnicity - ▁walter - ▁arrival - ▁fishery - ▁cubicle - ▁hostage - ▁marx - ▁titanic - ▁christianity - ▁counselor - ▁maid - alignment - ▁dreadful - ▁pakistani - disturbed - ▁emission - ▁initiate - ▁secretar - governmental - ▁scot - ofi - ▁demonstrat - generative - ▁skeptic - icular - ▁complet - ▁whil - ▁cubic - ▁mortal - ▁drip - ▁generalize - availability - ▁abortion - ▁acumen - ▁apartheid - ▁appliances - ▁beethoven - ▁biomaterial - ▁brilliance - ▁cafeteria - ▁carpenter - ▁charcoal - ▁continuity - ▁determining - ▁disparate - ▁eliminating - ▁enclosure - ▁excerpt - ▁fingerprint - ▁identification - ▁illiterate - ▁investigative - ▁junior - ▁kibera - ▁mammography - ▁misconception - ▁multiverse - ▁nanoparticle - ▁netflix - ▁neurologist - ▁odysse - ▁palliative - ▁pavilion - ▁pepsi - ▁pneumonia - ▁pornography - ▁skyscraper - ▁stimulus - ▁supermassive - ▁unnatural - ▁unsustainable - ▁ushahidi - ▁caution - ▁cough - ▁credible - ▁envy - ▁fridge - ▁glance - ▁proposing - ▁reagan - ▁simplify - ▁tennessee - ▁vendor - ▁crunch - ▁deprive - ▁embed - ▁hurry - ▁marshall - ▁qatar - ▁blaming - ▁gossip - ▁steak - ▁11 - ▁width - ▁fiscal - ▁swirl - ▁tweak - ▁brink - ▁guidelines - ▁jesse - ▁tactics - ▁zoning - ▁gateway - ▁dried - ▁framing - ▁24 - ▁tiniest - ▁pinpoint - ▁convergence - ▁nudge - volta - ▁wetsuit - ▁vegas - ▁alright - ▁skepticism - ▁outdated - ▁disappointment - ▁furious - ▁europa - ▁restoration - ▁unveil - ▁scottish - ▁decentralized - ▁replicating - ▁stride - ▁moses - ▁interval - ▁instructor - ▁imam - ▁adore - tapped - phobia - ▁strawberr - ▁altruistic - ▁recline - ▁25 - ▁navigat - rowing - ▁metaphorical - ▁strippe - ▁concentrat - ▁impress - ▁confuse - ▁degrade - ▁audit - ▁mello - ▁advent - ▁affirm - ▁accumulating - ▁activation - ▁advocacy - ▁agnostic - ▁arkansas - ▁bankrupt - ▁bladder - ▁bouncing - ▁bronze - ▁cochlea - ▁constellation - ▁craving - ▁darfur - ▁degradation - ▁disclose - ▁douglas - ▁frugal - ▁gecko - ▁humidity - ▁inadequate - ▁incorrect - ▁indefinite - ▁leisure - ▁mississippi - ▁oprah - ▁orangutan - ▁periphery - ▁pheromone - ▁prostitute - ▁psychic - ▁questionnaire - ▁removing - ▁retrospect - ▁temporarily - ▁unfamiliar - ▁vaccinate - ▁visibility - ▁adverse - ▁benchmark - ▁cosmetic - ▁deflect - ▁harlem - ▁messaging - ▁qualify - ▁appalling - ▁hijack - ▁partisan - ▁plume - ▁tropics - ▁undercover - ▁hippie - ▁offense - ▁bmw - ▁detach - archie - ▁hunch - ▁spoil - lousy - ▁geologist - ▁subset - ▁lymph - ▁rift - ▁rainwater - ▁squatter - ▁persistence - ▁announcement - ▁cern - ▁poland - ▁salaries - ▁attain - ▁immortality - ▁tagging - ▁fuck - ▁bbc - ▁intestine - ▁aisle - ▁allocate - ▁resin - ▁recruitment - ▁crown - ▁reception - ▁droplet - ▁commonplace - ▁sculptor - ▁compile - ▁alto - ▁fibro - ▁crisp - ▁canoe - robe - sighted - genesis - ▁snapp - ▁grandpa - ▁resident - centralized - ▁curse - ▁swam - ▁disappoint - ▁reprogram - ▁antarctic - undr - ▁loyal - plausible - ▁accelerator - ▁adviser - ▁advisor - ▁aerodynamic - ▁antidote - ▁apprentice - ▁arithmetic - ▁astounding - ▁audacious - ▁barbershop - ▁butterflies - ▁clarify - ▁destined - ▁disbelief - ▁electoral - ▁enceladus - ▁esteem - ▁evoke - ▁harriet - ▁homicide - ▁hygiene - ▁imposing - ▁insecticide - ▁instinctively - ▁jazeera - ▁jurisdiction - ▁kickstart - ▁mainframe - ▁maldives - ▁measurable - ▁megawatt - ▁mockingbird - ▁moscow - ▁murray - ▁natasha - ▁nectar - ▁nevada - ▁overweight - ▁ozone - ▁palestine - ▁participatory - ▁plural - ▁pragmatic - ▁regulating - ▁reservoir - ▁scarcity - ▁senegal - ▁subconscious - ▁sympathy - ▁terrestrial - ▁vascular - ▁vulture - ▁walmart - ▁asphalt - ▁bruce - ▁diffuse - ▁latitude - ▁revolt - ▁shaft - ▁waving - lecommunications - ▁replied - ▁rifle - ▁unravel - ▁forensic - ▁mitigate - ▁converse - ▁womb - ▁inhabitants - ▁blunt - ▁descendants - ▁kelly - ▁sniff - ▁billionaire - ▁shaman - ▁pluck - ▁idaho - ▁nanoscale - ▁oscar - ▁seductive - ▁squeak - captcha - ▁kidnapped - ▁synchrony - ▁plow - ▁prosperous - ▁warrant - ▁lanka - ▁intrinsically - agnes - ▁begging - ▁erectus - ▁commentary - ▁radius - ▁audition - ▁pedal - fauna - ▁claus - ▁greg - ▁crapp - flux - ▁charg - ▁astronomer - ▁escap - ▁weep - ▁standardize - ▁initiat - ▁compose - ▁instruct - ▁devote - ▁isolate - ▁contradict - ▁immortal - ▁synchro - ▁acquisition - ▁airbnb - ▁ambiguous - ▁biomedical - ▁bruise - ▁butterfly - ▁byproduct - ▁cassini - ▁charities - ▁cholesterol - ▁complement - ▁compulsive - ▁dashboard - ▁declaration - ▁disclosure - ▁efficacy - ▁energies - ▁everglades - ▁exhilarating - ▁headphones - ▁hebrew - ▁hostility - ▁hydrant - ▁hyena - ▁ignorant - ▁imperial - ▁investigator - ▁judici - ▁kentucky - ▁legitimacy - ▁librarians - ▁lithium - ▁malawi - ▁margaret - ▁messenger - ▁monologue - ▁opaque - ▁paleontolog - ▁persuasive - ▁prevalent - ▁reframe - ▁russell - ▁sanctuary - ▁shrunk - ▁spreadsheet - ▁storycorps - ▁tahrir - ▁tehran - ▁typeface - ▁unlimited - ▁vegetarian - ▁yogurt - ▁bargain - ▁cradle - ▁dementia - ▁eagle - ▁embody - ▁hominid - ▁photoshop - ▁rachel - ▁redistribut - ▁refresh - ▁testament - ▁thunder - ▁wolfram - ▁asperger - ▁landmark - ▁metasta - ▁twentie - ▁rosetta - weaving - ▁ignite - ▁melody - ▁alternate - ▁orgasm - ▁trawl - ▁condense - ▁churn - ▁ingenious - ▁policymakers - ▁enact - ▁vacant - ▁helix - ▁oxide - ▁pdf - ▁simplified - ▁birch - ▁porch - ▁knob - ▁crave - ▁ribbon - ▁disturbance - ▁shook - ▁dissent - ▁garment - ▁campfire - ▁turkish - ▁pediatrician - ▁acidification - ▁kitten - ▁wastewater - ▁reset - ▁sentien - ▁editorial - ▁advertisement - ▁emulate - ▁irish - ▁ivan - ▁archaeologist - aunch - ▁foresee - ▁dodge - ▁fertilize - ▁restrain - ▁industrialize - leaned - ▁kerr - ▁accord - ▁dedicate - ▁intellect - ▁telecom - anthrop - cogniz - percussion - ▁achilles - ▁anonymity - ▁aquaculture - ▁atrocities - ▁automotive - ▁balcony - ▁balkan - ▁biblical - ▁bottleneck - ▁boulder - ▁ceramic - ▁champagne - ▁cofounde - ▁contemplate - ▁contempt - ▁cowboy - ▁credibility - ▁cumulative - ▁desalination - ▁desperation - ▁disguise - ▁dividing - ▁doodling - ▁doorstep - ▁downstairs - ▁dreyfus - ▁emperor - ▁exploding - ▁exxon - ▁fahrenheit - ▁hospice - ▁hypertension - ▁impoverished - ▁inadvertent - ▁instagram - ▁intangible - ▁lakota - ▁leveraging - ▁lucifer - ▁malaysia - ▁mandarin - ▁mermaid - ▁mismatch - ▁morgan - ▁multicultural - ▁napkin - ▁nonverbal - ▁perfume - ▁photosynthesis - ▁pollination - ▁proliferation - ▁puzzling - ▁rearrange - ▁relocate - ▁ridicule - ▁schizophrenic - ▁scissors - ▁sequester - ▁socioeconomic - ▁squirrel - ▁subsidies - ▁tadpole - ▁tobacco - ▁trajectories - ▁translating - ▁treadmill - ▁umbrella - ▁unthinkable - ▁valentine - ▁vienna - consistency - ▁burglar - ▁magnify - ▁malware - ▁deconstruct - ▁edison - ▁okapi - ▁proxy - ▁reptile - ▁barbie - ▁creek - ▁incubator - ▁aftermath - ▁cynical - ▁discount - ▁cereal - ▁orchard - ▁siege - ▁anomaly - ▁slash - ▁venom - ▁inquiry - ▁premier - ▁beaver - ▁pollinate - ▁gaming - ▁nancy - ▁alleged - ▁geese - ▁boreal - ▁tangle - ▁enduring - ▁ecologist - ▁casualties - ▁cdc - ▁doaa - ▁minorities - ▁staple - ▁replicator - ▁inertia - ▁graf - ▁interference - ▁mantis - ▁flare - prehensi - ▁burger - ▁hedge - biotic - piece - world - ▁mobiliz - born - ▁customize - ductive - ▁participant - ennial - ▁uplift - onomic - ▁intersect - ▁gradual - ▁jealous - ▁ineffective - ▁appreciati - ▁pediatric - neurotransmitter - ▁amendment - ▁announcing - ▁atrazine - ▁bahamas - ▁bipolar - ▁bulgaria - ▁bureaucracy - ▁childbirth - ▁comprehend - ▁deceased - ▁decisive - ▁decorate - ▁denounce - ▁detriment - ▁diabetic - ▁dismantle - ▁dubai - ▁elusive - ▁empathize - ▁exchanging - ▁expenditure - ▁graham - ▁horrifying - ▁houston - ▁hydrocarbon - ▁illinois - ▁imaginable - ▁imbalance - ▁imprisoned - ▁infamous - ▁inferior - ▁influenza - ▁innocence - ▁jacque - ▁madagascar - ▁misunderstood - ▁monopoly - ▁numerical - ▁ominous - ▁outskirt - ▁papua - ▁portfolio - ▁practitioner - ▁precedent - ▁reassure - ▁rebellion - ▁remnant - ▁stewardship - ▁stubborn - ▁submission - ▁thanksgiving - ▁thigh - ▁thursday - ▁tribute - ▁unbelievably - ▁venezuela - ▁wednesday - ▁whistleblower - ▁whistling - ▁abolish - ▁courtyard - ▁deceive - ▁hudson - ▁reverberat - ▁unpack - ▁cramm - ▁galvani - ▁hopkins - ▁reckon - ivores - ▁levitat - ▁webcam - ▁impart - ▁welcoming - ▁glen - ▁downhill - ▁starving - ▁defy - ▁incision - manuel - ▁relay - ▁motto - ▁altar - ▁stalk - ▁bookstore - ▁fringe - ▁isabel - ▁neurogenesis - ▁flint - ▁latter - ▁charisma - ▁clark - ▁retard - ▁undertake - ▁forbidden - ▁foam - ▁mourn - ▁zipcar - ▁geology - ▁mailbox - ▁wobbl - ▁tuition - ▁isaac - ▁buddy - ▁rigged - ▁foraging - elvis - ▁rumor - ▁sufi - ▁niche - ▁exert - ▁lifesav - ▁applica - '30' - competence - jected - argon - stock - organic - filtered - searched - ▁cultivat - breaking - spiring - lender - ▁decipher - althiest - ▁midwest - ▁requir - ▁elevate - ▁declare - ▁browse - mmunity - appointed - ▁hiroshi - ▁carcinogen - ▁tornado - + - centrifug - conceivable - ffluent - sarcoma - ▁archimedes - ▁asymmetric - ▁aurora - ▁barbara - ▁bumblebee - ▁celsius - ▁circus - ▁compelled - ▁compensation - ▁consequent - ▁crochet - ▁devoid - ▁diffusion - ▁discharge - ▁disengag - ▁encompass - ▁excruciating - ▁exterior - ▁filament - ▁graduation - ▁heterosexual - ▁honduras - ▁immoral - ▁impatient - ▁indicating - ▁inflammat - ▁interrogate - ▁intractable - ▁invisibility - ▁irregular - ▁istanbul - ▁knives - ▁lexicon - ▁litigation - ▁mahatma - ▁matthew - ▁mercury - ▁minnesota - ▁mistrust - ▁mohammed - ▁monetary - ▁neutron - ▁ninja - ▁nonviolence - ▁nostalgi - ▁obsessive - ▁panbanisha - ▁pinnacle - ▁precursor - ▁redundancy - ▁repetitive - ▁sabbatical - ▁savage - ▁scrutiny - ▁sensual - ▁soybean - ▁spectacle - ▁sprinkle - ▁stockholm - ▁stunned - ▁succumb - ▁superconductor - ▁superficial - ▁synonymous - ▁tamiflu - ▁thermostat - ▁toothbrush - ▁transcript - ▁untouched - ▁variability - ▁vertebrate - ▁administer - ▁ambient - ▁amphibi - ▁artisan - ▁evolutionarily - ▁inaugura - ▁judging - ▁nfl - ▁rejuvenat - ▁reshape - ▁sampling - ▁skyrocket - ▁gordon - ▁ronald - ▁breastfeed - ▁clerk - ▁degrading - ▁hispanic - ▁zodiac - vantage - ▁anthropology - ▁handbag - ▁rotten - ▁lawrence - ▁goddess - ▁parrot - ▁jimmy - ▁massacre - ▁modalit - ▁stitch - movable - ▁bliss - ▁clutch - ▁vitro - ▁overfishing - ▁socket - ▁hungarian - ▁snack - ▁ventilation - ▁assembling - ▁hologram - ▁goof - ▁affirmative - ▁saltwater - ▁resume - ▁radiologist - ▁enlist - ▁trench - ▁mural - ▁grill - ▁sideways - ▁abyss - ▁rehab - ▁megacit - transformational - criminal - ▁confer - differentiated - ▁battl - ▁advocat - ▁nutritio - ▁squeez - imming - ▁induce - ▁psychiatr - ▁prosper - ▁resurrect - efficiencies - ▁accustomed - ▁annoyed - ▁anthony - ▁anticipation - ▁arrogant - ▁baghdad - ▁breeze - ▁brochure - ▁buffalo - ▁cemetery - ▁cheetah - ▁coconut - ▁collapsing - ▁cortisol - ▁currencies - ▁deemed - ▁definitive - ▁devastation - ▁esoteric - ▁fascination - ▁fatigue - ▁ferguson - ▁geiger - ▁gradient - ▁graduating - ▁hashtag - ▁julius - ▁lollipop - ▁malignant - ▁merchant - ▁missouri - ▁monogam - ▁motivating - ▁neumann - ▁neurosurgeon - ▁oblivious - ▁outweigh - ▁paycheck - ▁primordial - ▁protagonist - ▁provocative - ▁purchasing - ▁qualitative - ▁repositor - ▁seatbelt - ▁semantic - ▁sergio - ▁sophistication - ▁sorrow - ▁spectator - ▁stimulating - ▁submerge - ▁suspicion - ▁terabyte - ▁turbulent - ▁ukraine - ▁unexplored - ▁upstairs - ▁utilitarian - ▁vaccination - ▁biography - ▁bruno - ▁carriage - ▁crux - ▁culminat - ▁gospel - ▁imitation - ▁occurrence - ▁pennies - ▁revisit - ▁safeguard - ▁stagnat - ▁thirst - convulsi - ▁conjure - ▁endurance - ▁fission - ▁gyrus - ▁microrna - ▁multitask - ▁premium - ▁nichol - ▁uterus - ▁stockpile - ▁gloss - ▁lynch - ▁titus - ▁pluto - ▁joking - ▁carcass - ▁compost - ▁trafficked - ▁spores - ▁childcare - ▁hardwir - ▁diaper - ▁proust - ▁shaming - ▁carving - ▁slippery - ▁reign - ▁beheading - ▁hocke - ▁ikea - ▁saddle - ▁nominate - ▁poise - ▁cursor - ▁roost - ▁cohesi - ▁blaz - ▁conspir - respective - clined - course - ▁nutrient - ▁husk - ▁statu - ▁utilize - ▁disintegrat - ometric - ancies - ▁extrem - incidence - ollen - solvable - ▁aboriginal - ▁administrative - ▁alleviate - ▁ancestral - ▁antiretroviral - ▁appetite - ▁astonished - ▁astrophysic - ▁audacity - ▁balconies - ▁biomimic - ▁biomolecule - ▁biopsy - ▁bisexual - ▁botswana - ▁cadaver - ▁caterpillar - ▁charitable - ▁clutter - ▁contemplating - ▁crowdsource - ▁defensive - ▁detergent - ▁diagnosing - ▁disparity - ▁ecuador - ▁elevation - ▁energize - ▁entitlement - ▁entrenched - ▁fragility - ▁gratification - ▁grizzly - ▁helvetica - ▁hummingbird - ▁implies - ▁insecurity - ▁jennie - ▁jennifer - ▁judaism - ▁lesterland - ▁mcgowan - ▁mitochondria - ▁monolith - ▁motivator - ▁mubarak - ▁mutilation - ▁nationwide - ▁outsource - ▁papaya - ▁paraphrase - ▁perseverance - ▁plutocrat - ▁plywood - ▁porridge - ▁precaution - ▁proximity - ▁redefining - ▁reversal - ▁sanitary - ▁snippet - ▁substrate - ▁superintelligen - ▁synagogue - ▁synthesizer - ▁taylor - ▁transgress - ▁transnational - ▁typewriter - ▁uncommon - ▁volatile - ▁whack - ▁whirlwind - ▁wrestling - estimation - ▁bassem - ▁dizzy - ▁goggles - ▁inflict - ▁privatiz - ▁pseudo - ▁recursive - ▁spoof - ▁devour - ▁melinda - ▁ynh - ▁endemic - ▁traumatized - ▁memoir - ▁prolong - ▁scoop - ▁movember - ▁wield - ▁savor - austin - ▁crippl - ▁outstanding - ▁empti - ▁magnifie - ▁strangl - ▁dispose - visibly - ▁cedar - ▁blair - ▁inequit - ▁seduce - ▁ounce - ▁barbaria - ▁paddle - ▁musu - ▁revise - ▁revert - ▁sublim - ▁ankle - environmentalist - ▁molt - ridden - predictability - ground - ▁aggregat - ▁eradicat - ▁taxonom - ▁incorporat - woman - semit - ▁altruist - ▁apologi - ▁julia - ▁coordinat - ▁writ - ▁recap - ▁propriet - grazing - njunction - ▁academia - ▁alfred - ▁brothel - ▁browsing - ▁cassette - ▁chariot - ▁cockpit - ▁combustion - ▁cyborg - ▁derivative - ▁desirable - ▁deteriorate - ▁diaspora - ▁dorothy - ▁doughnut - ▁dublin - ▁duplicate - ▁eloquent - ▁embodiment - ▁engulf - ▁escort - ▁exoskeleton - ▁feasible - ▁footsteps - ▁galois - ▁granny - ▁hallmark - ▁hampshire - ▁heavier - ▁illustrator - ▁imprint - ▁interdependence - ▁isotope - ▁latrine - ▁leapfrog - ▁logarithm - ▁majest - ▁microprocessor - ▁mogadishu - ▁moisture - ▁monastery - ▁multiplied - ▁mutilated - ▁nephew - ▁oasis - ▁optimization - ▁orthopedic - ▁petroleum - ▁pluripotent - ▁poaching - ▁refrigeration - ▁rehearsal - ▁repetition - ▁repurpose - ▁reunited - ▁rockefeller - ▁salvation - ▁sausage - ▁scramble - ▁sebastian - ▁solomon - ▁trinidad - ▁unrelated - ▁yahoo - cellulose - ▁attentive - ▁elicit - ▁insidious - ▁laparoscopic - ▁prairie - ▁retaliat - ▁striving - ▁suffocat - ▁underwear - ▁volts - ▁arsenal - ▁breathtaking - ▁debating - ▁haircut - ▁jeep - ▁liberties - ▁modulate - ▁rouge - ▁sculptural - ▁bodily - ▁crease - ▁pavement - ▁solitude - ▁abalone - ▁ferment - ▁saddam - ▁sunrise - ▁cameroon - ▁cornell - ▁teapot - ▁firefly - ▁vault - ▁bullying - bertie - ▁fgm - ▁sushi - ▁overdose - ▁stoic - ▁algeria - ▁merging - ▁bystander - ▁contingen - ▁palpabl - ▁endorse - ▁stigmatiz - ranked - authorized - ▁influenc - hopper - ▁counteract - written - different - fib - ▁conduc - guil - ▁neurologic - ▁homophobi - ▁identifie - ▁diploma - ▁refuge - ▁calori - ▁reliabl - ▁restorati - voluntarily - ▁abdomen - ▁abrupt - ▁additive - ▁adhesive - ▁alexandria - ▁amputation - ▁approximation - ▁backdrop - ▁barbecue - ▁blossom - ▁brigades - ▁catheter - ▁celebrities - ▁cemeteries - ▁cheerleader - ▁clitoris - ▁cochrane - ▁colossal - ▁communism - ▁copernicus - ▁cucumber - ▁demolish - ▁denominator - ▁detention - ▁disastrous - ▁disparities - ▁eleanor - ▁empathetic - ▁enhancing - ▁fragrance - ▁franchise - ▁gehry - ▁gigawatt - ▁glaze - ▁granddaughter - ▁grenade - ▁hannah - ▁horribly - ▁hussein - ▁illiteracy - ▁impairment - ▁implicated - ▁incentivize - ▁inconsistent - ▁infidelity - ▁intimidate - ▁introspection - ▁leukemia - ▁madison - ▁merchandis - ▁metadata - ▁microsecond - ▁mongolia - ▁nanopatch - ▁napoleon - ▁nutshell - ▁paintbrush - ▁pellets - ▁perceiving - ▁perimeter - ▁persuasion - ▁preliminary - ▁quadruple - ▁reassuring - ▁rebuilt - ▁reconfigur - ▁referendum - ▁rehearse - ▁remorse - ▁retention - ▁revitaliz - ▁scooter - ▁segregated - ▁sequel - ▁squad - ▁squirt - ▁steadi - ▁subtitle - ▁supervisor - ▁surrogate - ▁tentacle - ▁tragedies - ▁transcendence - ▁translucent - ▁trustworthiness - ▁tyranny - ▁uruguay - ▁vermeer - ▁acquaint - ▁bathtub - ▁crazi - ▁daydream - ▁entail - ▁inanimate - ▁intermediar - ▁kabul - ▁newfound - ▁obstruct - ▁reversing - ▁twitch - ▁tyrant - ▁vanished - ▁handicap - ▁jumble - ▁ooh - ▁defecat - ▁tremor - ▁airbag - ▁battered - ▁ellip - ▁fiddle - ▁medicinal - ▁outlook - ▁prosthes - ▁fisherman - ▁renovat - ▁synerg - ▁quotation - ▁jacob - ▁fmri - ▁viking - ▁squish - ▁yanke - ▁dehumaniz - ▁subvert - ▁permeate - ▁musk - ▁detonat - hearted - misunderstanding - utilized - favorable - ▁visualiz - credited - ▁recreat - mplified - ▁substitut - symmetry - sensitive - global - women - operators - western - ▁illustr - ▁overwhelm - ▁suspend - ▁simultaneous - ▁tasmania - ▁legislati - ▁squat - ▁reminisc - '#' - coherence - extraterrestrial - ▁amusement - ▁antiangiogenic - ▁apocalypse - ▁appendage - ▁arrogance - ▁autopsy - ▁bachelor - ▁binoculars - ▁bonica - ▁brexit - ▁charlotte - ▁chernobyl - ▁cleveland - ▁coercion - ▁collateral - ▁contraceptive - ▁crowdsourcing - ▁cyberspace - ▁deceptive - ▁decimal - ▁degeneration - ▁delegation - ▁deputy - ▁descent - ▁divinity - ▁eclipse - ▁epigenetic - ▁expansive - ▁florence - ▁freestyle - ▁fuku - ▁gertrude - ▁goldilocks - ▁grapple - ▁grappling - ▁halloween - ▁harold - ▁hierarchical - ▁incapable - ▁inconvenient - ▁innovating - ▁insignificant - ▁insufficient - ▁interdependent - ▁involuntary - ▁jirga - ▁kilowatt - ▁kuwait - ▁langley - ▁leviathan - ▁lexicograph - ▁libertarian - ▁mahmoud - ▁mccain - ▁memorizing - ▁menstrual - ▁metabolize - ▁morocco - ▁nikola - ▁nirvana - ▁ottoman - ▁patrick - ▁pendulum - ▁peripheral - ▁picasso - '@' - '*' - \ - ^ - R - _ - '-' - '%' - '=' - $ - G - M - ā - ']' - A - E - U - '[' - <sos/eos> init: xavier_uniform input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: null zero_infinity: true brctc_risk_strategy: exp brctc_group_strategy: end brctc_risk_factor: 0.0 joint_net_conf: null use_preprocessor: true use_lang_prompt: false use_nlp_prompt: false token_type: bpe bpemodel: data/en_token_list/bpe_unigram10000/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' short_noise_thres: 0.5 aux_ctc_tasks: [] frontend: default frontend_conf: n_fft: 512 win_length: 400 hop_length: 160 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 20 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 5 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_en_bpe10000_sp/train/feats_stats.npz model: espnet model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false preencoder: null preencoder_conf: {} encoder: transformer encoder_conf: output_size: 512 attention_heads: 8 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d normalize_before: true postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 8 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 preprocessor: default preprocessor_conf: {} required: - output_dir - token_list version: '202402' distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
[ "TRANSLATION" ]
[ "BEAR", "CRAFT", "MEDAL" ]
Non_BioNLP
RichardErkhov/EleutherAI_-_pythia-1b-deduped-4bits
RichardErkhov
text-generation
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:2304.01373", "arxiv:2101.00027", "arxiv:2201.07311", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
1,713
1,713
4
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) pythia-1b-deduped - bnb 4bits - Model creator: https://huggingface.co/EleutherAI/ - Original model: https://huggingface.co/EleutherAI/pythia-1b-deduped/ Original model description: --- language: - en tags: - pytorch - causal-lm - pythia license: apache-2.0 datasets: - EleutherAI/the_pile_deduplicated --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf). It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. We also provide 154 intermediate checkpoints per model, hosted on Hugging Face as branches. The Pythia model suite was designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. <details> <summary style="font-weight:600">Details on previous early release and naming convention.</summary> Previously, we released an early version of the Pythia suite to the public. However, we decided to retrain the model suite to address a few hyperparameter discrepancies. This model card <a href="#changelog">lists the changes</a>; see appendix B in the Pythia paper for further discussion. We found no difference in benchmark performance between the two Pythia versions. The old models are [still available](https://huggingface.co/models?other=pythia_v0), but we suggest the retrained suite if you are just starting to use Pythia.<br> **This is the current release.** Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. </details> <br> # Pythia-1B-deduped ## Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. [See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation details. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther. ai](mailto:[email protected]). <figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ## Uses and Limitations ### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. We also provide 154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints `step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to `step143000`. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-1B-deduped for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please conduct your own risk and bias assessment. ### Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not a in itself a product and cannot be used for human-facing interactions. For example, the model may generate harmful or offensive text. Please evaluate the risks associated with your particular use case. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-1B-deduped has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-1B-deduped will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “follow” human instructions. ### Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token used by the model need not produce the most “accurate” text. Never rely on Pythia-1B-deduped to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-1B-deduped may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-1B-deduped. ### Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br> For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). ## Training ### Training data Pythia-1B-deduped was trained on the Pile **after the dataset has been globally deduplicated**.<br> [The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/). ### Training procedure All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training, from `step1000` to `step143000` (which is the same as `main`). In addition, we also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models trained for 143000 steps at a batch size of 2M (2,097,152 tokens).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX- 20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ## Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. <details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge—Easy Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/> </details> ## Changelog This section compares differences between previously released [Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current models. See Appendix B of the Pythia paper for further discussion of these changes and the motivation behind them. We found that retraining Pythia had no impact on benchmark performance. - All model sizes are now trained with uniform batch size of 2M tokens. Previously, the models of size 160M, 410M, and 1.4B parameters were trained with batch sizes of 4M tokens. - We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64, 128,256,512} in addition to every 1000 training steps. - Flash Attention was used in the new retrained suite. - We remedied a minor inconsistency that existed in the original suite: all models of size 2.8B parameters or smaller had a learning rate (LR) schedule which decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and 12B models all used an LR schedule which decayed to a minimum LR of 0. In the redone training runs, we rectified this inconsistency: all models now were trained with LR decaying to a minimum of 0.1× their maximum LR. ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. <figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
[ "QUESTION_ANSWERING", "TRANSLATION" ]
[ "SCIQ" ]
Non_BioNLP
jncraton/multilingual-e5-small-ct2-int8
jncraton
sentence-similarity
[ "sentence-transformers", "mteb", "Sentence Transformers", "sentence-similarity", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "arxiv:2402.05672", "arxiv:2108.08787", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,734
1,734
30
0
--- language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: mit tags: - mteb - Sentence Transformers - sentence-similarity - sentence-transformers model-index: - name: intfloat/multilingual-e5-small results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 73.79104477611939 - type: ap value: 36.9996434842022 - type: f1 value: 67.95453679103099 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (de) type: mteb/amazon_counterfactual config: de split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 71.64882226980728 - type: ap value: 82.11942130026586 - type: f1 value: 69.87963421606715 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en-ext) type: mteb/amazon_counterfactual config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 75.8095952023988 - type: ap value: 24.46869495579561 - type: f1 value: 63.00108480037597 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (ja) type: mteb/amazon_counterfactual config: ja split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 64.186295503212 - type: ap value: 15.496804690197042 - type: f1 value: 52.07153895475031 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 88.699325 - type: ap value: 85.27039559917269 - type: f1 value: 88.65556295032513 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 44.69799999999999 - type: f1 value: 43.73187348654165 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (de) type: mteb/amazon_reviews_multi config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 40.245999999999995 - type: f1 value: 39.3863530637684 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (es) type: mteb/amazon_reviews_multi config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 40.394 - type: f1 value: 39.301223469483446 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 38.864 - type: f1 value: 37.97974261868003 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (ja) type: mteb/amazon_reviews_multi config: ja split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 37.682 - type: f1 value: 37.07399369768313 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 37.504 - type: f1 value: 36.62317273874278 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 19.061 - type: map_at_10 value: 31.703 - type: map_at_100 value: 32.967 - type: map_at_1000 value: 33.001000000000005 - type: map_at_3 value: 27.466 - type: map_at_5 value: 29.564 - type: mrr_at_1 value: 19.559 - type: mrr_at_10 value: 31.874999999999996 - type: mrr_at_100 value: 33.146 - type: mrr_at_1000 value: 33.18 - type: mrr_at_3 value: 27.667 - type: mrr_at_5 value: 29.74 - type: ndcg_at_1 value: 19.061 - type: ndcg_at_10 value: 39.062999999999995 - type: ndcg_at_100 value: 45.184000000000005 - type: ndcg_at_1000 value: 46.115 - type: ndcg_at_3 value: 30.203000000000003 - type: ndcg_at_5 value: 33.953 - type: precision_at_1 value: 19.061 - type: precision_at_10 value: 6.279999999999999 - type: precision_at_100 value: 0.9129999999999999 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 12.706999999999999 - type: precision_at_5 value: 9.431000000000001 - type: recall_at_1 value: 19.061 - type: recall_at_10 value: 62.802 - type: recall_at_100 value: 91.323 - type: recall_at_1000 value: 98.72 - type: recall_at_3 value: 38.122 - type: recall_at_5 value: 47.155 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 39.22266660528253 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 30.79980849482483 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 57.8790068352054 - type: mrr value: 71.78791276436706 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 82.36328364043163 - type: cos_sim_spearman value: 82.26211536195868 - type: euclidean_pearson value: 80.3183865039173 - type: euclidean_spearman value: 79.88495276296132 - type: manhattan_pearson value: 80.14484480692127 - type: manhattan_spearman value: 80.39279565980743 - task: type: BitextMining dataset: name: MTEB BUCC (de-en) type: mteb/bucc-bitext-mining config: de-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 98.0375782881002 - type: f1 value: 97.86012526096033 - type: precision value: 97.77139874739039 - type: recall value: 98.0375782881002 - task: type: BitextMining dataset: name: MTEB BUCC (fr-en) type: mteb/bucc-bitext-mining config: fr-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 93.35241030156286 - type: f1 value: 92.66050333846944 - type: precision value: 92.3306919069631 - type: recall value: 93.35241030156286 - task: type: BitextMining dataset: name: MTEB BUCC (ru-en) type: mteb/bucc-bitext-mining config: ru-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 94.0699688257707 - type: f1 value: 93.50236693222492 - type: precision value: 93.22791825424315 - type: recall value: 94.0699688257707 - task: type: BitextMining dataset: name: MTEB BUCC (zh-en) type: mteb/bucc-bitext-mining config: zh-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 89.25750394944708 - type: f1 value: 88.79234684921889 - type: precision value: 88.57293312269616 - type: recall value: 89.25750394944708 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 79.41558441558442 - type: f1 value: 79.25886487487219 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.747820820329736 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 27.045143830596146 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 24.252999999999997 - type: map_at_10 value: 31.655916666666666 - type: map_at_100 value: 32.680749999999996 - type: map_at_1000 value: 32.79483333333334 - type: map_at_3 value: 29.43691666666666 - type: map_at_5 value: 30.717416666666665 - type: mrr_at_1 value: 28.602750000000004 - type: mrr_at_10 value: 35.56875 - type: mrr_at_100 value: 36.3595 - type: mrr_at_1000 value: 36.427749999999996 - type: mrr_at_3 value: 33.586166666666664 - type: mrr_at_5 value: 34.73641666666666 - type: ndcg_at_1 value: 28.602750000000004 - type: ndcg_at_10 value: 36.06933333333334 - type: ndcg_at_100 value: 40.70141666666667 - type: ndcg_at_1000 value: 43.24341666666667 - type: ndcg_at_3 value: 32.307916666666664 - type: ndcg_at_5 value: 34.129999999999995 - type: precision_at_1 value: 28.602750000000004 - type: precision_at_10 value: 6.097666666666667 - type: precision_at_100 value: 0.9809166666666668 - type: precision_at_1000 value: 0.13766666666666663 - type: precision_at_3 value: 14.628166666666667 - type: precision_at_5 value: 10.266916666666667 - type: recall_at_1 value: 24.252999999999997 - type: recall_at_10 value: 45.31916666666667 - type: recall_at_100 value: 66.03575000000001 - type: recall_at_1000 value: 83.94708333333334 - type: recall_at_3 value: 34.71941666666666 - type: recall_at_5 value: 39.46358333333333 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 9.024000000000001 - type: map_at_10 value: 15.644 - type: map_at_100 value: 17.154 - type: map_at_1000 value: 17.345 - type: map_at_3 value: 13.028 - type: map_at_5 value: 14.251 - type: mrr_at_1 value: 19.674 - type: mrr_at_10 value: 29.826999999999998 - type: mrr_at_100 value: 30.935000000000002 - type: mrr_at_1000 value: 30.987 - type: mrr_at_3 value: 26.645000000000003 - type: mrr_at_5 value: 28.29 - type: ndcg_at_1 value: 19.674 - type: ndcg_at_10 value: 22.545 - type: ndcg_at_100 value: 29.207 - type: ndcg_at_1000 value: 32.912 - type: ndcg_at_3 value: 17.952 - type: ndcg_at_5 value: 19.363 - type: precision_at_1 value: 19.674 - type: precision_at_10 value: 7.212000000000001 - type: precision_at_100 value: 1.435 - type: precision_at_1000 value: 0.212 - type: precision_at_3 value: 13.507 - type: precision_at_5 value: 10.397 - type: recall_at_1 value: 9.024000000000001 - type: recall_at_10 value: 28.077999999999996 - type: recall_at_100 value: 51.403 - type: recall_at_1000 value: 72.406 - type: recall_at_3 value: 16.768 - type: recall_at_5 value: 20.737 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.012 - type: map_at_10 value: 17.138 - type: map_at_100 value: 24.146 - type: map_at_1000 value: 25.622 - type: map_at_3 value: 12.552 - type: map_at_5 value: 14.435 - type: mrr_at_1 value: 62.25000000000001 - type: mrr_at_10 value: 71.186 - type: mrr_at_100 value: 71.504 - type: mrr_at_1000 value: 71.514 - type: mrr_at_3 value: 69.333 - type: mrr_at_5 value: 70.408 - type: ndcg_at_1 value: 49.75 - type: ndcg_at_10 value: 37.76 - type: ndcg_at_100 value: 42.071 - type: ndcg_at_1000 value: 49.309 - type: ndcg_at_3 value: 41.644 - type: ndcg_at_5 value: 39.812999999999995 - type: precision_at_1 value: 62.25000000000001 - type: precision_at_10 value: 30.15 - type: precision_at_100 value: 9.753 - type: precision_at_1000 value: 1.9189999999999998 - type: precision_at_3 value: 45.667 - type: precision_at_5 value: 39.15 - type: recall_at_1 value: 8.012 - type: recall_at_10 value: 22.599 - type: recall_at_100 value: 48.068 - type: recall_at_1000 value: 71.328 - type: recall_at_3 value: 14.043 - type: recall_at_5 value: 17.124 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 42.455 - type: f1 value: 37.59462649781862 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 58.092 - type: map_at_10 value: 69.586 - type: map_at_100 value: 69.968 - type: map_at_1000 value: 69.982 - type: map_at_3 value: 67.48100000000001 - type: map_at_5 value: 68.915 - type: mrr_at_1 value: 62.166 - type: mrr_at_10 value: 73.588 - type: mrr_at_100 value: 73.86399999999999 - type: mrr_at_1000 value: 73.868 - type: mrr_at_3 value: 71.6 - type: mrr_at_5 value: 72.99 - type: ndcg_at_1 value: 62.166 - type: ndcg_at_10 value: 75.27199999999999 - type: ndcg_at_100 value: 76.816 - type: ndcg_at_1000 value: 77.09700000000001 - type: ndcg_at_3 value: 71.36 - type: ndcg_at_5 value: 73.785 - type: precision_at_1 value: 62.166 - type: precision_at_10 value: 9.716 - type: precision_at_100 value: 1.065 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 28.278 - type: precision_at_5 value: 18.343999999999998 - type: recall_at_1 value: 58.092 - type: recall_at_10 value: 88.73400000000001 - type: recall_at_100 value: 95.195 - type: recall_at_1000 value: 97.04599999999999 - type: recall_at_3 value: 78.45 - type: recall_at_5 value: 84.316 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 16.649 - type: map_at_10 value: 26.457000000000004 - type: map_at_100 value: 28.169 - type: map_at_1000 value: 28.352 - type: map_at_3 value: 23.305 - type: map_at_5 value: 25.169000000000004 - type: mrr_at_1 value: 32.407000000000004 - type: mrr_at_10 value: 40.922 - type: mrr_at_100 value: 41.931000000000004 - type: mrr_at_1000 value: 41.983 - type: mrr_at_3 value: 38.786 - type: mrr_at_5 value: 40.205999999999996 - type: ndcg_at_1 value: 32.407000000000004 - type: ndcg_at_10 value: 33.314 - type: ndcg_at_100 value: 40.312 - type: ndcg_at_1000 value: 43.685 - type: ndcg_at_3 value: 30.391000000000002 - type: ndcg_at_5 value: 31.525 - type: precision_at_1 value: 32.407000000000004 - type: precision_at_10 value: 8.966000000000001 - type: precision_at_100 value: 1.6019999999999999 - type: precision_at_1000 value: 0.22200000000000003 - type: precision_at_3 value: 20.165 - type: precision_at_5 value: 14.722 - type: recall_at_1 value: 16.649 - type: recall_at_10 value: 39.117000000000004 - type: recall_at_100 value: 65.726 - type: recall_at_1000 value: 85.784 - type: recall_at_3 value: 27.914 - type: recall_at_5 value: 33.289 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 36.253 - type: map_at_10 value: 56.16799999999999 - type: map_at_100 value: 57.06099999999999 - type: map_at_1000 value: 57.126 - type: map_at_3 value: 52.644999999999996 - type: map_at_5 value: 54.909 - type: mrr_at_1 value: 72.505 - type: mrr_at_10 value: 79.66 - type: mrr_at_100 value: 79.869 - type: mrr_at_1000 value: 79.88 - type: mrr_at_3 value: 78.411 - type: mrr_at_5 value: 79.19800000000001 - type: ndcg_at_1 value: 72.505 - type: ndcg_at_10 value: 65.094 - type: ndcg_at_100 value: 68.219 - type: ndcg_at_1000 value: 69.515 - type: ndcg_at_3 value: 59.99 - type: ndcg_at_5 value: 62.909000000000006 - type: precision_at_1 value: 72.505 - type: precision_at_10 value: 13.749 - type: precision_at_100 value: 1.619 - type: precision_at_1000 value: 0.179 - type: precision_at_3 value: 38.357 - type: precision_at_5 value: 25.313000000000002 - type: recall_at_1 value: 36.253 - type: recall_at_10 value: 68.744 - type: recall_at_100 value: 80.925 - type: recall_at_1000 value: 89.534 - type: recall_at_3 value: 57.535000000000004 - type: recall_at_5 value: 63.282000000000004 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 80.82239999999999 - type: ap value: 75.65895781725314 - type: f1 value: 80.75880969095746 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 21.624 - type: map_at_10 value: 34.075 - type: map_at_100 value: 35.229 - type: map_at_1000 value: 35.276999999999994 - type: map_at_3 value: 30.245 - type: map_at_5 value: 32.42 - type: mrr_at_1 value: 22.264 - type: mrr_at_10 value: 34.638000000000005 - type: mrr_at_100 value: 35.744 - type: mrr_at_1000 value: 35.787 - type: mrr_at_3 value: 30.891000000000002 - type: mrr_at_5 value: 33.042 - type: ndcg_at_1 value: 22.264 - type: ndcg_at_10 value: 40.991 - type: ndcg_at_100 value: 46.563 - type: ndcg_at_1000 value: 47.743 - type: ndcg_at_3 value: 33.198 - type: ndcg_at_5 value: 37.069 - type: precision_at_1 value: 22.264 - type: precision_at_10 value: 6.5089999999999995 - type: precision_at_100 value: 0.9299999999999999 - type: precision_at_1000 value: 0.10300000000000001 - type: precision_at_3 value: 14.216999999999999 - type: precision_at_5 value: 10.487 - type: recall_at_1 value: 21.624 - type: recall_at_10 value: 62.303 - type: recall_at_100 value: 88.124 - type: recall_at_1000 value: 97.08 - type: recall_at_3 value: 41.099999999999994 - type: recall_at_5 value: 50.381 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 91.06703146374831 - type: f1 value: 90.86867815863172 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (de) type: mteb/mtop_domain config: de split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 87.46970977740209 - type: f1 value: 86.36832872036588 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (es) type: mteb/mtop_domain config: es split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.26951300867245 - type: f1 value: 88.93561193959502 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (fr) type: mteb/mtop_domain config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 84.22799874725963 - type: f1 value: 84.30490069236556 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (hi) type: mteb/mtop_domain config: hi split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 86.02007888131948 - type: f1 value: 85.39376041027991 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (th) type: mteb/mtop_domain config: th split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 85.34900542495481 - type: f1 value: 85.39859673336713 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 71.078431372549 - type: f1 value: 53.45071102002276 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (de) type: mteb/mtop_intent config: de split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 65.85798816568047 - type: f1 value: 46.53112748993529 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (es) type: mteb/mtop_intent config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 67.96864576384256 - type: f1 value: 45.966703022829506 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (fr) type: mteb/mtop_intent config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 61.31537738803633 - type: f1 value: 45.52601712835461 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (hi) type: mteb/mtop_intent config: hi split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 66.29616349946218 - type: f1 value: 47.24166485726613 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (th) type: mteb/mtop_intent config: th split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 67.51537070524412 - type: f1 value: 49.463476319014276 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (af) type: mteb/amazon_massive_intent config: af split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.06792199058508 - type: f1 value: 54.094921857502285 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (am) type: mteb/amazon_massive_intent config: am split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 51.960322797579025 - type: f1 value: 48.547371223370945 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ar) type: mteb/amazon_massive_intent config: ar split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.425016812373904 - type: f1 value: 50.47069202054312 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (az) type: mteb/amazon_massive_intent config: az split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.798251513113655 - type: f1 value: 57.05013069086648 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (bn) type: mteb/amazon_massive_intent config: bn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.37794216543376 - type: f1 value: 56.3607992649805 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (cy) type: mteb/amazon_massive_intent config: cy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 46.56018829858777 - type: f1 value: 43.87319715715134 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (da) type: mteb/amazon_massive_intent config: da split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.9724277067922 - type: f1 value: 59.36480066245562 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (de) type: mteb/amazon_massive_intent config: de split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.72696704774715 - type: f1 value: 59.143595966615855 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (el) type: mteb/amazon_massive_intent config: el split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.5971755211836 - type: f1 value: 59.169445724946726 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.29589778076665 - type: f1 value: 67.7577001808977 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (es) type: mteb/amazon_massive_intent config: es split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.31136516476126 - type: f1 value: 64.52032955983242 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fa) type: mteb/amazon_massive_intent config: fa split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.54472091459314 - type: f1 value: 61.47903120066317 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fi) type: mteb/amazon_massive_intent config: fi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.45595158036314 - type: f1 value: 58.0891846024637 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fr) type: mteb/amazon_massive_intent config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.47074646940149 - type: f1 value: 62.84830858877575 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (he) type: mteb/amazon_massive_intent config: he split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.046402151983855 - type: f1 value: 55.269074430533195 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hi) type: mteb/amazon_massive_intent config: hi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.06523201075991 - type: f1 value: 61.35339643021369 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hu) type: mteb/amazon_massive_intent config: hu split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.954942837928726 - type: f1 value: 57.07035922704846 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hy) type: mteb/amazon_massive_intent config: hy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.404169468728995 - type: f1 value: 53.94259011839138 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (id) type: mteb/amazon_massive_intent config: id split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.16610625420309 - type: f1 value: 61.337103431499365 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (is) type: mteb/amazon_massive_intent config: is split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 52.262945527908535 - type: f1 value: 49.7610691598921 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (it) type: mteb/amazon_massive_intent config: it split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.54472091459314 - type: f1 value: 63.469099018440154 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ja) type: mteb/amazon_massive_intent config: ja split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.22797579018157 - type: f1 value: 64.89098471083001 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (jv) type: mteb/amazon_massive_intent config: jv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 50.847343644922674 - type: f1 value: 47.8536963168393 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ka) type: mteb/amazon_massive_intent config: ka split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 48.45326160053799 - type: f1 value: 46.370078045805556 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (km) type: mteb/amazon_massive_intent config: km split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 42.83120376597175 - type: f1 value: 39.68948521599982 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (kn) type: mteb/amazon_massive_intent config: kn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.5084061869536 - type: f1 value: 53.961876160401545 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ko) type: mteb/amazon_massive_intent config: ko split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.7895090786819 - type: f1 value: 61.134223684676 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (lv) type: mteb/amazon_massive_intent config: lv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.98991257565569 - type: f1 value: 52.579862862826296 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ml) type: mteb/amazon_massive_intent config: ml split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.90316072629456 - type: f1 value: 58.203024538290336 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (mn) type: mteb/amazon_massive_intent config: mn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.09818426361802 - type: f1 value: 54.22718458445455 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ms) type: mteb/amazon_massive_intent config: ms split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.991257565568255 - type: f1 value: 55.84892781767421 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (my) type: mteb/amazon_massive_intent config: my split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 55.901143241425686 - type: f1 value: 52.25264332199797 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nb) type: mteb/amazon_massive_intent config: nb split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.96368527236047 - type: f1 value: 58.927243876153454 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nl) type: mteb/amazon_massive_intent config: nl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.64223268325489 - type: f1 value: 62.340453718379706 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.52589105581708 - type: f1 value: 61.661113187022174 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pt) type: mteb/amazon_massive_intent config: pt split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.84599865501009 - type: f1 value: 64.59342572873005 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ro) type: mteb/amazon_massive_intent config: ro split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.81035642232684 - type: f1 value: 57.5169089806797 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ru) type: mteb/amazon_massive_intent config: ru split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.652238071815056 - type: f1 value: 53.22732406426353 - type: f1_weighted value: 57.585586737209546 - type: main_score value: 58.652238071815056 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sl) type: mteb/amazon_massive_intent config: sl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.51647612642906 - type: f1 value: 54.33154780100043 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sq) type: mteb/amazon_massive_intent config: sq split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.985877605917956 - type: f1 value: 54.46187524463802 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sv) type: mteb/amazon_massive_intent config: sv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.03026227303296 - type: f1 value: 62.34377392877748 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sw) type: mteb/amazon_massive_intent config: sw split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 53.567585743106925 - type: f1 value: 50.73770655983206 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ta) type: mteb/amazon_massive_intent config: ta split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.2595830531271 - type: f1 value: 53.657327291708626 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (te) type: mteb/amazon_massive_intent config: te split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.82784129119032 - type: f1 value: 54.82518072665301 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (th) type: mteb/amazon_massive_intent config: th split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.06859448554137 - type: f1 value: 63.00185280500495 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tl) type: mteb/amazon_massive_intent config: tl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.91055817081371 - type: f1 value: 55.54116301224262 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tr) type: mteb/amazon_massive_intent config: tr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.54404841963686 - type: f1 value: 59.57650946030184 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ur) type: mteb/amazon_massive_intent config: ur split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.27706792199059 - type: f1 value: 56.50010066083435 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (vi) type: mteb/amazon_massive_intent config: vi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.0719569603228 - type: f1 value: 61.817075925647956 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.23806321452591 - type: f1 value: 65.24917026029749 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-TW) type: mteb/amazon_massive_intent config: zh-TW split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.53530598520511 - type: f1 value: 61.71131132295768 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (af) type: mteb/amazon_massive_scenario config: af split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.04303967720243 - type: f1 value: 60.3950085685985 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (am) type: mteb/amazon_massive_scenario config: am split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 56.83591123066578 - type: f1 value: 54.95059828830849 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ar) type: mteb/amazon_massive_scenario config: ar split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.62340282447881 - type: f1 value: 59.525159996498225 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (az) type: mteb/amazon_massive_scenario config: az split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.85406859448555 - type: f1 value: 59.129299095681276 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (bn) type: mteb/amazon_massive_scenario config: bn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 62.76731674512441 - type: f1 value: 61.159560612627715 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (cy) type: mteb/amazon_massive_scenario config: cy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 50.181573638197705 - type: f1 value: 46.98422176289957 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (da) type: mteb/amazon_massive_scenario config: da split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.92737054472092 - type: f1 value: 67.69135611952979 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (de) type: mteb/amazon_massive_scenario config: de split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.18964357767318 - type: f1 value: 68.46106138186214 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (el) type: mteb/amazon_massive_scenario config: el split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.0712844653665 - type: f1 value: 66.75545422473901 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.4754539340955 - type: f1 value: 74.38427146553252 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (es) type: mteb/amazon_massive_scenario config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.82515131136518 - type: f1 value: 69.63516462173847 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fa) type: mteb/amazon_massive_scenario config: fa split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.70880968392737 - type: f1 value: 67.45420662567926 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fi) type: mteb/amazon_massive_scenario config: fi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 65.95494283792871 - type: f1 value: 65.06191009049222 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fr) type: mteb/amazon_massive_scenario config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.75924680564896 - type: f1 value: 68.30833379585945 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (he) type: mteb/amazon_massive_scenario config: he split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.806321452589096 - type: f1 value: 63.273048243765054 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hi) type: mteb/amazon_massive_scenario config: hi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.68997982515133 - type: f1 value: 66.54703855381324 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hu) type: mteb/amazon_massive_scenario config: hu split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.46940147948891 - type: f1 value: 65.91017343463396 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hy) type: mteb/amazon_massive_scenario config: hy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.49899125756556 - type: f1 value: 57.90333469917769 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (id) type: mteb/amazon_massive_scenario config: id split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.9219905850706 - type: f1 value: 67.23169403762938 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (is) type: mteb/amazon_massive_scenario config: is split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 56.486213853396094 - type: f1 value: 54.85282355583758 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (it) type: mteb/amazon_massive_scenario config: it split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.04169468728985 - type: f1 value: 68.83833333320462 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ja) type: mteb/amazon_massive_scenario config: ja split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.88702084734365 - type: f1 value: 74.04474735232299 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (jv) type: mteb/amazon_massive_scenario config: jv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 56.63416274377943 - type: f1 value: 55.11332211687954 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ka) type: mteb/amazon_massive_scenario config: ka split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 52.23604572965702 - type: f1 value: 50.86529813991055 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (km) type: mteb/amazon_massive_scenario config: km split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 46.62407531943511 - type: f1 value: 43.63485467164535 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (kn) type: mteb/amazon_massive_scenario config: kn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.15601882985878 - type: f1 value: 57.522837510959924 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ko) type: mteb/amazon_massive_scenario config: ko split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.84532616005382 - type: f1 value: 69.60021127179697 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (lv) type: mteb/amazon_massive_scenario config: lv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 56.65770006724949 - type: f1 value: 55.84219135523227 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ml) type: mteb/amazon_massive_scenario config: ml split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.53665097511768 - type: f1 value: 65.09087787792639 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (mn) type: mteb/amazon_massive_scenario config: mn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.31405514458642 - type: f1 value: 58.06135303831491 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ms) type: mteb/amazon_massive_scenario config: ms split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.88231338264964 - type: f1 value: 62.751099407787926 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (my) type: mteb/amazon_massive_scenario config: my split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 58.86012104909213 - type: f1 value: 56.29118323058282 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nb) type: mteb/amazon_massive_scenario config: nb split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.37390719569602 - type: f1 value: 66.27922244885102 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nl) type: mteb/amazon_massive_scenario config: nl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.8675184936113 - type: f1 value: 70.22146529932019 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.2212508406187 - type: f1 value: 67.77454802056282 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pt) type: mteb/amazon_massive_scenario config: pt split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.18090114324143 - type: f1 value: 68.03737625431621 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ro) type: mteb/amazon_massive_scenario config: ro split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.65030262273034 - type: f1 value: 63.792945486912856 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ru) type: mteb/amazon_massive_scenario config: ru split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.772749631087066 - type: f1 value: 63.4539101720024 - type: f1_weighted value: 62.778603897469566 - type: main_score value: 63.772749631087066 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sl) type: mteb/amazon_massive_scenario config: sl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.17821116341627 - type: f1 value: 59.3935969827171 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sq) type: mteb/amazon_massive_scenario config: sq split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 62.86146603900471 - type: f1 value: 60.133692735032376 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sv) type: mteb/amazon_massive_scenario config: sv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.89441829186282 - type: f1 value: 70.03064076194089 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sw) type: mteb/amazon_massive_scenario config: sw split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 58.15063887020847 - type: f1 value: 56.23326278499678 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ta) type: mteb/amazon_massive_scenario config: ta split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.43846671149966 - type: f1 value: 57.70440450281974 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (te) type: mteb/amazon_massive_scenario config: te split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.8507061197041 - type: f1 value: 59.22916396061171 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (th) type: mteb/amazon_massive_scenario config: th split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.65568258238063 - type: f1 value: 69.90736239440633 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tl) type: mteb/amazon_massive_scenario config: tl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.8843308675185 - type: f1 value: 59.30332663713599 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tr) type: mteb/amazon_massive_scenario config: tr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.05312710154674 - type: f1 value: 67.44024062594775 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ur) type: mteb/amazon_massive_scenario config: ur split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 62.111634162743776 - type: f1 value: 60.89083013084519 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (vi) type: mteb/amazon_massive_scenario config: vi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.44115669132482 - type: f1 value: 67.92227541674552 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.4687289845326 - type: f1 value: 74.16376793486025 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-TW) type: mteb/amazon_massive_scenario config: zh-TW split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.31876260928043 - type: f1 value: 68.5246745215607 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 30.90431696479766 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 27.259158476693774 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.28445330838555 - type: mrr value: 31.15758529581164 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.353 - type: map_at_10 value: 11.565 - type: map_at_100 value: 14.097000000000001 - type: map_at_1000 value: 15.354999999999999 - type: map_at_3 value: 8.749 - type: map_at_5 value: 9.974 - type: mrr_at_1 value: 42.105 - type: mrr_at_10 value: 50.589 - type: mrr_at_100 value: 51.187000000000005 - type: mrr_at_1000 value: 51.233 - type: mrr_at_3 value: 48.246 - type: mrr_at_5 value: 49.546 - type: ndcg_at_1 value: 40.402 - type: ndcg_at_10 value: 31.009999999999998 - type: ndcg_at_100 value: 28.026 - type: ndcg_at_1000 value: 36.905 - type: ndcg_at_3 value: 35.983 - type: ndcg_at_5 value: 33.764 - type: precision_at_1 value: 42.105 - type: precision_at_10 value: 22.786 - type: precision_at_100 value: 6.916 - type: precision_at_1000 value: 1.981 - type: precision_at_3 value: 33.333 - type: precision_at_5 value: 28.731 - type: recall_at_1 value: 5.353 - type: recall_at_10 value: 15.039 - type: recall_at_100 value: 27.348 - type: recall_at_1000 value: 59.453 - type: recall_at_3 value: 9.792 - type: recall_at_5 value: 11.882 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 33.852 - type: map_at_10 value: 48.924 - type: map_at_100 value: 49.854 - type: map_at_1000 value: 49.886 - type: map_at_3 value: 44.9 - type: map_at_5 value: 47.387 - type: mrr_at_1 value: 38.035999999999994 - type: mrr_at_10 value: 51.644 - type: mrr_at_100 value: 52.339 - type: mrr_at_1000 value: 52.35999999999999 - type: mrr_at_3 value: 48.421 - type: mrr_at_5 value: 50.468999999999994 - type: ndcg_at_1 value: 38.007000000000005 - type: ndcg_at_10 value: 56.293000000000006 - type: ndcg_at_100 value: 60.167 - type: ndcg_at_1000 value: 60.916000000000004 - type: ndcg_at_3 value: 48.903999999999996 - type: ndcg_at_5 value: 52.978 - type: precision_at_1 value: 38.007000000000005 - type: precision_at_10 value: 9.041 - type: precision_at_100 value: 1.1199999999999999 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 22.084 - type: precision_at_5 value: 15.608 - type: recall_at_1 value: 33.852 - type: recall_at_10 value: 75.893 - type: recall_at_100 value: 92.589 - type: recall_at_1000 value: 98.153 - type: recall_at_3 value: 56.969 - type: recall_at_5 value: 66.283 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 69.174 - type: map_at_10 value: 82.891 - type: map_at_100 value: 83.545 - type: map_at_1000 value: 83.56700000000001 - type: map_at_3 value: 79.944 - type: map_at_5 value: 81.812 - type: mrr_at_1 value: 79.67999999999999 - type: mrr_at_10 value: 86.279 - type: mrr_at_100 value: 86.39 - type: mrr_at_1000 value: 86.392 - type: mrr_at_3 value: 85.21 - type: mrr_at_5 value: 85.92999999999999 - type: ndcg_at_1 value: 79.69000000000001 - type: ndcg_at_10 value: 86.929 - type: ndcg_at_100 value: 88.266 - type: ndcg_at_1000 value: 88.428 - type: ndcg_at_3 value: 83.899 - type: ndcg_at_5 value: 85.56700000000001 - type: precision_at_1 value: 79.69000000000001 - type: precision_at_10 value: 13.161000000000001 - type: precision_at_100 value: 1.513 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 36.603 - type: precision_at_5 value: 24.138 - type: recall_at_1 value: 69.174 - type: recall_at_10 value: 94.529 - type: recall_at_100 value: 99.15 - type: recall_at_1000 value: 99.925 - type: recall_at_3 value: 85.86200000000001 - type: recall_at_5 value: 90.501 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 39.13064340585255 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 58.97884249325877 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 3.4680000000000004 - type: map_at_10 value: 7.865 - type: map_at_100 value: 9.332 - type: map_at_1000 value: 9.587 - type: map_at_3 value: 5.800000000000001 - type: map_at_5 value: 6.8790000000000004 - type: mrr_at_1 value: 17.0 - type: mrr_at_10 value: 25.629 - type: mrr_at_100 value: 26.806 - type: mrr_at_1000 value: 26.889000000000003 - type: mrr_at_3 value: 22.8 - type: mrr_at_5 value: 24.26 - type: ndcg_at_1 value: 17.0 - type: ndcg_at_10 value: 13.895 - type: ndcg_at_100 value: 20.491999999999997 - type: ndcg_at_1000 value: 25.759999999999998 - type: ndcg_at_3 value: 13.347999999999999 - type: ndcg_at_5 value: 11.61 - type: precision_at_1 value: 17.0 - type: precision_at_10 value: 7.090000000000001 - type: precision_at_100 value: 1.669 - type: precision_at_1000 value: 0.294 - type: precision_at_3 value: 12.3 - type: precision_at_5 value: 10.02 - type: recall_at_1 value: 3.4680000000000004 - type: recall_at_10 value: 14.363000000000001 - type: recall_at_100 value: 33.875 - type: recall_at_1000 value: 59.711999999999996 - type: recall_at_3 value: 7.483 - type: recall_at_5 value: 10.173 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.04084311714061 - type: cos_sim_spearman value: 77.51342467443078 - type: euclidean_pearson value: 80.0321166028479 - type: euclidean_spearman value: 77.29249114733226 - type: manhattan_pearson value: 80.03105964262431 - type: manhattan_spearman value: 77.22373689514794 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 84.1680158034387 - type: cos_sim_spearman value: 76.55983344071117 - type: euclidean_pearson value: 79.75266678300143 - type: euclidean_spearman value: 75.34516823467025 - type: manhattan_pearson value: 79.75959151517357 - type: manhattan_spearman value: 75.42330344141912 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 76.48898993209346 - type: cos_sim_spearman value: 76.96954120323366 - type: euclidean_pearson value: 76.94139109279668 - type: euclidean_spearman value: 76.85860283201711 - type: manhattan_pearson value: 76.6944095091912 - type: manhattan_spearman value: 76.61096912972553 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 77.85082366246944 - type: cos_sim_spearman value: 75.52053350101731 - type: euclidean_pearson value: 77.1165845070926 - type: euclidean_spearman value: 75.31216065884388 - type: manhattan_pearson value: 77.06193941833494 - type: manhattan_spearman value: 75.31003701700112 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.36305246526497 - type: cos_sim_spearman value: 87.11704613927415 - type: euclidean_pearson value: 86.04199125810939 - type: euclidean_spearman value: 86.51117572414263 - type: manhattan_pearson value: 86.0805106816633 - type: manhattan_spearman value: 86.52798366512229 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 82.18536255599724 - type: cos_sim_spearman value: 83.63377151025418 - type: euclidean_pearson value: 83.24657467993141 - type: euclidean_spearman value: 84.02751481993825 - type: manhattan_pearson value: 83.11941806582371 - type: manhattan_spearman value: 83.84251281019304 - task: type: STS dataset: name: MTEB STS17 (ko-ko) type: mteb/sts17-crosslingual-sts config: ko-ko split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 78.95816528475514 - type: cos_sim_spearman value: 78.86607380120462 - type: euclidean_pearson value: 78.51268699230545 - type: euclidean_spearman value: 79.11649316502229 - type: manhattan_pearson value: 78.32367302808157 - type: manhattan_spearman value: 78.90277699624637 - task: type: STS dataset: name: MTEB STS17 (ar-ar) type: mteb/sts17-crosslingual-sts config: ar-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 72.89126914997624 - type: cos_sim_spearman value: 73.0296921832678 - type: euclidean_pearson value: 71.50385903677738 - type: euclidean_spearman value: 73.13368899716289 - type: manhattan_pearson value: 71.47421463379519 - type: manhattan_spearman value: 73.03383242946575 - task: type: STS dataset: name: MTEB STS17 (en-ar) type: mteb/sts17-crosslingual-sts config: en-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 59.22923684492637 - type: cos_sim_spearman value: 57.41013211368396 - type: euclidean_pearson value: 61.21107388080905 - type: euclidean_spearman value: 60.07620768697254 - type: manhattan_pearson value: 59.60157142786555 - type: manhattan_spearman value: 59.14069604103739 - task: type: STS dataset: name: MTEB STS17 (en-de) type: mteb/sts17-crosslingual-sts config: en-de split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 76.24345978774299 - type: cos_sim_spearman value: 77.24225743830719 - type: euclidean_pearson value: 76.66226095469165 - type: euclidean_spearman value: 77.60708820493146 - type: manhattan_pearson value: 76.05303324760429 - type: manhattan_spearman value: 76.96353149912348 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.50879160160852 - type: cos_sim_spearman value: 86.43594662965224 - type: euclidean_pearson value: 86.06846012826577 - type: euclidean_spearman value: 86.02041395794136 - type: manhattan_pearson value: 86.10916255616904 - type: manhattan_spearman value: 86.07346068198953 - task: type: STS dataset: name: MTEB STS17 (en-tr) type: mteb/sts17-crosslingual-sts config: en-tr split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 58.39803698977196 - type: cos_sim_spearman value: 55.96910950423142 - type: euclidean_pearson value: 58.17941175613059 - type: euclidean_spearman value: 55.03019330522745 - type: manhattan_pearson value: 57.333358138183286 - type: manhattan_spearman value: 54.04614023149965 - task: type: STS dataset: name: MTEB STS17 (es-en) type: mteb/sts17-crosslingual-sts config: es-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 70.98304089637197 - type: cos_sim_spearman value: 72.44071656215888 - type: euclidean_pearson value: 72.19224359033983 - type: euclidean_spearman value: 73.89871188913025 - type: manhattan_pearson value: 71.21098311547406 - type: manhattan_spearman value: 72.93405764824821 - task: type: STS dataset: name: MTEB STS17 (es-es) type: mteb/sts17-crosslingual-sts config: es-es split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.99792397466308 - type: cos_sim_spearman value: 84.83824377879495 - type: euclidean_pearson value: 85.70043288694438 - type: euclidean_spearman value: 84.70627558703686 - type: manhattan_pearson value: 85.89570850150801 - type: manhattan_spearman value: 84.95806105313007 - task: type: STS dataset: name: MTEB STS17 (fr-en) type: mteb/sts17-crosslingual-sts config: fr-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 72.21850322994712 - type: cos_sim_spearman value: 72.28669398117248 - type: euclidean_pearson value: 73.40082510412948 - type: euclidean_spearman value: 73.0326539281865 - type: manhattan_pearson value: 71.8659633964841 - type: manhattan_spearman value: 71.57817425823303 - task: type: STS dataset: name: MTEB STS17 (it-en) type: mteb/sts17-crosslingual-sts config: it-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 75.80921368595645 - type: cos_sim_spearman value: 77.33209091229315 - type: euclidean_pearson value: 76.53159540154829 - type: euclidean_spearman value: 78.17960842810093 - type: manhattan_pearson value: 76.13530186637601 - type: manhattan_spearman value: 78.00701437666875 - task: type: STS dataset: name: MTEB STS17 (nl-en) type: mteb/sts17-crosslingual-sts config: nl-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 74.74980608267349 - type: cos_sim_spearman value: 75.37597374318821 - type: euclidean_pearson value: 74.90506081911661 - type: euclidean_spearman value: 75.30151613124521 - type: manhattan_pearson value: 74.62642745918002 - type: manhattan_spearman value: 75.18619716592303 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 59.632662289205584 - type: cos_sim_spearman value: 60.938543391610914 - type: euclidean_pearson value: 62.113200529767056 - type: euclidean_spearman value: 61.410312633261164 - type: manhattan_pearson value: 61.75494698945686 - type: manhattan_spearman value: 60.92726195322362 - task: type: STS dataset: name: MTEB STS22 (de) type: mteb/sts22-crosslingual-sts config: de split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 45.283470551557244 - type: cos_sim_spearman value: 53.44833015864201 - type: euclidean_pearson value: 41.17892011120893 - type: euclidean_spearman value: 53.81441383126767 - type: manhattan_pearson value: 41.17482200420659 - type: manhattan_spearman value: 53.82180269276363 - task: type: STS dataset: name: MTEB STS22 (es) type: mteb/sts22-crosslingual-sts config: es split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 60.5069165306236 - type: cos_sim_spearman value: 66.87803259033826 - type: euclidean_pearson value: 63.5428979418236 - type: euclidean_spearman value: 66.9293576586897 - type: manhattan_pearson value: 63.59789526178922 - type: manhattan_spearman value: 66.86555009875066 - task: type: STS dataset: name: MTEB STS22 (pl) type: mteb/sts22-crosslingual-sts config: pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 28.23026196280264 - type: cos_sim_spearman value: 35.79397812652861 - type: euclidean_pearson value: 17.828102102767353 - type: euclidean_spearman value: 35.721501145568894 - type: manhattan_pearson value: 17.77134274219677 - type: manhattan_spearman value: 35.98107902846267 - task: type: STS dataset: name: MTEB STS22 (tr) type: mteb/sts22-crosslingual-sts config: tr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 56.51946541393812 - type: cos_sim_spearman value: 63.714686006214485 - type: euclidean_pearson value: 58.32104651305898 - type: euclidean_spearman value: 62.237110895702216 - type: manhattan_pearson value: 58.579416468759185 - type: manhattan_spearman value: 62.459738981727 - task: type: STS dataset: name: MTEB STS22 (ar) type: mteb/sts22-crosslingual-sts config: ar split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 48.76009839569795 - type: cos_sim_spearman value: 56.65188431953149 - type: euclidean_pearson value: 50.997682160915595 - type: euclidean_spearman value: 55.99910008818135 - type: manhattan_pearson value: 50.76220659606342 - type: manhattan_spearman value: 55.517347595391456 - task: type: STS dataset: name: MTEB STS22 (ru) type: mteb/sts22-crosslingual-sts config: ru split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cosine_pearson value: 50.724322379215934 - type: cosine_spearman value: 59.90449732164651 - type: euclidean_pearson value: 50.227545226784024 - type: euclidean_spearman value: 59.898906527601085 - type: main_score value: 59.90449732164651 - type: manhattan_pearson value: 50.21762139819405 - type: manhattan_spearman value: 59.761039813759 - type: pearson value: 50.724322379215934 - type: spearman value: 59.90449732164651 - task: type: STS dataset: name: MTEB STS22 (zh) type: mteb/sts22-crosslingual-sts config: zh split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 54.717524559088005 - type: cos_sim_spearman value: 66.83570886252286 - type: euclidean_pearson value: 58.41338625505467 - type: euclidean_spearman value: 66.68991427704938 - type: manhattan_pearson value: 58.78638572916807 - type: manhattan_spearman value: 66.58684161046335 - task: type: STS dataset: name: MTEB STS22 (fr) type: mteb/sts22-crosslingual-sts config: fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 73.2962042954962 - type: cos_sim_spearman value: 76.58255504852025 - type: euclidean_pearson value: 75.70983192778257 - type: euclidean_spearman value: 77.4547684870542 - type: manhattan_pearson value: 75.75565853870485 - type: manhattan_spearman value: 76.90208974949428 - task: type: STS dataset: name: MTEB STS22 (de-en) type: mteb/sts22-crosslingual-sts config: de-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 54.47396266924846 - type: cos_sim_spearman value: 56.492267162048606 - type: euclidean_pearson value: 55.998505203070195 - type: euclidean_spearman value: 56.46447012960222 - type: manhattan_pearson value: 54.873172394430995 - type: manhattan_spearman value: 56.58111534551218 - task: type: STS dataset: name: MTEB STS22 (es-en) type: mteb/sts22-crosslingual-sts config: es-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 69.87177267688686 - type: cos_sim_spearman value: 74.57160943395763 - type: euclidean_pearson value: 70.88330406826788 - type: euclidean_spearman value: 74.29767636038422 - type: manhattan_pearson value: 71.38245248369536 - type: manhattan_spearman value: 74.53102232732175 - task: type: STS dataset: name: MTEB STS22 (it) type: mteb/sts22-crosslingual-sts config: it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 72.80225656959544 - type: cos_sim_spearman value: 76.52646173725735 - type: euclidean_pearson value: 73.95710720200799 - type: euclidean_spearman value: 76.54040031984111 - type: manhattan_pearson value: 73.89679971946774 - type: manhattan_spearman value: 76.60886958161574 - task: type: STS dataset: name: MTEB STS22 (pl-en) type: mteb/sts22-crosslingual-sts config: pl-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 70.70844249898789 - type: cos_sim_spearman value: 72.68571783670241 - type: euclidean_pearson value: 72.38800772441031 - type: euclidean_spearman value: 72.86804422703312 - type: manhattan_pearson value: 71.29840508203515 - type: manhattan_spearman value: 71.86264441749513 - task: type: STS dataset: name: MTEB STS22 (zh-en) type: mteb/sts22-crosslingual-sts config: zh-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 58.647478923935694 - type: cos_sim_spearman value: 63.74453623540931 - type: euclidean_pearson value: 59.60138032437505 - type: euclidean_spearman value: 63.947930832166065 - type: manhattan_pearson value: 58.59735509491861 - type: manhattan_spearman value: 62.082503844627404 - task: type: STS dataset: name: MTEB STS22 (es-it) type: mteb/sts22-crosslingual-sts config: es-it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 65.8722516867162 - type: cos_sim_spearman value: 71.81208592523012 - type: euclidean_pearson value: 67.95315252165956 - type: euclidean_spearman value: 73.00749822046009 - type: manhattan_pearson value: 68.07884688638924 - type: manhattan_spearman value: 72.34210325803069 - task: type: STS dataset: name: MTEB STS22 (de-fr) type: mteb/sts22-crosslingual-sts config: de-fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 54.5405814240949 - type: cos_sim_spearman value: 60.56838649023775 - type: euclidean_pearson value: 53.011731611314104 - type: euclidean_spearman value: 58.533194841668426 - type: manhattan_pearson value: 53.623067729338494 - type: manhattan_spearman value: 58.018756154446926 - task: type: STS dataset: name: MTEB STS22 (de-pl) type: mteb/sts22-crosslingual-sts config: de-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 13.611046866216112 - type: cos_sim_spearman value: 28.238192909158492 - type: euclidean_pearson value: 22.16189199885129 - type: euclidean_spearman value: 35.012895679076564 - type: manhattan_pearson value: 21.969771178698387 - type: manhattan_spearman value: 32.456985088607475 - task: type: STS dataset: name: MTEB STS22 (fr-pl) type: mteb/sts22-crosslingual-sts config: fr-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 74.58077407011655 - type: cos_sim_spearman value: 84.51542547285167 - type: euclidean_pearson value: 74.64613843596234 - type: euclidean_spearman value: 84.51542547285167 - type: manhattan_pearson value: 75.15335973101396 - type: manhattan_spearman value: 84.51542547285167 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 82.0739825531578 - type: cos_sim_spearman value: 84.01057479311115 - type: euclidean_pearson value: 83.85453227433344 - type: euclidean_spearman value: 84.01630226898655 - type: manhattan_pearson value: 83.75323603028978 - type: manhattan_spearman value: 83.89677983727685 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 78.12945623123957 - type: mrr value: 93.87738713719106 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 52.983000000000004 - type: map_at_10 value: 62.946000000000005 - type: map_at_100 value: 63.514 - type: map_at_1000 value: 63.554 - type: map_at_3 value: 60.183 - type: map_at_5 value: 61.672000000000004 - type: mrr_at_1 value: 55.667 - type: mrr_at_10 value: 64.522 - type: mrr_at_100 value: 64.957 - type: mrr_at_1000 value: 64.995 - type: mrr_at_3 value: 62.388999999999996 - type: mrr_at_5 value: 63.639 - type: ndcg_at_1 value: 55.667 - type: ndcg_at_10 value: 67.704 - type: ndcg_at_100 value: 70.299 - type: ndcg_at_1000 value: 71.241 - type: ndcg_at_3 value: 62.866 - type: ndcg_at_5 value: 65.16999999999999 - type: precision_at_1 value: 55.667 - type: precision_at_10 value: 9.033 - type: precision_at_100 value: 1.053 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 24.444 - type: precision_at_5 value: 16.133 - type: recall_at_1 value: 52.983000000000004 - type: recall_at_10 value: 80.656 - type: recall_at_100 value: 92.5 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 67.744 - type: recall_at_5 value: 73.433 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.72772277227723 - type: cos_sim_ap value: 92.17845897992215 - type: cos_sim_f1 value: 85.9746835443038 - type: cos_sim_precision value: 87.07692307692308 - type: cos_sim_recall value: 84.89999999999999 - type: dot_accuracy value: 99.3039603960396 - type: dot_ap value: 60.70244020124878 - type: dot_f1 value: 59.92742353551063 - type: dot_precision value: 62.21743810548978 - type: dot_recall value: 57.8 - type: euclidean_accuracy value: 99.71683168316832 - type: euclidean_ap value: 91.53997039964659 - type: euclidean_f1 value: 84.88372093023257 - type: euclidean_precision value: 90.02242152466367 - type: euclidean_recall value: 80.30000000000001 - type: manhattan_accuracy value: 99.72376237623763 - type: manhattan_ap value: 91.80756777790289 - type: manhattan_f1 value: 85.48468106479157 - type: manhattan_precision value: 85.8728557013118 - type: manhattan_recall value: 85.1 - type: max_accuracy value: 99.72772277227723 - type: max_ap value: 92.17845897992215 - type: max_f1 value: 85.9746835443038 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 53.52464042600003 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 32.071631948736 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.19552407604654 - type: mrr value: 49.95269130379425 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.345293033095427 - type: cos_sim_spearman value: 29.976931423258403 - type: dot_pearson value: 27.047078008958408 - type: dot_spearman value: 27.75894368380218 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.22 - type: map_at_10 value: 1.706 - type: map_at_100 value: 9.634 - type: map_at_1000 value: 23.665 - type: map_at_3 value: 0.5950000000000001 - type: map_at_5 value: 0.95 - type: mrr_at_1 value: 86.0 - type: mrr_at_10 value: 91.8 - type: mrr_at_100 value: 91.8 - type: mrr_at_1000 value: 91.8 - type: mrr_at_3 value: 91.0 - type: mrr_at_5 value: 91.8 - type: ndcg_at_1 value: 80.0 - type: ndcg_at_10 value: 72.573 - type: ndcg_at_100 value: 53.954 - type: ndcg_at_1000 value: 47.760999999999996 - type: ndcg_at_3 value: 76.173 - type: ndcg_at_5 value: 75.264 - type: precision_at_1 value: 86.0 - type: precision_at_10 value: 76.4 - type: precision_at_100 value: 55.50000000000001 - type: precision_at_1000 value: 21.802 - type: precision_at_3 value: 81.333 - type: precision_at_5 value: 80.4 - type: recall_at_1 value: 0.22 - type: recall_at_10 value: 1.925 - type: recall_at_100 value: 12.762 - type: recall_at_1000 value: 44.946000000000005 - type: recall_at_3 value: 0.634 - type: recall_at_5 value: 1.051 - task: type: BitextMining dataset: name: MTEB Tatoeba (sqi-eng) type: mteb/tatoeba-bitext-mining config: sqi-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.0 - type: f1 value: 88.55666666666666 - type: precision value: 87.46166666666667 - type: recall value: 91.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (fry-eng) type: mteb/tatoeba-bitext-mining config: fry-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 57.22543352601156 - type: f1 value: 51.03220478943021 - type: precision value: 48.8150289017341 - type: recall value: 57.22543352601156 - task: type: BitextMining dataset: name: MTEB Tatoeba (kur-eng) type: mteb/tatoeba-bitext-mining config: kur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 46.58536585365854 - type: f1 value: 39.66870798578116 - type: precision value: 37.416085946573745 - type: recall value: 46.58536585365854 - task: type: BitextMining dataset: name: MTEB Tatoeba (tur-eng) type: mteb/tatoeba-bitext-mining config: tur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.7 - type: f1 value: 86.77999999999999 - type: precision value: 85.45333333333332 - type: recall value: 89.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (deu-eng) type: mteb/tatoeba-bitext-mining config: deu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.58333333333331 - type: precision value: 96.2 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (nld-eng) type: mteb/tatoeba-bitext-mining config: nld-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.4 - type: f1 value: 90.3 - type: precision value: 89.31666666666668 - type: recall value: 92.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (ron-eng) type: mteb/tatoeba-bitext-mining config: ron-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.9 - type: f1 value: 83.67190476190476 - type: precision value: 82.23333333333332 - type: recall value: 86.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (ang-eng) type: mteb/tatoeba-bitext-mining config: ang-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 50.0 - type: f1 value: 42.23229092632078 - type: precision value: 39.851634683724235 - type: recall value: 50.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (ido-eng) type: mteb/tatoeba-bitext-mining config: ido-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.3 - type: f1 value: 70.86190476190477 - type: precision value: 68.68777777777777 - type: recall value: 76.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (jav-eng) type: mteb/tatoeba-bitext-mining config: jav-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 57.073170731707314 - type: f1 value: 50.658958927251604 - type: precision value: 48.26480836236933 - type: recall value: 57.073170731707314 - task: type: BitextMining dataset: name: MTEB Tatoeba (isl-eng) type: mteb/tatoeba-bitext-mining config: isl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 68.2 - type: f1 value: 62.156507936507936 - type: precision value: 59.84964285714286 - type: recall value: 68.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (slv-eng) type: mteb/tatoeba-bitext-mining config: slv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.52126366950182 - type: f1 value: 72.8496210148701 - type: precision value: 70.92171498003819 - type: recall value: 77.52126366950182 - task: type: BitextMining dataset: name: MTEB Tatoeba (cym-eng) type: mteb/tatoeba-bitext-mining config: cym-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 70.78260869565217 - type: f1 value: 65.32422360248447 - type: precision value: 63.063067367415194 - type: recall value: 70.78260869565217 - task: type: BitextMining dataset: name: MTEB Tatoeba (kaz-eng) type: mteb/tatoeba-bitext-mining config: kaz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 78.43478260869566 - type: f1 value: 73.02608695652172 - type: precision value: 70.63768115942028 - type: recall value: 78.43478260869566 - task: type: BitextMining dataset: name: MTEB Tatoeba (est-eng) type: mteb/tatoeba-bitext-mining config: est-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 60.9 - type: f1 value: 55.309753694581275 - type: precision value: 53.130476190476195 - type: recall value: 60.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (heb-eng) type: mteb/tatoeba-bitext-mining config: heb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 72.89999999999999 - type: f1 value: 67.92023809523809 - type: precision value: 65.82595238095237 - type: recall value: 72.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (gla-eng) type: mteb/tatoeba-bitext-mining config: gla-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 46.80337756332931 - type: f1 value: 39.42174900558496 - type: precision value: 36.97101116280851 - type: recall value: 46.80337756332931 - task: type: BitextMining dataset: name: MTEB Tatoeba (mar-eng) type: mteb/tatoeba-bitext-mining config: mar-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.8 - type: f1 value: 86.79 - type: precision value: 85.375 - type: recall value: 89.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (lat-eng) type: mteb/tatoeba-bitext-mining config: lat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 47.199999999999996 - type: f1 value: 39.95484348984349 - type: precision value: 37.561071428571424 - type: recall value: 47.199999999999996 - task: type: BitextMining dataset: name: MTEB Tatoeba (bel-eng) type: mteb/tatoeba-bitext-mining config: bel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.8 - type: f1 value: 84.68190476190475 - type: precision value: 83.275 - type: recall value: 87.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (pms-eng) type: mteb/tatoeba-bitext-mining config: pms-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 48.76190476190476 - type: f1 value: 42.14965986394558 - type: precision value: 39.96743626743626 - type: recall value: 48.76190476190476 - task: type: BitextMining dataset: name: MTEB Tatoeba (gle-eng) type: mteb/tatoeba-bitext-mining config: gle-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 66.10000000000001 - type: f1 value: 59.58580086580086 - type: precision value: 57.150238095238095 - type: recall value: 66.10000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (pes-eng) type: mteb/tatoeba-bitext-mining config: pes-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.3 - type: f1 value: 84.0 - type: precision value: 82.48666666666666 - type: recall value: 87.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (nob-eng) type: mteb/tatoeba-bitext-mining config: nob-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.4 - type: f1 value: 87.79523809523809 - type: precision value: 86.6 - type: recall value: 90.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (bul-eng) type: mteb/tatoeba-bitext-mining config: bul-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.0 - type: f1 value: 83.81 - type: precision value: 82.36666666666666 - type: recall value: 87.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (cbk-eng) type: mteb/tatoeba-bitext-mining config: cbk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.9 - type: f1 value: 57.76533189033189 - type: precision value: 55.50595238095239 - type: recall value: 63.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (hun-eng) type: mteb/tatoeba-bitext-mining config: hun-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.1 - type: f1 value: 71.83690476190478 - type: precision value: 70.04928571428573 - type: recall value: 76.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (uig-eng) type: mteb/tatoeba-bitext-mining config: uig-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 66.3 - type: f1 value: 59.32626984126984 - type: precision value: 56.62535714285713 - type: recall value: 66.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (rus-eng) type: mteb/tatoeba-bitext-mining config: rus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.10000000000001 - type: f1 value: 89.76666666666667 - type: main_score value: 89.76666666666667 - type: precision value: 88.64999999999999 - type: recall value: 92.10000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (spa-eng) type: mteb/tatoeba-bitext-mining config: spa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.10000000000001 - type: f1 value: 91.10000000000001 - type: precision value: 90.16666666666666 - type: recall value: 93.10000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (hye-eng) type: mteb/tatoeba-bitext-mining config: hye-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.71428571428571 - type: f1 value: 82.29142600436403 - type: precision value: 80.8076626877166 - type: recall value: 85.71428571428571 - task: type: BitextMining dataset: name: MTEB Tatoeba (tel-eng) type: mteb/tatoeba-bitext-mining config: tel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.88888888888889 - type: f1 value: 85.7834757834758 - type: precision value: 84.43732193732193 - type: recall value: 88.88888888888889 - task: type: BitextMining dataset: name: MTEB Tatoeba (afr-eng) type: mteb/tatoeba-bitext-mining config: afr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.5 - type: f1 value: 85.67190476190476 - type: precision value: 84.43333333333332 - type: recall value: 88.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (mon-eng) type: mteb/tatoeba-bitext-mining config: mon-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.72727272727273 - type: f1 value: 78.21969696969695 - type: precision value: 76.18181818181819 - type: recall value: 82.72727272727273 - task: type: BitextMining dataset: name: MTEB Tatoeba (arz-eng) type: mteb/tatoeba-bitext-mining config: arz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 61.0062893081761 - type: f1 value: 55.13976240391334 - type: precision value: 52.92112499659669 - type: recall value: 61.0062893081761 - task: type: BitextMining dataset: name: MTEB Tatoeba (hrv-eng) type: mteb/tatoeba-bitext-mining config: hrv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.5 - type: f1 value: 86.86666666666666 - type: precision value: 85.69166666666668 - type: recall value: 89.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (nov-eng) type: mteb/tatoeba-bitext-mining config: nov-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 73.54085603112841 - type: f1 value: 68.56031128404669 - type: precision value: 66.53047989623866 - type: recall value: 73.54085603112841 - task: type: BitextMining dataset: name: MTEB Tatoeba (gsw-eng) type: mteb/tatoeba-bitext-mining config: gsw-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 43.58974358974359 - type: f1 value: 36.45299145299145 - type: precision value: 33.81155881155882 - type: recall value: 43.58974358974359 - task: type: BitextMining dataset: name: MTEB Tatoeba (nds-eng) type: mteb/tatoeba-bitext-mining config: nds-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 59.599999999999994 - type: f1 value: 53.264689754689755 - type: precision value: 50.869166666666665 - type: recall value: 59.599999999999994 - task: type: BitextMining dataset: name: MTEB Tatoeba (ukr-eng) type: mteb/tatoeba-bitext-mining config: ukr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.2 - type: f1 value: 81.61666666666665 - type: precision value: 80.02833333333335 - type: recall value: 85.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (uzb-eng) type: mteb/tatoeba-bitext-mining config: uzb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.78504672897196 - type: f1 value: 58.00029669188548 - type: precision value: 55.815809968847354 - type: recall value: 63.78504672897196 - task: type: BitextMining dataset: name: MTEB Tatoeba (lit-eng) type: mteb/tatoeba-bitext-mining config: lit-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 66.5 - type: f1 value: 61.518333333333345 - type: precision value: 59.622363699102834 - type: recall value: 66.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (ina-eng) type: mteb/tatoeba-bitext-mining config: ina-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.6 - type: f1 value: 85.60222222222221 - type: precision value: 84.27916666666665 - type: recall value: 88.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (lfn-eng) type: mteb/tatoeba-bitext-mining config: lfn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 58.699999999999996 - type: f1 value: 52.732375957375965 - type: precision value: 50.63214035964035 - type: recall value: 58.699999999999996 - task: type: BitextMining dataset: name: MTEB Tatoeba (zsm-eng) type: mteb/tatoeba-bitext-mining config: zsm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.10000000000001 - type: f1 value: 89.99666666666667 - type: precision value: 89.03333333333333 - type: recall value: 92.10000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (ita-eng) type: mteb/tatoeba-bitext-mining config: ita-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.10000000000001 - type: f1 value: 87.55666666666667 - type: precision value: 86.36166666666668 - type: recall value: 90.10000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (cmn-eng) type: mteb/tatoeba-bitext-mining config: cmn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.4 - type: f1 value: 88.89000000000001 - type: precision value: 87.71166666666666 - type: recall value: 91.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (lvs-eng) type: mteb/tatoeba-bitext-mining config: lvs-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.7 - type: f1 value: 60.67427750410509 - type: precision value: 58.71785714285714 - type: recall value: 65.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (glg-eng) type: mteb/tatoeba-bitext-mining config: glg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.39999999999999 - type: f1 value: 81.93190476190475 - type: precision value: 80.37833333333333 - type: recall value: 85.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ceb-eng) type: mteb/tatoeba-bitext-mining config: ceb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 47.833333333333336 - type: f1 value: 42.006625781625786 - type: precision value: 40.077380952380956 - type: recall value: 47.833333333333336 - task: type: BitextMining dataset: name: MTEB Tatoeba (bre-eng) type: mteb/tatoeba-bitext-mining config: bre-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 10.4 - type: f1 value: 8.24465007215007 - type: precision value: 7.664597069597071 - type: recall value: 10.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (ben-eng) type: mteb/tatoeba-bitext-mining config: ben-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.6 - type: f1 value: 77.76333333333334 - type: precision value: 75.57833333333332 - type: recall value: 82.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (swg-eng) type: mteb/tatoeba-bitext-mining config: swg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 52.67857142857143 - type: f1 value: 44.302721088435376 - type: precision value: 41.49801587301587 - type: recall value: 52.67857142857143 - task: type: BitextMining dataset: name: MTEB Tatoeba (arq-eng) type: mteb/tatoeba-bitext-mining config: arq-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 28.3205268935236 - type: f1 value: 22.426666605171157 - type: precision value: 20.685900116470915 - type: recall value: 28.3205268935236 - task: type: BitextMining dataset: name: MTEB Tatoeba (kab-eng) type: mteb/tatoeba-bitext-mining config: kab-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 22.7 - type: f1 value: 17.833970473970474 - type: precision value: 16.407335164835164 - type: recall value: 22.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (fra-eng) type: mteb/tatoeba-bitext-mining config: fra-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.2 - type: f1 value: 89.92999999999999 - type: precision value: 88.87 - type: recall value: 92.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (por-eng) type: mteb/tatoeba-bitext-mining config: por-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.4 - type: f1 value: 89.25 - type: precision value: 88.21666666666667 - type: recall value: 91.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (tat-eng) type: mteb/tatoeba-bitext-mining config: tat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 69.19999999999999 - type: f1 value: 63.38269841269841 - type: precision value: 61.14773809523809 - type: recall value: 69.19999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (oci-eng) type: mteb/tatoeba-bitext-mining config: oci-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 48.8 - type: f1 value: 42.839915639915645 - type: precision value: 40.770287114845935 - type: recall value: 48.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (pol-eng) type: mteb/tatoeba-bitext-mining config: pol-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.8 - type: f1 value: 85.90666666666668 - type: precision value: 84.54166666666666 - type: recall value: 88.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (war-eng) type: mteb/tatoeba-bitext-mining config: war-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 46.6 - type: f1 value: 40.85892920804686 - type: precision value: 38.838223114604695 - type: recall value: 46.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (aze-eng) type: mteb/tatoeba-bitext-mining config: aze-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.0 - type: f1 value: 80.14190476190475 - type: precision value: 78.45333333333333 - type: recall value: 84.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (vie-eng) type: mteb/tatoeba-bitext-mining config: vie-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.5 - type: f1 value: 87.78333333333333 - type: precision value: 86.5 - type: recall value: 90.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (nno-eng) type: mteb/tatoeba-bitext-mining config: nno-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.5 - type: f1 value: 69.48397546897547 - type: precision value: 67.51869047619049 - type: recall value: 74.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (cha-eng) type: mteb/tatoeba-bitext-mining config: cha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 32.846715328467155 - type: f1 value: 27.828177499710343 - type: precision value: 26.63451511991658 - type: recall value: 32.846715328467155 - task: type: BitextMining dataset: name: MTEB Tatoeba (mhr-eng) type: mteb/tatoeba-bitext-mining config: mhr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.0 - type: f1 value: 6.07664116764988 - type: precision value: 5.544177607179943 - type: recall value: 8.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (dan-eng) type: mteb/tatoeba-bitext-mining config: dan-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.6 - type: f1 value: 84.38555555555554 - type: precision value: 82.91583333333334 - type: recall value: 87.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (ell-eng) type: mteb/tatoeba-bitext-mining config: ell-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.5 - type: f1 value: 84.08333333333331 - type: precision value: 82.47333333333333 - type: recall value: 87.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (amh-eng) type: mteb/tatoeba-bitext-mining config: amh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.95238095238095 - type: f1 value: 76.13095238095238 - type: precision value: 74.05753968253967 - type: recall value: 80.95238095238095 - task: type: BitextMining dataset: name: MTEB Tatoeba (pam-eng) type: mteb/tatoeba-bitext-mining config: pam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.799999999999999 - type: f1 value: 6.971422975172975 - type: precision value: 6.557814916172301 - type: recall value: 8.799999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (hsb-eng) type: mteb/tatoeba-bitext-mining config: hsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 44.099378881987576 - type: f1 value: 37.01649742022413 - type: precision value: 34.69420618488942 - type: recall value: 44.099378881987576 - task: type: BitextMining dataset: name: MTEB Tatoeba (srp-eng) type: mteb/tatoeba-bitext-mining config: srp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.3 - type: f1 value: 80.32666666666667 - type: precision value: 78.60666666666665 - type: recall value: 84.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (epo-eng) type: mteb/tatoeba-bitext-mining config: epo-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.5 - type: f1 value: 90.49666666666666 - type: precision value: 89.56666666666668 - type: recall value: 92.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (kzj-eng) type: mteb/tatoeba-bitext-mining config: kzj-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 10.0 - type: f1 value: 8.268423529875141 - type: precision value: 7.878118605532398 - type: recall value: 10.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (awa-eng) type: mteb/tatoeba-bitext-mining config: awa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 79.22077922077922 - type: f1 value: 74.27128427128426 - type: precision value: 72.28715728715729 - type: recall value: 79.22077922077922 - task: type: BitextMining dataset: name: MTEB Tatoeba (fao-eng) type: mteb/tatoeba-bitext-mining config: fao-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.64885496183206 - type: f1 value: 58.87495456197747 - type: precision value: 55.992366412213734 - type: recall value: 65.64885496183206 - task: type: BitextMining dataset: name: MTEB Tatoeba (mal-eng) type: mteb/tatoeba-bitext-mining config: mal-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.06986899563319 - type: f1 value: 94.78408539543909 - type: precision value: 94.15332362930616 - type: recall value: 96.06986899563319 - task: type: BitextMining dataset: name: MTEB Tatoeba (ile-eng) type: mteb/tatoeba-bitext-mining config: ile-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.2 - type: f1 value: 71.72571428571428 - type: precision value: 69.41000000000001 - type: recall value: 77.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (bos-eng) type: mteb/tatoeba-bitext-mining config: bos-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.4406779661017 - type: f1 value: 83.2391713747646 - type: precision value: 81.74199623352166 - type: recall value: 86.4406779661017 - task: type: BitextMining dataset: name: MTEB Tatoeba (cor-eng) type: mteb/tatoeba-bitext-mining config: cor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.4 - type: f1 value: 6.017828743398003 - type: precision value: 5.4829865484756795 - type: recall value: 8.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (cat-eng) type: mteb/tatoeba-bitext-mining config: cat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.5 - type: f1 value: 79.74833333333333 - type: precision value: 78.04837662337664 - type: recall value: 83.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (eus-eng) type: mteb/tatoeba-bitext-mining config: eus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 60.4 - type: f1 value: 54.467301587301584 - type: precision value: 52.23242424242424 - type: recall value: 60.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (yue-eng) type: mteb/tatoeba-bitext-mining config: yue-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.9 - type: f1 value: 69.68699134199134 - type: precision value: 67.59873015873016 - type: recall value: 74.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (swe-eng) type: mteb/tatoeba-bitext-mining config: swe-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.0 - type: f1 value: 84.9652380952381 - type: precision value: 83.66166666666666 - type: recall value: 88.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (dtp-eng) type: mteb/tatoeba-bitext-mining config: dtp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 9.1 - type: f1 value: 7.681244588744588 - type: precision value: 7.370043290043291 - type: recall value: 9.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (kat-eng) type: mteb/tatoeba-bitext-mining config: kat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.9651474530831 - type: f1 value: 76.84220605132133 - type: precision value: 75.19606398962966 - type: recall value: 80.9651474530831 - task: type: BitextMining dataset: name: MTEB Tatoeba (jpn-eng) type: mteb/tatoeba-bitext-mining config: jpn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.9 - type: f1 value: 83.705 - type: precision value: 82.3120634920635 - type: recall value: 86.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (csb-eng) type: mteb/tatoeba-bitext-mining config: csb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 29.64426877470356 - type: f1 value: 23.98763072676116 - type: precision value: 22.506399397703746 - type: recall value: 29.64426877470356 - task: type: BitextMining dataset: name: MTEB Tatoeba (xho-eng) type: mteb/tatoeba-bitext-mining config: xho-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 70.4225352112676 - type: f1 value: 62.84037558685445 - type: precision value: 59.56572769953053 - type: recall value: 70.4225352112676 - task: type: BitextMining dataset: name: MTEB Tatoeba (orv-eng) type: mteb/tatoeba-bitext-mining config: orv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 19.64071856287425 - type: f1 value: 15.125271011207756 - type: precision value: 13.865019261197494 - type: recall value: 19.64071856287425 - task: type: BitextMining dataset: name: MTEB Tatoeba (ind-eng) type: mteb/tatoeba-bitext-mining config: ind-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.2 - type: f1 value: 87.80666666666666 - type: precision value: 86.70833333333331 - type: recall value: 90.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (tuk-eng) type: mteb/tatoeba-bitext-mining config: tuk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 23.15270935960591 - type: f1 value: 18.407224958949097 - type: precision value: 16.982385430661292 - type: recall value: 23.15270935960591 - task: type: BitextMining dataset: name: MTEB Tatoeba (max-eng) type: mteb/tatoeba-bitext-mining config: max-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 55.98591549295775 - type: f1 value: 49.94718309859154 - type: precision value: 47.77864154624717 - type: recall value: 55.98591549295775 - task: type: BitextMining dataset: name: MTEB Tatoeba (swh-eng) type: mteb/tatoeba-bitext-mining config: swh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 73.07692307692307 - type: f1 value: 66.74358974358974 - type: precision value: 64.06837606837607 - type: recall value: 73.07692307692307 - task: type: BitextMining dataset: name: MTEB Tatoeba (hin-eng) type: mteb/tatoeba-bitext-mining config: hin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.25 - type: precision value: 92.43333333333332 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (dsb-eng) type: mteb/tatoeba-bitext-mining config: dsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 37.78705636743215 - type: f1 value: 31.63899658680452 - type: precision value: 29.72264397629742 - type: recall value: 37.78705636743215 - task: type: BitextMining dataset: name: MTEB Tatoeba (ber-eng) type: mteb/tatoeba-bitext-mining config: ber-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 21.6 - type: f1 value: 16.91697302697303 - type: precision value: 15.71225147075147 - type: recall value: 21.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (tam-eng) type: mteb/tatoeba-bitext-mining config: tam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.01628664495115 - type: f1 value: 81.38514037536838 - type: precision value: 79.83170466883823 - type: recall value: 85.01628664495115 - task: type: BitextMining dataset: name: MTEB Tatoeba (slk-eng) type: mteb/tatoeba-bitext-mining config: slk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.39999999999999 - type: f1 value: 79.96380952380952 - type: precision value: 78.48333333333333 - type: recall value: 83.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tgl-eng) type: mteb/tatoeba-bitext-mining config: tgl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.2 - type: f1 value: 79.26190476190476 - type: precision value: 77.58833333333334 - type: recall value: 83.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (ast-eng) type: mteb/tatoeba-bitext-mining config: ast-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 75.59055118110236 - type: f1 value: 71.66854143232096 - type: precision value: 70.30183727034121 - type: recall value: 75.59055118110236 - task: type: BitextMining dataset: name: MTEB Tatoeba (mkd-eng) type: mteb/tatoeba-bitext-mining config: mkd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.5 - type: f1 value: 59.26095238095238 - type: precision value: 56.81909090909092 - type: recall value: 65.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (khm-eng) type: mteb/tatoeba-bitext-mining config: khm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 55.26315789473685 - type: f1 value: 47.986523325858506 - type: precision value: 45.33950006595436 - type: recall value: 55.26315789473685 - task: type: BitextMining dataset: name: MTEB Tatoeba (ces-eng) type: mteb/tatoeba-bitext-mining config: ces-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.89999999999999 - type: f1 value: 78.835 - type: precision value: 77.04761904761905 - type: recall value: 82.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tzl-eng) type: mteb/tatoeba-bitext-mining config: tzl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 43.269230769230774 - type: f1 value: 36.20421245421245 - type: precision value: 33.57371794871795 - type: recall value: 43.269230769230774 - task: type: BitextMining dataset: name: MTEB Tatoeba (urd-eng) type: mteb/tatoeba-bitext-mining config: urd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.0 - type: f1 value: 84.70666666666666 - type: precision value: 83.23166666666665 - type: recall value: 88.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (ara-eng) type: mteb/tatoeba-bitext-mining config: ara-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.4 - type: f1 value: 72.54666666666667 - type: precision value: 70.54318181818181 - type: recall value: 77.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (kor-eng) type: mteb/tatoeba-bitext-mining config: kor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 78.60000000000001 - type: f1 value: 74.1588888888889 - type: precision value: 72.30250000000001 - type: recall value: 78.60000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (yid-eng) type: mteb/tatoeba-bitext-mining config: yid-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 72.40566037735849 - type: f1 value: 66.82587328813744 - type: precision value: 64.75039308176099 - type: recall value: 72.40566037735849 - task: type: BitextMining dataset: name: MTEB Tatoeba (fin-eng) type: mteb/tatoeba-bitext-mining config: fin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 73.8 - type: f1 value: 68.56357142857144 - type: precision value: 66.3178822055138 - type: recall value: 73.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (tha-eng) type: mteb/tatoeba-bitext-mining config: tha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.78832116788321 - type: f1 value: 89.3552311435523 - type: precision value: 88.20559610705597 - type: recall value: 91.78832116788321 - task: type: BitextMining dataset: name: MTEB Tatoeba (wuu-eng) type: mteb/tatoeba-bitext-mining config: wuu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.3 - type: f1 value: 69.05085581085581 - type: precision value: 66.955 - type: recall value: 74.3 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.896 - type: map_at_10 value: 8.993 - type: map_at_100 value: 14.133999999999999 - type: map_at_1000 value: 15.668000000000001 - type: map_at_3 value: 5.862 - type: map_at_5 value: 7.17 - type: mrr_at_1 value: 34.694 - type: mrr_at_10 value: 42.931000000000004 - type: mrr_at_100 value: 44.81 - type: mrr_at_1000 value: 44.81 - type: mrr_at_3 value: 38.435 - type: mrr_at_5 value: 41.701 - type: ndcg_at_1 value: 31.633 - type: ndcg_at_10 value: 21.163 - type: ndcg_at_100 value: 33.306000000000004 - type: ndcg_at_1000 value: 45.275999999999996 - type: ndcg_at_3 value: 25.685999999999996 - type: ndcg_at_5 value: 23.732 - type: precision_at_1 value: 34.694 - type: precision_at_10 value: 17.755000000000003 - type: precision_at_100 value: 6.938999999999999 - type: precision_at_1000 value: 1.48 - type: precision_at_3 value: 25.85 - type: precision_at_5 value: 23.265 - type: recall_at_1 value: 2.896 - type: recall_at_10 value: 13.333999999999998 - type: recall_at_100 value: 43.517 - type: recall_at_1000 value: 79.836 - type: recall_at_3 value: 6.306000000000001 - type: recall_at_5 value: 8.825 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 69.3874 - type: ap value: 13.829909072469423 - type: f1 value: 53.54534203543492 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 62.62026032823995 - type: f1 value: 62.85251350485221 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 33.21527881409797 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 84.97943613280086 - type: cos_sim_ap value: 70.75454316885921 - type: cos_sim_f1 value: 65.38274012676743 - type: cos_sim_precision value: 60.761214318078835 - type: cos_sim_recall value: 70.76517150395777 - type: dot_accuracy value: 79.0546581629612 - type: dot_ap value: 47.3197121792147 - type: dot_f1 value: 49.20106524633821 - type: dot_precision value: 42.45499808502489 - type: dot_recall value: 58.49604221635884 - type: euclidean_accuracy value: 85.08076533349228 - type: euclidean_ap value: 70.95016106374474 - type: euclidean_f1 value: 65.43987900176455 - type: euclidean_precision value: 62.64478764478765 - type: euclidean_recall value: 68.49604221635884 - type: manhattan_accuracy value: 84.93771234428085 - type: manhattan_ap value: 70.63668388755362 - type: manhattan_f1 value: 65.23895401262398 - type: manhattan_precision value: 56.946084218811485 - type: manhattan_recall value: 76.35883905013192 - type: max_accuracy value: 85.08076533349228 - type: max_ap value: 70.95016106374474 - type: max_f1 value: 65.43987900176455 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.69096130709822 - type: cos_sim_ap value: 84.82526278228542 - type: cos_sim_f1 value: 77.65485060585536 - type: cos_sim_precision value: 75.94582658619167 - type: cos_sim_recall value: 79.44256236526024 - type: dot_accuracy value: 80.97954748321496 - type: dot_ap value: 64.81642914145866 - type: dot_f1 value: 60.631996987229975 - type: dot_precision value: 54.5897293631712 - type: dot_recall value: 68.17831844779796 - type: euclidean_accuracy value: 88.6987231730508 - type: euclidean_ap value: 84.80003825477253 - type: euclidean_f1 value: 77.67194179854496 - type: euclidean_precision value: 75.7128235122094 - type: euclidean_recall value: 79.73514012935017 - type: manhattan_accuracy value: 88.62692591298949 - type: manhattan_ap value: 84.80451408255276 - type: manhattan_f1 value: 77.69888949572183 - type: manhattan_precision value: 73.70311528631622 - type: manhattan_recall value: 82.15275639051433 - type: max_accuracy value: 88.6987231730508 - type: max_ap value: 84.82526278228542 - type: max_f1 value: 77.69888949572183 - task: type: BitextMining dataset: name: MTEB BUCC.v2 (ru-en) type: mteb/bucc-bitext-mining config: ru-en split: test revision: 1739dc11ffe9b7bfccd7f3d585aeb4c544fc6677 metrics: - type: accuracy value: 95.72566678212678 - type: f1 value: 94.42443135896548 - type: main_score value: 94.42443135896548 - type: precision value: 93.80868260016165 - type: recall value: 95.72566678212678 - task: type: Retrieval dataset: name: MTEB BelebeleRetrieval (rus_Cyrl-rus_Cyrl) type: facebook/belebele config: rus_Cyrl-rus_Cyrl split: test revision: 75b399394a9803252cfec289d103de462763db7c metrics: - type: main_score value: 92.23599999999999 - type: map_at_1 value: 87.111 - type: map_at_10 value: 90.717 - type: map_at_100 value: 90.879 - type: map_at_1000 value: 90.881 - type: map_at_20 value: 90.849 - type: map_at_3 value: 90.074 - type: map_at_5 value: 90.535 - type: mrr_at_1 value: 87.1111111111111 - type: mrr_at_10 value: 90.7173721340388 - type: mrr_at_100 value: 90.87859682638407 - type: mrr_at_1000 value: 90.88093553612326 - type: mrr_at_20 value: 90.84863516113515 - type: mrr_at_3 value: 90.07407407407409 - type: mrr_at_5 value: 90.53518518518521 - type: nauc_map_at_1000_diff1 value: 92.37373187280554 - type: nauc_map_at_1000_max value: 79.90465445423249 - type: nauc_map_at_1000_std value: -0.6220290556185463 - type: nauc_map_at_100_diff1 value: 92.37386697345335 - type: nauc_map_at_100_max value: 79.90991577223959 - type: nauc_map_at_100_std value: -0.602247514642845 - type: nauc_map_at_10_diff1 value: 92.30907447072467 - type: nauc_map_at_10_max value: 79.86831935337598 - type: nauc_map_at_10_std value: -0.7455191860719699 - type: nauc_map_at_1_diff1 value: 93.29828518358822 - type: nauc_map_at_1_max value: 78.69539619887887 - type: nauc_map_at_1_std value: -4.097150817605763 - type: nauc_map_at_20_diff1 value: 92.38414149703077 - type: nauc_map_at_20_max value: 79.94789814504661 - type: nauc_map_at_20_std value: -0.3928031130400773 - type: nauc_map_at_3_diff1 value: 92.21688899306734 - type: nauc_map_at_3_max value: 80.34586671780885 - type: nauc_map_at_3_std value: 0.24088319695435909 - type: nauc_map_at_5_diff1 value: 92.27931726042982 - type: nauc_map_at_5_max value: 79.99198834003367 - type: nauc_map_at_5_std value: -0.6296366922840796 - type: nauc_mrr_at_1000_diff1 value: 92.37373187280554 - type: nauc_mrr_at_1000_max value: 79.90465445423249 - type: nauc_mrr_at_1000_std value: -0.6220290556185463 - type: nauc_mrr_at_100_diff1 value: 92.37386697345335 - type: nauc_mrr_at_100_max value: 79.90991577223959 - type: nauc_mrr_at_100_std value: -0.602247514642845 - type: nauc_mrr_at_10_diff1 value: 92.30907447072467 - type: nauc_mrr_at_10_max value: 79.86831935337598 - type: nauc_mrr_at_10_std value: -0.7455191860719699 - type: nauc_mrr_at_1_diff1 value: 93.29828518358822 - type: nauc_mrr_at_1_max value: 78.69539619887887 - type: nauc_mrr_at_1_std value: -4.097150817605763 - type: nauc_mrr_at_20_diff1 value: 92.38414149703077 - type: nauc_mrr_at_20_max value: 79.94789814504661 - type: nauc_mrr_at_20_std value: -0.3928031130400773 - type: nauc_mrr_at_3_diff1 value: 92.21688899306734 - type: nauc_mrr_at_3_max value: 80.34586671780885 - type: nauc_mrr_at_3_std value: 0.24088319695435909 - type: nauc_mrr_at_5_diff1 value: 92.27931726042982 - type: nauc_mrr_at_5_max value: 79.99198834003367 - type: nauc_mrr_at_5_std value: -0.6296366922840796 - type: nauc_ndcg_at_1000_diff1 value: 92.30526497646306 - type: nauc_ndcg_at_1000_max value: 80.12734537480418 - type: nauc_ndcg_at_1000_std value: 0.22849408935578744 - type: nauc_ndcg_at_100_diff1 value: 92.31347123202318 - type: nauc_ndcg_at_100_max value: 80.29207038703142 - type: nauc_ndcg_at_100_std value: 0.816825944406239 - type: nauc_ndcg_at_10_diff1 value: 92.05430189845808 - type: nauc_ndcg_at_10_max value: 80.16515667442968 - type: nauc_ndcg_at_10_std value: 0.7486447532544893 - type: nauc_ndcg_at_1_diff1 value: 93.29828518358822 - type: nauc_ndcg_at_1_max value: 78.69539619887887 - type: nauc_ndcg_at_1_std value: -4.097150817605763 - type: nauc_ndcg_at_20_diff1 value: 92.40147868825079 - type: nauc_ndcg_at_20_max value: 80.5117307181802 - type: nauc_ndcg_at_20_std value: 2.0431351539517033 - type: nauc_ndcg_at_3_diff1 value: 91.88894444422789 - type: nauc_ndcg_at_3_max value: 81.09256084196045 - type: nauc_ndcg_at_3_std value: 2.422705909643621 - type: nauc_ndcg_at_5_diff1 value: 91.99711052955728 - type: nauc_ndcg_at_5_max value: 80.46996334573979 - type: nauc_ndcg_at_5_std value: 0.9086986899040708 - type: nauc_precision_at_1000_diff1 value: .nan - type: nauc_precision_at_1000_max value: .nan - type: nauc_precision_at_1000_std value: .nan - type: nauc_precision_at_100_diff1 value: 93.46405228758012 - type: nauc_precision_at_100_max value: 100.0 - type: nauc_precision_at_100_std value: 70.71661998132774 - type: nauc_precision_at_10_diff1 value: 90.13938908896874 - type: nauc_precision_at_10_max value: 82.21121782046167 - type: nauc_precision_at_10_std value: 13.075230092036083 - type: nauc_precision_at_1_diff1 value: 93.29828518358822 - type: nauc_precision_at_1_max value: 78.69539619887887 - type: nauc_precision_at_1_std value: -4.097150817605763 - type: nauc_precision_at_20_diff1 value: 94.9723479135242 - type: nauc_precision_at_20_max value: 91.04000574588684 - type: nauc_precision_at_20_std value: 48.764634058749586 - type: nauc_precision_at_3_diff1 value: 90.52690041533852 - type: nauc_precision_at_3_max value: 84.35075179497126 - type: nauc_precision_at_3_std value: 12.036768730480507 - type: nauc_precision_at_5_diff1 value: 90.44234360410769 - type: nauc_precision_at_5_max value: 83.21895424836558 - type: nauc_precision_at_5_std value: 9.974323062558037 - type: nauc_recall_at_1000_diff1 value: .nan - type: nauc_recall_at_1000_max value: .nan - type: nauc_recall_at_1000_std value: .nan - type: nauc_recall_at_100_diff1 value: 93.46405228758294 - type: nauc_recall_at_100_max value: 100.0 - type: nauc_recall_at_100_std value: 70.71661998132666 - type: nauc_recall_at_10_diff1 value: 90.13938908896864 - type: nauc_recall_at_10_max value: 82.21121782046124 - type: nauc_recall_at_10_std value: 13.075230092036506 - type: nauc_recall_at_1_diff1 value: 93.29828518358822 - type: nauc_recall_at_1_max value: 78.69539619887887 - type: nauc_recall_at_1_std value: -4.097150817605763 - type: nauc_recall_at_20_diff1 value: 94.97234791352489 - type: nauc_recall_at_20_max value: 91.04000574588774 - type: nauc_recall_at_20_std value: 48.764634058752065 - type: nauc_recall_at_3_diff1 value: 90.52690041533845 - type: nauc_recall_at_3_max value: 84.35075179497079 - type: nauc_recall_at_3_std value: 12.036768730480583 - type: nauc_recall_at_5_diff1 value: 90.44234360410861 - type: nauc_recall_at_5_max value: 83.21895424836595 - type: nauc_recall_at_5_std value: 9.974323062558147 - type: ndcg_at_1 value: 87.111 - type: ndcg_at_10 value: 92.23599999999999 - type: ndcg_at_100 value: 92.87100000000001 - type: ndcg_at_1000 value: 92.928 - type: ndcg_at_20 value: 92.67699999999999 - type: ndcg_at_3 value: 90.973 - type: ndcg_at_5 value: 91.801 - type: precision_at_1 value: 87.111 - type: precision_at_10 value: 9.689 - type: precision_at_100 value: 0.996 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.928 - type: precision_at_3 value: 31.185000000000002 - type: precision_at_5 value: 19.111 - type: recall_at_1 value: 87.111 - type: recall_at_10 value: 96.88900000000001 - type: recall_at_100 value: 99.556 - type: recall_at_1000 value: 100.0 - type: recall_at_20 value: 98.556 - type: recall_at_3 value: 93.556 - type: recall_at_5 value: 95.556 - task: type: Retrieval dataset: name: MTEB BelebeleRetrieval (rus_Cyrl-eng_Latn) type: facebook/belebele config: rus_Cyrl-eng_Latn split: test revision: 75b399394a9803252cfec289d103de462763db7c metrics: - type: main_score value: 86.615 - type: map_at_1 value: 78.0 - type: map_at_10 value: 83.822 - type: map_at_100 value: 84.033 - type: map_at_1000 value: 84.03500000000001 - type: map_at_20 value: 83.967 - type: map_at_3 value: 82.315 - type: map_at_5 value: 83.337 - type: mrr_at_1 value: 78.0 - type: mrr_at_10 value: 83.82213403880073 - type: mrr_at_100 value: 84.03281327810801 - type: mrr_at_1000 value: 84.03460051000452 - type: mrr_at_20 value: 83.9673773122303 - type: mrr_at_3 value: 82.31481481481484 - type: mrr_at_5 value: 83.33703703703708 - type: nauc_map_at_1000_diff1 value: 80.78467576987832 - type: nauc_map_at_1000_max value: 51.41718334647604 - type: nauc_map_at_1000_std value: -16.23873782768812 - type: nauc_map_at_100_diff1 value: 80.78490931240695 - type: nauc_map_at_100_max value: 51.41504597713061 - type: nauc_map_at_100_std value: -16.23538559475366 - type: nauc_map_at_10_diff1 value: 80.73989245374868 - type: nauc_map_at_10_max value: 51.43026079433827 - type: nauc_map_at_10_std value: -16.13414330905897 - type: nauc_map_at_1_diff1 value: 82.36966971144186 - type: nauc_map_at_1_max value: 52.988877039509916 - type: nauc_map_at_1_std value: -15.145824639495546 - type: nauc_map_at_20_diff1 value: 80.75923781626145 - type: nauc_map_at_20_max value: 51.40181079374639 - type: nauc_map_at_20_std value: -16.260566097377165 - type: nauc_map_at_3_diff1 value: 80.65242627065471 - type: nauc_map_at_3_max value: 50.623980338841214 - type: nauc_map_at_3_std value: -16.818343442794294 - type: nauc_map_at_5_diff1 value: 80.45976387021862 - type: nauc_map_at_5_max value: 51.533621728445866 - type: nauc_map_at_5_std value: -16.279891536945815 - type: nauc_mrr_at_1000_diff1 value: 80.78467576987832 - type: nauc_mrr_at_1000_max value: 51.41718334647604 - type: nauc_mrr_at_1000_std value: -16.23873782768812 - type: nauc_mrr_at_100_diff1 value: 80.78490931240695 - type: nauc_mrr_at_100_max value: 51.41504597713061 - type: nauc_mrr_at_100_std value: -16.23538559475366 - type: nauc_mrr_at_10_diff1 value: 80.73989245374868 - type: nauc_mrr_at_10_max value: 51.43026079433827 - type: nauc_mrr_at_10_std value: -16.13414330905897 - type: nauc_mrr_at_1_diff1 value: 82.36966971144186 - type: nauc_mrr_at_1_max value: 52.988877039509916 - type: nauc_mrr_at_1_std value: -15.145824639495546 - type: nauc_mrr_at_20_diff1 value: 80.75923781626145 - type: nauc_mrr_at_20_max value: 51.40181079374639 - type: nauc_mrr_at_20_std value: -16.260566097377165 - type: nauc_mrr_at_3_diff1 value: 80.65242627065471 - type: nauc_mrr_at_3_max value: 50.623980338841214 - type: nauc_mrr_at_3_std value: -16.818343442794294 - type: nauc_mrr_at_5_diff1 value: 80.45976387021862 - type: nauc_mrr_at_5_max value: 51.533621728445866 - type: nauc_mrr_at_5_std value: -16.279891536945815 - type: nauc_ndcg_at_1000_diff1 value: 80.60009446938174 - type: nauc_ndcg_at_1000_max value: 51.381708043594166 - type: nauc_ndcg_at_1000_std value: -16.054256944160848 - type: nauc_ndcg_at_100_diff1 value: 80.58971462930421 - type: nauc_ndcg_at_100_max value: 51.25436917735444 - type: nauc_ndcg_at_100_std value: -15.862944972269894 - type: nauc_ndcg_at_10_diff1 value: 80.37967179454489 - type: nauc_ndcg_at_10_max value: 51.590394257251006 - type: nauc_ndcg_at_10_std value: -15.489799384799591 - type: nauc_ndcg_at_1_diff1 value: 82.36966971144186 - type: nauc_ndcg_at_1_max value: 52.988877039509916 - type: nauc_ndcg_at_1_std value: -15.145824639495546 - type: nauc_ndcg_at_20_diff1 value: 80.40299527470081 - type: nauc_ndcg_at_20_max value: 51.395132284307074 - type: nauc_ndcg_at_20_std value: -15.906165526937203 - type: nauc_ndcg_at_3_diff1 value: 80.10347913649302 - type: nauc_ndcg_at_3_max value: 50.018431855573844 - type: nauc_ndcg_at_3_std value: -17.12743750163884 - type: nauc_ndcg_at_5_diff1 value: 79.65918647776613 - type: nauc_ndcg_at_5_max value: 51.76710880330806 - type: nauc_ndcg_at_5_std value: -16.071901882035945 - type: nauc_precision_at_1000_diff1 value: .nan - type: nauc_precision_at_1000_max value: .nan - type: nauc_precision_at_1000_std value: .nan - type: nauc_precision_at_100_diff1 value: 77.41596638655459 - type: nauc_precision_at_100_max value: 22.572362278246565 - type: nauc_precision_at_100_std value: 26.890756302525716 - type: nauc_precision_at_10_diff1 value: 77.82112845138009 - type: nauc_precision_at_10_max value: 54.2550353474723 - type: nauc_precision_at_10_std value: -7.492997198879646 - type: nauc_precision_at_1_diff1 value: 82.36966971144186 - type: nauc_precision_at_1_max value: 52.988877039509916 - type: nauc_precision_at_1_std value: -15.145824639495546 - type: nauc_precision_at_20_diff1 value: 75.89091192032318 - type: nauc_precision_at_20_max value: 52.03275754746293 - type: nauc_precision_at_20_std value: -7.8411920323686175 - type: nauc_precision_at_3_diff1 value: 78.0256020644638 - type: nauc_precision_at_3_max value: 47.80353641248523 - type: nauc_precision_at_3_std value: -18.181625255723503 - type: nauc_precision_at_5_diff1 value: 75.21583976056174 - type: nauc_precision_at_5_max value: 53.716281032960765 - type: nauc_precision_at_5_std value: -14.411700753360812 - type: nauc_recall_at_1000_diff1 value: .nan - type: nauc_recall_at_1000_max value: .nan - type: nauc_recall_at_1000_std value: .nan - type: nauc_recall_at_100_diff1 value: 77.4159663865523 - type: nauc_recall_at_100_max value: 22.57236227824646 - type: nauc_recall_at_100_std value: 26.89075630252133 - type: nauc_recall_at_10_diff1 value: 77.82112845138037 - type: nauc_recall_at_10_max value: 54.25503534747204 - type: nauc_recall_at_10_std value: -7.492997198879666 - type: nauc_recall_at_1_diff1 value: 82.36966971144186 - type: nauc_recall_at_1_max value: 52.988877039509916 - type: nauc_recall_at_1_std value: -15.145824639495546 - type: nauc_recall_at_20_diff1 value: 75.89091192032362 - type: nauc_recall_at_20_max value: 52.032757547463184 - type: nauc_recall_at_20_std value: -7.84119203236888 - type: nauc_recall_at_3_diff1 value: 78.02560206446354 - type: nauc_recall_at_3_max value: 47.80353641248526 - type: nauc_recall_at_3_std value: -18.181625255723656 - type: nauc_recall_at_5_diff1 value: 75.21583976056185 - type: nauc_recall_at_5_max value: 53.71628103296118 - type: nauc_recall_at_5_std value: -14.411700753360634 - type: ndcg_at_1 value: 78.0 - type: ndcg_at_10 value: 86.615 - type: ndcg_at_100 value: 87.558 - type: ndcg_at_1000 value: 87.613 - type: ndcg_at_20 value: 87.128 - type: ndcg_at_3 value: 83.639 - type: ndcg_at_5 value: 85.475 - type: precision_at_1 value: 78.0 - type: precision_at_10 value: 9.533 - type: precision_at_100 value: 0.996 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.867 - type: precision_at_3 value: 29.148000000000003 - type: precision_at_5 value: 18.378 - type: recall_at_1 value: 78.0 - type: recall_at_10 value: 95.333 - type: recall_at_100 value: 99.556 - type: recall_at_1000 value: 100.0 - type: recall_at_20 value: 97.333 - type: recall_at_3 value: 87.444 - type: recall_at_5 value: 91.889 - task: type: Retrieval dataset: name: MTEB BelebeleRetrieval (eng_Latn-rus_Cyrl) type: facebook/belebele config: eng_Latn-rus_Cyrl split: test revision: 75b399394a9803252cfec289d103de462763db7c metrics: - type: main_score value: 82.748 - type: map_at_1 value: 73.444 - type: map_at_10 value: 79.857 - type: map_at_100 value: 80.219 - type: map_at_1000 value: 80.22500000000001 - type: map_at_20 value: 80.10300000000001 - type: map_at_3 value: 78.593 - type: map_at_5 value: 79.515 - type: mrr_at_1 value: 73.44444444444444 - type: mrr_at_10 value: 79.85705467372136 - type: mrr_at_100 value: 80.21942320422542 - type: mrr_at_1000 value: 80.2245364027152 - type: mrr_at_20 value: 80.10273201266493 - type: mrr_at_3 value: 78.59259259259258 - type: mrr_at_5 value: 79.51481481481483 - type: nauc_map_at_1000_diff1 value: 83.69682652271125 - type: nauc_map_at_1000_max value: 61.70131708044767 - type: nauc_map_at_1000_std value: 9.345825405274955 - type: nauc_map_at_100_diff1 value: 83.68924820523492 - type: nauc_map_at_100_max value: 61.6965735573098 - type: nauc_map_at_100_std value: 9.366132859525775 - type: nauc_map_at_10_diff1 value: 83.61802964269985 - type: nauc_map_at_10_max value: 61.74274476167882 - type: nauc_map_at_10_std value: 9.504060995819101 - type: nauc_map_at_1_diff1 value: 86.37079221403225 - type: nauc_map_at_1_max value: 61.856861655370686 - type: nauc_map_at_1_std value: 4.708911881992707 - type: nauc_map_at_20_diff1 value: 83.62920965453047 - type: nauc_map_at_20_max value: 61.761029350326965 - type: nauc_map_at_20_std value: 9.572978651118351 - type: nauc_map_at_3_diff1 value: 83.66665673154306 - type: nauc_map_at_3_max value: 61.13597610587937 - type: nauc_map_at_3_std value: 9.309596395240598 - type: nauc_map_at_5_diff1 value: 83.52307226455358 - type: nauc_map_at_5_max value: 61.59405758027573 - type: nauc_map_at_5_std value: 9.320025423287671 - type: nauc_mrr_at_1000_diff1 value: 83.69682652271125 - type: nauc_mrr_at_1000_max value: 61.70131708044767 - type: nauc_mrr_at_1000_std value: 9.345825405274955 - type: nauc_mrr_at_100_diff1 value: 83.68924820523492 - type: nauc_mrr_at_100_max value: 61.6965735573098 - type: nauc_mrr_at_100_std value: 9.366132859525775 - type: nauc_mrr_at_10_diff1 value: 83.61802964269985 - type: nauc_mrr_at_10_max value: 61.74274476167882 - type: nauc_mrr_at_10_std value: 9.504060995819101 - type: nauc_mrr_at_1_diff1 value: 86.37079221403225 - type: nauc_mrr_at_1_max value: 61.856861655370686 - type: nauc_mrr_at_1_std value: 4.708911881992707 - type: nauc_mrr_at_20_diff1 value: 83.62920965453047 - type: nauc_mrr_at_20_max value: 61.761029350326965 - type: nauc_mrr_at_20_std value: 9.572978651118351 - type: nauc_mrr_at_3_diff1 value: 83.66665673154306 - type: nauc_mrr_at_3_max value: 61.13597610587937 - type: nauc_mrr_at_3_std value: 9.309596395240598 - type: nauc_mrr_at_5_diff1 value: 83.52307226455358 - type: nauc_mrr_at_5_max value: 61.59405758027573 - type: nauc_mrr_at_5_std value: 9.320025423287671 - type: nauc_ndcg_at_1000_diff1 value: 83.24213186482201 - type: nauc_ndcg_at_1000_max value: 61.77629841787496 - type: nauc_ndcg_at_1000_std value: 10.332527869705851 - type: nauc_ndcg_at_100_diff1 value: 83.06815820441027 - type: nauc_ndcg_at_100_max value: 61.6947181864579 - type: nauc_ndcg_at_100_std value: 10.888922975877316 - type: nauc_ndcg_at_10_diff1 value: 82.58238431386295 - type: nauc_ndcg_at_10_max value: 62.10333663935709 - type: nauc_ndcg_at_10_std value: 11.746030330958174 - type: nauc_ndcg_at_1_diff1 value: 86.37079221403225 - type: nauc_ndcg_at_1_max value: 61.856861655370686 - type: nauc_ndcg_at_1_std value: 4.708911881992707 - type: nauc_ndcg_at_20_diff1 value: 82.67888324480154 - type: nauc_ndcg_at_20_max value: 62.28124917486516 - type: nauc_ndcg_at_20_std value: 12.343058917563914 - type: nauc_ndcg_at_3_diff1 value: 82.71277373710663 - type: nauc_ndcg_at_3_max value: 60.66677922989939 - type: nauc_ndcg_at_3_std value: 10.843633736296528 - type: nauc_ndcg_at_5_diff1 value: 82.34691124846786 - type: nauc_ndcg_at_5_max value: 61.605961382062716 - type: nauc_ndcg_at_5_std value: 11.129011077702602 - type: nauc_precision_at_1000_diff1 value: .nan - type: nauc_precision_at_1000_max value: .nan - type: nauc_precision_at_1000_std value: .nan - type: nauc_precision_at_100_diff1 value: 60.93103908230194 - type: nauc_precision_at_100_max value: 52.621048419370695 - type: nauc_precision_at_100_std value: 85.60090702947922 - type: nauc_precision_at_10_diff1 value: 76.26517273576093 - type: nauc_precision_at_10_max value: 65.2013694366636 - type: nauc_precision_at_10_std value: 26.50357920946173 - type: nauc_precision_at_1_diff1 value: 86.37079221403225 - type: nauc_precision_at_1_max value: 61.856861655370686 - type: nauc_precision_at_1_std value: 4.708911881992707 - type: nauc_precision_at_20_diff1 value: 73.47946930710295 - type: nauc_precision_at_20_max value: 70.19520986689217 - type: nauc_precision_at_20_std value: 45.93186111653967 - type: nauc_precision_at_3_diff1 value: 79.02026879450186 - type: nauc_precision_at_3_max value: 58.75074624692399 - type: nauc_precision_at_3_std value: 16.740684654251037 - type: nauc_precision_at_5_diff1 value: 76.47585662281637 - type: nauc_precision_at_5_max value: 61.86270922013127 - type: nauc_precision_at_5_std value: 20.1833625455035 - type: nauc_recall_at_1000_diff1 value: .nan - type: nauc_recall_at_1000_max value: .nan - type: nauc_recall_at_1000_std value: .nan - type: nauc_recall_at_100_diff1 value: 60.93103908229921 - type: nauc_recall_at_100_max value: 52.62104841936668 - type: nauc_recall_at_100_std value: 85.60090702947748 - type: nauc_recall_at_10_diff1 value: 76.26517273576097 - type: nauc_recall_at_10_max value: 65.20136943666347 - type: nauc_recall_at_10_std value: 26.50357920946174 - type: nauc_recall_at_1_diff1 value: 86.37079221403225 - type: nauc_recall_at_1_max value: 61.856861655370686 - type: nauc_recall_at_1_std value: 4.708911881992707 - type: nauc_recall_at_20_diff1 value: 73.47946930710269 - type: nauc_recall_at_20_max value: 70.19520986689254 - type: nauc_recall_at_20_std value: 45.93186111653943 - type: nauc_recall_at_3_diff1 value: 79.02026879450173 - type: nauc_recall_at_3_max value: 58.750746246923924 - type: nauc_recall_at_3_std value: 16.740684654251076 - type: nauc_recall_at_5_diff1 value: 76.4758566228162 - type: nauc_recall_at_5_max value: 61.862709220131386 - type: nauc_recall_at_5_std value: 20.18336254550361 - type: ndcg_at_1 value: 73.444 - type: ndcg_at_10 value: 82.748 - type: ndcg_at_100 value: 84.416 - type: ndcg_at_1000 value: 84.52300000000001 - type: ndcg_at_20 value: 83.646 - type: ndcg_at_3 value: 80.267 - type: ndcg_at_5 value: 81.922 - type: precision_at_1 value: 73.444 - type: precision_at_10 value: 9.167 - type: precision_at_100 value: 0.992 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.761 - type: precision_at_3 value: 28.37 - type: precision_at_5 value: 17.822 - type: recall_at_1 value: 73.444 - type: recall_at_10 value: 91.667 - type: recall_at_100 value: 99.222 - type: recall_at_1000 value: 100.0 - type: recall_at_20 value: 95.222 - type: recall_at_3 value: 85.111 - type: recall_at_5 value: 89.11099999999999 - task: type: BitextMining dataset: name: MTEB BibleNLPBitextMining (eng_Latn-rus_Cyrl) type: davidstap/biblenlp-corpus-mmteb config: eng_Latn-rus_Cyrl split: train revision: 264a18480c529d9e922483839b4b9758e690b762 metrics: - type: accuracy value: 96.875 - type: f1 value: 95.83333333333333 - type: main_score value: 95.83333333333333 - type: precision value: 95.3125 - type: recall value: 96.875 - task: type: BitextMining dataset: name: MTEB BibleNLPBitextMining (rus_Cyrl-eng_Latn) type: davidstap/biblenlp-corpus-mmteb config: rus_Cyrl-eng_Latn split: train revision: 264a18480c529d9e922483839b4b9758e690b762 metrics: - type: accuracy value: 88.671875 - type: f1 value: 85.3515625 - type: main_score value: 85.3515625 - type: precision value: 83.85416666666667 - type: recall value: 88.671875 - task: type: MultilabelClassification dataset: name: MTEB CEDRClassification (default) type: ai-forever/cedr-classification config: default split: test revision: c0ba03d058e3e1b2f3fd20518875a4563dd12db4 metrics: - type: accuracy value: 40.06907545164719 - type: f1 value: 26.285000550712407 - type: lrap value: 64.4280021253997 - type: main_score value: 40.06907545164719 - task: type: Classification dataset: name: MTEB CyrillicTurkicLangClassification (default) type: tatiana-merz/cyrillic_turkic_langs config: default split: test revision: e42d330f33d65b7b72dfd408883daf1661f06f18 metrics: - type: accuracy value: 43.3447265625 - type: f1 value: 40.08400146827895 - type: f1_weighted value: 40.08499428040896 - type: main_score value: 43.3447265625 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ace_Arab-rus_Cyrl) type: mteb/flores config: ace_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 6.225296442687747 - type: f1 value: 5.5190958860075 - type: main_score value: 5.5190958860075 - type: precision value: 5.3752643758000005 - type: recall value: 6.225296442687747 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (bam_Latn-rus_Cyrl) type: mteb/flores config: bam_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 68.37944664031622 - type: f1 value: 64.54819836666252 - type: main_score value: 64.54819836666252 - type: precision value: 63.07479233454916 - type: recall value: 68.37944664031622 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (dzo_Tibt-rus_Cyrl) type: mteb/flores config: dzo_Tibt-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 0.09881422924901186 - type: f1 value: 0.00019509225912934226 - type: main_score value: 0.00019509225912934226 - type: precision value: 9.76425190207627e-05 - type: recall value: 0.09881422924901186 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (hin_Deva-rus_Cyrl) type: mteb/flores config: hin_Deva-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.60474308300395 - type: f1 value: 99.47299077733861 - type: main_score value: 99.47299077733861 - type: precision value: 99.40711462450594 - type: recall value: 99.60474308300395 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (khm_Khmr-rus_Cyrl) type: mteb/flores config: khm_Khmr-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 88.83399209486166 - type: f1 value: 87.71151056318254 - type: main_score value: 87.71151056318254 - type: precision value: 87.32012500709193 - type: recall value: 88.83399209486166 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (mag_Deva-rus_Cyrl) type: mteb/flores config: mag_Deva-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.02371541501977 - type: f1 value: 97.7239789196311 - type: main_score value: 97.7239789196311 - type: precision value: 97.61904761904762 - type: recall value: 98.02371541501977 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (pap_Latn-rus_Cyrl) type: mteb/flores config: pap_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 94.0711462450593 - type: f1 value: 93.68187806922984 - type: main_score value: 93.68187806922984 - type: precision value: 93.58925452707051 - type: recall value: 94.0711462450593 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (sot_Latn-rus_Cyrl) type: mteb/flores config: sot_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 90.9090909090909 - type: f1 value: 89.23171936758892 - type: main_score value: 89.23171936758892 - type: precision value: 88.51790014083866 - type: recall value: 90.9090909090909 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (tur_Latn-rus_Cyrl) type: mteb/flores config: tur_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.2094861660079 - type: f1 value: 98.9459815546772 - type: main_score value: 98.9459815546772 - type: precision value: 98.81422924901186 - type: recall value: 99.2094861660079 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ace_Latn-rus_Cyrl) type: mteb/flores config: ace_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 66.10671936758892 - type: f1 value: 63.81888256297873 - type: main_score value: 63.81888256297873 - type: precision value: 63.01614067933451 - type: recall value: 66.10671936758892 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ban_Latn-rus_Cyrl) type: mteb/flores config: ban_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 79.44664031620553 - type: f1 value: 77.6311962082713 - type: main_score value: 77.6311962082713 - type: precision value: 76.93977931929739 - type: recall value: 79.44664031620553 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ell_Grek-rus_Cyrl) type: mteb/flores config: ell_Grek-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.40711462450594 - type: f1 value: 99.2094861660079 - type: main_score value: 99.2094861660079 - type: precision value: 99.1106719367589 - type: recall value: 99.40711462450594 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (hne_Deva-rus_Cyrl) type: mteb/flores config: hne_Deva-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 96.83794466403161 - type: f1 value: 96.25352907961603 - type: main_score value: 96.25352907961603 - type: precision value: 96.02155091285526 - type: recall value: 96.83794466403161 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (kik_Latn-rus_Cyrl) type: mteb/flores config: kik_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 76.28458498023716 - type: f1 value: 73.5596919895859 - type: main_score value: 73.5596919895859 - type: precision value: 72.40900759055246 - type: recall value: 76.28458498023716 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (mai_Deva-rus_Cyrl) type: mteb/flores config: mai_Deva-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.72727272727273 - type: f1 value: 97.37812911725956 - type: main_score value: 97.37812911725956 - type: precision value: 97.26002258610953 - type: recall value: 97.72727272727273 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (pbt_Arab-rus_Cyrl) type: mteb/flores config: pbt_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 94.0711462450593 - type: f1 value: 93.34700387331966 - type: main_score value: 93.34700387331966 - type: precision value: 93.06920556920556 - type: recall value: 94.0711462450593 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (spa_Latn-rus_Cyrl) type: mteb/flores config: spa_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.2094861660079 - type: f1 value: 98.9459815546772 - type: main_score value: 98.9459815546772 - type: precision value: 98.81422924901186 - type: recall value: 99.2094861660079 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (twi_Latn-rus_Cyrl) type: mteb/flores config: twi_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 80.73122529644269 - type: f1 value: 77.77434363246721 - type: main_score value: 77.77434363246721 - type: precision value: 76.54444287596462 - type: recall value: 80.73122529644269 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (acm_Arab-rus_Cyrl) type: mteb/flores config: acm_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 94.56521739130434 - type: f1 value: 92.92490118577075 - type: main_score value: 92.92490118577075 - type: precision value: 92.16897233201581 - type: recall value: 94.56521739130434 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (bel_Cyrl-rus_Cyrl) type: mteb/flores config: bel_Cyrl-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.2094861660079 - type: f1 value: 98.98550724637681 - type: main_score value: 98.98550724637681 - type: precision value: 98.88833992094862 - type: recall value: 99.2094861660079 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (eng_Latn-rus_Cyrl) type: mteb/flores config: eng_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.60474308300395 - type: f1 value: 99.4729907773386 - type: main_score value: 99.4729907773386 - type: precision value: 99.40711462450594 - type: recall value: 99.60474308300395 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (hrv_Latn-rus_Cyrl) type: mteb/flores config: hrv_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.2094861660079 - type: f1 value: 99.05138339920948 - type: main_score value: 99.05138339920948 - type: precision value: 99.00691699604744 - type: recall value: 99.2094861660079 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (kin_Latn-rus_Cyrl) type: mteb/flores config: kin_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 88.2411067193676 - type: f1 value: 86.5485246227658 - type: main_score value: 86.5485246227658 - type: precision value: 85.90652101521667 - type: recall value: 88.2411067193676 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (mal_Mlym-rus_Cyrl) type: mteb/flores config: mal_Mlym-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.51778656126481 - type: f1 value: 98.07971014492753 - type: main_score value: 98.07971014492753 - type: precision value: 97.88372859025033 - type: recall value: 98.51778656126481 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (pes_Arab-rus_Cyrl) type: mteb/flores config: pes_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.51778656126481 - type: f1 value: 98.0566534914361 - type: main_score value: 98.0566534914361 - type: precision value: 97.82608695652173 - type: recall value: 98.51778656126481 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (srd_Latn-rus_Cyrl) type: mteb/flores config: srd_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 82.6086956521739 - type: f1 value: 80.9173470979821 - type: main_score value: 80.9173470979821 - type: precision value: 80.24468672882627 - type: recall value: 82.6086956521739 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (tzm_Tfng-rus_Cyrl) type: mteb/flores config: tzm_Tfng-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 7.41106719367589 - type: f1 value: 6.363562740945329 - type: main_score value: 6.363562740945329 - type: precision value: 6.090373175353411 - type: recall value: 7.41106719367589 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (acq_Arab-rus_Cyrl) type: mteb/flores config: acq_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 95.25691699604744 - type: f1 value: 93.81422924901187 - type: main_score value: 93.81422924901187 - type: precision value: 93.14064558629775 - type: recall value: 95.25691699604744 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (bem_Latn-rus_Cyrl) type: mteb/flores config: bem_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 68.08300395256917 - type: f1 value: 65.01368772860867 - type: main_score value: 65.01368772860867 - type: precision value: 63.91052337510628 - type: recall value: 68.08300395256917 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (epo_Latn-rus_Cyrl) type: mteb/flores config: epo_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.41897233201581 - type: f1 value: 98.17193675889328 - type: main_score value: 98.17193675889328 - type: precision value: 98.08210564139418 - type: recall value: 98.41897233201581 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (hun_Latn-rus_Cyrl) type: mteb/flores config: hun_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.30830039525692 - type: f1 value: 99.1106719367589 - type: main_score value: 99.1106719367589 - type: precision value: 99.01185770750988 - type: recall value: 99.30830039525692 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (kir_Cyrl-rus_Cyrl) type: mteb/flores config: kir_Cyrl-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.5296442687747 - type: f1 value: 97.07549806364035 - type: main_score value: 97.07549806364035 - type: precision value: 96.90958498023716 - type: recall value: 97.5296442687747 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (mar_Deva-rus_Cyrl) type: mteb/flores config: mar_Deva-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.82608695652173 - type: f1 value: 97.44400527009222 - type: main_score value: 97.44400527009222 - type: precision value: 97.28966685488425 - type: recall value: 97.82608695652173 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (plt_Latn-rus_Cyrl) type: mteb/flores config: plt_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 79.9407114624506 - type: f1 value: 78.3154177760691 - type: main_score value: 78.3154177760691 - type: precision value: 77.69877344877344 - type: recall value: 79.9407114624506 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (srp_Cyrl-rus_Cyrl) type: mteb/flores config: srp_Cyrl-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.70355731225297 - type: f1 value: 99.60474308300395 - type: main_score value: 99.60474308300395 - type: precision value: 99.55533596837944 - type: recall value: 99.70355731225297 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (uig_Arab-rus_Cyrl) type: mteb/flores config: uig_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 83.20158102766798 - type: f1 value: 81.44381923034585 - type: main_score value: 81.44381923034585 - type: precision value: 80.78813411582477 - type: recall value: 83.20158102766798 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (aeb_Arab-rus_Cyrl) type: mteb/flores config: aeb_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 91.20553359683794 - type: f1 value: 88.75352907961603 - type: main_score value: 88.75352907961603 - type: precision value: 87.64328063241106 - type: recall value: 91.20553359683794 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ben_Beng-rus_Cyrl) type: mteb/flores config: ben_Beng-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.91304347826086 - type: f1 value: 98.60671936758894 - type: main_score value: 98.60671936758894 - type: precision value: 98.4766139657444 - type: recall value: 98.91304347826086 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (est_Latn-rus_Cyrl) type: mteb/flores config: est_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 96.24505928853755 - type: f1 value: 95.27417027417027 - type: main_score value: 95.27417027417027 - type: precision value: 94.84107378129117 - type: recall value: 96.24505928853755 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (hye_Armn-rus_Cyrl) type: mteb/flores config: hye_Armn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.02371541501977 - type: f1 value: 97.67786561264822 - type: main_score value: 97.67786561264822 - type: precision value: 97.55839022637441 - type: recall value: 98.02371541501977 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (kmb_Latn-rus_Cyrl) type: mteb/flores config: kmb_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 46.047430830039524 - type: f1 value: 42.94464804804471 - type: main_score value: 42.94464804804471 - type: precision value: 41.9851895607238 - type: recall value: 46.047430830039524 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (min_Arab-rus_Cyrl) type: mteb/flores config: min_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 3.9525691699604746 - type: f1 value: 3.402665192725756 - type: main_score value: 3.402665192725756 - type: precision value: 3.303787557740127 - type: recall value: 3.9525691699604746 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (pol_Latn-rus_Cyrl) type: mteb/flores config: pol_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.60474308300395 - type: f1 value: 99.4729907773386 - type: main_score value: 99.4729907773386 - type: precision value: 99.40711462450594 - type: recall value: 99.60474308300395 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ssw_Latn-rus_Cyrl) type: mteb/flores config: ssw_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 73.22134387351778 - type: f1 value: 70.43086049508975 - type: main_score value: 70.43086049508975 - type: precision value: 69.35312022355656 - type: recall value: 73.22134387351778 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ukr_Cyrl-rus_Cyrl) type: mteb/flores config: ukr_Cyrl-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.90118577075098 - type: f1 value: 99.86824769433464 - type: main_score value: 99.86824769433464 - type: precision value: 99.85177865612648 - type: recall value: 99.90118577075098 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (afr_Latn-rus_Cyrl) type: mteb/flores config: afr_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.2094861660079 - type: f1 value: 98.9459815546772 - type: main_score value: 98.9459815546772 - type: precision value: 98.81422924901186 - type: recall value: 99.2094861660079 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (bho_Deva-rus_Cyrl) type: mteb/flores config: bho_Deva-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 94.0711462450593 - type: f1 value: 93.12182382834557 - type: main_score value: 93.12182382834557 - type: precision value: 92.7523453232338 - type: recall value: 94.0711462450593 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (eus_Latn-rus_Cyrl) type: mteb/flores config: eus_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 92.19367588932806 - type: f1 value: 91.23604975587072 - type: main_score value: 91.23604975587072 - type: precision value: 90.86697443588663 - type: recall value: 92.19367588932806 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ibo_Latn-rus_Cyrl) type: mteb/flores config: ibo_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 82.21343873517787 - type: f1 value: 80.17901604858126 - type: main_score value: 80.17901604858126 - type: precision value: 79.3792284780028 - type: recall value: 82.21343873517787 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (kmr_Latn-rus_Cyrl) type: mteb/flores config: kmr_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 68.67588932806325 - type: f1 value: 66.72311714750278 - type: main_score value: 66.72311714750278 - type: precision value: 66.00178401554004 - type: recall value: 68.67588932806325 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (min_Latn-rus_Cyrl) type: mteb/flores config: min_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 78.65612648221344 - type: f1 value: 76.26592719972166 - type: main_score value: 76.26592719972166 - type: precision value: 75.39980459997484 - type: recall value: 78.65612648221344 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (por_Latn-rus_Cyrl) type: mteb/flores config: por_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 96.83794466403161 - type: f1 value: 95.9669678147939 - type: main_score value: 95.9669678147939 - type: precision value: 95.59453227931488 - type: recall value: 96.83794466403161 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (sun_Latn-rus_Cyrl) type: mteb/flores config: sun_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 92.4901185770751 - type: f1 value: 91.66553983773662 - type: main_score value: 91.66553983773662 - type: precision value: 91.34530928009188 - type: recall value: 92.4901185770751 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (umb_Latn-rus_Cyrl) type: mteb/flores config: umb_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 41.00790513833992 - type: f1 value: 38.21319326004483 - type: main_score value: 38.21319326004483 - type: precision value: 37.200655467675546 - type: recall value: 41.00790513833992 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ajp_Arab-rus_Cyrl) type: mteb/flores config: ajp_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 95.35573122529645 - type: f1 value: 93.97233201581028 - type: main_score value: 93.97233201581028 - type: precision value: 93.33333333333333 - type: recall value: 95.35573122529645 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (bjn_Arab-rus_Cyrl) type: mteb/flores config: bjn_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 3.6561264822134385 - type: f1 value: 3.1071978056336484 - type: main_score value: 3.1071978056336484 - type: precision value: 3.0039741229718215 - type: recall value: 3.6561264822134385 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ewe_Latn-rus_Cyrl) type: mteb/flores config: ewe_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 62.845849802371546 - type: f1 value: 59.82201175670472 - type: main_score value: 59.82201175670472 - type: precision value: 58.72629236362003 - type: recall value: 62.845849802371546 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ilo_Latn-rus_Cyrl) type: mteb/flores config: ilo_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 83.10276679841897 - type: f1 value: 80.75065288987582 - type: main_score value: 80.75065288987582 - type: precision value: 79.80726451662179 - type: recall value: 83.10276679841897 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (knc_Arab-rus_Cyrl) type: mteb/flores config: knc_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 10.079051383399209 - type: f1 value: 8.759282456080921 - type: main_score value: 8.759282456080921 - type: precision value: 8.474735138956142 - type: recall value: 10.079051383399209 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (mkd_Cyrl-rus_Cyrl) type: mteb/flores config: mkd_Cyrl-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.91304347826086 - type: f1 value: 98.55072463768116 - type: main_score value: 98.55072463768116 - type: precision value: 98.36956521739131 - type: recall value: 98.91304347826086 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (prs_Arab-rus_Cyrl) type: mteb/flores config: prs_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.01185770750988 - type: f1 value: 98.68247694334651 - type: main_score value: 98.68247694334651 - type: precision value: 98.51778656126481 - type: recall value: 99.01185770750988 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (swe_Latn-rus_Cyrl) type: mteb/flores config: swe_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.40711462450594 - type: f1 value: 99.22595520421606 - type: main_score value: 99.22595520421606 - type: precision value: 99.14361001317523 - type: recall value: 99.40711462450594 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (urd_Arab-rus_Cyrl) type: mteb/flores config: urd_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.82608695652173 - type: f1 value: 97.25625823451911 - type: main_score value: 97.25625823451911 - type: precision value: 97.03063241106719 - type: recall value: 97.82608695652173 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (aka_Latn-rus_Cyrl) type: mteb/flores config: aka_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 81.22529644268775 - type: f1 value: 77.94307687941227 - type: main_score value: 77.94307687941227 - type: precision value: 76.58782793293665 - type: recall value: 81.22529644268775 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (bjn_Latn-rus_Cyrl) type: mteb/flores config: bjn_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 85.27667984189723 - type: f1 value: 83.6869192829922 - type: main_score value: 83.6869192829922 - type: precision value: 83.08670670691656 - type: recall value: 85.27667984189723 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (fao_Latn-rus_Cyrl) type: mteb/flores config: fao_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 80.9288537549407 - type: f1 value: 79.29806087454745 - type: main_score value: 79.29806087454745 - type: precision value: 78.71445871526987 - type: recall value: 80.9288537549407 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ind_Latn-rus_Cyrl) type: mteb/flores config: ind_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.12252964426878 - type: f1 value: 97.5296442687747 - type: main_score value: 97.5296442687747 - type: precision value: 97.23320158102767 - type: recall value: 98.12252964426878 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (knc_Latn-rus_Cyrl) type: mteb/flores config: knc_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 33.49802371541502 - type: f1 value: 32.02378215033989 - type: main_score value: 32.02378215033989 - type: precision value: 31.511356103747406 - type: recall value: 33.49802371541502 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (mlt_Latn-rus_Cyrl) type: mteb/flores config: mlt_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 91.40316205533597 - type: f1 value: 90.35317684386006 - type: main_score value: 90.35317684386006 - type: precision value: 89.94845939633488 - type: recall value: 91.40316205533597 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (quy_Latn-rus_Cyrl) type: mteb/flores config: quy_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 40.612648221343875 - type: f1 value: 38.74337544712602 - type: main_score value: 38.74337544712602 - type: precision value: 38.133716022178575 - type: recall value: 40.612648221343875 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (swh_Latn-rus_Cyrl) type: mteb/flores config: swh_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.13438735177866 - type: f1 value: 96.47435897435898 - type: main_score value: 96.47435897435898 - type: precision value: 96.18741765480895 - type: recall value: 97.13438735177866 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (uzn_Latn-rus_Cyrl) type: mteb/flores config: uzn_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 96.83794466403161 - type: f1 value: 96.26355528529442 - type: main_score value: 96.26355528529442 - type: precision value: 96.0501756697409 - type: recall value: 96.83794466403161 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (als_Latn-rus_Cyrl) type: mteb/flores config: als_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.91304347826086 - type: f1 value: 98.6907114624506 - type: main_score value: 98.6907114624506 - type: precision value: 98.6142480707698 - type: recall value: 98.91304347826086 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (bod_Tibt-rus_Cyrl) type: mteb/flores config: bod_Tibt-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 1.0869565217391304 - type: f1 value: 0.9224649610442628 - type: main_score value: 0.9224649610442628 - type: precision value: 0.8894275740459898 - type: recall value: 1.0869565217391304 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (fij_Latn-rus_Cyrl) type: mteb/flores config: fij_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 63.24110671936759 - type: f1 value: 60.373189068189525 - type: main_score value: 60.373189068189525 - type: precision value: 59.32326368115546 - type: recall value: 63.24110671936759 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (isl_Latn-rus_Cyrl) type: mteb/flores config: isl_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 89.03162055335969 - type: f1 value: 87.3102634715907 - type: main_score value: 87.3102634715907 - type: precision value: 86.65991814698712 - type: recall value: 89.03162055335969 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (kon_Latn-rus_Cyrl) type: mteb/flores config: kon_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 73.91304347826086 - type: f1 value: 71.518235523573 - type: main_score value: 71.518235523573 - type: precision value: 70.58714102449801 - type: recall value: 73.91304347826086 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (mni_Beng-rus_Cyrl) type: mteb/flores config: mni_Beng-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 29.545454545454547 - type: f1 value: 27.59513619889114 - type: main_score value: 27.59513619889114 - type: precision value: 26.983849851025344 - type: recall value: 29.545454545454547 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ron_Latn-rus_Cyrl) type: mteb/flores config: ron_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.40711462450594 - type: f1 value: 99.2094861660079 - type: main_score value: 99.2094861660079 - type: precision value: 99.1106719367589 - type: recall value: 99.40711462450594 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (szl_Latn-rus_Cyrl) type: mteb/flores config: szl_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 86.26482213438736 - type: f1 value: 85.18912031587512 - type: main_score value: 85.18912031587512 - type: precision value: 84.77199409959775 - type: recall value: 86.26482213438736 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (vec_Latn-rus_Cyrl) type: mteb/flores config: vec_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 85.67193675889328 - type: f1 value: 84.62529734716581 - type: main_score value: 84.62529734716581 - type: precision value: 84.2611422440705 - type: recall value: 85.67193675889328 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (amh_Ethi-rus_Cyrl) type: mteb/flores config: amh_Ethi-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 94.76284584980237 - type: f1 value: 93.91735076517685 - type: main_score value: 93.91735076517685 - type: precision value: 93.57553798858147 - type: recall value: 94.76284584980237 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (bos_Latn-rus_Cyrl) type: mteb/flores config: bos_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.2094861660079 - type: f1 value: 99.05655938264634 - type: main_score value: 99.05655938264634 - type: precision value: 99.01185770750988 - type: recall value: 99.2094861660079 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (fin_Latn-rus_Cyrl) type: mteb/flores config: fin_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.02371541501977 - type: f1 value: 97.43741765480895 - type: main_score value: 97.43741765480895 - type: precision value: 97.1590909090909 - type: recall value: 98.02371541501977 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ita_Latn-rus_Cyrl) type: mteb/flores config: ita_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.70355731225297 - type: f1 value: 99.60474308300395 - type: main_score value: 99.60474308300395 - type: precision value: 99.55533596837944 - type: recall value: 99.70355731225297 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (kor_Hang-rus_Cyrl) type: mteb/flores config: kor_Hang-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.33201581027669 - type: f1 value: 96.49868247694334 - type: main_score value: 96.49868247694334 - type: precision value: 96.10507246376811 - type: recall value: 97.33201581027669 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (mos_Latn-rus_Cyrl) type: mteb/flores config: mos_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 34.683794466403164 - type: f1 value: 32.766819308009076 - type: main_score value: 32.766819308009076 - type: precision value: 32.1637493670237 - type: recall value: 34.683794466403164 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (run_Latn-rus_Cyrl) type: mteb/flores config: run_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 83.399209486166 - type: f1 value: 81.10578750604326 - type: main_score value: 81.10578750604326 - type: precision value: 80.16763162673529 - type: recall value: 83.399209486166 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (tam_Taml-rus_Cyrl) type: mteb/flores config: tam_Taml-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.41897233201581 - type: f1 value: 98.01548089591567 - type: main_score value: 98.01548089591567 - type: precision value: 97.84020327498588 - type: recall value: 98.41897233201581 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (vie_Latn-rus_Cyrl) type: mteb/flores config: vie_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.1106719367589 - type: f1 value: 98.81422924901186 - type: main_score value: 98.81422924901186 - type: precision value: 98.66600790513834 - type: recall value: 99.1106719367589 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (apc_Arab-rus_Cyrl) type: mteb/flores config: apc_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 93.87351778656127 - type: f1 value: 92.10803689064558 - type: main_score value: 92.10803689064558 - type: precision value: 91.30434782608695 - type: recall value: 93.87351778656127 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (bug_Latn-rus_Cyrl) type: mteb/flores config: bug_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 57.608695652173914 - type: f1 value: 54.95878654927162 - type: main_score value: 54.95878654927162 - type: precision value: 54.067987427805654 - type: recall value: 57.608695652173914 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (fon_Latn-rus_Cyrl) type: mteb/flores config: fon_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 61.95652173913043 - type: f1 value: 58.06537275812945 - type: main_score value: 58.06537275812945 - type: precision value: 56.554057596959204 - type: recall value: 61.95652173913043 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (jav_Latn-rus_Cyrl) type: mteb/flores config: jav_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 93.47826086956522 - type: f1 value: 92.4784405318002 - type: main_score value: 92.4784405318002 - type: precision value: 92.09168143201127 - type: recall value: 93.47826086956522 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (lao_Laoo-rus_Cyrl) type: mteb/flores config: lao_Laoo-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 91.10671936758892 - type: f1 value: 89.76104922745239 - type: main_score value: 89.76104922745239 - type: precision value: 89.24754593232855 - type: recall value: 91.10671936758892 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (mri_Latn-rus_Cyrl) type: mteb/flores config: mri_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 71.14624505928853 - type: f1 value: 68.26947125119062 - type: main_score value: 68.26947125119062 - type: precision value: 67.15942311051006 - type: recall value: 71.14624505928853 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ace_Arab) type: mteb/flores config: rus_Cyrl-ace_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 19.565217391304348 - type: f1 value: 16.321465000323805 - type: main_score value: 16.321465000323805 - type: precision value: 15.478527409347508 - type: recall value: 19.565217391304348 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-bam_Latn) type: mteb/flores config: rus_Cyrl-bam_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 73.41897233201581 - type: f1 value: 68.77366228182746 - type: main_score value: 68.77366228182746 - type: precision value: 66.96012924273795 - type: recall value: 73.41897233201581 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-dzo_Tibt) type: mteb/flores config: rus_Cyrl-dzo_Tibt split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 0.592885375494071 - type: f1 value: 0.02458062426370458 - type: main_score value: 0.02458062426370458 - type: precision value: 0.012824114724683876 - type: recall value: 0.592885375494071 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-hin_Deva) type: mteb/flores config: rus_Cyrl-hin_Deva split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.90118577075098 - type: f1 value: 99.86824769433464 - type: main_score value: 99.86824769433464 - type: precision value: 99.85177865612648 - type: recall value: 99.90118577075098 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-khm_Khmr) type: mteb/flores config: rus_Cyrl-khm_Khmr split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.13438735177866 - type: f1 value: 96.24505928853755 - type: main_score value: 96.24505928853755 - type: precision value: 95.81686429512516 - type: recall value: 97.13438735177866 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-mag_Deva) type: mteb/flores config: rus_Cyrl-mag_Deva split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.50592885375494 - type: f1 value: 99.35770750988142 - type: main_score value: 99.35770750988142 - type: precision value: 99.29183135704875 - type: recall value: 99.50592885375494 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-pap_Latn) type: mteb/flores config: rus_Cyrl-pap_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 96.93675889328063 - type: f1 value: 96.05072463768116 - type: main_score value: 96.05072463768116 - type: precision value: 95.66040843214758 - type: recall value: 96.93675889328063 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-sot_Latn) type: mteb/flores config: rus_Cyrl-sot_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 93.67588932806325 - type: f1 value: 91.7786561264822 - type: main_score value: 91.7786561264822 - type: precision value: 90.91238471673255 - type: recall value: 93.67588932806325 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-tur_Latn) type: mteb/flores config: rus_Cyrl-tur_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.01185770750988 - type: f1 value: 98.68247694334651 - type: main_score value: 98.68247694334651 - type: precision value: 98.51778656126481 - type: recall value: 99.01185770750988 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ace_Latn) type: mteb/flores config: rus_Cyrl-ace_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 74.1106719367589 - type: f1 value: 70.21737923911836 - type: main_score value: 70.21737923911836 - type: precision value: 68.7068791410511 - type: recall value: 74.1106719367589 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ban_Latn) type: mteb/flores config: rus_Cyrl-ban_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 81.7193675889328 - type: f1 value: 78.76470334510617 - type: main_score value: 78.76470334510617 - type: precision value: 77.76208475761422 - type: recall value: 81.7193675889328 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ell_Grek) type: mteb/flores config: rus_Cyrl-ell_Grek split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.3201581027668 - type: f1 value: 97.76021080368908 - type: main_score value: 97.76021080368908 - type: precision value: 97.48023715415019 - type: recall value: 98.3201581027668 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-hne_Deva) type: mteb/flores config: rus_Cyrl-hne_Deva split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.51778656126481 - type: f1 value: 98.0566534914361 - type: main_score value: 98.0566534914361 - type: precision value: 97.82608695652173 - type: recall value: 98.51778656126481 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-kik_Latn) type: mteb/flores config: rus_Cyrl-kik_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 80.73122529644269 - type: f1 value: 76.42689244220864 - type: main_score value: 76.42689244220864 - type: precision value: 74.63877909530083 - type: recall value: 80.73122529644269 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-mai_Deva) type: mteb/flores config: rus_Cyrl-mai_Deva split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.91304347826086 - type: f1 value: 98.56719367588933 - type: main_score value: 98.56719367588933 - type: precision value: 98.40250329380763 - type: recall value: 98.91304347826086 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-pbt_Arab) type: mteb/flores config: rus_Cyrl-pbt_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.5296442687747 - type: f1 value: 96.73913043478261 - type: main_score value: 96.73913043478261 - type: precision value: 96.36034255599473 - type: recall value: 97.5296442687747 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-spa_Latn) type: mteb/flores config: rus_Cyrl-spa_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.40711462450594 - type: f1 value: 99.20948616600789 - type: main_score value: 99.20948616600789 - type: precision value: 99.1106719367589 - type: recall value: 99.40711462450594 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-twi_Latn) type: mteb/flores config: rus_Cyrl-twi_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 82.01581027667984 - type: f1 value: 78.064787822953 - type: main_score value: 78.064787822953 - type: precision value: 76.43272186750448 - type: recall value: 82.01581027667984 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-acm_Arab) type: mteb/flores config: rus_Cyrl-acm_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.3201581027668 - type: f1 value: 97.76021080368908 - type: main_score value: 97.76021080368908 - type: precision value: 97.48023715415019 - type: recall value: 98.3201581027668 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-bel_Cyrl) type: mteb/flores config: rus_Cyrl-bel_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.22134387351778 - type: f1 value: 97.67786561264822 - type: main_score value: 97.67786561264822 - type: precision value: 97.4308300395257 - type: recall value: 98.22134387351778 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-eng_Latn) type: mteb/flores config: rus_Cyrl-eng_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.70355731225297 - type: f1 value: 99.60474308300395 - type: main_score value: 99.60474308300395 - type: precision value: 99.55533596837944 - type: recall value: 99.70355731225297 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-hrv_Latn) type: mteb/flores config: rus_Cyrl-hrv_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.1106719367589 - type: f1 value: 98.83069828722002 - type: main_score value: 98.83069828722002 - type: precision value: 98.69894598155466 - type: recall value: 99.1106719367589 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-kin_Latn) type: mteb/flores config: rus_Cyrl-kin_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 93.37944664031622 - type: f1 value: 91.53162055335969 - type: main_score value: 91.53162055335969 - type: precision value: 90.71475625823452 - type: recall value: 93.37944664031622 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-mal_Mlym) type: mteb/flores config: rus_Cyrl-mal_Mlym split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.30830039525692 - type: f1 value: 99.07773386034255 - type: main_score value: 99.07773386034255 - type: precision value: 98.96245059288538 - type: recall value: 99.30830039525692 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-pes_Arab) type: mteb/flores config: rus_Cyrl-pes_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.71541501976284 - type: f1 value: 98.30368906455863 - type: main_score value: 98.30368906455863 - type: precision value: 98.10606060606061 - type: recall value: 98.71541501976284 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-srd_Latn) type: mteb/flores config: rus_Cyrl-srd_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 89.03162055335969 - type: f1 value: 86.11048371917937 - type: main_score value: 86.11048371917937 - type: precision value: 84.86001317523056 - type: recall value: 89.03162055335969 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-tzm_Tfng) type: mteb/flores config: rus_Cyrl-tzm_Tfng split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 12.351778656126482 - type: f1 value: 10.112177999067715 - type: main_score value: 10.112177999067715 - type: precision value: 9.53495885438645 - type: recall value: 12.351778656126482 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-acq_Arab) type: mteb/flores config: rus_Cyrl-acq_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.91304347826086 - type: f1 value: 98.55072463768116 - type: main_score value: 98.55072463768116 - type: precision value: 98.36956521739131 - type: recall value: 98.91304347826086 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-bem_Latn) type: mteb/flores config: rus_Cyrl-bem_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 73.22134387351778 - type: f1 value: 68.30479412989295 - type: main_score value: 68.30479412989295 - type: precision value: 66.40073447632736 - type: recall value: 73.22134387351778 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-epo_Latn) type: mteb/flores config: rus_Cyrl-epo_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.1106719367589 - type: f1 value: 98.81422924901186 - type: main_score value: 98.81422924901186 - type: precision value: 98.66600790513834 - type: recall value: 99.1106719367589 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-hun_Latn) type: mteb/flores config: rus_Cyrl-hun_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 96.83794466403161 - type: f1 value: 95.88274044795784 - type: main_score value: 95.88274044795784 - type: precision value: 95.45454545454545 - type: recall value: 96.83794466403161 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-kir_Cyrl) type: mteb/flores config: rus_Cyrl-kir_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 96.34387351778656 - type: f1 value: 95.49280429715212 - type: main_score value: 95.49280429715212 - type: precision value: 95.14163372859026 - type: recall value: 96.34387351778656 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-mar_Deva) type: mteb/flores config: rus_Cyrl-mar_Deva split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.71541501976284 - type: f1 value: 98.28722002635047 - type: main_score value: 98.28722002635047 - type: precision value: 98.07312252964427 - type: recall value: 98.71541501976284 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-plt_Latn) type: mteb/flores config: rus_Cyrl-plt_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 88.04347826086956 - type: f1 value: 85.14328063241106 - type: main_score value: 85.14328063241106 - type: precision value: 83.96339168078298 - type: recall value: 88.04347826086956 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-srp_Cyrl) type: mteb/flores config: rus_Cyrl-srp_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.40711462450594 - type: f1 value: 99.2094861660079 - type: main_score value: 99.2094861660079 - type: precision value: 99.1106719367589 - type: recall value: 99.40711462450594 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-uig_Arab) type: mteb/flores config: rus_Cyrl-uig_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 92.19367588932806 - type: f1 value: 89.98541313758706 - type: main_score value: 89.98541313758706 - type: precision value: 89.01021080368906 - type: recall value: 92.19367588932806 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-aeb_Arab) type: mteb/flores config: rus_Cyrl-aeb_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 95.8498023715415 - type: f1 value: 94.63109354413703 - type: main_score value: 94.63109354413703 - type: precision value: 94.05467720685111 - type: recall value: 95.8498023715415 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ben_Beng) type: mteb/flores config: rus_Cyrl-ben_Beng split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.40711462450594 - type: f1 value: 99.2094861660079 - type: main_score value: 99.2094861660079 - type: precision value: 99.1106719367589 - type: recall value: 99.40711462450594 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-est_Latn) type: mteb/flores config: rus_Cyrl-est_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 95.55335968379447 - type: f1 value: 94.2588932806324 - type: main_score value: 94.2588932806324 - type: precision value: 93.65118577075098 - type: recall value: 95.55335968379447 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-hye_Armn) type: mteb/flores config: rus_Cyrl-hye_Armn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.71541501976284 - type: f1 value: 98.28722002635045 - type: main_score value: 98.28722002635045 - type: precision value: 98.07312252964427 - type: recall value: 98.71541501976284 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-kmb_Latn) type: mteb/flores config: rus_Cyrl-kmb_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 54.24901185770751 - type: f1 value: 49.46146674116913 - type: main_score value: 49.46146674116913 - type: precision value: 47.81033799314432 - type: recall value: 54.24901185770751 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-min_Arab) type: mteb/flores config: rus_Cyrl-min_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 15.810276679841898 - type: f1 value: 13.271207641419332 - type: main_score value: 13.271207641419332 - type: precision value: 12.510673148766033 - type: recall value: 15.810276679841898 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-pol_Latn) type: mteb/flores config: rus_Cyrl-pol_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.71541501976284 - type: f1 value: 98.32674571805006 - type: main_score value: 98.32674571805006 - type: precision value: 98.14723320158103 - type: recall value: 98.71541501976284 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ssw_Latn) type: mteb/flores config: rus_Cyrl-ssw_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 80.8300395256917 - type: f1 value: 76.51717847370023 - type: main_score value: 76.51717847370023 - type: precision value: 74.74143610013175 - type: recall value: 80.8300395256917 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ukr_Cyrl) type: mteb/flores config: rus_Cyrl-ukr_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.60474308300395 - type: f1 value: 99.4729907773386 - type: main_score value: 99.4729907773386 - type: precision value: 99.40711462450594 - type: recall value: 99.60474308300395 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-afr_Latn) type: mteb/flores config: rus_Cyrl-afr_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.1106719367589 - type: f1 value: 98.81422924901186 - type: main_score value: 98.81422924901186 - type: precision value: 98.66600790513834 - type: recall value: 99.1106719367589 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-bho_Deva) type: mteb/flores config: rus_Cyrl-bho_Deva split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 96.6403162055336 - type: f1 value: 95.56982872200265 - type: main_score value: 95.56982872200265 - type: precision value: 95.0592885375494 - type: recall value: 96.6403162055336 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-eus_Latn) type: mteb/flores config: rus_Cyrl-eus_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.62845849802372 - type: f1 value: 96.9038208168643 - type: main_score value: 96.9038208168643 - type: precision value: 96.55797101449275 - type: recall value: 97.62845849802372 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ibo_Latn) type: mteb/flores config: rus_Cyrl-ibo_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 89.2292490118577 - type: f1 value: 86.35234330886506 - type: main_score value: 86.35234330886506 - type: precision value: 85.09881422924902 - type: recall value: 89.2292490118577 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-kmr_Latn) type: mteb/flores config: rus_Cyrl-kmr_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 83.49802371541502 - type: f1 value: 79.23630717108978 - type: main_score value: 79.23630717108978 - type: precision value: 77.48188405797102 - type: recall value: 83.49802371541502 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-min_Latn) type: mteb/flores config: rus_Cyrl-min_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 79.34782608695652 - type: f1 value: 75.31689928429059 - type: main_score value: 75.31689928429059 - type: precision value: 73.91519410541149 - type: recall value: 79.34782608695652 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-por_Latn) type: mteb/flores config: rus_Cyrl-por_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 96.54150197628458 - type: f1 value: 95.53218520609825 - type: main_score value: 95.53218520609825 - type: precision value: 95.07575757575756 - type: recall value: 96.54150197628458 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-sun_Latn) type: mteb/flores config: rus_Cyrl-sun_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 93.2806324110672 - type: f1 value: 91.56973461321287 - type: main_score value: 91.56973461321287 - type: precision value: 90.84396334890405 - type: recall value: 93.2806324110672 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-umb_Latn) type: mteb/flores config: rus_Cyrl-umb_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 51.87747035573123 - type: f1 value: 46.36591778884269 - type: main_score value: 46.36591778884269 - type: precision value: 44.57730391234227 - type: recall value: 51.87747035573123 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ajp_Arab) type: mteb/flores config: rus_Cyrl-ajp_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.71541501976284 - type: f1 value: 98.30368906455863 - type: main_score value: 98.30368906455863 - type: precision value: 98.10606060606061 - type: recall value: 98.71541501976284 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-bjn_Arab) type: mteb/flores config: rus_Cyrl-bjn_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 14.82213438735178 - type: f1 value: 12.365434276616856 - type: main_score value: 12.365434276616856 - type: precision value: 11.802079517180589 - type: recall value: 14.82213438735178 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ewe_Latn) type: mteb/flores config: rus_Cyrl-ewe_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 71.44268774703558 - type: f1 value: 66.74603174603175 - type: main_score value: 66.74603174603175 - type: precision value: 64.99933339607253 - type: recall value: 71.44268774703558 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ilo_Latn) type: mteb/flores config: rus_Cyrl-ilo_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 85.86956521739131 - type: f1 value: 83.00139015960917 - type: main_score value: 83.00139015960917 - type: precision value: 81.91411396574439 - type: recall value: 85.86956521739131 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-knc_Arab) type: mteb/flores config: rus_Cyrl-knc_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 14.525691699604742 - type: f1 value: 12.618283715726806 - type: main_score value: 12.618283715726806 - type: precision value: 12.048458493742352 - type: recall value: 14.525691699604742 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-mkd_Cyrl) type: mteb/flores config: rus_Cyrl-mkd_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.40711462450594 - type: f1 value: 99.22595520421606 - type: main_score value: 99.22595520421606 - type: precision value: 99.14361001317523 - type: recall value: 99.40711462450594 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-prs_Arab) type: mteb/flores config: rus_Cyrl-prs_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.30830039525692 - type: f1 value: 99.07773386034255 - type: main_score value: 99.07773386034255 - type: precision value: 98.96245059288538 - type: recall value: 99.30830039525692 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-swe_Latn) type: mteb/flores config: rus_Cyrl-swe_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.30830039525692 - type: f1 value: 99.07773386034256 - type: main_score value: 99.07773386034256 - type: precision value: 98.96245059288538 - type: recall value: 99.30830039525692 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-urd_Arab) type: mteb/flores config: rus_Cyrl-urd_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.61660079051383 - type: f1 value: 98.15546772068511 - type: main_score value: 98.15546772068511 - type: precision value: 97.92490118577075 - type: recall value: 98.61660079051383 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-aka_Latn) type: mteb/flores config: rus_Cyrl-aka_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 81.02766798418972 - type: f1 value: 76.73277809147375 - type: main_score value: 76.73277809147375 - type: precision value: 74.97404165882426 - type: recall value: 81.02766798418972 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-bjn_Latn) type: mteb/flores config: rus_Cyrl-bjn_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 86.7588932806324 - type: f1 value: 83.92064566965753 - type: main_score value: 83.92064566965753 - type: precision value: 82.83734079929732 - type: recall value: 86.7588932806324 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-fao_Latn) type: mteb/flores config: rus_Cyrl-fao_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 88.43873517786561 - type: f1 value: 85.48136645962732 - type: main_score value: 85.48136645962732 - type: precision value: 84.23418972332016 - type: recall value: 88.43873517786561 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ind_Latn) type: mteb/flores config: rus_Cyrl-ind_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.01185770750988 - type: f1 value: 98.68247694334651 - type: main_score value: 98.68247694334651 - type: precision value: 98.51778656126481 - type: recall value: 99.01185770750988 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-knc_Latn) type: mteb/flores config: rus_Cyrl-knc_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 45.8498023715415 - type: f1 value: 40.112030865489366 - type: main_score value: 40.112030865489366 - type: precision value: 38.28262440050776 - type: recall value: 45.8498023715415 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-mlt_Latn) type: mteb/flores config: rus_Cyrl-mlt_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 93.18181818181817 - type: f1 value: 91.30787690570298 - type: main_score value: 91.30787690570298 - type: precision value: 90.4983060417843 - type: recall value: 93.18181818181817 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-quy_Latn) type: mteb/flores config: rus_Cyrl-quy_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 62.450592885375485 - type: f1 value: 57.28742975628178 - type: main_score value: 57.28742975628178 - type: precision value: 55.56854987623269 - type: recall value: 62.450592885375485 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-swh_Latn) type: mteb/flores config: rus_Cyrl-swh_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.3201581027668 - type: f1 value: 97.77667984189723 - type: main_score value: 97.77667984189723 - type: precision value: 97.51317523056655 - type: recall value: 98.3201581027668 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-uzn_Latn) type: mteb/flores config: rus_Cyrl-uzn_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.12252964426878 - type: f1 value: 97.59081498211933 - type: main_score value: 97.59081498211933 - type: precision value: 97.34848484848484 - type: recall value: 98.12252964426878 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-als_Latn) type: mteb/flores config: rus_Cyrl-als_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.30830039525692 - type: f1 value: 99.09420289855073 - type: main_score value: 99.09420289855073 - type: precision value: 98.99538866930172 - type: recall value: 99.30830039525692 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-bod_Tibt) type: mteb/flores config: rus_Cyrl-bod_Tibt split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 11.561264822134387 - type: f1 value: 8.121312045385636 - type: main_score value: 8.121312045385636 - type: precision value: 7.350577020893972 - type: recall value: 11.561264822134387 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-fij_Latn) type: mteb/flores config: rus_Cyrl-fij_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 72.23320158102767 - type: f1 value: 67.21000233846082 - type: main_score value: 67.21000233846082 - type: precision value: 65.3869439739005 - type: recall value: 72.23320158102767 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-isl_Latn) type: mteb/flores config: rus_Cyrl-isl_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 91.99604743083005 - type: f1 value: 89.75955204216073 - type: main_score value: 89.75955204216073 - type: precision value: 88.7598814229249 - type: recall value: 91.99604743083005 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-kon_Latn) type: mteb/flores config: rus_Cyrl-kon_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 81.81818181818183 - type: f1 value: 77.77800098452272 - type: main_score value: 77.77800098452272 - type: precision value: 76.1521268586486 - type: recall value: 81.81818181818183 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-mni_Beng) type: mteb/flores config: rus_Cyrl-mni_Beng split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 54.74308300395256 - type: f1 value: 48.97285299254615 - type: main_score value: 48.97285299254615 - type: precision value: 46.95125742968299 - type: recall value: 54.74308300395256 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ron_Latn) type: mteb/flores config: rus_Cyrl-ron_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.22134387351778 - type: f1 value: 97.64492753623189 - type: main_score value: 97.64492753623189 - type: precision value: 97.36495388669302 - type: recall value: 98.22134387351778 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-szl_Latn) type: mteb/flores config: rus_Cyrl-szl_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 92.09486166007905 - type: f1 value: 90.10375494071147 - type: main_score value: 90.10375494071147 - type: precision value: 89.29606625258798 - type: recall value: 92.09486166007905 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-vec_Latn) type: mteb/flores config: rus_Cyrl-vec_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 92.4901185770751 - type: f1 value: 90.51430453604365 - type: main_score value: 90.51430453604365 - type: precision value: 89.69367588932808 - type: recall value: 92.4901185770751 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-amh_Ethi) type: mteb/flores config: rus_Cyrl-amh_Ethi split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.82608695652173 - type: f1 value: 97.11791831357048 - type: main_score value: 97.11791831357048 - type: precision value: 96.77206851119894 - type: recall value: 97.82608695652173 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-bos_Latn) type: mteb/flores config: rus_Cyrl-bos_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.91304347826086 - type: f1 value: 98.55072463768116 - type: main_score value: 98.55072463768116 - type: precision value: 98.36956521739131 - type: recall value: 98.91304347826086 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-fin_Latn) type: mteb/flores config: rus_Cyrl-fin_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 95.65217391304348 - type: f1 value: 94.4235836627141 - type: main_score value: 94.4235836627141 - type: precision value: 93.84881422924902 - type: recall value: 95.65217391304348 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ita_Latn) type: mteb/flores config: rus_Cyrl-ita_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.91304347826086 - type: f1 value: 98.55072463768117 - type: main_score value: 98.55072463768117 - type: precision value: 98.36956521739131 - type: recall value: 98.91304347826086 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-kor_Hang) type: mteb/flores config: rus_Cyrl-kor_Hang split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 95.55335968379447 - type: f1 value: 94.15349143610013 - type: main_score value: 94.15349143610013 - type: precision value: 93.49472990777339 - type: recall value: 95.55335968379447 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-mos_Latn) type: mteb/flores config: rus_Cyrl-mos_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 43.67588932806324 - type: f1 value: 38.84849721190082 - type: main_score value: 38.84849721190082 - type: precision value: 37.43294462099682 - type: recall value: 43.67588932806324 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-run_Latn) type: mteb/flores config: rus_Cyrl-run_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 90.21739130434783 - type: f1 value: 87.37483530961792 - type: main_score value: 87.37483530961792 - type: precision value: 86.07872200263506 - type: recall value: 90.21739130434783 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-tam_Taml) type: mteb/flores config: rus_Cyrl-tam_Taml split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.40711462450594 - type: f1 value: 99.2094861660079 - type: main_score value: 99.2094861660079 - type: precision value: 99.1106719367589 - type: recall value: 99.40711462450594 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-vie_Latn) type: mteb/flores config: rus_Cyrl-vie_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.03557312252964 - type: f1 value: 96.13636363636364 - type: main_score value: 96.13636363636364 - type: precision value: 95.70981554677206 - type: recall value: 97.03557312252964 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-apc_Arab) type: mteb/flores config: rus_Cyrl-apc_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.12252964426878 - type: f1 value: 97.49670619235836 - type: main_score value: 97.49670619235836 - type: precision value: 97.18379446640316 - type: recall value: 98.12252964426878 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-bug_Latn) type: mteb/flores config: rus_Cyrl-bug_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 67.29249011857708 - type: f1 value: 62.09268717667927 - type: main_score value: 62.09268717667927 - type: precision value: 60.28554009748714 - type: recall value: 67.29249011857708 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-fon_Latn) type: mteb/flores config: rus_Cyrl-fon_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 63.43873517786561 - type: f1 value: 57.66660107569199 - type: main_score value: 57.66660107569199 - type: precision value: 55.66676396919363 - type: recall value: 63.43873517786561 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-jav_Latn) type: mteb/flores config: rus_Cyrl-jav_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 94.46640316205533 - type: f1 value: 92.89384528514964 - type: main_score value: 92.89384528514964 - type: precision value: 92.19367588932806 - type: recall value: 94.46640316205533 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-lao_Laoo) type: mteb/flores config: rus_Cyrl-lao_Laoo split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.23320158102767 - type: f1 value: 96.40974967061922 - type: main_score value: 96.40974967061922 - type: precision value: 96.034255599473 - type: recall value: 97.23320158102767 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-mri_Latn) type: mteb/flores config: rus_Cyrl-mri_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 76.77865612648222 - type: f1 value: 73.11286539547409 - type: main_score value: 73.11286539547409 - type: precision value: 71.78177214337046 - type: recall value: 76.77865612648222 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-taq_Latn) type: mteb/flores config: rus_Cyrl-taq_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 41.99604743083004 - type: f1 value: 37.25127063318763 - type: main_score value: 37.25127063318763 - type: precision value: 35.718929186985726 - type: recall value: 41.99604743083004 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-war_Latn) type: mteb/flores config: rus_Cyrl-war_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 95.55335968379447 - type: f1 value: 94.1699604743083 - type: main_score value: 94.1699604743083 - type: precision value: 93.52766798418972 - type: recall value: 95.55335968379447 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-arb_Arab) type: mteb/flores config: rus_Cyrl-arb_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.60474308300395 - type: f1 value: 99.4729907773386 - type: main_score value: 99.4729907773386 - type: precision value: 99.40711462450594 - type: recall value: 99.60474308300395 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-bul_Cyrl) type: mteb/flores config: rus_Cyrl-bul_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.70355731225297 - type: f1 value: 99.60474308300395 - type: main_score value: 99.60474308300395 - type: precision value: 99.55533596837944 - type: recall value: 99.70355731225297 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-fra_Latn) type: mteb/flores config: rus_Cyrl-fra_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.60474308300395 - type: f1 value: 99.47299077733861 - type: main_score value: 99.47299077733861 - type: precision value: 99.40711462450594 - type: recall value: 99.60474308300395 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-jpn_Jpan) type: mteb/flores config: rus_Cyrl-jpn_Jpan split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 96.44268774703558 - type: f1 value: 95.30632411067194 - type: main_score value: 95.30632411067194 - type: precision value: 94.76284584980237 - type: recall value: 96.44268774703558 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-lij_Latn) type: mteb/flores config: rus_Cyrl-lij_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 90.21739130434783 - type: f1 value: 87.4703557312253 - type: main_score value: 87.4703557312253 - type: precision value: 86.29611330698287 - type: recall value: 90.21739130434783 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-mya_Mymr) type: mteb/flores config: rus_Cyrl-mya_Mymr split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.02371541501977 - type: f1 value: 97.364953886693 - type: main_score value: 97.364953886693 - type: precision value: 97.03557312252964 - type: recall value: 98.02371541501977 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-sag_Latn) type: mteb/flores config: rus_Cyrl-sag_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 54.841897233201585 - type: f1 value: 49.61882037503349 - type: main_score value: 49.61882037503349 - type: precision value: 47.831968755881796 - type: recall value: 54.841897233201585 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-taq_Tfng) type: mteb/flores config: rus_Cyrl-taq_Tfng split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 15.316205533596838 - type: f1 value: 11.614836360389717 - type: main_score value: 11.614836360389717 - type: precision value: 10.741446193235223 - type: recall value: 15.316205533596838 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-wol_Latn) type: mteb/flores config: rus_Cyrl-wol_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 67.88537549407114 - type: f1 value: 62.2536417249856 - type: main_score value: 62.2536417249856 - type: precision value: 60.27629128666678 - type: recall value: 67.88537549407114 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-arb_Latn) type: mteb/flores config: rus_Cyrl-arb_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 27.766798418972332 - type: f1 value: 23.39674889624077 - type: main_score value: 23.39674889624077 - type: precision value: 22.28521155585345 - type: recall value: 27.766798418972332 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-cat_Latn) type: mteb/flores config: rus_Cyrl-cat_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.23320158102767 - type: f1 value: 96.42151326933936 - type: main_score value: 96.42151326933936 - type: precision value: 96.04743083003953 - type: recall value: 97.23320158102767 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-fur_Latn) type: mteb/flores config: rus_Cyrl-fur_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 88.63636363636364 - type: f1 value: 85.80792396009788 - type: main_score value: 85.80792396009788 - type: precision value: 84.61508901726293 - type: recall value: 88.63636363636364 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-kab_Latn) type: mteb/flores config: rus_Cyrl-kab_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 48.12252964426877 - type: f1 value: 43.05387582971066 - type: main_score value: 43.05387582971066 - type: precision value: 41.44165117538212 - type: recall value: 48.12252964426877 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-lim_Latn) type: mteb/flores config: rus_Cyrl-lim_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 81.81818181818183 - type: f1 value: 77.81676163099087 - type: main_score value: 77.81676163099087 - type: precision value: 76.19565217391305 - type: recall value: 81.81818181818183 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-nld_Latn) type: mteb/flores config: rus_Cyrl-nld_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.33201581027669 - type: f1 value: 96.4756258234519 - type: main_score value: 96.4756258234519 - type: precision value: 96.06389986824769 - type: recall value: 97.33201581027669 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-san_Deva) type: mteb/flores config: rus_Cyrl-san_Deva split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 93.47826086956522 - type: f1 value: 91.70289855072463 - type: main_score value: 91.70289855072463 - type: precision value: 90.9370882740448 - type: recall value: 93.47826086956522 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-tat_Cyrl) type: mteb/flores config: rus_Cyrl-tat_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.72727272727273 - type: f1 value: 97.00263504611331 - type: main_score value: 97.00263504611331 - type: precision value: 96.65678524374177 - type: recall value: 97.72727272727273 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-xho_Latn) type: mteb/flores config: rus_Cyrl-xho_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 93.08300395256917 - type: f1 value: 91.12977602108036 - type: main_score value: 91.12977602108036 - type: precision value: 90.22562582345192 - type: recall value: 93.08300395256917 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ars_Arab) type: mteb/flores config: rus_Cyrl-ars_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.40711462450594 - type: f1 value: 99.2094861660079 - type: main_score value: 99.2094861660079 - type: precision value: 99.1106719367589 - type: recall value: 99.40711462450594 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ceb_Latn) type: mteb/flores config: rus_Cyrl-ceb_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 95.65217391304348 - type: f1 value: 94.3544137022398 - type: main_score value: 94.3544137022398 - type: precision value: 93.76646903820817 - type: recall value: 95.65217391304348 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-fuv_Latn) type: mteb/flores config: rus_Cyrl-fuv_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 51.18577075098815 - type: f1 value: 44.5990252610806 - type: main_score value: 44.5990252610806 - type: precision value: 42.34331599450177 - type: recall value: 51.18577075098815 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-kac_Latn) type: mteb/flores config: rus_Cyrl-kac_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 46.93675889328063 - type: f1 value: 41.79004018701787 - type: main_score value: 41.79004018701787 - type: precision value: 40.243355662392624 - type: recall value: 46.93675889328063 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-lin_Latn) type: mteb/flores config: rus_Cyrl-lin_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 91.50197628458498 - type: f1 value: 89.1205533596838 - type: main_score value: 89.1205533596838 - type: precision value: 88.07147562582345 - type: recall value: 91.50197628458498 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-nno_Latn) type: mteb/flores config: rus_Cyrl-nno_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.81422924901186 - type: f1 value: 98.41897233201581 - type: main_score value: 98.41897233201581 - type: precision value: 98.22134387351778 - type: recall value: 98.81422924901186 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-sat_Olck) type: mteb/flores config: rus_Cyrl-sat_Olck split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 2.371541501976284 - type: f1 value: 1.0726274943087382 - type: main_score value: 1.0726274943087382 - type: precision value: 0.875279634748803 - type: recall value: 2.371541501976284 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-tel_Telu) type: mteb/flores config: rus_Cyrl-tel_Telu split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.01185770750988 - type: f1 value: 98.68247694334651 - type: main_score value: 98.68247694334651 - type: precision value: 98.51778656126481 - type: recall value: 99.01185770750988 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ydd_Hebr) type: mteb/flores config: rus_Cyrl-ydd_Hebr split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 89.42687747035573 - type: f1 value: 86.47609636740073 - type: main_score value: 86.47609636740073 - type: precision value: 85.13669301712781 - type: recall value: 89.42687747035573 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ary_Arab) type: mteb/flores config: rus_Cyrl-ary_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 89.82213438735178 - type: f1 value: 87.04545454545456 - type: main_score value: 87.04545454545456 - type: precision value: 85.76910408432148 - type: recall value: 89.82213438735178 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ces_Latn) type: mteb/flores config: rus_Cyrl-ces_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.2094861660079 - type: f1 value: 98.9459815546772 - type: main_score value: 98.9459815546772 - type: precision value: 98.81422924901186 - type: recall value: 99.2094861660079 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-gaz_Latn) type: mteb/flores config: rus_Cyrl-gaz_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 64.9209486166008 - type: f1 value: 58.697458119394874 - type: main_score value: 58.697458119394874 - type: precision value: 56.43402189597842 - type: recall value: 64.9209486166008 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-kam_Latn) type: mteb/flores config: rus_Cyrl-kam_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 59.18972332015811 - type: f1 value: 53.19031511966295 - type: main_score value: 53.19031511966295 - type: precision value: 51.08128357343655 - type: recall value: 59.18972332015811 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-lit_Latn) type: mteb/flores config: rus_Cyrl-lit_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 96.54150197628458 - type: f1 value: 95.5368906455863 - type: main_score value: 95.5368906455863 - type: precision value: 95.0592885375494 - type: recall value: 96.54150197628458 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-nob_Latn) type: mteb/flores config: rus_Cyrl-nob_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.12252964426878 - type: f1 value: 97.51317523056655 - type: main_score value: 97.51317523056655 - type: precision value: 97.2167325428195 - type: recall value: 98.12252964426878 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-scn_Latn) type: mteb/flores config: rus_Cyrl-scn_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 84.0909090909091 - type: f1 value: 80.37000439174352 - type: main_score value: 80.37000439174352 - type: precision value: 78.83994628559846 - type: recall value: 84.0909090909091 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-tgk_Cyrl) type: mteb/flores config: rus_Cyrl-tgk_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 92.68774703557312 - type: f1 value: 90.86344814605684 - type: main_score value: 90.86344814605684 - type: precision value: 90.12516469038208 - type: recall value: 92.68774703557312 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-yor_Latn) type: mteb/flores config: rus_Cyrl-yor_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 72.13438735177866 - type: f1 value: 66.78759646150951 - type: main_score value: 66.78759646150951 - type: precision value: 64.85080192096002 - type: recall value: 72.13438735177866 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-arz_Arab) type: mteb/flores config: rus_Cyrl-arz_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.02371541501977 - type: f1 value: 97.364953886693 - type: main_score value: 97.364953886693 - type: precision value: 97.03557312252964 - type: recall value: 98.02371541501977 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-cjk_Latn) type: mteb/flores config: rus_Cyrl-cjk_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 51.976284584980235 - type: f1 value: 46.468762353149714 - type: main_score value: 46.468762353149714 - type: precision value: 44.64073366247278 - type: recall value: 51.976284584980235 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-gla_Latn) type: mteb/flores config: rus_Cyrl-gla_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 79.74308300395256 - type: f1 value: 75.55611165294958 - type: main_score value: 75.55611165294958 - type: precision value: 73.95033408620365 - type: recall value: 79.74308300395256 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-kan_Knda) type: mteb/flores config: rus_Cyrl-kan_Knda split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.2094861660079 - type: f1 value: 98.96245059288538 - type: main_score value: 98.96245059288538 - type: precision value: 98.84716732542819 - type: recall value: 99.2094861660079 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-lmo_Latn) type: mteb/flores config: rus_Cyrl-lmo_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 82.41106719367589 - type: f1 value: 78.56413514022209 - type: main_score value: 78.56413514022209 - type: precision value: 77.15313068573938 - type: recall value: 82.41106719367589 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-npi_Deva) type: mteb/flores config: rus_Cyrl-npi_Deva split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.71541501976284 - type: f1 value: 98.3201581027668 - type: main_score value: 98.3201581027668 - type: precision value: 98.12252964426878 - type: recall value: 98.71541501976284 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-shn_Mymr) type: mteb/flores config: rus_Cyrl-shn_Mymr split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 57.11462450592886 - type: f1 value: 51.51361369197337 - type: main_score value: 51.51361369197337 - type: precision value: 49.71860043649573 - type: recall value: 57.11462450592886 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-tgl_Latn) type: mteb/flores config: rus_Cyrl-tgl_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.82608695652173 - type: f1 value: 97.18379446640316 - type: main_score value: 97.18379446640316 - type: precision value: 96.88735177865613 - type: recall value: 97.82608695652173 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-yue_Hant) type: mteb/flores config: rus_Cyrl-yue_Hant split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.30830039525692 - type: f1 value: 99.09420289855072 - type: main_score value: 99.09420289855072 - type: precision value: 98.9953886693017 - type: recall value: 99.30830039525692 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-asm_Beng) type: mteb/flores config: rus_Cyrl-asm_Beng split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 95.55335968379447 - type: f1 value: 94.16007905138339 - type: main_score value: 94.16007905138339 - type: precision value: 93.50296442687747 - type: recall value: 95.55335968379447 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ckb_Arab) type: mteb/flores config: rus_Cyrl-ckb_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 92.88537549407114 - type: f1 value: 90.76745718050066 - type: main_score value: 90.76745718050066 - type: precision value: 89.80072463768116 - type: recall value: 92.88537549407114 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-gle_Latn) type: mteb/flores config: rus_Cyrl-gle_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 91.699604743083 - type: f1 value: 89.40899680030115 - type: main_score value: 89.40899680030115 - type: precision value: 88.40085638998683 - type: recall value: 91.699604743083 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-kas_Arab) type: mteb/flores config: rus_Cyrl-kas_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 88.3399209486166 - type: f1 value: 85.14351590438548 - type: main_score value: 85.14351590438548 - type: precision value: 83.72364953886692 - type: recall value: 88.3399209486166 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ltg_Latn) type: mteb/flores config: rus_Cyrl-ltg_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 83.399209486166 - type: f1 value: 79.88408934061107 - type: main_score value: 79.88408934061107 - type: precision value: 78.53794509179885 - type: recall value: 83.399209486166 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-nso_Latn) type: mteb/flores config: rus_Cyrl-nso_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 91.20553359683794 - type: f1 value: 88.95406635525212 - type: main_score value: 88.95406635525212 - type: precision value: 88.01548089591567 - type: recall value: 91.20553359683794 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-sin_Sinh) type: mteb/flores config: rus_Cyrl-sin_Sinh split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.91304347826086 - type: f1 value: 98.56719367588933 - type: main_score value: 98.56719367588933 - type: precision value: 98.40250329380763 - type: recall value: 98.91304347826086 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-tha_Thai) type: mteb/flores config: rus_Cyrl-tha_Thai split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 95.94861660079052 - type: f1 value: 94.66403162055336 - type: main_score value: 94.66403162055336 - type: precision value: 94.03820816864295 - type: recall value: 95.94861660079052 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-zho_Hans) type: mteb/flores config: rus_Cyrl-zho_Hans split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.4308300395257 - type: f1 value: 96.5909090909091 - type: main_score value: 96.5909090909091 - type: precision value: 96.17918313570487 - type: recall value: 97.4308300395257 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ast_Latn) type: mteb/flores config: rus_Cyrl-ast_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 94.46640316205533 - type: f1 value: 92.86890645586297 - type: main_score value: 92.86890645586297 - type: precision value: 92.14756258234519 - type: recall value: 94.46640316205533 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-crh_Latn) type: mteb/flores config: rus_Cyrl-crh_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 94.66403162055336 - type: f1 value: 93.2663592446201 - type: main_score value: 93.2663592446201 - type: precision value: 92.66716073781292 - type: recall value: 94.66403162055336 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-glg_Latn) type: mteb/flores config: rus_Cyrl-glg_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.81422924901186 - type: f1 value: 98.46837944664031 - type: main_score value: 98.46837944664031 - type: precision value: 98.3201581027668 - type: recall value: 98.81422924901186 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-kas_Deva) type: mteb/flores config: rus_Cyrl-kas_Deva split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 69.1699604743083 - type: f1 value: 63.05505292906477 - type: main_score value: 63.05505292906477 - type: precision value: 60.62594108789761 - type: recall value: 69.1699604743083 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ltz_Latn) type: mteb/flores config: rus_Cyrl-ltz_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 91.40316205533597 - type: f1 value: 89.26571616789009 - type: main_score value: 89.26571616789009 - type: precision value: 88.40179747788443 - type: recall value: 91.40316205533597 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-nus_Latn) type: mteb/flores config: rus_Cyrl-nus_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 38.93280632411067 - type: f1 value: 33.98513032905371 - type: main_score value: 33.98513032905371 - type: precision value: 32.56257884802308 - type: recall value: 38.93280632411067 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-slk_Latn) type: mteb/flores config: rus_Cyrl-slk_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.02371541501977 - type: f1 value: 97.42094861660078 - type: main_score value: 97.42094861660078 - type: precision value: 97.14262187088273 - type: recall value: 98.02371541501977 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-tir_Ethi) type: mteb/flores config: rus_Cyrl-tir_Ethi split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 91.30434782608695 - type: f1 value: 88.78129117259552 - type: main_score value: 88.78129117259552 - type: precision value: 87.61528326745717 - type: recall value: 91.30434782608695 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-zho_Hant) type: mteb/flores config: rus_Cyrl-zho_Hant split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.1106719367589 - type: f1 value: 98.81422924901186 - type: main_score value: 98.81422924901186 - type: precision value: 98.66600790513834 - type: recall value: 99.1106719367589 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-awa_Deva) type: mteb/flores config: rus_Cyrl-awa_Deva split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.12252964426878 - type: f1 value: 97.70092226613966 - type: main_score value: 97.70092226613966 - type: precision value: 97.50494071146245 - type: recall value: 98.12252964426878 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-cym_Latn) type: mteb/flores config: rus_Cyrl-cym_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 95.94861660079052 - type: f1 value: 94.74308300395256 - type: main_score value: 94.74308300395256 - type: precision value: 94.20289855072464 - type: recall value: 95.94861660079052 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-grn_Latn) type: mteb/flores config: rus_Cyrl-grn_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 77.96442687747036 - type: f1 value: 73.64286789187975 - type: main_score value: 73.64286789187975 - type: precision value: 71.99324893260821 - type: recall value: 77.96442687747036 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-kat_Geor) type: mteb/flores config: rus_Cyrl-kat_Geor split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.91304347826086 - type: f1 value: 98.56719367588933 - type: main_score value: 98.56719367588933 - type: precision value: 98.40250329380764 - type: recall value: 98.91304347826086 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-lua_Latn) type: mteb/flores config: rus_Cyrl-lua_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 72.03557312252964 - type: f1 value: 67.23928163404449 - type: main_score value: 67.23928163404449 - type: precision value: 65.30797101449275 - type: recall value: 72.03557312252964 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-nya_Latn) type: mteb/flores config: rus_Cyrl-nya_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 92.29249011857708 - type: f1 value: 90.0494071146245 - type: main_score value: 90.0494071146245 - type: precision value: 89.04808959156786 - type: recall value: 92.29249011857708 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-slv_Latn) type: mteb/flores config: rus_Cyrl-slv_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.71541501976284 - type: f1 value: 98.30368906455863 - type: main_score value: 98.30368906455863 - type: precision value: 98.10606060606061 - type: recall value: 98.71541501976284 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-tpi_Latn) type: mteb/flores config: rus_Cyrl-tpi_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 80.53359683794467 - type: f1 value: 76.59481822525301 - type: main_score value: 76.59481822525301 - type: precision value: 75.12913223140497 - type: recall value: 80.53359683794467 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-zsm_Latn) type: mteb/flores config: rus_Cyrl-zsm_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.33201581027669 - type: f1 value: 96.58620365142104 - type: main_score value: 96.58620365142104 - type: precision value: 96.26152832674572 - type: recall value: 97.33201581027669 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ayr_Latn) type: mteb/flores config: rus_Cyrl-ayr_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 45.55335968379446 - type: f1 value: 40.13076578531388 - type: main_score value: 40.13076578531388 - type: precision value: 38.398064362362355 - type: recall value: 45.55335968379446 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-dan_Latn) type: mteb/flores config: rus_Cyrl-dan_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.01185770750988 - type: f1 value: 98.68247694334651 - type: main_score value: 98.68247694334651 - type: precision value: 98.51778656126481 - type: recall value: 99.01185770750988 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-guj_Gujr) type: mteb/flores config: rus_Cyrl-guj_Gujr split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.01185770750988 - type: f1 value: 98.68247694334651 - type: main_score value: 98.68247694334651 - type: precision value: 98.51778656126481 - type: recall value: 99.01185770750988 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-kaz_Cyrl) type: mteb/flores config: rus_Cyrl-kaz_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.81422924901186 - type: f1 value: 98.43544137022398 - type: main_score value: 98.43544137022398 - type: precision value: 98.25428194993412 - type: recall value: 98.81422924901186 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-lug_Latn) type: mteb/flores config: rus_Cyrl-lug_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 82.21343873517787 - type: f1 value: 77.97485726833554 - type: main_score value: 77.97485726833554 - type: precision value: 76.22376717485415 - type: recall value: 82.21343873517787 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-oci_Latn) type: mteb/flores config: rus_Cyrl-oci_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 93.87351778656127 - type: f1 value: 92.25319969885187 - type: main_score value: 92.25319969885187 - type: precision value: 91.5638528138528 - type: recall value: 93.87351778656127 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-smo_Latn) type: mteb/flores config: rus_Cyrl-smo_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 84.88142292490119 - type: f1 value: 81.24364765669114 - type: main_score value: 81.24364765669114 - type: precision value: 79.69991416137661 - type: recall value: 84.88142292490119 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-tsn_Latn) type: mteb/flores config: rus_Cyrl-tsn_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 87.05533596837944 - type: f1 value: 83.90645586297761 - type: main_score value: 83.90645586297761 - type: precision value: 82.56752305665349 - type: recall value: 87.05533596837944 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-zul_Latn) type: mteb/flores config: rus_Cyrl-zul_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 95.15810276679841 - type: f1 value: 93.77140974967062 - type: main_score value: 93.77140974967062 - type: precision value: 93.16534914361002 - type: recall value: 95.15810276679841 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-azb_Arab) type: mteb/flores config: rus_Cyrl-azb_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 81.91699604743083 - type: f1 value: 77.18050065876152 - type: main_score value: 77.18050065876152 - type: precision value: 75.21519543258673 - type: recall value: 81.91699604743083 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-deu_Latn) type: mteb/flores config: rus_Cyrl-deu_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.50592885375494 - type: f1 value: 99.34123847167325 - type: main_score value: 99.34123847167325 - type: precision value: 99.2588932806324 - type: recall value: 99.50592885375494 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-hat_Latn) type: mteb/flores config: rus_Cyrl-hat_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 91.00790513833992 - type: f1 value: 88.69126043039086 - type: main_score value: 88.69126043039086 - type: precision value: 87.75774044795784 - type: recall value: 91.00790513833992 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-kbp_Latn) type: mteb/flores config: rus_Cyrl-kbp_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 47.233201581027664 - type: f1 value: 43.01118618096943 - type: main_score value: 43.01118618096943 - type: precision value: 41.739069205043556 - type: recall value: 47.233201581027664 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-luo_Latn) type: mteb/flores config: rus_Cyrl-luo_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 60.47430830039525 - type: f1 value: 54.83210565429816 - type: main_score value: 54.83210565429816 - type: precision value: 52.81630744284779 - type: recall value: 60.47430830039525 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-ory_Orya) type: mteb/flores config: rus_Cyrl-ory_Orya split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.1106719367589 - type: f1 value: 98.83069828722003 - type: main_score value: 98.83069828722003 - type: precision value: 98.69894598155467 - type: recall value: 99.1106719367589 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-sna_Latn) type: mteb/flores config: rus_Cyrl-sna_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 89.72332015810277 - type: f1 value: 87.30013645774514 - type: main_score value: 87.30013645774514 - type: precision value: 86.25329380764163 - type: recall value: 89.72332015810277 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-tso_Latn) type: mteb/flores config: rus_Cyrl-tso_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 84.38735177865613 - type: f1 value: 80.70424744337788 - type: main_score value: 80.70424744337788 - type: precision value: 79.18560606060606 - type: recall value: 84.38735177865613 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-azj_Latn) type: mteb/flores config: rus_Cyrl-azj_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.33201581027669 - type: f1 value: 96.56455862977602 - type: main_score value: 96.56455862977602 - type: precision value: 96.23682476943345 - type: recall value: 97.33201581027669 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-dik_Latn) type: mteb/flores config: rus_Cyrl-dik_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 46.047430830039524 - type: f1 value: 40.05513069495283 - type: main_score value: 40.05513069495283 - type: precision value: 38.072590197096126 - type: recall value: 46.047430830039524 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-hau_Latn) type: mteb/flores config: rus_Cyrl-hau_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 87.94466403162056 - type: f1 value: 84.76943346508563 - type: main_score value: 84.76943346508563 - type: precision value: 83.34486166007905 - type: recall value: 87.94466403162056 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-kea_Latn) type: mteb/flores config: rus_Cyrl-kea_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 89.42687747035573 - type: f1 value: 86.83803021747684 - type: main_score value: 86.83803021747684 - type: precision value: 85.78416149068323 - type: recall value: 89.42687747035573 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-lus_Latn) type: mteb/flores config: rus_Cyrl-lus_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 68.97233201581028 - type: f1 value: 64.05480726292745 - type: main_score value: 64.05480726292745 - type: precision value: 62.42670749487858 - type: recall value: 68.97233201581028 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-pag_Latn) type: mteb/flores config: rus_Cyrl-pag_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 78.75494071146245 - type: f1 value: 74.58573558401933 - type: main_score value: 74.58573558401933 - type: precision value: 73.05532028358115 - type: recall value: 78.75494071146245 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-snd_Arab) type: mteb/flores config: rus_Cyrl-snd_Arab split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 95.8498023715415 - type: f1 value: 94.56521739130434 - type: main_score value: 94.56521739130434 - type: precision value: 93.97233201581028 - type: recall value: 95.8498023715415 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-tuk_Latn) type: mteb/flores config: rus_Cyrl-tuk_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 68.08300395256917 - type: f1 value: 62.93565240205557 - type: main_score value: 62.93565240205557 - type: precision value: 61.191590257043934 - type: recall value: 68.08300395256917 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-bak_Cyrl) type: mteb/flores config: rus_Cyrl-bak_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 96.04743083003953 - type: f1 value: 94.86824769433464 - type: main_score value: 94.86824769433464 - type: precision value: 94.34288537549406 - type: recall value: 96.04743083003953 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-dyu_Latn) type: mteb/flores config: rus_Cyrl-dyu_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 37.45059288537549 - type: f1 value: 31.670482312800807 - type: main_score value: 31.670482312800807 - type: precision value: 29.99928568357422 - type: recall value: 37.45059288537549 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-heb_Hebr) type: mteb/flores config: rus_Cyrl-heb_Hebr split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.23320158102767 - type: f1 value: 96.38998682476942 - type: main_score value: 96.38998682476942 - type: precision value: 95.99802371541502 - type: recall value: 97.23320158102767 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-khk_Cyrl) type: mteb/flores config: rus_Cyrl-khk_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.41897233201581 - type: f1 value: 98.00724637681158 - type: main_score value: 98.00724637681158 - type: precision value: 97.82938076416336 - type: recall value: 98.41897233201581 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-lvs_Latn) type: mteb/flores config: rus_Cyrl-lvs_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.4308300395257 - type: f1 value: 96.61396574440053 - type: main_score value: 96.61396574440053 - type: precision value: 96.2203557312253 - type: recall value: 97.4308300395257 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-pan_Guru) type: mteb/flores config: rus_Cyrl-pan_Guru split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.30830039525692 - type: f1 value: 99.07773386034256 - type: main_score value: 99.07773386034256 - type: precision value: 98.96245059288538 - type: recall value: 99.30830039525692 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-som_Latn) type: mteb/flores config: rus_Cyrl-som_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 87.74703557312253 - type: f1 value: 84.52898550724638 - type: main_score value: 84.52898550724638 - type: precision value: 83.09288537549409 - type: recall value: 87.74703557312253 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (rus_Cyrl-tum_Latn) type: mteb/flores config: rus_Cyrl-tum_Latn split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 87.15415019762845 - type: f1 value: 83.85069640504425 - type: main_score value: 83.85069640504425 - type: precision value: 82.43671183888576 - type: recall value: 87.15415019762845 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (taq_Latn-rus_Cyrl) type: mteb/flores config: taq_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 28.55731225296443 - type: f1 value: 26.810726360049568 - type: main_score value: 26.810726360049568 - type: precision value: 26.260342858265577 - type: recall value: 28.55731225296443 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (war_Latn-rus_Cyrl) type: mteb/flores config: war_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 94.86166007905138 - type: f1 value: 94.03147083483051 - type: main_score value: 94.03147083483051 - type: precision value: 93.70653606003322 - type: recall value: 94.86166007905138 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (arb_Arab-rus_Cyrl) type: mteb/flores config: arb_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 96.34387351778656 - type: f1 value: 95.23056653491436 - type: main_score value: 95.23056653491436 - type: precision value: 94.70520421607378 - type: recall value: 96.34387351778656 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (bul_Cyrl-rus_Cyrl) type: mteb/flores config: bul_Cyrl-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.90118577075098 - type: f1 value: 99.86824769433464 - type: main_score value: 99.86824769433464 - type: precision value: 99.85177865612648 - type: recall value: 99.90118577075098 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (fra_Latn-rus_Cyrl) type: mteb/flores config: fra_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.2094861660079 - type: f1 value: 98.9459815546772 - type: main_score value: 98.9459815546772 - type: precision value: 98.81422924901186 - type: recall value: 99.2094861660079 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (jpn_Jpan-rus_Cyrl) type: mteb/flores config: jpn_Jpan-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.3201581027668 - type: f1 value: 97.76021080368905 - type: main_score value: 97.76021080368905 - type: precision value: 97.48023715415019 - type: recall value: 98.3201581027668 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (lij_Latn-rus_Cyrl) type: mteb/flores config: lij_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 83.49802371541502 - type: f1 value: 81.64800059239636 - type: main_score value: 81.64800059239636 - type: precision value: 80.9443055878478 - type: recall value: 83.49802371541502 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (mya_Mymr-rus_Cyrl) type: mteb/flores config: mya_Mymr-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 90.21739130434783 - type: f1 value: 88.76776366313682 - type: main_score value: 88.76776366313682 - type: precision value: 88.18370446119435 - type: recall value: 90.21739130434783 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (sag_Latn-rus_Cyrl) type: mteb/flores config: sag_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 41.699604743083 - type: f1 value: 39.53066322643847 - type: main_score value: 39.53066322643847 - type: precision value: 38.822876239229274 - type: recall value: 41.699604743083 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (taq_Tfng-rus_Cyrl) type: mteb/flores config: taq_Tfng-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 10.67193675889328 - type: f1 value: 9.205744965817951 - type: main_score value: 9.205744965817951 - type: precision value: 8.85195219073817 - type: recall value: 10.67193675889328 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (wol_Latn-rus_Cyrl) type: mteb/flores config: wol_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 63.537549407114625 - type: f1 value: 60.65190727391827 - type: main_score value: 60.65190727391827 - type: precision value: 59.61144833427442 - type: recall value: 63.537549407114625 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (arb_Latn-rus_Cyrl) type: mteb/flores config: arb_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 13.142292490118576 - type: f1 value: 12.372910318176764 - type: main_score value: 12.372910318176764 - type: precision value: 12.197580895919188 - type: recall value: 13.142292490118576 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (cat_Latn-rus_Cyrl) type: mteb/flores config: cat_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.01185770750988 - type: f1 value: 98.80599472990777 - type: main_score value: 98.80599472990777 - type: precision value: 98.72953133822698 - type: recall value: 99.01185770750988 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (fur_Latn-rus_Cyrl) type: mteb/flores config: fur_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 81.02766798418972 - type: f1 value: 79.36184294084613 - type: main_score value: 79.36184294084613 - type: precision value: 78.69187826527705 - type: recall value: 81.02766798418972 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (kab_Latn-rus_Cyrl) type: mteb/flores config: kab_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 34.387351778656125 - type: f1 value: 32.02306921576947 - type: main_score value: 32.02306921576947 - type: precision value: 31.246670347137467 - type: recall value: 34.387351778656125 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (lim_Latn-rus_Cyrl) type: mteb/flores config: lim_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 78.26086956521739 - type: f1 value: 75.90239449214359 - type: main_score value: 75.90239449214359 - type: precision value: 75.02211430745493 - type: recall value: 78.26086956521739 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (nld_Latn-rus_Cyrl) type: mteb/flores config: nld_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.2094861660079 - type: f1 value: 98.9459815546772 - type: main_score value: 98.9459815546772 - type: precision value: 98.81422924901186 - type: recall value: 99.2094861660079 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (san_Deva-rus_Cyrl) type: mteb/flores config: san_Deva-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 87.94466403162056 - type: f1 value: 86.68928897189767 - type: main_score value: 86.68928897189767 - type: precision value: 86.23822997079216 - type: recall value: 87.94466403162056 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (tat_Cyrl-rus_Cyrl) type: mteb/flores config: tat_Cyrl-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.03557312252964 - type: f1 value: 96.4167365353136 - type: main_score value: 96.4167365353136 - type: precision value: 96.16847826086958 - type: recall value: 97.03557312252964 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (xho_Latn-rus_Cyrl) type: mteb/flores config: xho_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 86.95652173913044 - type: f1 value: 85.5506497283435 - type: main_score value: 85.5506497283435 - type: precision value: 84.95270479733395 - type: recall value: 86.95652173913044 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ars_Arab-rus_Cyrl) type: mteb/flores config: ars_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 96.6403162055336 - type: f1 value: 95.60935441370223 - type: main_score value: 95.60935441370223 - type: precision value: 95.13339920948617 - type: recall value: 96.6403162055336 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ceb_Latn-rus_Cyrl) type: mteb/flores config: ceb_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 95.7509881422925 - type: f1 value: 95.05209198303827 - type: main_score value: 95.05209198303827 - type: precision value: 94.77662283368805 - type: recall value: 95.7509881422925 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (fuv_Latn-rus_Cyrl) type: mteb/flores config: fuv_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 45.25691699604743 - type: f1 value: 42.285666666742365 - type: main_score value: 42.285666666742365 - type: precision value: 41.21979853402283 - type: recall value: 45.25691699604743 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (kac_Latn-rus_Cyrl) type: mteb/flores config: kac_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 34.683794466403164 - type: f1 value: 33.3235346229031 - type: main_score value: 33.3235346229031 - type: precision value: 32.94673924616852 - type: recall value: 34.683794466403164 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (lin_Latn-rus_Cyrl) type: mteb/flores config: lin_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 86.85770750988142 - type: f1 value: 85.1867110799439 - type: main_score value: 85.1867110799439 - type: precision value: 84.53038212173273 - type: recall value: 86.85770750988142 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (nno_Latn-rus_Cyrl) type: mteb/flores config: nno_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.4308300395257 - type: f1 value: 96.78383210991906 - type: main_score value: 96.78383210991906 - type: precision value: 96.51185770750989 - type: recall value: 97.4308300395257 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (sat_Olck-rus_Cyrl) type: mteb/flores config: sat_Olck-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 1.185770750988142 - type: f1 value: 1.0279253129117258 - type: main_score value: 1.0279253129117258 - type: precision value: 1.0129746819135175 - type: recall value: 1.185770750988142 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (tel_Telu-rus_Cyrl) type: mteb/flores config: tel_Telu-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.12252964426878 - type: f1 value: 97.61198945981555 - type: main_score value: 97.61198945981555 - type: precision value: 97.401185770751 - type: recall value: 98.12252964426878 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ydd_Hebr-rus_Cyrl) type: mteb/flores config: ydd_Hebr-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 75.8893280632411 - type: f1 value: 74.00244008018511 - type: main_score value: 74.00244008018511 - type: precision value: 73.25683020960382 - type: recall value: 75.8893280632411 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ary_Arab-rus_Cyrl) type: mteb/flores config: ary_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 86.56126482213439 - type: f1 value: 83.72796285839765 - type: main_score value: 83.72796285839765 - type: precision value: 82.65014273166447 - type: recall value: 86.56126482213439 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ces_Latn-rus_Cyrl) type: mteb/flores config: ces_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.60474308300395 - type: f1 value: 99.4729907773386 - type: main_score value: 99.4729907773386 - type: precision value: 99.40711462450594 - type: recall value: 99.60474308300395 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (gaz_Latn-rus_Cyrl) type: mteb/flores config: gaz_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 42.58893280632411 - type: f1 value: 40.75832866805978 - type: main_score value: 40.75832866805978 - type: precision value: 40.14285046917723 - type: recall value: 42.58893280632411 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (kam_Latn-rus_Cyrl) type: mteb/flores config: kam_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 45.25691699604743 - type: f1 value: 42.6975518029456 - type: main_score value: 42.6975518029456 - type: precision value: 41.87472710984596 - type: recall value: 45.25691699604743 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (lit_Latn-rus_Cyrl) type: mteb/flores config: lit_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.33201581027669 - type: f1 value: 96.62384716732542 - type: main_score value: 96.62384716732542 - type: precision value: 96.3175230566535 - type: recall value: 97.33201581027669 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (nob_Latn-rus_Cyrl) type: mteb/flores config: nob_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.71541501976284 - type: f1 value: 98.30368906455863 - type: main_score value: 98.30368906455863 - type: precision value: 98.10606060606061 - type: recall value: 98.71541501976284 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (scn_Latn-rus_Cyrl) type: mteb/flores config: scn_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 70.45454545454545 - type: f1 value: 68.62561022640075 - type: main_score value: 68.62561022640075 - type: precision value: 67.95229103411222 - type: recall value: 70.45454545454545 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (tgk_Cyrl-rus_Cyrl) type: mteb/flores config: tgk_Cyrl-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 92.4901185770751 - type: f1 value: 91.58514492753623 - type: main_score value: 91.58514492753623 - type: precision value: 91.24759298672342 - type: recall value: 92.4901185770751 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (yor_Latn-rus_Cyrl) type: mteb/flores config: yor_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 67.98418972332016 - type: f1 value: 64.72874247330768 - type: main_score value: 64.72874247330768 - type: precision value: 63.450823399938685 - type: recall value: 67.98418972332016 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (arz_Arab-rus_Cyrl) type: mteb/flores config: arz_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 94.56521739130434 - type: f1 value: 93.07971014492755 - type: main_score value: 93.07971014492755 - type: precision value: 92.42753623188406 - type: recall value: 94.56521739130434 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (cjk_Latn-rus_Cyrl) type: mteb/flores config: cjk_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 38.63636363636363 - type: f1 value: 36.25747140862938 - type: main_score value: 36.25747140862938 - type: precision value: 35.49101355074723 - type: recall value: 38.63636363636363 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (gla_Latn-rus_Cyrl) type: mteb/flores config: gla_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 69.26877470355731 - type: f1 value: 66.11797423328613 - type: main_score value: 66.11797423328613 - type: precision value: 64.89369649409694 - type: recall value: 69.26877470355731 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (kan_Knda-rus_Cyrl) type: mteb/flores config: kan_Knda-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.02371541501977 - type: f1 value: 97.51505740636176 - type: main_score value: 97.51505740636176 - type: precision value: 97.30731225296442 - type: recall value: 98.02371541501977 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (lmo_Latn-rus_Cyrl) type: mteb/flores config: lmo_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 73.3201581027668 - type: f1 value: 71.06371608677273 - type: main_score value: 71.06371608677273 - type: precision value: 70.26320288266223 - type: recall value: 73.3201581027668 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (npi_Deva-rus_Cyrl) type: mteb/flores config: npi_Deva-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.82608695652173 - type: f1 value: 97.36645107198466 - type: main_score value: 97.36645107198466 - type: precision value: 97.1772068511199 - type: recall value: 97.82608695652173 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (shn_Mymr-rus_Cyrl) type: mteb/flores config: shn_Mymr-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 39.426877470355734 - type: f1 value: 37.16728785513024 - type: main_score value: 37.16728785513024 - type: precision value: 36.56918548278505 - type: recall value: 39.426877470355734 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (tgl_Latn-rus_Cyrl) type: mteb/flores config: tgl_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.92490118577075 - type: f1 value: 97.6378693769998 - type: main_score value: 97.6378693769998 - type: precision value: 97.55371440154047 - type: recall value: 97.92490118577075 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (yue_Hant-rus_Cyrl) type: mteb/flores config: yue_Hant-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.92490118577075 - type: f1 value: 97.3833051006964 - type: main_score value: 97.3833051006964 - type: precision value: 97.1590909090909 - type: recall value: 97.92490118577075 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (asm_Beng-rus_Cyrl) type: mteb/flores config: asm_Beng-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 92.78656126482213 - type: f1 value: 91.76917395296842 - type: main_score value: 91.76917395296842 - type: precision value: 91.38292866553736 - type: recall value: 92.78656126482213 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ckb_Arab-rus_Cyrl) type: mteb/flores config: ckb_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 80.8300395256917 - type: f1 value: 79.17664345468799 - type: main_score value: 79.17664345468799 - type: precision value: 78.5622171683459 - type: recall value: 80.8300395256917 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (gle_Latn-rus_Cyrl) type: mteb/flores config: gle_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 85.86956521739131 - type: f1 value: 84.45408265372492 - type: main_score value: 84.45408265372492 - type: precision value: 83.8774340026703 - type: recall value: 85.86956521739131 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (kas_Arab-rus_Cyrl) type: mteb/flores config: kas_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 76.28458498023716 - type: f1 value: 74.11216313578267 - type: main_score value: 74.11216313578267 - type: precision value: 73.2491277759584 - type: recall value: 76.28458498023716 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ltg_Latn-rus_Cyrl) type: mteb/flores config: ltg_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 71.14624505928853 - type: f1 value: 68.69245357723618 - type: main_score value: 68.69245357723618 - type: precision value: 67.8135329666459 - type: recall value: 71.14624505928853 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (nso_Latn-rus_Cyrl) type: mteb/flores config: nso_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 87.64822134387352 - type: f1 value: 85.98419219986725 - type: main_score value: 85.98419219986725 - type: precision value: 85.32513873917036 - type: recall value: 87.64822134387352 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (sin_Sinh-rus_Cyrl) type: mteb/flores config: sin_Sinh-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.62845849802372 - type: f1 value: 97.10144927536231 - type: main_score value: 97.10144927536231 - type: precision value: 96.87986585219788 - type: recall value: 97.62845849802372 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (tha_Thai-rus_Cyrl) type: mteb/flores config: tha_Thai-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.71541501976284 - type: f1 value: 98.28722002635045 - type: main_score value: 98.28722002635045 - type: precision value: 98.07312252964427 - type: recall value: 98.71541501976284 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (zho_Hans-rus_Cyrl) type: mteb/flores config: zho_Hans-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.01185770750988 - type: f1 value: 98.68247694334651 - type: main_score value: 98.68247694334651 - type: precision value: 98.51778656126481 - type: recall value: 99.01185770750988 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ast_Latn-rus_Cyrl) type: mteb/flores config: ast_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 95.65217391304348 - type: f1 value: 94.90649683857505 - type: main_score value: 94.90649683857505 - type: precision value: 94.61352657004831 - type: recall value: 95.65217391304348 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (crh_Latn-rus_Cyrl) type: mteb/flores config: crh_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 93.08300395256917 - type: f1 value: 92.20988998886428 - type: main_score value: 92.20988998886428 - type: precision value: 91.85631013694254 - type: recall value: 93.08300395256917 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (glg_Latn-rus_Cyrl) type: mteb/flores config: glg_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 95.55335968379447 - type: f1 value: 95.18006148440931 - type: main_score value: 95.18006148440931 - type: precision value: 95.06540560888386 - type: recall value: 95.55335968379447 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (kas_Deva-rus_Cyrl) type: mteb/flores config: kas_Deva-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 55.03952569169961 - type: f1 value: 52.19871938895554 - type: main_score value: 52.19871938895554 - type: precision value: 51.17660971469557 - type: recall value: 55.03952569169961 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ltz_Latn-rus_Cyrl) type: mteb/flores config: ltz_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 87.64822134387352 - type: f1 value: 86.64179841897234 - type: main_score value: 86.64179841897234 - type: precision value: 86.30023235431587 - type: recall value: 87.64822134387352 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (nus_Latn-rus_Cyrl) type: mteb/flores config: nus_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 27.4703557312253 - type: f1 value: 25.703014277858088 - type: main_score value: 25.703014277858088 - type: precision value: 25.194105476917315 - type: recall value: 27.4703557312253 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (slk_Latn-rus_Cyrl) type: mteb/flores config: slk_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.30830039525692 - type: f1 value: 99.1106719367589 - type: main_score value: 99.1106719367589 - type: precision value: 99.02832674571805 - type: recall value: 99.30830039525692 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (tir_Ethi-rus_Cyrl) type: mteb/flores config: tir_Ethi-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 80.73122529644269 - type: f1 value: 78.66903754775608 - type: main_score value: 78.66903754775608 - type: precision value: 77.86431694163612 - type: recall value: 80.73122529644269 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (zho_Hant-rus_Cyrl) type: mteb/flores config: zho_Hant-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.22134387351778 - type: f1 value: 97.66798418972333 - type: main_score value: 97.66798418972333 - type: precision value: 97.40612648221344 - type: recall value: 98.22134387351778 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (awa_Deva-rus_Cyrl) type: mteb/flores config: awa_Deva-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.5296442687747 - type: f1 value: 96.94224857268335 - type: main_score value: 96.94224857268335 - type: precision value: 96.68560606060606 - type: recall value: 97.5296442687747 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (cym_Latn-rus_Cyrl) type: mteb/flores config: cym_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 92.68774703557312 - type: f1 value: 91.69854302097961 - type: main_score value: 91.69854302097961 - type: precision value: 91.31236846157795 - type: recall value: 92.68774703557312 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (grn_Latn-rus_Cyrl) type: mteb/flores config: grn_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 64.13043478260869 - type: f1 value: 61.850586118740004 - type: main_score value: 61.850586118740004 - type: precision value: 61.0049495186209 - type: recall value: 64.13043478260869 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (kat_Geor-rus_Cyrl) type: mteb/flores config: kat_Geor-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.02371541501977 - type: f1 value: 97.59881422924902 - type: main_score value: 97.59881422924902 - type: precision value: 97.42534036012296 - type: recall value: 98.02371541501977 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (lua_Latn-rus_Cyrl) type: mteb/flores config: lua_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 63.63636363636363 - type: f1 value: 60.9709122526128 - type: main_score value: 60.9709122526128 - type: precision value: 60.03915902282226 - type: recall value: 63.63636363636363 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (nya_Latn-rus_Cyrl) type: mteb/flores config: nya_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 89.2292490118577 - type: f1 value: 87.59723824473149 - type: main_score value: 87.59723824473149 - type: precision value: 86.90172707867349 - type: recall value: 89.2292490118577 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (slv_Latn-rus_Cyrl) type: mteb/flores config: slv_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.01185770750988 - type: f1 value: 98.74835309617917 - type: main_score value: 98.74835309617917 - type: precision value: 98.63636363636364 - type: recall value: 99.01185770750988 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (tpi_Latn-rus_Cyrl) type: mteb/flores config: tpi_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 77.37154150197628 - type: f1 value: 75.44251611276084 - type: main_score value: 75.44251611276084 - type: precision value: 74.78103665109595 - type: recall value: 77.37154150197628 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (zsm_Latn-rus_Cyrl) type: mteb/flores config: zsm_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.2094861660079 - type: f1 value: 98.96245059288538 - type: main_score value: 98.96245059288538 - type: precision value: 98.8471673254282 - type: recall value: 99.2094861660079 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ayr_Latn-rus_Cyrl) type: mteb/flores config: ayr_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 27.766798418972332 - type: f1 value: 26.439103195281312 - type: main_score value: 26.439103195281312 - type: precision value: 26.052655604573964 - type: recall value: 27.766798418972332 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (dan_Latn-rus_Cyrl) type: mteb/flores config: dan_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.30830039525692 - type: f1 value: 99.07773386034255 - type: main_score value: 99.07773386034255 - type: precision value: 98.96245059288538 - type: recall value: 99.30830039525692 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (guj_Gujr-rus_Cyrl) type: mteb/flores config: guj_Gujr-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.82608695652173 - type: f1 value: 97.26449275362317 - type: main_score value: 97.26449275362317 - type: precision value: 97.02498588368154 - type: recall value: 97.82608695652173 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (kaz_Cyrl-rus_Cyrl) type: mteb/flores config: kaz_Cyrl-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.5296442687747 - type: f1 value: 97.03557312252964 - type: main_score value: 97.03557312252964 - type: precision value: 96.85022158342316 - type: recall value: 97.5296442687747 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (lug_Latn-rus_Cyrl) type: mteb/flores config: lug_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 68.57707509881423 - type: f1 value: 65.93361605820395 - type: main_score value: 65.93361605820395 - type: precision value: 64.90348248593789 - type: recall value: 68.57707509881423 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (oci_Latn-rus_Cyrl) type: mteb/flores config: oci_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 86.26482213438736 - type: f1 value: 85.33176417155623 - type: main_score value: 85.33176417155623 - type: precision value: 85.00208833384637 - type: recall value: 86.26482213438736 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (smo_Latn-rus_Cyrl) type: mteb/flores config: smo_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 77.96442687747036 - type: f1 value: 75.70960450188885 - type: main_score value: 75.70960450188885 - type: precision value: 74.8312632736777 - type: recall value: 77.96442687747036 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (tsn_Latn-rus_Cyrl) type: mteb/flores config: tsn_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 84.38735177865613 - type: f1 value: 82.13656376349225 - type: main_score value: 82.13656376349225 - type: precision value: 81.16794543904518 - type: recall value: 84.38735177865613 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (zul_Latn-rus_Cyrl) type: mteb/flores config: zul_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 90.21739130434783 - type: f1 value: 88.77570602050753 - type: main_score value: 88.77570602050753 - type: precision value: 88.15978104021582 - type: recall value: 90.21739130434783 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (azb_Arab-rus_Cyrl) type: mteb/flores config: azb_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 65.71146245059289 - type: f1 value: 64.18825390221271 - type: main_score value: 64.18825390221271 - type: precision value: 63.66811154793568 - type: recall value: 65.71146245059289 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (deu_Latn-rus_Cyrl) type: mteb/flores config: deu_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 99.70355731225297 - type: f1 value: 99.60474308300395 - type: main_score value: 99.60474308300395 - type: precision value: 99.55533596837944 - type: recall value: 99.70355731225297 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (hat_Latn-rus_Cyrl) type: mteb/flores config: hat_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 86.7588932806324 - type: f1 value: 85.86738623695146 - type: main_score value: 85.86738623695146 - type: precision value: 85.55235467420822 - type: recall value: 86.7588932806324 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (kbp_Latn-rus_Cyrl) type: mteb/flores config: kbp_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 34.88142292490119 - type: f1 value: 32.16511669463015 - type: main_score value: 32.16511669463015 - type: precision value: 31.432098549546318 - type: recall value: 34.88142292490119 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (luo_Latn-rus_Cyrl) type: mteb/flores config: luo_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 52.27272727272727 - type: f1 value: 49.60489626836975 - type: main_score value: 49.60489626836975 - type: precision value: 48.69639631803339 - type: recall value: 52.27272727272727 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (ory_Orya-rus_Cyrl) type: mteb/flores config: ory_Orya-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.82608695652173 - type: f1 value: 97.27437417654808 - type: main_score value: 97.27437417654808 - type: precision value: 97.04968944099377 - type: recall value: 97.82608695652173 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (sna_Latn-rus_Cyrl) type: mteb/flores config: sna_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 85.37549407114624 - type: f1 value: 83.09911316305177 - type: main_score value: 83.09911316305177 - type: precision value: 82.1284950958864 - type: recall value: 85.37549407114624 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (tso_Latn-rus_Cyrl) type: mteb/flores config: tso_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 82.90513833992095 - type: f1 value: 80.28290385503824 - type: main_score value: 80.28290385503824 - type: precision value: 79.23672543237761 - type: recall value: 82.90513833992095 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (azj_Latn-rus_Cyrl) type: mteb/flores config: azj_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.02371541501977 - type: f1 value: 97.49200075287031 - type: main_score value: 97.49200075287031 - type: precision value: 97.266139657444 - type: recall value: 98.02371541501977 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (dik_Latn-rus_Cyrl) type: mteb/flores config: dik_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 38.43873517786561 - type: f1 value: 35.78152442955223 - type: main_score value: 35.78152442955223 - type: precision value: 34.82424325078237 - type: recall value: 38.43873517786561 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (hau_Latn-rus_Cyrl) type: mteb/flores config: hau_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 81.42292490118577 - type: f1 value: 79.24612283124593 - type: main_score value: 79.24612283124593 - type: precision value: 78.34736070751448 - type: recall value: 81.42292490118577 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (kea_Latn-rus_Cyrl) type: mteb/flores config: kea_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 81.62055335968378 - type: f1 value: 80.47015182884748 - type: main_score value: 80.47015182884748 - type: precision value: 80.02671028885862 - type: recall value: 81.62055335968378 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (lus_Latn-rus_Cyrl) type: mteb/flores config: lus_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 62.74703557312253 - type: f1 value: 60.53900079111122 - type: main_score value: 60.53900079111122 - type: precision value: 59.80024202850289 - type: recall value: 62.74703557312253 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (pag_Latn-rus_Cyrl) type: mteb/flores config: pag_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 74.01185770750988 - type: f1 value: 72.57280648279529 - type: main_score value: 72.57280648279529 - type: precision value: 71.99952968456789 - type: recall value: 74.01185770750988 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (snd_Arab-rus_Cyrl) type: mteb/flores config: snd_Arab-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 91.30434782608695 - type: f1 value: 90.24653499445358 - type: main_score value: 90.24653499445358 - type: precision value: 89.83134068200232 - type: recall value: 91.30434782608695 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (tuk_Latn-rus_Cyrl) type: mteb/flores config: tuk_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 47.62845849802372 - type: f1 value: 45.812928836644254 - type: main_score value: 45.812928836644254 - type: precision value: 45.23713833170355 - type: recall value: 47.62845849802372 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (bak_Cyrl-rus_Cyrl) type: mteb/flores config: bak_Cyrl-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 95.8498023715415 - type: f1 value: 95.18904459615922 - type: main_score value: 95.18904459615922 - type: precision value: 94.92812441182006 - type: recall value: 95.8498023715415 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (dyu_Latn-rus_Cyrl) type: mteb/flores config: dyu_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 29.64426877470356 - type: f1 value: 27.287335193938166 - type: main_score value: 27.287335193938166 - type: precision value: 26.583996026587492 - type: recall value: 29.64426877470356 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (heb_Hebr-rus_Cyrl) type: mteb/flores config: heb_Hebr-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 98.91304347826086 - type: f1 value: 98.55072463768116 - type: main_score value: 98.55072463768116 - type: precision value: 98.36956521739131 - type: recall value: 98.91304347826086 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (khk_Cyrl-rus_Cyrl) type: mteb/flores config: khk_Cyrl-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 95.15810276679841 - type: f1 value: 94.44009547764487 - type: main_score value: 94.44009547764487 - type: precision value: 94.16579797014579 - type: recall value: 95.15810276679841 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (lvs_Latn-rus_Cyrl) type: mteb/flores config: lvs_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.92490118577075 - type: f1 value: 97.51467241585817 - type: main_score value: 97.51467241585817 - type: precision value: 97.36166007905138 - type: recall value: 97.92490118577075 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (pan_Guru-rus_Cyrl) type: mteb/flores config: pan_Guru-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 97.92490118577075 - type: f1 value: 97.42918313570486 - type: main_score value: 97.42918313570486 - type: precision value: 97.22261434217955 - type: recall value: 97.92490118577075 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (som_Latn-rus_Cyrl) type: mteb/flores config: som_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 75.69169960474308 - type: f1 value: 73.7211667065916 - type: main_score value: 73.7211667065916 - type: precision value: 72.95842401892384 - type: recall value: 75.69169960474308 - task: type: BitextMining dataset: name: MTEB FloresBitextMining (tum_Latn-rus_Cyrl) type: mteb/flores config: tum_Latn-rus_Cyrl split: devtest revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e metrics: - type: accuracy value: 85.67193675889328 - type: f1 value: 82.9296066252588 - type: main_score value: 82.9296066252588 - type: precision value: 81.77330225447936 - type: recall value: 85.67193675889328 - task: type: Classification dataset: name: MTEB GeoreviewClassification (default) type: ai-forever/georeview-classification config: default split: test revision: 3765c0d1de6b7d264bc459433c45e5a75513839c metrics: - type: accuracy value: 44.6630859375 - type: f1 value: 42.607425073610536 - type: f1_weighted value: 42.60639474586065 - type: main_score value: 44.6630859375 - task: type: Clustering dataset: name: MTEB GeoreviewClusteringP2P (default) type: ai-forever/georeview-clustering-p2p config: default split: test revision: 97a313c8fc85b47f13f33e7e9a95c1ad888c7fec metrics: - type: main_score value: 58.15951247070825 - type: v_measure value: 58.15951247070825 - type: v_measure_std value: 0.6739615788288809 - task: type: Classification dataset: name: MTEB HeadlineClassification (default) type: ai-forever/headline-classification config: default split: test revision: 2fe05ee6b5832cda29f2ef7aaad7b7fe6a3609eb metrics: - type: accuracy value: 73.935546875 - type: f1 value: 73.8654872186846 - type: f1_weighted value: 73.86733122685095 - type: main_score value: 73.935546875 - task: type: Classification dataset: name: MTEB InappropriatenessClassification (default) type: ai-forever/inappropriateness-classification config: default split: test revision: 601651fdc45ef243751676e62dd7a19f491c0285 metrics: - type: accuracy value: 59.16015624999999 - type: ap value: 55.52276605836938 - type: ap_weighted value: 55.52276605836938 - type: f1 value: 58.614248199637956 - type: f1_weighted value: 58.614248199637956 - type: main_score value: 59.16015624999999 - task: type: Classification dataset: name: MTEB KinopoiskClassification (default) type: ai-forever/kinopoisk-sentiment-classification config: default split: test revision: 5911f26666ac11af46cb9c6849d0dc80a378af24 metrics: - type: accuracy value: 49.959999999999994 - type: f1 value: 48.4900332316098 - type: f1_weighted value: 48.4900332316098 - type: main_score value: 49.959999999999994 - task: type: Classification dataset: name: MTEB LanguageClassification (default) type: papluca/language-identification config: default split: test revision: aa56583bf2bc52b0565770607d6fc3faebecf9e2 metrics: - type: accuracy value: 71.005859375 - type: f1 value: 69.63481100303348 - type: f1_weighted value: 69.64640413409529 - type: main_score value: 71.005859375 - task: type: Clustering dataset: name: MTEB MLSUMClusteringP2P (ru) type: reciTAL/mlsum config: ru split: test revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7 metrics: - type: main_score value: 42.11280087032343 - type: v_measure value: 42.11280087032343 - type: v_measure_std value: 6.7619971723605135 - type: main_score value: 43.00112546945811 - type: v_measure value: 43.00112546945811 - type: v_measure_std value: 1.4740560414835675 - type: main_score value: 39.81446080575161 - type: v_measure value: 39.81446080575161 - type: v_measure_std value: 7.125661320308298 - type: main_score value: 39.29659668980239 - type: v_measure value: 39.29659668980239 - type: v_measure_std value: 2.6570502923023094 - task: type: Retrieval dataset: name: MTEB MultiLongDocRetrieval (ru) type: Shitao/MLDR config: ru split: dev revision: d67138e705d963e346253a80e59676ddb418810a metrics: - type: main_score value: 38.671 - type: map_at_1 value: 30.0 - type: map_at_10 value: 36.123 - type: map_at_100 value: 36.754999999999995 - type: map_at_1000 value: 36.806 - type: map_at_20 value: 36.464 - type: map_at_3 value: 35.25 - type: map_at_5 value: 35.8 - type: mrr_at_1 value: 30.0 - type: mrr_at_10 value: 36.122817460317464 - type: mrr_at_100 value: 36.75467016625293 - type: mrr_at_1000 value: 36.80612724920882 - type: mrr_at_20 value: 36.46359681984682 - type: mrr_at_3 value: 35.25 - type: mrr_at_5 value: 35.800000000000004 - type: nauc_map_at_1000_diff1 value: 55.61987610843598 - type: nauc_map_at_1000_max value: 52.506795017152186 - type: nauc_map_at_1000_std value: 2.95487192066911 - type: nauc_map_at_100_diff1 value: 55.598419532054734 - type: nauc_map_at_100_max value: 52.48192017040307 - type: nauc_map_at_100_std value: 2.930120252521189 - type: nauc_map_at_10_diff1 value: 56.02309155375198 - type: nauc_map_at_10_max value: 52.739573233234424 - type: nauc_map_at_10_std value: 2.4073432421641545 - type: nauc_map_at_1_diff1 value: 52.57059856776112 - type: nauc_map_at_1_max value: 50.55668152952304 - type: nauc_map_at_1_std value: 1.6572084853398048 - type: nauc_map_at_20_diff1 value: 55.75769029917031 - type: nauc_map_at_20_max value: 52.53663737242853 - type: nauc_map_at_20_std value: 2.8489192879814 - type: nauc_map_at_3_diff1 value: 56.90294128342709 - type: nauc_map_at_3_max value: 53.10608389782041 - type: nauc_map_at_3_std value: 1.4909731657889491 - type: nauc_map_at_5_diff1 value: 56.1258315436073 - type: nauc_map_at_5_max value: 52.398078357541564 - type: nauc_map_at_5_std value: 1.8256862015101467 - type: nauc_mrr_at_1000_diff1 value: 55.61987610843598 - type: nauc_mrr_at_1000_max value: 52.506795017152186 - type: nauc_mrr_at_1000_std value: 2.95487192066911 - type: nauc_mrr_at_100_diff1 value: 55.598419532054734 - type: nauc_mrr_at_100_max value: 52.48192017040307 - type: nauc_mrr_at_100_std value: 2.930120252521189 - type: nauc_mrr_at_10_diff1 value: 56.02309155375198 - type: nauc_mrr_at_10_max value: 52.739573233234424 - type: nauc_mrr_at_10_std value: 2.4073432421641545 - type: nauc_mrr_at_1_diff1 value: 52.57059856776112 - type: nauc_mrr_at_1_max value: 50.55668152952304 - type: nauc_mrr_at_1_std value: 1.6572084853398048 - type: nauc_mrr_at_20_diff1 value: 55.75769029917031 - type: nauc_mrr_at_20_max value: 52.53663737242853 - type: nauc_mrr_at_20_std value: 2.8489192879814 - type: nauc_mrr_at_3_diff1 value: 56.90294128342709 - type: nauc_mrr_at_3_max value: 53.10608389782041 - type: nauc_mrr_at_3_std value: 1.4909731657889491 - type: nauc_mrr_at_5_diff1 value: 56.1258315436073 - type: nauc_mrr_at_5_max value: 52.398078357541564 - type: nauc_mrr_at_5_std value: 1.8256862015101467 - type: nauc_ndcg_at_1000_diff1 value: 55.30733548408918 - type: nauc_ndcg_at_1000_max value: 53.51143366189318 - type: nauc_ndcg_at_1000_std value: 7.133789405525702 - type: nauc_ndcg_at_100_diff1 value: 54.32209039488095 - type: nauc_ndcg_at_100_max value: 52.67499334461009 - type: nauc_ndcg_at_100_std value: 6.878823275077807 - type: nauc_ndcg_at_10_diff1 value: 56.266780806997716 - type: nauc_ndcg_at_10_max value: 53.52837255793743 - type: nauc_ndcg_at_10_std value: 3.756832592964262 - type: nauc_ndcg_at_1_diff1 value: 52.57059856776112 - type: nauc_ndcg_at_1_max value: 50.55668152952304 - type: nauc_ndcg_at_1_std value: 1.6572084853398048 - type: nauc_ndcg_at_20_diff1 value: 55.39255420432796 - type: nauc_ndcg_at_20_max value: 52.946114684072235 - type: nauc_ndcg_at_20_std value: 5.414933414031693 - type: nauc_ndcg_at_3_diff1 value: 57.92826624996289 - type: nauc_ndcg_at_3_max value: 53.89907760306972 - type: nauc_ndcg_at_3_std value: 1.6661401245309218 - type: nauc_ndcg_at_5_diff1 value: 56.47508936029308 - type: nauc_ndcg_at_5_max value: 52.66800998045517 - type: nauc_ndcg_at_5_std value: 2.4127296184140423 - type: nauc_precision_at_1000_diff1 value: 57.25924020238401 - type: nauc_precision_at_1000_max value: 65.1132590931922 - type: nauc_precision_at_1000_std value: 40.60788709618145 - type: nauc_precision_at_100_diff1 value: 46.49620002554606 - type: nauc_precision_at_100_max value: 53.02960148167071 - type: nauc_precision_at_100_std value: 28.206028867032863 - type: nauc_precision_at_10_diff1 value: 56.562744749606765 - type: nauc_precision_at_10_max value: 56.00594967783547 - type: nauc_precision_at_10_std value: 8.368379831645163 - type: nauc_precision_at_1_diff1 value: 52.57059856776112 - type: nauc_precision_at_1_max value: 50.55668152952304 - type: nauc_precision_at_1_std value: 1.6572084853398048 - type: nauc_precision_at_20_diff1 value: 53.25915754614111 - type: nauc_precision_at_20_max value: 54.03255118937036 - type: nauc_precision_at_20_std value: 15.161611674272718 - type: nauc_precision_at_3_diff1 value: 60.726785748943854 - type: nauc_precision_at_3_max value: 56.139896875869354 - type: nauc_precision_at_3_std value: 2.2306901035769893 - type: nauc_precision_at_5_diff1 value: 57.1201127525187 - type: nauc_precision_at_5_max value: 53.28665761862506 - type: nauc_precision_at_5_std value: 4.358720050112237 - type: nauc_recall_at_1000_diff1 value: 57.259240202383964 - type: nauc_recall_at_1000_max value: 65.11325909319218 - type: nauc_recall_at_1000_std value: 40.60788709618142 - type: nauc_recall_at_100_diff1 value: 46.49620002554603 - type: nauc_recall_at_100_max value: 53.02960148167071 - type: nauc_recall_at_100_std value: 28.206028867032835 - type: nauc_recall_at_10_diff1 value: 56.562744749606765 - type: nauc_recall_at_10_max value: 56.00594967783549 - type: nauc_recall_at_10_std value: 8.368379831645147 - type: nauc_recall_at_1_diff1 value: 52.57059856776112 - type: nauc_recall_at_1_max value: 50.55668152952304 - type: nauc_recall_at_1_std value: 1.6572084853398048 - type: nauc_recall_at_20_diff1 value: 53.259157546141154 - type: nauc_recall_at_20_max value: 54.03255118937038 - type: nauc_recall_at_20_std value: 15.16161167427274 - type: nauc_recall_at_3_diff1 value: 60.72678574894387 - type: nauc_recall_at_3_max value: 56.13989687586933 - type: nauc_recall_at_3_std value: 2.2306901035770066 - type: nauc_recall_at_5_diff1 value: 57.12011275251864 - type: nauc_recall_at_5_max value: 53.28665761862502 - type: nauc_recall_at_5_std value: 4.3587200501122245 - type: ndcg_at_1 value: 30.0 - type: ndcg_at_10 value: 38.671 - type: ndcg_at_100 value: 42.173 - type: ndcg_at_1000 value: 44.016 - type: ndcg_at_20 value: 39.845000000000006 - type: ndcg_at_3 value: 36.863 - type: ndcg_at_5 value: 37.874 - type: precision_at_1 value: 30.0 - type: precision_at_10 value: 4.65 - type: precision_at_100 value: 0.64 - type: precision_at_1000 value: 0.08 - type: precision_at_20 value: 2.55 - type: precision_at_3 value: 13.833 - type: precision_at_5 value: 8.799999999999999 - type: recall_at_1 value: 30.0 - type: recall_at_10 value: 46.5 - type: recall_at_100 value: 64.0 - type: recall_at_1000 value: 79.5 - type: recall_at_20 value: 51.0 - type: recall_at_3 value: 41.5 - type: recall_at_5 value: 44.0 - task: type: Classification dataset: name: MTEB MultilingualSentimentClassification (rus) type: mteb/multilingual-sentiment-classification config: rus split: test revision: 2b9b4d10fc589af67794141fe8cbd3739de1eb33 metrics: - type: accuracy value: 79.52710495963092 - type: ap value: 84.5713457178972 - type: ap_weighted value: 84.5713457178972 - type: f1 value: 77.88661181524105 - type: f1_weighted value: 79.87563079922718 - type: main_score value: 79.52710495963092 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (arb_Arab-rus_Cyrl) type: mteb/NTREX config: arb_Arab-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 86.47971957936905 - type: f1 value: 82.79864240805654 - type: main_score value: 82.79864240805654 - type: precision value: 81.21485800128767 - type: recall value: 86.47971957936905 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (bel_Cyrl-rus_Cyrl) type: mteb/NTREX config: bel_Cyrl-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 94.84226339509264 - type: f1 value: 93.56399067465667 - type: main_score value: 93.56399067465667 - type: precision value: 93.01619095309631 - type: recall value: 94.84226339509264 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (ben_Beng-rus_Cyrl) type: mteb/NTREX config: ben_Beng-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 92.18828242363544 - type: f1 value: 90.42393889620612 - type: main_score value: 90.42393889620612 - type: precision value: 89.67904925153297 - type: recall value: 92.18828242363544 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (bos_Latn-rus_Cyrl) type: mteb/NTREX config: bos_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 94.69203805708563 - type: f1 value: 93.37172425304624 - type: main_score value: 93.37172425304624 - type: precision value: 92.79204521067315 - type: recall value: 94.69203805708563 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (bul_Cyrl-rus_Cyrl) type: mteb/NTREX config: bul_Cyrl-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 96.99549323985978 - type: f1 value: 96.13086296110833 - type: main_score value: 96.13086296110833 - type: precision value: 95.72441996327827 - type: recall value: 96.99549323985978 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (ces_Latn-rus_Cyrl) type: mteb/NTREX config: ces_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 95.94391587381071 - type: f1 value: 94.90680465142157 - type: main_score value: 94.90680465142157 - type: precision value: 94.44541812719079 - type: recall value: 95.94391587381071 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (deu_Latn-rus_Cyrl) type: mteb/NTREX config: deu_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 96.09414121181773 - type: f1 value: 94.94408279085295 - type: main_score value: 94.94408279085295 - type: precision value: 94.41245201135037 - type: recall value: 96.09414121181773 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (ell_Grek-rus_Cyrl) type: mteb/NTREX config: ell_Grek-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 96.19429143715573 - type: f1 value: 95.12101485561676 - type: main_score value: 95.12101485561676 - type: precision value: 94.60440660991488 - type: recall value: 96.19429143715573 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (eng_Latn-rus_Cyrl) type: mteb/NTREX config: eng_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 96.49474211316975 - type: f1 value: 95.46581777428045 - type: main_score value: 95.46581777428045 - type: precision value: 94.98414288098814 - type: recall value: 96.49474211316975 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (fas_Arab-rus_Cyrl) type: mteb/NTREX config: fas_Arab-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 94.44166249374061 - type: f1 value: 92.92383018972905 - type: main_score value: 92.92383018972905 - type: precision value: 92.21957936905358 - type: recall value: 94.44166249374061 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (fin_Latn-rus_Cyrl) type: mteb/NTREX config: fin_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 92.18828242363544 - type: f1 value: 90.2980661468393 - type: main_score value: 90.2980661468393 - type: precision value: 89.42580537472877 - type: recall value: 92.18828242363544 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (fra_Latn-rus_Cyrl) type: mteb/NTREX config: fra_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 95.84376564847271 - type: f1 value: 94.81054915706895 - type: main_score value: 94.81054915706895 - type: precision value: 94.31369276136427 - type: recall value: 95.84376564847271 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (heb_Hebr-rus_Cyrl) type: mteb/NTREX config: heb_Hebr-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 94.89233850776164 - type: f1 value: 93.42513770655985 - type: main_score value: 93.42513770655985 - type: precision value: 92.73493573693875 - type: recall value: 94.89233850776164 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (hin_Deva-rus_Cyrl) type: mteb/NTREX config: hin_Deva-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 93.23985978968453 - type: f1 value: 91.52816526376867 - type: main_score value: 91.52816526376867 - type: precision value: 90.76745946425466 - type: recall value: 93.23985978968453 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (hrv_Latn-rus_Cyrl) type: mteb/NTREX config: hrv_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 93.99098647971958 - type: f1 value: 92.36354531797697 - type: main_score value: 92.36354531797697 - type: precision value: 91.63228970439788 - type: recall value: 93.99098647971958 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (hun_Latn-rus_Cyrl) type: mteb/NTREX config: hun_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 93.64046069103655 - type: f1 value: 92.05224503421799 - type: main_score value: 92.05224503421799 - type: precision value: 91.33998616973079 - type: recall value: 93.64046069103655 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (ind_Latn-rus_Cyrl) type: mteb/NTREX config: ind_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 91.68753129694541 - type: f1 value: 89.26222667334335 - type: main_score value: 89.26222667334335 - type: precision value: 88.14638624603572 - type: recall value: 91.68753129694541 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (jpn_Jpan-rus_Cyrl) type: mteb/NTREX config: jpn_Jpan-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 91.28693039559339 - type: f1 value: 89.21161763348957 - type: main_score value: 89.21161763348957 - type: precision value: 88.31188340952988 - type: recall value: 91.28693039559339 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (kor_Hang-rus_Cyrl) type: mteb/NTREX config: kor_Hang-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 89.53430145217827 - type: f1 value: 86.88322165788365 - type: main_score value: 86.88322165788365 - type: precision value: 85.73950211030831 - type: recall value: 89.53430145217827 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (lit_Latn-rus_Cyrl) type: mteb/NTREX config: lit_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 90.28542814221332 - type: f1 value: 88.10249103814452 - type: main_score value: 88.10249103814452 - type: precision value: 87.17689323973752 - type: recall value: 90.28542814221332 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (mkd_Cyrl-rus_Cyrl) type: mteb/NTREX config: mkd_Cyrl-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 95.04256384576865 - type: f1 value: 93.65643703650713 - type: main_score value: 93.65643703650713 - type: precision value: 93.02036387915207 - type: recall value: 95.04256384576865 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (nld_Latn-rus_Cyrl) type: mteb/NTREX config: nld_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 95.39308963445168 - type: f1 value: 94.16207644800535 - type: main_score value: 94.16207644800535 - type: precision value: 93.582516632091 - type: recall value: 95.39308963445168 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (pol_Latn-rus_Cyrl) type: mteb/NTREX config: pol_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 95.7436154231347 - type: f1 value: 94.5067601402103 - type: main_score value: 94.5067601402103 - type: precision value: 93.91587381071608 - type: recall value: 95.7436154231347 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (por_Latn-rus_Cyrl) type: mteb/NTREX config: por_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 65.89884827240861 - type: f1 value: 64.61805459419219 - type: main_score value: 64.61805459419219 - type: precision value: 64.07119451106485 - type: recall value: 65.89884827240861 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-arb_Arab) type: mteb/NTREX config: rus_Cyrl-arb_Arab split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 94.2413620430646 - type: f1 value: 92.67663399861698 - type: main_score value: 92.67663399861698 - type: precision value: 91.94625271240193 - type: recall value: 94.2413620430646 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-bel_Cyrl) type: mteb/NTREX config: rus_Cyrl-bel_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 94.89233850776164 - type: f1 value: 93.40343849106993 - type: main_score value: 93.40343849106993 - type: precision value: 92.74077783341679 - type: recall value: 94.89233850776164 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-ben_Beng) type: mteb/NTREX config: rus_Cyrl-ben_Beng split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 94.2914371557336 - type: f1 value: 92.62226673343348 - type: main_score value: 92.62226673343348 - type: precision value: 91.84610248706393 - type: recall value: 94.2914371557336 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-bos_Latn) type: mteb/NTREX config: rus_Cyrl-bos_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 95.69354031046569 - type: f1 value: 94.50418051319403 - type: main_score value: 94.50418051319403 - type: precision value: 93.95843765648473 - type: recall value: 95.69354031046569 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-bul_Cyrl) type: mteb/NTREX config: rus_Cyrl-bul_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 95.89384076114172 - type: f1 value: 94.66199298948423 - type: main_score value: 94.66199298948423 - type: precision value: 94.08028709731263 - type: recall value: 95.89384076114172 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-ces_Latn) type: mteb/NTREX config: rus_Cyrl-ces_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 93.94091136705057 - type: f1 value: 92.3746731207923 - type: main_score value: 92.3746731207923 - type: precision value: 91.66207644800535 - type: recall value: 93.94091136705057 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-deu_Latn) type: mteb/NTREX config: rus_Cyrl-deu_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 95.94391587381071 - type: f1 value: 94.76214321482223 - type: main_score value: 94.76214321482223 - type: precision value: 94.20380570856285 - type: recall value: 95.94391587381071 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-ell_Grek) type: mteb/NTREX config: rus_Cyrl-ell_Grek split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 95.44316474712068 - type: f1 value: 94.14788849941579 - type: main_score value: 94.14788849941579 - type: precision value: 93.54197963612084 - type: recall value: 95.44316474712068 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-eng_Latn) type: mteb/NTREX config: rus_Cyrl-eng_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 98.14722083124687 - type: f1 value: 97.57135703555333 - type: main_score value: 97.57135703555333 - type: precision value: 97.2959439158738 - type: recall value: 98.14722083124687 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-fas_Arab) type: mteb/NTREX config: rus_Cyrl-fas_Arab split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 94.64196294441662 - type: f1 value: 93.24653647137372 - type: main_score value: 93.24653647137372 - type: precision value: 92.60724419963279 - type: recall value: 94.64196294441662 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-fin_Latn) type: mteb/NTREX config: rus_Cyrl-fin_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 87.98197295943916 - type: f1 value: 85.23368385912201 - type: main_score value: 85.23368385912201 - type: precision value: 84.08159858835873 - type: recall value: 87.98197295943916 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-fra_Latn) type: mteb/NTREX config: rus_Cyrl-fra_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 96.24436654982473 - type: f1 value: 95.07093974294774 - type: main_score value: 95.07093974294774 - type: precision value: 94.49591053246536 - type: recall value: 96.24436654982473 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-heb_Hebr) type: mteb/NTREX config: rus_Cyrl-heb_Hebr split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 91.08662994491738 - type: f1 value: 88.5161074945752 - type: main_score value: 88.5161074945752 - type: precision value: 87.36187614755467 - type: recall value: 91.08662994491738 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-hin_Deva) type: mteb/NTREX config: rus_Cyrl-hin_Deva split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 95.04256384576865 - type: f1 value: 93.66382907694876 - type: main_score value: 93.66382907694876 - type: precision value: 93.05291270238692 - type: recall value: 95.04256384576865 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-hrv_Latn) type: mteb/NTREX config: rus_Cyrl-hrv_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 95.14271407110667 - type: f1 value: 93.7481221832749 - type: main_score value: 93.7481221832749 - type: precision value: 93.10930681736892 - type: recall value: 95.14271407110667 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-hun_Latn) type: mteb/NTREX config: rus_Cyrl-hun_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 90.18527791687532 - type: f1 value: 87.61415933423946 - type: main_score value: 87.61415933423946 - type: precision value: 86.5166400394242 - type: recall value: 90.18527791687532 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-ind_Latn) type: mteb/NTREX config: rus_Cyrl-ind_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 93.69053580370556 - type: f1 value: 91.83608746453012 - type: main_score value: 91.83608746453012 - type: precision value: 90.97145718577868 - type: recall value: 93.69053580370556 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-jpn_Jpan) type: mteb/NTREX config: rus_Cyrl-jpn_Jpan split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 89.48422633950926 - type: f1 value: 86.91271033534429 - type: main_score value: 86.91271033534429 - type: precision value: 85.82671626487351 - type: recall value: 89.48422633950926 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-kor_Hang) type: mteb/NTREX config: rus_Cyrl-kor_Hang split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 88.4827240861292 - type: f1 value: 85.35080398375342 - type: main_score value: 85.35080398375342 - type: precision value: 83.9588549490903 - type: recall value: 88.4827240861292 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-lit_Latn) type: mteb/NTREX config: rus_Cyrl-lit_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 90.33550325488233 - type: f1 value: 87.68831819157307 - type: main_score value: 87.68831819157307 - type: precision value: 86.51524906407231 - type: recall value: 90.33550325488233 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-mkd_Cyrl) type: mteb/NTREX config: rus_Cyrl-mkd_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 95.94391587381071 - type: f1 value: 94.90402270071775 - type: main_score value: 94.90402270071775 - type: precision value: 94.43915873810715 - type: recall value: 95.94391587381071 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-nld_Latn) type: mteb/NTREX config: rus_Cyrl-nld_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 92.98948422633951 - type: f1 value: 91.04323151393756 - type: main_score value: 91.04323151393756 - type: precision value: 90.14688699716241 - type: recall value: 92.98948422633951 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-pol_Latn) type: mteb/NTREX config: rus_Cyrl-pol_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 94.34151226840261 - type: f1 value: 92.8726422967785 - type: main_score value: 92.8726422967785 - type: precision value: 92.19829744616925 - type: recall value: 94.34151226840261 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-por_Latn) type: mteb/NTREX config: rus_Cyrl-por_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 86.17926890335504 - type: f1 value: 82.7304882287356 - type: main_score value: 82.7304882287356 - type: precision value: 81.28162481817964 - type: recall value: 86.17926890335504 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-slk_Latn) type: mteb/NTREX config: rus_Cyrl-slk_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 92.7391086629945 - type: f1 value: 90.75112669003506 - type: main_score value: 90.75112669003506 - type: precision value: 89.8564513436822 - type: recall value: 92.7391086629945 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-slv_Latn) type: mteb/NTREX config: rus_Cyrl-slv_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 92.8893340010015 - type: f1 value: 91.05992321816058 - type: main_score value: 91.05992321816058 - type: precision value: 90.22589439715128 - type: recall value: 92.8893340010015 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-spa_Latn) type: mteb/NTREX config: rus_Cyrl-spa_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 96.49474211316975 - type: f1 value: 95.4715406442998 - type: main_score value: 95.4715406442998 - type: precision value: 94.9799699549324 - type: recall value: 96.49474211316975 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-srp_Cyrl) type: mteb/NTREX config: rus_Cyrl-srp_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 81.07160741111667 - type: f1 value: 76.55687285507015 - type: main_score value: 76.55687285507015 - type: precision value: 74.71886401030116 - type: recall value: 81.07160741111667 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-srp_Latn) type: mteb/NTREX config: rus_Cyrl-srp_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 95.14271407110667 - type: f1 value: 93.73302377809138 - type: main_score value: 93.73302377809138 - type: precision value: 93.06960440660991 - type: recall value: 95.14271407110667 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-swa_Latn) type: mteb/NTREX config: rus_Cyrl-swa_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 94.79218828242364 - type: f1 value: 93.25988983475212 - type: main_score value: 93.25988983475212 - type: precision value: 92.53463528626273 - type: recall value: 94.79218828242364 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-swe_Latn) type: mteb/NTREX config: rus_Cyrl-swe_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 95.04256384576865 - type: f1 value: 93.58704723752295 - type: main_score value: 93.58704723752295 - type: precision value: 92.91437155733601 - type: recall value: 95.04256384576865 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-tam_Taml) type: mteb/NTREX config: rus_Cyrl-tam_Taml split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 93.28993490235354 - type: f1 value: 91.63912535469872 - type: main_score value: 91.63912535469872 - type: precision value: 90.87738750983617 - type: recall value: 93.28993490235354 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-tur_Latn) type: mteb/NTREX config: rus_Cyrl-tur_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 93.74061091637456 - type: f1 value: 91.96628275746953 - type: main_score value: 91.96628275746953 - type: precision value: 91.15923885828742 - type: recall value: 93.74061091637456 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-ukr_Cyrl) type: mteb/NTREX config: rus_Cyrl-ukr_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 95.99399098647972 - type: f1 value: 94.89567684860624 - type: main_score value: 94.89567684860624 - type: precision value: 94.37072275079286 - type: recall value: 95.99399098647972 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-vie_Latn) type: mteb/NTREX config: rus_Cyrl-vie_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 91.4371557336004 - type: f1 value: 88.98681355366382 - type: main_score value: 88.98681355366382 - type: precision value: 87.89183775663496 - type: recall value: 91.4371557336004 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-zho_Hant) type: mteb/NTREX config: rus_Cyrl-zho_Hant split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 92.7891837756635 - type: f1 value: 90.79047142141783 - type: main_score value: 90.79047142141783 - type: precision value: 89.86980470706058 - type: recall value: 92.7891837756635 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (rus_Cyrl-zul_Latn) type: mteb/NTREX config: rus_Cyrl-zul_Latn split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 87.43114672008012 - type: f1 value: 84.04618833011422 - type: main_score value: 84.04618833011422 - type: precision value: 82.52259341393041 - type: recall value: 87.43114672008012 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (slk_Latn-rus_Cyrl) type: mteb/NTREX config: slk_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 95.34301452178268 - type: f1 value: 94.20392493502158 - type: main_score value: 94.20392493502158 - type: precision value: 93.67384409948257 - type: recall value: 95.34301452178268 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (slv_Latn-rus_Cyrl) type: mteb/NTREX config: slv_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 92.23835753630446 - type: f1 value: 90.5061759305625 - type: main_score value: 90.5061759305625 - type: precision value: 89.74231188051918 - type: recall value: 92.23835753630446 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (spa_Latn-rus_Cyrl) type: mteb/NTREX config: spa_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 96.54481722583876 - type: f1 value: 95.54665331330328 - type: main_score value: 95.54665331330328 - type: precision value: 95.06342847604739 - type: recall value: 96.54481722583876 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (srp_Cyrl-rus_Cyrl) type: mteb/NTREX config: srp_Cyrl-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 83.62543815723585 - type: f1 value: 80.77095672699816 - type: main_score value: 80.77095672699816 - type: precision value: 79.74674313056886 - type: recall value: 83.62543815723585 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (srp_Latn-rus_Cyrl) type: mteb/NTREX config: srp_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 94.44166249374061 - type: f1 value: 93.00733206591994 - type: main_score value: 93.00733206591994 - type: precision value: 92.37203026762366 - type: recall value: 94.44166249374061 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (swa_Latn-rus_Cyrl) type: mteb/NTREX config: swa_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 90.23535302954431 - type: f1 value: 87.89596482636041 - type: main_score value: 87.89596482636041 - type: precision value: 86.87060227370694 - type: recall value: 90.23535302954431 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (swe_Latn-rus_Cyrl) type: mteb/NTREX config: swe_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 95.44316474712068 - type: f1 value: 94.1896177599733 - type: main_score value: 94.1896177599733 - type: precision value: 93.61542313470206 - type: recall value: 95.44316474712068 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (tam_Taml-rus_Cyrl) type: mteb/NTREX config: tam_Taml-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 89.68452679018529 - type: f1 value: 87.37341160650037 - type: main_score value: 87.37341160650037 - type: precision value: 86.38389402285247 - type: recall value: 89.68452679018529 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (tur_Latn-rus_Cyrl) type: mteb/NTREX config: tur_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 93.89083625438157 - type: f1 value: 92.33892505424804 - type: main_score value: 92.33892505424804 - type: precision value: 91.63125640842216 - type: recall value: 93.89083625438157 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (ukr_Cyrl-rus_Cyrl) type: mteb/NTREX config: ukr_Cyrl-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 96.14421632448673 - type: f1 value: 95.11028447433054 - type: main_score value: 95.11028447433054 - type: precision value: 94.62944416624937 - type: recall value: 96.14421632448673 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (vie_Latn-rus_Cyrl) type: mteb/NTREX config: vie_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 93.79068602904357 - type: f1 value: 92.14989150392256 - type: main_score value: 92.14989150392256 - type: precision value: 91.39292271740945 - type: recall value: 93.79068602904357 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (zho_Hant-rus_Cyrl) type: mteb/NTREX config: zho_Hant-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 89.13370055082625 - type: f1 value: 86.51514618639217 - type: main_score value: 86.51514618639217 - type: precision value: 85.383920035898 - type: recall value: 89.13370055082625 - task: type: BitextMining dataset: name: MTEB NTREXBitextMining (zul_Latn-rus_Cyrl) type: mteb/NTREX config: zul_Latn-rus_Cyrl split: test revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 metrics: - type: accuracy value: 81.17175763645467 - type: f1 value: 77.72331766047338 - type: main_score value: 77.72331766047338 - type: precision value: 76.24629555848075 - type: recall value: 81.17175763645467 - task: type: PairClassification dataset: name: MTEB OpusparcusPC (ru) type: GEM/opusparcus config: ru split: test.full revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a metrics: - type: cosine_accuracy value: 73.09136420525657 - type: cosine_accuracy_threshold value: 87.70400881767273 - type: cosine_ap value: 86.51938550599533 - type: cosine_f1 value: 80.84358523725834 - type: cosine_f1_threshold value: 86.90648078918457 - type: cosine_precision value: 73.24840764331209 - type: cosine_recall value: 90.19607843137256 - type: dot_accuracy value: 73.09136420525657 - type: dot_accuracy_threshold value: 87.7040147781372 - type: dot_ap value: 86.51934769946833 - type: dot_f1 value: 80.84358523725834 - type: dot_f1_threshold value: 86.90648078918457 - type: dot_precision value: 73.24840764331209 - type: dot_recall value: 90.19607843137256 - type: euclidean_accuracy value: 73.09136420525657 - type: euclidean_accuracy_threshold value: 49.590304493904114 - type: euclidean_ap value: 86.51934769946833 - type: euclidean_f1 value: 80.84358523725834 - type: euclidean_f1_threshold value: 51.173269748687744 - type: euclidean_precision value: 73.24840764331209 - type: euclidean_recall value: 90.19607843137256 - type: main_score value: 86.51976811057995 - type: manhattan_accuracy value: 73.40425531914893 - type: manhattan_accuracy_threshold value: 757.8278541564941 - type: manhattan_ap value: 86.51976811057995 - type: manhattan_f1 value: 80.92898615453328 - type: manhattan_f1_threshold value: 778.3821105957031 - type: manhattan_precision value: 74.32321575061526 - type: manhattan_recall value: 88.8235294117647 - type: max_ap value: 86.51976811057995 - type: max_f1 value: 80.92898615453328 - type: max_precision value: 74.32321575061526 - type: max_recall value: 90.19607843137256 - type: similarity_accuracy value: 73.09136420525657 - type: similarity_accuracy_threshold value: 87.70400881767273 - type: similarity_ap value: 86.51938550599533 - type: similarity_f1 value: 80.84358523725834 - type: similarity_f1_threshold value: 86.90648078918457 - type: similarity_precision value: 73.24840764331209 - type: similarity_recall value: 90.19607843137256 - task: type: Retrieval dataset: name: MTEB PublicHealthQA (russian) type: xhluca/publichealth-qa config: russian split: test revision: main metrics: - type: main_score value: 79.303 - type: map_at_1 value: 61.538000000000004 - type: map_at_10 value: 74.449 - type: map_at_100 value: 74.687 - type: map_at_1000 value: 74.687 - type: map_at_20 value: 74.589 - type: map_at_3 value: 73.333 - type: map_at_5 value: 74.256 - type: mrr_at_1 value: 61.53846153846154 - type: mrr_at_10 value: 74.44871794871794 - type: mrr_at_100 value: 74.68730304304074 - type: mrr_at_1000 value: 74.68730304304074 - type: mrr_at_20 value: 74.58857808857809 - type: mrr_at_3 value: 73.33333333333333 - type: mrr_at_5 value: 74.25641025641025 - type: nauc_map_at_1000_diff1 value: 61.375798048778506 - type: nauc_map_at_1000_max value: 51.37093181241067 - type: nauc_map_at_1000_std value: 41.735794471409015 - type: nauc_map_at_100_diff1 value: 61.375798048778506 - type: nauc_map_at_100_max value: 51.37093181241067 - type: nauc_map_at_100_std value: 41.735794471409015 - type: nauc_map_at_10_diff1 value: 61.12796039757213 - type: nauc_map_at_10_max value: 51.843445267118014 - type: nauc_map_at_10_std value: 42.243121474939365 - type: nauc_map_at_1_diff1 value: 66.39100974909151 - type: nauc_map_at_1_max value: 44.77165601342703 - type: nauc_map_at_1_std value: 32.38542979413408 - type: nauc_map_at_20_diff1 value: 61.16611123434347 - type: nauc_map_at_20_max value: 51.52605092407306 - type: nauc_map_at_20_std value: 41.94787773313971 - type: nauc_map_at_3_diff1 value: 61.40157474408937 - type: nauc_map_at_3_max value: 51.47230077853947 - type: nauc_map_at_3_std value: 42.63540269440141 - type: nauc_map_at_5_diff1 value: 61.07631147583098 - type: nauc_map_at_5_max value: 52.02626939341523 - type: nauc_map_at_5_std value: 42.511607332150334 - type: nauc_mrr_at_1000_diff1 value: 61.375798048778506 - type: nauc_mrr_at_1000_max value: 51.37093181241067 - type: nauc_mrr_at_1000_std value: 41.735794471409015 - type: nauc_mrr_at_100_diff1 value: 61.375798048778506 - type: nauc_mrr_at_100_max value: 51.37093181241067 - type: nauc_mrr_at_100_std value: 41.735794471409015 - type: nauc_mrr_at_10_diff1 value: 61.12796039757213 - type: nauc_mrr_at_10_max value: 51.843445267118014 - type: nauc_mrr_at_10_std value: 42.243121474939365 - type: nauc_mrr_at_1_diff1 value: 66.39100974909151 - type: nauc_mrr_at_1_max value: 44.77165601342703 - type: nauc_mrr_at_1_std value: 32.38542979413408 - type: nauc_mrr_at_20_diff1 value: 61.16611123434347 - type: nauc_mrr_at_20_max value: 51.52605092407306 - type: nauc_mrr_at_20_std value: 41.94787773313971 - type: nauc_mrr_at_3_diff1 value: 61.40157474408937 - type: nauc_mrr_at_3_max value: 51.47230077853947 - type: nauc_mrr_at_3_std value: 42.63540269440141 - type: nauc_mrr_at_5_diff1 value: 61.07631147583098 - type: nauc_mrr_at_5_max value: 52.02626939341523 - type: nauc_mrr_at_5_std value: 42.511607332150334 - type: nauc_ndcg_at_1000_diff1 value: 60.54821630436157 - type: nauc_ndcg_at_1000_max value: 52.584328363863634 - type: nauc_ndcg_at_1000_std value: 43.306961101645946 - type: nauc_ndcg_at_100_diff1 value: 60.54821630436157 - type: nauc_ndcg_at_100_max value: 52.584328363863634 - type: nauc_ndcg_at_100_std value: 43.306961101645946 - type: nauc_ndcg_at_10_diff1 value: 58.800340278109886 - type: nauc_ndcg_at_10_max value: 55.31050771670664 - type: nauc_ndcg_at_10_std value: 46.40931672942848 - type: nauc_ndcg_at_1_diff1 value: 66.39100974909151 - type: nauc_ndcg_at_1_max value: 44.77165601342703 - type: nauc_ndcg_at_1_std value: 32.38542979413408 - type: nauc_ndcg_at_20_diff1 value: 58.88690479697946 - type: nauc_ndcg_at_20_max value: 54.19269661177923 - type: nauc_ndcg_at_20_std value: 45.39305589413174 - type: nauc_ndcg_at_3_diff1 value: 59.61866351451574 - type: nauc_ndcg_at_3_max value: 54.23992718744033 - type: nauc_ndcg_at_3_std value: 46.997379274101 - type: nauc_ndcg_at_5_diff1 value: 58.70739588066225 - type: nauc_ndcg_at_5_max value: 55.76766902539152 - type: nauc_ndcg_at_5_std value: 47.10553115762958 - type: nauc_precision_at_1000_diff1 value: 100.0 - type: nauc_precision_at_1000_max value: 100.0 - type: nauc_precision_at_1000_std value: 100.0 - type: nauc_precision_at_100_diff1 value: .nan - type: nauc_precision_at_100_max value: .nan - type: nauc_precision_at_100_std value: .nan - type: nauc_precision_at_10_diff1 value: 35.72622112397501 - type: nauc_precision_at_10_max value: 89.84297108673948 - type: nauc_precision_at_10_std value: 86.60269192422707 - type: nauc_precision_at_1_diff1 value: 66.39100974909151 - type: nauc_precision_at_1_max value: 44.77165601342703 - type: nauc_precision_at_1_std value: 32.38542979413408 - type: nauc_precision_at_20_diff1 value: 29.188449183726433 - type: nauc_precision_at_20_max value: 86.45729478231968 - type: nauc_precision_at_20_std value: 86.45729478231968 - type: nauc_precision_at_3_diff1 value: 50.294126629236224 - type: nauc_precision_at_3_max value: 68.98223127174579 - type: nauc_precision_at_3_std value: 70.31195520376356 - type: nauc_precision_at_5_diff1 value: 39.648884288124385 - type: nauc_precision_at_5_max value: 86.3409770687935 - type: nauc_precision_at_5_std value: 83.74875373878356 - type: nauc_recall_at_1000_diff1 value: .nan - type: nauc_recall_at_1000_max value: .nan - type: nauc_recall_at_1000_std value: .nan - type: nauc_recall_at_100_diff1 value: .nan - type: nauc_recall_at_100_max value: .nan - type: nauc_recall_at_100_std value: .nan - type: nauc_recall_at_10_diff1 value: 35.72622112397516 - type: nauc_recall_at_10_max value: 89.84297108673968 - type: nauc_recall_at_10_std value: 86.60269192422749 - type: nauc_recall_at_1_diff1 value: 66.39100974909151 - type: nauc_recall_at_1_max value: 44.77165601342703 - type: nauc_recall_at_1_std value: 32.38542979413408 - type: nauc_recall_at_20_diff1 value: 29.188449183726323 - type: nauc_recall_at_20_max value: 86.45729478231985 - type: nauc_recall_at_20_std value: 86.45729478231985 - type: nauc_recall_at_3_diff1 value: 50.29412662923603 - type: nauc_recall_at_3_max value: 68.98223127174562 - type: nauc_recall_at_3_std value: 70.31195520376346 - type: nauc_recall_at_5_diff1 value: 39.64888428812445 - type: nauc_recall_at_5_max value: 86.34097706879359 - type: nauc_recall_at_5_std value: 83.74875373878366 - type: ndcg_at_1 value: 61.538000000000004 - type: ndcg_at_10 value: 79.303 - type: ndcg_at_100 value: 80.557 - type: ndcg_at_1000 value: 80.557 - type: ndcg_at_20 value: 79.732 - type: ndcg_at_3 value: 77.033 - type: ndcg_at_5 value: 78.818 - type: precision_at_1 value: 61.538000000000004 - type: precision_at_10 value: 9.385 - type: precision_at_100 value: 1.0 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.769 - type: precision_at_3 value: 29.231 - type: precision_at_5 value: 18.462 - type: recall_at_1 value: 61.538000000000004 - type: recall_at_10 value: 93.84599999999999 - type: recall_at_100 value: 100.0 - type: recall_at_1000 value: 100.0 - type: recall_at_20 value: 95.38499999999999 - type: recall_at_3 value: 87.69200000000001 - type: recall_at_5 value: 92.308 - task: type: STS dataset: name: MTEB RUParaPhraserSTS (default) type: merionum/ru_paraphraser config: default split: test revision: 43265056790b8f7c59e0139acb4be0a8dad2c8f4 metrics: - type: cosine_pearson value: 64.73554596215753 - type: cosine_spearman value: 70.45849652271855 - type: euclidean_pearson value: 68.08069844834267 - type: euclidean_spearman value: 70.45854872959124 - type: main_score value: 70.45849652271855 - type: manhattan_pearson value: 67.88325986519624 - type: manhattan_spearman value: 70.21131896834542 - type: pearson value: 64.73554596215753 - type: spearman value: 70.45849652271855 - task: type: Retrieval dataset: name: MTEB RiaNewsRetrieval (default) type: ai-forever/ria-news-retrieval config: default split: test revision: 82374b0bbacda6114f39ff9c5b925fa1512ca5d7 metrics: - type: main_score value: 70.00999999999999 - type: map_at_1 value: 55.97 - type: map_at_10 value: 65.59700000000001 - type: map_at_100 value: 66.057 - type: map_at_1000 value: 66.074 - type: map_at_20 value: 65.892 - type: map_at_3 value: 63.74999999999999 - type: map_at_5 value: 64.84299999999999 - type: mrr_at_1 value: 55.88999999999999 - type: mrr_at_10 value: 65.55873015872977 - type: mrr_at_100 value: 66.01891495129716 - type: mrr_at_1000 value: 66.03538391493299 - type: mrr_at_20 value: 65.85351193431555 - type: mrr_at_3 value: 63.7133333333329 - type: mrr_at_5 value: 64.80483333333268 - type: nauc_map_at_1000_diff1 value: 65.95332946436318 - type: nauc_map_at_1000_max value: 28.21204156197811 - type: nauc_map_at_1000_std value: -13.139245767083743 - type: nauc_map_at_100_diff1 value: 65.94763105024367 - type: nauc_map_at_100_max value: 28.212832170078205 - type: nauc_map_at_100_std value: -13.131425849370665 - type: nauc_map_at_10_diff1 value: 65.88455089448388 - type: nauc_map_at_10_max value: 28.13555838776792 - type: nauc_map_at_10_std value: -13.326989827081023 - type: nauc_map_at_1_diff1 value: 69.31275711813979 - type: nauc_map_at_1_max value: 26.386708520283758 - type: nauc_map_at_1_std value: -14.434616447245464 - type: nauc_map_at_20_diff1 value: 65.91227032605677 - type: nauc_map_at_20_max value: 28.20538655600886 - type: nauc_map_at_20_std value: -13.191148834410274 - type: nauc_map_at_3_diff1 value: 66.0051677952641 - type: nauc_map_at_3_max value: 28.25443420019022 - type: nauc_map_at_3_std value: -13.893284109029558 - type: nauc_map_at_5_diff1 value: 65.89784348297898 - type: nauc_map_at_5_max value: 28.26449765184183 - type: nauc_map_at_5_std value: -13.506692912805008 - type: nauc_mrr_at_1000_diff1 value: 66.06599513750889 - type: nauc_mrr_at_1000_max value: 28.191556650722287 - type: nauc_mrr_at_1000_std value: -13.098487982930276 - type: nauc_mrr_at_100_diff1 value: 66.0602307977725 - type: nauc_mrr_at_100_max value: 28.19235936624514 - type: nauc_mrr_at_100_std value: -13.09069677716269 - type: nauc_mrr_at_10_diff1 value: 65.99546819079403 - type: nauc_mrr_at_10_max value: 28.11556170120022 - type: nauc_mrr_at_10_std value: -13.286711073897553 - type: nauc_mrr_at_1_diff1 value: 69.49541040517995 - type: nauc_mrr_at_1_max value: 26.354622707276153 - type: nauc_mrr_at_1_std value: -14.358839778104695 - type: nauc_mrr_at_20_diff1 value: 66.02427154257936 - type: nauc_mrr_at_20_max value: 28.18509383563462 - type: nauc_mrr_at_20_std value: -13.150543398429 - type: nauc_mrr_at_3_diff1 value: 66.11258119082618 - type: nauc_mrr_at_3_max value: 28.239510722224004 - type: nauc_mrr_at_3_std value: -13.857249251136269 - type: nauc_mrr_at_5_diff1 value: 66.00633786765626 - type: nauc_mrr_at_5_max value: 28.244875152193032 - type: nauc_mrr_at_5_std value: -13.467206028704434 - type: nauc_ndcg_at_1000_diff1 value: 65.02876183314446 - type: nauc_ndcg_at_1000_max value: 29.109368390197194 - type: nauc_ndcg_at_1000_std value: -11.56514359821697 - type: nauc_ndcg_at_100_diff1 value: 64.85837726893713 - type: nauc_ndcg_at_100_max value: 29.19990133137256 - type: nauc_ndcg_at_100_std value: -11.17450348161257 - type: nauc_ndcg_at_10_diff1 value: 64.53842705024796 - type: nauc_ndcg_at_10_max value: 28.748734006088526 - type: nauc_ndcg_at_10_std value: -12.331395505957063 - type: nauc_ndcg_at_1_diff1 value: 69.31275711813979 - type: nauc_ndcg_at_1_max value: 26.386708520283758 - type: nauc_ndcg_at_1_std value: -14.434616447245464 - type: nauc_ndcg_at_20_diff1 value: 64.59017606740504 - type: nauc_ndcg_at_20_max value: 29.047332048898017 - type: nauc_ndcg_at_20_std value: -11.746548770195954 - type: nauc_ndcg_at_3_diff1 value: 64.87900935713822 - type: nauc_ndcg_at_3_max value: 28.953157521204403 - type: nauc_ndcg_at_3_std value: -13.639947228880942 - type: nauc_ndcg_at_5_diff1 value: 64.61466953479034 - type: nauc_ndcg_at_5_max value: 29.01899321868392 - type: nauc_ndcg_at_5_std value: -12.85356404799802 - type: nauc_precision_at_1000_diff1 value: 48.85481417002382 - type: nauc_precision_at_1000_max value: 57.129837326696375 - type: nauc_precision_at_1000_std value: 37.889524999906435 - type: nauc_precision_at_100_diff1 value: 53.374672326788264 - type: nauc_precision_at_100_max value: 43.819333062207974 - type: nauc_precision_at_100_std value: 21.387064885769362 - type: nauc_precision_at_10_diff1 value: 57.66571169774445 - type: nauc_precision_at_10_max value: 31.779694837242033 - type: nauc_precision_at_10_std value: -6.6248399147180255 - type: nauc_precision_at_1_diff1 value: 69.31275711813979 - type: nauc_precision_at_1_max value: 26.386708520283758 - type: nauc_precision_at_1_std value: -14.434616447245464 - type: nauc_precision_at_20_diff1 value: 55.93570036001682 - type: nauc_precision_at_20_max value: 34.98640173388743 - type: nauc_precision_at_20_std value: -0.36518465159326174 - type: nauc_precision_at_3_diff1 value: 60.94100093991508 - type: nauc_precision_at_3_max value: 31.422239034357673 - type: nauc_precision_at_3_std value: -12.72576556537896 - type: nauc_precision_at_5_diff1 value: 59.450505195434054 - type: nauc_precision_at_5_max value: 32.07638712418377 - type: nauc_precision_at_5_std value: -10.024459103498598 - type: nauc_recall_at_1000_diff1 value: 48.854814170024184 - type: nauc_recall_at_1000_max value: 57.129837326697164 - type: nauc_recall_at_1000_std value: 37.88952499990672 - type: nauc_recall_at_100_diff1 value: 53.37467232678822 - type: nauc_recall_at_100_max value: 43.8193330622079 - type: nauc_recall_at_100_std value: 21.387064885769398 - type: nauc_recall_at_10_diff1 value: 57.66571169774447 - type: nauc_recall_at_10_max value: 31.779694837242133 - type: nauc_recall_at_10_std value: -6.62483991471789 - type: nauc_recall_at_1_diff1 value: 69.31275711813979 - type: nauc_recall_at_1_max value: 26.386708520283758 - type: nauc_recall_at_1_std value: -14.434616447245464 - type: nauc_recall_at_20_diff1 value: 55.93570036001682 - type: nauc_recall_at_20_max value: 34.986401733887554 - type: nauc_recall_at_20_std value: -0.3651846515931506 - type: nauc_recall_at_3_diff1 value: 60.94100093991499 - type: nauc_recall_at_3_max value: 31.422239034357606 - type: nauc_recall_at_3_std value: -12.725765565378966 - type: nauc_recall_at_5_diff1 value: 59.450505195434125 - type: nauc_recall_at_5_max value: 32.07638712418387 - type: nauc_recall_at_5_std value: -10.024459103498472 - type: ndcg_at_1 value: 55.97 - type: ndcg_at_10 value: 70.00999999999999 - type: ndcg_at_100 value: 72.20100000000001 - type: ndcg_at_1000 value: 72.65599999999999 - type: ndcg_at_20 value: 71.068 - type: ndcg_at_3 value: 66.228 - type: ndcg_at_5 value: 68.191 - type: precision_at_1 value: 55.97 - type: precision_at_10 value: 8.373999999999999 - type: precision_at_100 value: 0.9390000000000001 - type: precision_at_1000 value: 0.097 - type: precision_at_20 value: 4.3950000000000005 - type: precision_at_3 value: 24.46 - type: precision_at_5 value: 15.626000000000001 - type: recall_at_1 value: 55.97 - type: recall_at_10 value: 83.74000000000001 - type: recall_at_100 value: 93.87 - type: recall_at_1000 value: 97.49 - type: recall_at_20 value: 87.89 - type: recall_at_3 value: 73.38 - type: recall_at_5 value: 78.13 - task: type: Reranking dataset: name: MTEB RuBQReranking (default) type: ai-forever/rubq-reranking config: default split: test revision: 2e96b8f098fa4b0950fc58eacadeb31c0d0c7fa2 metrics: - type: main_score value: 71.44929565043827 - type: map value: 71.44929565043827 - type: mrr value: 77.78391820945014 - type: nAUC_map_diff1 value: 38.140840668080244 - type: nAUC_map_max value: 27.54328688105381 - type: nAUC_map_std value: 16.81572082284672 - type: nAUC_mrr_diff1 value: 44.51350415961509 - type: nAUC_mrr_max value: 36.491182016669754 - type: nAUC_mrr_std value: 22.47139593052269 - task: type: Retrieval dataset: name: MTEB RuBQRetrieval (default) type: ai-forever/rubq-retrieval config: default split: test revision: e19b6ffa60b3bc248e0b41f4cc37c26a55c2a67b metrics: - type: main_score value: 68.529 - type: map_at_1 value: 42.529 - type: map_at_10 value: 60.864 - type: map_at_100 value: 61.868 - type: map_at_1000 value: 61.907000000000004 - type: map_at_20 value: 61.596 - type: map_at_3 value: 55.701 - type: map_at_5 value: 58.78 - type: mrr_at_1 value: 60.57919621749409 - type: mrr_at_10 value: 70.55614188149649 - type: mrr_at_100 value: 70.88383816664494 - type: mrr_at_1000 value: 70.89719252668833 - type: mrr_at_20 value: 70.79839750105347 - type: mrr_at_3 value: 68.4594168636722 - type: mrr_at_5 value: 69.67100078802214 - type: nauc_map_at_1000_diff1 value: 40.67438785660885 - type: nauc_map_at_1000_max value: 32.79981738507424 - type: nauc_map_at_1000_std value: -6.873402600044831 - type: nauc_map_at_100_diff1 value: 40.65643664443284 - type: nauc_map_at_100_max value: 32.81594799919249 - type: nauc_map_at_100_std value: -6.8473246794498195 - type: nauc_map_at_10_diff1 value: 40.39048268484908 - type: nauc_map_at_10_max value: 32.403242161479525 - type: nauc_map_at_10_std value: -7.344413799841244 - type: nauc_map_at_1_diff1 value: 44.36306892906905 - type: nauc_map_at_1_max value: 25.61348630699028 - type: nauc_map_at_1_std value: -8.713074613333902 - type: nauc_map_at_20_diff1 value: 40.530326570124615 - type: nauc_map_at_20_max value: 32.74028319323205 - type: nauc_map_at_20_std value: -7.008180779820569 - type: nauc_map_at_3_diff1 value: 40.764924859364044 - type: nauc_map_at_3_max value: 29.809671682025336 - type: nauc_map_at_3_std value: -9.205620202725564 - type: nauc_map_at_5_diff1 value: 40.88599496021476 - type: nauc_map_at_5_max value: 32.1701894666848 - type: nauc_map_at_5_std value: -7.801251849010623 - type: nauc_mrr_at_1000_diff1 value: 48.64181373540728 - type: nauc_mrr_at_1000_max value: 40.136947990653546 - type: nauc_mrr_at_1000_std value: -7.250260497468805 - type: nauc_mrr_at_100_diff1 value: 48.63349902496212 - type: nauc_mrr_at_100_max value: 40.14510559704008 - type: nauc_mrr_at_100_std value: -7.228702374801103 - type: nauc_mrr_at_10_diff1 value: 48.58580560194813 - type: nauc_mrr_at_10_max value: 40.15075599433366 - type: nauc_mrr_at_10_std value: -7.267928771548688 - type: nauc_mrr_at_1_diff1 value: 51.47535097164919 - type: nauc_mrr_at_1_max value: 38.23579750430856 - type: nauc_mrr_at_1_std value: -9.187785187137633 - type: nauc_mrr_at_20_diff1 value: 48.58688378336222 - type: nauc_mrr_at_20_max value: 40.13408744088299 - type: nauc_mrr_at_20_std value: -7.283132775160146 - type: nauc_mrr_at_3_diff1 value: 48.66833005454742 - type: nauc_mrr_at_3_max value: 40.07987333638038 - type: nauc_mrr_at_3_std value: -7.738819947521418 - type: nauc_mrr_at_5_diff1 value: 48.76536305941537 - type: nauc_mrr_at_5_max value: 40.381929739522185 - type: nauc_mrr_at_5_std value: -7.592858318378928 - type: nauc_ndcg_at_1000_diff1 value: 41.67304442004693 - type: nauc_ndcg_at_1000_max value: 35.84126926253235 - type: nauc_ndcg_at_1000_std value: -4.78971011604655 - type: nauc_ndcg_at_100_diff1 value: 41.16918850185783 - type: nauc_ndcg_at_100_max value: 36.082461962326505 - type: nauc_ndcg_at_100_std value: -4.092442251697269 - type: nauc_ndcg_at_10_diff1 value: 40.300065598615205 - type: nauc_ndcg_at_10_max value: 34.87866296788365 - type: nauc_ndcg_at_10_std value: -5.866529277842453 - type: nauc_ndcg_at_1_diff1 value: 51.74612915209495 - type: nauc_ndcg_at_1_max value: 37.71907067970078 - type: nauc_ndcg_at_1_std value: -9.064124266098696 - type: nauc_ndcg_at_20_diff1 value: 40.493949850214584 - type: nauc_ndcg_at_20_max value: 35.69331503650286 - type: nauc_ndcg_at_20_std value: -4.995310342975443 - type: nauc_ndcg_at_3_diff1 value: 41.269443212112364 - type: nauc_ndcg_at_3_max value: 32.572844460953334 - type: nauc_ndcg_at_3_std value: -9.063015396458791 - type: nauc_ndcg_at_5_diff1 value: 41.37039652522888 - type: nauc_ndcg_at_5_max value: 34.67416011393571 - type: nauc_ndcg_at_5_std value: -7.106845569862319 - type: nauc_precision_at_1000_diff1 value: -9.571769961090155 - type: nauc_precision_at_1000_max value: 5.574782583417188 - type: nauc_precision_at_1000_std value: 7.28333847923847 - type: nauc_precision_at_100_diff1 value: -7.7405012003383735 - type: nauc_precision_at_100_max value: 9.67745355070353 - type: nauc_precision_at_100_std value: 9.327890294080992 - type: nauc_precision_at_10_diff1 value: -1.006879647532931 - type: nauc_precision_at_10_max value: 15.899825481231064 - type: nauc_precision_at_10_std value: 4.2284084852153105 - type: nauc_precision_at_1_diff1 value: 51.74612915209495 - type: nauc_precision_at_1_max value: 37.71907067970078 - type: nauc_precision_at_1_std value: -9.064124266098696 - type: nauc_precision_at_20_diff1 value: -4.982301544401409 - type: nauc_precision_at_20_max value: 13.241674471380568 - type: nauc_precision_at_20_std value: 7.052280133821539 - type: nauc_precision_at_3_diff1 value: 15.442614376387374 - type: nauc_precision_at_3_max value: 25.12695418083 - type: nauc_precision_at_3_std value: -3.1150066697920638 - type: nauc_precision_at_5_diff1 value: 8.381026072692444 - type: nauc_precision_at_5_max value: 22.839056540604822 - type: nauc_precision_at_5_std value: 1.5126905486524331 - type: nauc_recall_at_1000_diff1 value: -0.8869709920433502 - type: nauc_recall_at_1000_max value: 45.092324433377264 - type: nauc_recall_at_1000_std value: 62.21264093315108 - type: nauc_recall_at_100_diff1 value: 16.036715011075714 - type: nauc_recall_at_100_max value: 39.79963411771158 - type: nauc_recall_at_100_std value: 28.41850069503361 - type: nauc_recall_at_10_diff1 value: 25.189622794479998 - type: nauc_recall_at_10_max value: 30.82355277039427 - type: nauc_recall_at_10_std value: 0.0964544736531047 - type: nauc_recall_at_1_diff1 value: 44.36306892906905 - type: nauc_recall_at_1_max value: 25.61348630699028 - type: nauc_recall_at_1_std value: -8.713074613333902 - type: nauc_recall_at_20_diff1 value: 20.43424504746087 - type: nauc_recall_at_20_max value: 33.96010554649377 - type: nauc_recall_at_20_std value: 6.900984030301936 - type: nauc_recall_at_3_diff1 value: 33.86531858793492 - type: nauc_recall_at_3_max value: 27.725692256711188 - type: nauc_recall_at_3_std value: -8.533124289305709 - type: nauc_recall_at_5_diff1 value: 32.006964557701686 - type: nauc_recall_at_5_max value: 31.493370659289806 - type: nauc_recall_at_5_std value: -4.8639793547793255 - type: ndcg_at_1 value: 60.461 - type: ndcg_at_10 value: 68.529 - type: ndcg_at_100 value: 71.664 - type: ndcg_at_1000 value: 72.396 - type: ndcg_at_20 value: 70.344 - type: ndcg_at_3 value: 61.550000000000004 - type: ndcg_at_5 value: 64.948 - type: precision_at_1 value: 60.461 - type: precision_at_10 value: 13.28 - type: precision_at_100 value: 1.555 - type: precision_at_1000 value: 0.164 - type: precision_at_20 value: 7.216 - type: precision_at_3 value: 33.077 - type: precision_at_5 value: 23.014000000000003 - type: recall_at_1 value: 42.529 - type: recall_at_10 value: 81.169 - type: recall_at_100 value: 93.154 - type: recall_at_1000 value: 98.18299999999999 - type: recall_at_20 value: 87.132 - type: recall_at_3 value: 63.905 - type: recall_at_5 value: 71.967 - task: type: Classification dataset: name: MTEB RuReviewsClassification (default) type: ai-forever/ru-reviews-classification config: default split: test revision: f6d2c31f4dc6b88f468552750bfec05b4b41b05a metrics: - type: accuracy value: 61.17675781250001 - type: f1 value: 60.354535346041374 - type: f1_weighted value: 60.35437313166116 - type: main_score value: 61.17675781250001 - task: type: STS dataset: name: MTEB RuSTSBenchmarkSTS (default) type: ai-forever/ru-stsbenchmark-sts config: default split: test revision: 7cf24f325c6da6195df55bef3d86b5e0616f3018 metrics: - type: cosine_pearson value: 78.1301041727274 - type: cosine_spearman value: 78.08238025421747 - type: euclidean_pearson value: 77.35224254583635 - type: euclidean_spearman value: 78.08235336582496 - type: main_score value: 78.08238025421747 - type: manhattan_pearson value: 77.24138550052075 - type: manhattan_spearman value: 77.98199107904142 - type: pearson value: 78.1301041727274 - type: spearman value: 78.08238025421747 - task: type: Classification dataset: name: MTEB RuSciBenchGRNTIClassification (default) type: ai-forever/ru-scibench-grnti-classification config: default split: test revision: 673a610d6d3dd91a547a0d57ae1b56f37ebbf6a1 metrics: - type: accuracy value: 54.990234375 - type: f1 value: 53.537019057131374 - type: f1_weighted value: 53.552745354520766 - type: main_score value: 54.990234375 - task: type: Clustering dataset: name: MTEB RuSciBenchGRNTIClusteringP2P (default) type: ai-forever/ru-scibench-grnti-classification config: default split: test revision: 673a610d6d3dd91a547a0d57ae1b56f37ebbf6a1 metrics: - type: main_score value: 50.775228895355106 - type: v_measure value: 50.775228895355106 - type: v_measure_std value: 0.9533571150165796 - task: type: Classification dataset: name: MTEB RuSciBenchOECDClassification (default) type: ai-forever/ru-scibench-oecd-classification config: default split: test revision: 26c88e99dcaba32bb45d0e1bfc21902337f6d471 metrics: - type: accuracy value: 41.71875 - type: f1 value: 39.289100975858304 - type: f1_weighted value: 39.29257829217775 - type: main_score value: 41.71875 - task: type: Clustering dataset: name: MTEB RuSciBenchOECDClusteringP2P (default) type: ai-forever/ru-scibench-oecd-classification config: default split: test revision: 26c88e99dcaba32bb45d0e1bfc21902337f6d471 metrics: - type: main_score value: 45.10904808834516 - type: v_measure value: 45.10904808834516 - type: v_measure_std value: 1.0572643410157534 - task: type: Classification dataset: name: MTEB SIB200Classification (rus_Cyrl) type: mteb/sib200 config: rus_Cyrl split: test revision: a74d7350ea12af010cfb1c21e34f1f81fd2e615b metrics: - type: accuracy value: 66.36363636363637 - type: f1 value: 64.6940336621617 - type: f1_weighted value: 66.43317771876966 - type: main_score value: 66.36363636363637 - task: type: Clustering dataset: name: MTEB SIB200ClusteringS2S (rus_Cyrl) type: mteb/sib200 config: rus_Cyrl split: test revision: a74d7350ea12af010cfb1c21e34f1f81fd2e615b metrics: - type: main_score value: 33.99178497314711 - type: v_measure value: 33.99178497314711 - type: v_measure_std value: 4.036337464043786 - task: type: STS dataset: name: MTEB STS22.v2 (ru) type: mteb/sts22-crosslingual-sts config: ru split: test revision: d31f33a128469b20e357535c39b82fb3c3f6f2bd metrics: - type: cosine_pearson value: 50.724322379215934 - type: cosine_spearman value: 59.90449732164651 - type: euclidean_pearson value: 50.227545226784024 - type: euclidean_spearman value: 59.898906527601085 - type: main_score value: 59.90449732164651 - type: manhattan_pearson value: 50.21762139819405 - type: manhattan_spearman value: 59.761039813759 - type: pearson value: 50.724322379215934 - type: spearman value: 59.90449732164651 - task: type: STS dataset: name: MTEB STSBenchmarkMultilingualSTS (ru) type: mteb/stsb_multi_mt config: ru split: dev revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c metrics: - type: cosine_pearson value: 78.43928769569945 - type: cosine_spearman value: 78.23961768018884 - type: euclidean_pearson value: 77.4718694027985 - type: euclidean_spearman value: 78.23887044760475 - type: main_score value: 78.23961768018884 - type: manhattan_pearson value: 77.34517128089547 - type: manhattan_spearman value: 78.1146477340426 - type: pearson value: 78.43928769569945 - type: spearman value: 78.23961768018884 - task: type: MultilabelClassification dataset: name: MTEB SensitiveTopicsClassification (default) type: ai-forever/sensitive-topics-classification config: default split: test revision: 416b34a802308eac30e4192afc0ff99bb8dcc7f2 metrics: - type: accuracy value: 22.8125 - type: f1 value: 17.31969589593409 - type: lrap value: 33.82412380642287 - type: main_score value: 22.8125 - task: type: PairClassification dataset: name: MTEB TERRa (default) type: ai-forever/terra-pairclassification config: default split: dev revision: 7b58f24536063837d644aab9a023c62199b2a612 metrics: - type: cosine_accuracy value: 57.32899022801303 - type: cosine_accuracy_threshold value: 85.32201051712036 - type: cosine_ap value: 55.14264553720072 - type: cosine_f1 value: 66.83544303797468 - type: cosine_f1_threshold value: 85.32201051712036 - type: cosine_precision value: 54.54545454545454 - type: cosine_recall value: 86.27450980392157 - type: dot_accuracy value: 57.32899022801303 - type: dot_accuracy_threshold value: 85.32201051712036 - type: dot_ap value: 55.14264553720072 - type: dot_f1 value: 66.83544303797468 - type: dot_f1_threshold value: 85.32201051712036 - type: dot_precision value: 54.54545454545454 - type: dot_recall value: 86.27450980392157 - type: euclidean_accuracy value: 57.32899022801303 - type: euclidean_accuracy_threshold value: 54.18117046356201 - type: euclidean_ap value: 55.14264553720072 - type: euclidean_f1 value: 66.83544303797468 - type: euclidean_f1_threshold value: 54.18117046356201 - type: euclidean_precision value: 54.54545454545454 - type: euclidean_recall value: 86.27450980392157 - type: main_score value: 55.14264553720072 - type: manhattan_accuracy value: 57.32899022801303 - type: manhattan_accuracy_threshold value: 828.8480758666992 - type: manhattan_ap value: 55.077974053622555 - type: manhattan_f1 value: 66.82352941176471 - type: manhattan_f1_threshold value: 885.6784820556641 - type: manhattan_precision value: 52.20588235294118 - type: manhattan_recall value: 92.81045751633987 - type: max_ap value: 55.14264553720072 - type: max_f1 value: 66.83544303797468 - type: max_precision value: 54.54545454545454 - type: max_recall value: 92.81045751633987 - type: similarity_accuracy value: 57.32899022801303 - type: similarity_accuracy_threshold value: 85.32201051712036 - type: similarity_ap value: 55.14264553720072 - type: similarity_f1 value: 66.83544303797468 - type: similarity_f1_threshold value: 85.32201051712036 - type: similarity_precision value: 54.54545454545454 - type: similarity_recall value: 86.27450980392157 - task: type: PairClassification dataset: name: MTEB XNLI (ru) type: mteb/xnli config: ru split: test revision: 09698e0180d87dc247ca447d3a1248b931ac0cdb metrics: - type: cosine_accuracy value: 67.6923076923077 - type: cosine_accuracy_threshold value: 87.6681923866272 - type: cosine_ap value: 73.18693800863593 - type: cosine_f1 value: 70.40641099026904 - type: cosine_f1_threshold value: 85.09706258773804 - type: cosine_precision value: 57.74647887323944 - type: cosine_recall value: 90.17595307917888 - type: dot_accuracy value: 67.6923076923077 - type: dot_accuracy_threshold value: 87.66818642616272 - type: dot_ap value: 73.18693800863593 - type: dot_f1 value: 70.40641099026904 - type: dot_f1_threshold value: 85.09706258773804 - type: dot_precision value: 57.74647887323944 - type: dot_recall value: 90.17595307917888 - type: euclidean_accuracy value: 67.6923076923077 - type: euclidean_accuracy_threshold value: 49.662476778030396 - type: euclidean_ap value: 73.18693800863593 - type: euclidean_f1 value: 70.40641099026904 - type: euclidean_f1_threshold value: 54.59475517272949 - type: euclidean_precision value: 57.74647887323944 - type: euclidean_recall value: 90.17595307917888 - type: main_score value: 73.18693800863593 - type: manhattan_accuracy value: 67.54578754578755 - type: manhattan_accuracy_threshold value: 777.1001815795898 - type: manhattan_ap value: 72.98861474758783 - type: manhattan_f1 value: 70.6842435655995 - type: manhattan_f1_threshold value: 810.3782653808594 - type: manhattan_precision value: 61.80021953896817 - type: manhattan_recall value: 82.55131964809385 - type: max_ap value: 73.18693800863593 - type: max_f1 value: 70.6842435655995 - type: max_precision value: 61.80021953896817 - type: max_recall value: 90.17595307917888 - type: similarity_accuracy value: 67.6923076923077 - type: similarity_accuracy_threshold value: 87.6681923866272 - type: similarity_ap value: 73.18693800863593 - type: similarity_f1 value: 70.40641099026904 - type: similarity_f1_threshold value: 85.09706258773804 - type: similarity_precision value: 57.74647887323944 - type: similarity_recall value: 90.17595307917888 - task: type: PairClassification dataset: name: MTEB XNLIV2 (russian) type: mteb/xnli2.0-multi-pair config: russian split: test revision: 5b7d477a8c62cdd18e2fed7e015497c20b4371ad metrics: - type: cosine_accuracy value: 68.35164835164835 - type: cosine_accuracy_threshold value: 88.48621845245361 - type: cosine_ap value: 73.10205506215699 - type: cosine_f1 value: 71.28712871287128 - type: cosine_f1_threshold value: 87.00399398803711 - type: cosine_precision value: 61.67023554603854 - type: cosine_recall value: 84.4574780058651 - type: dot_accuracy value: 68.35164835164835 - type: dot_accuracy_threshold value: 88.48622441291809 - type: dot_ap value: 73.10191110714706 - type: dot_f1 value: 71.28712871287128 - type: dot_f1_threshold value: 87.00399398803711 - type: dot_precision value: 61.67023554603854 - type: dot_recall value: 84.4574780058651 - type: euclidean_accuracy value: 68.35164835164835 - type: euclidean_accuracy_threshold value: 47.98704385757446 - type: euclidean_ap value: 73.10205506215699 - type: euclidean_f1 value: 71.28712871287128 - type: euclidean_f1_threshold value: 50.982362031936646 - type: euclidean_precision value: 61.67023554603854 - type: euclidean_recall value: 84.4574780058651 - type: main_score value: 73.10205506215699 - type: manhattan_accuracy value: 67.91208791208791 - type: manhattan_accuracy_threshold value: 746.1360931396484 - type: manhattan_ap value: 72.8954736175069 - type: manhattan_f1 value: 71.1297071129707 - type: manhattan_f1_threshold value: 808.0789566040039 - type: manhattan_precision value: 60.04036326942482 - type: manhattan_recall value: 87.2434017595308 - type: max_ap value: 73.10205506215699 - type: max_f1 value: 71.28712871287128 - type: max_precision value: 61.67023554603854 - type: max_recall value: 87.2434017595308 - type: similarity_accuracy value: 68.35164835164835 - type: similarity_accuracy_threshold value: 88.48621845245361 - type: similarity_ap value: 73.10205506215699 - type: similarity_f1 value: 71.28712871287128 - type: similarity_f1_threshold value: 87.00399398803711 - type: similarity_precision value: 61.67023554603854 - type: similarity_recall value: 84.4574780058651 - task: type: Retrieval dataset: name: MTEB XQuADRetrieval (ru) type: google/xquad config: ru split: validation revision: 51adfef1c1287aab1d2d91b5bead9bcfb9c68583 metrics: - type: main_score value: 95.705 - type: map_at_1 value: 90.802 - type: map_at_10 value: 94.427 - type: map_at_100 value: 94.451 - type: map_at_1000 value: 94.451 - type: map_at_20 value: 94.446 - type: map_at_3 value: 94.121 - type: map_at_5 value: 94.34 - type: mrr_at_1 value: 90.80168776371308 - type: mrr_at_10 value: 94.42659567343111 - type: mrr_at_100 value: 94.45099347521871 - type: mrr_at_1000 value: 94.45099347521871 - type: mrr_at_20 value: 94.44574530017569 - type: mrr_at_3 value: 94.12095639943743 - type: mrr_at_5 value: 94.34036568213786 - type: nauc_map_at_1000_diff1 value: 87.40573202946949 - type: nauc_map_at_1000_max value: 65.56220344468791 - type: nauc_map_at_1000_std value: 8.865583291735863 - type: nauc_map_at_100_diff1 value: 87.40573202946949 - type: nauc_map_at_100_max value: 65.56220344468791 - type: nauc_map_at_100_std value: 8.865583291735863 - type: nauc_map_at_10_diff1 value: 87.43657080570291 - type: nauc_map_at_10_max value: 65.71295628534446 - type: nauc_map_at_10_std value: 9.055399339099655 - type: nauc_map_at_1_diff1 value: 88.08395824560428 - type: nauc_map_at_1_max value: 62.92813192908893 - type: nauc_map_at_1_std value: 6.738987385482432 - type: nauc_map_at_20_diff1 value: 87.40979818966589 - type: nauc_map_at_20_max value: 65.59474346926105 - type: nauc_map_at_20_std value: 8.944420599300914 - type: nauc_map_at_3_diff1 value: 86.97771892161035 - type: nauc_map_at_3_max value: 66.14330030122467 - type: nauc_map_at_3_std value: 8.62516327793521 - type: nauc_map_at_5_diff1 value: 87.30273362211798 - type: nauc_map_at_5_max value: 66.1522476584607 - type: nauc_map_at_5_std value: 9.780940862679724 - type: nauc_mrr_at_1000_diff1 value: 87.40573202946949 - type: nauc_mrr_at_1000_max value: 65.56220344468791 - type: nauc_mrr_at_1000_std value: 8.865583291735863 - type: nauc_mrr_at_100_diff1 value: 87.40573202946949 - type: nauc_mrr_at_100_max value: 65.56220344468791 - type: nauc_mrr_at_100_std value: 8.865583291735863 - type: nauc_mrr_at_10_diff1 value: 87.43657080570291 - type: nauc_mrr_at_10_max value: 65.71295628534446 - type: nauc_mrr_at_10_std value: 9.055399339099655 - type: nauc_mrr_at_1_diff1 value: 88.08395824560428 - type: nauc_mrr_at_1_max value: 62.92813192908893 - type: nauc_mrr_at_1_std value: 6.738987385482432 - type: nauc_mrr_at_20_diff1 value: 87.40979818966589 - type: nauc_mrr_at_20_max value: 65.59474346926105 - type: nauc_mrr_at_20_std value: 8.944420599300914 - type: nauc_mrr_at_3_diff1 value: 86.97771892161035 - type: nauc_mrr_at_3_max value: 66.14330030122467 - type: nauc_mrr_at_3_std value: 8.62516327793521 - type: nauc_mrr_at_5_diff1 value: 87.30273362211798 - type: nauc_mrr_at_5_max value: 66.1522476584607 - type: nauc_mrr_at_5_std value: 9.780940862679724 - type: nauc_ndcg_at_1000_diff1 value: 87.37823158814116 - type: nauc_ndcg_at_1000_max value: 66.00874244792789 - type: nauc_ndcg_at_1000_std value: 9.479929342875067 - type: nauc_ndcg_at_100_diff1 value: 87.37823158814116 - type: nauc_ndcg_at_100_max value: 66.00874244792789 - type: nauc_ndcg_at_100_std value: 9.479929342875067 - type: nauc_ndcg_at_10_diff1 value: 87.54508467181488 - type: nauc_ndcg_at_10_max value: 66.88756470312894 - type: nauc_ndcg_at_10_std value: 10.812624405397022 - type: nauc_ndcg_at_1_diff1 value: 88.08395824560428 - type: nauc_ndcg_at_1_max value: 62.92813192908893 - type: nauc_ndcg_at_1_std value: 6.738987385482432 - type: nauc_ndcg_at_20_diff1 value: 87.42097894104597 - type: nauc_ndcg_at_20_max value: 66.37031898778943 - type: nauc_ndcg_at_20_std value: 10.34862538094813 - type: nauc_ndcg_at_3_diff1 value: 86.50039907157999 - type: nauc_ndcg_at_3_max value: 67.97798288917929 - type: nauc_ndcg_at_3_std value: 10.162410286746852 - type: nauc_ndcg_at_5_diff1 value: 87.13322094568531 - type: nauc_ndcg_at_5_max value: 68.08576118683821 - type: nauc_ndcg_at_5_std value: 12.639637379592855 - type: nauc_precision_at_1000_diff1 value: 100.0 - type: nauc_precision_at_1000_max value: 100.0 - type: nauc_precision_at_1000_std value: 100.0 - type: nauc_precision_at_100_diff1 value: 100.0 - type: nauc_precision_at_100_max value: 100.0 - type: nauc_precision_at_100_std value: 100.0 - type: nauc_precision_at_10_diff1 value: 93.46711505595813 - type: nauc_precision_at_10_max value: 100.0 - type: nauc_precision_at_10_std value: 65.42573557179935 - type: nauc_precision_at_1_diff1 value: 88.08395824560428 - type: nauc_precision_at_1_max value: 62.92813192908893 - type: nauc_precision_at_1_std value: 6.738987385482432 - type: nauc_precision_at_20_diff1 value: 91.28948674127133 - type: nauc_precision_at_20_max value: 100.0 - type: nauc_precision_at_20_std value: 90.74278258632364 - type: nauc_precision_at_3_diff1 value: 82.64606115071832 - type: nauc_precision_at_3_max value: 83.26201582412921 - type: nauc_precision_at_3_std value: 23.334013491433762 - type: nauc_precision_at_5_diff1 value: 85.0867539350284 - type: nauc_precision_at_5_max value: 96.57011448655484 - type: nauc_precision_at_5_std value: 56.46869543426768 - type: nauc_recall_at_1000_diff1 value: .nan - type: nauc_recall_at_1000_max value: .nan - type: nauc_recall_at_1000_std value: .nan - type: nauc_recall_at_100_diff1 value: .nan - type: nauc_recall_at_100_max value: .nan - type: nauc_recall_at_100_std value: .nan - type: nauc_recall_at_10_diff1 value: 93.46711505595623 - type: nauc_recall_at_10_max value: 100.0 - type: nauc_recall_at_10_std value: 65.42573557180279 - type: nauc_recall_at_1_diff1 value: 88.08395824560428 - type: nauc_recall_at_1_max value: 62.92813192908893 - type: nauc_recall_at_1_std value: 6.738987385482432 - type: nauc_recall_at_20_diff1 value: 91.28948674127474 - type: nauc_recall_at_20_max value: 100.0 - type: nauc_recall_at_20_std value: 90.74278258632704 - type: nauc_recall_at_3_diff1 value: 82.64606115071967 - type: nauc_recall_at_3_max value: 83.26201582413023 - type: nauc_recall_at_3_std value: 23.334013491434007 - type: nauc_recall_at_5_diff1 value: 85.08675393502854 - type: nauc_recall_at_5_max value: 96.57011448655487 - type: nauc_recall_at_5_std value: 56.46869543426658 - type: ndcg_at_1 value: 90.802 - type: ndcg_at_10 value: 95.705 - type: ndcg_at_100 value: 95.816 - type: ndcg_at_1000 value: 95.816 - type: ndcg_at_20 value: 95.771 - type: ndcg_at_3 value: 95.11699999999999 - type: ndcg_at_5 value: 95.506 - type: precision_at_1 value: 90.802 - type: precision_at_10 value: 9.949 - type: precision_at_100 value: 1.0 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.987 - type: precision_at_3 value: 32.658 - type: precision_at_5 value: 19.781000000000002 - type: recall_at_1 value: 90.802 - type: recall_at_10 value: 99.494 - type: recall_at_100 value: 100.0 - type: recall_at_1000 value: 100.0 - type: recall_at_20 value: 99.747 - type: recall_at_3 value: 97.975 - type: recall_at_5 value: 98.90299999999999 --- ## Multilingual-E5-small [Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/pdf/2402.05672). Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024 This model has 12 layers and the embedding size is 384. ## Usage Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ", even for non-English texts. # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = ['query: how much protein should a female eat', 'query: 南瓜的家常做法', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"] tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-small') model = AutoModel.from_pretrained('intfloat/multilingual-e5-small') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Supported Languages This model is initialized from [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) and continually trained on a mixture of multilingual datasets. It supports 100 languages from xlm-roberta, but low-resource languages may see performance degradation. ## Training Details **Initialization**: [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) **First stage**: contrastive pre-training with weak supervision | Dataset | Weak supervision | # of text pairs | |--------------------------------------------------------------------------------------------------------|---------------------------------------|-----------------| | Filtered [mC4](https://huggingface.co/datasets/mc4) | (title, page content) | 1B | | [CC News](https://huggingface.co/datasets/intfloat/multilingual_cc_news) | (title, news content) | 400M | | [NLLB](https://huggingface.co/datasets/allenai/nllb) | translation pairs | 2.4B | | [Wikipedia](https://huggingface.co/datasets/intfloat/wikipedia) | (hierarchical section title, passage) | 150M | | Filtered [Reddit](https://www.reddit.com/) | (comment, response) | 800M | | [S2ORC](https://github.com/allenai/s2orc) | (title, abstract) and citation pairs | 100M | | [Stackexchange](https://stackexchange.com/) | (question, answer) | 50M | | [xP3](https://huggingface.co/datasets/bigscience/xP3) | (input prompt, response) | 80M | | [Miscellaneous unsupervised SBERT data](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | - | 10M | **Second stage**: supervised fine-tuning | Dataset | Language | # of text pairs | |----------------------------------------------------------------------------------------|--------------|-----------------| | [MS MARCO](https://microsoft.github.io/msmarco/) | English | 500k | | [NQ](https://github.com/facebookresearch/DPR) | English | 70k | | [Trivia QA](https://github.com/facebookresearch/DPR) | English | 60k | | [NLI from SimCSE](https://github.com/princeton-nlp/SimCSE) | English | <300k | | [ELI5](https://huggingface.co/datasets/eli5) | English | 500k | | [DuReader Retrieval](https://github.com/baidu/DuReader/tree/master/DuReader-Retrieval) | Chinese | 86k | | [KILT Fever](https://huggingface.co/datasets/kilt_tasks) | English | 70k | | [KILT HotpotQA](https://huggingface.co/datasets/kilt_tasks) | English | 70k | | [SQuAD](https://huggingface.co/datasets/squad) | English | 87k | | [Quora](https://huggingface.co/datasets/quora) | English | 150k | | [Mr. TyDi](https://huggingface.co/datasets/castorini/mr-tydi) | 11 languages | 50k | | [MIRACL](https://huggingface.co/datasets/miracl/miracl) | 16 languages | 40k | For all labeled datasets, we only use its training set for fine-tuning. For other training details, please refer to our paper at [https://arxiv.org/pdf/2402.05672](https://arxiv.org/pdf/2402.05672). ## Benchmark Results on [Mr. TyDi](https://arxiv.org/abs/2108.08787) | Model | Avg MRR@10 | | ar | bn | en | fi | id | ja | ko | ru | sw | te | th | |-----------------------|------------|-------|------| --- | --- | --- | --- | --- | --- | --- |------| --- | --- | | BM25 | 33.3 | | 36.7 | 41.3 | 15.1 | 28.8 | 38.2 | 21.7 | 28.1 | 32.9 | 39.6 | 42.4 | 41.7 | | mDPR | 16.7 | | 26.0 | 25.8 | 16.2 | 11.3 | 14.6 | 18.1 | 21.9 | 18.5 | 7.3 | 10.6 | 13.5 | | BM25 + mDPR | 41.7 | | 49.1 | 53.5 | 28.4 | 36.5 | 45.5 | 35.5 | 36.2 | 42.7 | 40.5 | 42.0 | 49.2 | | | | | multilingual-e5-small | 64.4 | | 71.5 | 66.3 | 54.5 | 57.7 | 63.2 | 55.4 | 54.3 | 60.8 | 65.4 | 89.1 | 70.1 | | multilingual-e5-base | 65.9 | | 72.3 | 65.0 | 58.5 | 60.8 | 64.9 | 56.6 | 55.8 | 62.7 | 69.0 | 86.6 | 72.7 | | multilingual-e5-large | **70.5** | | 77.5 | 73.2 | 60.8 | 66.8 | 68.5 | 62.5 | 61.6 | 65.8 | 72.7 | 90.2 | 76.2 | ## MTEB Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## Support for Sentence Transformers Below is an example for usage with sentence_transformers. ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('intfloat/multilingual-e5-small') input_texts = [ 'query: how much protein should a female eat', 'query: 南瓜的家常做法', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 i s 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or traini ng for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮 ,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右, 放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油 锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅" ] embeddings = model.encode(input_texts, normalize_embeddings=True) ``` Package requirements `pip install sentence_transformers~=2.2.2` Contributors: [michaelfeil](https://huggingface.co/michaelfeil) ## FAQ **1. Do I need to add the prefix "query: " and "passage: " to input texts?** Yes, this is how the model is trained, otherwise you will see a performance degradation. Here are some rules of thumb: - Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval. - Use "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, paraphrase retrieval. - Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering. **2. Why are my reproduced results slightly different from reported in the model card?** Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences. **3. Why does the cosine similarity scores distribute around 0.7 to 1.0?** This is a known and expected behavior as we use a low temperature 0.01 for InfoNCE contrastive loss. For text embedding tasks like text retrieval or semantic similarity, what matters is the relative order of the scores instead of the absolute values, so this should not be an issue. ## Citation If you find our paper or models helpful, please consider cite as follows: ``` @article{wang2024multilingual, title={Multilingual E5 Text Embeddings: A Technical Report}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2402.05672}, year={2024} } ``` ## Limitations Long texts will be truncated to at most 512 tokens.
[ "SEMANTIC_SIMILARITY", "TRANSLATION", "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
khoa-klaytn/bge-base-en-v1.5-angle
khoa-klaytn
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "mteb", "en", "arxiv:2310.07554", "arxiv:2309.07597", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,704
1,704
745
2
--- language: - en license: mit tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - mteb model-index: - name: bge-base-en-v1.5 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.14925373134328 - type: ap value: 39.32336517995478 - type: f1 value: 70.16902252611425 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 93.386825 - type: ap value: 90.21276917991995 - type: f1 value: 93.37741030006174 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.846000000000004 - type: f1 value: 48.14646269778261 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 40.754000000000005 - type: map_at_10 value: 55.761 - type: map_at_100 value: 56.330999999999996 - type: map_at_1000 value: 56.333999999999996 - type: map_at_3 value: 51.92 - type: map_at_5 value: 54.010999999999996 - type: mrr_at_1 value: 41.181 - type: mrr_at_10 value: 55.967999999999996 - type: mrr_at_100 value: 56.538 - type: mrr_at_1000 value: 56.542 - type: mrr_at_3 value: 51.980000000000004 - type: mrr_at_5 value: 54.208999999999996 - type: ndcg_at_1 value: 40.754000000000005 - type: ndcg_at_10 value: 63.605000000000004 - type: ndcg_at_100 value: 66.05199999999999 - type: ndcg_at_1000 value: 66.12 - type: ndcg_at_3 value: 55.708 - type: ndcg_at_5 value: 59.452000000000005 - type: precision_at_1 value: 40.754000000000005 - type: precision_at_10 value: 8.841000000000001 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.238 - type: precision_at_5 value: 15.149000000000001 - type: recall_at_1 value: 40.754000000000005 - type: recall_at_10 value: 88.407 - type: recall_at_100 value: 99.14699999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 66.714 - type: recall_at_5 value: 75.747 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 48.74884539679369 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 42.8075893810716 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 62.128470519187736 - type: mrr value: 74.28065778481289 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 89.24629081484655 - type: cos_sim_spearman value: 86.93752309911496 - type: euclidean_pearson value: 87.58589628573816 - type: euclidean_spearman value: 88.05622328825284 - type: manhattan_pearson value: 87.5594959805773 - type: manhattan_spearman value: 88.19658793233961 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 86.9512987012987 - type: f1 value: 86.92515357973708 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 39.10263762928872 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 36.69711517426737 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 32.327 - type: map_at_10 value: 44.099 - type: map_at_100 value: 45.525 - type: map_at_1000 value: 45.641999999999996 - type: map_at_3 value: 40.47 - type: map_at_5 value: 42.36 - type: mrr_at_1 value: 39.199 - type: mrr_at_10 value: 49.651 - type: mrr_at_100 value: 50.29 - type: mrr_at_1000 value: 50.329 - type: mrr_at_3 value: 46.924 - type: mrr_at_5 value: 48.548 - type: ndcg_at_1 value: 39.199 - type: ndcg_at_10 value: 50.773 - type: ndcg_at_100 value: 55.67999999999999 - type: ndcg_at_1000 value: 57.495 - type: ndcg_at_3 value: 45.513999999999996 - type: ndcg_at_5 value: 47.703 - type: precision_at_1 value: 39.199 - type: precision_at_10 value: 9.914000000000001 - type: precision_at_100 value: 1.5310000000000001 - type: precision_at_1000 value: 0.198 - type: precision_at_3 value: 21.984 - type: precision_at_5 value: 15.737000000000002 - type: recall_at_1 value: 32.327 - type: recall_at_10 value: 63.743 - type: recall_at_100 value: 84.538 - type: recall_at_1000 value: 96.089 - type: recall_at_3 value: 48.065000000000005 - type: recall_at_5 value: 54.519 - type: map_at_1 value: 32.671 - type: map_at_10 value: 42.954 - type: map_at_100 value: 44.151 - type: map_at_1000 value: 44.287 - type: map_at_3 value: 39.912 - type: map_at_5 value: 41.798 - type: mrr_at_1 value: 41.465 - type: mrr_at_10 value: 49.351 - type: mrr_at_100 value: 49.980000000000004 - type: mrr_at_1000 value: 50.016000000000005 - type: mrr_at_3 value: 47.144000000000005 - type: mrr_at_5 value: 48.592999999999996 - type: ndcg_at_1 value: 41.465 - type: ndcg_at_10 value: 48.565999999999995 - type: ndcg_at_100 value: 52.76499999999999 - type: ndcg_at_1000 value: 54.749 - type: ndcg_at_3 value: 44.57 - type: ndcg_at_5 value: 46.759 - type: precision_at_1 value: 41.465 - type: precision_at_10 value: 9.107999999999999 - type: precision_at_100 value: 1.433 - type: precision_at_1000 value: 0.191 - type: precision_at_3 value: 21.423000000000002 - type: precision_at_5 value: 15.414 - type: recall_at_1 value: 32.671 - type: recall_at_10 value: 57.738 - type: recall_at_100 value: 75.86500000000001 - type: recall_at_1000 value: 88.36 - type: recall_at_3 value: 45.626 - type: recall_at_5 value: 51.812000000000005 - type: map_at_1 value: 41.185 - type: map_at_10 value: 53.929 - type: map_at_100 value: 54.92 - type: map_at_1000 value: 54.967999999999996 - type: map_at_3 value: 50.70400000000001 - type: map_at_5 value: 52.673 - type: mrr_at_1 value: 47.398 - type: mrr_at_10 value: 57.303000000000004 - type: mrr_at_100 value: 57.959 - type: mrr_at_1000 value: 57.985 - type: mrr_at_3 value: 54.932 - type: mrr_at_5 value: 56.464999999999996 - type: ndcg_at_1 value: 47.398 - type: ndcg_at_10 value: 59.653 - type: ndcg_at_100 value: 63.627 - type: ndcg_at_1000 value: 64.596 - type: ndcg_at_3 value: 54.455 - type: ndcg_at_5 value: 57.245000000000005 - type: precision_at_1 value: 47.398 - type: precision_at_10 value: 9.524000000000001 - type: precision_at_100 value: 1.243 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 24.389 - type: precision_at_5 value: 16.752 - type: recall_at_1 value: 41.185 - type: recall_at_10 value: 73.193 - type: recall_at_100 value: 90.357 - type: recall_at_1000 value: 97.253 - type: recall_at_3 value: 59.199999999999996 - type: recall_at_5 value: 66.118 - type: map_at_1 value: 27.27 - type: map_at_10 value: 36.223 - type: map_at_100 value: 37.218 - type: map_at_1000 value: 37.293 - type: map_at_3 value: 33.503 - type: map_at_5 value: 35.097 - type: mrr_at_1 value: 29.492 - type: mrr_at_10 value: 38.352000000000004 - type: mrr_at_100 value: 39.188 - type: mrr_at_1000 value: 39.247 - type: mrr_at_3 value: 35.876000000000005 - type: mrr_at_5 value: 37.401 - type: ndcg_at_1 value: 29.492 - type: ndcg_at_10 value: 41.239 - type: ndcg_at_100 value: 46.066 - type: ndcg_at_1000 value: 47.992000000000004 - type: ndcg_at_3 value: 36.11 - type: ndcg_at_5 value: 38.772 - type: precision_at_1 value: 29.492 - type: precision_at_10 value: 6.260000000000001 - type: precision_at_100 value: 0.914 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 15.104000000000001 - type: precision_at_5 value: 10.644 - type: recall_at_1 value: 27.27 - type: recall_at_10 value: 54.589 - type: recall_at_100 value: 76.70700000000001 - type: recall_at_1000 value: 91.158 - type: recall_at_3 value: 40.974 - type: recall_at_5 value: 47.327000000000005 - type: map_at_1 value: 17.848 - type: map_at_10 value: 26.207 - type: map_at_100 value: 27.478 - type: map_at_1000 value: 27.602 - type: map_at_3 value: 23.405 - type: map_at_5 value: 24.98 - type: mrr_at_1 value: 21.891 - type: mrr_at_10 value: 31.041999999999998 - type: mrr_at_100 value: 32.092 - type: mrr_at_1000 value: 32.151999999999994 - type: mrr_at_3 value: 28.358 - type: mrr_at_5 value: 29.969 - type: ndcg_at_1 value: 21.891 - type: ndcg_at_10 value: 31.585 - type: ndcg_at_100 value: 37.531 - type: ndcg_at_1000 value: 40.256 - type: ndcg_at_3 value: 26.508 - type: ndcg_at_5 value: 28.894 - type: precision_at_1 value: 21.891 - type: precision_at_10 value: 5.795999999999999 - type: precision_at_100 value: 0.9990000000000001 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 12.769 - type: precision_at_5 value: 9.279 - type: recall_at_1 value: 17.848 - type: recall_at_10 value: 43.452 - type: recall_at_100 value: 69.216 - type: recall_at_1000 value: 88.102 - type: recall_at_3 value: 29.18 - type: recall_at_5 value: 35.347 - type: map_at_1 value: 30.94 - type: map_at_10 value: 41.248000000000005 - type: map_at_100 value: 42.495 - type: map_at_1000 value: 42.602000000000004 - type: map_at_3 value: 37.939 - type: map_at_5 value: 39.924 - type: mrr_at_1 value: 37.824999999999996 - type: mrr_at_10 value: 47.041 - type: mrr_at_100 value: 47.83 - type: mrr_at_1000 value: 47.878 - type: mrr_at_3 value: 44.466 - type: mrr_at_5 value: 46.111999999999995 - type: ndcg_at_1 value: 37.824999999999996 - type: ndcg_at_10 value: 47.223 - type: ndcg_at_100 value: 52.394 - type: ndcg_at_1000 value: 54.432 - type: ndcg_at_3 value: 42.032000000000004 - type: ndcg_at_5 value: 44.772 - type: precision_at_1 value: 37.824999999999996 - type: precision_at_10 value: 8.393 - type: precision_at_100 value: 1.2890000000000001 - type: precision_at_1000 value: 0.164 - type: precision_at_3 value: 19.698 - type: precision_at_5 value: 14.013 - type: recall_at_1 value: 30.94 - type: recall_at_10 value: 59.316 - type: recall_at_100 value: 80.783 - type: recall_at_1000 value: 94.15400000000001 - type: recall_at_3 value: 44.712 - type: recall_at_5 value: 51.932 - type: map_at_1 value: 27.104 - type: map_at_10 value: 36.675999999999995 - type: map_at_100 value: 38.076 - type: map_at_1000 value: 38.189 - type: map_at_3 value: 33.733999999999995 - type: map_at_5 value: 35.287 - type: mrr_at_1 value: 33.904 - type: mrr_at_10 value: 42.55 - type: mrr_at_100 value: 43.434 - type: mrr_at_1000 value: 43.494 - type: mrr_at_3 value: 40.126 - type: mrr_at_5 value: 41.473 - type: ndcg_at_1 value: 33.904 - type: ndcg_at_10 value: 42.414 - type: ndcg_at_100 value: 48.203 - type: ndcg_at_1000 value: 50.437 - type: ndcg_at_3 value: 37.633 - type: ndcg_at_5 value: 39.67 - type: precision_at_1 value: 33.904 - type: precision_at_10 value: 7.82 - type: precision_at_100 value: 1.2409999999999999 - type: precision_at_1000 value: 0.159 - type: precision_at_3 value: 17.884 - type: precision_at_5 value: 12.648000000000001 - type: recall_at_1 value: 27.104 - type: recall_at_10 value: 53.563 - type: recall_at_100 value: 78.557 - type: recall_at_1000 value: 93.533 - type: recall_at_3 value: 39.92 - type: recall_at_5 value: 45.457 - type: map_at_1 value: 27.707749999999997 - type: map_at_10 value: 36.961 - type: map_at_100 value: 38.158833333333334 - type: map_at_1000 value: 38.270333333333326 - type: map_at_3 value: 34.07183333333334 - type: map_at_5 value: 35.69533333333334 - type: mrr_at_1 value: 32.81875 - type: mrr_at_10 value: 41.293 - type: mrr_at_100 value: 42.116499999999995 - type: mrr_at_1000 value: 42.170249999999996 - type: mrr_at_3 value: 38.83983333333333 - type: mrr_at_5 value: 40.29775 - type: ndcg_at_1 value: 32.81875 - type: ndcg_at_10 value: 42.355 - type: ndcg_at_100 value: 47.41374999999999 - type: ndcg_at_1000 value: 49.5805 - type: ndcg_at_3 value: 37.52825 - type: ndcg_at_5 value: 39.83266666666667 - type: precision_at_1 value: 32.81875 - type: precision_at_10 value: 7.382416666666666 - type: precision_at_100 value: 1.1640833333333334 - type: precision_at_1000 value: 0.15383333333333335 - type: precision_at_3 value: 17.134166666666665 - type: precision_at_5 value: 12.174833333333336 - type: recall_at_1 value: 27.707749999999997 - type: recall_at_10 value: 53.945 - type: recall_at_100 value: 76.191 - type: recall_at_1000 value: 91.101 - type: recall_at_3 value: 40.39083333333334 - type: recall_at_5 value: 46.40083333333333 - type: map_at_1 value: 26.482 - type: map_at_10 value: 33.201 - type: map_at_100 value: 34.107 - type: map_at_1000 value: 34.197 - type: map_at_3 value: 31.174000000000003 - type: map_at_5 value: 32.279 - type: mrr_at_1 value: 29.908 - type: mrr_at_10 value: 36.235 - type: mrr_at_100 value: 37.04 - type: mrr_at_1000 value: 37.105 - type: mrr_at_3 value: 34.355999999999995 - type: mrr_at_5 value: 35.382999999999996 - type: ndcg_at_1 value: 29.908 - type: ndcg_at_10 value: 37.325 - type: ndcg_at_100 value: 41.795 - type: ndcg_at_1000 value: 44.105 - type: ndcg_at_3 value: 33.555 - type: ndcg_at_5 value: 35.266999999999996 - type: precision_at_1 value: 29.908 - type: precision_at_10 value: 5.721 - type: precision_at_100 value: 0.8630000000000001 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 14.008000000000001 - type: precision_at_5 value: 9.754999999999999 - type: recall_at_1 value: 26.482 - type: recall_at_10 value: 47.072 - type: recall_at_100 value: 67.27 - type: recall_at_1000 value: 84.371 - type: recall_at_3 value: 36.65 - type: recall_at_5 value: 40.774 - type: map_at_1 value: 18.815 - type: map_at_10 value: 26.369999999999997 - type: map_at_100 value: 27.458 - type: map_at_1000 value: 27.588 - type: map_at_3 value: 23.990000000000002 - type: map_at_5 value: 25.345000000000002 - type: mrr_at_1 value: 22.953000000000003 - type: mrr_at_10 value: 30.342999999999996 - type: mrr_at_100 value: 31.241000000000003 - type: mrr_at_1000 value: 31.319000000000003 - type: mrr_at_3 value: 28.16 - type: mrr_at_5 value: 29.406 - type: ndcg_at_1 value: 22.953000000000003 - type: ndcg_at_10 value: 31.151 - type: ndcg_at_100 value: 36.309000000000005 - type: ndcg_at_1000 value: 39.227000000000004 - type: ndcg_at_3 value: 26.921 - type: ndcg_at_5 value: 28.938000000000002 - type: precision_at_1 value: 22.953000000000003 - type: precision_at_10 value: 5.602 - type: precision_at_100 value: 0.9530000000000001 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 12.606 - type: precision_at_5 value: 9.119 - type: recall_at_1 value: 18.815 - type: recall_at_10 value: 41.574 - type: recall_at_100 value: 64.84400000000001 - type: recall_at_1000 value: 85.406 - type: recall_at_3 value: 29.694 - type: recall_at_5 value: 34.935 - type: map_at_1 value: 27.840999999999998 - type: map_at_10 value: 36.797999999999995 - type: map_at_100 value: 37.993 - type: map_at_1000 value: 38.086999999999996 - type: map_at_3 value: 34.050999999999995 - type: map_at_5 value: 35.379 - type: mrr_at_1 value: 32.649 - type: mrr_at_10 value: 41.025 - type: mrr_at_100 value: 41.878 - type: mrr_at_1000 value: 41.929 - type: mrr_at_3 value: 38.573 - type: mrr_at_5 value: 39.715 - type: ndcg_at_1 value: 32.649 - type: ndcg_at_10 value: 42.142 - type: ndcg_at_100 value: 47.558 - type: ndcg_at_1000 value: 49.643 - type: ndcg_at_3 value: 37.12 - type: ndcg_at_5 value: 38.983000000000004 - type: precision_at_1 value: 32.649 - type: precision_at_10 value: 7.08 - type: precision_at_100 value: 1.1039999999999999 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 16.698 - type: precision_at_5 value: 11.511000000000001 - type: recall_at_1 value: 27.840999999999998 - type: recall_at_10 value: 54.245 - type: recall_at_100 value: 77.947 - type: recall_at_1000 value: 92.36999999999999 - type: recall_at_3 value: 40.146 - type: recall_at_5 value: 44.951 - type: map_at_1 value: 26.529000000000003 - type: map_at_10 value: 35.010000000000005 - type: map_at_100 value: 36.647 - type: map_at_1000 value: 36.857 - type: map_at_3 value: 31.968000000000004 - type: map_at_5 value: 33.554 - type: mrr_at_1 value: 31.818 - type: mrr_at_10 value: 39.550999999999995 - type: mrr_at_100 value: 40.54 - type: mrr_at_1000 value: 40.596 - type: mrr_at_3 value: 36.726 - type: mrr_at_5 value: 38.416 - type: ndcg_at_1 value: 31.818 - type: ndcg_at_10 value: 40.675 - type: ndcg_at_100 value: 46.548 - type: ndcg_at_1000 value: 49.126 - type: ndcg_at_3 value: 35.829 - type: ndcg_at_5 value: 38.0 - type: precision_at_1 value: 31.818 - type: precision_at_10 value: 7.826 - type: precision_at_100 value: 1.538 - type: precision_at_1000 value: 0.24 - type: precision_at_3 value: 16.601 - type: precision_at_5 value: 12.095 - type: recall_at_1 value: 26.529000000000003 - type: recall_at_10 value: 51.03 - type: recall_at_100 value: 77.556 - type: recall_at_1000 value: 93.804 - type: recall_at_3 value: 36.986000000000004 - type: recall_at_5 value: 43.096000000000004 - type: map_at_1 value: 23.480999999999998 - type: map_at_10 value: 30.817 - type: map_at_100 value: 31.838 - type: map_at_1000 value: 31.932 - type: map_at_3 value: 28.011999999999997 - type: map_at_5 value: 29.668 - type: mrr_at_1 value: 25.323 - type: mrr_at_10 value: 33.072 - type: mrr_at_100 value: 33.926 - type: mrr_at_1000 value: 33.993 - type: mrr_at_3 value: 30.436999999999998 - type: mrr_at_5 value: 32.092 - type: ndcg_at_1 value: 25.323 - type: ndcg_at_10 value: 35.514 - type: ndcg_at_100 value: 40.489000000000004 - type: ndcg_at_1000 value: 42.908 - type: ndcg_at_3 value: 30.092000000000002 - type: ndcg_at_5 value: 32.989000000000004 - type: precision_at_1 value: 25.323 - type: precision_at_10 value: 5.545 - type: precision_at_100 value: 0.861 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 12.446 - type: precision_at_5 value: 9.131 - type: recall_at_1 value: 23.480999999999998 - type: recall_at_10 value: 47.825 - type: recall_at_100 value: 70.652 - type: recall_at_1000 value: 88.612 - type: recall_at_3 value: 33.537 - type: recall_at_5 value: 40.542 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 13.333999999999998 - type: map_at_10 value: 22.524 - type: map_at_100 value: 24.506 - type: map_at_1000 value: 24.715 - type: map_at_3 value: 19.022 - type: map_at_5 value: 20.693 - type: mrr_at_1 value: 29.186 - type: mrr_at_10 value: 41.22 - type: mrr_at_100 value: 42.16 - type: mrr_at_1000 value: 42.192 - type: mrr_at_3 value: 38.013000000000005 - type: mrr_at_5 value: 39.704 - type: ndcg_at_1 value: 29.186 - type: ndcg_at_10 value: 31.167 - type: ndcg_at_100 value: 38.879000000000005 - type: ndcg_at_1000 value: 42.376000000000005 - type: ndcg_at_3 value: 25.817 - type: ndcg_at_5 value: 27.377000000000002 - type: precision_at_1 value: 29.186 - type: precision_at_10 value: 9.693999999999999 - type: precision_at_100 value: 1.8030000000000002 - type: precision_at_1000 value: 0.246 - type: precision_at_3 value: 19.11 - type: precision_at_5 value: 14.344999999999999 - type: recall_at_1 value: 13.333999999999998 - type: recall_at_10 value: 37.092000000000006 - type: recall_at_100 value: 63.651 - type: recall_at_1000 value: 83.05 - type: recall_at_3 value: 23.74 - type: recall_at_5 value: 28.655 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 9.151 - type: map_at_10 value: 19.653000000000002 - type: map_at_100 value: 28.053 - type: map_at_1000 value: 29.709000000000003 - type: map_at_3 value: 14.191 - type: map_at_5 value: 16.456 - type: mrr_at_1 value: 66.25 - type: mrr_at_10 value: 74.4 - type: mrr_at_100 value: 74.715 - type: mrr_at_1000 value: 74.726 - type: mrr_at_3 value: 72.417 - type: mrr_at_5 value: 73.667 - type: ndcg_at_1 value: 54.25 - type: ndcg_at_10 value: 40.77 - type: ndcg_at_100 value: 46.359 - type: ndcg_at_1000 value: 54.193000000000005 - type: ndcg_at_3 value: 44.832 - type: ndcg_at_5 value: 42.63 - type: precision_at_1 value: 66.25 - type: precision_at_10 value: 32.175 - type: precision_at_100 value: 10.668 - type: precision_at_1000 value: 2.067 - type: precision_at_3 value: 47.667 - type: precision_at_5 value: 41.3 - type: recall_at_1 value: 9.151 - type: recall_at_10 value: 25.003999999999998 - type: recall_at_100 value: 52.976 - type: recall_at_1000 value: 78.315 - type: recall_at_3 value: 15.487 - type: recall_at_5 value: 18.999 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 51.89999999999999 - type: f1 value: 46.47777925067403 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 73.706 - type: map_at_10 value: 82.423 - type: map_at_100 value: 82.67999999999999 - type: map_at_1000 value: 82.694 - type: map_at_3 value: 81.328 - type: map_at_5 value: 82.001 - type: mrr_at_1 value: 79.613 - type: mrr_at_10 value: 87.07000000000001 - type: mrr_at_100 value: 87.169 - type: mrr_at_1000 value: 87.17 - type: mrr_at_3 value: 86.404 - type: mrr_at_5 value: 86.856 - type: ndcg_at_1 value: 79.613 - type: ndcg_at_10 value: 86.289 - type: ndcg_at_100 value: 87.201 - type: ndcg_at_1000 value: 87.428 - type: ndcg_at_3 value: 84.625 - type: ndcg_at_5 value: 85.53699999999999 - type: precision_at_1 value: 79.613 - type: precision_at_10 value: 10.399 - type: precision_at_100 value: 1.1079999999999999 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 32.473 - type: precision_at_5 value: 20.132 - type: recall_at_1 value: 73.706 - type: recall_at_10 value: 93.559 - type: recall_at_100 value: 97.188 - type: recall_at_1000 value: 98.555 - type: recall_at_3 value: 88.98700000000001 - type: recall_at_5 value: 91.373 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 19.841 - type: map_at_10 value: 32.643 - type: map_at_100 value: 34.575 - type: map_at_1000 value: 34.736 - type: map_at_3 value: 28.317999999999998 - type: map_at_5 value: 30.964000000000002 - type: mrr_at_1 value: 39.660000000000004 - type: mrr_at_10 value: 48.620000000000005 - type: mrr_at_100 value: 49.384 - type: mrr_at_1000 value: 49.415 - type: mrr_at_3 value: 45.988 - type: mrr_at_5 value: 47.361 - type: ndcg_at_1 value: 39.660000000000004 - type: ndcg_at_10 value: 40.646 - type: ndcg_at_100 value: 47.657 - type: ndcg_at_1000 value: 50.428 - type: ndcg_at_3 value: 36.689 - type: ndcg_at_5 value: 38.211 - type: precision_at_1 value: 39.660000000000004 - type: precision_at_10 value: 11.235000000000001 - type: precision_at_100 value: 1.8530000000000002 - type: precision_at_1000 value: 0.23600000000000002 - type: precision_at_3 value: 24.587999999999997 - type: precision_at_5 value: 18.395 - type: recall_at_1 value: 19.841 - type: recall_at_10 value: 48.135 - type: recall_at_100 value: 74.224 - type: recall_at_1000 value: 90.826 - type: recall_at_3 value: 33.536 - type: recall_at_5 value: 40.311 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 40.358 - type: map_at_10 value: 64.497 - type: map_at_100 value: 65.362 - type: map_at_1000 value: 65.41900000000001 - type: map_at_3 value: 61.06700000000001 - type: map_at_5 value: 63.317 - type: mrr_at_1 value: 80.716 - type: mrr_at_10 value: 86.10799999999999 - type: mrr_at_100 value: 86.265 - type: mrr_at_1000 value: 86.27 - type: mrr_at_3 value: 85.271 - type: mrr_at_5 value: 85.82499999999999 - type: ndcg_at_1 value: 80.716 - type: ndcg_at_10 value: 72.597 - type: ndcg_at_100 value: 75.549 - type: ndcg_at_1000 value: 76.61 - type: ndcg_at_3 value: 67.874 - type: ndcg_at_5 value: 70.655 - type: precision_at_1 value: 80.716 - type: precision_at_10 value: 15.148 - type: precision_at_100 value: 1.745 - type: precision_at_1000 value: 0.188 - type: precision_at_3 value: 43.597 - type: precision_at_5 value: 28.351 - type: recall_at_1 value: 40.358 - type: recall_at_10 value: 75.739 - type: recall_at_100 value: 87.259 - type: recall_at_1000 value: 94.234 - type: recall_at_3 value: 65.39500000000001 - type: recall_at_5 value: 70.878 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 90.80799999999998 - type: ap value: 86.81350378180757 - type: f1 value: 90.79901248314215 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 22.096 - type: map_at_10 value: 34.384 - type: map_at_100 value: 35.541 - type: map_at_1000 value: 35.589999999999996 - type: map_at_3 value: 30.496000000000002 - type: map_at_5 value: 32.718 - type: mrr_at_1 value: 22.750999999999998 - type: mrr_at_10 value: 35.024 - type: mrr_at_100 value: 36.125 - type: mrr_at_1000 value: 36.168 - type: mrr_at_3 value: 31.225 - type: mrr_at_5 value: 33.416000000000004 - type: ndcg_at_1 value: 22.750999999999998 - type: ndcg_at_10 value: 41.351 - type: ndcg_at_100 value: 46.92 - type: ndcg_at_1000 value: 48.111 - type: ndcg_at_3 value: 33.439 - type: ndcg_at_5 value: 37.407000000000004 - type: precision_at_1 value: 22.750999999999998 - type: precision_at_10 value: 6.564 - type: precision_at_100 value: 0.935 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.288 - type: precision_at_5 value: 10.581999999999999 - type: recall_at_1 value: 22.096 - type: recall_at_10 value: 62.771 - type: recall_at_100 value: 88.529 - type: recall_at_1000 value: 97.55 - type: recall_at_3 value: 41.245 - type: recall_at_5 value: 50.788 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 94.16780665754673 - type: f1 value: 93.96331194859894 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 76.90606475148198 - type: f1 value: 58.58344986604187 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 76.14660390047075 - type: f1 value: 74.31533923533614 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 80.16139878950908 - type: f1 value: 80.18532656824924 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 32.949880906135085 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 31.56300351524862 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.196521894371315 - type: mrr value: 32.22644231694389 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.783 - type: map_at_10 value: 14.549000000000001 - type: map_at_100 value: 18.433 - type: map_at_1000 value: 19.949 - type: map_at_3 value: 10.936 - type: map_at_5 value: 12.514 - type: mrr_at_1 value: 47.368 - type: mrr_at_10 value: 56.42 - type: mrr_at_100 value: 56.908 - type: mrr_at_1000 value: 56.95 - type: mrr_at_3 value: 54.283 - type: mrr_at_5 value: 55.568 - type: ndcg_at_1 value: 45.666000000000004 - type: ndcg_at_10 value: 37.389 - type: ndcg_at_100 value: 34.253 - type: ndcg_at_1000 value: 43.059999999999995 - type: ndcg_at_3 value: 42.725 - type: ndcg_at_5 value: 40.193 - type: precision_at_1 value: 47.368 - type: precision_at_10 value: 27.988000000000003 - type: precision_at_100 value: 8.672 - type: precision_at_1000 value: 2.164 - type: precision_at_3 value: 40.248 - type: precision_at_5 value: 34.737 - type: recall_at_1 value: 6.783 - type: recall_at_10 value: 17.838 - type: recall_at_100 value: 33.672000000000004 - type: recall_at_1000 value: 66.166 - type: recall_at_3 value: 11.849 - type: recall_at_5 value: 14.205000000000002 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 31.698999999999998 - type: map_at_10 value: 46.556 - type: map_at_100 value: 47.652 - type: map_at_1000 value: 47.68 - type: map_at_3 value: 42.492000000000004 - type: map_at_5 value: 44.763999999999996 - type: mrr_at_1 value: 35.747 - type: mrr_at_10 value: 49.242999999999995 - type: mrr_at_100 value: 50.052 - type: mrr_at_1000 value: 50.068 - type: mrr_at_3 value: 45.867000000000004 - type: mrr_at_5 value: 47.778999999999996 - type: ndcg_at_1 value: 35.717999999999996 - type: ndcg_at_10 value: 54.14600000000001 - type: ndcg_at_100 value: 58.672999999999995 - type: ndcg_at_1000 value: 59.279 - type: ndcg_at_3 value: 46.407 - type: ndcg_at_5 value: 50.181 - type: precision_at_1 value: 35.717999999999996 - type: precision_at_10 value: 8.844000000000001 - type: precision_at_100 value: 1.139 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 20.993000000000002 - type: precision_at_5 value: 14.791000000000002 - type: recall_at_1 value: 31.698999999999998 - type: recall_at_10 value: 74.693 - type: recall_at_100 value: 94.15299999999999 - type: recall_at_1000 value: 98.585 - type: recall_at_3 value: 54.388999999999996 - type: recall_at_5 value: 63.08200000000001 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 71.283 - type: map_at_10 value: 85.24000000000001 - type: map_at_100 value: 85.882 - type: map_at_1000 value: 85.897 - type: map_at_3 value: 82.326 - type: map_at_5 value: 84.177 - type: mrr_at_1 value: 82.21000000000001 - type: mrr_at_10 value: 88.228 - type: mrr_at_100 value: 88.32 - type: mrr_at_1000 value: 88.32 - type: mrr_at_3 value: 87.323 - type: mrr_at_5 value: 87.94800000000001 - type: ndcg_at_1 value: 82.17999999999999 - type: ndcg_at_10 value: 88.9 - type: ndcg_at_100 value: 90.079 - type: ndcg_at_1000 value: 90.158 - type: ndcg_at_3 value: 86.18299999999999 - type: ndcg_at_5 value: 87.71799999999999 - type: precision_at_1 value: 82.17999999999999 - type: precision_at_10 value: 13.464 - type: precision_at_100 value: 1.533 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.693 - type: precision_at_5 value: 24.792 - type: recall_at_1 value: 71.283 - type: recall_at_10 value: 95.742 - type: recall_at_100 value: 99.67200000000001 - type: recall_at_1000 value: 99.981 - type: recall_at_3 value: 87.888 - type: recall_at_5 value: 92.24 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 56.24267063669042 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 62.88056988932578 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.903 - type: map_at_10 value: 13.202 - type: map_at_100 value: 15.5 - type: map_at_1000 value: 15.870999999999999 - type: map_at_3 value: 9.407 - type: map_at_5 value: 11.238 - type: mrr_at_1 value: 24.2 - type: mrr_at_10 value: 35.867 - type: mrr_at_100 value: 37.001 - type: mrr_at_1000 value: 37.043 - type: mrr_at_3 value: 32.5 - type: mrr_at_5 value: 34.35 - type: ndcg_at_1 value: 24.2 - type: ndcg_at_10 value: 21.731 - type: ndcg_at_100 value: 30.7 - type: ndcg_at_1000 value: 36.618 - type: ndcg_at_3 value: 20.72 - type: ndcg_at_5 value: 17.954 - type: precision_at_1 value: 24.2 - type: precision_at_10 value: 11.33 - type: precision_at_100 value: 2.4410000000000003 - type: precision_at_1000 value: 0.386 - type: precision_at_3 value: 19.667 - type: precision_at_5 value: 15.86 - type: recall_at_1 value: 4.903 - type: recall_at_10 value: 22.962 - type: recall_at_100 value: 49.563 - type: recall_at_1000 value: 78.238 - type: recall_at_3 value: 11.953 - type: recall_at_5 value: 16.067999999999998 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 84.12694254604078 - type: cos_sim_spearman value: 80.30141815181918 - type: euclidean_pearson value: 81.34015449877128 - type: euclidean_spearman value: 80.13984197010849 - type: manhattan_pearson value: 81.31767068124086 - type: manhattan_spearman value: 80.11720513114103 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.13112984010417 - type: cos_sim_spearman value: 78.03063573402875 - type: euclidean_pearson value: 83.51928418844804 - type: euclidean_spearman value: 78.4045235411144 - type: manhattan_pearson value: 83.49981637388689 - type: manhattan_spearman value: 78.4042575139372 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 82.50327987379504 - type: cos_sim_spearman value: 84.18556767756205 - type: euclidean_pearson value: 82.69684424327679 - type: euclidean_spearman value: 83.5368106038335 - type: manhattan_pearson value: 82.57967581007374 - type: manhattan_spearman value: 83.43009053133697 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 82.50756863007814 - type: cos_sim_spearman value: 82.27204331279108 - type: euclidean_pearson value: 81.39535251429741 - type: euclidean_spearman value: 81.84386626336239 - type: manhattan_pearson value: 81.34281737280695 - type: manhattan_spearman value: 81.81149375673166 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.8727714856726 - type: cos_sim_spearman value: 87.95738287792312 - type: euclidean_pearson value: 86.62920602795887 - type: euclidean_spearman value: 87.05207355381243 - type: manhattan_pearson value: 86.53587918472225 - type: manhattan_spearman value: 86.95382961029586 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 83.52240359769479 - type: cos_sim_spearman value: 85.47685776238286 - type: euclidean_pearson value: 84.25815333483058 - type: euclidean_spearman value: 85.27415639683198 - type: manhattan_pearson value: 84.29127757025637 - type: manhattan_spearman value: 85.30226224917351 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 86.42501708915708 - type: cos_sim_spearman value: 86.42276182795041 - type: euclidean_pearson value: 86.5408207354761 - type: euclidean_spearman value: 85.46096321750838 - type: manhattan_pearson value: 86.54177303026881 - type: manhattan_spearman value: 85.50313151916117 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 64.86521089250766 - type: cos_sim_spearman value: 65.94868540323003 - type: euclidean_pearson value: 67.16569626533084 - type: euclidean_spearman value: 66.37667004134917 - type: manhattan_pearson value: 67.1482365102333 - type: manhattan_spearman value: 66.53240122580029 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 84.64746265365318 - type: cos_sim_spearman value: 86.41888825906786 - type: euclidean_pearson value: 85.27453642725811 - type: euclidean_spearman value: 85.94095796602544 - type: manhattan_pearson value: 85.28643660505334 - type: manhattan_spearman value: 85.95028003260744 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.48903153618527 - type: mrr value: 96.41081503826601 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 58.594 - type: map_at_10 value: 69.296 - type: map_at_100 value: 69.782 - type: map_at_1000 value: 69.795 - type: map_at_3 value: 66.23 - type: map_at_5 value: 68.293 - type: mrr_at_1 value: 61.667 - type: mrr_at_10 value: 70.339 - type: mrr_at_100 value: 70.708 - type: mrr_at_1000 value: 70.722 - type: mrr_at_3 value: 68.0 - type: mrr_at_5 value: 69.56700000000001 - type: ndcg_at_1 value: 61.667 - type: ndcg_at_10 value: 74.039 - type: ndcg_at_100 value: 76.103 - type: ndcg_at_1000 value: 76.47800000000001 - type: ndcg_at_3 value: 68.967 - type: ndcg_at_5 value: 71.96900000000001 - type: precision_at_1 value: 61.667 - type: precision_at_10 value: 9.866999999999999 - type: precision_at_100 value: 1.097 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 27.111 - type: precision_at_5 value: 18.2 - type: recall_at_1 value: 58.594 - type: recall_at_10 value: 87.422 - type: recall_at_100 value: 96.667 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 74.217 - type: recall_at_5 value: 81.539 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.85049504950496 - type: cos_sim_ap value: 96.33111544137081 - type: cos_sim_f1 value: 92.35443037974684 - type: cos_sim_precision value: 93.53846153846153 - type: cos_sim_recall value: 91.2 - type: dot_accuracy value: 99.82376237623762 - type: dot_ap value: 95.38082527310888 - type: dot_f1 value: 90.90909090909092 - type: dot_precision value: 92.90187891440502 - type: dot_recall value: 89.0 - type: euclidean_accuracy value: 99.84851485148515 - type: euclidean_ap value: 96.32316003996347 - type: euclidean_f1 value: 92.2071392659628 - type: euclidean_precision value: 92.71991911021233 - type: euclidean_recall value: 91.7 - type: manhattan_accuracy value: 99.84851485148515 - type: manhattan_ap value: 96.3655668249217 - type: manhattan_f1 value: 92.18356026222895 - type: manhattan_precision value: 92.98067141403867 - type: manhattan_recall value: 91.4 - type: max_accuracy value: 99.85049504950496 - type: max_ap value: 96.3655668249217 - type: max_f1 value: 92.35443037974684 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 65.94861371629051 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 35.009430451385 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 54.61164066427969 - type: mrr value: 55.49710603938544 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.622620124907662 - type: cos_sim_spearman value: 31.0678351356163 - type: dot_pearson value: 30.863727693306814 - type: dot_spearman value: 31.230306567021255 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.22 - type: map_at_10 value: 2.011 - type: map_at_100 value: 10.974 - type: map_at_1000 value: 25.819 - type: map_at_3 value: 0.6649999999999999 - type: map_at_5 value: 1.076 - type: mrr_at_1 value: 86.0 - type: mrr_at_10 value: 91.8 - type: mrr_at_100 value: 91.8 - type: mrr_at_1000 value: 91.8 - type: mrr_at_3 value: 91.0 - type: mrr_at_5 value: 91.8 - type: ndcg_at_1 value: 82.0 - type: ndcg_at_10 value: 78.07300000000001 - type: ndcg_at_100 value: 58.231 - type: ndcg_at_1000 value: 51.153000000000006 - type: ndcg_at_3 value: 81.123 - type: ndcg_at_5 value: 81.059 - type: precision_at_1 value: 86.0 - type: precision_at_10 value: 83.0 - type: precision_at_100 value: 59.38 - type: precision_at_1000 value: 22.55 - type: precision_at_3 value: 87.333 - type: precision_at_5 value: 86.8 - type: recall_at_1 value: 0.22 - type: recall_at_10 value: 2.2079999999999997 - type: recall_at_100 value: 14.069 - type: recall_at_1000 value: 47.678 - type: recall_at_3 value: 0.7040000000000001 - type: recall_at_5 value: 1.161 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.809 - type: map_at_10 value: 10.394 - type: map_at_100 value: 16.598 - type: map_at_1000 value: 18.142 - type: map_at_3 value: 5.572 - type: map_at_5 value: 7.1370000000000005 - type: mrr_at_1 value: 32.653 - type: mrr_at_10 value: 46.564 - type: mrr_at_100 value: 47.469 - type: mrr_at_1000 value: 47.469 - type: mrr_at_3 value: 42.177 - type: mrr_at_5 value: 44.524 - type: ndcg_at_1 value: 30.612000000000002 - type: ndcg_at_10 value: 25.701 - type: ndcg_at_100 value: 37.532 - type: ndcg_at_1000 value: 48.757 - type: ndcg_at_3 value: 28.199999999999996 - type: ndcg_at_5 value: 25.987 - type: precision_at_1 value: 32.653 - type: precision_at_10 value: 23.469 - type: precision_at_100 value: 7.9799999999999995 - type: precision_at_1000 value: 1.5350000000000001 - type: precision_at_3 value: 29.932 - type: precision_at_5 value: 26.122 - type: recall_at_1 value: 2.809 - type: recall_at_10 value: 16.887 - type: recall_at_100 value: 48.67 - type: recall_at_1000 value: 82.89699999999999 - type: recall_at_3 value: 6.521000000000001 - type: recall_at_5 value: 9.609 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.57860000000001 - type: ap value: 13.82629211536393 - type: f1 value: 54.59860966183956 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 59.38030560271647 - type: f1 value: 59.69685552567865 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.4736717043405 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.92853311080646 - type: cos_sim_ap value: 77.67872502591382 - type: cos_sim_f1 value: 70.33941236068895 - type: cos_sim_precision value: 67.63273258645884 - type: cos_sim_recall value: 73.27176781002639 - type: dot_accuracy value: 85.79603027954938 - type: dot_ap value: 73.73786190233379 - type: dot_f1 value: 67.3437901774235 - type: dot_precision value: 65.67201604814443 - type: dot_recall value: 69.10290237467018 - type: euclidean_accuracy value: 86.94045419324074 - type: euclidean_ap value: 77.6687791535167 - type: euclidean_f1 value: 70.47209214023542 - type: euclidean_precision value: 67.7207492094381 - type: euclidean_recall value: 73.45646437994723 - type: manhattan_accuracy value: 86.87488823985218 - type: manhattan_ap value: 77.63373392430728 - type: manhattan_f1 value: 70.40920716112532 - type: manhattan_precision value: 68.31265508684864 - type: manhattan_recall value: 72.63852242744063 - type: max_accuracy value: 86.94045419324074 - type: max_ap value: 77.67872502591382 - type: max_f1 value: 70.47209214023542 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.67155664221679 - type: cos_sim_ap value: 85.64591703003417 - type: cos_sim_f1 value: 77.59531005352656 - type: cos_sim_precision value: 73.60967184801382 - type: cos_sim_recall value: 82.03726516784724 - type: dot_accuracy value: 88.41541506578181 - type: dot_ap value: 84.6482788957769 - type: dot_f1 value: 77.04748541466657 - type: dot_precision value: 74.02440754931176 - type: dot_recall value: 80.3279950723745 - type: euclidean_accuracy value: 88.63080684596576 - type: euclidean_ap value: 85.44570045321562 - type: euclidean_f1 value: 77.28769403336106 - type: euclidean_precision value: 72.90600040958427 - type: euclidean_recall value: 82.22975053895904 - type: manhattan_accuracy value: 88.59393798269105 - type: manhattan_ap value: 85.40271361038187 - type: manhattan_f1 value: 77.17606419344392 - type: manhattan_precision value: 72.4447747078295 - type: manhattan_recall value: 82.5685247921158 - type: max_accuracy value: 88.67155664221679 - type: max_ap value: 85.64591703003417 - type: max_f1 value: 77.59531005352656 --- <h1 align="center">FlagEmbedding</h1> <h4 align="center"> <p> <a href=#model-list>Model List</a> | <a href=#frequently-asked-questions>FAQ</a> | <a href=#usage>Usage</a> | <a href="#evaluation">Evaluation</a> | <a href="#train">Train</a> | <a href="#contact">Contact</a> | <a href="#citation">Citation</a> | <a href="#license">License</a> <p> </h4> More details please refer to our Github: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding). [English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md) FlagEmbedding can map any text to a low-dimensional dense vector which can be used for tasks like retrieval, classification, clustering, or semantic search. And it also can be used in vector databases for LLMs. ************* 🌟**Updates**🌟 ************* - 10/12/2023: Release [LLM-Embedder](./FlagEmbedding/llm_embedder/README.md), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Paper](https://arxiv.org/pdf/2310.07554.pdf) :fire: - 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released - 09/15/2023: The [masive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released - 09/12/2023: New models: - **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than embedding model. We recommend to use/fine-tune them to re-rank top-k documents returned by embedding models. - **update embedding model**: release `bge-*-v1.5` embedding model to alleviate the issue of the similarity distribution, and enhance its retrieval ability without instruction. <details> <summary>More</summary> <!-- ### More --> - 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add script to mine hard negatives and support adding instruction during fine-tuning. - 08/09/2023: BGE Models are integrated into **Langchain**, you can use it like [this](#using-langchain); C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard). - 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗** - 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada: - 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test dataset. </details> ## Model List `bge` is short for `BAAI general embedding`. | Model | Language | | Description | query instruction for retrieval [1] | |:-------------------------------|:--------:| :--------:| :--------:|:--------:| | [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) | | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | | | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | | | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` | [1\]: If you need to search the relevant passages to a query, we suggest to add the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages. [2\]: Different from embedding model, reranker uses question and document as input and directly output similarity instead of embedding. To balance the accuracy and time cost, cross-encoder is widely used to re-rank top-k documents retrieved by other simple models. For examples, use bge embedding model to retrieve top 100 relevant documents, and then use bge reranker to re-rank the top 100 document to get the final top-3 results. All models have been uploaded to Huggingface Hub, and you can see them at https://huggingface.co/BAAI. If you cannot open the Huggingface Hub, you also can download the models at https://model.baai.ac.cn/models . ## Frequently asked questions <details> <summary>1. How to fine-tune bge embedding model?</summary> <!-- ### How to fine-tune bge embedding model? --> Following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model. Some suggestions: - Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance. - If you pre-train bge on your data, the pre-trained model cannot be directly used to calculate similarity, and it must be fine-tuned with contrastive learning before computing similarity. - If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank top-k results. Hard negatives also are needed to fine-tune reranker. </details> <details> <summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary> <!-- ### The similarity score between two dissimilar sentences is higher than 0.5 --> **Suggest to use bge v1.5, which alleviates the issue of the similarity distribution.** Since we finetune the models by contrastive learning with a temperature of 0.01, the similarity distribution of the current BGE model is about in the interval \[0.6, 1\]. So a similarity score greater than 0.5 does not indicate that the two sentences are similar. For downstream tasks, such as passage retrieval or semantic similarity, **what matters is the relative order of the scores, not the absolute value.** If you need to filter similar sentences based on a similarity threshold, please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9). </details> <details> <summary>3. When does the query instruction need to be used</summary> <!-- ### When does the query instruction need to be used --> For the `bge-*-v1.5`, we improve its retrieval ability when not using instruction. No instruction only has a slight degradation in retrieval performance compared with using instruction. So you can generate embedding without instruction in all cases for convenience. For a retrieval task that uses short queries to find long related documents, it is recommended to add instructions for these short queries. **The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.** In all cases, the documents/passages do not need to add the instruction. </details> ## Usage ### Usage for Embedding Model Here are some examples for using `bge` models with [FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers). #### Using FlagEmbedding ``` pip install -U FlagEmbedding ``` If it doesn't work for you, you can see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more methods to install FlagEmbedding. ```python from FlagEmbedding import FlagModel sentences_1 = ["样例数据-1", "样例数据-2"] sentences_2 = ["样例数据-3", "样例数据-4"] model = FlagModel('BAAI/bge-large-zh-v1.5', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:", use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation embeddings_1 = model.encode(sentences_1) embeddings_2 = model.encode(sentences_2) similarity = embeddings_1 @ embeddings_2.T print(similarity) # for s2p(short query to long passage) retrieval task, suggest to use encode_queries() which will automatically add the instruction to each query # corpus in retrieval task can still use encode() or encode_corpus(), since they don't need instruction queries = ['query_1', 'query_2'] passages = ["样例文档-1", "样例文档-2"] q_embeddings = model.encode_queries(queries) p_embeddings = model.encode(passages) scores = q_embeddings @ p_embeddings.T ``` For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list). By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs. You also can set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable. #### Using Sentence-Transformers You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net): ``` pip install -U sentence-transformers ``` ```python from sentence_transformers import SentenceTransformer sentences_1 = ["样例数据-1", "样例数据-2"] sentences_2 = ["样例数据-3", "样例数据-4"] model = SentenceTransformer('BAAI/bge-large-zh-v1.5') embeddings_1 = model.encode(sentences_1, normalize_embeddings=True) embeddings_2 = model.encode(sentences_2, normalize_embeddings=True) similarity = embeddings_1 @ embeddings_2.T print(similarity) ``` For s2p(short query to long passage) retrieval task, each short query should start with an instruction (instructions see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list)). But the instruction is not needed for passages. ```python from sentence_transformers import SentenceTransformer queries = ['query_1', 'query_2'] passages = ["样例文档-1", "样例文档-2"] instruction = "为这个句子生成表示以用于检索相关文章:" model = SentenceTransformer('BAAI/bge-large-zh-v1.5') q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True) p_embeddings = model.encode(passages, normalize_embeddings=True) scores = q_embeddings @ p_embeddings.T ``` #### Using Langchain You can use `bge` in langchain like this: ```python from langchain.embeddings import HuggingFaceBgeEmbeddings model_name = "BAAI/bge-large-en-v1.5" model_kwargs = {'device': 'cuda'} encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity model = HuggingFaceBgeEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs, query_instruction="为这个句子生成表示以用于检索相关文章:" ) model.query_instruction = "为这个句子生成表示以用于检索相关文章:" ``` #### Using HuggingFace Transformers With the transformers package, you can use the model like this: First, you pass your input through the transformer model, then you select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding. ```python from transformers import AutoTokenizer, AutoModel import torch # Sentences we want sentence embeddings for sentences = ["样例数据-1", "样例数据-2"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5') model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5') model.eval() # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages) # encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = model_output[0][:, 0] # normalize embeddings sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:", sentence_embeddings) ``` ### Usage for Reranker Different from embedding model, reranker uses question and document as input and directly output similarity instead of embedding. You can get a relevance score by inputting query and passage to the reranker. The reranker is optimized based cross-entropy loss, so the relevance score is not bounded to a specific range. #### Using FlagEmbedding ``` pip install -U FlagEmbedding ``` Get relevance scores (higher scores indicate more relevance): ```python from FlagEmbedding import FlagReranker reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation score = reranker.compute_score(['query', 'passage']) print(score) scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]) print(scores) ``` #### Using Huggingface transformers ```python import torch from transformers import AutoModelForSequenceClassification, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large') model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large') model.eval() pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']] with torch.no_grad(): inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512) scores = model(**inputs, return_dict=True).logits.view(-1, ).float() print(scores) ``` ## Evaluation `baai-general-embedding` models achieve **state-of-the-art performance on both MTEB and C-MTEB leaderboard!** For more details and evaluation tools see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md). - **MTEB**: | Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) | |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 | | [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 | | [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 | | [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 | | [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 | | [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 | | [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 | | [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 | | [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 | | [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 | | [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 | | [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 | | [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 | | [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 | | [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 | - **C-MTEB**: We create the benchmark C-MTEB for Chinese text embedding which consists of 31 datasets from 6 tasks. Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction. | Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering | |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 | | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 | | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 | | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 | | [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 | | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 | | [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 | | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 | | [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 | | [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 | | [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 | | [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 | | [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 | | [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 | | [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 | | [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 | - **Reranking**: See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script. | Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg | |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 | | multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 | | multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 | | multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 | | m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 | | m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 | | bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 | | bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 | | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 | | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 | \* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks ## Train ### BAAI Embedding We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and train them on large-scale pairs data using contrastive learning. **You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).** We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain). Note that the goal of pre-training is to reconstruct the text, and the pre-trained model cannot be used for similarity calculation directly, it needs to be fine-tuned. More training details for bge see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md). ### BGE Reranker Cross-encoder will perform full-attention over the input pair, which is more accurate than embedding model (i.e., bi-encoder) but more time-consuming than embedding model. Therefore, it can be used to re-rank the top-k documents returned by embedding model. We train the cross-encoder on a multilingual pair data, The data format is the same as embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker). More details please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker) ## Contact If you have any question or suggestion related to this project, feel free to open an issue or pull request. You also can email Shitao Xiao([email protected]) and Zheng Liu([email protected]). ## Citation If you find this repository useful, please consider giving a star :star: and citation ``` @misc{bge_embedding, title={C-Pack: Packaged Resources To Advance General Chinese Embedding}, author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff}, year={2023}, eprint={2309.07597}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## License FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
[ "SEMANTIC_SIMILARITY", "SUMMARIZATION" ]
[ "BEAR", "BIOSSES", "SCIFACT" ]
Non_BioNLP
NouRed/medqsum-bart-large-xsum-meqsum
NouRed
summarization
[ "transformers", "pytorch", "safetensors", "bart", "text2text-generation", "summarization", "medical question answering", "medical question understanding", "consumer health question", "prompt engineering", "LLM", "en", "dataset:bigbio/meqsum", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,692
1,704
41
1
--- datasets: - bigbio/meqsum language: en library_name: transformers license: apache-2.0 tags: - summarization - bart - medical question answering - medical question understanding - consumer health question - prompt engineering - LLM widget: - text: ' SUBJECT: high inner eye pressure above 21 possible glaucoma MESSAGE: have seen inner eye pressure increase as I have begin taking Rizatriptan. I understand the med narrows blood vessels. Can this med. cause or effect the closed or wide angle issues with the eyelense/glacoma.' model-index: - name: medqsum-bart-large-xsum-meqsum results: - task: type: summarization name: Summarization dataset: name: Dataset for medical question summarization type: bigbio/meqsum split: valid metrics: - type: rogue-1 value: 54.32 name: Validation ROGUE-1 - type: rogue-2 value: 38.08 name: Validation ROGUE-2 - type: rogue-l value: 51.98 name: Validation ROGUE-L - type: rogue-l-sum value: 51.99 name: Validation ROGUE-L-SUM --- [![](https://img.shields.io/badge/GitHub-Repo-blue)](https://github.com/zekaouinoureddine/MedQSum) ## MedQSum <a href="https://github.com/zekaouinoureddine/MedQSum"> <img src="https://raw.githubusercontent.com/zekaouinoureddine/MedQSum/master/assets/models.png" alt="drawing" width="600"/> </a> ## TL;DR **medqsum-bart-large-xsum-meqsum** is the best fine-tuned model in the paper [Enhancing Large Language Models' Utility for Medical Question-Answering: A Patient Health Question Summarization Approach](https://doi.org/10.1109/SITA60746.2023.10373720), which introduces a solution to get the most out of LLMs, when answering health-related questions. We address the challenge of crafting accurate prompts by summarizing consumer health questions (CHQs) to generate clear and concise medical questions. Our approach involves fine-tuning Transformer-based models, including Flan-T5 in resource-constrained environments and three medical question summarization datasets. ## Hyperparameters ```json { "dataset_name": "MeQSum", "learning_rate": 3e-05, "model_name_or_path": "facebook/bart-large-xsum", "num_train_epochs": 4, "per_device_eval_batch_size": 4, "per_device_train_batch_size": 4, "predict_with_generate": true, } ``` ## Usage ```python from transformers import pipeline summarizer = pipeline("summarization", model="NouRed/medqsum-bart-large-xsum-meqsum") chq = '''SUBJECT: high inner eye pressure above 21 possible glaucoma MESSAGE: have seen inner eye pressure increase as I have begin taking Rizatriptan. I understand the med narrows blood vessels. Can this med. cause or effect the closed or wide angle issues with the eyelense/glacoma. ''' summarizer(chq) ``` ## Results | key | value | | --- | ----- | | eval_rouge1 | 54.32 | | eval_rouge2 | 38.08 | | eval_rougeL | 51.98 | | eval_rougeLsum | 51.99 | ## Cite This ``` @INPROCEEDINGS{10373720, author={Zekaoui, Nour Eddine and Yousfi, Siham and Mikram, Mounia and Rhanoui, Maryem}, booktitle={2023 14th International Conference on Intelligent Systems: Theories and Applications (SITA)}, title={Enhancing Large Language Models’ Utility for Medical Question-Answering: A Patient Health Question Summarization Approach}, year={2023}, volume={}, number={}, pages={1-8}, doi={10.1109/SITA60746.2023.10373720}} ```
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
[ "MEQSUM" ]
BioNLP
medspaner/EriBERTa-clinical-trials-7sgs-umls
medspaner
null
[ "pytorch", "safetensors", "roberta", "generated_from_trainer", "arxiv:2306.07373", "license:cc-by-nc-4.0", "region:us" ]
1,726
1,727
16
0
--- license: cc-by-nc-4.0 metrics: - precision - recall - f1 - accuracy tags: - generated_from_trainer widget: - text: "Criterios de inclusión: 18 a 65 años; necrosis avascular de cadera; sintomática\ \ de menos de 6 meses; capaz de otorgar consentimiento informado.\n Criterios\ \ de exclusión: embarazo, lactancia, mujer fértil sin métodos anticonceptivos\ \ adecuados; tratamiento activo con bifosfonatos; infección por VIH, hepatitis\ \ B o hepatitis C; historia de neoplasia en cualquier organo." - text: 'Recuperación de daño hepático relacionado con nutrición parenteral con ácidos omega-3 en adultos críticos: ensayo clínico aleatorizado.' - text: 'Título público: Análisis del dolor tras inyección intramuscular de penicilina con agujas de mayor calibre y anestésico local, frente a aguja tradicional sin anestésico en pacientes con sífilis' model-index: - name: EriBERTa-clinical-trials-umls-7sgs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EriBERTa-clinical-trials-umls-7sgs This medical named entity recognition model detects 7 types of semantic groups from the [Unified Medical Language System (UMLS)](https://www.nlm.nih.gov/research/umls/index.html) ([Bodenreider 2004](https://academic.oup.com/nar/article/32/suppl_1/D267/2505235)): - ANAT: body parts and anatomy (e.g. *garganta*, 'throat') - CHEM: chemical entities and pharmacological substances (e.g. *aspirina*,'aspirin') - DEVI: medical devices (e.g. *catéter*, 'catheter') - DISO: pathologic conditions (e.g. *dolor*, 'pain') - LIVB: living beings (e.g. *paciente*, 'patient') - PHYS: physiological processes (e.g. *respiración*, 'breathing') - PROC: diagnostic and therapeutic procedures, laboratory analyses and medical research activities (e.g. *cirugía*, 'surgery') The model achieves the following results on the test set (when trained with the training and development set; results are averaged over 5 evaluation rounds): - Precision: 0.881 (±0.005) - Recall: 0.896 (±0.002) - F1: 0.889 (±0.003) - Accuracy: 0.959 (±0.001) ## Model description This model adapts the pre-trained model [EriBERTa-base](https://huggingface.co/HiTZ/EriBERTa-base), presented in [De la Iglesia et al. (2023)](https://arxiv.org/abs/2306.07373). It is fine-tuned to conduct medical named entity recognition on Spanish texts about clinical trials. The model is fine-tuned on the [CT-EBM-ES corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z) vs 2. If you use this model, please, cite as follows: ``` @article{campillosetal2024,         title = {Hybrid tool for semantic annotation and concept extraction of medical texts in Spanish},         author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n},         journal = {BMC Bioinformatics}, year={2024}, publisher={BioMed Central} } ``` ## Intended uses & limitations **Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision* This model is intended for a generalist purpose, and may have bias and/or any other undesirable distortions. Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence. The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models. **Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas* La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables. Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial. El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos. ## Training and evaluation data The data used for fine-tuning are the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/) vs 2. It is a collection of 1200 texts about clinical trials studies and clinical trials announcements: - 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO) - 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos If you use the CT-EBM-ES resource, please, cite as follows: ``` @article{campillosetal-midm2021,         title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},         author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},         journal = {BMC Medical Informatics and Decision Making},         volume={21}, number={1}, pages={1--19}, year={2021}, publisher={BioMed Central} } ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: average 15.50 epochs (±4.12); trained with early stopping if no improvement after 5 epochs (early stopping patience: 5) ### Training results (test set; average and standard deviation of 5 rounds with different seeds) | Precision | Recall | F1 | Accuracy | |:--------------:|:--------------:|:--------------:|:--------------:| | 0.881 (±0.005) | 0.896 (±0.002) | 0.889 (±0.003) | 0.959 (±0.001) | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4 - Tokenizers 0.11.6
[ "NAMED_ENTITY_RECOGNITION" ]
[ "SCIELO" ]
BioNLP
pruas/BENT-PubMedBERT-NER-Gene
pruas
token-classification
[ "transformers", "pytorch", "bert", "token-classification", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,673
1,709
271
13
--- language: - en license: apache-2.0 pipeline_tag: token-classification --- Named Entity Recognition (NER) model to recognize gene and protein entities. Please cite our work: ``` @article{NILNKER2022, title = {NILINKER: Attention-based approach to NIL Entity Linking}, journal = {Journal of Biomedical Informatics}, volume = {132}, pages = {104137}, year = {2022}, issn = {1532-0464}, doi = {https://doi.org/10.1016/j.jbi.2022.104137}, url = {https://www.sciencedirect.com/science/article/pii/S1532046422001526}, author = {Pedro Ruas and Francisco M. Couto}, } ``` [PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) fine-tuned on the following datasets: - [miRNA-Test-Corpus](https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/download-mirna-test-corpus.html): entity type "Genes/Proteins" - [CellFinder](https://www.informatik.hu-berlin.de/de/forschung/gebiete/wbi/resources/cellfinder/): entity type "GeneProtein" - [CoMAGC](http://biopathway.org/CoMAGC/): entity "Gene" - [CRAFT](https://github.com/UCDenver-ccp/CRAFT/tree/master/concept-annotation): entity type "PR" - [GREC Corpus](http://www.nactem.ac.uk/GREC/standoff.php): entity types "Gene", "Protein", "Protein_Complex", "Enzyme" - [JNLPBA](http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004): entity types "protein", "DNA", "RNA" - [PGxCorpus](https://www.nature.com/articles/s41597-019-0342-9): entity type "Gene_or_protein" - [FSU_PRGE](https://julielab.de/Resources/FSU_PRGE.html): entity types "protein", "protein_complex", "protein_familiy_or_group" - [BC2GM corpus](https://github.com/spyysalo/bc2gm-corpus)- [](): entity type - [CHEMPROT](https://biocreative.bioinformatics.udel.edu/resources/corpora/chemprot-corpus-biocreative-vi/): entity types "GENE-Y", "GENE-N" - [mTOR pathway event corpus](https://github.com/openbiocorpora/mtor-pathway/tree/master/original-data): entity type "Protein" - [DNA Methylation](https://github.com/openbiocorpora/dna-methylation/tree/master/original-data) - [BioNLP11ID](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/tree/master/data/BioNLP11ID-ggp-IOB): entity type "Gene/protein" - [BioNLP09](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/tree/master/data/BioNLP09-IOB) - [BioNLP11EPI](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/tree/master/data/BioNLP11EPI-IOB) - [BioNLP13CG](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/tree/master/data/BioNLP13CG-ggp-IOB): entity type "gene_or_gene_product" - [BioNLP13GE](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/tree/master/data/BioNLP13GE-IOB): entity type "Protein" - [BioNLP13PC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/tree/master/data/BioNLP13PC-ggp-IOB): entity type "Gene_or_gene_product" - [MLEE](http://nactem.ac.uk/MLEE/): entity type "Gene_or_gene_product"
[ "NAMED_ENTITY_RECOGNITION" ]
[ "CRAFT", "CELLFINDER", "CHEMPROT", "JNLPBA", "MLEE", "MIRNA" ]
BioNLP
barisaydin/text2vec-base-multilingual
barisaydin
sentence-similarity
[ "transformers", "pytorch", "bert", "feature-extraction", "text2vec", "sentence-similarity", "mteb", "zh", "en", "de", "fr", "it", "nl", "pt", "pl", "ru", "license:apache-2.0", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,695
1,695
11
0
--- datasets: - https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-multilingual-dataset language: - zh - en - de - fr - it - nl - pt - pl - ru library_name: transformers license: apache-2.0 metrics: - spearmanr pipeline_tag: sentence-similarity tags: - text2vec - feature-extraction - sentence-similarity - transformers - mteb model-index: - name: text2vec-base-multilingual results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 70.97014925373134 - type: ap value: 33.95151328318672 - type: f1 value: 65.14740155705596 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (de) type: mteb/amazon_counterfactual config: de split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 68.69379014989293 - type: ap value: 79.68277579733802 - type: f1 value: 66.54960052336921 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en-ext) type: mteb/amazon_counterfactual config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 70.90704647676162 - type: ap value: 20.747518928580437 - type: f1 value: 58.64365465884924 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (ja) type: mteb/amazon_counterfactual config: ja split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 61.605995717344754 - type: ap value: 14.135974879487028 - type: f1 value: 49.980224800472136 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 66.103375 - type: ap value: 61.10087197664471 - type: f1 value: 65.75198509894145 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 33.134 - type: f1 value: 32.7905397597083 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (de) type: mteb/amazon_reviews_multi config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 33.388 - type: f1 value: 33.190561196873084 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (es) type: mteb/amazon_reviews_multi config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 34.824 - type: f1 value: 34.297290157740726 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 33.449999999999996 - type: f1 value: 33.08017234412433 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (ja) type: mteb/amazon_reviews_multi config: ja split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 30.046 - type: f1 value: 29.857141661482228 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 32.522 - type: f1 value: 31.854699911472174 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 32.31918856561886 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 25.503481615956137 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 57.91471462820568 - type: mrr value: 71.82990370663501 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 68.83853315193127 - type: cos_sim_spearman value: 66.16174850417771 - type: euclidean_pearson value: 56.65313897263153 - type: euclidean_spearman value: 52.69156205876939 - type: manhattan_pearson value: 56.97282154658304 - type: manhattan_spearman value: 53.167476517261015 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 78.08441558441558 - type: f1 value: 77.99825264827898 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 28.98583420521256 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 23.195091778460892 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 43.35 - type: f1 value: 38.80269436557695 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 59.348 - type: ap value: 55.75065220262251 - type: f1 value: 58.72117519082607 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 81.04879160966712 - type: f1 value: 80.86889779192701 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (de) type: mteb/mtop_domain config: de split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 78.59397013243168 - type: f1 value: 77.09902761555972 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (es) type: mteb/mtop_domain config: es split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 79.24282855236824 - type: f1 value: 78.75883867079015 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (fr) type: mteb/mtop_domain config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 76.16661446915127 - type: f1 value: 76.30204722831901 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (hi) type: mteb/mtop_domain config: hi split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 78.74506991753317 - type: f1 value: 77.50560442779701 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (th) type: mteb/mtop_domain config: th split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 77.67088607594937 - type: f1 value: 77.21442956887493 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 62.786137710898316 - type: f1 value: 46.23474201126368 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (de) type: mteb/mtop_intent config: de split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 55.285996055226825 - type: f1 value: 37.98039513682919 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (es) type: mteb/mtop_intent config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 58.67911941294196 - type: f1 value: 40.541410807124954 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (fr) type: mteb/mtop_intent config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 53.257124960851854 - type: f1 value: 38.42982319259366 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (hi) type: mteb/mtop_intent config: hi split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 59.62352097525995 - type: f1 value: 41.28886486568534 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (th) type: mteb/mtop_intent config: th split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 58.799276672694404 - type: f1 value: 43.68379466247341 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (af) type: mteb/amazon_massive_intent config: af split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 45.42030934767989 - type: f1 value: 44.12201543566376 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (am) type: mteb/amazon_massive_intent config: am split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 37.67652992602556 - type: f1 value: 35.422091900843164 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ar) type: mteb/amazon_massive_intent config: ar split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 45.02353732347007 - type: f1 value: 41.852484084738194 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (az) type: mteb/amazon_massive_intent config: az split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 48.70880968392737 - type: f1 value: 46.904360615435046 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (bn) type: mteb/amazon_massive_intent config: bn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 43.78950907868191 - type: f1 value: 41.58872353920405 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (cy) type: mteb/amazon_massive_intent config: cy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 28.759246805648957 - type: f1 value: 27.41182001374226 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (da) type: mteb/amazon_massive_intent config: da split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.74176193678547 - type: f1 value: 53.82727354182497 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (de) type: mteb/amazon_massive_intent config: de split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 51.55682582380632 - type: f1 value: 49.41963627941866 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (el) type: mteb/amazon_massive_intent config: el split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.46940147948891 - type: f1 value: 55.28178711367465 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.83322125084063 - type: f1 value: 61.836172900845554 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (es) type: mteb/amazon_massive_intent config: es split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.27505043712172 - type: f1 value: 57.642436374361154 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fa) type: mteb/amazon_massive_intent config: fa split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.05178211163417 - type: f1 value: 56.858998820504056 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fi) type: mteb/amazon_massive_intent config: fi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.357094821788834 - type: f1 value: 54.79711189260453 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fr) type: mteb/amazon_massive_intent config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.79959650302623 - type: f1 value: 57.59158671719513 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (he) type: mteb/amazon_massive_intent config: he split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 51.1768661735037 - type: f1 value: 48.886397276270515 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hi) type: mteb/amazon_massive_intent config: hi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.06455951580362 - type: f1 value: 55.01530952684585 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hu) type: mteb/amazon_massive_intent config: hu split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.3591123066577 - type: f1 value: 55.9277783370191 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hy) type: mteb/amazon_massive_intent config: hy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 52.108271687962336 - type: f1 value: 51.195023400664596 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (id) type: mteb/amazon_massive_intent config: id split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.26832548755883 - type: f1 value: 56.60774065423401 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (is) type: mteb/amazon_massive_intent config: is split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 35.806993947545394 - type: f1 value: 34.290418953173294 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (it) type: mteb/amazon_massive_intent config: it split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.27841291190315 - type: f1 value: 56.9438998642419 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ja) type: mteb/amazon_massive_intent config: ja split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.78009414929389 - type: f1 value: 59.15780842483667 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (jv) type: mteb/amazon_massive_intent config: jv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 31.153328850033624 - type: f1 value: 30.11004596099605 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ka) type: mteb/amazon_massive_intent config: ka split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 44.50235373234701 - type: f1 value: 44.040585262624745 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (km) type: mteb/amazon_massive_intent config: km split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 40.99193006052455 - type: f1 value: 39.505480119272484 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (kn) type: mteb/amazon_massive_intent config: kn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 46.95696032279758 - type: f1 value: 43.093638940785326 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ko) type: mteb/amazon_massive_intent config: ko split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.73100201748486 - type: f1 value: 52.79750744404114 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (lv) type: mteb/amazon_massive_intent config: lv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.865501008742434 - type: f1 value: 53.64798408964839 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ml) type: mteb/amazon_massive_intent config: ml split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 47.891728312037664 - type: f1 value: 45.261229414636055 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (mn) type: mteb/amazon_massive_intent config: mn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 52.2259583053127 - type: f1 value: 50.5903419246987 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ms) type: mteb/amazon_massive_intent config: ms split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.277067921990586 - type: f1 value: 52.472042479965886 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (my) type: mteb/amazon_massive_intent config: my split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 51.95696032279757 - type: f1 value: 49.79330411854258 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nb) type: mteb/amazon_massive_intent config: nb split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.63685272360457 - type: f1 value: 52.81267480650003 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nl) type: mteb/amazon_massive_intent config: nl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.451916610625425 - type: f1 value: 57.34790386645091 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.91055817081372 - type: f1 value: 56.39195048528157 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pt) type: mteb/amazon_massive_intent config: pt split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.84196368527236 - type: f1 value: 58.72244763127063 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ro) type: mteb/amazon_massive_intent config: ro split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.04102219233354 - type: f1 value: 55.67040186148946 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ru) type: mteb/amazon_massive_intent config: ru split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.01613987895091 - type: f1 value: 57.203949825484855 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sl) type: mteb/amazon_massive_intent config: sl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.35843981170141 - type: f1 value: 54.18656338999773 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sq) type: mteb/amazon_massive_intent config: sq split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.47948890383322 - type: f1 value: 54.772224557130954 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sv) type: mteb/amazon_massive_intent config: sv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.43981170141224 - type: f1 value: 56.09260971364242 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sw) type: mteb/amazon_massive_intent config: sw split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 33.9609952925353 - type: f1 value: 33.18853392353405 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ta) type: mteb/amazon_massive_intent config: ta split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 44.29388029589778 - type: f1 value: 41.51986533284474 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (te) type: mteb/amazon_massive_intent config: te split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 47.13517148621385 - type: f1 value: 43.94784138379624 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (th) type: mteb/amazon_massive_intent config: th split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.856086079354405 - type: f1 value: 56.618177384748456 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tl) type: mteb/amazon_massive_intent config: tl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 35.35978480161398 - type: f1 value: 34.060680080365046 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tr) type: mteb/amazon_massive_intent config: tr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.630127774041696 - type: f1 value: 57.46288652988266 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ur) type: mteb/amazon_massive_intent config: ur split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 52.7908540685945 - type: f1 value: 51.46934239116157 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (vi) type: mteb/amazon_massive_intent config: vi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.6469401479489 - type: f1 value: 53.9903066185816 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.85743106926698 - type: f1 value: 59.31579548450755 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-TW) type: mteb/amazon_massive_intent config: zh-TW split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.46805648957633 - type: f1 value: 57.48469733657326 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (af) type: mteb/amazon_massive_scenario config: af split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 50.86415601882985 - type: f1 value: 49.41696672602645 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (am) type: mteb/amazon_massive_scenario config: am split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 41.183591123066584 - type: f1 value: 40.04563865770774 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ar) type: mteb/amazon_massive_scenario config: ar split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 50.08069939475455 - type: f1 value: 50.724800165846126 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (az) type: mteb/amazon_massive_scenario config: az split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 51.287827841291204 - type: f1 value: 50.72873776739851 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (bn) type: mteb/amazon_massive_scenario config: bn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 46.53328850033624 - type: f1 value: 45.93317866639667 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (cy) type: mteb/amazon_massive_scenario config: cy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 34.347679892400805 - type: f1 value: 31.941581141280828 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (da) type: mteb/amazon_massive_scenario config: da split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.073301950235376 - type: f1 value: 62.228728940111054 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (de) type: mteb/amazon_massive_scenario config: de split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 56.398789509078675 - type: f1 value: 54.80778341609032 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (el) type: mteb/amazon_massive_scenario config: el split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 61.79892400806993 - type: f1 value: 60.69430756982446 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.96368527236046 - type: f1 value: 66.5893927997656 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (es) type: mteb/amazon_massive_scenario config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 62.21250840618695 - type: f1 value: 62.347177794128925 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fa) type: mteb/amazon_massive_scenario config: fa split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 62.43779421654339 - type: f1 value: 61.307701312085605 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fi) type: mteb/amazon_massive_scenario config: fi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 61.09952925353059 - type: f1 value: 60.313907927386914 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fr) type: mteb/amazon_massive_scenario config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.38601210490922 - type: f1 value: 63.05968938353488 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (he) type: mteb/amazon_massive_scenario config: he split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 56.2878278412912 - type: f1 value: 55.92927644838597 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hi) type: mteb/amazon_massive_scenario config: hi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.62878278412912 - type: f1 value: 60.25299253652635 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hu) type: mteb/amazon_massive_scenario config: hu split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.28850033624748 - type: f1 value: 62.77053246337031 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hy) type: mteb/amazon_massive_scenario config: hy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 54.875588433086754 - type: f1 value: 54.30717357279134 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (id) type: mteb/amazon_massive_scenario config: id split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 61.99394754539341 - type: f1 value: 61.73085530883037 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (is) type: mteb/amazon_massive_scenario config: is split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 38.581035642232685 - type: f1 value: 36.96287269695893 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (it) type: mteb/amazon_massive_scenario config: it split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 62.350369872225976 - type: f1 value: 61.807327324823966 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ja) type: mteb/amazon_massive_scenario config: ja split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 65.17148621385338 - type: f1 value: 65.29620144656751 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (jv) type: mteb/amazon_massive_scenario config: jv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 36.12642905178212 - type: f1 value: 35.334393048479484 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ka) type: mteb/amazon_massive_scenario config: ka split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 50.26899798251513 - type: f1 value: 49.041065960139434 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (km) type: mteb/amazon_massive_scenario config: km split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 44.24344317417619 - type: f1 value: 42.42177854872125 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (kn) type: mteb/amazon_massive_scenario config: kn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 47.370544720914594 - type: f1 value: 46.589722581465324 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ko) type: mteb/amazon_massive_scenario config: ko split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 58.89038332212508 - type: f1 value: 57.753607921990394 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (lv) type: mteb/amazon_massive_scenario config: lv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 56.506388702084756 - type: f1 value: 56.0485860423295 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ml) type: mteb/amazon_massive_scenario config: ml split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 50.06388702084734 - type: f1 value: 50.109364641824584 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (mn) type: mteb/amazon_massive_scenario config: mn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 55.053799596503026 - type: f1 value: 54.490665705666686 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ms) type: mteb/amazon_massive_scenario config: ms split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.77135171486213 - type: f1 value: 58.2808650158803 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (my) type: mteb/amazon_massive_scenario config: my split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 55.71620712844654 - type: f1 value: 53.863034882475304 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nb) type: mteb/amazon_massive_scenario config: nb split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.26227303295225 - type: f1 value: 59.86604657147016 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nl) type: mteb/amazon_massive_scenario config: nl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.3759246805649 - type: f1 value: 62.45257339288533 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 62.552118359112306 - type: f1 value: 61.354449605776765 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pt) type: mteb/amazon_massive_scenario config: pt split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 62.40753194351043 - type: f1 value: 61.98779889528889 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ro) type: mteb/amazon_massive_scenario config: ro split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.68258238063214 - type: f1 value: 60.59973978976571 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ru) type: mteb/amazon_massive_scenario config: ru split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 62.31002017484868 - type: f1 value: 62.412312268503655 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sl) type: mteb/amazon_massive_scenario config: sl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 61.429051782111635 - type: f1 value: 61.60095590401424 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sq) type: mteb/amazon_massive_scenario config: sq split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 62.229320780094156 - type: f1 value: 61.02251426747547 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sv) type: mteb/amazon_massive_scenario config: sv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.42501681237391 - type: f1 value: 63.461494430605235 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sw) type: mteb/amazon_massive_scenario config: sw split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 38.51714862138534 - type: f1 value: 37.12466722986362 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ta) type: mteb/amazon_massive_scenario config: ta split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 46.99731002017485 - type: f1 value: 45.859147049984834 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (te) type: mteb/amazon_massive_scenario config: te split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 51.01882985877605 - type: f1 value: 49.01040173136056 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (th) type: mteb/amazon_massive_scenario config: th split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.234700739744454 - type: f1 value: 62.732294595214746 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tl) type: mteb/amazon_massive_scenario config: tl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 38.72225958305312 - type: f1 value: 36.603231928120906 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tr) type: mteb/amazon_massive_scenario config: tr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.48554135843982 - type: f1 value: 63.97380562022752 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ur) type: mteb/amazon_massive_scenario config: ur split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 56.7955615332885 - type: f1 value: 55.95308241204802 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (vi) type: mteb/amazon_massive_scenario config: vi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 57.06455951580362 - type: f1 value: 56.95570494066693 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 65.8338937457969 - type: f1 value: 65.6778746906008 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-TW) type: mteb/amazon_massive_scenario config: zh-TW split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.369199731002034 - type: f1 value: 63.527650116059945 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 29.442504112215538 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 26.16062814161053 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 65.319 - type: map_at_10 value: 78.72 - type: map_at_100 value: 79.44600000000001 - type: map_at_1000 value: 79.469 - type: map_at_3 value: 75.693 - type: map_at_5 value: 77.537 - type: mrr_at_1 value: 75.24 - type: mrr_at_10 value: 82.304 - type: mrr_at_100 value: 82.485 - type: mrr_at_1000 value: 82.489 - type: mrr_at_3 value: 81.002 - type: mrr_at_5 value: 81.817 - type: ndcg_at_1 value: 75.26 - type: ndcg_at_10 value: 83.07 - type: ndcg_at_100 value: 84.829 - type: ndcg_at_1000 value: 85.087 - type: ndcg_at_3 value: 79.67699999999999 - type: ndcg_at_5 value: 81.42 - type: precision_at_1 value: 75.26 - type: precision_at_10 value: 12.697 - type: precision_at_100 value: 1.4829999999999999 - type: precision_at_1000 value: 0.154 - type: precision_at_3 value: 34.849999999999994 - type: precision_at_5 value: 23.054 - type: recall_at_1 value: 65.319 - type: recall_at_10 value: 91.551 - type: recall_at_100 value: 98.053 - type: recall_at_1000 value: 99.516 - type: recall_at_3 value: 81.819 - type: recall_at_5 value: 86.66199999999999 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 31.249791587189996 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 43.302922383029816 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 84.80670811345861 - type: cos_sim_spearman value: 79.97373018384307 - type: euclidean_pearson value: 83.40205934125837 - type: euclidean_spearman value: 79.73331008251854 - type: manhattan_pearson value: 83.3320983393412 - type: manhattan_spearman value: 79.677919746045 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.3816087627948 - type: cos_sim_spearman value: 80.91314664846955 - type: euclidean_pearson value: 85.10603071031096 - type: euclidean_spearman value: 79.42663939501841 - type: manhattan_pearson value: 85.16096376014066 - type: manhattan_spearman value: 79.51936545543191 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 80.44665329940209 - type: cos_sim_spearman value: 82.86479010707745 - type: euclidean_pearson value: 84.06719627734672 - type: euclidean_spearman value: 84.9356099976297 - type: manhattan_pearson value: 84.10370009572624 - type: manhattan_spearman value: 84.96828040546536 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 86.05704260568437 - type: cos_sim_spearman value: 87.36399473803172 - type: euclidean_pearson value: 86.8895170159388 - type: euclidean_spearman value: 87.16246440866921 - type: manhattan_pearson value: 86.80814774538997 - type: manhattan_spearman value: 87.09320142699522 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 85.97825118945852 - type: cos_sim_spearman value: 88.31438033558268 - type: euclidean_pearson value: 87.05174694758092 - type: euclidean_spearman value: 87.80659468392355 - type: manhattan_pearson value: 86.98831322198717 - type: manhattan_spearman value: 87.72820615049285 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 78.68745420126719 - type: cos_sim_spearman value: 81.6058424699445 - type: euclidean_pearson value: 81.16540133861879 - type: euclidean_spearman value: 81.86377535458067 - type: manhattan_pearson value: 81.13813317937021 - type: manhattan_spearman value: 81.87079962857256 - task: type: STS dataset: name: MTEB STS17 (ko-ko) type: mteb/sts17-crosslingual-sts config: ko-ko split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 68.06192660936868 - type: cos_sim_spearman value: 68.2376353514075 - type: euclidean_pearson value: 60.68326946956215 - type: euclidean_spearman value: 59.19352349785952 - type: manhattan_pearson value: 60.6592944683418 - type: manhattan_spearman value: 59.167534419270865 - task: type: STS dataset: name: MTEB STS17 (ar-ar) type: mteb/sts17-crosslingual-sts config: ar-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 76.78098264855684 - type: cos_sim_spearman value: 78.02670452969812 - type: euclidean_pearson value: 77.26694463661255 - type: euclidean_spearman value: 77.47007626009587 - type: manhattan_pearson value: 77.25070088632027 - type: manhattan_spearman value: 77.36368265830724 - task: type: STS dataset: name: MTEB STS17 (en-ar) type: mteb/sts17-crosslingual-sts config: en-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 78.45418506379532 - type: cos_sim_spearman value: 78.60412019902428 - type: euclidean_pearson value: 79.90303710850512 - type: euclidean_spearman value: 78.67123625004957 - type: manhattan_pearson value: 80.09189580897753 - type: manhattan_spearman value: 79.02484481441483 - task: type: STS dataset: name: MTEB STS17 (en-de) type: mteb/sts17-crosslingual-sts config: en-de split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 82.35556731232779 - type: cos_sim_spearman value: 81.48249735354844 - type: euclidean_pearson value: 81.66748026636621 - type: euclidean_spearman value: 80.35571574338547 - type: manhattan_pearson value: 81.38214732806365 - type: manhattan_spearman value: 79.9018202958774 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 86.4527703176897 - type: cos_sim_spearman value: 85.81084095829584 - type: euclidean_pearson value: 86.43489162324457 - type: euclidean_spearman value: 85.27110976093296 - type: manhattan_pearson value: 86.43674259444512 - type: manhattan_spearman value: 85.05719308026032 - task: type: STS dataset: name: MTEB STS17 (en-tr) type: mteb/sts17-crosslingual-sts config: en-tr split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 76.00411240034492 - type: cos_sim_spearman value: 76.33887356560854 - type: euclidean_pearson value: 76.81730660019446 - type: euclidean_spearman value: 75.04432185451306 - type: manhattan_pearson value: 77.22298813168995 - type: manhattan_spearman value: 75.56420330256725 - task: type: STS dataset: name: MTEB STS17 (es-en) type: mteb/sts17-crosslingual-sts config: es-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 79.1447136836213 - type: cos_sim_spearman value: 81.80823850788917 - type: euclidean_pearson value: 80.84505734814422 - type: euclidean_spearman value: 81.714168092736 - type: manhattan_pearson value: 80.84713816174187 - type: manhattan_spearman value: 81.61267814749516 - task: type: STS dataset: name: MTEB STS17 (es-es) type: mteb/sts17-crosslingual-sts config: es-es split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.01257457052873 - type: cos_sim_spearman value: 87.91146458004216 - type: euclidean_pearson value: 88.36771859717994 - type: euclidean_spearman value: 87.73182474597515 - type: manhattan_pearson value: 88.26551451003671 - type: manhattan_spearman value: 87.71675151388992 - task: type: STS dataset: name: MTEB STS17 (fr-en) type: mteb/sts17-crosslingual-sts config: fr-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 79.20121618382373 - type: cos_sim_spearman value: 78.05794691968603 - type: euclidean_pearson value: 79.93819925682054 - type: euclidean_spearman value: 78.00586118701553 - type: manhattan_pearson value: 80.05598625820885 - type: manhattan_spearman value: 78.04802948866832 - task: type: STS dataset: name: MTEB STS17 (it-en) type: mteb/sts17-crosslingual-sts config: it-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 81.51743373871778 - type: cos_sim_spearman value: 80.98266651818703 - type: euclidean_pearson value: 81.11875722505269 - type: euclidean_spearman value: 79.45188413284538 - type: manhattan_pearson value: 80.7988457619225 - type: manhattan_spearman value: 79.49643569311485 - task: type: STS dataset: name: MTEB STS17 (nl-en) type: mteb/sts17-crosslingual-sts config: nl-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 81.78679924046351 - type: cos_sim_spearman value: 80.9986574147117 - type: euclidean_pearson value: 82.09130079135713 - type: euclidean_spearman value: 80.66215667390159 - type: manhattan_pearson value: 82.0328610549654 - type: manhattan_spearman value: 80.31047226932408 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 58.08082172994642 - type: cos_sim_spearman value: 62.9940530222459 - type: euclidean_pearson value: 58.47927303460365 - type: euclidean_spearman value: 60.8440317609258 - type: manhattan_pearson value: 58.32438211697841 - type: manhattan_spearman value: 60.69642636776064 - task: type: STS dataset: name: MTEB STS22 (de) type: mteb/sts22-crosslingual-sts config: de split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 33.83985707464123 - type: cos_sim_spearman value: 46.89093209603036 - type: euclidean_pearson value: 34.63602187576556 - type: euclidean_spearman value: 46.31087228200712 - type: manhattan_pearson value: 34.66899391543166 - type: manhattan_spearman value: 46.33049538425276 - task: type: STS dataset: name: MTEB STS22 (es) type: mteb/sts22-crosslingual-sts config: es split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 51.61315965767736 - type: cos_sim_spearman value: 58.9434266730386 - type: euclidean_pearson value: 50.35885602217862 - type: euclidean_spearman value: 58.238679883286025 - type: manhattan_pearson value: 53.01732044381151 - type: manhattan_spearman value: 58.10482351761412 - task: type: STS dataset: name: MTEB STS22 (pl) type: mteb/sts22-crosslingual-sts config: pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 26.771738440430177 - type: cos_sim_spearman value: 34.807259227816054 - type: euclidean_pearson value: 17.82657835823811 - type: euclidean_spearman value: 34.27912898498941 - type: manhattan_pearson value: 19.121527758886312 - type: manhattan_spearman value: 34.4940050226265 - task: type: STS dataset: name: MTEB STS22 (tr) type: mteb/sts22-crosslingual-sts config: tr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 52.8354704676683 - type: cos_sim_spearman value: 57.28629534815841 - type: euclidean_pearson value: 54.10329332004385 - type: euclidean_spearman value: 58.15030615859976 - type: manhattan_pearson value: 55.42372087433115 - type: manhattan_spearman value: 57.52270736584036 - task: type: STS dataset: name: MTEB STS22 (ar) type: mteb/sts22-crosslingual-sts config: ar split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 31.01976557986924 - type: cos_sim_spearman value: 54.506959483927616 - type: euclidean_pearson value: 36.917863022119086 - type: euclidean_spearman value: 53.750194241538566 - type: manhattan_pearson value: 37.200177833241085 - type: manhattan_spearman value: 53.507659188082535 - task: type: STS dataset: name: MTEB STS22 (ru) type: mteb/sts22-crosslingual-sts config: ru split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 46.38635647225934 - type: cos_sim_spearman value: 54.50892732637536 - type: euclidean_pearson value: 40.8331015184763 - type: euclidean_spearman value: 53.142903182230924 - type: manhattan_pearson value: 43.07655692906317 - type: manhattan_spearman value: 53.5833474125901 - task: type: STS dataset: name: MTEB STS22 (zh) type: mteb/sts22-crosslingual-sts config: zh split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 60.52525456662916 - type: cos_sim_spearman value: 63.23975489531082 - type: euclidean_pearson value: 58.989191722317514 - type: euclidean_spearman value: 62.536326639863894 - type: manhattan_pearson value: 61.32982866201855 - type: manhattan_spearman value: 63.068262822520516 - task: type: STS dataset: name: MTEB STS22 (fr) type: mteb/sts22-crosslingual-sts config: fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 59.63798684577696 - type: cos_sim_spearman value: 74.09937723367189 - type: euclidean_pearson value: 63.77494904383906 - type: euclidean_spearman value: 71.15932571292481 - type: manhattan_pearson value: 63.69646122775205 - type: manhattan_spearman value: 70.54960698541632 - task: type: STS dataset: name: MTEB STS22 (de-en) type: mteb/sts22-crosslingual-sts config: de-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 36.50262468726711 - type: cos_sim_spearman value: 45.00322499674274 - type: euclidean_pearson value: 32.58759216581778 - type: euclidean_spearman value: 40.13720951315429 - type: manhattan_pearson value: 34.88422299605277 - type: manhattan_spearman value: 40.63516862200963 - task: type: STS dataset: name: MTEB STS22 (es-en) type: mteb/sts22-crosslingual-sts config: es-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 56.498552617040275 - type: cos_sim_spearman value: 67.71358426124443 - type: euclidean_pearson value: 57.16474781778287 - type: euclidean_spearman value: 65.721515493531 - type: manhattan_pearson value: 59.25227610738926 - type: manhattan_spearman value: 65.89743680340739 - task: type: STS dataset: name: MTEB STS22 (it) type: mteb/sts22-crosslingual-sts config: it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 55.97978814727984 - type: cos_sim_spearman value: 65.85821395092104 - type: euclidean_pearson value: 59.11117270978519 - type: euclidean_spearman value: 64.50062069934965 - type: manhattan_pearson value: 59.4436213778161 - type: manhattan_spearman value: 64.4003273074382 - task: type: STS dataset: name: MTEB STS22 (pl-en) type: mteb/sts22-crosslingual-sts config: pl-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 58.00873192515712 - type: cos_sim_spearman value: 60.167708809138745 - type: euclidean_pearson value: 56.91950637760252 - type: euclidean_spearman value: 58.50593399441014 - type: manhattan_pearson value: 58.683747352584994 - type: manhattan_spearman value: 59.38110066799761 - task: type: STS dataset: name: MTEB STS22 (zh-en) type: mteb/sts22-crosslingual-sts config: zh-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 54.26020658151187 - type: cos_sim_spearman value: 61.29236187204147 - type: euclidean_pearson value: 55.993896804147056 - type: euclidean_spearman value: 58.654928232615354 - type: manhattan_pearson value: 56.612492816099426 - type: manhattan_spearman value: 58.65144067094258 - task: type: STS dataset: name: MTEB STS22 (es-it) type: mteb/sts22-crosslingual-sts config: es-it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 49.13817835368122 - type: cos_sim_spearman value: 50.78524216975442 - type: euclidean_pearson value: 46.56046454501862 - type: euclidean_spearman value: 50.3935060082369 - type: manhattan_pearson value: 48.0232348418531 - type: manhattan_spearman value: 50.79528358464199 - task: type: STS dataset: name: MTEB STS22 (de-fr) type: mteb/sts22-crosslingual-sts config: de-fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 44.274388638585286 - type: cos_sim_spearman value: 49.43124017389838 - type: euclidean_pearson value: 42.45909582681174 - type: euclidean_spearman value: 49.661383797129055 - type: manhattan_pearson value: 42.5771970142383 - type: manhattan_spearman value: 50.14423414390715 - task: type: STS dataset: name: MTEB STS22 (de-pl) type: mteb/sts22-crosslingual-sts config: de-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 26.119500839749776 - type: cos_sim_spearman value: 39.324070169024424 - type: euclidean_pearson value: 35.83247077201831 - type: euclidean_spearman value: 42.61903924348457 - type: manhattan_pearson value: 35.50415034487894 - type: manhattan_spearman value: 41.87998075949351 - task: type: STS dataset: name: MTEB STS22 (fr-pl) type: mteb/sts22-crosslingual-sts config: fr-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 72.62575835691209 - type: cos_sim_spearman value: 73.24670207647144 - type: euclidean_pearson value: 78.07793323914657 - type: euclidean_spearman value: 73.24670207647144 - type: manhattan_pearson value: 77.51429306378206 - type: manhattan_spearman value: 73.24670207647144 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 84.09375596849891 - type: cos_sim_spearman value: 86.44881302053585 - type: euclidean_pearson value: 84.71259163967213 - type: euclidean_spearman value: 85.63661992344069 - type: manhattan_pearson value: 84.64466537502614 - type: manhattan_spearman value: 85.53769949940238 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 70.2056154684549 - type: mrr value: 89.52703161036494 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.57623762376238 - type: cos_sim_ap value: 83.53051588811371 - type: cos_sim_f1 value: 77.72704211060375 - type: cos_sim_precision value: 78.88774459320288 - type: cos_sim_recall value: 76.6 - type: dot_accuracy value: 99.06435643564356 - type: dot_ap value: 27.003124923857463 - type: dot_f1 value: 34.125269978401725 - type: dot_precision value: 37.08920187793427 - type: dot_recall value: 31.6 - type: euclidean_accuracy value: 99.61485148514852 - type: euclidean_ap value: 85.47332647001774 - type: euclidean_f1 value: 80.0808897876643 - type: euclidean_precision value: 80.98159509202453 - type: euclidean_recall value: 79.2 - type: manhattan_accuracy value: 99.61683168316831 - type: manhattan_ap value: 85.41969859598552 - type: manhattan_f1 value: 79.77755308392315 - type: manhattan_precision value: 80.67484662576688 - type: manhattan_recall value: 78.9 - type: max_accuracy value: 99.61683168316831 - type: max_ap value: 85.47332647001774 - type: max_f1 value: 80.0808897876643 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 34.35688940053467 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 30.64427069276576 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 44.89500754900078 - type: mrr value: 45.33215558950853 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.653069624224084 - type: cos_sim_spearman value: 30.10187112430319 - type: dot_pearson value: 28.966278202103666 - type: dot_spearman value: 28.342234095507767 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 65.96839999999999 - type: ap value: 11.846327590186444 - type: f1 value: 50.518102944693574 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 55.220713073005086 - type: f1 value: 55.47856175692088 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 31.581473892235877 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 82.94093103653812 - type: cos_sim_ap value: 62.48963249213361 - type: cos_sim_f1 value: 58.9541137429912 - type: cos_sim_precision value: 52.05091937765205 - type: cos_sim_recall value: 67.96833773087072 - type: dot_accuracy value: 78.24998509864696 - type: dot_ap value: 40.82371294480071 - type: dot_f1 value: 44.711163153786096 - type: dot_precision value: 35.475379374419326 - type: dot_recall value: 60.4485488126649 - type: euclidean_accuracy value: 83.13166835548668 - type: euclidean_ap value: 63.459878609769774 - type: euclidean_f1 value: 60.337199569532466 - type: euclidean_precision value: 55.171659741963694 - type: euclidean_recall value: 66.56992084432719 - type: manhattan_accuracy value: 83.00649698992669 - type: manhattan_ap value: 63.263161177904905 - type: manhattan_f1 value: 60.17122874713614 - type: manhattan_precision value: 55.40750610703975 - type: manhattan_recall value: 65.8311345646438 - type: max_accuracy value: 83.13166835548668 - type: max_ap value: 63.459878609769774 - type: max_f1 value: 60.337199569532466 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 87.80416812201653 - type: cos_sim_ap value: 83.45540469219863 - type: cos_sim_f1 value: 75.58836427422892 - type: cos_sim_precision value: 71.93934335002783 - type: cos_sim_recall value: 79.62734832152756 - type: dot_accuracy value: 83.04226336011176 - type: dot_ap value: 70.63007268018524 - type: dot_f1 value: 65.35980325765405 - type: dot_precision value: 60.84677151768532 - type: dot_recall value: 70.59593470896212 - type: euclidean_accuracy value: 87.60430007373773 - type: euclidean_ap value: 83.10068502536592 - type: euclidean_f1 value: 75.02510506936439 - type: euclidean_precision value: 72.56637168141593 - type: euclidean_recall value: 77.65629812134279 - type: manhattan_accuracy value: 87.60041914076145 - type: manhattan_ap value: 83.05480769911229 - type: manhattan_f1 value: 74.98522895125554 - type: manhattan_precision value: 72.04797047970479 - type: manhattan_recall value: 78.17215891592238 - type: max_accuracy value: 87.80416812201653 - type: max_ap value: 83.45540469219863 - type: max_f1 value: 75.58836427422892 --- # shibing624/text2vec-base-multilingual This is a CoSENT(Cosine Sentence) model: shibing624/text2vec-base-multilingual. It maps sentences to a 384 dimensional dense vector space and can be used for tasks like sentence embeddings, text matching or semantic search. - training dataset: https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-multilingual-dataset - base model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 - max_seq_length: 256 - best epoch: 4 - sentence embedding dim: 384 ## Evaluation For an automated evaluation of this model, see the *Evaluation Benchmark*: [text2vec](https://github.com/shibing624/text2vec) ## Languages Available languages are: de, en, es, fr, it, nl, pl, pt, ru, zh ### Release Models | Arch | BaseModel | Model | ATEC | BQ | LCQMC | PAWSX | STS-B | SOHU-dd | SOHU-dc | Avg | QPS | |:-----------|:-------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------|:-----:|:-----:|:-----:|:-----:|:-----:|:-------:|:-------:|:---------:|:-----:| | Word2Vec | word2vec | [w2v-light-tencent-chinese](https://ai.tencent.com/ailab/nlp/en/download.html) | 20.00 | 31.49 | 59.46 | 2.57 | 55.78 | 55.04 | 20.70 | 35.03 | 23769 | | SBERT | xlm-roberta-base | [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) | 18.42 | 38.52 | 63.96 | 10.14 | 78.90 | 63.01 | 52.28 | 46.46 | 3138 | | Instructor | hfl/chinese-roberta-wwm-ext | [moka-ai/m3e-base](https://huggingface.co/moka-ai/m3e-base) | 41.27 | 63.81 | 74.87 | 12.20 | 76.96 | 75.83 | 60.55 | 57.93 | 2980 | | CoSENT | hfl/chinese-macbert-base | [shibing624/text2vec-base-chinese](https://huggingface.co/shibing624/text2vec-base-chinese) | 31.93 | 42.67 | 70.16 | 17.21 | 79.30 | 70.27 | 50.42 | 51.61 | 3008 | | CoSENT | hfl/chinese-lert-large | [GanymedeNil/text2vec-large-chinese](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 32.61 | 44.59 | 69.30 | 14.51 | 79.44 | 73.01 | 59.04 | 53.12 | 2092 | | CoSENT | nghuyong/ernie-3.0-base-zh | [shibing624/text2vec-base-chinese-sentence](https://huggingface.co/shibing624/text2vec-base-chinese-sentence) | 43.37 | 61.43 | 73.48 | 38.90 | 78.25 | 70.60 | 53.08 | 59.87 | 3089 | | CoSENT | nghuyong/ernie-3.0-base-zh | [shibing624/text2vec-base-chinese-paraphrase](https://huggingface.co/shibing624/text2vec-base-chinese-paraphrase) | 44.89 | 63.58 | 74.24 | 40.90 | 78.93 | 76.70 | 63.30 | **63.08** | 3066 | | CoSENT | sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 | [shibing624/text2vec-base-multilingual](https://huggingface.co/shibing624/text2vec-base-multilingual) | 32.39 | 50.33 | 65.64 | 32.56 | 74.45 | 68.88 | 51.17 | 53.67 | 4004 | Illustrate: - Result evaluation index: spearman coefficient - The `shibing624/text2vec-base-chinese` model is trained using the CoSENT method. It is trained on Chinese STS-B data based on `hfl/chinese-macbert-base` and has achieved good results in the Chinese STS-B test set evaluation. , run [examples/training_sup_text_matching_model.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model.py) code to train the model, the model file has been uploaded to HF model hub, Chinese universal semantic matching task Recommended Use - The `shibing624/text2vec-base-chinese-sentence` model is trained using the CoSENT method and is based on the manually selected Chinese STS data set of `nghuyong/ernie-3.0-base-zh` [shibing624/nli-zh-all/ text2vec-base-chinese-sentence-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-sentence-dataset), and is used in various Chinese NLI test set evaluation has achieved good results. Run the [examples/training_sup_text_matching_model_jsonl_data.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_jsonl_data.py) code to train the model, and the model file has been uploaded to HF model hub, recommended for Chinese s2s (sentence vs sentence) semantic matching tasks - The `shibing624/text2vec-base-chinese-paraphrase` model is trained using the CoSENT method and is based on the manually selected Chinese STS data set of `nghuyong/ernie-3.0-base-zh` [shibing624/nli-zh-all/ text2vec-base-chinese-paraphrase-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-paraphrase-dataset), the data set is relative to [shibing624 /nli-zh-all/text2vec-base-chinese-sentence-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-chinese-sentence-dataset) s2p (sentence to paraphrase) data was added to strengthen its long text representation capabilities, and the evaluation on each Chinese NLI test set reached SOTA, running [examples/training_sup_text_matching_model_jsonl_data.py](https://github.com/shibing624/text2vec /blob/master/examples/training_sup_text_matching_model_jsonl_data.py) code can train the model. The model file has been uploaded to HF model hub. It is recommended for Chinese s2p (sentence vs paragraph) semantic matching tasks. - The `shibing624/text2vec-base-multilingual` model is trained using the CoSENT method and is based on the manually selected multilingual STS data set of `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2` [shibing624/nli-zh -all/text2vec-base-multilingual-dataset](https://huggingface.co/datasets/shibing624/nli-zh-all/tree/main/text2vec-base-multilingual-dataset) trained and tested in Chinese and English The set evaluation effect is improved compared to the original model. Run the [examples/training_sup_text_matching_model_jsonl_data.py](https://github.com/shibing624/text2vec/blob/master/examples/training_sup_text_matching_model_jsonl_data.py) code to train the model, and the model file has been uploaded. HF model hub, recommended for multi-language semantic matching tasks - `w2v-light-tencent-chinese` is the Word2Vec model of Tencent word vector, which is loaded and used by CPU. It is suitable for Chinese text matching tasks and cold start situations where data is missing. - The GPU test environment of QPS is Tesla V100 with 32GB memory. Model training experiment report: [Experiment report](https://github.com/shibing624/text2vec/blob/master/docs/model_report.md) ## Usage (text2vec) Using this model becomes easy when you have [text2vec](https://github.com/shibing624/text2vec) installed: ``` pip install -U text2vec ``` Then you can use the model like this: ```python from text2vec import SentenceModel sentences = ['如何更换花呗绑定银行卡', 'How to replace the Huabei bundled bank card'] model = SentenceModel('shibing624/text2vec-base-multilingual') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [text2vec](https://github.com/shibing624/text2vec), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. Install transformers: ``` pip install transformers ``` Then load model and predict: ```python from transformers import AutoTokenizer, AutoModel import torch # Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] # First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('shibing624/text2vec-base-multilingual') model = AutoModel.from_pretrained('shibing624/text2vec-base-multilingual') sentences = ['如何更换花呗绑定银行卡', 'How to replace the Huabei bundled bank card'] # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Usage (sentence-transformers) [sentence-transformers](https://github.com/UKPLab/sentence-transformers) is a popular library to compute dense vector representations for sentences. Install sentence-transformers: ``` pip install -U sentence-transformers ``` Then load model and predict: ```python from sentence_transformers import SentenceTransformer m = SentenceTransformer("shibing624/text2vec-base-multilingual") sentences = ['如何更换花呗绑定银行卡', 'How to replace the Huabei bundled bank card'] sentence_embeddings = m.encode(sentences) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Full Model Architecture ``` CoSENT( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_mean_tokens': True}) ) ``` ## Intended uses Our model is intented to be used as a sentence and short paragraph encoder. Given an input text, it ouptuts a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 256 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2`](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch. We then apply the rank loss by comparing with true pairs and false pairs. ## Citing & Authors This model was trained by [text2vec](https://github.com/shibing624/text2vec). If you find this model helpful, feel free to cite: ```bibtex @software{text2vec, author = {Ming Xu}, title = {text2vec: A Tool for Text to Vector}, year = {2023}, url = {https://github.com/shibing624/text2vec}, } ```
[ "SUMMARIZATION" ]
[ "BIOSSES" ]
Non_BioNLP
RomainDarous/large_directFourEpoch_meanPooling_mistranslationModel
RomainDarous
sentence-similarity
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:4460010", "loss:CoSENTLoss", "dataset:RomainDarous/corrupted_os_by_language", "arxiv:1908.10084", "base_model:RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel", "base_model:finetune:RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,741
1,741
14
0
--- base_model: RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel datasets: - RomainDarous/corrupted_os_by_language library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:4460010 - loss:CoSENTLoss widget: - source_sentence: Malformed target specific variable definition sentences: - Hedefe özgü değişken tanımı bozuk - Kan alle data in die gids lees - "слава Украине! героям слава!\uFEFF" - source_sentence: Can't write an inode bitmap sentences: - Skontrolujte stav aktualizácií alebo to skúste znova neskôr. - Malsukcesis skribi i nodan bitmapon - Zastępuje wersję GL obsługiwaną przez sterownik - source_sentence: Optimize soft proofing color transformations sentences: - 'arkadaslar biz artik her an kirmizi kart yiyecek,bencil,pas yapamayan,isabetsiz orta yapani istemiyoruz. sozde efsaneniz bu sezon Besiktasa en cok zarar verenlerden biriydi. kendini dusunmeden once Besiktasi dusunecek adam lazim bize. o yuzden #GoHomeQuaresma' - Yav bizim dedikodusunu yaptığımız insanın bile bi vizyonu var. Senin hakkında neden oturup konuşalım? - Ik ben een transgender. - source_sentence: 'Pass 1: Checking @is, @bs, and sizes' sentences: - Bu adam cidden kurabiye gibi ben bunu çayın yanında yerim - sagnat. errada. invisible. justificació. idioma - Wilt u echt de primaire sleutel verplaatsen? (j N) - source_sentence: Search for matching log entries sentences: - quem te lembra? caralho tô assustada aqui kkkkk - sendotasunik gabeko\ egoera bistaratuko den ala ez adierazten du - En aquest cas, hem d'incloure les imatges del contenidor )sr iov per a càrregues de treball de telco (per exemple, com a referència, es podrien obtenir des de valors de helm chart) model-index: - name: SentenceTransformer based on RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts eval type: sts-eval metrics: - type: pearson_cosine value: 0.980320627958563 name: Pearson Cosine - type: spearman_cosine value: 0.8655830126826171 name: Spearman Cosine - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test type: sts-test metrics: - type: pearson_cosine value: 0.9804333155239368 name: Pearson Cosine - type: spearman_cosine value: 0.865640780478526 name: Spearman Cosine --- # SentenceTransformer based on RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel](https://huggingface.co/RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel) on the [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel](https://huggingface.co/RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel) <!-- at revision bc422140f1c78b1065a14873f780d44f9d659b55 --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("RomainDarous/large_directFourEpoch_meanPooling_mistranslationModel") # Run inference sentences = [ 'Search for matching log entries', 'quem te lembra? caralho tô assustada aqui kkkkk', 'sendotasunik gabeko\\ egoera bistaratuko den ala ez adierazten du', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Datasets: `sts-eval` and `sts-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | sts-eval | sts-test | |:--------------------|:-----------|:-----------| | pearson_cosine | 0.9803 | 0.9804 | | **spearman_cosine** | **0.8656** | **0.8656** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### corrupted_open_os_by_language * Dataset: [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) at [9d25780](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language/tree/9d25780e2032b1e8f06af6a4ff55124d7a930c3c) * Size: 4,460,010 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 6 tokens</li><li>mean: 18.33 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 26.47 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>0: ~50.60%</li><li>1: ~49.40%</li></ul> | * Samples: | sentence1 | sentence2 | score | |:--------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------|:---------------| | <code>Check spelling. Print the document. Show completion window. General. Show help</code> | <code>Kontrolli õigekirja. присоединяюсь. </code> | <code>0</code> | | <code>EXIF not supported for this file format.</code> | <code>Šiam failo formatui EXIF nepalaikomas.</code> | <code>1</code> | | <code>This package includes the documentation for texlive everyhook</code> | <code>Paket ini menyertakan dokumentasi untuk texlive everyhook</code> | <code>1</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Evaluation Dataset #### corrupted_open_os_by_language * Dataset: [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) at [9d25780](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language/tree/9d25780e2032b1e8f06af6a4ff55124d7a930c3c) * Size: 4,460,010 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 5 tokens</li><li>mean: 17.71 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 26.95 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>0: ~50.60%</li><li>1: ~49.40%</li></ul> | * Samples: | sentence1 | sentence2 | score | |:----------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>Could not identify the current seat.</code> | <code> 天天花着男人的钱还这这创造新词汇男权你可真牛批,你也就这一出了一问男权,就说是我是吧,到现在我也没听到你给我们讲的男权,你也就是在网上喷喷,现实走道都不敢探头自卑,你现实要把你女权的劲拿出来总低啥头,您老应该去国家教育局把男权加上是吧,你们女权天天说自己生活不好没地位,给你们地位了你们能干啥?用你们的女权打到全世界男性是吧,能相出男权这一词您老也是人才呀,是不是庆幸自己是个女的,活在自己想想的世界里不觉得孤单吗,假象有男权是吧,自己假象和男权还说自己不是田园女权,田园女权能连自己都骂说自己妈是驴爸是大鼎的也是奇葩呀,那我们国家大肆宣扬过你们这么田园女权吗,国家要的是女性人群自主自理,你们可好看看你们女权干的啥事,给你们女权地位高了,看看你们女权干的事n绿地集团高管怎么都不说呀,人家可是有钱有地位,也不是我们说三从四德洗衣做饭你们女权会吗?,那我问问你们女权干过啥惊天大事,还甩锅给孔子,还封建社会,那我问问你们女权在福利面前为啥说自己是女性呀不是社会主义社会吗不应该男女平等吗,天天自己也不知道是不是抱个手机天天欧巴欧巴,你家那位要是不陪你看一会就会问你是不是不爱我了是吧大姐,您老也就赚这白菜钱操心国家事,中国五千年的历史被您老一句否决,还嘲讽人家日本女性,好意思说自己不是女权,三从四德流传这么久到您这变成日本文化了,我就想问问男权您老是怎么想的,那你问孔子老人家呗为什么女人要三从四德,我说的是女权你干嘛自己对号入座,连中华人民传承的东西都不认跟我这谈男权,还男权您老给我举个例子呗,让我们男权听听都是h啥,这些不都是你们女权的标准吗?,还男权,您老醒醒吧这里是现实,不是你的公主世界,总觉得自己多么多么重要,地球没你是不能转了还是人类要灭亡呀,我真的想问一句你给我找一条男权的新闻,咋了我们男人不能提女权呗你老授权了呗,那我们谈论田园女权你老对号入座干嘛,天天过节要礼物,还嫌弃自己男朋友没有钱,我寻思你找个有钱人包养你呗,对了有钱人怎么可能看上你这种女权的呢,还要孩子跟女方姓我也没看见你没跟你妈姓呀,年年过节男人给你们送礼物你们女人给男人送过礼物吗?,一问我不是陪着他吗我对他说我爱你了这不是最好的礼物吗?,男人只要不送礼物就是不爱你们了呗,人家国际女权讲的男人能做的我们女人也能做,田园女权男人能做的我们女人为啥要做,还男权我笑了,以前结婚几头牛换个衣服原装的,现在几十万彩...</code> | <code>0</code> | | <code>Undoing Date and Time Adjustment</code> | <code>正在取消日期和时间调整</code> | <code>1</code> | | <code>Dependency package for gsl_2_6 gnu hpc</code> | <code>Pacotes de desenvolvimento do KDE</code> | <code>1</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | corrupted open os by language loss | sts-eval_spearman_cosine | sts-test_spearman_cosine | |:-----:|:-----:|:-------------:|:----------------------------------:|:------------------------:|:------------------------:| | 1.0 | 55751 | 0.0771 | 0.2658 | 0.8656 | - | | -1 | -1 | - | - | - | 0.8656 | ### Framework Versions - Python: 3.10.13 - Sentence Transformers: 3.4.1 - Transformers: 4.48.2 - PyTorch: 2.1.2+cu121 - Accelerate: 1.3.0 - Datasets: 2.16.1 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CoSENTLoss ```bibtex @online{kexuefm-8847, title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT}, author={Su Jianlin}, year={2022}, month={Jan}, url={https://kexue.fm/archives/8847}, } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION", "SEMANTIC_SIMILARITY", "TRANSLATION" ]
[ "CAS" ]
Non_BioNLP
abhinand/MedEmbed-large-v0.1
abhinand
null
[ "sentence-transformers", "safetensors", "bert", "medembed", "medical-embedding", "clinical-embedding", "information-retrieval", "en", "dataset:MedicalQARetrieval", "dataset:NFCorpus", "dataset:PublicHealthQA", "dataset:TRECCOVID", "dataset:ArguAna", "base_model:BAAI/bge-large-en-v1.5", "base_model:finetune:BAAI/bge-large-en-v1.5", "license:apache-2.0", "region:us" ]
1,729
1,729
85,805
18
--- base_model: - BAAI/bge-large-en-v1.5 datasets: - MedicalQARetrieval - NFCorpus - PublicHealthQA - TRECCOVID - ArguAna language: en license: apache-2.0 metrics: - nDCG - MAP - Recall - Precision - MRR tags: - medembed - medical-embedding - clinical-embedding - information-retrieval - sentence-transformers --- # MedEmbed: Specialized Embedding Model for Medical and Clinical Information Retrieval ![benchmark-scores](https://cdn-uploads.huggingface.co/production/uploads/60c8619d95d852a24572b025/gTx5-m68LQ3eyNd6fLki2.png) ## Model Description MedEmbed is a family of embedding models fine-tuned specifically for medical and clinical data, designed to enhance performance in healthcare-related natural language processing (NLP) tasks, particularly information retrieval. **GitHub Repo:** [https://github.com/abhinand5/MedEmbed](https://github.com/abhinand5/MedEmbed) **Technical Blog Post:** [https://huggingface.co/blog/abhinand/medembed-finetuned-embedding-models-for-medical-ir](https://huggingface.co/blog/abhinand/medembed-finetuned-embedding-models-for-medical-ir) ## Intended Use This model is intended for use in medical and clinical contexts to improve information retrieval, question answering, and semantic search tasks. It can be integrated into healthcare systems, research tools, and medical literature databases to enhance search capabilities and information access. ## Training Data ![synthetic-datagen-flow](https://cdn-uploads.huggingface.co/production/uploads/60c8619d95d852a24572b025/asaA5QDO_j0PWFQV9NXCu.png) The model was trained using a simple yet effective synthetic data generation pipeline: 1. Source: Clinical notes from PubMed Central (PMC) 2. Processing: [LLaMA 3.1 70B](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) model used to generate query-response pairs 3. Augmentation: Negative sampling for challenging examples 4. Format: Triplets (query, positive response, negative response) for contrastive learning ## Performance MedEmbed consistently outperforms general-purpose embedding models across various medical NLP benchmarks: - ArguAna - MedicalQARetrieval - NFCorpus - PublicHealthQA - TRECCOVID Specific performance metrics (nDCG, MAP, Recall, Precision, MRR) are available in the full documentation. ## Limitations While highly effective for medical and clinical data, this model may not generalize well to non-medical domains. It should be used with caution in general-purpose NLP tasks. ## Ethical Considerations Users should be aware of potential biases in medical data and the ethical implications of AI in healthcare. This model should be used as a tool to assist, not replace, human expertise in medical decision-making. ## Citation If you use this model in your research, please cite: ```bibtex @software{balachandran2024medembed, author = {Balachandran, Abhinand}, title = {MedEmbed: Medical-Focused Embedding Models}, year = {2024}, url = {https://github.com/abhinand5/MedEmbed} } ``` For more detailed information, visit our GitHub repository.
[ "QUESTION_ANSWERING" ]
[ "MEDICAL DATA" ]
BioNLP
twadada/lma3-correct
twadada
null
[ "mteb", "model-index", "region:us" ]
1,726
1,726
0
0
--- tags: - mteb model-index: - name: llama3_STSprompt_new results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: None config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.76119402985076 - type: ap value: 39.59339434775403 - type: f1 value: 70.4551858327582 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: None config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 60.84025 - type: ap value: 56.82293339927369 - type: f1 value: 60.461296544029764 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: None config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 31.786 - type: f1 value: 31.584406207778038 - task: type: Retrieval dataset: name: MTEB ArguAna type: None config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 21.407999999999998 - type: map_at_10 value: 36.149 - type: map_at_100 value: 37.397000000000006 - type: map_at_1000 value: 37.415 - type: map_at_3 value: 31.117 - type: map_at_5 value: 33.851 - type: mrr_at_1 value: 21.834999999999997 - type: mrr_at_10 value: 36.334 - type: mrr_at_100 value: 37.574999999999996 - type: mrr_at_1000 value: 37.592 - type: mrr_at_3 value: 31.283 - type: mrr_at_5 value: 34.046 - type: ndcg_at_1 value: 21.407999999999998 - type: ndcg_at_10 value: 44.786 - type: ndcg_at_100 value: 50.169 - type: ndcg_at_1000 value: 50.566 - type: ndcg_at_3 value: 34.319 - type: ndcg_at_5 value: 39.259 - type: precision_at_1 value: 21.407999999999998 - type: precision_at_10 value: 7.2620000000000005 - type: precision_at_100 value: 0.963 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 14.533 - type: precision_at_5 value: 11.124 - type: recall_at_1 value: 21.407999999999998 - type: recall_at_10 value: 72.617 - type: recall_at_100 value: 96.30199999999999 - type: recall_at_1000 value: 99.289 - type: recall_at_3 value: 43.599 - type: recall_at_5 value: 55.619 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: None config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 35.204388842106965 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: None config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 24.528430856566054 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: None config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 54.435706376344065 - type: mrr value: 68.46535197643232 - task: type: STS dataset: name: MTEB BIOSSES type: None config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 80.75947376350628 - type: cos_sim_spearman value: 78.76866226728414 - type: euclidean_pearson value: 79.77815732635439 - type: euclidean_spearman value: 78.76866226728414 - type: manhattan_pearson value: 77.27913529620776 - type: manhattan_spearman value: 76.785484549114 - task: type: Classification dataset: name: MTEB Banking77Classification type: None config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 72.85714285714285 - type: f1 value: 72.12088062707018 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: None config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 32.91654959096841 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: None config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 23.40685330251096 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: None config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: map_at_1 value: 23.343 - type: map_at_10 value: 30.634 - type: map_at_100 value: 31.904 - type: map_at_1000 value: 32.056000000000004 - type: map_at_3 value: 27.988000000000003 - type: map_at_5 value: 29.604000000000003 - type: mrr_at_1 value: 29.471000000000004 - type: mrr_at_10 value: 36.76 - type: mrr_at_100 value: 37.569 - type: mrr_at_1000 value: 37.637 - type: mrr_at_3 value: 34.406 - type: mrr_at_5 value: 35.900999999999996 - type: ndcg_at_1 value: 29.471000000000004 - type: ndcg_at_10 value: 35.754000000000005 - type: ndcg_at_100 value: 41.185 - type: ndcg_at_1000 value: 44.132 - type: ndcg_at_3 value: 31.791000000000004 - type: ndcg_at_5 value: 33.867000000000004 - type: precision_at_1 value: 29.471000000000004 - type: precision_at_10 value: 6.7379999999999995 - type: precision_at_100 value: 1.192 - type: precision_at_1000 value: 0.17600000000000002 - type: precision_at_3 value: 15.212 - type: precision_at_5 value: 11.245 - type: recall_at_1 value: 23.343 - type: recall_at_10 value: 44.596999999999994 - type: recall_at_100 value: 68.522 - type: recall_at_1000 value: 88.539 - type: recall_at_3 value: 32.592 - type: recall_at_5 value: 38.479 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval type: None config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: map_at_1 value: 19.200999999999997 - type: map_at_10 value: 26.07 - type: map_at_100 value: 27.145999999999997 - type: map_at_1000 value: 27.265 - type: map_at_3 value: 23.952 - type: map_at_5 value: 25.083 - type: mrr_at_1 value: 24.459 - type: mrr_at_10 value: 30.769000000000002 - type: mrr_at_100 value: 31.595000000000002 - type: mrr_at_1000 value: 31.659 - type: mrr_at_3 value: 28.769 - type: mrr_at_5 value: 29.902 - type: ndcg_at_1 value: 24.459 - type: ndcg_at_10 value: 30.387999999999998 - type: ndcg_at_100 value: 35.17 - type: ndcg_at_1000 value: 37.852999999999994 - type: ndcg_at_3 value: 26.893 - type: ndcg_at_5 value: 28.43 - type: precision_at_1 value: 24.459 - type: precision_at_10 value: 5.599 - type: precision_at_100 value: 1.018 - type: precision_at_1000 value: 0.15 - type: precision_at_3 value: 12.972 - type: precision_at_5 value: 9.159 - type: recall_at_1 value: 19.200999999999997 - type: recall_at_10 value: 38.721 - type: recall_at_100 value: 59.278 - type: recall_at_1000 value: 77.7 - type: recall_at_3 value: 28.402 - type: recall_at_5 value: 32.778 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval type: None config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 27.450000000000003 - type: map_at_10 value: 36.176 - type: map_at_100 value: 37.291000000000004 - type: map_at_1000 value: 37.388 - type: map_at_3 value: 33.525 - type: map_at_5 value: 35.128 - type: mrr_at_1 value: 31.661 - type: mrr_at_10 value: 39.306999999999995 - type: mrr_at_100 value: 40.256 - type: mrr_at_1000 value: 40.314 - type: mrr_at_3 value: 36.928 - type: mrr_at_5 value: 38.342 - type: ndcg_at_1 value: 31.661 - type: ndcg_at_10 value: 40.942 - type: ndcg_at_100 value: 46.225 - type: ndcg_at_1000 value: 48.369 - type: ndcg_at_3 value: 36.131 - type: ndcg_at_5 value: 38.646 - type: precision_at_1 value: 31.661 - type: precision_at_10 value: 6.596 - type: precision_at_100 value: 1.014 - type: precision_at_1000 value: 0.127 - type: precision_at_3 value: 15.967 - type: precision_at_5 value: 11.298 - type: recall_at_1 value: 27.450000000000003 - type: recall_at_10 value: 52.268 - type: recall_at_100 value: 76.02499999999999 - type: recall_at_1000 value: 91.365 - type: recall_at_3 value: 39.377 - type: recall_at_5 value: 45.537 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval type: None config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: map_at_1 value: 13.808000000000002 - type: map_at_10 value: 18.166 - type: map_at_100 value: 19.005 - type: map_at_1000 value: 19.127 - type: map_at_3 value: 16.821 - type: map_at_5 value: 17.407 - type: mrr_at_1 value: 15.254000000000001 - type: mrr_at_10 value: 19.505 - type: mrr_at_100 value: 20.337 - type: mrr_at_1000 value: 20.444000000000003 - type: mrr_at_3 value: 18.192 - type: mrr_at_5 value: 18.746 - type: ndcg_at_1 value: 15.254000000000001 - type: ndcg_at_10 value: 20.926000000000002 - type: ndcg_at_100 value: 25.363999999999997 - type: ndcg_at_1000 value: 28.986 - type: ndcg_at_3 value: 18.151999999999997 - type: ndcg_at_5 value: 19.154 - type: precision_at_1 value: 15.254000000000001 - type: precision_at_10 value: 3.209 - type: precision_at_100 value: 0.5760000000000001 - type: precision_at_1000 value: 0.094 - type: precision_at_3 value: 7.5329999999999995 - type: precision_at_5 value: 5.13 - type: recall_at_1 value: 13.808000000000002 - type: recall_at_10 value: 28.301 - type: recall_at_100 value: 49.118 - type: recall_at_1000 value: 77.371 - type: recall_at_3 value: 20.625 - type: recall_at_5 value: 23.064 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval type: None config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: map_at_1 value: 7.545 - type: map_at_10 value: 11.065999999999999 - type: map_at_100 value: 12.009 - type: map_at_1000 value: 12.145999999999999 - type: map_at_3 value: 9.862 - type: map_at_5 value: 10.458 - type: mrr_at_1 value: 9.826 - type: mrr_at_10 value: 13.819 - type: mrr_at_100 value: 14.760000000000002 - type: mrr_at_1000 value: 14.863999999999999 - type: mrr_at_3 value: 12.520999999999999 - type: mrr_at_5 value: 13.062000000000001 - type: ndcg_at_1 value: 9.826 - type: ndcg_at_10 value: 13.713000000000001 - type: ndcg_at_100 value: 18.834999999999997 - type: ndcg_at_1000 value: 22.567 - type: ndcg_at_3 value: 11.358 - type: ndcg_at_5 value: 12.212 - type: precision_at_1 value: 9.826 - type: precision_at_10 value: 2.637 - type: precision_at_100 value: 0.621 - type: precision_at_1000 value: 0.108 - type: precision_at_3 value: 5.514 - type: precision_at_5 value: 3.9800000000000004 - type: recall_at_1 value: 7.545 - type: recall_at_10 value: 19.229 - type: recall_at_100 value: 42.588 - type: recall_at_1000 value: 70.111 - type: recall_at_3 value: 12.617999999999999 - type: recall_at_5 value: 14.777999999999999 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval type: None config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: map_at_1 value: 19.002 - type: map_at_10 value: 25.330000000000002 - type: map_at_100 value: 26.561 - type: map_at_1000 value: 26.704 - type: map_at_3 value: 23.1 - type: map_at_5 value: 24.426000000000002 - type: mrr_at_1 value: 23.388 - type: mrr_at_10 value: 29.847 - type: mrr_at_100 value: 30.847 - type: mrr_at_1000 value: 30.932 - type: mrr_at_3 value: 27.687 - type: mrr_at_5 value: 29.000999999999998 - type: ndcg_at_1 value: 23.388 - type: ndcg_at_10 value: 29.721999999999998 - type: ndcg_at_100 value: 35.549 - type: ndcg_at_1000 value: 38.815 - type: ndcg_at_3 value: 25.887 - type: ndcg_at_5 value: 27.858 - type: precision_at_1 value: 23.388 - type: precision_at_10 value: 5.409 - type: precision_at_100 value: 0.983 - type: precision_at_1000 value: 0.145 - type: precision_at_3 value: 11.902 - type: precision_at_5 value: 8.738999999999999 - type: recall_at_1 value: 19.002 - type: recall_at_10 value: 38.507000000000005 - type: recall_at_100 value: 64.043 - type: recall_at_1000 value: 86.97500000000001 - type: recall_at_3 value: 27.477 - type: recall_at_5 value: 32.719 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval type: None config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: map_at_1 value: 14.74 - type: map_at_10 value: 21.048000000000002 - type: map_at_100 value: 22.172 - type: map_at_1000 value: 22.319 - type: map_at_3 value: 18.658 - type: map_at_5 value: 19.962 - type: mrr_at_1 value: 18.265 - type: mrr_at_10 value: 24.971 - type: mrr_at_100 value: 25.834000000000003 - type: mrr_at_1000 value: 25.929000000000002 - type: mrr_at_3 value: 22.66 - type: mrr_at_5 value: 23.955000000000002 - type: ndcg_at_1 value: 18.265 - type: ndcg_at_10 value: 25.404 - type: ndcg_at_100 value: 30.519000000000002 - type: ndcg_at_1000 value: 34.109 - type: ndcg_at_3 value: 21.104 - type: ndcg_at_5 value: 23.044 - type: precision_at_1 value: 18.265 - type: precision_at_10 value: 4.909 - type: precision_at_100 value: 0.89 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 10.046 - type: precision_at_5 value: 7.6259999999999994 - type: recall_at_1 value: 14.74 - type: recall_at_10 value: 34.69 - type: recall_at_100 value: 56.674 - type: recall_at_1000 value: 82.37 - type: recall_at_3 value: 22.924 - type: recall_at_5 value: 27.72 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: mteb/cqadupstack config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 15.725666666666669 - type: map_at_10 value: 21.168583333333334 - type: map_at_100 value: 22.158666666666665 - type: map_at_1000 value: 22.29175 - type: map_at_3 value: 19.335833333333337 - type: map_at_5 value: 20.32875 - type: mrr_at_1 value: 18.944333333333336 - type: mrr_at_10 value: 24.424 - type: mrr_at_100 value: 25.27591666666667 - type: mrr_at_1000 value: 25.36416666666667 - type: mrr_at_3 value: 22.627416666666665 - type: mrr_at_5 value: 23.61191666666667 - type: ndcg_at_1 value: 18.944333333333336 - type: ndcg_at_10 value: 24.833083333333335 - type: ndcg_at_100 value: 29.647666666666666 - type: ndcg_at_1000 value: 32.913333333333334 - type: ndcg_at_3 value: 21.536166666666663 - type: ndcg_at_5 value: 23.013250000000003 - type: precision_at_1 value: 18.944333333333336 - type: precision_at_10 value: 4.4035 - type: precision_at_100 value: 0.8164999999999999 - type: precision_at_1000 value: 0.1284166666666667 - type: precision_at_3 value: 9.893416666666667 - type: precision_at_5 value: 7.1185 - type: recall_at_1 value: 15.725666666666669 - type: recall_at_10 value: 32.66266666666667 - type: recall_at_100 value: 54.48908333333333 - type: recall_at_1000 value: 78.27425 - type: recall_at_3 value: 23.251916666666666 - type: recall_at_5 value: 27.12425 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval type: None config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: map_at_1 value: 12.077 - type: map_at_10 value: 16.801 - type: map_at_100 value: 17.655 - type: map_at_1000 value: 17.751 - type: map_at_3 value: 15.509 - type: map_at_5 value: 16.162000000000003 - type: mrr_at_1 value: 13.804 - type: mrr_at_10 value: 18.745 - type: mrr_at_100 value: 19.599 - type: mrr_at_1000 value: 19.678 - type: mrr_at_3 value: 17.434 - type: mrr_at_5 value: 18.101 - type: ndcg_at_1 value: 13.804 - type: ndcg_at_10 value: 19.698999999999998 - type: ndcg_at_100 value: 24.254 - type: ndcg_at_1000 value: 27.083000000000002 - type: ndcg_at_3 value: 17.21 - type: ndcg_at_5 value: 18.240000000000002 - type: precision_at_1 value: 13.804 - type: precision_at_10 value: 3.313 - type: precision_at_100 value: 0.618 - type: precision_at_1000 value: 0.094 - type: precision_at_3 value: 7.872999999999999 - type: precision_at_5 value: 5.399 - type: recall_at_1 value: 12.077 - type: recall_at_10 value: 26.668999999999997 - type: recall_at_100 value: 47.831 - type: recall_at_1000 value: 69.413 - type: recall_at_3 value: 19.49 - type: recall_at_5 value: 22.225 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval type: None config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 8.115 - type: map_at_10 value: 11.722000000000001 - type: map_at_100 value: 12.348 - type: map_at_1000 value: 12.481 - type: map_at_3 value: 10.426 - type: map_at_5 value: 11.182 - type: mrr_at_1 value: 10.255 - type: mrr_at_10 value: 14.277999999999999 - type: mrr_at_100 value: 14.899999999999999 - type: mrr_at_1000 value: 15.009 - type: mrr_at_3 value: 12.91 - type: mrr_at_5 value: 13.712 - type: ndcg_at_1 value: 10.255 - type: ndcg_at_10 value: 14.35 - type: ndcg_at_100 value: 17.784 - type: ndcg_at_1000 value: 21.539 - type: ndcg_at_3 value: 11.967 - type: ndcg_at_5 value: 13.126 - type: precision_at_1 value: 10.255 - type: precision_at_10 value: 2.719 - type: precision_at_100 value: 0.53 - type: precision_at_1000 value: 0.10200000000000001 - type: precision_at_3 value: 5.781 - type: precision_at_5 value: 4.329000000000001 - type: recall_at_1 value: 8.115 - type: recall_at_10 value: 19.848 - type: recall_at_100 value: 35.974000000000004 - type: recall_at_1000 value: 63.839 - type: recall_at_3 value: 13.073 - type: recall_at_5 value: 16.109 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval type: None config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: map_at_1 value: 14.966 - type: map_at_10 value: 18.857 - type: map_at_100 value: 19.723 - type: map_at_1000 value: 19.845 - type: map_at_3 value: 17.207 - type: map_at_5 value: 17.97 - type: mrr_at_1 value: 17.444000000000003 - type: mrr_at_10 value: 21.769 - type: mrr_at_100 value: 22.58 - type: mrr_at_1000 value: 22.676 - type: mrr_at_3 value: 20.04 - type: mrr_at_5 value: 20.852 - type: ndcg_at_1 value: 17.444000000000003 - type: ndcg_at_10 value: 22.020999999999997 - type: ndcg_at_100 value: 26.573999999999998 - type: ndcg_at_1000 value: 30.047 - type: ndcg_at_3 value: 18.673000000000002 - type: ndcg_at_5 value: 19.927 - type: precision_at_1 value: 17.444000000000003 - type: precision_at_10 value: 3.6380000000000003 - type: precision_at_100 value: 0.659 - type: precision_at_1000 value: 0.107 - type: precision_at_3 value: 8.116 - type: precision_at_5 value: 5.7090000000000005 - type: recall_at_1 value: 14.966 - type: recall_at_10 value: 29.195 - type: recall_at_100 value: 50.092999999999996 - type: recall_at_1000 value: 75.858 - type: recall_at_3 value: 19.695 - type: recall_at_5 value: 22.979 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval type: None config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: map_at_1 value: 17.05 - type: map_at_10 value: 22.636 - type: map_at_100 value: 23.860999999999997 - type: map_at_1000 value: 24.078 - type: map_at_3 value: 20.922 - type: map_at_5 value: 21.886 - type: mrr_at_1 value: 20.751 - type: mrr_at_10 value: 26.348 - type: mrr_at_100 value: 27.354 - type: mrr_at_1000 value: 27.447 - type: mrr_at_3 value: 24.671000000000003 - type: mrr_at_5 value: 25.728 - type: ndcg_at_1 value: 20.751 - type: ndcg_at_10 value: 26.684 - type: ndcg_at_100 value: 31.863999999999997 - type: ndcg_at_1000 value: 35.515 - type: ndcg_at_3 value: 24.035 - type: ndcg_at_5 value: 25.308000000000003 - type: precision_at_1 value: 20.751 - type: precision_at_10 value: 5.099 - type: precision_at_100 value: 1.154 - type: precision_at_1000 value: 0.208 - type: precision_at_3 value: 11.397 - type: precision_at_5 value: 8.261000000000001 - type: recall_at_1 value: 17.05 - type: recall_at_10 value: 33.784 - type: recall_at_100 value: 58.012 - type: recall_at_1000 value: 82.789 - type: recall_at_3 value: 25.393 - type: recall_at_5 value: 29.035 - task: type: Retrieval dataset: name: MTEB CQADupstackWordpressRetrieval type: None config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 11.411 - type: map_at_10 value: 15.517 - type: map_at_100 value: 16.229 - type: map_at_1000 value: 16.341 - type: map_at_3 value: 14.06 - type: map_at_5 value: 14.677000000000001 - type: mrr_at_1 value: 12.753999999999998 - type: mrr_at_10 value: 16.97 - type: mrr_at_100 value: 17.68 - type: mrr_at_1000 value: 17.781 - type: mrr_at_3 value: 15.311 - type: mrr_at_5 value: 16.041 - type: ndcg_at_1 value: 12.753999999999998 - type: ndcg_at_10 value: 18.394 - type: ndcg_at_100 value: 22.448999999999998 - type: ndcg_at_1000 value: 25.945 - type: ndcg_at_3 value: 15.232999999999999 - type: ndcg_at_5 value: 16.347 - type: precision_at_1 value: 12.753999999999998 - type: precision_at_10 value: 2.976 - type: precision_at_100 value: 0.543 - type: precision_at_1000 value: 0.092 - type: precision_at_3 value: 6.4079999999999995 - type: precision_at_5 value: 4.547 - type: recall_at_1 value: 11.411 - type: recall_at_10 value: 26.143 - type: recall_at_100 value: 45.711 - type: recall_at_1000 value: 72.961 - type: recall_at_3 value: 17.357 - type: recall_at_5 value: 20.068 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: None config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 7.674 - type: map_at_10 value: 14.216000000000001 - type: map_at_100 value: 15.751999999999999 - type: map_at_1000 value: 15.967999999999998 - type: map_at_3 value: 11.283 - type: map_at_5 value: 12.837000000000002 - type: mrr_at_1 value: 17.915 - type: mrr_at_10 value: 28.315 - type: mrr_at_100 value: 29.328 - type: mrr_at_1000 value: 29.387999999999998 - type: mrr_at_3 value: 24.636 - type: mrr_at_5 value: 26.773000000000003 - type: ndcg_at_1 value: 17.915 - type: ndcg_at_10 value: 21.224999999999998 - type: ndcg_at_100 value: 27.994000000000003 - type: ndcg_at_1000 value: 32.037 - type: ndcg_at_3 value: 15.984000000000002 - type: ndcg_at_5 value: 18.072 - type: precision_at_1 value: 17.915 - type: precision_at_10 value: 7.023 - type: precision_at_100 value: 1.438 - type: precision_at_1000 value: 0.217 - type: precision_at_3 value: 12.009 - type: precision_at_5 value: 10.084999999999999 - type: recall_at_1 value: 7.674 - type: recall_at_10 value: 27.153 - type: recall_at_100 value: 51.035 - type: recall_at_1000 value: 74.024 - type: recall_at_3 value: 14.862 - type: recall_at_5 value: 19.928 - task: type: Retrieval dataset: name: MTEB DBPedia type: None config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 4.776 - type: map_at_10 value: 10.783 - type: map_at_100 value: 15.287999999999998 - type: map_at_1000 value: 16.328 - type: map_at_3 value: 7.401000000000001 - type: map_at_5 value: 8.993 - type: mrr_at_1 value: 45.75 - type: mrr_at_10 value: 54.373000000000005 - type: mrr_at_100 value: 55.068 - type: mrr_at_1000 value: 55.096000000000004 - type: mrr_at_3 value: 51.792 - type: mrr_at_5 value: 53.32900000000001 - type: ndcg_at_1 value: 33.5 - type: ndcg_at_10 value: 25.586 - type: ndcg_at_100 value: 29.076999999999998 - type: ndcg_at_1000 value: 36.24 - type: ndcg_at_3 value: 28.259 - type: ndcg_at_5 value: 27.315 - type: precision_at_1 value: 45.75 - type: precision_at_10 value: 22.425 - type: precision_at_100 value: 6.978 - type: precision_at_1000 value: 1.409 - type: precision_at_3 value: 33.583 - type: precision_at_5 value: 29.4 - type: recall_at_1 value: 4.776 - type: recall_at_10 value: 15.557000000000002 - type: recall_at_100 value: 36.02 - type: recall_at_1000 value: 60.653 - type: recall_at_3 value: 8.562 - type: recall_at_5 value: 11.487 - task: type: Classification dataset: name: MTEB EmotionClassification type: None config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 47.910000000000004 - type: f1 value: 45.27010147379218 - task: type: Retrieval dataset: name: MTEB FEVER type: None config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 19.009 - type: map_at_10 value: 28.401 - type: map_at_100 value: 29.377 - type: map_at_1000 value: 29.444 - type: map_at_3 value: 25.533 - type: map_at_5 value: 27.193 - type: mrr_at_1 value: 20.297 - type: mrr_at_10 value: 30.075000000000003 - type: mrr_at_100 value: 31.019999999999996 - type: mrr_at_1000 value: 31.075000000000003 - type: mrr_at_3 value: 27.088 - type: mrr_at_5 value: 28.83 - type: ndcg_at_1 value: 20.297 - type: ndcg_at_10 value: 33.949 - type: ndcg_at_100 value: 38.79 - type: ndcg_at_1000 value: 40.619 - type: ndcg_at_3 value: 28.077 - type: ndcg_at_5 value: 31.055 - type: precision_at_1 value: 20.297 - type: precision_at_10 value: 5.42 - type: precision_at_100 value: 0.8049999999999999 - type: precision_at_1000 value: 0.098 - type: precision_at_3 value: 12.161 - type: precision_at_5 value: 8.869 - type: recall_at_1 value: 19.009 - type: recall_at_10 value: 49.716 - type: recall_at_100 value: 72.032 - type: recall_at_1000 value: 86.136 - type: recall_at_3 value: 33.821 - type: recall_at_5 value: 40.983999999999995 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: None config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 7.593 - type: map_at_10 value: 12.928999999999998 - type: map_at_100 value: 14.167 - type: map_at_1000 value: 14.374 - type: map_at_3 value: 10.908 - type: map_at_5 value: 12.030000000000001 - type: mrr_at_1 value: 15.432000000000002 - type: mrr_at_10 value: 21.886 - type: mrr_at_100 value: 23.018 - type: mrr_at_1000 value: 23.115 - type: mrr_at_3 value: 19.599 - type: mrr_at_5 value: 21.003 - type: ndcg_at_1 value: 15.432000000000002 - type: ndcg_at_10 value: 17.781 - type: ndcg_at_100 value: 23.669 - type: ndcg_at_1000 value: 28.384999999999998 - type: ndcg_at_3 value: 14.912 - type: ndcg_at_5 value: 16.119 - type: precision_at_1 value: 15.432000000000002 - type: precision_at_10 value: 5.154 - type: precision_at_100 value: 1.103 - type: precision_at_1000 value: 0.193 - type: precision_at_3 value: 9.927999999999999 - type: precision_at_5 value: 7.654 - type: recall_at_1 value: 7.593 - type: recall_at_10 value: 22.942999999999998 - type: recall_at_100 value: 45.762 - type: recall_at_1000 value: 74.97 - type: recall_at_3 value: 14.108 - type: recall_at_5 value: 18.195 - task: type: Retrieval dataset: name: MTEB HotpotQA type: None config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 19.622 - type: map_at_10 value: 27.953 - type: map_at_100 value: 28.932999999999996 - type: map_at_1000 value: 29.042 - type: map_at_3 value: 25.657000000000004 - type: map_at_5 value: 26.951000000000004 - type: mrr_at_1 value: 39.244 - type: mrr_at_10 value: 46.881 - type: mrr_at_100 value: 47.568 - type: mrr_at_1000 value: 47.617 - type: mrr_at_3 value: 44.882 - type: mrr_at_5 value: 46.013999999999996 - type: ndcg_at_1 value: 39.244 - type: ndcg_at_10 value: 35.43 - type: ndcg_at_100 value: 39.783 - type: ndcg_at_1000 value: 42.32 - type: ndcg_at_3 value: 31.238 - type: ndcg_at_5 value: 33.29 - type: precision_at_1 value: 39.244 - type: precision_at_10 value: 7.811 - type: precision_at_100 value: 1.129 - type: precision_at_1000 value: 0.147 - type: precision_at_3 value: 19.725 - type: precision_at_5 value: 13.458 - type: recall_at_1 value: 19.622 - type: recall_at_10 value: 39.055 - type: recall_at_100 value: 56.442 - type: recall_at_1000 value: 73.383 - type: recall_at_3 value: 29.587999999999997 - type: recall_at_5 value: 33.646 - task: type: Classification dataset: name: MTEB ImdbClassification type: None config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 59.50080000000001 - type: ap value: 55.788718834100095 - type: f1 value: 59.31886351889661 - task: type: Retrieval dataset: name: MTEB MSMARCO type: None config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 6.361 - type: map_at_10 value: 11.3 - type: map_at_100 value: 12.23 - type: map_at_1000 value: 12.344 - type: map_at_3 value: 9.475 - type: map_at_5 value: 10.402000000000001 - type: mrr_at_1 value: 6.5040000000000004 - type: mrr_at_10 value: 11.554 - type: mrr_at_100 value: 12.486 - type: mrr_at_1000 value: 12.595 - type: mrr_at_3 value: 9.692 - type: mrr_at_5 value: 10.639 - type: ndcg_at_1 value: 6.519 - type: ndcg_at_10 value: 14.435999999999998 - type: ndcg_at_100 value: 19.442 - type: ndcg_at_1000 value: 22.834 - type: ndcg_at_3 value: 10.578 - type: ndcg_at_5 value: 12.248000000000001 - type: precision_at_1 value: 6.519 - type: precision_at_10 value: 2.52 - type: precision_at_100 value: 0.511 - type: precision_at_1000 value: 0.08 - type: precision_at_3 value: 4.642 - type: precision_at_5 value: 3.633 - type: recall_at_1 value: 6.361 - type: recall_at_10 value: 24.273 - type: recall_at_100 value: 48.545 - type: recall_at_1000 value: 75.789 - type: recall_at_3 value: 13.567000000000002 - type: recall_at_5 value: 17.562 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: None config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 90.02963976288191 - type: f1 value: 88.93893646570741 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: None config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 60.592795257637945 - type: f1 value: 42.681889382124076 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: None config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.47612642905178 - type: f1 value: 60.06479567610216 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: None config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.64156018829858 - type: f1 value: 67.51582950719383 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: None config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 29.59174951459128 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: None config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 25.506220008220637 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: None config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.293104746926446 - type: mrr value: 31.267960016056946 - task: type: Retrieval dataset: name: MTEB NFCorpus type: None config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 4.619 - type: map_at_10 value: 10.05 - type: map_at_100 value: 13.109000000000002 - type: map_at_1000 value: 14.657 - type: map_at_3 value: 7.438000000000001 - type: map_at_5 value: 8.64 - type: mrr_at_1 value: 36.842000000000006 - type: mrr_at_10 value: 47.339999999999996 - type: mrr_at_100 value: 48.089999999999996 - type: mrr_at_1000 value: 48.141 - type: mrr_at_3 value: 44.891999999999996 - type: mrr_at_5 value: 46.238 - type: ndcg_at_1 value: 35.138999999999996 - type: ndcg_at_10 value: 29.275000000000002 - type: ndcg_at_100 value: 28.326 - type: ndcg_at_1000 value: 37.551 - type: ndcg_at_3 value: 32.97 - type: ndcg_at_5 value: 31.452999999999996 - type: precision_at_1 value: 36.842000000000006 - type: precision_at_10 value: 21.95 - type: precision_at_100 value: 7.870000000000001 - type: precision_at_1000 value: 2.131 - type: precision_at_3 value: 31.579 - type: precision_at_5 value: 27.245 - type: recall_at_1 value: 4.619 - type: recall_at_10 value: 14.347999999999999 - type: recall_at_100 value: 30.683 - type: recall_at_1000 value: 63.588 - type: recall_at_3 value: 8.552 - type: recall_at_5 value: 11.205 - task: type: Retrieval dataset: name: MTEB NQ type: None config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 9.099 - type: map_at_10 value: 16.035 - type: map_at_100 value: 17.333000000000002 - type: map_at_1000 value: 17.442 - type: map_at_3 value: 13.264000000000001 - type: map_at_5 value: 14.751 - type: mrr_at_1 value: 10.342 - type: mrr_at_10 value: 17.718 - type: mrr_at_100 value: 18.883 - type: mrr_at_1000 value: 18.973000000000003 - type: mrr_at_3 value: 14.909 - type: mrr_at_5 value: 16.442999999999998 - type: ndcg_at_1 value: 10.342 - type: ndcg_at_10 value: 20.698 - type: ndcg_at_100 value: 27.167 - type: ndcg_at_1000 value: 30.070999999999998 - type: ndcg_at_3 value: 15.018999999999998 - type: ndcg_at_5 value: 17.659 - type: precision_at_1 value: 10.342 - type: precision_at_10 value: 3.9539999999999997 - type: precision_at_100 value: 0.761 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 7.106999999999999 - type: precision_at_5 value: 5.776 - type: recall_at_1 value: 9.099 - type: recall_at_10 value: 33.596 - type: recall_at_100 value: 63.562 - type: recall_at_1000 value: 85.72999999999999 - type: recall_at_3 value: 18.472 - type: recall_at_5 value: 24.585 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: None config: default split: test revision: None metrics: - type: map_at_1 value: 63.827 - type: map_at_10 value: 76.964 - type: map_at_100 value: 77.704 - type: map_at_1000 value: 77.735 - type: map_at_3 value: 73.934 - type: map_at_5 value: 75.82600000000001 - type: mrr_at_1 value: 73.48 - type: mrr_at_10 value: 81.093 - type: mrr_at_100 value: 81.317 - type: mrr_at_1000 value: 81.32000000000001 - type: mrr_at_3 value: 79.657 - type: mrr_at_5 value: 80.63 - type: ndcg_at_1 value: 73.56 - type: ndcg_at_10 value: 81.65599999999999 - type: ndcg_at_100 value: 83.65 - type: ndcg_at_1000 value: 83.953 - type: ndcg_at_3 value: 78.051 - type: ndcg_at_5 value: 80.022 - type: precision_at_1 value: 73.56 - type: precision_at_10 value: 12.367 - type: precision_at_100 value: 1.465 - type: precision_at_1000 value: 0.155 - type: precision_at_3 value: 33.907 - type: precision_at_5 value: 22.494 - type: recall_at_1 value: 63.827 - type: recall_at_10 value: 90.716 - type: recall_at_100 value: 98.15700000000001 - type: recall_at_1000 value: 99.75 - type: recall_at_3 value: 80.58800000000001 - type: recall_at_5 value: 85.94500000000001 - type: map_at_1 value: 3.013 - type: map_at_10 value: 7.526 - type: map_at_100 value: 9.064 - type: map_at_1000 value: 9.334000000000001 - type: map_at_3 value: 5.372 - type: map_at_5 value: 6.547 - type: mrr_at_1 value: 14.799999999999999 - type: mrr_at_10 value: 23.24 - type: mrr_at_100 value: 24.552 - type: mrr_at_1000 value: 24.623 - type: mrr_at_3 value: 19.967 - type: mrr_at_5 value: 21.917 - type: ndcg_at_1 value: 14.799999999999999 - type: ndcg_at_10 value: 13.34 - type: ndcg_at_100 value: 20.208000000000002 - type: ndcg_at_1000 value: 25.352999999999998 - type: ndcg_at_3 value: 12.161 - type: ndcg_at_5 value: 11.05 - type: precision_at_1 value: 14.799999999999999 - type: precision_at_10 value: 7.049999999999999 - type: precision_at_100 value: 1.703 - type: precision_at_1000 value: 0.294 - type: precision_at_3 value: 11.4 - type: precision_at_5 value: 9.959999999999999 - type: recall_at_1 value: 3.013 - type: recall_at_10 value: 14.298 - type: recall_at_100 value: 34.583000000000006 - type: recall_at_1000 value: 59.772999999999996 - type: recall_at_3 value: 6.948 - type: recall_at_5 value: 10.088 - type: map_at_1 value: 0.105 - type: map_at_10 value: 0.721 - type: map_at_100 value: 4.549 - type: map_at_1000 value: 12.995999999999999 - type: map_at_3 value: 0.28400000000000003 - type: map_at_5 value: 0.41700000000000004 - type: mrr_at_1 value: 46.0 - type: mrr_at_10 value: 60.319 - type: mrr_at_100 value: 60.885999999999996 - type: mrr_at_1000 value: 60.885999999999996 - type: mrr_at_3 value: 57.333 - type: mrr_at_5 value: 59.833000000000006 - type: ndcg_at_1 value: 44.0 - type: ndcg_at_10 value: 40.998000000000005 - type: ndcg_at_100 value: 33.910000000000004 - type: ndcg_at_1000 value: 33.009 - type: ndcg_at_3 value: 44.061 - type: ndcg_at_5 value: 42.599 - type: precision_at_1 value: 46.0 - type: precision_at_10 value: 44.0 - type: precision_at_100 value: 36.5 - type: precision_at_1000 value: 16.332 - type: precision_at_3 value: 47.333 - type: precision_at_5 value: 46.400000000000006 - type: recall_at_1 value: 0.105 - type: recall_at_10 value: 0.966 - type: recall_at_100 value: 7.939 - type: recall_at_1000 value: 32.742 - type: recall_at_3 value: 0.332 - type: recall_at_5 value: 0.524 - task: type: Clustering dataset: name: MTEB RedditClustering type: None config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 37.97924574250143 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: None config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 48.71752191180725 - task: type: STS dataset: name: MTEB SICK-R type: None config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 74.07986021045129 - type: cos_sim_spearman value: 64.67577113468512 - type: euclidean_pearson value: 68.68990414597263 - type: euclidean_spearman value: 64.6758531649771 - type: manhattan_pearson value: 64.11188923174566 - type: manhattan_spearman value: 61.592371585956954 - task: type: STS dataset: name: MTEB STS12 type: None config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 74.27146005453507 - type: cos_sim_spearman value: 65.93558276183687 - type: euclidean_pearson value: 69.43586842102822 - type: euclidean_spearman value: 65.93703144865253 - type: manhattan_pearson value: 69.6516644189866 - type: manhattan_spearman value: 67.20730340507434 - task: type: STS dataset: name: MTEB STS13 type: None config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 74.13479779206598 - type: cos_sim_spearman value: 75.25854160622782 - type: euclidean_pearson value: 75.03898639719446 - type: euclidean_spearman value: 75.25855381633339 - type: manhattan_pearson value: 75.14786852843275 - type: manhattan_spearman value: 75.52670123327891 - task: type: STS dataset: name: MTEB STS14 type: None config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 76.3585600169152 - type: cos_sim_spearman value: 73.17704515393254 - type: euclidean_pearson value: 74.891448527922 - type: euclidean_spearman value: 73.1770354645383 - type: manhattan_pearson value: 72.00352080759266 - type: manhattan_spearman value: 71.33088649793352 - task: type: STS dataset: name: MTEB STS15 type: None config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 81.2547221125337 - type: cos_sim_spearman value: 81.56092496306586 - type: euclidean_pearson value: 81.231224879122 - type: euclidean_spearman value: 81.5609157140913 - type: manhattan_pearson value: 79.39410985986596 - type: manhattan_spearman value: 80.14350013562763 - task: type: STS dataset: name: MTEB STS16 type: None config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 75.57832167022859 - type: cos_sim_spearman value: 76.32409937204893 - type: euclidean_pearson value: 75.69939675596083 - type: euclidean_spearman value: 76.32464971104763 - type: manhattan_pearson value: 75.37942909301383 - type: manhattan_spearman value: 75.89813657924768 - task: type: STS dataset: name: MTEB STS17 (en-en) type: None config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 84.96852885219164 - type: cos_sim_spearman value: 85.48862854621792 - type: euclidean_pearson value: 84.50084962774064 - type: euclidean_spearman value: 85.48950185584661 - type: manhattan_pearson value: 79.54278385586173 - type: manhattan_spearman value: 80.17022136076748 - task: type: STS dataset: name: MTEB STS22 (en) type: None config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 64.76167399937835 - type: cos_sim_spearman value: 62.00631665062863 - type: euclidean_pearson value: 63.93004336582023 - type: euclidean_spearman value: 62.00631665062863 - type: manhattan_pearson value: 63.57713263161092 - type: manhattan_spearman value: 61.927941480246915 - task: type: STS dataset: name: MTEB STSBenchmark type: None config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 79.23095984093324 - type: cos_sim_spearman value: 77.53643283015238 - type: euclidean_pearson value: 78.32103757433086 - type: euclidean_spearman value: 77.53645129624546 - type: manhattan_pearson value: 74.68643604195545 - type: manhattan_spearman value: 74.12436119957579 - task: type: Reranking dataset: name: MTEB SciDocsRR type: None config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 73.00659666541951 - type: mrr value: 91.75728411022529 - task: type: Retrieval dataset: name: MTEB SciFact type: None config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 40.278000000000006 - type: map_at_10 value: 48.849 - type: map_at_100 value: 49.895 - type: map_at_1000 value: 49.946 - type: map_at_3 value: 46.431 - type: map_at_5 value: 47.544 - type: mrr_at_1 value: 42.667 - type: mrr_at_10 value: 50.517999999999994 - type: mrr_at_100 value: 51.354 - type: mrr_at_1000 value: 51.403 - type: mrr_at_3 value: 48.443999999999996 - type: mrr_at_5 value: 49.311 - type: ndcg_at_1 value: 42.667 - type: ndcg_at_10 value: 53.688 - type: ndcg_at_100 value: 58.342000000000006 - type: ndcg_at_1000 value: 59.719 - type: ndcg_at_3 value: 48.85 - type: ndcg_at_5 value: 50.699000000000005 - type: precision_at_1 value: 42.667 - type: precision_at_10 value: 7.467 - type: precision_at_100 value: 1.0070000000000001 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 19.333 - type: precision_at_5 value: 12.733 - type: recall_at_1 value: 40.278000000000006 - type: recall_at_10 value: 67.27199999999999 - type: recall_at_100 value: 88.1 - type: recall_at_1000 value: 99.0 - type: recall_at_3 value: 53.917 - type: recall_at_5 value: 58.443999999999996 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: None config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.74455445544554 - type: cos_sim_ap value: 92.6197642156122 - type: cos_sim_f1 value: 86.78481012658227 - type: cos_sim_precision value: 87.8974358974359 - type: cos_sim_recall value: 85.7 - type: dot_accuracy value: 99.74455445544554 - type: dot_ap value: 92.6197642156122 - type: dot_f1 value: 86.78481012658227 - type: dot_precision value: 87.8974358974359 - type: dot_recall value: 85.7 - type: euclidean_accuracy value: 99.74455445544554 - type: euclidean_ap value: 92.6197642156122 - type: euclidean_f1 value: 86.78481012658227 - type: euclidean_precision value: 87.8974358974359 - type: euclidean_recall value: 85.7 - type: manhattan_accuracy value: 99.76237623762376 - type: manhattan_ap value: 93.22739174872532 - type: manhattan_f1 value: 87.87878787878789 - type: manhattan_precision value: 88.77551020408163 - type: manhattan_recall value: 87.0 - type: max_accuracy value: 99.76237623762376 - type: max_ap value: 93.22739174872532 - type: max_f1 value: 87.87878787878789 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: None config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 34.40384098107476 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: None config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 29.79004125896283 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: None config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 42.544704411607 - type: mrr value: 43.00333959341312 - task: type: Summarization dataset: name: MTEB SummEval type: None config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 27.97500813769544 - type: cos_sim_spearman value: 27.944732299630086 - type: dot_pearson value: 27.97500812408448 - type: dot_spearman value: 27.91532727171099 - task: type: Retrieval dataset: name: MTEB Touche2020 type: None config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 1.327 - type: map_at_10 value: 7.445 - type: map_at_100 value: 14.402999999999999 - type: map_at_1000 value: 16.102 - type: map_at_3 value: 3.689 - type: map_at_5 value: 5.305 - type: mrr_at_1 value: 22.448999999999998 - type: mrr_at_10 value: 38.279 - type: mrr_at_100 value: 39.434999999999995 - type: mrr_at_1000 value: 39.451 - type: mrr_at_3 value: 33.672999999999995 - type: mrr_at_5 value: 36.939 - type: ndcg_at_1 value: 20.408 - type: ndcg_at_10 value: 21.571 - type: ndcg_at_100 value: 35.510000000000005 - type: ndcg_at_1000 value: 47.25 - type: ndcg_at_3 value: 22.866 - type: ndcg_at_5 value: 23.951 - type: precision_at_1 value: 22.448999999999998 - type: precision_at_10 value: 21.224 - type: precision_at_100 value: 8.449 - type: precision_at_1000 value: 1.59 - type: precision_at_3 value: 25.85 - type: precision_at_5 value: 26.531 - type: recall_at_1 value: 1.327 - type: recall_at_10 value: 13.715 - type: recall_at_100 value: 49.903 - type: recall_at_1000 value: 86.25 - type: recall_at_3 value: 5.0040000000000004 - type: recall_at_5 value: 7.959 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: None config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 75.24119999999999 - type: ap value: 16.333693949079212 - type: f1 value: 57.95963832110142 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: None config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 50.80362195812111 - type: f1 value: 51.07007731951286 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: None config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 38.237341017271426 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: None config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 82.87536508314956 - type: cos_sim_ap value: 62.453427699972195 - type: cos_sim_f1 value: 60.47918683446273 - type: cos_sim_precision value: 55.85605721949038 - type: cos_sim_recall value: 65.93667546174143 - type: dot_accuracy value: 82.87536508314956 - type: dot_ap value: 62.453427699972195 - type: dot_f1 value: 60.47918683446273 - type: dot_precision value: 55.85605721949038 - type: dot_recall value: 65.93667546174143 - type: euclidean_accuracy value: 82.87536508314956 - type: euclidean_ap value: 62.453427699972195 - type: euclidean_f1 value: 60.47918683446273 - type: euclidean_precision value: 55.85605721949038 - type: euclidean_recall value: 65.93667546174143 - type: manhattan_accuracy value: 80.44942480777254 - type: manhattan_ap value: 54.51058097426563 - type: manhattan_f1 value: 53.99701556171392 - type: manhattan_precision value: 45.29685264663805 - type: manhattan_recall value: 66.83377308707124 - type: max_accuracy value: 82.87536508314956 - type: max_ap value: 62.453427699972195 - type: max_f1 value: 60.47918683446273 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: None config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 87.47622928552023 - type: cos_sim_ap value: 82.95452704175317 - type: cos_sim_f1 value: 75.16585581601062 - type: cos_sim_precision value: 72.0941742081448 - type: cos_sim_recall value: 78.51093316907914 - type: dot_accuracy value: 87.47622928552023 - type: dot_ap value: 82.95452715549013 - type: dot_f1 value: 75.16585581601062 - type: dot_precision value: 72.0941742081448 - type: dot_recall value: 78.51093316907914 - type: euclidean_accuracy value: 87.47622928552023 - type: euclidean_ap value: 82.95452684622329 - type: euclidean_f1 value: 75.16585581601062 - type: euclidean_precision value: 72.0941742081448 - type: euclidean_recall value: 78.51093316907914 - type: manhattan_accuracy value: 87.0396243256879 - type: manhattan_ap value: 81.90067228921978 - type: manhattan_f1 value: 74.29892820437529 - type: manhattan_precision value: 70.99466891133558 - type: manhattan_recall value: 77.9257776408993 - type: max_accuracy value: 87.47622928552023 - type: max_ap value: 82.95452715549013 - type: max_f1 value: 75.16585581601062 ---
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
mlx-community/multilingual-e5-large-mlx
mlx-community
feature-extraction
[ "sentence-transformers", "xlm-roberta", "mteb", "Sentence Transformers", "sentence-similarity", "feature-extraction", "mlx", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,704
1,704
1,978
3
--- language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: mit tags: - mteb - Sentence Transformers - sentence-similarity - feature-extraction - sentence-transformers - mlx model-index: - name: multilingual-e5-large results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 79.05970149253731 - type: ap value: 43.486574390835635 - type: f1 value: 73.32700092140148 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (de) type: mteb/amazon_counterfactual config: de split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 71.22055674518201 - type: ap value: 81.55756710830498 - type: f1 value: 69.28271787752661 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en-ext) type: mteb/amazon_counterfactual config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 80.41979010494754 - type: ap value: 29.34879922376344 - type: f1 value: 67.62475449011278 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (ja) type: mteb/amazon_counterfactual config: ja split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 77.8372591006424 - type: ap value: 26.557560591210738 - type: f1 value: 64.96619417368707 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 93.489875 - type: ap value: 90.98758636917603 - type: f1 value: 93.48554819717332 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.564 - type: f1 value: 46.75122173518047 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (de) type: mteb/amazon_reviews_multi config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 45.400000000000006 - type: f1 value: 44.17195682400632 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (es) type: mteb/amazon_reviews_multi config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 43.068 - type: f1 value: 42.38155696855596 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 41.89 - type: f1 value: 40.84407321682663 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (ja) type: mteb/amazon_reviews_multi config: ja split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 40.120000000000005 - type: f1 value: 39.522976223819114 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 38.832 - type: f1 value: 38.0392533394713 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 30.725 - type: map_at_10 value: 46.055 - type: map_at_100 value: 46.900999999999996 - type: map_at_1000 value: 46.911 - type: map_at_3 value: 41.548 - type: map_at_5 value: 44.297 - type: mrr_at_1 value: 31.152 - type: mrr_at_10 value: 46.231 - type: mrr_at_100 value: 47.07 - type: mrr_at_1000 value: 47.08 - type: mrr_at_3 value: 41.738 - type: mrr_at_5 value: 44.468999999999994 - type: ndcg_at_1 value: 30.725 - type: ndcg_at_10 value: 54.379999999999995 - type: ndcg_at_100 value: 58.138 - type: ndcg_at_1000 value: 58.389 - type: ndcg_at_3 value: 45.156 - type: ndcg_at_5 value: 50.123 - type: precision_at_1 value: 30.725 - type: precision_at_10 value: 8.087 - type: precision_at_100 value: 0.9769999999999999 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 18.54 - type: precision_at_5 value: 13.542000000000002 - type: recall_at_1 value: 30.725 - type: recall_at_10 value: 80.868 - type: recall_at_100 value: 97.653 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 55.619 - type: recall_at_5 value: 67.71000000000001 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 44.30960650674069 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 38.427074197498996 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 60.28270056031872 - type: mrr value: 74.38332673789738 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 84.05942144105269 - type: cos_sim_spearman value: 82.51212105850809 - type: euclidean_pearson value: 81.95639829909122 - type: euclidean_spearman value: 82.3717564144213 - type: manhattan_pearson value: 81.79273425468256 - type: manhattan_spearman value: 82.20066817871039 - task: type: BitextMining dataset: name: MTEB BUCC (de-en) type: mteb/bucc-bitext-mining config: de-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.46764091858039 - type: f1 value: 99.37717466945023 - type: precision value: 99.33194154488518 - type: recall value: 99.46764091858039 - task: type: BitextMining dataset: name: MTEB BUCC (fr-en) type: mteb/bucc-bitext-mining config: fr-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 98.29407880255337 - type: f1 value: 98.11248073959938 - type: precision value: 98.02443319392472 - type: recall value: 98.29407880255337 - task: type: BitextMining dataset: name: MTEB BUCC (ru-en) type: mteb/bucc-bitext-mining config: ru-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 97.79009352268791 - type: f1 value: 97.5176076665512 - type: precision value: 97.38136473848286 - type: recall value: 97.79009352268791 - task: type: BitextMining dataset: name: MTEB BUCC (zh-en) type: mteb/bucc-bitext-mining config: zh-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.26276987888363 - type: f1 value: 99.20133403545726 - type: precision value: 99.17500438827453 - type: recall value: 99.26276987888363 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 84.72727272727273 - type: f1 value: 84.67672206031433 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.34220182511161 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 33.4987096128766 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 25.558249999999997 - type: map_at_10 value: 34.44425000000001 - type: map_at_100 value: 35.59833333333333 - type: map_at_1000 value: 35.706916666666665 - type: map_at_3 value: 31.691749999999995 - type: map_at_5 value: 33.252916666666664 - type: mrr_at_1 value: 30.252666666666666 - type: mrr_at_10 value: 38.60675 - type: mrr_at_100 value: 39.42666666666666 - type: mrr_at_1000 value: 39.48408333333334 - type: mrr_at_3 value: 36.17441666666665 - type: mrr_at_5 value: 37.56275 - type: ndcg_at_1 value: 30.252666666666666 - type: ndcg_at_10 value: 39.683 - type: ndcg_at_100 value: 44.68541666666667 - type: ndcg_at_1000 value: 46.94316666666668 - type: ndcg_at_3 value: 34.961749999999995 - type: ndcg_at_5 value: 37.215666666666664 - type: precision_at_1 value: 30.252666666666666 - type: precision_at_10 value: 6.904166666666667 - type: precision_at_100 value: 1.0989999999999995 - type: precision_at_1000 value: 0.14733333333333334 - type: precision_at_3 value: 16.037666666666667 - type: precision_at_5 value: 11.413583333333333 - type: recall_at_1 value: 25.558249999999997 - type: recall_at_10 value: 51.13341666666666 - type: recall_at_100 value: 73.08366666666667 - type: recall_at_1000 value: 88.79483333333334 - type: recall_at_3 value: 37.989083333333326 - type: recall_at_5 value: 43.787833333333325 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 10.338 - type: map_at_10 value: 18.360000000000003 - type: map_at_100 value: 19.942 - type: map_at_1000 value: 20.134 - type: map_at_3 value: 15.174000000000001 - type: map_at_5 value: 16.830000000000002 - type: mrr_at_1 value: 23.257 - type: mrr_at_10 value: 33.768 - type: mrr_at_100 value: 34.707 - type: mrr_at_1000 value: 34.766000000000005 - type: mrr_at_3 value: 30.977 - type: mrr_at_5 value: 32.528 - type: ndcg_at_1 value: 23.257 - type: ndcg_at_10 value: 25.733 - type: ndcg_at_100 value: 32.288 - type: ndcg_at_1000 value: 35.992000000000004 - type: ndcg_at_3 value: 20.866 - type: ndcg_at_5 value: 22.612 - type: precision_at_1 value: 23.257 - type: precision_at_10 value: 8.124 - type: precision_at_100 value: 1.518 - type: precision_at_1000 value: 0.219 - type: precision_at_3 value: 15.679000000000002 - type: precision_at_5 value: 12.117 - type: recall_at_1 value: 10.338 - type: recall_at_10 value: 31.154 - type: recall_at_100 value: 54.161 - type: recall_at_1000 value: 75.21900000000001 - type: recall_at_3 value: 19.427 - type: recall_at_5 value: 24.214 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.498 - type: map_at_10 value: 19.103 - type: map_at_100 value: 27.375 - type: map_at_1000 value: 28.981 - type: map_at_3 value: 13.764999999999999 - type: map_at_5 value: 15.950000000000001 - type: mrr_at_1 value: 65.5 - type: mrr_at_10 value: 74.53800000000001 - type: mrr_at_100 value: 74.71799999999999 - type: mrr_at_1000 value: 74.725 - type: mrr_at_3 value: 72.792 - type: mrr_at_5 value: 73.554 - type: ndcg_at_1 value: 53.37499999999999 - type: ndcg_at_10 value: 41.286 - type: ndcg_at_100 value: 45.972 - type: ndcg_at_1000 value: 53.123 - type: ndcg_at_3 value: 46.172999999999995 - type: ndcg_at_5 value: 43.033 - type: precision_at_1 value: 65.5 - type: precision_at_10 value: 32.725 - type: precision_at_100 value: 10.683 - type: precision_at_1000 value: 1.978 - type: precision_at_3 value: 50 - type: precision_at_5 value: 41.349999999999994 - type: recall_at_1 value: 8.498 - type: recall_at_10 value: 25.070999999999998 - type: recall_at_100 value: 52.383 - type: recall_at_1000 value: 74.91499999999999 - type: recall_at_3 value: 15.207999999999998 - type: recall_at_5 value: 18.563 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 46.5 - type: f1 value: 41.93833713984145 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 67.914 - type: map_at_10 value: 78.10000000000001 - type: map_at_100 value: 78.333 - type: map_at_1000 value: 78.346 - type: map_at_3 value: 76.626 - type: map_at_5 value: 77.627 - type: mrr_at_1 value: 72.74199999999999 - type: mrr_at_10 value: 82.414 - type: mrr_at_100 value: 82.511 - type: mrr_at_1000 value: 82.513 - type: mrr_at_3 value: 81.231 - type: mrr_at_5 value: 82.065 - type: ndcg_at_1 value: 72.74199999999999 - type: ndcg_at_10 value: 82.806 - type: ndcg_at_100 value: 83.677 - type: ndcg_at_1000 value: 83.917 - type: ndcg_at_3 value: 80.305 - type: ndcg_at_5 value: 81.843 - type: precision_at_1 value: 72.74199999999999 - type: precision_at_10 value: 10.24 - type: precision_at_100 value: 1.089 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 31.268 - type: precision_at_5 value: 19.706000000000003 - type: recall_at_1 value: 67.914 - type: recall_at_10 value: 92.889 - type: recall_at_100 value: 96.42699999999999 - type: recall_at_1000 value: 97.92 - type: recall_at_3 value: 86.21 - type: recall_at_5 value: 90.036 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 22.166 - type: map_at_10 value: 35.57 - type: map_at_100 value: 37.405 - type: map_at_1000 value: 37.564 - type: map_at_3 value: 30.379 - type: map_at_5 value: 33.324 - type: mrr_at_1 value: 43.519000000000005 - type: mrr_at_10 value: 51.556000000000004 - type: mrr_at_100 value: 52.344 - type: mrr_at_1000 value: 52.373999999999995 - type: mrr_at_3 value: 48.868 - type: mrr_at_5 value: 50.319 - type: ndcg_at_1 value: 43.519000000000005 - type: ndcg_at_10 value: 43.803 - type: ndcg_at_100 value: 50.468999999999994 - type: ndcg_at_1000 value: 53.111 - type: ndcg_at_3 value: 38.893 - type: ndcg_at_5 value: 40.653 - type: precision_at_1 value: 43.519000000000005 - type: precision_at_10 value: 12.253 - type: precision_at_100 value: 1.931 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 25.617 - type: precision_at_5 value: 19.383 - type: recall_at_1 value: 22.166 - type: recall_at_10 value: 51.6 - type: recall_at_100 value: 76.574 - type: recall_at_1000 value: 92.192 - type: recall_at_3 value: 34.477999999999994 - type: recall_at_5 value: 41.835 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 39.041 - type: map_at_10 value: 62.961999999999996 - type: map_at_100 value: 63.79899999999999 - type: map_at_1000 value: 63.854 - type: map_at_3 value: 59.399 - type: map_at_5 value: 61.669 - type: mrr_at_1 value: 78.082 - type: mrr_at_10 value: 84.321 - type: mrr_at_100 value: 84.49600000000001 - type: mrr_at_1000 value: 84.502 - type: mrr_at_3 value: 83.421 - type: mrr_at_5 value: 83.977 - type: ndcg_at_1 value: 78.082 - type: ndcg_at_10 value: 71.229 - type: ndcg_at_100 value: 74.10900000000001 - type: ndcg_at_1000 value: 75.169 - type: ndcg_at_3 value: 66.28699999999999 - type: ndcg_at_5 value: 69.084 - type: precision_at_1 value: 78.082 - type: precision_at_10 value: 14.993 - type: precision_at_100 value: 1.7239999999999998 - type: precision_at_1000 value: 0.186 - type: precision_at_3 value: 42.737 - type: precision_at_5 value: 27.843 - type: recall_at_1 value: 39.041 - type: recall_at_10 value: 74.96300000000001 - type: recall_at_100 value: 86.199 - type: recall_at_1000 value: 93.228 - type: recall_at_3 value: 64.105 - type: recall_at_5 value: 69.608 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 90.23160000000001 - type: ap value: 85.5674856808308 - type: f1 value: 90.18033354786317 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 24.091 - type: map_at_10 value: 36.753 - type: map_at_100 value: 37.913000000000004 - type: map_at_1000 value: 37.958999999999996 - type: map_at_3 value: 32.818999999999996 - type: map_at_5 value: 35.171 - type: mrr_at_1 value: 24.742 - type: mrr_at_10 value: 37.285000000000004 - type: mrr_at_100 value: 38.391999999999996 - type: mrr_at_1000 value: 38.431 - type: mrr_at_3 value: 33.440999999999995 - type: mrr_at_5 value: 35.75 - type: ndcg_at_1 value: 24.742 - type: ndcg_at_10 value: 43.698 - type: ndcg_at_100 value: 49.145 - type: ndcg_at_1000 value: 50.23800000000001 - type: ndcg_at_3 value: 35.769 - type: ndcg_at_5 value: 39.961999999999996 - type: precision_at_1 value: 24.742 - type: precision_at_10 value: 6.7989999999999995 - type: precision_at_100 value: 0.95 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 15.096000000000002 - type: precision_at_5 value: 11.183 - type: recall_at_1 value: 24.091 - type: recall_at_10 value: 65.068 - type: recall_at_100 value: 89.899 - type: recall_at_1000 value: 98.16 - type: recall_at_3 value: 43.68 - type: recall_at_5 value: 53.754999999999995 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.66621067031465 - type: f1 value: 93.49622853272142 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (de) type: mteb/mtop_domain config: de split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 91.94702733164272 - type: f1 value: 91.17043441745282 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (es) type: mteb/mtop_domain config: es split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.20146764509674 - type: f1 value: 91.98359080555608 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (fr) type: mteb/mtop_domain config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 88.99780770435328 - type: f1 value: 89.19746342724068 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (hi) type: mteb/mtop_domain config: hi split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.78486912871998 - type: f1 value: 89.24578823628642 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (th) type: mteb/mtop_domain config: th split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 88.74502712477394 - type: f1 value: 89.00297573881542 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 77.9046967624259 - type: f1 value: 59.36787125785957 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (de) type: mteb/mtop_intent config: de split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.5280360664976 - type: f1 value: 57.17723440888718 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (es) type: mteb/mtop_intent config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 75.44029352901934 - type: f1 value: 54.052855531072964 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (fr) type: mteb/mtop_intent config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 70.5606013153774 - type: f1 value: 52.62215934386531 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (hi) type: mteb/mtop_intent config: hi split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 73.11581211903908 - type: f1 value: 52.341291845645465 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (th) type: mteb/mtop_intent config: th split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.28933092224233 - type: f1 value: 57.07918745504911 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (af) type: mteb/amazon_massive_intent config: af split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.38063214525892 - type: f1 value: 59.46463723443009 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (am) type: mteb/amazon_massive_intent config: am split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.06926698049766 - type: f1 value: 52.49084283283562 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ar) type: mteb/amazon_massive_intent config: ar split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.74983187626093 - type: f1 value: 56.960640620165904 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (az) type: mteb/amazon_massive_intent config: az split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.86550100874243 - type: f1 value: 62.47370548140688 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (bn) type: mteb/amazon_massive_intent config: bn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.971082716879636 - type: f1 value: 61.03812421957381 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (cy) type: mteb/amazon_massive_intent config: cy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.98318762609282 - type: f1 value: 51.51207916008392 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (da) type: mteb/amazon_massive_intent config: da split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.45527908540686 - type: f1 value: 66.16631905400318 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (de) type: mteb/amazon_massive_intent config: de split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.32750504371216 - type: f1 value: 66.16755288646591 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (el) type: mteb/amazon_massive_intent config: el split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.09213180901143 - type: f1 value: 66.95654394661507 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.75588433086752 - type: f1 value: 71.79973779656923 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (es) type: mteb/amazon_massive_intent config: es split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.49428379287154 - type: f1 value: 68.37494379215734 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fa) type: mteb/amazon_massive_intent config: fa split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.90921318090115 - type: f1 value: 66.79517376481645 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fi) type: mteb/amazon_massive_intent config: fi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.12104909213181 - type: f1 value: 67.29448842879584 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fr) type: mteb/amazon_massive_intent config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.34095494283793 - type: f1 value: 67.01134288992947 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (he) type: mteb/amazon_massive_intent config: he split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.61264290517822 - type: f1 value: 64.68730512660757 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hi) type: mteb/amazon_massive_intent config: hi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.79757901815738 - type: f1 value: 65.24938539425598 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hu) type: mteb/amazon_massive_intent config: hu split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.68728984532616 - type: f1 value: 67.0487169762553 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hy) type: mteb/amazon_massive_intent config: hy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.07464694014795 - type: f1 value: 59.183532276789286 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (id) type: mteb/amazon_massive_intent config: id split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.04707464694015 - type: f1 value: 67.66829629003848 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (is) type: mteb/amazon_massive_intent config: is split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.42434431741762 - type: f1 value: 59.01617226544757 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (it) type: mteb/amazon_massive_intent config: it split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.53127101546738 - type: f1 value: 68.10033760906255 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ja) type: mteb/amazon_massive_intent config: ja split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.50504371217215 - type: f1 value: 69.74931103158923 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (jv) type: mteb/amazon_massive_intent config: jv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.91190316072628 - type: f1 value: 54.05551136648796 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ka) type: mteb/amazon_massive_intent config: ka split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 51.78211163416275 - type: f1 value: 49.874888544058535 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (km) type: mteb/amazon_massive_intent config: km split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 47.017484868863484 - type: f1 value: 44.53364263352014 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (kn) type: mteb/amazon_massive_intent config: kn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.16207128446537 - type: f1 value: 59.01185692320829 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ko) type: mteb/amazon_massive_intent config: ko split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.42501681237391 - type: f1 value: 67.13169450166086 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (lv) type: mteb/amazon_massive_intent config: lv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.0780094149294 - type: f1 value: 64.41720167850707 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ml) type: mteb/amazon_massive_intent config: ml split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.57162071284466 - type: f1 value: 62.414138683804424 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (mn) type: mteb/amazon_massive_intent config: mn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.71149966375252 - type: f1 value: 58.594805125087234 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ms) type: mteb/amazon_massive_intent config: ms split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.03900470746471 - type: f1 value: 63.87937257883887 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (my) type: mteb/amazon_massive_intent config: my split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.8776059179556 - type: f1 value: 57.48587618059131 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nb) type: mteb/amazon_massive_intent config: nb split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.87895090786819 - type: f1 value: 66.8141299430347 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nl) type: mteb/amazon_massive_intent config: nl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.45057162071285 - type: f1 value: 67.46444039673516 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.546738399462 - type: f1 value: 68.63640876702655 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pt) type: mteb/amazon_massive_intent config: pt split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.72965702757229 - type: f1 value: 68.54119560379115 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ro) type: mteb/amazon_massive_intent config: ro split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.35574983187625 - type: f1 value: 65.88844917691927 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ru) type: mteb/amazon_massive_intent config: ru split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.70477471418964 - type: f1 value: 69.19665697061978 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sl) type: mteb/amazon_massive_intent config: sl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.0880968392737 - type: f1 value: 64.76962317666086 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sq) type: mteb/amazon_massive_intent config: sq split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.18493611297916 - type: f1 value: 62.49984559035371 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sv) type: mteb/amazon_massive_intent config: sv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.75857431069265 - type: f1 value: 69.20053687623418 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sw) type: mteb/amazon_massive_intent config: sw split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.500336247478145 - type: f1 value: 55.2972398687929 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ta) type: mteb/amazon_massive_intent config: ta split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.68997982515132 - type: f1 value: 59.36848202755348 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (te) type: mteb/amazon_massive_intent config: te split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.01950235373235 - type: f1 value: 60.09351954625423 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (th) type: mteb/amazon_massive_intent config: th split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.29186281102892 - type: f1 value: 67.57860496703447 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tl) type: mteb/amazon_massive_intent config: tl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.77471418964357 - type: f1 value: 61.913983147713836 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tr) type: mteb/amazon_massive_intent config: tr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.87222595830532 - type: f1 value: 66.03679033708141 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ur) type: mteb/amazon_massive_intent config: ur split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.04505716207127 - type: f1 value: 61.28569169817908 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (vi) type: mteb/amazon_massive_intent config: vi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.38466711499663 - type: f1 value: 67.20532357036844 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.12306657700067 - type: f1 value: 68.91251226588182 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-TW) type: mteb/amazon_massive_intent config: zh-TW split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.20040349697378 - type: f1 value: 66.02657347714175 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (af) type: mteb/amazon_massive_scenario config: af split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.73907195696032 - type: f1 value: 66.98484521791418 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (am) type: mteb/amazon_massive_scenario config: am split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.58843308675185 - type: f1 value: 58.95591723092005 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ar) type: mteb/amazon_massive_scenario config: ar split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.22730329522528 - type: f1 value: 66.0894499712115 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (az) type: mteb/amazon_massive_scenario config: az split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.48285137861465 - type: f1 value: 65.21963176785157 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (bn) type: mteb/amazon_massive_scenario config: bn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.74714189643578 - type: f1 value: 66.8212192745412 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (cy) type: mteb/amazon_massive_scenario config: cy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.09213180901143 - type: f1 value: 56.70735546356339 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (da) type: mteb/amazon_massive_scenario config: da split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.05716207128448 - type: f1 value: 74.8413712365364 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (de) type: mteb/amazon_massive_scenario config: de split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.69737726967047 - type: f1 value: 74.7664341963 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (el) type: mteb/amazon_massive_scenario config: el split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.90383322125084 - type: f1 value: 73.59201554448323 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.51176866173503 - type: f1 value: 77.46104434577758 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (es) type: mteb/amazon_massive_scenario config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.31069266980496 - type: f1 value: 74.61048660675635 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fa) type: mteb/amazon_massive_scenario config: fa split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.95225285810356 - type: f1 value: 72.33160006574627 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fi) type: mteb/amazon_massive_scenario config: fi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.12373907195696 - type: f1 value: 73.20921012557481 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fr) type: mteb/amazon_massive_scenario config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.86684599865501 - type: f1 value: 73.82348774610831 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (he) type: mteb/amazon_massive_scenario config: he split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.40215198386012 - type: f1 value: 71.11945183971858 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hi) type: mteb/amazon_massive_scenario config: hi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.12844653665098 - type: f1 value: 71.34450495911766 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hu) type: mteb/amazon_massive_scenario config: hu split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.52252858103566 - type: f1 value: 73.98878711342999 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hy) type: mteb/amazon_massive_scenario config: hy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.93611297915265 - type: f1 value: 63.723200467653385 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (id) type: mteb/amazon_massive_scenario config: id split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.11903160726295 - type: f1 value: 73.82138439467096 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (is) type: mteb/amazon_massive_scenario config: is split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.15198386012105 - type: f1 value: 66.02172193802167 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (it) type: mteb/amazon_massive_scenario config: it split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.32414256893072 - type: f1 value: 74.30943421170574 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ja) type: mteb/amazon_massive_scenario config: ja split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.46805648957633 - type: f1 value: 77.62808409298209 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (jv) type: mteb/amazon_massive_scenario config: jv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.318762609280434 - type: f1 value: 62.094284066075076 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ka) type: mteb/amazon_massive_scenario config: ka split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 58.34902488231338 - type: f1 value: 57.12893860987984 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (km) type: mteb/amazon_massive_scenario config: km split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 50.88433086751849 - type: f1 value: 48.2272350802058 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (kn) type: mteb/amazon_massive_scenario config: kn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.4425016812374 - type: f1 value: 64.61463095996173 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ko) type: mteb/amazon_massive_scenario config: ko split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.04707464694015 - type: f1 value: 75.05099199098998 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (lv) type: mteb/amazon_massive_scenario config: lv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.50437121721586 - type: f1 value: 69.83397721096314 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ml) type: mteb/amazon_massive_scenario config: ml split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.94283792871553 - type: f1 value: 68.8704663703913 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (mn) type: mteb/amazon_massive_scenario config: mn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.79488903833222 - type: f1 value: 63.615424063345436 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ms) type: mteb/amazon_massive_scenario config: ms split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.88231338264963 - type: f1 value: 68.57892302593237 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (my) type: mteb/amazon_massive_scenario config: my split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.248150638870214 - type: f1 value: 61.06680605338809 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nb) type: mteb/amazon_massive_scenario config: nb split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.84196368527236 - type: f1 value: 74.52566464968763 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nl) type: mteb/amazon_massive_scenario config: nl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.8285137861466 - type: f1 value: 74.8853197608802 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.13248150638869 - type: f1 value: 74.3982040999179 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pt) type: mteb/amazon_massive_scenario config: pt split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.49024882313383 - type: f1 value: 73.82153848368573 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ro) type: mteb/amazon_massive_scenario config: ro split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.72158708809684 - type: f1 value: 71.85049433180541 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ru) type: mteb/amazon_massive_scenario config: ru split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.137861466039 - type: f1 value: 75.37628348188467 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sl) type: mteb/amazon_massive_scenario config: sl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.86953597848016 - type: f1 value: 71.87537624521661 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sq) type: mteb/amazon_massive_scenario config: sq split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.27572293207801 - type: f1 value: 68.80017302344231 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sv) type: mteb/amazon_massive_scenario config: sv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.09952925353059 - type: f1 value: 76.07992707688408 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sw) type: mteb/amazon_massive_scenario config: sw split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.140551445864155 - type: f1 value: 61.73855010331415 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ta) type: mteb/amazon_massive_scenario config: ta split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.27774041694687 - type: f1 value: 64.83664868894539 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (te) type: mteb/amazon_massive_scenario config: te split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.69468728984533 - type: f1 value: 64.76239666920868 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (th) type: mteb/amazon_massive_scenario config: th split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.44653665097512 - type: f1 value: 73.14646052013873 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tl) type: mteb/amazon_massive_scenario config: tl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.71351714862139 - type: f1 value: 66.67212180163382 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tr) type: mteb/amazon_massive_scenario config: tr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.9946200403497 - type: f1 value: 73.87348793725525 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ur) type: mteb/amazon_massive_scenario config: ur split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.15400134498992 - type: f1 value: 67.09433241421094 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (vi) type: mteb/amazon_massive_scenario config: vi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.11365164761264 - type: f1 value: 73.59502539433753 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.82582380632145 - type: f1 value: 76.89992945316313 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-TW) type: mteb/amazon_massive_scenario config: zh-TW split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.81237390719569 - type: f1 value: 72.36499770986265 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.480506569594695 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 29.71252128004552 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.421396787056548 - type: mrr value: 32.48155274872267 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.595 - type: map_at_10 value: 12.642000000000001 - type: map_at_100 value: 15.726 - type: map_at_1000 value: 17.061999999999998 - type: map_at_3 value: 9.125 - type: map_at_5 value: 10.866000000000001 - type: mrr_at_1 value: 43.344 - type: mrr_at_10 value: 52.227999999999994 - type: mrr_at_100 value: 52.898999999999994 - type: mrr_at_1000 value: 52.944 - type: mrr_at_3 value: 49.845 - type: mrr_at_5 value: 51.115 - type: ndcg_at_1 value: 41.949999999999996 - type: ndcg_at_10 value: 33.995 - type: ndcg_at_100 value: 30.869999999999997 - type: ndcg_at_1000 value: 39.487 - type: ndcg_at_3 value: 38.903999999999996 - type: ndcg_at_5 value: 37.236999999999995 - type: precision_at_1 value: 43.344 - type: precision_at_10 value: 25.480000000000004 - type: precision_at_100 value: 7.672 - type: precision_at_1000 value: 2.028 - type: precision_at_3 value: 36.636 - type: precision_at_5 value: 32.632 - type: recall_at_1 value: 5.595 - type: recall_at_10 value: 16.466 - type: recall_at_100 value: 31.226 - type: recall_at_1000 value: 62.778999999999996 - type: recall_at_3 value: 9.931 - type: recall_at_5 value: 12.884 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 40.414 - type: map_at_10 value: 56.754000000000005 - type: map_at_100 value: 57.457 - type: map_at_1000 value: 57.477999999999994 - type: map_at_3 value: 52.873999999999995 - type: map_at_5 value: 55.175 - type: mrr_at_1 value: 45.278 - type: mrr_at_10 value: 59.192 - type: mrr_at_100 value: 59.650000000000006 - type: mrr_at_1000 value: 59.665 - type: mrr_at_3 value: 56.141 - type: mrr_at_5 value: 57.998000000000005 - type: ndcg_at_1 value: 45.278 - type: ndcg_at_10 value: 64.056 - type: ndcg_at_100 value: 66.89 - type: ndcg_at_1000 value: 67.364 - type: ndcg_at_3 value: 56.97 - type: ndcg_at_5 value: 60.719 - type: precision_at_1 value: 45.278 - type: precision_at_10 value: 9.994 - type: precision_at_100 value: 1.165 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 25.512 - type: precision_at_5 value: 17.509 - type: recall_at_1 value: 40.414 - type: recall_at_10 value: 83.596 - type: recall_at_100 value: 95.72 - type: recall_at_1000 value: 99.24 - type: recall_at_3 value: 65.472 - type: recall_at_5 value: 74.039 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.352 - type: map_at_10 value: 84.369 - type: map_at_100 value: 85.02499999999999 - type: map_at_1000 value: 85.04 - type: map_at_3 value: 81.42399999999999 - type: map_at_5 value: 83.279 - type: mrr_at_1 value: 81.05 - type: mrr_at_10 value: 87.401 - type: mrr_at_100 value: 87.504 - type: mrr_at_1000 value: 87.505 - type: mrr_at_3 value: 86.443 - type: mrr_at_5 value: 87.10799999999999 - type: ndcg_at_1 value: 81.04 - type: ndcg_at_10 value: 88.181 - type: ndcg_at_100 value: 89.411 - type: ndcg_at_1000 value: 89.507 - type: ndcg_at_3 value: 85.28099999999999 - type: ndcg_at_5 value: 86.888 - type: precision_at_1 value: 81.04 - type: precision_at_10 value: 13.406 - type: precision_at_100 value: 1.5350000000000001 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.31 - type: precision_at_5 value: 24.54 - type: recall_at_1 value: 70.352 - type: recall_at_10 value: 95.358 - type: recall_at_100 value: 99.541 - type: recall_at_1000 value: 99.984 - type: recall_at_3 value: 87.111 - type: recall_at_5 value: 91.643 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 46.54068723291946 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 63.216287629895994 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.023000000000001 - type: map_at_10 value: 10.071 - type: map_at_100 value: 11.892 - type: map_at_1000 value: 12.196 - type: map_at_3 value: 7.234 - type: map_at_5 value: 8.613999999999999 - type: mrr_at_1 value: 19.900000000000002 - type: mrr_at_10 value: 30.516 - type: mrr_at_100 value: 31.656000000000002 - type: mrr_at_1000 value: 31.723000000000003 - type: mrr_at_3 value: 27.400000000000002 - type: mrr_at_5 value: 29.270000000000003 - type: ndcg_at_1 value: 19.900000000000002 - type: ndcg_at_10 value: 17.474 - type: ndcg_at_100 value: 25.020999999999997 - type: ndcg_at_1000 value: 30.728 - type: ndcg_at_3 value: 16.588 - type: ndcg_at_5 value: 14.498 - type: precision_at_1 value: 19.900000000000002 - type: precision_at_10 value: 9.139999999999999 - type: precision_at_100 value: 2.011 - type: precision_at_1000 value: 0.33899999999999997 - type: precision_at_3 value: 15.667 - type: precision_at_5 value: 12.839999999999998 - type: recall_at_1 value: 4.023000000000001 - type: recall_at_10 value: 18.497 - type: recall_at_100 value: 40.8 - type: recall_at_1000 value: 68.812 - type: recall_at_3 value: 9.508 - type: recall_at_5 value: 12.983 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.967008785134 - type: cos_sim_spearman value: 80.23142141101837 - type: euclidean_pearson value: 81.20166064704539 - type: euclidean_spearman value: 80.18961335654585 - type: manhattan_pearson value: 81.13925443187625 - type: manhattan_spearman value: 80.07948723044424 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.94262461316023 - type: cos_sim_spearman value: 80.01596278563865 - type: euclidean_pearson value: 83.80799622922581 - type: euclidean_spearman value: 79.94984954947103 - type: manhattan_pearson value: 83.68473841756281 - type: manhattan_spearman value: 79.84990707951822 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 80.57346443146068 - type: cos_sim_spearman value: 81.54689837570866 - type: euclidean_pearson value: 81.10909881516007 - type: euclidean_spearman value: 81.56746243261762 - type: manhattan_pearson value: 80.87076036186582 - type: manhattan_spearman value: 81.33074987964402 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 79.54733787179849 - type: cos_sim_spearman value: 77.72202105610411 - type: euclidean_pearson value: 78.9043595478849 - type: euclidean_spearman value: 77.93422804309435 - type: manhattan_pearson value: 78.58115121621368 - type: manhattan_spearman value: 77.62508135122033 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.59880017237558 - type: cos_sim_spearman value: 89.31088630824758 - type: euclidean_pearson value: 88.47069261564656 - type: euclidean_spearman value: 89.33581971465233 - type: manhattan_pearson value: 88.40774264100956 - type: manhattan_spearman value: 89.28657485627835 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.08055117917084 - type: cos_sim_spearman value: 85.78491813080304 - type: euclidean_pearson value: 84.99329155500392 - type: euclidean_spearman value: 85.76728064677287 - type: manhattan_pearson value: 84.87947428989587 - type: manhattan_spearman value: 85.62429454917464 - task: type: STS dataset: name: MTEB STS17 (ko-ko) type: mteb/sts17-crosslingual-sts config: ko-ko split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 82.14190939287384 - type: cos_sim_spearman value: 82.27331573306041 - type: euclidean_pearson value: 81.891896953716 - type: euclidean_spearman value: 82.37695542955998 - type: manhattan_pearson value: 81.73123869460504 - type: manhattan_spearman value: 82.19989168441421 - task: type: STS dataset: name: MTEB STS17 (ar-ar) type: mteb/sts17-crosslingual-sts config: ar-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 76.84695301843362 - type: cos_sim_spearman value: 77.87790986014461 - type: euclidean_pearson value: 76.91981583106315 - type: euclidean_spearman value: 77.88154772749589 - type: manhattan_pearson value: 76.94953277451093 - type: manhattan_spearman value: 77.80499230728604 - task: type: STS dataset: name: MTEB STS17 (en-ar) type: mteb/sts17-crosslingual-sts config: en-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 75.44657840482016 - type: cos_sim_spearman value: 75.05531095119674 - type: euclidean_pearson value: 75.88161755829299 - type: euclidean_spearman value: 74.73176238219332 - type: manhattan_pearson value: 75.63984765635362 - type: manhattan_spearman value: 74.86476440770737 - task: type: STS dataset: name: MTEB STS17 (en-de) type: mteb/sts17-crosslingual-sts config: en-de split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.64700140524133 - type: cos_sim_spearman value: 86.16014210425672 - type: euclidean_pearson value: 86.49086860843221 - type: euclidean_spearman value: 86.09729326815614 - type: manhattan_pearson value: 86.43406265125513 - type: manhattan_spearman value: 86.17740150939994 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.91170098764921 - type: cos_sim_spearman value: 88.12437004058931 - type: euclidean_pearson value: 88.81828254494437 - type: euclidean_spearman value: 88.14831794572122 - type: manhattan_pearson value: 88.93442183448961 - type: manhattan_spearman value: 88.15254630778304 - task: type: STS dataset: name: MTEB STS17 (en-tr) type: mteb/sts17-crosslingual-sts config: en-tr split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 72.91390577997292 - type: cos_sim_spearman value: 71.22979457536074 - type: euclidean_pearson value: 74.40314008106749 - type: euclidean_spearman value: 72.54972136083246 - type: manhattan_pearson value: 73.85687539530218 - type: manhattan_spearman value: 72.09500771742637 - task: type: STS dataset: name: MTEB STS17 (es-en) type: mteb/sts17-crosslingual-sts config: es-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 80.9301067983089 - type: cos_sim_spearman value: 80.74989828346473 - type: euclidean_pearson value: 81.36781301814257 - type: euclidean_spearman value: 80.9448819964426 - type: manhattan_pearson value: 81.0351322685609 - type: manhattan_spearman value: 80.70192121844177 - task: type: STS dataset: name: MTEB STS17 (es-es) type: mteb/sts17-crosslingual-sts config: es-es split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.13820465980005 - type: cos_sim_spearman value: 86.73532498758757 - type: euclidean_pearson value: 87.21329451846637 - type: euclidean_spearman value: 86.57863198601002 - type: manhattan_pearson value: 87.06973713818554 - type: manhattan_spearman value: 86.47534918791499 - task: type: STS dataset: name: MTEB STS17 (fr-en) type: mteb/sts17-crosslingual-sts config: fr-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.48720108904415 - type: cos_sim_spearman value: 85.62221757068387 - type: euclidean_pearson value: 86.1010129512749 - type: euclidean_spearman value: 85.86580966509942 - type: manhattan_pearson value: 86.26800938808971 - type: manhattan_spearman value: 85.88902721678429 - task: type: STS dataset: name: MTEB STS17 (it-en) type: mteb/sts17-crosslingual-sts config: it-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 83.98021347333516 - type: cos_sim_spearman value: 84.53806553803501 - type: euclidean_pearson value: 84.61483347248364 - type: euclidean_spearman value: 85.14191408011702 - type: manhattan_pearson value: 84.75297588825967 - type: manhattan_spearman value: 85.33176753669242 - task: type: STS dataset: name: MTEB STS17 (nl-en) type: mteb/sts17-crosslingual-sts config: nl-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 84.51856644893233 - type: cos_sim_spearman value: 85.27510748506413 - type: euclidean_pearson value: 85.09886861540977 - type: euclidean_spearman value: 85.62579245860887 - type: manhattan_pearson value: 84.93017860464607 - type: manhattan_spearman value: 85.5063988898453 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.581573200584195 - type: cos_sim_spearman value: 63.05503590247928 - type: euclidean_pearson value: 63.652564812602094 - type: euclidean_spearman value: 62.64811520876156 - type: manhattan_pearson value: 63.506842893061076 - type: manhattan_spearman value: 62.51289573046917 - task: type: STS dataset: name: MTEB STS22 (de) type: mteb/sts22-crosslingual-sts config: de split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 48.2248801729127 - type: cos_sim_spearman value: 56.5936604678561 - type: euclidean_pearson value: 43.98149464089 - type: euclidean_spearman value: 56.108561882423615 - type: manhattan_pearson value: 43.86880305903564 - type: manhattan_spearman value: 56.04671150510166 - task: type: STS dataset: name: MTEB STS22 (es) type: mteb/sts22-crosslingual-sts config: es split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 55.17564527009831 - type: cos_sim_spearman value: 64.57978560979488 - type: euclidean_pearson value: 58.8818330154583 - type: euclidean_spearman value: 64.99214839071281 - type: manhattan_pearson value: 58.72671436121381 - type: manhattan_spearman value: 65.10713416616109 - task: type: STS dataset: name: MTEB STS22 (pl) type: mteb/sts22-crosslingual-sts config: pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 26.772131864023297 - type: cos_sim_spearman value: 34.68200792408681 - type: euclidean_pearson value: 16.68082419005441 - type: euclidean_spearman value: 34.83099932652166 - type: manhattan_pearson value: 16.52605949659529 - type: manhattan_spearman value: 34.82075801399475 - task: type: STS dataset: name: MTEB STS22 (tr) type: mteb/sts22-crosslingual-sts config: tr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 54.42415189043831 - type: cos_sim_spearman value: 63.54594264576758 - type: euclidean_pearson value: 57.36577498297745 - type: euclidean_spearman value: 63.111466379158074 - type: manhattan_pearson value: 57.584543715873885 - type: manhattan_spearman value: 63.22361054139183 - task: type: STS dataset: name: MTEB STS22 (ar) type: mteb/sts22-crosslingual-sts config: ar split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 47.55216762405518 - type: cos_sim_spearman value: 56.98670142896412 - type: euclidean_pearson value: 50.15318757562699 - type: euclidean_spearman value: 56.524941926541906 - type: manhattan_pearson value: 49.955618528674904 - type: manhattan_spearman value: 56.37102209240117 - task: type: STS dataset: name: MTEB STS22 (ru) type: mteb/sts22-crosslingual-sts config: ru split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 49.20540980338571 - type: cos_sim_spearman value: 59.9009453504406 - type: euclidean_pearson value: 49.557749853620535 - type: euclidean_spearman value: 59.76631621172456 - type: manhattan_pearson value: 49.62340591181147 - type: manhattan_spearman value: 59.94224880322436 - task: type: STS dataset: name: MTEB STS22 (zh) type: mteb/sts22-crosslingual-sts config: zh split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 51.508169956576985 - type: cos_sim_spearman value: 66.82461565306046 - type: euclidean_pearson value: 56.2274426480083 - type: euclidean_spearman value: 66.6775323848333 - type: manhattan_pearson value: 55.98277796300661 - type: manhattan_spearman value: 66.63669848497175 - task: type: STS dataset: name: MTEB STS22 (fr) type: mteb/sts22-crosslingual-sts config: fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 72.86478788045507 - type: cos_sim_spearman value: 76.7946552053193 - type: euclidean_pearson value: 75.01598530490269 - type: euclidean_spearman value: 76.83618917858281 - type: manhattan_pearson value: 74.68337628304332 - type: manhattan_spearman value: 76.57480204017773 - task: type: STS dataset: name: MTEB STS22 (de-en) type: mteb/sts22-crosslingual-sts config: de-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 55.922619099401984 - type: cos_sim_spearman value: 56.599362477240774 - type: euclidean_pearson value: 56.68307052369783 - type: euclidean_spearman value: 54.28760436777401 - type: manhattan_pearson value: 56.67763566500681 - type: manhattan_spearman value: 53.94619541711359 - task: type: STS dataset: name: MTEB STS22 (es-en) type: mteb/sts22-crosslingual-sts config: es-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 66.74357206710913 - type: cos_sim_spearman value: 72.5208244925311 - type: euclidean_pearson value: 67.49254562186032 - type: euclidean_spearman value: 72.02469076238683 - type: manhattan_pearson value: 67.45251772238085 - type: manhattan_spearman value: 72.05538819984538 - task: type: STS dataset: name: MTEB STS22 (it) type: mteb/sts22-crosslingual-sts config: it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 71.25734330033191 - type: cos_sim_spearman value: 76.98349083946823 - type: euclidean_pearson value: 73.71642838667736 - type: euclidean_spearman value: 77.01715504651384 - type: manhattan_pearson value: 73.61712711868105 - type: manhattan_spearman value: 77.01392571153896 - task: type: STS dataset: name: MTEB STS22 (pl-en) type: mteb/sts22-crosslingual-sts config: pl-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 63.18215462781212 - type: cos_sim_spearman value: 65.54373266117607 - type: euclidean_pearson value: 64.54126095439005 - type: euclidean_spearman value: 65.30410369102711 - type: manhattan_pearson value: 63.50332221148234 - type: manhattan_spearman value: 64.3455878104313 - task: type: STS dataset: name: MTEB STS22 (zh-en) type: mteb/sts22-crosslingual-sts config: zh-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.30509221440029 - type: cos_sim_spearman value: 65.99582704642478 - type: euclidean_pearson value: 63.43818859884195 - type: euclidean_spearman value: 66.83172582815764 - type: manhattan_pearson value: 63.055779168508764 - type: manhattan_spearman value: 65.49585020501449 - task: type: STS dataset: name: MTEB STS22 (es-it) type: mteb/sts22-crosslingual-sts config: es-it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 59.587830825340404 - type: cos_sim_spearman value: 68.93467614588089 - type: euclidean_pearson value: 62.3073527367404 - type: euclidean_spearman value: 69.69758171553175 - type: manhattan_pearson value: 61.9074580815789 - type: manhattan_spearman value: 69.57696375597865 - task: type: STS dataset: name: MTEB STS22 (de-fr) type: mteb/sts22-crosslingual-sts config: de-fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 57.143220125577066 - type: cos_sim_spearman value: 67.78857859159226 - type: euclidean_pearson value: 55.58225107923733 - type: euclidean_spearman value: 67.80662907184563 - type: manhattan_pearson value: 56.24953502726514 - type: manhattan_spearman value: 67.98262125431616 - task: type: STS dataset: name: MTEB STS22 (de-pl) type: mteb/sts22-crosslingual-sts config: de-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 21.826928900322066 - type: cos_sim_spearman value: 49.578506634400405 - type: euclidean_pearson value: 27.939890138843214 - type: euclidean_spearman value: 52.71950519136242 - type: manhattan_pearson value: 26.39878683847546 - type: manhattan_spearman value: 47.54609580342499 - task: type: STS dataset: name: MTEB STS22 (fr-pl) type: mteb/sts22-crosslingual-sts config: fr-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 57.27603854632001 - type: cos_sim_spearman value: 50.709255283710995 - type: euclidean_pearson value: 59.5419024445929 - type: euclidean_spearman value: 50.709255283710995 - type: manhattan_pearson value: 59.03256832438492 - type: manhattan_spearman value: 61.97797868009122 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.00757054859712 - type: cos_sim_spearman value: 87.29283629622222 - type: euclidean_pearson value: 86.54824171775536 - type: euclidean_spearman value: 87.24364730491402 - type: manhattan_pearson value: 86.5062156915074 - type: manhattan_spearman value: 87.15052170378574 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 82.03549357197389 - type: mrr value: 95.05437645143527 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 57.260999999999996 - type: map_at_10 value: 66.259 - type: map_at_100 value: 66.884 - type: map_at_1000 value: 66.912 - type: map_at_3 value: 63.685 - type: map_at_5 value: 65.35499999999999 - type: mrr_at_1 value: 60.333000000000006 - type: mrr_at_10 value: 67.5 - type: mrr_at_100 value: 68.013 - type: mrr_at_1000 value: 68.038 - type: mrr_at_3 value: 65.61099999999999 - type: mrr_at_5 value: 66.861 - type: ndcg_at_1 value: 60.333000000000006 - type: ndcg_at_10 value: 70.41 - type: ndcg_at_100 value: 73.10600000000001 - type: ndcg_at_1000 value: 73.846 - type: ndcg_at_3 value: 66.133 - type: ndcg_at_5 value: 68.499 - type: precision_at_1 value: 60.333000000000006 - type: precision_at_10 value: 9.232999999999999 - type: precision_at_100 value: 1.0630000000000002 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 25.667 - type: precision_at_5 value: 17.067 - type: recall_at_1 value: 57.260999999999996 - type: recall_at_10 value: 81.94399999999999 - type: recall_at_100 value: 93.867 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 70.339 - type: recall_at_5 value: 76.25 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.74356435643564 - type: cos_sim_ap value: 93.13411948212683 - type: cos_sim_f1 value: 86.80521991300147 - type: cos_sim_precision value: 84.00374181478017 - type: cos_sim_recall value: 89.8 - type: dot_accuracy value: 99.67920792079208 - type: dot_ap value: 89.27277565444479 - type: dot_f1 value: 83.9276990718124 - type: dot_precision value: 82.04393505253104 - type: dot_recall value: 85.9 - type: euclidean_accuracy value: 99.74257425742574 - type: euclidean_ap value: 93.17993008259062 - type: euclidean_f1 value: 86.69396110542476 - type: euclidean_precision value: 88.78406708595388 - type: euclidean_recall value: 84.7 - type: manhattan_accuracy value: 99.74257425742574 - type: manhattan_ap value: 93.14413755550099 - type: manhattan_f1 value: 86.82483594144371 - type: manhattan_precision value: 87.66564729867483 - type: manhattan_recall value: 86 - type: max_accuracy value: 99.74356435643564 - type: max_ap value: 93.17993008259062 - type: max_f1 value: 86.82483594144371 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 57.525863806168566 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 32.68850574423839 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.71580650644033 - type: mrr value: 50.50971903913081 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.152190498799484 - type: cos_sim_spearman value: 29.686180371952727 - type: dot_pearson value: 27.248664793816342 - type: dot_spearman value: 28.37748983721745 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.20400000000000001 - type: map_at_10 value: 1.6209999999999998 - type: map_at_100 value: 9.690999999999999 - type: map_at_1000 value: 23.733 - type: map_at_3 value: 0.575 - type: map_at_5 value: 0.885 - type: mrr_at_1 value: 78 - type: mrr_at_10 value: 86.56700000000001 - type: mrr_at_100 value: 86.56700000000001 - type: mrr_at_1000 value: 86.56700000000001 - type: mrr_at_3 value: 85.667 - type: mrr_at_5 value: 86.56700000000001 - type: ndcg_at_1 value: 76 - type: ndcg_at_10 value: 71.326 - type: ndcg_at_100 value: 54.208999999999996 - type: ndcg_at_1000 value: 49.252 - type: ndcg_at_3 value: 74.235 - type: ndcg_at_5 value: 73.833 - type: precision_at_1 value: 78 - type: precision_at_10 value: 74.8 - type: precision_at_100 value: 55.50000000000001 - type: precision_at_1000 value: 21.836 - type: precision_at_3 value: 78 - type: precision_at_5 value: 78 - type: recall_at_1 value: 0.20400000000000001 - type: recall_at_10 value: 1.894 - type: recall_at_100 value: 13.245999999999999 - type: recall_at_1000 value: 46.373 - type: recall_at_3 value: 0.613 - type: recall_at_5 value: 0.991 - task: type: BitextMining dataset: name: MTEB Tatoeba (sqi-eng) type: mteb/tatoeba-bitext-mining config: sqi-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.89999999999999 - type: f1 value: 94.69999999999999 - type: precision value: 94.11666666666667 - type: recall value: 95.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (fry-eng) type: mteb/tatoeba-bitext-mining config: fry-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 68.20809248554913 - type: f1 value: 63.431048720066066 - type: precision value: 61.69143958161298 - type: recall value: 68.20809248554913 - task: type: BitextMining dataset: name: MTEB Tatoeba (kur-eng) type: mteb/tatoeba-bitext-mining config: kur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 71.21951219512195 - type: f1 value: 66.82926829268293 - type: precision value: 65.1260162601626 - type: recall value: 71.21951219512195 - task: type: BitextMining dataset: name: MTEB Tatoeba (tur-eng) type: mteb/tatoeba-bitext-mining config: tur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.2 - type: f1 value: 96.26666666666667 - type: precision value: 95.8 - type: recall value: 97.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (deu-eng) type: mteb/tatoeba-bitext-mining config: deu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 99.3 - type: f1 value: 99.06666666666666 - type: precision value: 98.95 - type: recall value: 99.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (nld-eng) type: mteb/tatoeba-bitext-mining config: nld-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.63333333333333 - type: precision value: 96.26666666666668 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ron-eng) type: mteb/tatoeba-bitext-mining config: ron-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96 - type: f1 value: 94.86666666666666 - type: precision value: 94.31666666666668 - type: recall value: 96 - task: type: BitextMining dataset: name: MTEB Tatoeba (ang-eng) type: mteb/tatoeba-bitext-mining config: ang-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 47.01492537313433 - type: f1 value: 40.178867566927266 - type: precision value: 38.179295828549556 - type: recall value: 47.01492537313433 - task: type: BitextMining dataset: name: MTEB Tatoeba (ido-eng) type: mteb/tatoeba-bitext-mining config: ido-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.5 - type: f1 value: 83.62537480063796 - type: precision value: 82.44555555555554 - type: recall value: 86.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (jav-eng) type: mteb/tatoeba-bitext-mining config: jav-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.48780487804879 - type: f1 value: 75.45644599303138 - type: precision value: 73.37398373983739 - type: recall value: 80.48780487804879 - task: type: BitextMining dataset: name: MTEB Tatoeba (isl-eng) type: mteb/tatoeba-bitext-mining config: isl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.95666666666666 - type: precision value: 91.125 - type: recall value: 93.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (slv-eng) type: mteb/tatoeba-bitext-mining config: slv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.73754556500607 - type: f1 value: 89.65168084244632 - type: precision value: 88.73025516403402 - type: recall value: 91.73754556500607 - task: type: BitextMining dataset: name: MTEB Tatoeba (cym-eng) type: mteb/tatoeba-bitext-mining config: cym-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81.04347826086956 - type: f1 value: 76.2128364389234 - type: precision value: 74.2 - type: recall value: 81.04347826086956 - task: type: BitextMining dataset: name: MTEB Tatoeba (kaz-eng) type: mteb/tatoeba-bitext-mining config: kaz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.65217391304348 - type: f1 value: 79.4376811594203 - type: precision value: 77.65797101449274 - type: recall value: 83.65217391304348 - task: type: BitextMining dataset: name: MTEB Tatoeba (est-eng) type: mteb/tatoeba-bitext-mining config: est-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.5 - type: f1 value: 85.02690476190476 - type: precision value: 83.96261904761904 - type: recall value: 87.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (heb-eng) type: mteb/tatoeba-bitext-mining config: heb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.3 - type: f1 value: 86.52333333333333 - type: precision value: 85.22833333333332 - type: recall value: 89.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (gla-eng) type: mteb/tatoeba-bitext-mining config: gla-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.01809408926418 - type: f1 value: 59.00594446432805 - type: precision value: 56.827215807915444 - type: recall value: 65.01809408926418 - task: type: BitextMining dataset: name: MTEB Tatoeba (mar-eng) type: mteb/tatoeba-bitext-mining config: mar-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.2 - type: f1 value: 88.58 - type: precision value: 87.33333333333334 - type: recall value: 91.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (lat-eng) type: mteb/tatoeba-bitext-mining config: lat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 59.199999999999996 - type: f1 value: 53.299166276284915 - type: precision value: 51.3383908045977 - type: recall value: 59.199999999999996 - task: type: BitextMining dataset: name: MTEB Tatoeba (bel-eng) type: mteb/tatoeba-bitext-mining config: bel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.2 - type: f1 value: 91.2 - type: precision value: 90.25 - type: recall value: 93.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (pms-eng) type: mteb/tatoeba-bitext-mining config: pms-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 64.76190476190476 - type: f1 value: 59.867110667110666 - type: precision value: 58.07390192653351 - type: recall value: 64.76190476190476 - task: type: BitextMining dataset: name: MTEB Tatoeba (gle-eng) type: mteb/tatoeba-bitext-mining config: gle-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.2 - type: f1 value: 71.48147546897547 - type: precision value: 69.65409090909091 - type: recall value: 76.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (pes-eng) type: mteb/tatoeba-bitext-mining config: pes-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.8 - type: f1 value: 92.14 - type: precision value: 91.35833333333333 - type: recall value: 93.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (nob-eng) type: mteb/tatoeba-bitext-mining config: nob-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.89999999999999 - type: f1 value: 97.2 - type: precision value: 96.85000000000001 - type: recall value: 97.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (bul-eng) type: mteb/tatoeba-bitext-mining config: bul-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 92.93333333333334 - type: precision value: 92.13333333333333 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cbk-eng) type: mteb/tatoeba-bitext-mining config: cbk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.1 - type: f1 value: 69.14817460317461 - type: precision value: 67.2515873015873 - type: recall value: 74.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (hun-eng) type: mteb/tatoeba-bitext-mining config: hun-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.19999999999999 - type: f1 value: 94.01333333333335 - type: precision value: 93.46666666666667 - type: recall value: 95.19999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (uig-eng) type: mteb/tatoeba-bitext-mining config: uig-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.9 - type: f1 value: 72.07523809523809 - type: precision value: 70.19777777777779 - type: recall value: 76.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (rus-eng) type: mteb/tatoeba-bitext-mining config: rus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.1 - type: f1 value: 92.31666666666666 - type: precision value: 91.43333333333332 - type: recall value: 94.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (spa-eng) type: mteb/tatoeba-bitext-mining config: spa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.1 - type: precision value: 96.76666666666668 - type: recall value: 97.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (hye-eng) type: mteb/tatoeba-bitext-mining config: hye-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.85714285714286 - type: f1 value: 90.92093441150045 - type: precision value: 90.00449236298293 - type: recall value: 92.85714285714286 - task: type: BitextMining dataset: name: MTEB Tatoeba (tel-eng) type: mteb/tatoeba-bitext-mining config: tel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.16239316239316 - type: f1 value: 91.33903133903132 - type: precision value: 90.56267806267806 - type: recall value: 93.16239316239316 - task: type: BitextMining dataset: name: MTEB Tatoeba (afr-eng) type: mteb/tatoeba-bitext-mining config: afr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.4 - type: f1 value: 90.25666666666666 - type: precision value: 89.25833333333334 - type: recall value: 92.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (mon-eng) type: mteb/tatoeba-bitext-mining config: mon-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.22727272727272 - type: f1 value: 87.53030303030303 - type: precision value: 86.37121212121211 - type: recall value: 90.22727272727272 - task: type: BitextMining dataset: name: MTEB Tatoeba (arz-eng) type: mteb/tatoeba-bitext-mining config: arz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 79.03563941299791 - type: f1 value: 74.7349505840072 - type: precision value: 72.9035639412998 - type: recall value: 79.03563941299791 - task: type: BitextMining dataset: name: MTEB Tatoeba (hrv-eng) type: mteb/tatoeba-bitext-mining config: hrv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97 - type: f1 value: 96.15 - type: precision value: 95.76666666666668 - type: recall value: 97 - task: type: BitextMining dataset: name: MTEB Tatoeba (nov-eng) type: mteb/tatoeba-bitext-mining config: nov-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.26459143968872 - type: f1 value: 71.55642023346303 - type: precision value: 69.7544932369835 - type: recall value: 76.26459143968872 - task: type: BitextMining dataset: name: MTEB Tatoeba (gsw-eng) type: mteb/tatoeba-bitext-mining config: gsw-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 58.119658119658126 - type: f1 value: 51.65242165242165 - type: precision value: 49.41768108434775 - type: recall value: 58.119658119658126 - task: type: BitextMining dataset: name: MTEB Tatoeba (nds-eng) type: mteb/tatoeba-bitext-mining config: nds-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.3 - type: f1 value: 69.52055555555555 - type: precision value: 67.7574938949939 - type: recall value: 74.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (ukr-eng) type: mteb/tatoeba-bitext-mining config: ukr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.8 - type: f1 value: 93.31666666666666 - type: precision value: 92.60000000000001 - type: recall value: 94.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (uzb-eng) type: mteb/tatoeba-bitext-mining config: uzb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.63551401869158 - type: f1 value: 72.35202492211837 - type: precision value: 70.60358255451713 - type: recall value: 76.63551401869158 - task: type: BitextMining dataset: name: MTEB Tatoeba (lit-eng) type: mteb/tatoeba-bitext-mining config: lit-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.4 - type: f1 value: 88.4811111111111 - type: precision value: 87.7452380952381 - type: recall value: 90.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (ina-eng) type: mteb/tatoeba-bitext-mining config: ina-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95 - type: f1 value: 93.60666666666667 - type: precision value: 92.975 - type: recall value: 95 - task: type: BitextMining dataset: name: MTEB Tatoeba (lfn-eng) type: mteb/tatoeba-bitext-mining config: lfn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.2 - type: f1 value: 63.01595782872099 - type: precision value: 61.596587301587306 - type: recall value: 67.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (zsm-eng) type: mteb/tatoeba-bitext-mining config: zsm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.7 - type: f1 value: 94.52999999999999 - type: precision value: 94 - type: recall value: 95.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (ita-eng) type: mteb/tatoeba-bitext-mining config: ita-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 93.28999999999999 - type: precision value: 92.675 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cmn-eng) type: mteb/tatoeba-bitext-mining config: cmn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.28333333333333 - type: precision value: 94.75 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (lvs-eng) type: mteb/tatoeba-bitext-mining config: lvs-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.9 - type: f1 value: 89.83 - type: precision value: 88.92 - type: recall value: 91.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (glg-eng) type: mteb/tatoeba-bitext-mining config: glg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.69999999999999 - type: f1 value: 93.34222222222223 - type: precision value: 92.75416666666668 - type: recall value: 94.69999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ceb-eng) type: mteb/tatoeba-bitext-mining config: ceb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 60.333333333333336 - type: f1 value: 55.31203703703703 - type: precision value: 53.39971108326371 - type: recall value: 60.333333333333336 - task: type: BitextMining dataset: name: MTEB Tatoeba (bre-eng) type: mteb/tatoeba-bitext-mining config: bre-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 12.9 - type: f1 value: 11.099861903031458 - type: precision value: 10.589187932631877 - type: recall value: 12.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (ben-eng) type: mteb/tatoeba-bitext-mining config: ben-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.7 - type: f1 value: 83.0152380952381 - type: precision value: 81.37833333333333 - type: recall value: 86.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (swg-eng) type: mteb/tatoeba-bitext-mining config: swg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.39285714285714 - type: f1 value: 56.832482993197274 - type: precision value: 54.56845238095237 - type: recall value: 63.39285714285714 - task: type: BitextMining dataset: name: MTEB Tatoeba (arq-eng) type: mteb/tatoeba-bitext-mining config: arq-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 48.73765093304062 - type: f1 value: 41.555736920720456 - type: precision value: 39.06874531737319 - type: recall value: 48.73765093304062 - task: type: BitextMining dataset: name: MTEB Tatoeba (kab-eng) type: mteb/tatoeba-bitext-mining config: kab-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 41.099999999999994 - type: f1 value: 36.540165945165946 - type: precision value: 35.05175685425686 - type: recall value: 41.099999999999994 - task: type: BitextMining dataset: name: MTEB Tatoeba (fra-eng) type: mteb/tatoeba-bitext-mining config: fra-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.42333333333333 - type: precision value: 92.75833333333333 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (por-eng) type: mteb/tatoeba-bitext-mining config: por-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.63333333333334 - type: precision value: 93.01666666666665 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tat-eng) type: mteb/tatoeba-bitext-mining config: tat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.9 - type: f1 value: 73.64833333333334 - type: precision value: 71.90282106782105 - type: recall value: 77.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (oci-eng) type: mteb/tatoeba-bitext-mining config: oci-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 59.4 - type: f1 value: 54.90521367521367 - type: precision value: 53.432840025471606 - type: recall value: 59.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (pol-eng) type: mteb/tatoeba-bitext-mining config: pol-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.6 - type: precision value: 96.2 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (war-eng) type: mteb/tatoeba-bitext-mining config: war-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.2 - type: f1 value: 62.25926129426129 - type: precision value: 60.408376623376626 - type: recall value: 67.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (aze-eng) type: mteb/tatoeba-bitext-mining config: aze-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.2 - type: f1 value: 87.60666666666667 - type: precision value: 86.45277777777778 - type: recall value: 90.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (vie-eng) type: mteb/tatoeba-bitext-mining config: vie-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.7 - type: f1 value: 97 - type: precision value: 96.65 - type: recall value: 97.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (nno-eng) type: mteb/tatoeba-bitext-mining config: nno-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.2 - type: f1 value: 91.39746031746031 - type: precision value: 90.6125 - type: recall value: 93.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (cha-eng) type: mteb/tatoeba-bitext-mining config: cha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 32.11678832116788 - type: f1 value: 27.210415386260234 - type: precision value: 26.20408990846947 - type: recall value: 32.11678832116788 - task: type: BitextMining dataset: name: MTEB Tatoeba (mhr-eng) type: mteb/tatoeba-bitext-mining config: mhr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.5 - type: f1 value: 6.787319277832475 - type: precision value: 6.3452094433344435 - type: recall value: 8.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (dan-eng) type: mteb/tatoeba-bitext-mining config: dan-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 95.08 - type: precision value: 94.61666666666667 - type: recall value: 96.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (ell-eng) type: mteb/tatoeba-bitext-mining config: ell-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.3 - type: f1 value: 93.88333333333333 - type: precision value: 93.18333333333332 - type: recall value: 95.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (amh-eng) type: mteb/tatoeba-bitext-mining config: amh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.11904761904762 - type: f1 value: 80.69444444444444 - type: precision value: 78.72023809523809 - type: recall value: 85.11904761904762 - task: type: BitextMining dataset: name: MTEB Tatoeba (pam-eng) type: mteb/tatoeba-bitext-mining config: pam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 11.1 - type: f1 value: 9.276381801735853 - type: precision value: 8.798174603174601 - type: recall value: 11.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (hsb-eng) type: mteb/tatoeba-bitext-mining config: hsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.56107660455487 - type: f1 value: 58.70433569191332 - type: precision value: 56.896926581464015 - type: recall value: 63.56107660455487 - task: type: BitextMining dataset: name: MTEB Tatoeba (srp-eng) type: mteb/tatoeba-bitext-mining config: srp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.69999999999999 - type: f1 value: 93.10000000000001 - type: precision value: 92.35 - type: recall value: 94.69999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (epo-eng) type: mteb/tatoeba-bitext-mining config: epo-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.8 - type: f1 value: 96.01222222222222 - type: precision value: 95.67083333333332 - type: recall value: 96.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (kzj-eng) type: mteb/tatoeba-bitext-mining config: kzj-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 9.2 - type: f1 value: 7.911555250305249 - type: precision value: 7.631246556216846 - type: recall value: 9.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (awa-eng) type: mteb/tatoeba-bitext-mining config: awa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.48917748917748 - type: f1 value: 72.27375798804371 - type: precision value: 70.14430014430013 - type: recall value: 77.48917748917748 - task: type: BitextMining dataset: name: MTEB Tatoeba (fao-eng) type: mteb/tatoeba-bitext-mining config: fao-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.09923664122137 - type: f1 value: 72.61541257724463 - type: precision value: 70.8998380754106 - type: recall value: 77.09923664122137 - task: type: BitextMining dataset: name: MTEB Tatoeba (mal-eng) type: mteb/tatoeba-bitext-mining config: mal-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.2532751091703 - type: f1 value: 97.69529354682193 - type: precision value: 97.42843279961184 - type: recall value: 98.2532751091703 - task: type: BitextMining dataset: name: MTEB Tatoeba (ile-eng) type: mteb/tatoeba-bitext-mining config: ile-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.8 - type: f1 value: 79.14672619047619 - type: precision value: 77.59489247311828 - type: recall value: 82.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (bos-eng) type: mteb/tatoeba-bitext-mining config: bos-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.35028248587571 - type: f1 value: 92.86252354048965 - type: precision value: 92.2080979284369 - type: recall value: 94.35028248587571 - task: type: BitextMining dataset: name: MTEB Tatoeba (cor-eng) type: mteb/tatoeba-bitext-mining config: cor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.5 - type: f1 value: 6.282429263935621 - type: precision value: 5.783274240739785 - type: recall value: 8.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (cat-eng) type: mteb/tatoeba-bitext-mining config: cat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.7 - type: f1 value: 91.025 - type: precision value: 90.30428571428571 - type: recall value: 92.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (eus-eng) type: mteb/tatoeba-bitext-mining config: eus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81 - type: f1 value: 77.8232380952381 - type: precision value: 76.60194444444444 - type: recall value: 81 - task: type: BitextMining dataset: name: MTEB Tatoeba (yue-eng) type: mteb/tatoeba-bitext-mining config: yue-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91 - type: f1 value: 88.70857142857142 - type: precision value: 87.7 - type: recall value: 91 - task: type: BitextMining dataset: name: MTEB Tatoeba (swe-eng) type: mteb/tatoeba-bitext-mining config: swe-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.3 - type: precision value: 94.76666666666667 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (dtp-eng) type: mteb/tatoeba-bitext-mining config: dtp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.1 - type: f1 value: 7.001008218834307 - type: precision value: 6.708329562594269 - type: recall value: 8.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (kat-eng) type: mteb/tatoeba-bitext-mining config: kat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.1313672922252 - type: f1 value: 84.09070598748882 - type: precision value: 82.79171454104429 - type: recall value: 87.1313672922252 - task: type: BitextMining dataset: name: MTEB Tatoeba (jpn-eng) type: mteb/tatoeba-bitext-mining config: jpn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.28333333333333 - type: precision value: 94.73333333333332 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (csb-eng) type: mteb/tatoeba-bitext-mining config: csb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 42.29249011857708 - type: f1 value: 36.981018542283365 - type: precision value: 35.415877813576024 - type: recall value: 42.29249011857708 - task: type: BitextMining dataset: name: MTEB Tatoeba (xho-eng) type: mteb/tatoeba-bitext-mining config: xho-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.80281690140845 - type: f1 value: 80.86854460093896 - type: precision value: 79.60093896713614 - type: recall value: 83.80281690140845 - task: type: BitextMining dataset: name: MTEB Tatoeba (orv-eng) type: mteb/tatoeba-bitext-mining config: orv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 45.26946107784431 - type: f1 value: 39.80235464678088 - type: precision value: 38.14342660001342 - type: recall value: 45.26946107784431 - task: type: BitextMining dataset: name: MTEB Tatoeba (ind-eng) type: mteb/tatoeba-bitext-mining config: ind-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.3 - type: f1 value: 92.9 - type: precision value: 92.26666666666668 - type: recall value: 94.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (tuk-eng) type: mteb/tatoeba-bitext-mining config: tuk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 37.93103448275862 - type: f1 value: 33.15192743764172 - type: precision value: 31.57456528146183 - type: recall value: 37.93103448275862 - task: type: BitextMining dataset: name: MTEB Tatoeba (max-eng) type: mteb/tatoeba-bitext-mining config: max-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 69.01408450704226 - type: f1 value: 63.41549295774648 - type: precision value: 61.342778895595806 - type: recall value: 69.01408450704226 - task: type: BitextMining dataset: name: MTEB Tatoeba (swh-eng) type: mteb/tatoeba-bitext-mining config: swh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.66666666666667 - type: f1 value: 71.60705960705961 - type: precision value: 69.60683760683762 - type: recall value: 76.66666666666667 - task: type: BitextMining dataset: name: MTEB Tatoeba (hin-eng) type: mteb/tatoeba-bitext-mining config: hin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.8 - type: f1 value: 94.48333333333333 - type: precision value: 93.83333333333333 - type: recall value: 95.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (dsb-eng) type: mteb/tatoeba-bitext-mining config: dsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 52.81837160751566 - type: f1 value: 48.435977731384824 - type: precision value: 47.11291973845539 - type: recall value: 52.81837160751566 - task: type: BitextMining dataset: name: MTEB Tatoeba (ber-eng) type: mteb/tatoeba-bitext-mining config: ber-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 44.9 - type: f1 value: 38.88962621607783 - type: precision value: 36.95936507936508 - type: recall value: 44.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (tam-eng) type: mteb/tatoeba-bitext-mining config: tam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.55374592833876 - type: f1 value: 88.22553125484721 - type: precision value: 87.26927252985884 - type: recall value: 90.55374592833876 - task: type: BitextMining dataset: name: MTEB Tatoeba (slk-eng) type: mteb/tatoeba-bitext-mining config: slk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 93.13333333333333 - type: precision value: 92.45333333333333 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (tgl-eng) type: mteb/tatoeba-bitext-mining config: tgl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.99666666666667 - type: precision value: 91.26666666666668 - type: recall value: 93.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (ast-eng) type: mteb/tatoeba-bitext-mining config: ast-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.03937007874016 - type: f1 value: 81.75853018372703 - type: precision value: 80.34120734908137 - type: recall value: 85.03937007874016 - task: type: BitextMining dataset: name: MTEB Tatoeba (mkd-eng) type: mteb/tatoeba-bitext-mining config: mkd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.3 - type: f1 value: 85.5 - type: precision value: 84.25833333333334 - type: recall value: 88.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (khm-eng) type: mteb/tatoeba-bitext-mining config: khm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.51246537396122 - type: f1 value: 60.02297410192148 - type: precision value: 58.133467727289236 - type: recall value: 65.51246537396122 - task: type: BitextMining dataset: name: MTEB Tatoeba (ces-eng) type: mteb/tatoeba-bitext-mining config: ces-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96 - type: f1 value: 94.89 - type: precision value: 94.39166666666667 - type: recall value: 96 - task: type: BitextMining dataset: name: MTEB Tatoeba (tzl-eng) type: mteb/tatoeba-bitext-mining config: tzl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 57.692307692307686 - type: f1 value: 53.162393162393165 - type: precision value: 51.70673076923077 - type: recall value: 57.692307692307686 - task: type: BitextMining dataset: name: MTEB Tatoeba (urd-eng) type: mteb/tatoeba-bitext-mining config: urd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.60000000000001 - type: f1 value: 89.21190476190475 - type: precision value: 88.08666666666667 - type: recall value: 91.60000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (ara-eng) type: mteb/tatoeba-bitext-mining config: ara-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88 - type: f1 value: 85.47 - type: precision value: 84.43266233766234 - type: recall value: 88 - task: type: BitextMining dataset: name: MTEB Tatoeba (kor-eng) type: mteb/tatoeba-bitext-mining config: kor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.7 - type: f1 value: 90.64999999999999 - type: precision value: 89.68333333333332 - type: recall value: 92.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (yid-eng) type: mteb/tatoeba-bitext-mining config: yid-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.30660377358491 - type: f1 value: 76.33044137466307 - type: precision value: 74.78970125786164 - type: recall value: 80.30660377358491 - task: type: BitextMining dataset: name: MTEB Tatoeba (fin-eng) type: mteb/tatoeba-bitext-mining config: fin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.44 - type: precision value: 94.99166666666666 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tha-eng) type: mteb/tatoeba-bitext-mining config: tha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.53284671532847 - type: f1 value: 95.37712895377129 - type: precision value: 94.7992700729927 - type: recall value: 96.53284671532847 - task: type: BitextMining dataset: name: MTEB Tatoeba (wuu-eng) type: mteb/tatoeba-bitext-mining config: wuu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89 - type: f1 value: 86.23190476190476 - type: precision value: 85.035 - type: recall value: 89 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.585 - type: map_at_10 value: 9.012 - type: map_at_100 value: 14.027000000000001 - type: map_at_1000 value: 15.565000000000001 - type: map_at_3 value: 5.032 - type: map_at_5 value: 6.657 - type: mrr_at_1 value: 28.571 - type: mrr_at_10 value: 45.377 - type: mrr_at_100 value: 46.119 - type: mrr_at_1000 value: 46.127 - type: mrr_at_3 value: 41.156 - type: mrr_at_5 value: 42.585 - type: ndcg_at_1 value: 27.551 - type: ndcg_at_10 value: 23.395 - type: ndcg_at_100 value: 33.342 - type: ndcg_at_1000 value: 45.523 - type: ndcg_at_3 value: 25.158 - type: ndcg_at_5 value: 23.427 - type: precision_at_1 value: 28.571 - type: precision_at_10 value: 21.429000000000002 - type: precision_at_100 value: 6.714 - type: precision_at_1000 value: 1.473 - type: precision_at_3 value: 27.211000000000002 - type: precision_at_5 value: 24.490000000000002 - type: recall_at_1 value: 2.585 - type: recall_at_10 value: 15.418999999999999 - type: recall_at_100 value: 42.485 - type: recall_at_1000 value: 79.536 - type: recall_at_3 value: 6.239999999999999 - type: recall_at_5 value: 8.996 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.3234 - type: ap value: 14.361688653847423 - type: f1 value: 54.819068624319044 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.97792869269949 - type: f1 value: 62.28965628513728 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 38.90540145385218 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.53513739047506 - type: cos_sim_ap value: 75.27741586677557 - type: cos_sim_f1 value: 69.18792902473774 - type: cos_sim_precision value: 67.94708725515136 - type: cos_sim_recall value: 70.47493403693932 - type: dot_accuracy value: 84.7052512368123 - type: dot_ap value: 69.36075482849378 - type: dot_f1 value: 64.44688376631296 - type: dot_precision value: 59.92288500793831 - type: dot_recall value: 69.70976253298153 - type: euclidean_accuracy value: 86.60666388508076 - type: euclidean_ap value: 75.47512772621097 - type: euclidean_f1 value: 69.413872536473 - type: euclidean_precision value: 67.39562624254472 - type: euclidean_recall value: 71.55672823218997 - type: manhattan_accuracy value: 86.52917684925792 - type: manhattan_ap value: 75.34000110496703 - type: manhattan_f1 value: 69.28489190226429 - type: manhattan_precision value: 67.24608889992551 - type: manhattan_recall value: 71.45118733509234 - type: max_accuracy value: 86.60666388508076 - type: max_ap value: 75.47512772621097 - type: max_f1 value: 69.413872536473 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.01695967710637 - type: cos_sim_ap value: 85.8298270742901 - type: cos_sim_f1 value: 78.46988128389272 - type: cos_sim_precision value: 74.86017897091722 - type: cos_sim_recall value: 82.44533415460425 - type: dot_accuracy value: 88.19420188613343 - type: dot_ap value: 83.82679165901324 - type: dot_f1 value: 76.55833777304208 - type: dot_precision value: 75.6884875846501 - type: dot_recall value: 77.44841392054204 - type: euclidean_accuracy value: 89.03054294252338 - type: euclidean_ap value: 85.89089555185325 - type: euclidean_f1 value: 78.62997658079624 - type: euclidean_precision value: 74.92329149232914 - type: euclidean_recall value: 82.72251308900523 - type: manhattan_accuracy value: 89.0266620095471 - type: manhattan_ap value: 85.86458997929147 - type: manhattan_f1 value: 78.50685331000291 - type: manhattan_precision value: 74.5499861534201 - type: manhattan_recall value: 82.90729904527257 - type: max_accuracy value: 89.03054294252338 - type: max_ap value: 85.89089555185325 - type: max_f1 value: 78.62997658079624 --- # multilingual-e5-large-mlx This model was converted to MLX format from [`intfloat/multilingual-e5-large`](). Refer to the [original model card](https://huggingface.co/intfloat/multilingual-e5-large) for more details on the model. ## Use with mlx ```bash pip install mlx git clone https://github.com/ml-explore/mlx-examples.git cd mlx-examples/llms/hf_llm python generate.py --model mlx-community/multilingual-e5-large-mlx --prompt "My name is" ```
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
aisingapore/gemma2-9b-cpt-sea-lionv3-base
aisingapore
text-generation
[ "transformers", "safetensors", "gemma2", "text-generation", "en", "zh", "vi", "id", "th", "fil", "ta", "ms", "km", "lo", "my", "arxiv:2309.06085", "arxiv:2101.09635", "base_model:google/gemma-2-9b", "base_model:finetune:google/gemma-2-9b", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
1,730
1,734
550
2
--- base_model: google/gemma-2-9b language: - en - zh - vi - id - th - fil - ta - ms - km - lo - my library_name: transformers license: gemma pipeline_tag: text-generation --- <div> <img src="gemma_2_9b_sea-lion_v3_base_banner.png"/> </div> # Gemma2 9B CPT SEA-LIONv3 SEA-LION is a collection of Large Language Models (LLMs) which has been pretrained and instruct-tuned for the Southeast Asia (SEA) region. Gemma2 9B CPT SEA-LIONv3 Base is a multilingual model which has undergone continued pre-training on approximately **200B** tokens across the 11 official Southeast Asian languages: English, Chinese, Vietnamese, Indonesian, Thai, Tamil, Filipino, Malay, Khmer, Lao, Burmese. SEA-LION stands for <i>Southeast Asian Languages In One Network</i>. - **Developed by:** Products Pillar, AI Singapore - **Funded by:** Singapore NRF - **Model type:** Decoder - **Languages supported:** Burmese, Chinese, English, Filipino, Indonesia, Khmer, Lao, Malay, Tamil, Thai, Vietnamese - **License:** [Gemma Community License](https://ai.google.dev/gemma/terms) ## Model Details ### Model Description We performed continued pre-training in English and ASEAN languages on [Gemma-2-9B](https://huggingface.co/google/gemma-2-9b), a decoder model using the Gemma 2 architecture, to create Gemma2 9B CPT SEA-LIONv3 Base. For tokenisation, the model employs the default tokenizer used in Gemma 2 9B. ### Benchmark Performance We evaluated Gemma2 9B CPT SEA-LIONv3 base model on general language capabilities. #### General Language Capabilities For the evaluation of general language capabilities, we employed the [SEA HELM (also known as BHASA) evaluation benchmark](https://arxiv.org/abs/2309.06085v2) across a variety of tasks. These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarization (Summ), Causal Reasoning (Causal) and Natural Language Inference (NLI). Note: SEA HELM is implemented using prompts to elicit answers in a strict format. For all tasks, the model is expected to provide an answer tag from which the answer is automatically extracted. For tasks where options are provided, the answer should comprise one of the pre-defined options. The scores for each task is normalised to account for baseline performance due to random chance. The evaluation was done **five-shot** with native prompts on a sample of 100-1000 instances for each dataset. For more details on Gemma2 9B CPT SEA-LIONv3 base benchmark performance, please refer to the SEA HELM leaderboard, https://leaderboard.sea-lion.ai/ ## Technical Specifications ### Infrastructure Gemma2 9B CPT SEA-LIONv3 was trained using [MosaicML Composer](https://github.com/mosaicml/composer) on the following hardware: | Training Details | Gemma2 9B CPT SEA-LIONv3 | |----------------------|:------------------------:| | SingTel HGX-100 | 8 instances | | Nvidia H100 80GB GPU | 64 | | Training Duration | 10 days | ### Configuration | HyperParameter | Gemma2 9B CPT SEA-LIONv3 | |-------------------|:------------------------:| | Precision | bfloat16 | | Optimizer | decoupled_adamw | | Scheduler | weight_stable_decay | | Learning Rate | 1.0e-5 | | Global Batch Size | 512 | | Micro Batch Size | 1 | ## Data Gemma2 9B CPT SEA-LIONv3 base model was continued pre-trained on 200B tokens of the following data: | Language | Source | Total Tokens (B) | Percentage (%) | Total percentage (%) | | ------------------------ | ---------------- | ---------------- | -------------- | -------------------- | | Code | StackV2 | 40 | 20 | 20 | | English | Dolma | 37.5 | 18.75 | 25 | | | Fineweb-Edu | 7.5 | 3.75 | | | Others | 5 | 2.5 | | Chinese | SEA-LION Pile v1 | 12 | 6 | 13 | | | Others | 14 | 7 | | Vietnamese | SEA-LION Pile v1 | 8.4 | 4.2 | 13 | | | VinBigData | 16 | 8 | | | Others | 1.6 | 0.8 | | Indonesian | SEA-LION Pile v1 | 7 | 3.5 | 13 | | | SEA-LION Pile v2 | 7 | 3.5 | | | Others | 12 | 6 | | Thai | SEA-LION Pile v1 | 10.7 | 5.35 | 10 | | | WangChanBERTa | 8.5 | 4.25 | | | Others | 0.8 | 0.4 | | Filipino - Malay - Tamil | SEA-LION Pile v1 | 4.28 | 2.14 | 3 | | | Others | 1.72 | 0.86 | | Khmer - Lao - Burmese | SEA-LION Pile v1 | 5.2 | 2.6 | 3 | | | Others | 0.8 | 0.4 | Note: - All token counts are counted using Gemma 2 9B tokenizer - SEA-LION Pile v1 is processed from Common Crawl WET, which is published [here](https://huggingface.co/datasets/aisingapore/sea-lion-pile). The cutoff date of this version is September 2020. - SEA-LION Pile v2 is processed from Common Crawl WARC from October 2020 to April 2024. - Tamil news is sourced with permission from [Seithi](https://seithi.mediacorp.sg/) ## Call for Contributions We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions. ## The Team Chan Adwin, Cheng Nicholas, Choa Esther, Huang Yuli, Hulagadri Adithya Venkatadri, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Liu Bing Jie Darius, Montalan Jann Railey, Ng Boon Cheong Raymond, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Brandon, Ong Tat-Wee David, Ong Zhi Hao, Rengarajan Hamsawardhini, Siow Bryan, Susanto Yosephine, Tai Ngee Chia, Tan Choon Meng, Teng Walter, Teo Eng Sipp Leslie, Teo Wei Yi, Tjhi William, Yeo Yeow Tong, Yong Xianbin ## Acknowledgements [AI Singapore](​​https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore. ## Contact For more info, please contact us using this [SEA-LION Inquiry Form.](https://forms.gle/sLCUVb95wmGf43hi6) [Link to SEA-LION's GitHub repository.](https://github.com/aisingapore/sealion) ## Disclaimer This is the repository for the commercial instruction-tuned model. The model has _not_ been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes. ## References ### Thai Pre-Training Data Reference ```bibtex @misc{lowphansirikul2021wangchanberta, title={WangchanBERTa: Pretraining transformer-based Thai Language Models}, author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong}, year={2021}, eprint={2101.09635}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
[ "QUESTION_ANSWERING", "TRANSLATION", "SUMMARIZATION" ]
[ "CHIA" ]
Non_BioNLP
sheldonrobinson/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF
sheldonrobinson
sentence-similarity
[ "sentence-transformers", "gguf", "feature-extraction", "sentence-similarity", "mteb", "arctic", "snowflake-arctic-embed", "transformers.js", "llama-cpp", "gguf-my-repo", "base_model:Snowflake/snowflake-arctic-embed-m-v1.5", "base_model:quantized:Snowflake/snowflake-arctic-embed-m-v1.5", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,729
1,729
11
0
--- base_model: Snowflake/snowflake-arctic-embed-m-v1.5 license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - arctic - snowflake-arctic-embed - transformers.js - llama-cpp - gguf-my-repo model-index: - name: snowflake-arctic-embed-m-v1.5 results: - task: type: Retrieval dataset: name: MTEB ArguAna type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: main_score value: 59.53000000000001 - type: map_at_1 value: 34.282000000000004 - type: map_at_10 value: 50.613 - type: map_at_100 value: 51.269 - type: map_at_1000 value: 51.271 - type: map_at_20 value: 51.158 - type: map_at_3 value: 45.626 - type: map_at_5 value: 48.638 - type: mrr_at_1 value: 34.92176386913229 - type: mrr_at_10 value: 50.856081645555406 - type: mrr_at_100 value: 51.510739437069034 - type: mrr_at_1000 value: 51.51299498830165 - type: mrr_at_20 value: 51.39987941081724 - type: mrr_at_3 value: 45.993361782835514 - type: mrr_at_5 value: 48.88098624940742 - type: nauc_map_at_1000_diff1 value: 10.628675774160785 - type: nauc_map_at_1000_max value: -10.11742589992339 - type: nauc_map_at_1000_std value: -18.29277379812427 - type: nauc_map_at_100_diff1 value: 10.63250240035489 - type: nauc_map_at_100_max value: -10.112078786734363 - type: nauc_map_at_100_std value: -18.288524872706834 - type: nauc_map_at_10_diff1 value: 10.476494913081712 - type: nauc_map_at_10_max value: -9.890937746734037 - type: nauc_map_at_10_std value: -18.279750514750443 - type: nauc_map_at_1_diff1 value: 14.549204048461151 - type: nauc_map_at_1_max value: -12.230560087701225 - type: nauc_map_at_1_std value: -19.469903650130362 - type: nauc_map_at_20_diff1 value: 10.586564571825674 - type: nauc_map_at_20_max value: -10.00292720526217 - type: nauc_map_at_20_std value: -18.258077347878064 - type: nauc_map_at_3_diff1 value: 10.378663968090372 - type: nauc_map_at_3_max value: -10.458896171786185 - type: nauc_map_at_3_std value: -18.38852760333766 - type: nauc_map_at_5_diff1 value: 10.235960275925581 - type: nauc_map_at_5_max value: -10.239496080409058 - type: nauc_map_at_5_std value: -18.817023479445886 - type: nauc_mrr_at_1000_diff1 value: 8.718212649575722 - type: nauc_mrr_at_1000_max value: -10.81022794038691 - type: nauc_mrr_at_1000_std value: -17.87669499555167 - type: nauc_mrr_at_100_diff1 value: 8.722174171165133 - type: nauc_mrr_at_100_max value: -10.804840985713525 - type: nauc_mrr_at_100_std value: -17.872487099359986 - type: nauc_mrr_at_10_diff1 value: 8.609421635870238 - type: nauc_mrr_at_10_max value: -10.568644717548432 - type: nauc_mrr_at_10_std value: -17.872968762635814 - type: nauc_mrr_at_1_diff1 value: 12.69590006263834 - type: nauc_mrr_at_1_max value: -12.082056561238321 - type: nauc_mrr_at_1_std value: -18.036424092186657 - type: nauc_mrr_at_20_diff1 value: 8.684842497970315 - type: nauc_mrr_at_20_max value: -10.691578914627286 - type: nauc_mrr_at_20_std value: -17.84350301434992 - type: nauc_mrr_at_3_diff1 value: 8.649761557556763 - type: nauc_mrr_at_3_max value: -11.104694428047496 - type: nauc_mrr_at_3_std value: -18.149917948370344 - type: nauc_mrr_at_5_diff1 value: 8.433489750038396 - type: nauc_mrr_at_5_max value: -10.917772454397436 - type: nauc_mrr_at_5_std value: -18.4094211134111 - type: nauc_ndcg_at_1000_diff1 value: 10.19041067807956 - type: nauc_ndcg_at_1000_max value: -9.54328201605796 - type: nauc_ndcg_at_1000_std value: -17.824620427456633 - type: nauc_ndcg_at_100_diff1 value: 10.289491087585963 - type: nauc_ndcg_at_100_max value: -9.357214331420337 - type: nauc_ndcg_at_100_std value: -17.657600653632873 - type: nauc_ndcg_at_10_diff1 value: 9.435530877596092 - type: nauc_ndcg_at_10_max value: -8.182581635383546 - type: nauc_ndcg_at_10_std value: -17.603156479980388 - type: nauc_ndcg_at_1_diff1 value: 14.549204048461151 - type: nauc_ndcg_at_1_max value: -12.230560087701225 - type: nauc_ndcg_at_1_std value: -19.469903650130362 - type: nauc_ndcg_at_20_diff1 value: 9.885227087275197 - type: nauc_ndcg_at_20_max value: -8.52362662391439 - type: nauc_ndcg_at_20_std value: -17.441705436231764 - type: nauc_ndcg_at_3_diff1 value: 9.22542769998547 - type: nauc_ndcg_at_3_max value: -9.903590564219288 - type: nauc_ndcg_at_3_std value: -18.357220221111593 - type: nauc_ndcg_at_5_diff1 value: 8.8756720745828 - type: nauc_ndcg_at_5_max value: -9.269764943861245 - type: nauc_ndcg_at_5_std value: -19.009229433187784 - type: nauc_precision_at_1000_diff1 value: 3.733355117431035 - type: nauc_precision_at_1000_max value: 3.9603571352517393 - type: nauc_precision_at_1000_std value: 70.07345061131439 - type: nauc_precision_at_100_diff1 value: 29.019032142462457 - type: nauc_precision_at_100_max value: 40.75153328286103 - type: nauc_precision_at_100_std value: 62.634249549126594 - type: nauc_precision_at_10_diff1 value: 2.5762677254910353 - type: nauc_precision_at_10_max value: 6.096298633773051 - type: nauc_precision_at_10_std value: -11.507400451348587 - type: nauc_precision_at_1_diff1 value: 14.549204048461151 - type: nauc_precision_at_1_max value: -12.230560087701225 - type: nauc_precision_at_1_std value: -19.469903650130362 - type: nauc_precision_at_20_diff1 value: 1.715540124567996 - type: nauc_precision_at_20_max value: 21.53546453945913 - type: nauc_precision_at_20_std value: 1.537961142195571 - type: nauc_precision_at_3_diff1 value: 5.701850652555737 - type: nauc_precision_at_3_max value: -8.180345365085552 - type: nauc_precision_at_3_std value: -18.37033750502482 - type: nauc_precision_at_5_diff1 value: 3.6053552181042843 - type: nauc_precision_at_5_max value: -5.207647070615612 - type: nauc_precision_at_5_std value: -19.89491085427258 - type: nauc_recall_at_1000_diff1 value: 3.733355117431255 - type: nauc_recall_at_1000_max value: 3.9603571352482194 - type: nauc_recall_at_1000_std value: 70.07345061131205 - type: nauc_recall_at_100_diff1 value: 29.01903214246288 - type: nauc_recall_at_100_max value: 40.7515332828621 - type: nauc_recall_at_100_std value: 62.63424954912607 - type: nauc_recall_at_10_diff1 value: 2.5762677254911988 - type: nauc_recall_at_10_max value: 6.0962986337729905 - type: nauc_recall_at_10_std value: -11.507400451348577 - type: nauc_recall_at_1_diff1 value: 14.549204048461151 - type: nauc_recall_at_1_max value: -12.230560087701225 - type: nauc_recall_at_1_std value: -19.469903650130362 - type: nauc_recall_at_20_diff1 value: 1.7155401245682675 - type: nauc_recall_at_20_max value: 21.535464539459632 - type: nauc_recall_at_20_std value: 1.5379611421957025 - type: nauc_recall_at_3_diff1 value: 5.7018506525557875 - type: nauc_recall_at_3_max value: -8.180345365085538 - type: nauc_recall_at_3_std value: -18.370337505024796 - type: nauc_recall_at_5_diff1 value: 3.6053552181043913 - type: nauc_recall_at_5_max value: -5.207647070615579 - type: nauc_recall_at_5_std value: -19.894910854272492 - type: ndcg_at_1 value: 34.282000000000004 - type: ndcg_at_10 value: 59.53000000000001 - type: ndcg_at_100 value: 62.187000000000005 - type: ndcg_at_1000 value: 62.243 - type: ndcg_at_20 value: 61.451 - type: ndcg_at_3 value: 49.393 - type: ndcg_at_5 value: 54.771 - type: precision_at_1 value: 34.282000000000004 - type: precision_at_10 value: 8.791 - type: precision_at_100 value: 0.992 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.769 - type: precision_at_3 value: 20.104 - type: precision_at_5 value: 14.651 - type: recall_at_1 value: 34.282000000000004 - type: recall_at_10 value: 87.909 - type: recall_at_100 value: 99.21799999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_20 value: 95.377 - type: recall_at_3 value: 60.313 - type: recall_at_5 value: 73.257 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: mteb/cqadupstack-android config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: main_score value: 53.885000000000005 - type: map_at_1 value: 35.429 - type: map_at_10 value: 47.469 - type: map_at_100 value: 48.997 - type: map_at_1000 value: 49.117 - type: map_at_20 value: 48.324 - type: map_at_3 value: 43.835 - type: map_at_5 value: 46.043 - type: mrr_at_1 value: 43.34763948497854 - type: mrr_at_10 value: 53.258623430297234 - type: mrr_at_100 value: 53.99123884299005 - type: mrr_at_1000 value: 54.02458101713216 - type: mrr_at_20 value: 53.695964669618945 - type: mrr_at_3 value: 50.81068192656173 - type: mrr_at_5 value: 52.45588936576058 - type: nauc_map_at_1000_diff1 value: 51.55382824218782 - type: nauc_map_at_1000_max value: 31.855350695084606 - type: nauc_map_at_1000_std value: -5.465862008150992 - type: nauc_map_at_100_diff1 value: 51.55889312452534 - type: nauc_map_at_100_max value: 31.88429637207401 - type: nauc_map_at_100_std value: -5.40805152544196 - type: nauc_map_at_10_diff1 value: 51.6592677505875 - type: nauc_map_at_10_max value: 31.554425233617543 - type: nauc_map_at_10_std value: -6.125756131339046 - type: nauc_map_at_1_diff1 value: 55.6889617582672 - type: nauc_map_at_1_max value: 27.821166966868176 - type: nauc_map_at_1_std value: -5.778838498211728 - type: nauc_map_at_20_diff1 value: 51.70520970992564 - type: nauc_map_at_20_max value: 31.811676633900465 - type: nauc_map_at_20_std value: -5.463596751904718 - type: nauc_map_at_3_diff1 value: 53.206169626589606 - type: nauc_map_at_3_max value: 31.64373830824983 - type: nauc_map_at_3_std value: -6.054761451312827 - type: nauc_map_at_5_diff1 value: 52.37308971673694 - type: nauc_map_at_5_max value: 31.974302019633644 - type: nauc_map_at_5_std value: -6.302653399940531 - type: nauc_mrr_at_1000_diff1 value: 49.345152231490616 - type: nauc_mrr_at_1000_max value: 33.49789501712511 - type: nauc_mrr_at_1000_std value: -6.054730861163538 - type: nauc_mrr_at_100_diff1 value: 49.3387577601307 - type: nauc_mrr_at_100_max value: 33.48149992464187 - type: nauc_mrr_at_100_std value: -6.061177137579308 - type: nauc_mrr_at_10_diff1 value: 49.08312288449718 - type: nauc_mrr_at_10_max value: 33.470393322577465 - type: nauc_mrr_at_10_std value: -6.180286430216975 - type: nauc_mrr_at_1_diff1 value: 52.43364978537192 - type: nauc_mrr_at_1_max value: 31.521755633355713 - type: nauc_mrr_at_1_std value: -7.002499524130836 - type: nauc_mrr_at_20_diff1 value: 49.311059224991766 - type: nauc_mrr_at_20_max value: 33.538523037692144 - type: nauc_mrr_at_20_std value: -6.034619474981136 - type: nauc_mrr_at_3_diff1 value: 49.90489868439366 - type: nauc_mrr_at_3_max value: 34.400493912164606 - type: nauc_mrr_at_3_std value: -6.028875320994629 - type: nauc_mrr_at_5_diff1 value: 49.033661898983475 - type: nauc_mrr_at_5_max value: 33.732315350193936 - type: nauc_mrr_at_5_std value: -6.272548556330368 - type: nauc_ndcg_at_1000_diff1 value: 49.81681892539247 - type: nauc_ndcg_at_1000_max value: 33.06518006062093 - type: nauc_ndcg_at_1000_std value: -4.282105713014755 - type: nauc_ndcg_at_100_diff1 value: 49.42362108857786 - type: nauc_ndcg_at_100_max value: 32.92024325540483 - type: nauc_ndcg_at_100_std value: -3.7786765305496717 - type: nauc_ndcg_at_10_diff1 value: 48.83102435475594 - type: nauc_ndcg_at_10_max value: 31.898404563611958 - type: nauc_ndcg_at_10_std value: -6.2024003866707 - type: nauc_ndcg_at_1_diff1 value: 52.43364978537192 - type: nauc_ndcg_at_1_max value: 31.521755633355713 - type: nauc_ndcg_at_1_std value: -7.002499524130836 - type: nauc_ndcg_at_20_diff1 value: 49.466526454438316 - type: nauc_ndcg_at_20_max value: 32.424462698701674 - type: nauc_ndcg_at_20_std value: -4.520809563712905 - type: nauc_ndcg_at_3_diff1 value: 50.997884562583884 - type: nauc_ndcg_at_3_max value: 33.26787046916917 - type: nauc_ndcg_at_3_std value: -6.340699471083753 - type: nauc_ndcg_at_5_diff1 value: 49.68314458398097 - type: nauc_ndcg_at_5_max value: 32.80910071143984 - type: nauc_ndcg_at_5_std value: -6.734495576445887 - type: nauc_precision_at_1000_diff1 value: -24.18940012795299 - type: nauc_precision_at_1000_max value: -10.995343674356896 - type: nauc_precision_at_1000_std value: -8.298841004724856 - type: nauc_precision_at_100_diff1 value: -18.104939577865935 - type: nauc_precision_at_100_max value: -1.3757613100627637 - type: nauc_precision_at_100_std value: 0.07661922190466432 - type: nauc_precision_at_10_diff1 value: 3.9624459059275967 - type: nauc_precision_at_10_max value: 14.841561593450391 - type: nauc_precision_at_10_std value: -2.485374333613117 - type: nauc_precision_at_1_diff1 value: 52.43364978537192 - type: nauc_precision_at_1_max value: 31.521755633355713 - type: nauc_precision_at_1_std value: -7.002499524130836 - type: nauc_precision_at_20_diff1 value: -4.4791763436505265 - type: nauc_precision_at_20_max value: 9.157872836996276 - type: nauc_precision_at_20_std value: 2.086903518342088 - type: nauc_precision_at_3_diff1 value: 28.480888018235568 - type: nauc_precision_at_3_max value: 30.34526267718485 - type: nauc_precision_at_3_std value: -6.3006706923866025 - type: nauc_precision_at_5_diff1 value: 16.488039195453517 - type: nauc_precision_at_5_max value: 24.593477099241852 - type: nauc_precision_at_5_std value: -5.316448107840636 - type: nauc_recall_at_1000_diff1 value: 34.715187316533076 - type: nauc_recall_at_1000_max value: 58.2266544684947 - type: nauc_recall_at_1000_std value: 63.85237636398278 - type: nauc_recall_at_100_diff1 value: 36.08623826028132 - type: nauc_recall_at_100_max value: 33.05011429439473 - type: nauc_recall_at_100_std value: 16.559545021212564 - type: nauc_recall_at_10_diff1 value: 39.76738610714205 - type: nauc_recall_at_10_max value: 28.233045706945997 - type: nauc_recall_at_10_std value: -5.13243784043598 - type: nauc_recall_at_1_diff1 value: 55.6889617582672 - type: nauc_recall_at_1_max value: 27.821166966868176 - type: nauc_recall_at_1_std value: -5.778838498211728 - type: nauc_recall_at_20_diff1 value: 41.18682480073759 - type: nauc_recall_at_20_max value: 29.525993239296945 - type: nauc_recall_at_20_std value: 1.5003598438954298 - type: nauc_recall_at_3_diff1 value: 48.31879460301157 - type: nauc_recall_at_3_max value: 32.93751306970167 - type: nauc_recall_at_3_std value: -5.28070084211707 - type: nauc_recall_at_5_diff1 value: 44.327686388315435 - type: nauc_recall_at_5_max value: 32.04823486234599 - type: nauc_recall_at_5_std value: -6.4221525602778256 - type: ndcg_at_1 value: 43.348 - type: ndcg_at_10 value: 53.885000000000005 - type: ndcg_at_100 value: 59.204 - type: ndcg_at_1000 value: 60.744 - type: ndcg_at_20 value: 55.995 - type: ndcg_at_3 value: 49.112 - type: ndcg_at_5 value: 51.61900000000001 - type: precision_at_1 value: 43.348 - type: precision_at_10 value: 10.242999999999999 - type: precision_at_100 value: 1.6150000000000002 - type: precision_at_1000 value: 0.203 - type: precision_at_20 value: 6.066 - type: precision_at_3 value: 23.605 - type: precision_at_5 value: 17.024 - type: recall_at_1 value: 35.429 - type: recall_at_10 value: 65.77199999999999 - type: recall_at_100 value: 87.89 - type: recall_at_1000 value: 97.13000000000001 - type: recall_at_20 value: 73.299 - type: recall_at_3 value: 52.034000000000006 - type: recall_at_5 value: 58.96 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval type: mteb/cqadupstack-english config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: main_score value: 49.55 - type: map_at_1 value: 31.684 - type: map_at_10 value: 43.258 - type: map_at_100 value: 44.628 - type: map_at_1000 value: 44.761 - type: map_at_20 value: 44.015 - type: map_at_3 value: 39.778000000000006 - type: map_at_5 value: 41.643 - type: mrr_at_1 value: 39.87261146496815 - type: mrr_at_10 value: 49.31978566373469 - type: mrr_at_100 value: 49.94922739445482 - type: mrr_at_1000 value: 49.990325601254106 - type: mrr_at_20 value: 49.70597468576704 - type: mrr_at_3 value: 47.070063694267546 - type: mrr_at_5 value: 48.23248407643316 - type: nauc_map_at_1000_diff1 value: 53.44044712371752 - type: nauc_map_at_1000_max value: 34.5651440062204 - type: nauc_map_at_1000_std value: -0.9814384609230475 - type: nauc_map_at_100_diff1 value: 53.429004435388464 - type: nauc_map_at_100_max value: 34.52038957273436 - type: nauc_map_at_100_std value: -1.1021936362699805 - type: nauc_map_at_10_diff1 value: 53.879128574022005 - type: nauc_map_at_10_max value: 33.74771524140917 - type: nauc_map_at_10_std value: -2.945132777205236 - type: nauc_map_at_1_diff1 value: 60.25159799695403 - type: nauc_map_at_1_max value: 26.843892985235808 - type: nauc_map_at_1_std value: -9.618702739509093 - type: nauc_map_at_20_diff1 value: 53.56789898225283 - type: nauc_map_at_20_max value: 34.11628845872402 - type: nauc_map_at_20_std value: -2.024376635870884 - type: nauc_map_at_3_diff1 value: 54.45882099014072 - type: nauc_map_at_3_max value: 31.29495446507793 - type: nauc_map_at_3_std value: -6.391948228781555 - type: nauc_map_at_5_diff1 value: 54.20536489050697 - type: nauc_map_at_5_max value: 32.31001487256826 - type: nauc_map_at_5_std value: -5.050953263346934 - type: nauc_mrr_at_1000_diff1 value: 50.835858995999125 - type: nauc_mrr_at_1000_max value: 38.20717381701079 - type: nauc_mrr_at_1000_std value: 4.174163368228787 - type: nauc_mrr_at_100_diff1 value: 50.827072441041224 - type: nauc_mrr_at_100_max value: 38.21077622034756 - type: nauc_mrr_at_100_std value: 4.1951082737013365 - type: nauc_mrr_at_10_diff1 value: 50.90578491570948 - type: nauc_mrr_at_10_max value: 38.19229691746408 - type: nauc_mrr_at_10_std value: 3.8290750066335546 - type: nauc_mrr_at_1_diff1 value: 54.807021746871186 - type: nauc_mrr_at_1_max value: 37.09225642043841 - type: nauc_mrr_at_1_std value: 0.5654547513131355 - type: nauc_mrr_at_20_diff1 value: 50.86247832095378 - type: nauc_mrr_at_20_max value: 38.19277867384178 - type: nauc_mrr_at_20_std value: 4.098932316791841 - type: nauc_mrr_at_3_diff1 value: 50.788934370903036 - type: nauc_mrr_at_3_max value: 37.72130561895659 - type: nauc_mrr_at_3_std value: 2.7339370381517583 - type: nauc_mrr_at_5_diff1 value: 50.72543792525547 - type: nauc_mrr_at_5_max value: 37.57740908475375 - type: nauc_mrr_at_5_std value: 2.742881431085094 - type: nauc_ndcg_at_1000_diff1 value: 50.89692885407576 - type: nauc_ndcg_at_1000_max value: 37.250583054716955 - type: nauc_ndcg_at_1000_std value: 5.552279826578831 - type: nauc_ndcg_at_100_diff1 value: 50.624606875496944 - type: nauc_ndcg_at_100_max value: 37.1024514234627 - type: nauc_ndcg_at_100_std value: 5.495892760032762 - type: nauc_ndcg_at_10_diff1 value: 51.910387255793445 - type: nauc_ndcg_at_10_max value: 36.71168418905039 - type: nauc_ndcg_at_10_std value: 2.3064115117905217 - type: nauc_ndcg_at_1_diff1 value: 54.807021746871186 - type: nauc_ndcg_at_1_max value: 37.09225642043841 - type: nauc_ndcg_at_1_std value: 0.5654547513131355 - type: nauc_ndcg_at_20_diff1 value: 51.43416588546778 - type: nauc_ndcg_at_20_max value: 36.76387180172346 - type: nauc_ndcg_at_20_std value: 3.7012798827049718 - type: nauc_ndcg_at_3_diff1 value: 50.91198494475423 - type: nauc_ndcg_at_3_max value: 34.92770670756687 - type: nauc_ndcg_at_3_std value: -0.9071486759887368 - type: nauc_ndcg_at_5_diff1 value: 51.63559468683886 - type: nauc_ndcg_at_5_max value: 34.86849679864564 - type: nauc_ndcg_at_5_std value: -0.734837221224976 - type: nauc_precision_at_1000_diff1 value: -13.43645457127175 - type: nauc_precision_at_1000_max value: 12.71162105198664 - type: nauc_precision_at_1000_std value: 33.175399007040255 - type: nauc_precision_at_100_diff1 value: -8.549834785105412 - type: nauc_precision_at_100_max value: 22.47383497331883 - type: nauc_precision_at_100_std value: 39.09108761430844 - type: nauc_precision_at_10_diff1 value: 7.556572451100043 - type: nauc_precision_at_10_max value: 35.35285122987575 - type: nauc_precision_at_10_std value: 29.417466305615967 - type: nauc_precision_at_1_diff1 value: 54.807021746871186 - type: nauc_precision_at_1_max value: 37.09225642043841 - type: nauc_precision_at_1_std value: 0.5654547513131355 - type: nauc_precision_at_20_diff1 value: -0.550158641635712 - type: nauc_precision_at_20_max value: 29.9068430006187 - type: nauc_precision_at_20_std value: 33.920603132821185 - type: nauc_precision_at_3_diff1 value: 25.551264664276687 - type: nauc_precision_at_3_max value: 37.59463225854679 - type: nauc_precision_at_3_std value: 13.707295021359043 - type: nauc_precision_at_5_diff1 value: 17.76136129817151 - type: nauc_precision_at_5_max value: 35.85363807255972 - type: nauc_precision_at_5_std value: 19.48470876841111 - type: nauc_recall_at_1000_diff1 value: 37.1593620123866 - type: nauc_recall_at_1000_max value: 46.29322536951135 - type: nauc_recall_at_1000_std value: 51.47312657083967 - type: nauc_recall_at_100_diff1 value: 37.7542224949536 - type: nauc_recall_at_100_max value: 38.84120637703135 - type: nauc_recall_at_100_std value: 28.839672572221925 - type: nauc_recall_at_10_diff1 value: 46.24130302658384 - type: nauc_recall_at_10_max value: 35.89001724712849 - type: nauc_recall_at_10_std value: 6.985137790828618 - type: nauc_recall_at_1_diff1 value: 60.25159799695403 - type: nauc_recall_at_1_max value: 26.843892985235808 - type: nauc_recall_at_1_std value: -9.618702739509093 - type: nauc_recall_at_20_diff1 value: 43.63576680886187 - type: nauc_recall_at_20_max value: 36.79079644708101 - type: nauc_recall_at_20_std value: 13.81561928605839 - type: nauc_recall_at_3_diff1 value: 48.2299322140522 - type: nauc_recall_at_3_max value: 30.038088484376203 - type: nauc_recall_at_3_std value: -4.871116183843762 - type: nauc_recall_at_5_diff1 value: 47.22331872695983 - type: nauc_recall_at_5_max value: 30.398541477173136 - type: nauc_recall_at_5_std value: -3.2038541888528957 - type: ndcg_at_1 value: 39.873 - type: ndcg_at_10 value: 49.55 - type: ndcg_at_100 value: 53.809 - type: ndcg_at_1000 value: 55.767999999999994 - type: ndcg_at_20 value: 51.275999999999996 - type: ndcg_at_3 value: 44.91 - type: ndcg_at_5 value: 46.855999999999995 - type: precision_at_1 value: 39.873 - type: precision_at_10 value: 9.65 - type: precision_at_100 value: 1.522 - type: precision_at_1000 value: 0.196 - type: precision_at_20 value: 5.701 - type: precision_at_3 value: 22.166 - type: precision_at_5 value: 15.643 - type: recall_at_1 value: 31.684 - type: recall_at_10 value: 60.69 - type: recall_at_100 value: 78.521 - type: recall_at_1000 value: 91.02900000000001 - type: recall_at_20 value: 66.973 - type: recall_at_3 value: 46.807 - type: recall_at_5 value: 52.402 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval type: mteb/cqadupstack-gaming config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: main_score value: 62.686 - type: map_at_1 value: 43.856 - type: map_at_10 value: 57.056 - type: map_at_100 value: 58.048 - type: map_at_1000 value: 58.092 - type: map_at_20 value: 57.684000000000005 - type: map_at_3 value: 53.958 - type: map_at_5 value: 55.80500000000001 - type: mrr_at_1 value: 50.03134796238244 - type: mrr_at_10 value: 60.31022043091019 - type: mrr_at_100 value: 60.91892338857461 - type: mrr_at_1000 value: 60.93770463536649 - type: mrr_at_20 value: 60.705642387392736 - type: mrr_at_3 value: 58.286311389759746 - type: mrr_at_5 value: 59.49320794148393 - type: nauc_map_at_1000_diff1 value: 54.849140197256695 - type: nauc_map_at_1000_max value: 38.978448968260224 - type: nauc_map_at_1000_std value: 0.4955439383268162 - type: nauc_map_at_100_diff1 value: 54.824334747823364 - type: nauc_map_at_100_max value: 38.959443109450994 - type: nauc_map_at_100_std value: 0.49626092018886037 - type: nauc_map_at_10_diff1 value: 54.778189277103394 - type: nauc_map_at_10_max value: 38.20972191654546 - type: nauc_map_at_10_std value: -0.7239823837455759 - type: nauc_map_at_1_diff1 value: 58.74017164752485 - type: nauc_map_at_1_max value: 31.528974862589585 - type: nauc_map_at_1_std value: -3.273824691929492 - type: nauc_map_at_20_diff1 value: 54.78943693416187 - type: nauc_map_at_20_max value: 38.77930316443076 - type: nauc_map_at_20_std value: 0.25607460088355544 - type: nauc_map_at_3_diff1 value: 55.68313410225767 - type: nauc_map_at_3_max value: 36.22847284104399 - type: nauc_map_at_3_std value: -3.010979639100503 - type: nauc_map_at_5_diff1 value: 55.11385094420661 - type: nauc_map_at_5_max value: 37.319681045490924 - type: nauc_map_at_5_std value: -2.156640733221061 - type: nauc_mrr_at_1000_diff1 value: 54.504759468380705 - type: nauc_mrr_at_1000_max value: 40.58849492650406 - type: nauc_mrr_at_1000_std value: 1.8226622175866118 - type: nauc_mrr_at_100_diff1 value: 54.4918034449886 - type: nauc_mrr_at_100_max value: 40.59202728933427 - type: nauc_mrr_at_100_std value: 1.8276428096536335 - type: nauc_mrr_at_10_diff1 value: 54.33603399493329 - type: nauc_mrr_at_10_max value: 40.58896878978089 - type: nauc_mrr_at_10_std value: 1.5733340909114375 - type: nauc_mrr_at_1_diff1 value: 58.062410036466105 - type: nauc_mrr_at_1_max value: 37.660958859966506 - type: nauc_mrr_at_1_std value: 0.029007600674170648 - type: nauc_mrr_at_20_diff1 value: 54.43793386924358 - type: nauc_mrr_at_20_max value: 40.66773423875307 - type: nauc_mrr_at_20_std value: 1.891967891797154 - type: nauc_mrr_at_3_diff1 value: 54.77901284537966 - type: nauc_mrr_at_3_max value: 40.182219821206964 - type: nauc_mrr_at_3_std value: 0.8911935034597871 - type: nauc_mrr_at_5_diff1 value: 54.466068837163675 - type: nauc_mrr_at_5_max value: 40.334996916684126 - type: nauc_mrr_at_5_std value: 0.9460830492892364 - type: nauc_ndcg_at_1000_diff1 value: 53.8465376860938 - type: nauc_ndcg_at_1000_max value: 41.63158111016696 - type: nauc_ndcg_at_1000_std value: 3.864205884257578 - type: nauc_ndcg_at_100_diff1 value: 53.4025864436944 - type: nauc_ndcg_at_100_max value: 41.805453995307914 - type: nauc_ndcg_at_100_std value: 4.36777557904857 - type: nauc_ndcg_at_10_diff1 value: 52.96034987157544 - type: nauc_ndcg_at_10_max value: 40.7601173480795 - type: nauc_ndcg_at_10_std value: 1.905824035879141 - type: nauc_ndcg_at_1_diff1 value: 58.062410036466105 - type: nauc_ndcg_at_1_max value: 37.660958859966506 - type: nauc_ndcg_at_1_std value: 0.029007600674170648 - type: nauc_ndcg_at_20_diff1 value: 53.2834771889242 - type: nauc_ndcg_at_20_max value: 41.713541932946406 - type: nauc_ndcg_at_20_std value: 3.865102828793311 - type: nauc_ndcg_at_3_diff1 value: 54.03389464372289 - type: nauc_ndcg_at_3_max value: 38.41449914649933 - type: nauc_ndcg_at_3_std value: -0.886276189886313 - type: nauc_ndcg_at_5_diff1 value: 53.456413320299 - type: nauc_ndcg_at_5_max value: 39.49048882649335 - type: nauc_ndcg_at_5_std value: -0.42692690160443814 - type: nauc_precision_at_1000_diff1 value: -14.770791653274824 - type: nauc_precision_at_1000_max value: 21.479874538905246 - type: nauc_precision_at_1000_std value: 28.607024261300207 - type: nauc_precision_at_100_diff1 value: -12.189696449878126 - type: nauc_precision_at_100_max value: 26.69785787492456 - type: nauc_precision_at_100_std value: 33.59098307467553 - type: nauc_precision_at_10_diff1 value: 6.922968330978399 - type: nauc_precision_at_10_max value: 34.52138344123087 - type: nauc_precision_at_10_std value: 21.768427637079952 - type: nauc_precision_at_1_diff1 value: 58.062410036466105 - type: nauc_precision_at_1_max value: 37.660958859966506 - type: nauc_precision_at_1_std value: 0.029007600674170648 - type: nauc_precision_at_20_diff1 value: -0.6837867902179278 - type: nauc_precision_at_20_max value: 33.98683709011133 - type: nauc_precision_at_20_std value: 30.8845561918902 - type: nauc_precision_at_3_diff1 value: 28.195043041120847 - type: nauc_precision_at_3_max value: 37.659916094938836 - type: nauc_precision_at_3_std value: 7.226520146634867 - type: nauc_precision_at_5_diff1 value: 16.633667288096245 - type: nauc_precision_at_5_max value: 34.90176597404891 - type: nauc_precision_at_5_std value: 12.421585442334088 - type: nauc_recall_at_1000_diff1 value: 45.20743732415397 - type: nauc_recall_at_1000_max value: 72.77115913579242 - type: nauc_recall_at_1000_std value: 70.48328496679083 - type: nauc_recall_at_100_diff1 value: 38.56282680810794 - type: nauc_recall_at_100_max value: 55.46797683321103 - type: nauc_recall_at_100_std value: 36.878791151929136 - type: nauc_recall_at_10_diff1 value: 44.18252051452362 - type: nauc_recall_at_10_max value: 43.33391810040086 - type: nauc_recall_at_10_std value: 6.663378192277723 - type: nauc_recall_at_1_diff1 value: 58.74017164752485 - type: nauc_recall_at_1_max value: 31.528974862589585 - type: nauc_recall_at_1_std value: -3.273824691929492 - type: nauc_recall_at_20_diff1 value: 44.19944231642417 - type: nauc_recall_at_20_max value: 49.401101483915866 - type: nauc_recall_at_20_std value: 18.97803841673839 - type: nauc_recall_at_3_diff1 value: 49.56378985428704 - type: nauc_recall_at_3_max value: 36.434210616870224 - type: nauc_recall_at_3_std value: -2.850559971607616 - type: nauc_recall_at_5_diff1 value: 47.37107217086109 - type: nauc_recall_at_5_max value: 39.0236745509895 - type: nauc_recall_at_5_std value: -1.7402454457937195 - type: ndcg_at_1 value: 50.031000000000006 - type: ndcg_at_10 value: 62.686 - type: ndcg_at_100 value: 66.403 - type: ndcg_at_1000 value: 67.241 - type: ndcg_at_20 value: 64.37899999999999 - type: ndcg_at_3 value: 57.859 - type: ndcg_at_5 value: 60.375 - type: precision_at_1 value: 50.031000000000006 - type: precision_at_10 value: 9.856 - type: precision_at_100 value: 1.266 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_20 value: 5.489 - type: precision_at_3 value: 25.746999999999996 - type: precision_at_5 value: 17.492 - type: recall_at_1 value: 43.856 - type: recall_at_10 value: 75.824 - type: recall_at_100 value: 91.622 - type: recall_at_1000 value: 97.538 - type: recall_at_20 value: 81.951 - type: recall_at_3 value: 63.016000000000005 - type: recall_at_5 value: 69.18299999999999 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval type: mteb/cqadupstack-gis config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: main_score value: 43.983 - type: map_at_1 value: 28.942 - type: map_at_10 value: 38.621 - type: map_at_100 value: 39.7 - type: map_at_1000 value: 39.766 - type: map_at_20 value: 39.262 - type: map_at_3 value: 35.719 - type: map_at_5 value: 37.378 - type: mrr_at_1 value: 31.29943502824859 - type: mrr_at_10 value: 40.76463994260603 - type: mrr_at_100 value: 41.67073617629083 - type: mrr_at_1000 value: 41.717446259457105 - type: mrr_at_20 value: 41.32577374689195 - type: mrr_at_3 value: 37.984934086628996 - type: mrr_at_5 value: 39.64595103578152 - type: nauc_map_at_1000_diff1 value: 43.64461679688985 - type: nauc_map_at_1000_max value: 31.53717883948204 - type: nauc_map_at_1000_std value: 1.193745788248017 - type: nauc_map_at_100_diff1 value: 43.63847825079489 - type: nauc_map_at_100_max value: 31.536602619279165 - type: nauc_map_at_100_std value: 1.2001240243342401 - type: nauc_map_at_10_diff1 value: 43.845991987142014 - type: nauc_map_at_10_max value: 31.27509937344113 - type: nauc_map_at_10_std value: 0.7327934840520994 - type: nauc_map_at_1_diff1 value: 50.62269273984579 - type: nauc_map_at_1_max value: 30.16325757909521 - type: nauc_map_at_1_std value: -0.6398875136233392 - type: nauc_map_at_20_diff1 value: 43.630758403790914 - type: nauc_map_at_20_max value: 31.408258098047703 - type: nauc_map_at_20_std value: 1.12616034652217 - type: nauc_map_at_3_diff1 value: 44.823493567359456 - type: nauc_map_at_3_max value: 31.075886347614496 - type: nauc_map_at_3_std value: -0.25126874515735426 - type: nauc_map_at_5_diff1 value: 43.79768853087658 - type: nauc_map_at_5_max value: 31.091080995725324 - type: nauc_map_at_5_std value: 0.16440771782544047 - type: nauc_mrr_at_1000_diff1 value: 42.7865400752329 - type: nauc_mrr_at_1000_max value: 32.84731670326893 - type: nauc_mrr_at_1000_std value: 2.6067637582013825 - type: nauc_mrr_at_100_diff1 value: 42.771741548331065 - type: nauc_mrr_at_100_max value: 32.85324232845987 - type: nauc_mrr_at_100_std value: 2.6092786694308376 - type: nauc_mrr_at_10_diff1 value: 42.82969738870672 - type: nauc_mrr_at_10_max value: 32.69407549631432 - type: nauc_mrr_at_10_std value: 2.302903910016054 - type: nauc_mrr_at_1_diff1 value: 49.05638333657571 - type: nauc_mrr_at_1_max value: 33.12030717171514 - type: nauc_mrr_at_1_std value: 1.3278035087690774 - type: nauc_mrr_at_20_diff1 value: 42.74267239536286 - type: nauc_mrr_at_20_max value: 32.78571108973092 - type: nauc_mrr_at_20_std value: 2.5932669908758643 - type: nauc_mrr_at_3_diff1 value: 43.69963426089187 - type: nauc_mrr_at_3_max value: 32.78193126956233 - type: nauc_mrr_at_3_std value: 1.634874463134699 - type: nauc_mrr_at_5_diff1 value: 42.838630647832524 - type: nauc_mrr_at_5_max value: 32.459318735260545 - type: nauc_mrr_at_5_std value: 1.9412518283209172 - type: nauc_ndcg_at_1000_diff1 value: 41.01253839851583 - type: nauc_ndcg_at_1000_max value: 32.69570568894237 - type: nauc_ndcg_at_1000_std value: 3.4254737113410343 - type: nauc_ndcg_at_100_diff1 value: 40.62589243745832 - type: nauc_ndcg_at_100_max value: 32.664990655736126 - type: nauc_ndcg_at_100_std value: 3.799569445326048 - type: nauc_ndcg_at_10_diff1 value: 41.31658753735306 - type: nauc_ndcg_at_10_max value: 31.511946320339295 - type: nauc_ndcg_at_10_std value: 2.0492930500796662 - type: nauc_ndcg_at_1_diff1 value: 49.05638333657571 - type: nauc_ndcg_at_1_max value: 33.12030717171514 - type: nauc_ndcg_at_1_std value: 1.3278035087690774 - type: nauc_ndcg_at_20_diff1 value: 40.66188223212841 - type: nauc_ndcg_at_20_max value: 31.926240431497476 - type: nauc_ndcg_at_20_std value: 3.370398664595343 - type: nauc_ndcg_at_3_diff1 value: 43.035580180241 - type: nauc_ndcg_at_3_max value: 31.363874129878404 - type: nauc_ndcg_at_3_std value: 0.1422507242819929 - type: nauc_ndcg_at_5_diff1 value: 41.29049003955878 - type: nauc_ndcg_at_5_max value: 31.112034994977737 - type: nauc_ndcg_at_5_std value: 0.860179279828966 - type: nauc_precision_at_1000_diff1 value: -12.41854465881981 - type: nauc_precision_at_1000_max value: 14.706779246590548 - type: nauc_precision_at_1000_std value: 9.812804367375206 - type: nauc_precision_at_100_diff1 value: 2.797520107808461 - type: nauc_precision_at_100_max value: 24.335873541811406 - type: nauc_precision_at_100_std value: 12.87186398750545 - type: nauc_precision_at_10_diff1 value: 24.530962799265847 - type: nauc_precision_at_10_max value: 31.00772010798733 - type: nauc_precision_at_10_std value: 6.696733001548185 - type: nauc_precision_at_1_diff1 value: 49.05638333657571 - type: nauc_precision_at_1_max value: 33.12030717171514 - type: nauc_precision_at_1_std value: 1.3278035087690774 - type: nauc_precision_at_20_diff1 value: 16.25028416351204 - type: nauc_precision_at_20_max value: 29.629326492027342 - type: nauc_precision_at_20_std value: 11.085888573121679 - type: nauc_precision_at_3_diff1 value: 33.923667689694256 - type: nauc_precision_at_3_max value: 33.5859782361996 - type: nauc_precision_at_3_std value: 1.9468331086918693 - type: nauc_precision_at_5_diff1 value: 27.917827233088875 - type: nauc_precision_at_5_max value: 33.13290043423535 - type: nauc_precision_at_5_std value: 3.800870695945311 - type: nauc_recall_at_1000_diff1 value: 9.680283388428789 - type: nauc_recall_at_1000_max value: 49.479399284871235 - type: nauc_recall_at_1000_std value: 31.506985071436088 - type: nauc_recall_at_100_diff1 value: 23.607673377885448 - type: nauc_recall_at_100_max value: 36.637750366403935 - type: nauc_recall_at_100_std value: 18.30770690564224 - type: nauc_recall_at_10_diff1 value: 33.199683418312446 - type: nauc_recall_at_10_max value: 29.63115497012312 - type: nauc_recall_at_10_std value: 4.813200391480566 - type: nauc_recall_at_1_diff1 value: 50.62269273984579 - type: nauc_recall_at_1_max value: 30.16325757909521 - type: nauc_recall_at_1_std value: -0.6398875136233392 - type: nauc_recall_at_20_diff1 value: 29.16488387844995 - type: nauc_recall_at_20_max value: 30.788019479459 - type: nauc_recall_at_20_std value: 11.031953917298853 - type: nauc_recall_at_3_diff1 value: 38.215351600417065 - type: nauc_recall_at_3_max value: 29.619887154236128 - type: nauc_recall_at_3_std value: -0.13237298980339363 - type: nauc_recall_at_5_diff1 value: 33.93788042633265 - type: nauc_recall_at_5_max value: 28.67185092656741 - type: nauc_recall_at_5_std value: 1.316700201091445 - type: ndcg_at_1 value: 31.299 - type: ndcg_at_10 value: 43.983 - type: ndcg_at_100 value: 48.992999999999995 - type: ndcg_at_1000 value: 50.757 - type: ndcg_at_20 value: 46.152 - type: ndcg_at_3 value: 38.367000000000004 - type: ndcg_at_5 value: 41.171 - type: precision_at_1 value: 31.299 - type: precision_at_10 value: 6.734 - type: precision_at_100 value: 0.972 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_20 value: 3.898 - type: precision_at_3 value: 16.121 - type: precision_at_5 value: 11.344999999999999 - type: recall_at_1 value: 28.942 - type: recall_at_10 value: 58.343999999999994 - type: recall_at_100 value: 80.82300000000001 - type: recall_at_1000 value: 94.348 - type: recall_at_20 value: 66.449 - type: recall_at_3 value: 43.415 - type: recall_at_5 value: 50.007999999999996 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval type: mteb/cqadupstack-mathematica config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: main_score value: 33.144 - type: map_at_1 value: 19.41 - type: map_at_10 value: 27.802 - type: map_at_100 value: 29.157 - type: map_at_1000 value: 29.274 - type: map_at_20 value: 28.549000000000003 - type: map_at_3 value: 25.052999999999997 - type: map_at_5 value: 26.521 - type: mrr_at_1 value: 23.756218905472636 - type: mrr_at_10 value: 32.3623450209271 - type: mrr_at_100 value: 33.3648208444617 - type: mrr_at_1000 value: 33.427688215162185 - type: mrr_at_20 value: 32.93723485575758 - type: mrr_at_3 value: 29.539800995024883 - type: mrr_at_5 value: 31.156716417910452 - type: nauc_map_at_1000_diff1 value: 36.196391248081284 - type: nauc_map_at_1000_max value: 25.650644367091495 - type: nauc_map_at_1000_std value: 6.130340697729844 - type: nauc_map_at_100_diff1 value: 36.138890642411376 - type: nauc_map_at_100_max value: 25.587124763888518 - type: nauc_map_at_100_std value: 6.129336379055536 - type: nauc_map_at_10_diff1 value: 36.254426743566775 - type: nauc_map_at_10_max value: 25.465599906543034 - type: nauc_map_at_10_std value: 5.880280378112879 - type: nauc_map_at_1_diff1 value: 42.890551563179976 - type: nauc_map_at_1_max value: 25.813805281076956 - type: nauc_map_at_1_std value: 5.150718386163028 - type: nauc_map_at_20_diff1 value: 35.98551587974314 - type: nauc_map_at_20_max value: 25.501540521726636 - type: nauc_map_at_20_std value: 5.858703157458749 - type: nauc_map_at_3_diff1 value: 37.646558039577734 - type: nauc_map_at_3_max value: 26.138491471124247 - type: nauc_map_at_3_std value: 6.0487505175540734 - type: nauc_map_at_5_diff1 value: 36.817582976153695 - type: nauc_map_at_5_max value: 25.398200211121146 - type: nauc_map_at_5_std value: 6.31126763919522 - type: nauc_mrr_at_1000_diff1 value: 37.313544952847835 - type: nauc_mrr_at_1000_max value: 26.96218532078988 - type: nauc_mrr_at_1000_std value: 6.814359224654042 - type: nauc_mrr_at_100_diff1 value: 37.28104407653679 - type: nauc_mrr_at_100_max value: 26.931243040477256 - type: nauc_mrr_at_100_std value: 6.800500150841733 - type: nauc_mrr_at_10_diff1 value: 37.315832621275895 - type: nauc_mrr_at_10_max value: 26.941454225978372 - type: nauc_mrr_at_10_std value: 6.837046527796884 - type: nauc_mrr_at_1_diff1 value: 43.19904188582958 - type: nauc_mrr_at_1_max value: 26.975620445904795 - type: nauc_mrr_at_1_std value: 4.52071008581395 - type: nauc_mrr_at_20_diff1 value: 37.2200524790774 - type: nauc_mrr_at_20_max value: 26.971494160765847 - type: nauc_mrr_at_20_std value: 6.716431228783282 - type: nauc_mrr_at_3_diff1 value: 38.46236387340654 - type: nauc_mrr_at_3_max value: 27.846812992192056 - type: nauc_mrr_at_3_std value: 6.550711872569794 - type: nauc_mrr_at_5_diff1 value: 37.620346007658476 - type: nauc_mrr_at_5_max value: 27.031025952102038 - type: nauc_mrr_at_5_std value: 7.32343760231163 - type: nauc_ndcg_at_1000_diff1 value: 34.95081314840592 - type: nauc_ndcg_at_1000_max value: 26.89265465124325 - type: nauc_ndcg_at_1000_std value: 7.854154466831975 - type: nauc_ndcg_at_100_diff1 value: 34.01417812563093 - type: nauc_ndcg_at_100_max value: 25.792737746436835 - type: nauc_ndcg_at_100_std value: 7.726584165493833 - type: nauc_ndcg_at_10_diff1 value: 33.895122516474466 - type: nauc_ndcg_at_10_max value: 25.388442204589612 - type: nauc_ndcg_at_10_std value: 6.359560223645991 - type: nauc_ndcg_at_1_diff1 value: 43.19904188582958 - type: nauc_ndcg_at_1_max value: 26.975620445904795 - type: nauc_ndcg_at_1_std value: 4.52071008581395 - type: nauc_ndcg_at_20_diff1 value: 33.36078689830245 - type: nauc_ndcg_at_20_max value: 25.531794610571563 - type: nauc_ndcg_at_20_std value: 6.136658608653248 - type: nauc_ndcg_at_3_diff1 value: 36.44505602530781 - type: nauc_ndcg_at_3_max value: 26.9104071983157 - type: nauc_ndcg_at_3_std value: 6.427178520371878 - type: nauc_ndcg_at_5_diff1 value: 35.01384323197442 - type: nauc_ndcg_at_5_max value: 25.5560447088692 - type: nauc_ndcg_at_5_std value: 7.3676236760360485 - type: nauc_precision_at_1000_diff1 value: 2.8903331041804514 - type: nauc_precision_at_1000_max value: 4.059662742366004 - type: nauc_precision_at_1000_std value: -1.5891687644008334 - type: nauc_precision_at_100_diff1 value: 8.437726471693766 - type: nauc_precision_at_100_max value: 11.250588557568427 - type: nauc_precision_at_100_std value: 4.231571164627862 - type: nauc_precision_at_10_diff1 value: 19.57085237210294 - type: nauc_precision_at_10_max value: 20.973093492003905 - type: nauc_precision_at_10_std value: 3.197416248152466 - type: nauc_precision_at_1_diff1 value: 43.19904188582958 - type: nauc_precision_at_1_max value: 26.975620445904795 - type: nauc_precision_at_1_std value: 4.52071008581395 - type: nauc_precision_at_20_diff1 value: 15.67136554192724 - type: nauc_precision_at_20_max value: 17.706882621057858 - type: nauc_precision_at_20_std value: 1.9363472182867714 - type: nauc_precision_at_3_diff1 value: 30.38035695042325 - type: nauc_precision_at_3_max value: 26.48218693244094 - type: nauc_precision_at_3_std value: 6.424657705785632 - type: nauc_precision_at_5_diff1 value: 25.272543315171458 - type: nauc_precision_at_5_max value: 22.32441421311652 - type: nauc_precision_at_5_std value: 7.4912569081905716 - type: nauc_recall_at_1000_diff1 value: 25.5748044137675 - type: nauc_recall_at_1000_max value: 43.85796585370269 - type: nauc_recall_at_1000_std value: 30.0338086596789 - type: nauc_recall_at_100_diff1 value: 22.577080638885093 - type: nauc_recall_at_100_max value: 23.224511700617477 - type: nauc_recall_at_100_std value: 15.187963852289313 - type: nauc_recall_at_10_diff1 value: 25.058592299355908 - type: nauc_recall_at_10_max value: 22.24448483279841 - type: nauc_recall_at_10_std value: 6.3179089740052765 - type: nauc_recall_at_1_diff1 value: 42.890551563179976 - type: nauc_recall_at_1_max value: 25.813805281076956 - type: nauc_recall_at_1_std value: 5.150718386163028 - type: nauc_recall_at_20_diff1 value: 22.433865123187307 - type: nauc_recall_at_20_max value: 22.739695641511762 - type: nauc_recall_at_20_std value: 5.362005125538497 - type: nauc_recall_at_3_diff1 value: 32.17919168998616 - type: nauc_recall_at_3_max value: 26.044028436867357 - type: nauc_recall_at_3_std value: 7.420349884006329 - type: nauc_recall_at_5_diff1 value: 28.967104573649138 - type: nauc_recall_at_5_max value: 23.40865848168201 - type: nauc_recall_at_5_std value: 9.174406147723621 - type: ndcg_at_1 value: 23.756 - type: ndcg_at_10 value: 33.144 - type: ndcg_at_100 value: 39.261 - type: ndcg_at_1000 value: 41.881 - type: ndcg_at_20 value: 35.56 - type: ndcg_at_3 value: 27.927999999999997 - type: ndcg_at_5 value: 30.293999999999997 - type: precision_at_1 value: 23.756 - type: precision_at_10 value: 5.995 - type: precision_at_100 value: 1.053 - type: precision_at_1000 value: 0.14100000000000001 - type: precision_at_20 value: 3.688 - type: precision_at_3 value: 13.059999999999999 - type: precision_at_5 value: 9.602 - type: recall_at_1 value: 19.41 - type: recall_at_10 value: 45.074 - type: recall_at_100 value: 71.131 - type: recall_at_1000 value: 89.604 - type: recall_at_20 value: 53.673 - type: recall_at_3 value: 31.055 - type: recall_at_5 value: 36.714999999999996 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval type: mteb/cqadupstack-physics config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: main_score value: 49.675000000000004 - type: map_at_1 value: 33.178999999999995 - type: map_at_10 value: 43.807 - type: map_at_100 value: 45.17 - type: map_at_1000 value: 45.271 - type: map_at_20 value: 44.516 - type: map_at_3 value: 40.813 - type: map_at_5 value: 42.457 - type: mrr_at_1 value: 40.32723772858518 - type: mrr_at_10 value: 49.646867409138814 - type: mrr_at_100 value: 50.493686101426285 - type: mrr_at_1000 value: 50.525386961808834 - type: mrr_at_20 value: 50.120274354884586 - type: mrr_at_3 value: 47.49759384023096 - type: mrr_at_5 value: 48.72473532242535 - type: nauc_map_at_1000_diff1 value: 49.5947127786396 - type: nauc_map_at_1000_max value: 33.39720045844929 - type: nauc_map_at_1000_std value: -3.131428593252271 - type: nauc_map_at_100_diff1 value: 49.57797867324617 - type: nauc_map_at_100_max value: 33.356927974709464 - type: nauc_map_at_100_std value: -3.1661365376766337 - type: nauc_map_at_10_diff1 value: 49.59294630598952 - type: nauc_map_at_10_max value: 32.86647346990462 - type: nauc_map_at_10_std value: -4.1582043443386745 - type: nauc_map_at_1_diff1 value: 53.98646767288695 - type: nauc_map_at_1_max value: 29.45629077638936 - type: nauc_map_at_1_std value: -5.621187380771589 - type: nauc_map_at_20_diff1 value: 49.486982890447074 - type: nauc_map_at_20_max value: 33.11681933406332 - type: nauc_map_at_20_std value: -3.5826433195146854 - type: nauc_map_at_3_diff1 value: 50.81807107491861 - type: nauc_map_at_3_max value: 32.32552291988859 - type: nauc_map_at_3_std value: -3.952946504088928 - type: nauc_map_at_5_diff1 value: 49.70201354274439 - type: nauc_map_at_5_max value: 32.831846031004886 - type: nauc_map_at_5_std value: -3.8330488624207737 - type: nauc_mrr_at_1000_diff1 value: 49.04159472507738 - type: nauc_mrr_at_1000_max value: 35.617600171138676 - type: nauc_mrr_at_1000_std value: -1.5975830757486646 - type: nauc_mrr_at_100_diff1 value: 49.03848471692094 - type: nauc_mrr_at_100_max value: 35.61936748662614 - type: nauc_mrr_at_100_std value: -1.5922053398594729 - type: nauc_mrr_at_10_diff1 value: 48.92463964652612 - type: nauc_mrr_at_10_max value: 35.37757708992045 - type: nauc_mrr_at_10_std value: -2.2052028139567303 - type: nauc_mrr_at_1_diff1 value: 52.23915787290734 - type: nauc_mrr_at_1_max value: 34.393531787632334 - type: nauc_mrr_at_1_std value: -1.452007661016969 - type: nauc_mrr_at_20_diff1 value: 48.91168438018404 - type: nauc_mrr_at_20_max value: 35.478962544421876 - type: nauc_mrr_at_20_std value: -1.8246048423555414 - type: nauc_mrr_at_3_diff1 value: 50.115432665442164 - type: nauc_mrr_at_3_max value: 35.89093796085569 - type: nauc_mrr_at_3_std value: -1.4895016313153366 - type: nauc_mrr_at_5_diff1 value: 49.04321261351915 - type: nauc_mrr_at_5_max value: 35.85730520949451 - type: nauc_mrr_at_5_std value: -1.68790556880753 - type: nauc_ndcg_at_1000_diff1 value: 48.294697499154374 - type: nauc_ndcg_at_1000_max value: 35.167410242367595 - type: nauc_ndcg_at_1000_std value: -0.6346078535914157 - type: nauc_ndcg_at_100_diff1 value: 48.025525283449014 - type: nauc_ndcg_at_100_max value: 34.79288511776105 - type: nauc_ndcg_at_100_std value: -0.7823403044086993 - type: nauc_ndcg_at_10_diff1 value: 47.70793258015258 - type: nauc_ndcg_at_10_max value: 33.09558927880104 - type: nauc_ndcg_at_10_std value: -4.7793864166260605 - type: nauc_ndcg_at_1_diff1 value: 52.23915787290734 - type: nauc_ndcg_at_1_max value: 34.393531787632334 - type: nauc_ndcg_at_1_std value: -1.452007661016969 - type: nauc_ndcg_at_20_diff1 value: 47.354286045074815 - type: nauc_ndcg_at_20_max value: 33.686648806027975 - type: nauc_ndcg_at_20_std value: -3.0189085132476556 - type: nauc_ndcg_at_3_diff1 value: 49.68805334316908 - type: nauc_ndcg_at_3_max value: 34.196077748056496 - type: nauc_ndcg_at_3_std value: -2.7167289163768436 - type: nauc_ndcg_at_5_diff1 value: 47.94474868912989 - type: nauc_ndcg_at_5_max value: 34.00261603413051 - type: nauc_ndcg_at_5_std value: -3.3541028103046115 - type: nauc_precision_at_1000_diff1 value: -12.0150100710755 - type: nauc_precision_at_1000_max value: 5.332942816568796 - type: nauc_precision_at_1000_std value: 14.543288479130458 - type: nauc_precision_at_100_diff1 value: -4.920332181588838 - type: nauc_precision_at_100_max value: 14.42313332017491 - type: nauc_precision_at_100_std value: 17.821953321018384 - type: nauc_precision_at_10_diff1 value: 14.70509089079217 - type: nauc_precision_at_10_max value: 25.381887131649716 - type: nauc_precision_at_10_std value: 5.226419288645675 - type: nauc_precision_at_1_diff1 value: 52.23915787290734 - type: nauc_precision_at_1_max value: 34.393531787632334 - type: nauc_precision_at_1_std value: -1.452007661016969 - type: nauc_precision_at_20_diff1 value: 6.312827641507564 - type: nauc_precision_at_20_max value: 22.483038562271933 - type: nauc_precision_at_20_std value: 11.368419856892416 - type: nauc_precision_at_3_diff1 value: 33.271443420273606 - type: nauc_precision_at_3_max value: 33.571078182106675 - type: nauc_precision_at_3_std value: 4.47382265155717 - type: nauc_precision_at_5_diff1 value: 23.43287104284656 - type: nauc_precision_at_5_max value: 30.909085068105313 - type: nauc_precision_at_5_std value: 5.545672049452433 - type: nauc_recall_at_1000_diff1 value: 35.22615594677707 - type: nauc_recall_at_1000_max value: 52.0710533173532 - type: nauc_recall_at_1000_std value: 45.17683523786464 - type: nauc_recall_at_100_diff1 value: 36.2169056956332 - type: nauc_recall_at_100_max value: 35.02435003210817 - type: nauc_recall_at_100_std value: 15.833632946282508 - type: nauc_recall_at_10_diff1 value: 39.12440292974848 - type: nauc_recall_at_10_max value: 28.0546011979648 - type: nauc_recall_at_10_std value: -9.620558638092172 - type: nauc_recall_at_1_diff1 value: 53.98646767288695 - type: nauc_recall_at_1_max value: 29.45629077638936 - type: nauc_recall_at_1_std value: -5.621187380771589 - type: nauc_recall_at_20_diff1 value: 36.39254630768161 - type: nauc_recall_at_20_max value: 29.277856508751967 - type: nauc_recall_at_20_std value: -3.048007490798412 - type: nauc_recall_at_3_diff1 value: 45.64706642644958 - type: nauc_recall_at_3_max value: 31.003050159737413 - type: nauc_recall_at_3_std value: -4.849763876930667 - type: nauc_recall_at_5_diff1 value: 40.918108859971746 - type: nauc_recall_at_5_max value: 30.69907335071493 - type: nauc_recall_at_5_std value: -6.1445436251916865 - type: ndcg_at_1 value: 40.327 - type: ndcg_at_10 value: 49.675000000000004 - type: ndcg_at_100 value: 55.364000000000004 - type: ndcg_at_1000 value: 56.992 - type: ndcg_at_20 value: 51.803999999999995 - type: ndcg_at_3 value: 45.227000000000004 - type: ndcg_at_5 value: 47.244 - type: precision_at_1 value: 40.327 - type: precision_at_10 value: 8.826 - type: precision_at_100 value: 1.354 - type: precision_at_1000 value: 0.167 - type: precision_at_20 value: 5.115 - type: precision_at_3 value: 21.303 - type: precision_at_5 value: 14.726 - type: recall_at_1 value: 33.178999999999995 - type: recall_at_10 value: 61.087 - type: recall_at_100 value: 85.099 - type: recall_at_1000 value: 95.14099999999999 - type: recall_at_20 value: 68.623 - type: recall_at_3 value: 48.245 - type: recall_at_5 value: 53.832 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval type: mteb/cqadupstack-programmers config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: main_score value: 44.99 - type: map_at_1 value: 28.089 - type: map_at_10 value: 38.98 - type: map_at_100 value: 40.339000000000006 - type: map_at_1000 value: 40.441 - type: map_at_20 value: 39.702 - type: map_at_3 value: 35.620000000000005 - type: map_at_5 value: 37.657000000000004 - type: mrr_at_1 value: 35.15981735159817 - type: mrr_at_10 value: 44.54075161266937 - type: mrr_at_100 value: 45.435730392436646 - type: mrr_at_1000 value: 45.47673849356812 - type: mrr_at_20 value: 45.05949613726918 - type: mrr_at_3 value: 42.00913242009131 - type: mrr_at_5 value: 43.52739726027392 - type: nauc_map_at_1000_diff1 value: 42.6375513442399 - type: nauc_map_at_1000_max value: 35.83899956589522 - type: nauc_map_at_1000_std value: 5.798620017712549 - type: nauc_map_at_100_diff1 value: 42.609712253881504 - type: nauc_map_at_100_max value: 35.85401871065736 - type: nauc_map_at_100_std value: 5.829007296755533 - type: nauc_map_at_10_diff1 value: 42.90931172127824 - type: nauc_map_at_10_max value: 35.46694204511423 - type: nauc_map_at_10_std value: 5.131477704152026 - type: nauc_map_at_1_diff1 value: 48.066312177855956 - type: nauc_map_at_1_max value: 30.67745267941573 - type: nauc_map_at_1_std value: -1.4170737991670943 - type: nauc_map_at_20_diff1 value: 42.730423700784 - type: nauc_map_at_20_max value: 35.710039616497085 - type: nauc_map_at_20_std value: 5.363961887475162 - type: nauc_map_at_3_diff1 value: 43.499223646579935 - type: nauc_map_at_3_max value: 33.872570039621564 - type: nauc_map_at_3_std value: 3.0787571843453008 - type: nauc_map_at_5_diff1 value: 43.28963642946521 - type: nauc_map_at_5_max value: 35.18327408279892 - type: nauc_map_at_5_std value: 4.516467154662473 - type: nauc_mrr_at_1000_diff1 value: 42.71279871641341 - type: nauc_mrr_at_1000_max value: 37.48825064817496 - type: nauc_mrr_at_1000_std value: 8.10015025024314 - type: nauc_mrr_at_100_diff1 value: 42.694777404773376 - type: nauc_mrr_at_100_max value: 37.476741768741086 - type: nauc_mrr_at_100_std value: 8.11525130417229 - type: nauc_mrr_at_10_diff1 value: 42.954194054560176 - type: nauc_mrr_at_10_max value: 37.606138578797506 - type: nauc_mrr_at_10_std value: 8.092519513302399 - type: nauc_mrr_at_1_diff1 value: 48.350790286038574 - type: nauc_mrr_at_1_max value: 33.97992759739641 - type: nauc_mrr_at_1_std value: 1.8332987018664093 - type: nauc_mrr_at_20_diff1 value: 42.664983701783044 - type: nauc_mrr_at_20_max value: 37.47450702110784 - type: nauc_mrr_at_20_std value: 8.001067634745462 - type: nauc_mrr_at_3_diff1 value: 42.921968602737955 - type: nauc_mrr_at_3_max value: 37.19599728791262 - type: nauc_mrr_at_3_std value: 7.4692697422507575 - type: nauc_mrr_at_5_diff1 value: 42.96028546491891 - type: nauc_mrr_at_5_max value: 37.688350071295915 - type: nauc_mrr_at_5_std value: 8.213017954012372 - type: nauc_ndcg_at_1000_diff1 value: 40.70763263942397 - type: nauc_ndcg_at_1000_max value: 37.87768319167602 - type: nauc_ndcg_at_1000_std value: 9.908807071686738 - type: nauc_ndcg_at_100_diff1 value: 39.97828438221707 - type: nauc_ndcg_at_100_max value: 37.7723393835996 - type: nauc_ndcg_at_100_std value: 10.666779466040097 - type: nauc_ndcg_at_10_diff1 value: 41.172233451172936 - type: nauc_ndcg_at_10_max value: 37.12252131573939 - type: nauc_ndcg_at_10_std value: 8.273798754436639 - type: nauc_ndcg_at_1_diff1 value: 48.350790286038574 - type: nauc_ndcg_at_1_max value: 33.97992759739641 - type: nauc_ndcg_at_1_std value: 1.8332987018664093 - type: nauc_ndcg_at_20_diff1 value: 40.33325895172716 - type: nauc_ndcg_at_20_max value: 37.36015594019951 - type: nauc_ndcg_at_20_std value: 8.818556108749302 - type: nauc_ndcg_at_3_diff1 value: 41.652701699747254 - type: nauc_ndcg_at_3_max value: 35.499109874223294 - type: nauc_ndcg_at_3_std value: 5.831784865606119 - type: nauc_ndcg_at_5_diff1 value: 41.856346892595475 - type: nauc_ndcg_at_5_max value: 36.940681835687194 - type: nauc_ndcg_at_5_std value: 7.507798515093516 - type: nauc_precision_at_1000_diff1 value: -2.4605367806784866 - type: nauc_precision_at_1000_max value: -0.3538142127162922 - type: nauc_precision_at_1000_std value: 8.369794961833236 - type: nauc_precision_at_100_diff1 value: -0.34954522096524704 - type: nauc_precision_at_100_max value: 13.159909603146458 - type: nauc_precision_at_100_std value: 19.425561514133996 - type: nauc_precision_at_10_diff1 value: 17.048304710148145 - type: nauc_precision_at_10_max value: 29.816041846806375 - type: nauc_precision_at_10_std value: 18.358893367243798 - type: nauc_precision_at_1_diff1 value: 48.350790286038574 - type: nauc_precision_at_1_max value: 33.97992759739641 - type: nauc_precision_at_1_std value: 1.8332987018664093 - type: nauc_precision_at_20_diff1 value: 10.450903599411344 - type: nauc_precision_at_20_max value: 25.228916373799127 - type: nauc_precision_at_20_std value: 18.46893569529936 - type: nauc_precision_at_3_diff1 value: 29.181236567048636 - type: nauc_precision_at_3_max value: 35.64918262500281 - type: nauc_precision_at_3_std value: 13.347538222514968 - type: nauc_precision_at_5_diff1 value: 23.693323840550345 - type: nauc_precision_at_5_max value: 33.972399735191225 - type: nauc_precision_at_5_std value: 17.107012760554618 - type: nauc_recall_at_1000_diff1 value: 20.297340483227945 - type: nauc_recall_at_1000_max value: 63.084305970127275 - type: nauc_recall_at_1000_std value: 63.04655000858784 - type: nauc_recall_at_100_diff1 value: 22.587332148979723 - type: nauc_recall_at_100_max value: 40.740968468024775 - type: nauc_recall_at_100_std value: 34.120423684507124 - type: nauc_recall_at_10_diff1 value: 33.361195948673675 - type: nauc_recall_at_10_max value: 37.1411402410262 - type: nauc_recall_at_10_std value: 13.475407196166259 - type: nauc_recall_at_1_diff1 value: 48.066312177855956 - type: nauc_recall_at_1_max value: 30.67745267941573 - type: nauc_recall_at_1_std value: -1.4170737991670943 - type: nauc_recall_at_20_diff1 value: 28.703982984383984 - type: nauc_recall_at_20_max value: 37.32929431193496 - type: nauc_recall_at_20_std value: 16.139135347989903 - type: nauc_recall_at_3_diff1 value: 36.53346179134789 - type: nauc_recall_at_3_max value: 34.11397914899309 - type: nauc_recall_at_3_std value: 7.19358019807132 - type: nauc_recall_at_5_diff1 value: 36.24058894947452 - type: nauc_recall_at_5_max value: 37.00990358651097 - type: nauc_recall_at_5_std value: 11.074645476821619 - type: ndcg_at_1 value: 35.160000000000004 - type: ndcg_at_10 value: 44.99 - type: ndcg_at_100 value: 50.661 - type: ndcg_at_1000 value: 52.599 - type: ndcg_at_20 value: 47.154 - type: ndcg_at_3 value: 39.843 - type: ndcg_at_5 value: 42.486000000000004 - type: precision_at_1 value: 35.160000000000004 - type: precision_at_10 value: 8.299 - type: precision_at_100 value: 1.2850000000000001 - type: precision_at_1000 value: 0.16199999999999998 - type: precision_at_20 value: 4.84 - type: precision_at_3 value: 19.178 - type: precision_at_5 value: 13.927 - type: recall_at_1 value: 28.089 - type: recall_at_10 value: 57.158 - type: recall_at_100 value: 81.461 - type: recall_at_1000 value: 94.46900000000001 - type: recall_at_20 value: 64.927 - type: recall_at_3 value: 42.775999999999996 - type: recall_at_5 value: 49.719 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: mteb/cqadupstack config: default split: test revision: CQADupstackRetrieval is a combined dataset metrics: - type: main_score value: 44.989166666666655 - type: ndcg_at_10 value: 44.989166666666655 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval type: mteb/cqadupstack-stats config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: main_score value: 39.586 - type: map_at_1 value: 27.301 - type: map_at_10 value: 35.022 - type: map_at_100 value: 36.061 - type: map_at_1000 value: 36.146 - type: map_at_20 value: 35.608000000000004 - type: map_at_3 value: 32.978 - type: map_at_5 value: 33.994 - type: mrr_at_1 value: 30.67484662576687 - type: mrr_at_10 value: 38.1696124257474 - type: mrr_at_100 value: 38.99730898994137 - type: mrr_at_1000 value: 39.049871007408136 - type: mrr_at_20 value: 38.62424051396064 - type: mrr_at_3 value: 36.40081799591004 - type: mrr_at_5 value: 37.23670756646219 - type: nauc_map_at_1000_diff1 value: 50.4395097150819 - type: nauc_map_at_1000_max value: 42.36231476768413 - type: nauc_map_at_1000_std value: 1.0739414045485742 - type: nauc_map_at_100_diff1 value: 50.4253775421283 - type: nauc_map_at_100_max value: 42.34508969348633 - type: nauc_map_at_100_std value: 1.0590256535050135 - type: nauc_map_at_10_diff1 value: 50.74196619464362 - type: nauc_map_at_10_max value: 42.354326434590284 - type: nauc_map_at_10_std value: 0.6330167542705694 - type: nauc_map_at_1_diff1 value: 55.7404810490963 - type: nauc_map_at_1_max value: 40.7676941648045 - type: nauc_map_at_1_std value: -5.021772566610674 - type: nauc_map_at_20_diff1 value: 50.39792463598886 - type: nauc_map_at_20_max value: 42.25768760228577 - type: nauc_map_at_20_std value: 0.8979017700131807 - type: nauc_map_at_3_diff1 value: 51.53267996170815 - type: nauc_map_at_3_max value: 41.78801756883417 - type: nauc_map_at_3_std value: -0.6652383024396911 - type: nauc_map_at_5_diff1 value: 50.992783683271504 - type: nauc_map_at_5_max value: 41.8607977828188 - type: nauc_map_at_5_std value: 0.3484379897869807 - type: nauc_mrr_at_1000_diff1 value: 48.952907124445126 - type: nauc_mrr_at_1000_max value: 42.93563741482114 - type: nauc_mrr_at_1000_std value: 3.0791495753556424 - type: nauc_mrr_at_100_diff1 value: 48.941921107360805 - type: nauc_mrr_at_100_max value: 42.94419657374061 - type: nauc_mrr_at_100_std value: 3.075397087180154 - type: nauc_mrr_at_10_diff1 value: 49.098926306303056 - type: nauc_mrr_at_10_max value: 42.941857820499806 - type: nauc_mrr_at_10_std value: 2.8184474174054372 - type: nauc_mrr_at_1_diff1 value: 54.428109877009334 - type: nauc_mrr_at_1_max value: 42.50273386972492 - type: nauc_mrr_at_1_std value: -2.1811826216412187 - type: nauc_mrr_at_20_diff1 value: 48.82502192775839 - type: nauc_mrr_at_20_max value: 42.92227277257095 - type: nauc_mrr_at_20_std value: 2.975812634368533 - type: nauc_mrr_at_3_diff1 value: 49.440009227591176 - type: nauc_mrr_at_3_max value: 42.95503176290712 - type: nauc_mrr_at_3_std value: 2.2997128945013796 - type: nauc_mrr_at_5_diff1 value: 49.09846782701398 - type: nauc_mrr_at_5_max value: 42.51449168285772 - type: nauc_mrr_at_5_std value: 2.7785816484421297 - type: nauc_ndcg_at_1000_diff1 value: 48.14680758187888 - type: nauc_ndcg_at_1000_max value: 43.57465718500695 - type: nauc_ndcg_at_1000_std value: 5.287435676678261 - type: nauc_ndcg_at_100_diff1 value: 47.66081605743284 - type: nauc_ndcg_at_100_max value: 43.28156751251163 - type: nauc_ndcg_at_100_std value: 4.959626409663624 - type: nauc_ndcg_at_10_diff1 value: 48.25075619623878 - type: nauc_ndcg_at_10_max value: 43.00688660666578 - type: nauc_ndcg_at_10_std value: 3.2319193368891637 - type: nauc_ndcg_at_1_diff1 value: 54.428109877009334 - type: nauc_ndcg_at_1_max value: 42.50273386972492 - type: nauc_ndcg_at_1_std value: -2.1811826216412187 - type: nauc_ndcg_at_20_diff1 value: 47.1943098627403 - type: nauc_ndcg_at_20_max value: 42.86954491768707 - type: nauc_ndcg_at_20_std value: 4.08583080150737 - type: nauc_ndcg_at_3_diff1 value: 49.32681523192246 - type: nauc_ndcg_at_3_max value: 42.46898641470274 - type: nauc_ndcg_at_3_std value: 1.7416962407725236 - type: nauc_ndcg_at_5_diff1 value: 48.59647012439291 - type: nauc_ndcg_at_5_max value: 42.07098889846439 - type: nauc_ndcg_at_5_std value: 2.979621233356828 - type: nauc_precision_at_1000_diff1 value: -1.7366334161587105 - type: nauc_precision_at_1000_max value: 17.70969166396819 - type: nauc_precision_at_1000_std value: 17.50619975322144 - type: nauc_precision_at_100_diff1 value: 10.082579982582155 - type: nauc_precision_at_100_max value: 28.024893516091776 - type: nauc_precision_at_100_std value: 18.41413013357596 - type: nauc_precision_at_10_diff1 value: 28.796167732373657 - type: nauc_precision_at_10_max value: 40.37340024485382 - type: nauc_precision_at_10_std value: 13.718572711091733 - type: nauc_precision_at_1_diff1 value: 54.428109877009334 - type: nauc_precision_at_1_max value: 42.50273386972492 - type: nauc_precision_at_1_std value: -2.1811826216412187 - type: nauc_precision_at_20_diff1 value: 19.82691920771315 - type: nauc_precision_at_20_max value: 34.45075390159975 - type: nauc_precision_at_20_std value: 16.410812072348058 - type: nauc_precision_at_3_diff1 value: 40.85430254962678 - type: nauc_precision_at_3_max value: 43.63016056067074 - type: nauc_precision_at_3_std value: 9.322014634477581 - type: nauc_precision_at_5_diff1 value: 35.830272848975795 - type: nauc_precision_at_5_max value: 41.30047691620363 - type: nauc_precision_at_5_std value: 13.145693992266565 - type: nauc_recall_at_1000_diff1 value: 35.532000545890504 - type: nauc_recall_at_1000_max value: 50.714223194510325 - type: nauc_recall_at_1000_std value: 43.09037309139045 - type: nauc_recall_at_100_diff1 value: 35.11024488875192 - type: nauc_recall_at_100_max value: 43.0874566265193 - type: nauc_recall_at_100_std value: 19.70628521846854 - type: nauc_recall_at_10_diff1 value: 40.36203726741153 - type: nauc_recall_at_10_max value: 42.581482582576726 - type: nauc_recall_at_10_std value: 8.642553371022348 - type: nauc_recall_at_1_diff1 value: 55.7404810490963 - type: nauc_recall_at_1_max value: 40.7676941648045 - type: nauc_recall_at_1_std value: -5.021772566610674 - type: nauc_recall_at_20_diff1 value: 35.97348868186562 - type: nauc_recall_at_20_max value: 41.82695933305065 - type: nauc_recall_at_20_std value: 11.444957541593585 - type: nauc_recall_at_3_diff1 value: 44.20020470014979 - type: nauc_recall_at_3_max value: 40.84130855296979 - type: nauc_recall_at_3_std value: 5.004883338558809 - type: nauc_recall_at_5_diff1 value: 42.08756885472078 - type: nauc_recall_at_5_max value: 39.90323783606852 - type: nauc_recall_at_5_std value: 8.085182534171127 - type: ndcg_at_1 value: 30.675 - type: ndcg_at_10 value: 39.586 - type: ndcg_at_100 value: 44.737 - type: ndcg_at_1000 value: 46.863 - type: ndcg_at_20 value: 41.495 - type: ndcg_at_3 value: 35.8 - type: ndcg_at_5 value: 37.3 - type: precision_at_1 value: 30.675 - type: precision_at_10 value: 6.196 - type: precision_at_100 value: 0.9570000000000001 - type: precision_at_1000 value: 0.122 - type: precision_at_20 value: 3.6350000000000002 - type: precision_at_3 value: 15.337 - type: precision_at_5 value: 10.337 - type: recall_at_1 value: 27.301 - type: recall_at_10 value: 50.346999999999994 - type: recall_at_100 value: 74.459 - type: recall_at_1000 value: 90.018 - type: recall_at_20 value: 57.473 - type: recall_at_3 value: 39.672000000000004 - type: recall_at_5 value: 43.383 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval type: mteb/cqadupstack-tex config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: main_score value: 32.842 - type: map_at_1 value: 19.527 - type: map_at_10 value: 27.711999999999996 - type: map_at_100 value: 28.98 - type: map_at_1000 value: 29.108 - type: map_at_20 value: 28.407 - type: map_at_3 value: 25.023 - type: map_at_5 value: 26.528000000000002 - type: mrr_at_1 value: 23.675154852030282 - type: mrr_at_10 value: 31.810676323752784 - type: mrr_at_100 value: 32.788970614380716 - type: mrr_at_1000 value: 32.86028758975889 - type: mrr_at_20 value: 32.35935756676056 - type: mrr_at_3 value: 29.41615049323246 - type: mrr_at_5 value: 30.785730672172633 - type: nauc_map_at_1000_diff1 value: 35.597766688968015 - type: nauc_map_at_1000_max value: 26.295790183159845 - type: nauc_map_at_1000_std value: -0.04229904865958209 - type: nauc_map_at_100_diff1 value: 35.568782622469925 - type: nauc_map_at_100_max value: 26.27850795471227 - type: nauc_map_at_100_std value: -0.04944875782811099 - type: nauc_map_at_10_diff1 value: 35.63760937893694 - type: nauc_map_at_10_max value: 26.130094042028233 - type: nauc_map_at_10_std value: -0.6896882769027717 - type: nauc_map_at_1_diff1 value: 41.759098341890976 - type: nauc_map_at_1_max value: 23.918885427783326 - type: nauc_map_at_1_std value: -2.1383574897865074 - type: nauc_map_at_20_diff1 value: 35.55706530442612 - type: nauc_map_at_20_max value: 26.23339626569677 - type: nauc_map_at_20_std value: -0.162172033918129 - type: nauc_map_at_3_diff1 value: 37.22183376355153 - type: nauc_map_at_3_max value: 25.770512522122186 - type: nauc_map_at_3_std value: -1.3105892187778403 - type: nauc_map_at_5_diff1 value: 36.205913161663084 - type: nauc_map_at_5_max value: 25.953300641502064 - type: nauc_map_at_5_std value: -0.7987363137547906 - type: nauc_mrr_at_1000_diff1 value: 34.864016559617646 - type: nauc_mrr_at_1000_max value: 26.8689525348564 - type: nauc_mrr_at_1000_std value: -0.5839923973914446 - type: nauc_mrr_at_100_diff1 value: 34.83820469598538 - type: nauc_mrr_at_100_max value: 26.864669056231282 - type: nauc_mrr_at_100_std value: -0.5785645654158633 - type: nauc_mrr_at_10_diff1 value: 34.81868397381981 - type: nauc_mrr_at_10_max value: 26.79988560460627 - type: nauc_mrr_at_10_std value: -1.1113808365827318 - type: nauc_mrr_at_1_diff1 value: 40.0281507903504 - type: nauc_mrr_at_1_max value: 25.036735941806583 - type: nauc_mrr_at_1_std value: -2.508700799268523 - type: nauc_mrr_at_20_diff1 value: 34.81954537357966 - type: nauc_mrr_at_20_max value: 26.877673033315453 - type: nauc_mrr_at_20_std value: -0.6706028107452919 - type: nauc_mrr_at_3_diff1 value: 35.87313782549696 - type: nauc_mrr_at_3_max value: 26.776261693392335 - type: nauc_mrr_at_3_std value: -1.8010591328112908 - type: nauc_mrr_at_5_diff1 value: 35.31673912159536 - type: nauc_mrr_at_5_max value: 26.78720786106881 - type: nauc_mrr_at_5_std value: -1.3096326953900546 - type: nauc_ndcg_at_1000_diff1 value: 33.43105244339048 - type: nauc_ndcg_at_1000_max value: 27.52195065724684 - type: nauc_ndcg_at_1000_std value: 2.8376056562675744 - type: nauc_ndcg_at_100_diff1 value: 32.90916846420573 - type: nauc_ndcg_at_100_max value: 27.27161017736065 - type: nauc_ndcg_at_100_std value: 2.8703122625872126 - type: nauc_ndcg_at_10_diff1 value: 33.12714979317447 - type: nauc_ndcg_at_10_max value: 26.67762031747992 - type: nauc_ndcg_at_10_std value: -0.1341345572932233 - type: nauc_ndcg_at_1_diff1 value: 40.0281507903504 - type: nauc_ndcg_at_1_max value: 25.036735941806583 - type: nauc_ndcg_at_1_std value: -2.508700799268523 - type: nauc_ndcg_at_20_diff1 value: 32.891656138688546 - type: nauc_ndcg_at_20_max value: 26.991976404027163 - type: nauc_ndcg_at_20_std value: 1.6050741106677746 - type: nauc_ndcg_at_3_diff1 value: 35.576958713955484 - type: nauc_ndcg_at_3_max value: 26.41687745899445 - type: nauc_ndcg_at_3_std value: -1.5326687067002291 - type: nauc_ndcg_at_5_diff1 value: 34.27335619067276 - type: nauc_ndcg_at_5_max value: 26.479515412084208 - type: nauc_ndcg_at_5_std value: -0.5597648935666003 - type: nauc_precision_at_1000_diff1 value: -0.18660914306684007 - type: nauc_precision_at_1000_max value: 7.268255385799229 - type: nauc_precision_at_1000_std value: -0.1968875268478991 - type: nauc_precision_at_100_diff1 value: 7.386701205054449 - type: nauc_precision_at_100_max value: 15.477735603019607 - type: nauc_precision_at_100_std value: 4.753153414679307 - type: nauc_precision_at_10_diff1 value: 18.4668296945938 - type: nauc_precision_at_10_max value: 25.457144217779597 - type: nauc_precision_at_10_std value: 0.40165373733963605 - type: nauc_precision_at_1_diff1 value: 40.0281507903504 - type: nauc_precision_at_1_max value: 25.036735941806583 - type: nauc_precision_at_1_std value: -2.508700799268523 - type: nauc_precision_at_20_diff1 value: 14.751135844289335 - type: nauc_precision_at_20_max value: 22.763373329576293 - type: nauc_precision_at_20_std value: 4.360731801761864 - type: nauc_precision_at_3_diff1 value: 28.154753888265393 - type: nauc_precision_at_3_max value: 27.838427033527147 - type: nauc_precision_at_3_std value: -1.0042621266717804 - type: nauc_precision_at_5_diff1 value: 23.549026872711423 - type: nauc_precision_at_5_max value: 27.192214745385044 - type: nauc_precision_at_5_std value: 0.4455206110174471 - type: nauc_recall_at_1000_diff1 value: 17.905404210815632 - type: nauc_recall_at_1000_max value: 32.8674418535776 - type: nauc_recall_at_1000_std value: 35.187050415735435 - type: nauc_recall_at_100_diff1 value: 20.903609751984757 - type: nauc_recall_at_100_max value: 27.180306691518364 - type: nauc_recall_at_100_std value: 17.553030959393297 - type: nauc_recall_at_10_diff1 value: 25.615147693464387 - type: nauc_recall_at_10_max value: 25.97062699453565 - type: nauc_recall_at_10_std value: 2.2181702899826576 - type: nauc_recall_at_1_diff1 value: 41.759098341890976 - type: nauc_recall_at_1_max value: 23.918885427783326 - type: nauc_recall_at_1_std value: -2.1383574897865074 - type: nauc_recall_at_20_diff1 value: 23.922775940094386 - type: nauc_recall_at_20_max value: 26.384627814902785 - type: nauc_recall_at_20_std value: 7.944532403561578 - type: nauc_recall_at_3_diff1 value: 32.26543270634743 - type: nauc_recall_at_3_max value: 26.36357710828272 - type: nauc_recall_at_3_std value: -0.42723331708340706 - type: nauc_recall_at_5_diff1 value: 29.080464141763336 - type: nauc_recall_at_5_max value: 25.81238438303652 - type: nauc_recall_at_5_std value: 1.1649311168287726 - type: ndcg_at_1 value: 23.674999999999997 - type: ndcg_at_10 value: 32.842 - type: ndcg_at_100 value: 38.64 - type: ndcg_at_1000 value: 41.367 - type: ndcg_at_20 value: 35.032999999999994 - type: ndcg_at_3 value: 28.166000000000004 - type: ndcg_at_5 value: 30.407 - type: precision_at_1 value: 23.674999999999997 - type: precision_at_10 value: 6.005 - type: precision_at_100 value: 1.053 - type: precision_at_1000 value: 0.146 - type: precision_at_20 value: 3.6580000000000004 - type: precision_at_3 value: 13.352 - type: precision_at_5 value: 9.718 - type: recall_at_1 value: 19.527 - type: recall_at_10 value: 44.096999999999994 - type: recall_at_100 value: 69.962 - type: recall_at_1000 value: 89.035 - type: recall_at_20 value: 52.166000000000004 - type: recall_at_3 value: 30.946 - type: recall_at_5 value: 36.789 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval type: mteb/cqadupstack-unix config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: main_score value: 46.54 - type: map_at_1 value: 29.953999999999997 - type: map_at_10 value: 40.742 - type: map_at_100 value: 41.964 - type: map_at_1000 value: 42.059999999999995 - type: map_at_20 value: 41.426 - type: map_at_3 value: 37.378 - type: map_at_5 value: 39.267 - type: mrr_at_1 value: 34.701492537313435 - type: mrr_at_10 value: 44.29978085761664 - type: mrr_at_100 value: 45.205551401915486 - type: mrr_at_1000 value: 45.24735017384963 - type: mrr_at_20 value: 44.85338423755729 - type: mrr_at_3 value: 41.57338308457707 - type: mrr_at_5 value: 43.19185323383077 - type: nauc_map_at_1000_diff1 value: 48.45170522932164 - type: nauc_map_at_1000_max value: 31.544164363591204 - type: nauc_map_at_1000_std value: 0.8661088818146858 - type: nauc_map_at_100_diff1 value: 48.47347800061323 - type: nauc_map_at_100_max value: 31.568637596620313 - type: nauc_map_at_100_std value: 0.9252699336843858 - type: nauc_map_at_10_diff1 value: 48.64849891585432 - type: nauc_map_at_10_max value: 31.40371265579746 - type: nauc_map_at_10_std value: 0.7088016563713089 - type: nauc_map_at_1_diff1 value: 53.57918993108331 - type: nauc_map_at_1_max value: 31.392632653740993 - type: nauc_map_at_1_std value: -2.857306170463933 - type: nauc_map_at_20_diff1 value: 48.49084353023969 - type: nauc_map_at_20_max value: 31.470313174779374 - type: nauc_map_at_20_std value: 0.8950296035234309 - type: nauc_map_at_3_diff1 value: 49.273481161619806 - type: nauc_map_at_3_max value: 31.101471509782826 - type: nauc_map_at_3_std value: -0.886510096257905 - type: nauc_map_at_5_diff1 value: 48.85344288229106 - type: nauc_map_at_5_max value: 31.32633663238284 - type: nauc_map_at_5_std value: -0.44752909698881177 - type: nauc_mrr_at_1000_diff1 value: 46.27593166906613 - type: nauc_mrr_at_1000_max value: 31.637594372116336 - type: nauc_mrr_at_1000_std value: 0.8444917550670064 - type: nauc_mrr_at_100_diff1 value: 46.27161543033672 - type: nauc_mrr_at_100_max value: 31.64330655339695 - type: nauc_mrr_at_100_std value: 0.8717446416398773 - type: nauc_mrr_at_10_diff1 value: 46.100348481312864 - type: nauc_mrr_at_10_max value: 31.594271897882237 - type: nauc_mrr_at_10_std value: 0.8807168907688873 - type: nauc_mrr_at_1_diff1 value: 51.35163098909763 - type: nauc_mrr_at_1_max value: 31.99084441327899 - type: nauc_mrr_at_1_std value: -2.688594880742662 - type: nauc_mrr_at_20_diff1 value: 46.18178546174727 - type: nauc_mrr_at_20_max value: 31.639111674119448 - type: nauc_mrr_at_20_std value: 0.9855008641374622 - type: nauc_mrr_at_3_diff1 value: 46.307484835305864 - type: nauc_mrr_at_3_max value: 31.35563850804847 - type: nauc_mrr_at_3_std value: -0.3419536587707561 - type: nauc_mrr_at_5_diff1 value: 46.17646418781234 - type: nauc_mrr_at_5_max value: 31.313474270239833 - type: nauc_mrr_at_5_std value: -0.08656550526568331 - type: nauc_ndcg_at_1000_diff1 value: 46.12095795101613 - type: nauc_ndcg_at_1000_max value: 31.989083597726314 - type: nauc_ndcg_at_1000_std value: 3.2965704707660763 - type: nauc_ndcg_at_100_diff1 value: 46.05376249841318 - type: nauc_ndcg_at_100_max value: 32.39195988574972 - type: nauc_ndcg_at_100_std value: 4.518018135593347 - type: nauc_ndcg_at_10_diff1 value: 46.133631183744875 - type: nauc_ndcg_at_10_max value: 31.45358876172339 - type: nauc_ndcg_at_10_std value: 3.4254370918871055 - type: nauc_ndcg_at_1_diff1 value: 51.35163098909763 - type: nauc_ndcg_at_1_max value: 31.99084441327899 - type: nauc_ndcg_at_1_std value: -2.688594880742662 - type: nauc_ndcg_at_20_diff1 value: 45.94584949766954 - type: nauc_ndcg_at_20_max value: 31.689777515111295 - type: nauc_ndcg_at_20_std value: 4.189082428922442 - type: nauc_ndcg_at_3_diff1 value: 46.5057835389752 - type: nauc_ndcg_at_3_max value: 30.941407592082047 - type: nauc_ndcg_at_3_std value: -0.042473944857831535 - type: nauc_ndcg_at_5_diff1 value: 46.369027395136136 - type: nauc_ndcg_at_5_max value: 31.057841776505352 - type: nauc_ndcg_at_5_std value: 0.6878993420489522 - type: nauc_precision_at_1000_diff1 value: -17.30759714093202 - type: nauc_precision_at_1000_max value: -4.441155558458858 - type: nauc_precision_at_1000_std value: 1.5537300718220326 - type: nauc_precision_at_100_diff1 value: -7.18920438222021 - type: nauc_precision_at_100_max value: 8.017878121399253 - type: nauc_precision_at_100_std value: 11.357132919349102 - type: nauc_precision_at_10_diff1 value: 15.202451884794076 - type: nauc_precision_at_10_max value: 19.077295902881417 - type: nauc_precision_at_10_std value: 9.885526867355805 - type: nauc_precision_at_1_diff1 value: 51.35163098909763 - type: nauc_precision_at_1_max value: 31.99084441327899 - type: nauc_precision_at_1_std value: -2.688594880742662 - type: nauc_precision_at_20_diff1 value: 6.827461091494899 - type: nauc_precision_at_20_max value: 15.27268633497114 - type: nauc_precision_at_20_std value: 11.515826649647384 - type: nauc_precision_at_3_diff1 value: 31.043021807472027 - type: nauc_precision_at_3_max value: 26.22457157531548 - type: nauc_precision_at_3_std value: 1.788215968301994 - type: nauc_precision_at_5_diff1 value: 25.030185818513235 - type: nauc_precision_at_5_max value: 23.680129160901537 - type: nauc_precision_at_5_std value: 4.303018899688115 - type: nauc_recall_at_1000_diff1 value: 28.68826642607512 - type: nauc_recall_at_1000_max value: 42.33849804103852 - type: nauc_recall_at_1000_std value: 42.67413575876864 - type: nauc_recall_at_100_diff1 value: 36.51494878715 - type: nauc_recall_at_100_max value: 37.4764995034434 - type: nauc_recall_at_100_std value: 28.295671266661017 - type: nauc_recall_at_10_diff1 value: 39.416721111463524 - type: nauc_recall_at_10_max value: 29.95985608454179 - type: nauc_recall_at_10_std value: 12.423335839786201 - type: nauc_recall_at_1_diff1 value: 53.57918993108331 - type: nauc_recall_at_1_max value: 31.392632653740993 - type: nauc_recall_at_1_std value: -2.857306170463933 - type: nauc_recall_at_20_diff1 value: 38.228803480194046 - type: nauc_recall_at_20_max value: 30.87261362975955 - type: nauc_recall_at_20_std value: 16.977113091834095 - type: nauc_recall_at_3_diff1 value: 43.154348566653155 - type: nauc_recall_at_3_max value: 29.54536633744803 - type: nauc_recall_at_3_std value: 2.02842672250621 - type: nauc_recall_at_5_diff1 value: 41.00436246072242 - type: nauc_recall_at_5_max value: 29.413569555348023 - type: nauc_recall_at_5_std value: 3.845214021958289 - type: ndcg_at_1 value: 34.701 - type: ndcg_at_10 value: 46.54 - type: ndcg_at_100 value: 51.754999999999995 - type: ndcg_at_1000 value: 53.71 - type: ndcg_at_20 value: 48.679 - type: ndcg_at_3 value: 40.892 - type: ndcg_at_5 value: 43.595 - type: precision_at_1 value: 34.701 - type: precision_at_10 value: 8.004 - type: precision_at_100 value: 1.185 - type: precision_at_1000 value: 0.145 - type: precision_at_20 value: 4.632 - type: precision_at_3 value: 18.719 - type: precision_at_5 value: 13.245999999999999 - type: recall_at_1 value: 29.953999999999997 - type: recall_at_10 value: 60.246 - type: recall_at_100 value: 82.128 - type: recall_at_1000 value: 95.622 - type: recall_at_20 value: 67.756 - type: recall_at_3 value: 45.096000000000004 - type: recall_at_5 value: 51.9 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval type: mteb/cqadupstack-webmasters config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: main_score value: 44.718999999999994 - type: map_at_1 value: 28.383999999999997 - type: map_at_10 value: 38.422 - type: map_at_100 value: 40.058 - type: map_at_1000 value: 40.276 - type: map_at_20 value: 39.301 - type: map_at_3 value: 35.205 - type: map_at_5 value: 36.803999999999995 - type: mrr_at_1 value: 33.59683794466403 - type: mrr_at_10 value: 42.837536859275986 - type: mrr_at_100 value: 43.7501703455481 - type: mrr_at_1000 value: 43.79258407771123 - type: mrr_at_20 value: 43.36044710445095 - type: mrr_at_3 value: 40.15151515151516 - type: mrr_at_5 value: 41.74242424242425 - type: nauc_map_at_1000_diff1 value: 47.934826596875304 - type: nauc_map_at_1000_max value: 32.39759438116062 - type: nauc_map_at_1000_std value: 0.9489007346763054 - type: nauc_map_at_100_diff1 value: 47.94844822157888 - type: nauc_map_at_100_max value: 32.51485845519537 - type: nauc_map_at_100_std value: 0.8094339925545622 - type: nauc_map_at_10_diff1 value: 48.251456404874645 - type: nauc_map_at_10_max value: 31.412906399154245 - type: nauc_map_at_10_std value: -0.7024825737369933 - type: nauc_map_at_1_diff1 value: 55.81906101970174 - type: nauc_map_at_1_max value: 31.811715334193796 - type: nauc_map_at_1_std value: -6.17056859281584 - type: nauc_map_at_20_diff1 value: 47.80902650237369 - type: nauc_map_at_20_max value: 32.22465403023091 - type: nauc_map_at_20_std value: 0.20706526946705656 - type: nauc_map_at_3_diff1 value: 49.97333984346632 - type: nauc_map_at_3_max value: 31.58195498640799 - type: nauc_map_at_3_std value: -2.577539707727459 - type: nauc_map_at_5_diff1 value: 49.40005767350608 - type: nauc_map_at_5_max value: 30.998435600377434 - type: nauc_map_at_5_std value: -2.1231771618690307 - type: nauc_mrr_at_1000_diff1 value: 46.86811371969663 - type: nauc_mrr_at_1000_max value: 31.25147138171024 - type: nauc_mrr_at_1000_std value: 1.9954422477585918 - type: nauc_mrr_at_100_diff1 value: 46.855870345882195 - type: nauc_mrr_at_100_max value: 31.263524035665966 - type: nauc_mrr_at_100_std value: 2.0160751193806568 - type: nauc_mrr_at_10_diff1 value: 46.93294772825783 - type: nauc_mrr_at_10_max value: 30.927002048701663 - type: nauc_mrr_at_10_std value: 1.6538220080908224 - type: nauc_mrr_at_1_diff1 value: 52.416386548395664 - type: nauc_mrr_at_1_max value: 32.28582003787206 - type: nauc_mrr_at_1_std value: -2.154991145714492 - type: nauc_mrr_at_20_diff1 value: 46.71796185319694 - type: nauc_mrr_at_20_max value: 31.16219902794994 - type: nauc_mrr_at_20_std value: 1.8590646572728409 - type: nauc_mrr_at_3_diff1 value: 47.697100317669914 - type: nauc_mrr_at_3_max value: 30.821806030159383 - type: nauc_mrr_at_3_std value: 1.1927626358099177 - type: nauc_mrr_at_5_diff1 value: 47.065272061365704 - type: nauc_mrr_at_5_max value: 30.299230962805023 - type: nauc_mrr_at_5_std value: 1.3225842862629529 - type: nauc_ndcg_at_1000_diff1 value: 45.20612583136058 - type: nauc_ndcg_at_1000_max value: 33.51931869947315 - type: nauc_ndcg_at_1000_std value: 4.923707509620363 - type: nauc_ndcg_at_100_diff1 value: 44.76206243393775 - type: nauc_ndcg_at_100_max value: 33.57771606755598 - type: nauc_ndcg_at_100_std value: 5.30915563331338 - type: nauc_ndcg_at_10_diff1 value: 45.12714032463827 - type: nauc_ndcg_at_10_max value: 30.351909495610492 - type: nauc_ndcg_at_10_std value: 2.3972947289996873 - type: nauc_ndcg_at_1_diff1 value: 52.416386548395664 - type: nauc_ndcg_at_1_max value: 32.28582003787206 - type: nauc_ndcg_at_1_std value: -2.154991145714492 - type: nauc_ndcg_at_20_diff1 value: 44.20281844000005 - type: nauc_ndcg_at_20_max value: 32.14112739396226 - type: nauc_ndcg_at_20_std value: 3.3971385462591916 - type: nauc_ndcg_at_3_diff1 value: 47.0633767031858 - type: nauc_ndcg_at_3_max value: 31.032896053733435 - type: nauc_ndcg_at_3_std value: 0.6827544906310201 - type: nauc_ndcg_at_5_diff1 value: 46.735352294106484 - type: nauc_ndcg_at_5_max value: 29.784992270528544 - type: nauc_ndcg_at_5_std value: 0.8685943819516141 - type: nauc_precision_at_1000_diff1 value: -12.223330179860852 - type: nauc_precision_at_1000_max value: -9.266492213777273 - type: nauc_precision_at_1000_std value: 19.0569899587788 - type: nauc_precision_at_100_diff1 value: -5.803751085072067 - type: nauc_precision_at_100_max value: 3.448932057044294 - type: nauc_precision_at_100_std value: 23.470863527030627 - type: nauc_precision_at_10_diff1 value: 8.887357341361907 - type: nauc_precision_at_10_max value: 18.67165390928126 - type: nauc_precision_at_10_std value: 19.158543337955404 - type: nauc_precision_at_1_diff1 value: 52.416386548395664 - type: nauc_precision_at_1_max value: 32.28582003787206 - type: nauc_precision_at_1_std value: -2.154991145714492 - type: nauc_precision_at_20_diff1 value: 0.942496138409553 - type: nauc_precision_at_20_max value: 18.86957127610774 - type: nauc_precision_at_20_std value: 24.075503903246496 - type: nauc_precision_at_3_diff1 value: 28.15363877307106 - type: nauc_precision_at_3_max value: 27.064928137991824 - type: nauc_precision_at_3_std value: 8.632807104504753 - type: nauc_precision_at_5_diff1 value: 20.805862332497973 - type: nauc_precision_at_5_max value: 21.420201475758404 - type: nauc_precision_at_5_std value: 12.380239645425714 - type: nauc_recall_at_1000_diff1 value: 18.478341468055547 - type: nauc_recall_at_1000_max value: 56.293560115074506 - type: nauc_recall_at_1000_std value: 64.31607185065428 - type: nauc_recall_at_100_diff1 value: 26.737267337771886 - type: nauc_recall_at_100_max value: 38.011889141496326 - type: nauc_recall_at_100_std value: 30.44904690114732 - type: nauc_recall_at_10_diff1 value: 35.22772732735716 - type: nauc_recall_at_10_max value: 26.000054115159486 - type: nauc_recall_at_10_std value: 5.174264254271206 - type: nauc_recall_at_1_diff1 value: 55.81906101970174 - type: nauc_recall_at_1_max value: 31.811715334193796 - type: nauc_recall_at_1_std value: -6.17056859281584 - type: nauc_recall_at_20_diff1 value: 30.48493302415641 - type: nauc_recall_at_20_max value: 31.05487040370753 - type: nauc_recall_at_20_std value: 10.319948318834136 - type: nauc_recall_at_3_diff1 value: 43.12289512340243 - type: nauc_recall_at_3_max value: 28.176279771026135 - type: nauc_recall_at_3_std value: -0.1775154523381921 - type: nauc_recall_at_5_diff1 value: 40.9934933741234 - type: nauc_recall_at_5_max value: 25.569156290584733 - type: nauc_recall_at_5_std value: 0.21166696686855038 - type: ndcg_at_1 value: 33.597 - type: ndcg_at_10 value: 44.718999999999994 - type: ndcg_at_100 value: 50.324000000000005 - type: ndcg_at_1000 value: 52.468 - type: ndcg_at_20 value: 46.822 - type: ndcg_at_3 value: 39.558 - type: ndcg_at_5 value: 41.827999999999996 - type: precision_at_1 value: 33.597 - type: precision_at_10 value: 8.735 - type: precision_at_100 value: 1.6420000000000001 - type: precision_at_1000 value: 0.246 - type: precision_at_20 value: 5.375 - type: precision_at_3 value: 18.511 - type: precision_at_5 value: 13.399 - type: recall_at_1 value: 28.383999999999997 - type: recall_at_10 value: 56.425000000000004 - type: recall_at_100 value: 82.01899999999999 - type: recall_at_1000 value: 95.285 - type: recall_at_20 value: 64.615 - type: recall_at_3 value: 42.171 - type: recall_at_5 value: 48.296 - task: type: Retrieval dataset: name: MTEB CQADupstackWordpressRetrieval type: mteb/cqadupstack-wordpress config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: main_score value: 38.269999999999996 - type: map_at_1 value: 25.324999999999996 - type: map_at_10 value: 33.263 - type: map_at_100 value: 34.304 - type: map_at_1000 value: 34.394000000000005 - type: map_at_20 value: 33.827 - type: map_at_3 value: 30.259999999999998 - type: map_at_5 value: 31.832 - type: mrr_at_1 value: 27.171903881700555 - type: mrr_at_10 value: 35.334991051257234 - type: mrr_at_100 value: 36.251283465952355 - type: mrr_at_1000 value: 36.316236092511055 - type: mrr_at_20 value: 35.87141909945257 - type: mrr_at_3 value: 32.71719038817007 - type: mrr_at_5 value: 34.19593345656194 - type: nauc_map_at_1000_diff1 value: 39.614836211522714 - type: nauc_map_at_1000_max value: 22.019768626310192 - type: nauc_map_at_1000_std value: -1.5238708712112499 - type: nauc_map_at_100_diff1 value: 39.63008548572307 - type: nauc_map_at_100_max value: 22.044756063752345 - type: nauc_map_at_100_std value: -1.4869190221494792 - type: nauc_map_at_10_diff1 value: 39.73025012395569 - type: nauc_map_at_10_max value: 22.117710178892107 - type: nauc_map_at_10_std value: -2.5129984871932973 - type: nauc_map_at_1_diff1 value: 45.015617718902654 - type: nauc_map_at_1_max value: 19.313800263189638 - type: nauc_map_at_1_std value: -4.763931386681675 - type: nauc_map_at_20_diff1 value: 39.53678019013766 - type: nauc_map_at_20_max value: 21.880316719428258 - type: nauc_map_at_20_std value: -1.882003994523355 - type: nauc_map_at_3_diff1 value: 40.37307665298228 - type: nauc_map_at_3_max value: 20.851976075322533 - type: nauc_map_at_3_std value: -2.429569082966531 - type: nauc_map_at_5_diff1 value: 39.763015635086 - type: nauc_map_at_5_max value: 22.010102196900725 - type: nauc_map_at_5_std value: -2.654896415670943 - type: nauc_mrr_at_1000_diff1 value: 39.74071733680025 - type: nauc_mrr_at_1000_max value: 21.67309640681989 - type: nauc_mrr_at_1000_std value: -1.4003373135477462 - type: nauc_mrr_at_100_diff1 value: 39.730614151966485 - type: nauc_mrr_at_100_max value: 21.678390048971767 - type: nauc_mrr_at_100_std value: -1.3655362623563931 - type: nauc_mrr_at_10_diff1 value: 39.7900031013241 - type: nauc_mrr_at_10_max value: 21.73643491725051 - type: nauc_mrr_at_10_std value: -2.1175389838696312 - type: nauc_mrr_at_1_diff1 value: 46.165736140679776 - type: nauc_mrr_at_1_max value: 20.071083446822147 - type: nauc_mrr_at_1_std value: -5.018909100858311 - type: nauc_mrr_at_20_diff1 value: 39.6371295762885 - type: nauc_mrr_at_20_max value: 21.659557440270973 - type: nauc_mrr_at_20_std value: -1.4909603958341686 - type: nauc_mrr_at_3_diff1 value: 40.351150322758876 - type: nauc_mrr_at_3_max value: 20.83706249041544 - type: nauc_mrr_at_3_std value: -1.956027373253151 - type: nauc_mrr_at_5_diff1 value: 39.57759107791911 - type: nauc_mrr_at_5_max value: 21.79552045204151 - type: nauc_mrr_at_5_std value: -2.1507013120951126 - type: nauc_ndcg_at_1000_diff1 value: 37.717619356839016 - type: nauc_ndcg_at_1000_max value: 22.545375504379805 - type: nauc_ndcg_at_1000_std value: 1.682348628141016 - type: nauc_ndcg_at_100_diff1 value: 37.656027803682626 - type: nauc_ndcg_at_100_max value: 22.49278246383637 - type: nauc_ndcg_at_100_std value: 2.6818118152357773 - type: nauc_ndcg_at_10_diff1 value: 37.834954205539766 - type: nauc_ndcg_at_10_max value: 22.655839885558443 - type: nauc_ndcg_at_10_std value: -1.97159619786231 - type: nauc_ndcg_at_1_diff1 value: 46.165736140679776 - type: nauc_ndcg_at_1_max value: 20.071083446822147 - type: nauc_ndcg_at_1_std value: -5.018909100858311 - type: nauc_ndcg_at_20_diff1 value: 37.171914857454304 - type: nauc_ndcg_at_20_max value: 21.858904801745897 - type: nauc_ndcg_at_20_std value: 0.3809854859496657 - type: nauc_ndcg_at_3_diff1 value: 38.4460623883955 - type: nauc_ndcg_at_3_max value: 20.95244159463402 - type: nauc_ndcg_at_3_std value: -1.2685011660086651 - type: nauc_ndcg_at_5_diff1 value: 37.48831054573054 - type: nauc_ndcg_at_5_max value: 22.625921624640526 - type: nauc_ndcg_at_5_std value: -2.049221092724925 - type: nauc_precision_at_1000_diff1 value: -19.120500628263994 - type: nauc_precision_at_1000_max value: -6.650707109047473 - type: nauc_precision_at_1000_std value: 15.71193179253002 - type: nauc_precision_at_100_diff1 value: 6.254606806876069 - type: nauc_precision_at_100_max value: 14.601826922181823 - type: nauc_precision_at_100_std value: 28.38299592246453 - type: nauc_precision_at_10_diff1 value: 22.978614338670816 - type: nauc_precision_at_10_max value: 23.04146766323557 - type: nauc_precision_at_10_std value: 6.226264308612577 - type: nauc_precision_at_1_diff1 value: 46.165736140679776 - type: nauc_precision_at_1_max value: 20.071083446822147 - type: nauc_precision_at_1_std value: -5.018909100858311 - type: nauc_precision_at_20_diff1 value: 17.681032853225602 - type: nauc_precision_at_20_max value: 18.66680304585122 - type: nauc_precision_at_20_std value: 15.34896796713905 - type: nauc_precision_at_3_diff1 value: 31.359396694559194 - type: nauc_precision_at_3_max value: 22.279263308973274 - type: nauc_precision_at_3_std value: 3.6302537979529035 - type: nauc_precision_at_5_diff1 value: 26.32257879892933 - type: nauc_precision_at_5_max value: 25.402524493181026 - type: nauc_precision_at_5_std value: 4.731450603747359 - type: nauc_recall_at_1000_diff1 value: 23.562925244967875 - type: nauc_recall_at_1000_max value: 30.737399333586797 - type: nauc_recall_at_1000_std value: 34.19418935008663 - type: nauc_recall_at_100_diff1 value: 28.703574970574824 - type: nauc_recall_at_100_max value: 22.448663600170278 - type: nauc_recall_at_100_std value: 24.53297349042035 - type: nauc_recall_at_10_diff1 value: 31.73603907811882 - type: nauc_recall_at_10_max value: 23.453183748640765 - type: nauc_recall_at_10_std value: -1.8279054407176274 - type: nauc_recall_at_1_diff1 value: 45.015617718902654 - type: nauc_recall_at_1_max value: 19.313800263189638 - type: nauc_recall_at_1_std value: -4.763931386681675 - type: nauc_recall_at_20_diff1 value: 28.74169081866096 - type: nauc_recall_at_20_max value: 20.035509169577324 - type: nauc_recall_at_20_std value: 7.371615811227748 - type: nauc_recall_at_3_diff1 value: 34.09890157333362 - type: nauc_recall_at_3_max value: 20.46565842748346 - type: nauc_recall_at_3_std value: -0.4337283067447526 - type: nauc_recall_at_5_diff1 value: 30.974580787842402 - type: nauc_recall_at_5_max value: 23.76379349487105 - type: nauc_recall_at_5_std value: -1.8407515927979428 - type: ndcg_at_1 value: 27.172 - type: ndcg_at_10 value: 38.269999999999996 - type: ndcg_at_100 value: 43.338 - type: ndcg_at_1000 value: 45.594 - type: ndcg_at_20 value: 40.256 - type: ndcg_at_3 value: 32.673 - type: ndcg_at_5 value: 35.224 - type: precision_at_1 value: 27.172 - type: precision_at_10 value: 6.063000000000001 - type: precision_at_100 value: 0.9259999999999999 - type: precision_at_1000 value: 0.123 - type: precision_at_20 value: 3.5029999999999997 - type: precision_at_3 value: 13.74 - type: precision_at_5 value: 9.797 - type: recall_at_1 value: 25.324999999999996 - type: recall_at_10 value: 51.634 - type: recall_at_100 value: 74.687 - type: recall_at_1000 value: 91.412 - type: recall_at_20 value: 59.207 - type: recall_at_3 value: 36.678 - type: recall_at_5 value: 42.742999999999995 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: main_score value: 36.853 - type: map_at_1 value: 15.371000000000002 - type: map_at_10 value: 27.122 - type: map_at_100 value: 29.226000000000003 - type: map_at_1000 value: 29.409999999999997 - type: map_at_20 value: 28.274 - type: map_at_3 value: 22.431 - type: map_at_5 value: 24.877 - type: mrr_at_1 value: 34.13680781758958 - type: mrr_at_10 value: 47.265911793599145 - type: mrr_at_100 value: 48.028369995763846 - type: mrr_at_1000 value: 48.05317022537804 - type: mrr_at_20 value: 47.75785292259516 - type: mrr_at_3 value: 43.887079261672156 - type: mrr_at_5 value: 45.906623235613544 - type: nauc_map_at_1000_diff1 value: 24.949211292921547 - type: nauc_map_at_1000_max value: 38.69844483304584 - type: nauc_map_at_1000_std value: 18.336359440844753 - type: nauc_map_at_100_diff1 value: 24.8951732982492 - type: nauc_map_at_100_max value: 38.65049158594052 - type: nauc_map_at_100_std value: 18.28935278388095 - type: nauc_map_at_10_diff1 value: 24.606032216798273 - type: nauc_map_at_10_max value: 38.00608351559887 - type: nauc_map_at_10_std value: 16.61261615173358 - type: nauc_map_at_1_diff1 value: 30.83614944448221 - type: nauc_map_at_1_max value: 33.757528532809 - type: nauc_map_at_1_std value: 8.880622713261126 - type: nauc_map_at_20_diff1 value: 24.75491310922017 - type: nauc_map_at_20_max value: 38.353679076398834 - type: nauc_map_at_20_std value: 17.58637493443171 - type: nauc_map_at_3_diff1 value: 25.563085273287083 - type: nauc_map_at_3_max value: 35.14515679047155 - type: nauc_map_at_3_std value: 11.75594869817732 - type: nauc_map_at_5_diff1 value: 24.815807517691614 - type: nauc_map_at_5_max value: 36.25905426665983 - type: nauc_map_at_5_std value: 14.516391726180697 - type: nauc_mrr_at_1000_diff1 value: 27.948233427121274 - type: nauc_mrr_at_1000_max value: 37.5893640945859 - type: nauc_mrr_at_1000_std value: 19.588442449629763 - type: nauc_mrr_at_100_diff1 value: 27.947962345854037 - type: nauc_mrr_at_100_max value: 37.60375479481945 - type: nauc_mrr_at_100_std value: 19.614791576283793 - type: nauc_mrr_at_10_diff1 value: 27.882311310262136 - type: nauc_mrr_at_10_max value: 37.58580968074054 - type: nauc_mrr_at_10_std value: 19.49875186170201 - type: nauc_mrr_at_1_diff1 value: 28.017413073648477 - type: nauc_mrr_at_1_max value: 32.87710191514022 - type: nauc_mrr_at_1_std value: 14.04889142608459 - type: nauc_mrr_at_20_diff1 value: 27.89129925771968 - type: nauc_mrr_at_20_max value: 37.6142863106945 - type: nauc_mrr_at_20_std value: 19.645390143394163 - type: nauc_mrr_at_3_diff1 value: 27.99609559690795 - type: nauc_mrr_at_3_max value: 36.87362332456197 - type: nauc_mrr_at_3_std value: 18.598416821915333 - type: nauc_mrr_at_5_diff1 value: 27.68306089976716 - type: nauc_mrr_at_5_max value: 37.12264485659723 - type: nauc_mrr_at_5_std value: 19.18875305730564 - type: nauc_ndcg_at_1000_diff1 value: 25.736779186453777 - type: nauc_ndcg_at_1000_max value: 41.93281139456004 - type: nauc_ndcg_at_1000_std value: 25.179038422659993 - type: nauc_ndcg_at_100_diff1 value: 25.144796623848322 - type: nauc_ndcg_at_100_max value: 41.72820916876173 - type: nauc_ndcg_at_100_std value: 25.12851686850754 - type: nauc_ndcg_at_10_diff1 value: 24.321249191226652 - type: nauc_ndcg_at_10_max value: 40.23711916935706 - type: nauc_ndcg_at_10_std value: 20.89060972334557 - type: nauc_ndcg_at_1_diff1 value: 28.017413073648477 - type: nauc_ndcg_at_1_max value: 32.87710191514022 - type: nauc_ndcg_at_1_std value: 14.04889142608459 - type: nauc_ndcg_at_20_diff1 value: 24.5090484877482 - type: nauc_ndcg_at_20_max value: 40.752854032983606 - type: nauc_ndcg_at_20_std value: 22.70331074781384 - type: nauc_ndcg_at_3_diff1 value: 25.13499057756147 - type: nauc_ndcg_at_3_max value: 35.8325682137567 - type: nauc_ndcg_at_3_std value: 15.23768392706637 - type: nauc_ndcg_at_5_diff1 value: 24.614105695451116 - type: nauc_ndcg_at_5_max value: 37.68089587624492 - type: nauc_ndcg_at_5_std value: 17.946406099261708 - type: nauc_precision_at_1000_diff1 value: -2.022340544774227 - type: nauc_precision_at_1000_max value: 6.070578645067797 - type: nauc_precision_at_1000_std value: 22.15132728777549 - type: nauc_precision_at_100_diff1 value: 4.544144474504255 - type: nauc_precision_at_100_max value: 19.780392159848574 - type: nauc_precision_at_100_std value: 31.107111186002438 - type: nauc_precision_at_10_diff1 value: 10.107015022955848 - type: nauc_precision_at_10_max value: 30.779709099060465 - type: nauc_precision_at_10_std value: 27.324148451668602 - type: nauc_precision_at_1_diff1 value: 28.017413073648477 - type: nauc_precision_at_1_max value: 32.87710191514022 - type: nauc_precision_at_1_std value: 14.04889142608459 - type: nauc_precision_at_20_diff1 value: 8.270881053079405 - type: nauc_precision_at_20_max value: 27.26753946078481 - type: nauc_precision_at_20_std value: 29.156725822074204 - type: nauc_precision_at_3_diff1 value: 17.82468940497632 - type: nauc_precision_at_3_max value: 31.490021174215155 - type: nauc_precision_at_3_std value: 18.73818985054394 - type: nauc_precision_at_5_diff1 value: 13.24803141673961 - type: nauc_precision_at_5_max value: 29.94926240784298 - type: nauc_precision_at_5_std value: 23.2940906142919 - type: nauc_recall_at_1000_diff1 value: 19.09850333580471 - type: nauc_recall_at_1000_max value: 46.026306142840596 - type: nauc_recall_at_1000_std value: 46.50391519568263 - type: nauc_recall_at_100_diff1 value: 16.739384224869738 - type: nauc_recall_at_100_max value: 40.68987136431252 - type: nauc_recall_at_100_std value: 36.01609750485591 - type: nauc_recall_at_10_diff1 value: 17.51796617221814 - type: nauc_recall_at_10_max value: 39.47453129444401 - type: nauc_recall_at_10_std value: 23.79239002974899 - type: nauc_recall_at_1_diff1 value: 30.83614944448221 - type: nauc_recall_at_1_max value: 33.757528532809 - type: nauc_recall_at_1_std value: 8.880622713261126 - type: nauc_recall_at_20_diff1 value: 16.978668307251652 - type: nauc_recall_at_20_max value: 39.09115357303713 - type: nauc_recall_at_20_std value: 27.278668534187524 - type: nauc_recall_at_3_diff1 value: 22.55937738994021 - type: nauc_recall_at_3_max value: 36.25055459395638 - type: nauc_recall_at_3_std value: 14.828905168761247 - type: nauc_recall_at_5_diff1 value: 19.32656748627199 - type: nauc_recall_at_5_max value: 36.28836228620816 - type: nauc_recall_at_5_std value: 19.264352933914278 - type: ndcg_at_1 value: 34.137 - type: ndcg_at_10 value: 36.853 - type: ndcg_at_100 value: 44.279 - type: ndcg_at_1000 value: 47.336 - type: ndcg_at_20 value: 39.815 - type: ndcg_at_3 value: 30.253999999999998 - type: ndcg_at_5 value: 32.649 - type: precision_at_1 value: 34.137 - type: precision_at_10 value: 11.655 - type: precision_at_100 value: 1.9619999999999997 - type: precision_at_1000 value: 0.254 - type: precision_at_20 value: 7.1209999999999996 - type: precision_at_3 value: 22.823 - type: precision_at_5 value: 17.655 - type: recall_at_1 value: 15.371000000000002 - type: recall_at_10 value: 43.718 - type: recall_at_100 value: 68.81 - type: recall_at_1000 value: 85.69600000000001 - type: recall_at_20 value: 51.94 - type: recall_at_3 value: 27.694000000000003 - type: recall_at_5 value: 34.469 - task: type: Retrieval dataset: name: MTEB DBPedia type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: main_score value: 45.553 - type: map_at_1 value: 9.168999999999999 - type: map_at_10 value: 22.154 - type: map_at_100 value: 32.174 - type: map_at_1000 value: 33.974 - type: map_at_20 value: 25.899 - type: map_at_3 value: 15.275 - type: map_at_5 value: 18.291 - type: mrr_at_1 value: 70.75 - type: mrr_at_10 value: 78.39662698412697 - type: mrr_at_100 value: 78.56221458977012 - type: mrr_at_1000 value: 78.56669970642338 - type: mrr_at_20 value: 78.49688805346696 - type: mrr_at_3 value: 76.33333333333333 - type: mrr_at_5 value: 77.70833333333333 - type: nauc_map_at_1000_diff1 value: 18.465085922071346 - type: nauc_map_at_1000_max value: 24.29804638788498 - type: nauc_map_at_1000_std value: 22.380463943423514 - type: nauc_map_at_100_diff1 value: 19.37585410674523 - type: nauc_map_at_100_max value: 22.56424042509462 - type: nauc_map_at_100_std value: 19.672237275984426 - type: nauc_map_at_10_diff1 value: 23.597788166305577 - type: nauc_map_at_10_max value: 9.157316105122925 - type: nauc_map_at_10_std value: -3.8881247055786807 - type: nauc_map_at_1_diff1 value: 43.96699602275052 - type: nauc_map_at_1_max value: -0.7577088440873263 - type: nauc_map_at_1_std value: -17.732463891968404 - type: nauc_map_at_20_diff1 value: 22.326759054850097 - type: nauc_map_at_20_max value: 14.879191412167703 - type: nauc_map_at_20_std value: 5.405751236575241 - type: nauc_map_at_3_diff1 value: 28.73583545428074 - type: nauc_map_at_3_max value: 1.5986597211018239 - type: nauc_map_at_3_std value: -16.512455883681515 - type: nauc_map_at_5_diff1 value: 25.401810959155057 - type: nauc_map_at_5_max value: 4.418875376978587 - type: nauc_map_at_5_std value: -12.296750992013052 - type: nauc_mrr_at_1000_diff1 value: 51.228801807498584 - type: nauc_mrr_at_1000_max value: 61.040998883279585 - type: nauc_mrr_at_1000_std value: 40.93983887257123 - type: nauc_mrr_at_100_diff1 value: 51.23715338435314 - type: nauc_mrr_at_100_max value: 61.03971408781317 - type: nauc_mrr_at_100_std value: 40.91796923590573 - type: nauc_mrr_at_10_diff1 value: 51.1214868552331 - type: nauc_mrr_at_10_max value: 61.03069045590881 - type: nauc_mrr_at_10_std value: 40.661621199704264 - type: nauc_mrr_at_1_diff1 value: 50.84660003035892 - type: nauc_mrr_at_1_max value: 60.692091499960895 - type: nauc_mrr_at_1_std value: 42.126228731502955 - type: nauc_mrr_at_20_diff1 value: 51.0402624284872 - type: nauc_mrr_at_20_max value: 60.94577844338166 - type: nauc_mrr_at_20_std value: 40.89505950503613 - type: nauc_mrr_at_3_diff1 value: 51.771113665996516 - type: nauc_mrr_at_3_max value: 61.65264793077224 - type: nauc_mrr_at_3_std value: 41.75781827057092 - type: nauc_mrr_at_5_diff1 value: 51.0656793772882 - type: nauc_mrr_at_5_max value: 61.08042065139715 - type: nauc_mrr_at_5_std value: 41.11203271084835 - type: nauc_ndcg_at_1000_diff1 value: 22.347978262245107 - type: nauc_ndcg_at_1000_max value: 36.56458763955002 - type: nauc_ndcg_at_1000_std value: 35.99616144258822 - type: nauc_ndcg_at_100_diff1 value: 23.1120990977162 - type: nauc_ndcg_at_100_max value: 30.79663306311657 - type: nauc_ndcg_at_100_std value: 27.387572106784297 - type: nauc_ndcg_at_10_diff1 value: 23.329746066899656 - type: nauc_ndcg_at_10_max value: 28.69246947084685 - type: nauc_ndcg_at_10_std value: 21.457736188325345 - type: nauc_ndcg_at_1_diff1 value: 39.99399153456974 - type: nauc_ndcg_at_1_max value: 38.12447856470389 - type: nauc_ndcg_at_1_std value: 27.768869260384676 - type: nauc_ndcg_at_20_diff1 value: 24.945374175339907 - type: nauc_ndcg_at_20_max value: 27.67836982165295 - type: nauc_ndcg_at_20_std value: 19.7933631060578 - type: nauc_ndcg_at_3_diff1 value: 26.063492354398527 - type: nauc_ndcg_at_3_max value: 33.06541959550656 - type: nauc_ndcg_at_3_std value: 23.278902797288726 - type: nauc_ndcg_at_5_diff1 value: 22.521596060750035 - type: nauc_ndcg_at_5_max value: 31.210005673730784 - type: nauc_ndcg_at_5_std value: 22.893106456317927 - type: nauc_precision_at_1000_diff1 value: -19.845356495096006 - type: nauc_precision_at_1000_max value: 4.163819381816099 - type: nauc_precision_at_1000_std value: 7.612952884590339 - type: nauc_precision_at_100_diff1 value: -8.2679285153361 - type: nauc_precision_at_100_max value: 29.78018175573565 - type: nauc_precision_at_100_std value: 41.07244463956215 - type: nauc_precision_at_10_diff1 value: -3.2451428407349057 - type: nauc_precision_at_10_max value: 36.92563008274906 - type: nauc_precision_at_10_std value: 45.06962043489777 - type: nauc_precision_at_1_diff1 value: 50.84660003035892 - type: nauc_precision_at_1_max value: 60.692091499960895 - type: nauc_precision_at_1_std value: 42.126228731502955 - type: nauc_precision_at_20_diff1 value: -3.432279149061878 - type: nauc_precision_at_20_max value: 37.013592483974875 - type: nauc_precision_at_20_std value: 46.47324739428665 - type: nauc_precision_at_3_diff1 value: 7.28495481051025 - type: nauc_precision_at_3_max value: 38.66372411741402 - type: nauc_precision_at_3_std value: 35.23163993723955 - type: nauc_precision_at_5_diff1 value: -0.16540230063716202 - type: nauc_precision_at_5_max value: 37.322494255721715 - type: nauc_precision_at_5_std value: 39.666653561269754 - type: nauc_recall_at_1000_diff1 value: 11.388326469283681 - type: nauc_recall_at_1000_max value: 32.698146308591674 - type: nauc_recall_at_1000_std value: 49.48830488070777 - type: nauc_recall_at_100_diff1 value: 11.497443532756819 - type: nauc_recall_at_100_max value: 20.196970431621615 - type: nauc_recall_at_100_std value: 23.688772100803433 - type: nauc_recall_at_10_diff1 value: 16.519851398596003 - type: nauc_recall_at_10_max value: 0.774066845071221 - type: nauc_recall_at_10_std value: -10.89514647001814 - type: nauc_recall_at_1_diff1 value: 43.96699602275052 - type: nauc_recall_at_1_max value: -0.7577088440873263 - type: nauc_recall_at_1_std value: -17.732463891968404 - type: nauc_recall_at_20_diff1 value: 15.202960269878258 - type: nauc_recall_at_20_max value: 7.067263295590253 - type: nauc_recall_at_20_std value: -0.06050108222640702 - type: nauc_recall_at_3_diff1 value: 24.066741361525125 - type: nauc_recall_at_3_max value: -2.1961525860488424 - type: nauc_recall_at_3_std value: -19.48307077749568 - type: nauc_recall_at_5_diff1 value: 20.086330794102707 - type: nauc_recall_at_5_max value: -0.8866528062747986 - type: nauc_recall_at_5_std value: -16.53799173962747 - type: ndcg_at_1 value: 57.99999999999999 - type: ndcg_at_10 value: 45.553 - type: ndcg_at_100 value: 51.014 - type: ndcg_at_1000 value: 58.226 - type: ndcg_at_20 value: 44.98 - type: ndcg_at_3 value: 48.981 - type: ndcg_at_5 value: 46.794999999999995 - type: precision_at_1 value: 70.75 - type: precision_at_10 value: 36.85 - type: precision_at_100 value: 11.955 - type: precision_at_1000 value: 2.247 - type: precision_at_20 value: 28.075 - type: precision_at_3 value: 52.666999999999994 - type: precision_at_5 value: 45.85 - type: recall_at_1 value: 9.168999999999999 - type: recall_at_10 value: 28.796 - type: recall_at_100 value: 58.892999999999994 - type: recall_at_1000 value: 81.644 - type: recall_at_20 value: 36.659000000000006 - type: recall_at_3 value: 16.709 - type: recall_at_5 value: 21.387 - task: type: Retrieval dataset: name: MTEB FEVER type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: main_score value: 88.41 - type: map_at_1 value: 75.637 - type: map_at_10 value: 84.674 - type: map_at_100 value: 84.909 - type: map_at_1000 value: 84.92 - type: map_at_20 value: 84.836 - type: map_at_3 value: 83.44200000000001 - type: map_at_5 value: 84.28099999999999 - type: mrr_at_1 value: 81.56315631563157 - type: mrr_at_10 value: 88.89571695264748 - type: mrr_at_100 value: 88.93671417216285 - type: mrr_at_1000 value: 88.93708016011664 - type: mrr_at_20 value: 88.9311652665256 - type: mrr_at_3 value: 88.20882088208805 - type: mrr_at_5 value: 88.72937293729349 - type: nauc_map_at_1000_diff1 value: 54.41216035074026 - type: nauc_map_at_1000_max value: 13.346153003554361 - type: nauc_map_at_1000_std value: -6.721664416152164 - type: nauc_map_at_100_diff1 value: 54.36538350995795 - type: nauc_map_at_100_max value: 13.355583381471298 - type: nauc_map_at_100_std value: -6.696921015641016 - type: nauc_map_at_10_diff1 value: 54.0389127730555 - type: nauc_map_at_10_max value: 13.387802159150663 - type: nauc_map_at_10_std value: -6.73514381731833 - type: nauc_map_at_1_diff1 value: 57.99489574836453 - type: nauc_map_at_1_max value: 7.830032589171654 - type: nauc_map_at_1_std value: -10.140208285080295 - type: nauc_map_at_20_diff1 value: 54.16841004736076 - type: nauc_map_at_20_max value: 13.345607363689746 - type: nauc_map_at_20_std value: -6.663119775158465 - type: nauc_map_at_3_diff1 value: 53.82879543599303 - type: nauc_map_at_3_max value: 12.716952288433902 - type: nauc_map_at_3_std value: -7.746102082835598 - type: nauc_map_at_5_diff1 value: 53.82838395350109 - type: nauc_map_at_5_max value: 13.487373534211702 - type: nauc_map_at_5_std value: -6.869504398693434 - type: nauc_mrr_at_1000_diff1 value: 68.92783546581906 - type: nauc_mrr_at_1000_max value: 12.076297180596592 - type: nauc_mrr_at_1000_std value: -13.306257067567998 - type: nauc_mrr_at_100_diff1 value: 68.92780219775517 - type: nauc_mrr_at_100_max value: 12.078449805054374 - type: nauc_mrr_at_100_std value: -13.303524852703719 - type: nauc_mrr_at_10_diff1 value: 68.92686206881258 - type: nauc_mrr_at_10_max value: 12.273295656884873 - type: nauc_mrr_at_10_std value: -13.222483496603965 - type: nauc_mrr_at_1_diff1 value: 70.1738022073041 - type: nauc_mrr_at_1_max value: 9.378639533482806 - type: nauc_mrr_at_1_std value: -13.444033823202348 - type: nauc_mrr_at_20_diff1 value: 68.91161304905303 - type: nauc_mrr_at_20_max value: 12.117091514817885 - type: nauc_mrr_at_20_std value: -13.258261750160239 - type: nauc_mrr_at_3_diff1 value: 68.61982455945467 - type: nauc_mrr_at_3_max value: 12.608213879734578 - type: nauc_mrr_at_3_std value: -13.558003431587839 - type: nauc_mrr_at_5_diff1 value: 68.81439097457242 - type: nauc_mrr_at_5_max value: 12.54025598903624 - type: nauc_mrr_at_5_std value: -13.199231514972093 - type: nauc_ndcg_at_1000_diff1 value: 56.47563443877495 - type: nauc_ndcg_at_1000_max value: 14.508331783439466 - type: nauc_ndcg_at_1000_std value: -6.206829736668775 - type: nauc_ndcg_at_100_diff1 value: 55.54015515673474 - type: nauc_ndcg_at_100_max value: 14.753595778278136 - type: nauc_ndcg_at_100_std value: -5.638517949568802 - type: nauc_ndcg_at_10_diff1 value: 54.220845223257996 - type: nauc_ndcg_at_10_max value: 15.265309648490021 - type: nauc_ndcg_at_10_std value: -5.516276098929109 - type: nauc_ndcg_at_1_diff1 value: 70.1738022073041 - type: nauc_ndcg_at_1_max value: 9.378639533482806 - type: nauc_ndcg_at_1_std value: -13.444033823202348 - type: nauc_ndcg_at_20_diff1 value: 54.481406100854635 - type: nauc_ndcg_at_20_max value: 14.868763583210498 - type: nauc_ndcg_at_20_std value: -5.328097380018734 - type: nauc_ndcg_at_3_diff1 value: 54.94411725607744 - type: nauc_ndcg_at_3_max value: 14.27186734506607 - type: nauc_ndcg_at_3_std value: -7.894724962312474 - type: nauc_ndcg_at_5_diff1 value: 54.08048166974806 - type: nauc_ndcg_at_5_max value: 15.528233170721006 - type: nauc_ndcg_at_5_std value: -5.984768714537104 - type: nauc_precision_at_1000_diff1 value: -8.744323640074445 - type: nauc_precision_at_1000_max value: -0.01881224392053465 - type: nauc_precision_at_1000_std value: 3.8721477979260635 - type: nauc_precision_at_100_diff1 value: -11.86150156952171 - type: nauc_precision_at_100_max value: 3.2736651314552314 - type: nauc_precision_at_100_std value: 8.12687620615509 - type: nauc_precision_at_10_diff1 value: -10.360708676781178 - type: nauc_precision_at_10_max value: 10.945552490433458 - type: nauc_precision_at_10_std value: 11.016707653014485 - type: nauc_precision_at_1_diff1 value: 70.1738022073041 - type: nauc_precision_at_1_max value: 9.378639533482806 - type: nauc_precision_at_1_std value: -13.444033823202348 - type: nauc_precision_at_20_diff1 value: -13.557721925696583 - type: nauc_precision_at_20_max value: 6.331386521718574 - type: nauc_precision_at_20_std value: 10.322188778142388 - type: nauc_precision_at_3_diff1 value: 15.139456770248968 - type: nauc_precision_at_3_max value: 17.10220985600708 - type: nauc_precision_at_3_std value: 3.0448183682558074 - type: nauc_precision_at_5_diff1 value: -1.9825577548111102 - type: nauc_precision_at_5_max value: 17.139148127012625 - type: nauc_precision_at_5_std value: 10.598435750554753 - type: nauc_recall_at_1000_diff1 value: 15.641740744283005 - type: nauc_recall_at_1000_max value: 44.65315702195612 - type: nauc_recall_at_1000_std value: 52.34265862835513 - type: nauc_recall_at_100_diff1 value: 5.254385435323394 - type: nauc_recall_at_100_max value: 38.53577774395794 - type: nauc_recall_at_100_std value: 43.47744274335829 - type: nauc_recall_at_10_diff1 value: 19.135735476268042 - type: nauc_recall_at_10_max value: 30.05417445923848 - type: nauc_recall_at_10_std value: 18.3988023241141 - type: nauc_recall_at_1_diff1 value: 57.99489574836453 - type: nauc_recall_at_1_max value: 7.830032589171654 - type: nauc_recall_at_1_std value: -10.140208285080295 - type: nauc_recall_at_20_diff1 value: 9.444797759735126 - type: nauc_recall_at_20_max value: 31.001311675371017 - type: nauc_recall_at_20_std value: 29.351418893822178 - type: nauc_recall_at_3_diff1 value: 36.88862653262064 - type: nauc_recall_at_3_max value: 19.845892741607823 - type: nauc_recall_at_3_std value: -1.0584273105890794 - type: nauc_recall_at_5_diff1 value: 27.360718561944974 - type: nauc_recall_at_5_max value: 26.698311215441738 - type: nauc_recall_at_5_std value: 8.97113997755362 - type: ndcg_at_1 value: 81.563 - type: ndcg_at_10 value: 88.41 - type: ndcg_at_100 value: 89.101 - type: ndcg_at_1000 value: 89.25800000000001 - type: ndcg_at_20 value: 88.79 - type: ndcg_at_3 value: 86.599 - type: ndcg_at_5 value: 87.74 - type: precision_at_1 value: 81.563 - type: precision_at_10 value: 10.699 - type: precision_at_100 value: 1.13 - type: precision_at_1000 value: 0.116 - type: precision_at_20 value: 5.479 - type: precision_at_3 value: 33.238 - type: precision_at_5 value: 20.744 - type: recall_at_1 value: 75.637 - type: recall_at_10 value: 95.57600000000001 - type: recall_at_100 value: 98.072 - type: recall_at_1000 value: 98.951 - type: recall_at_20 value: 96.792 - type: recall_at_3 value: 90.79599999999999 - type: recall_at_5 value: 93.674 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: main_score value: 42.396 - type: map_at_1 value: 21.711 - type: map_at_10 value: 34.628 - type: map_at_100 value: 36.549 - type: map_at_1000 value: 36.719 - type: map_at_20 value: 35.673 - type: map_at_3 value: 30.585 - type: map_at_5 value: 32.875 - type: mrr_at_1 value: 41.82098765432099 - type: mrr_at_10 value: 50.69505682931607 - type: mrr_at_100 value: 51.50556608727901 - type: mrr_at_1000 value: 51.53870583208304 - type: mrr_at_20 value: 51.15345764364655 - type: mrr_at_3 value: 48.35390946502059 - type: mrr_at_5 value: 49.87397119341563 - type: nauc_map_at_1000_diff1 value: 45.182252919583895 - type: nauc_map_at_1000_max value: 35.66124930024801 - type: nauc_map_at_1000_std value: -0.6925562638650965 - type: nauc_map_at_100_diff1 value: 45.116964706960125 - type: nauc_map_at_100_max value: 35.54990469525889 - type: nauc_map_at_100_std value: -0.6667263852859368 - type: nauc_map_at_10_diff1 value: 45.39189096228184 - type: nauc_map_at_10_max value: 34.780111261901 - type: nauc_map_at_10_std value: -1.8169859294150819 - type: nauc_map_at_1_diff1 value: 47.72764937952259 - type: nauc_map_at_1_max value: 24.83306559709341 - type: nauc_map_at_1_std value: -4.714128457297418 - type: nauc_map_at_20_diff1 value: 45.17073365898278 - type: nauc_map_at_20_max value: 35.0938403469058 - type: nauc_map_at_20_std value: -1.373412631183604 - type: nauc_map_at_3_diff1 value: 46.525724305731295 - type: nauc_map_at_3_max value: 31.042538866512597 - type: nauc_map_at_3_std value: -4.119355935975354 - type: nauc_map_at_5_diff1 value: 45.79569633383187 - type: nauc_map_at_5_max value: 32.88779656647293 - type: nauc_map_at_5_std value: -3.2518474739335312 - type: nauc_mrr_at_1000_diff1 value: 52.83619185487903 - type: nauc_mrr_at_1000_max value: 42.30310720405186 - type: nauc_mrr_at_1000_std value: -1.1487703348518024 - type: nauc_mrr_at_100_diff1 value: 52.82248853996664 - type: nauc_mrr_at_100_max value: 42.30549701564678 - type: nauc_mrr_at_100_std value: -1.1240113031894834 - type: nauc_mrr_at_10_diff1 value: 52.74644276642243 - type: nauc_mrr_at_10_max value: 42.39103029476398 - type: nauc_mrr_at_10_std value: -1.1043413237848576 - type: nauc_mrr_at_1_diff1 value: 54.810335521617326 - type: nauc_mrr_at_1_max value: 40.733260207843394 - type: nauc_mrr_at_1_std value: -4.452554921565855 - type: nauc_mrr_at_20_diff1 value: 52.788257862499954 - type: nauc_mrr_at_20_max value: 42.32658875363406 - type: nauc_mrr_at_20_std value: -1.2209728080684497 - type: nauc_mrr_at_3_diff1 value: 53.43281175319808 - type: nauc_mrr_at_3_max value: 41.735942650867926 - type: nauc_mrr_at_3_std value: -2.462688102468019 - type: nauc_mrr_at_5_diff1 value: 52.874037126566606 - type: nauc_mrr_at_5_max value: 41.93740449458822 - type: nauc_mrr_at_5_std value: -1.2928874908441947 - type: nauc_ndcg_at_1000_diff1 value: 46.5532425476402 - type: nauc_ndcg_at_1000_max value: 40.369611603370515 - type: nauc_ndcg_at_1000_std value: 3.472567588386994 - type: nauc_ndcg_at_100_diff1 value: 45.75244404695404 - type: nauc_ndcg_at_100_max value: 39.36470550675439 - type: nauc_ndcg_at_100_std value: 4.356189041115731 - type: nauc_ndcg_at_10_diff1 value: 46.005135323539704 - type: nauc_ndcg_at_10_max value: 37.89018165334218 - type: nauc_ndcg_at_10_std value: 0.7129618297768014 - type: nauc_ndcg_at_1_diff1 value: 54.810335521617326 - type: nauc_ndcg_at_1_max value: 40.733260207843394 - type: nauc_ndcg_at_1_std value: -4.452554921565855 - type: nauc_ndcg_at_20_diff1 value: 45.841552790490034 - type: nauc_ndcg_at_20_max value: 38.04992825472661 - type: nauc_ndcg_at_20_std value: 1.2748305707955212 - type: nauc_ndcg_at_3_diff1 value: 46.683033449357744 - type: nauc_ndcg_at_3_max value: 37.46397870760607 - type: nauc_ndcg_at_3_std value: -2.3421854966319824 - type: nauc_ndcg_at_5_diff1 value: 45.82409645378457 - type: nauc_ndcg_at_5_max value: 36.27588234096716 - type: nauc_ndcg_at_5_std value: -1.5141197170944254 - type: nauc_precision_at_1000_diff1 value: -3.137944321071885 - type: nauc_precision_at_1000_max value: 24.12803166253776 - type: nauc_precision_at_1000_std value: 11.076454789944101 - type: nauc_precision_at_100_diff1 value: 3.9896283891401048 - type: nauc_precision_at_100_max value: 31.00198316788829 - type: nauc_precision_at_100_std value: 15.725887643803063 - type: nauc_precision_at_10_diff1 value: 20.493420889888394 - type: nauc_precision_at_10_max value: 41.689699671507405 - type: nauc_precision_at_10_std value: 9.374983385669914 - type: nauc_precision_at_1_diff1 value: 54.810335521617326 - type: nauc_precision_at_1_max value: 40.733260207843394 - type: nauc_precision_at_1_std value: -4.452554921565855 - type: nauc_precision_at_20_diff1 value: 15.02911800246446 - type: nauc_precision_at_20_max value: 39.227068888505 - type: nauc_precision_at_20_std value: 11.755558515319404 - type: nauc_precision_at_3_diff1 value: 34.044986535461746 - type: nauc_precision_at_3_max value: 40.96605829831656 - type: nauc_precision_at_3_std value: 1.1903535705688038 - type: nauc_precision_at_5_diff1 value: 26.617002443432707 - type: nauc_precision_at_5_max value: 40.60413785916794 - type: nauc_precision_at_5_std value: 3.6984531670502814 - type: nauc_recall_at_1000_diff1 value: 26.96489389440101 - type: nauc_recall_at_1000_max value: 41.811583968523955 - type: nauc_recall_at_1000_std value: 41.5719519496712 - type: nauc_recall_at_100_diff1 value: 28.50851434908223 - type: nauc_recall_at_100_max value: 32.19528060706322 - type: nauc_recall_at_100_std value: 25.56935294258179 - type: nauc_recall_at_10_diff1 value: 35.139582891180964 - type: nauc_recall_at_10_max value: 32.15221840434225 - type: nauc_recall_at_10_std value: 5.550434611582702 - type: nauc_recall_at_1_diff1 value: 47.72764937952259 - type: nauc_recall_at_1_max value: 24.83306559709341 - type: nauc_recall_at_1_std value: -4.714128457297418 - type: nauc_recall_at_20_diff1 value: 32.78604811055205 - type: nauc_recall_at_20_max value: 29.62940720700254 - type: nauc_recall_at_20_std value: 6.769941491859872 - type: nauc_recall_at_3_diff1 value: 40.76090616138699 - type: nauc_recall_at_3_max value: 27.506425490226867 - type: nauc_recall_at_3_std value: -2.608872693119243 - type: nauc_recall_at_5_diff1 value: 37.06532485024711 - type: nauc_recall_at_5_max value: 27.704150556658448 - type: nauc_recall_at_5_std value: 0.4718707152343872 - type: ndcg_at_1 value: 41.821000000000005 - type: ndcg_at_10 value: 42.396 - type: ndcg_at_100 value: 49.370000000000005 - type: ndcg_at_1000 value: 52.251000000000005 - type: ndcg_at_20 value: 45.097 - type: ndcg_at_3 value: 39.028 - type: ndcg_at_5 value: 40.222 - type: precision_at_1 value: 41.821000000000005 - type: precision_at_10 value: 11.451 - type: precision_at_100 value: 1.863 - type: precision_at_1000 value: 0.23900000000000002 - type: precision_at_20 value: 6.798 - type: precision_at_3 value: 25.823 - type: precision_at_5 value: 18.735 - type: recall_at_1 value: 21.711 - type: recall_at_10 value: 48.862 - type: recall_at_100 value: 74.708 - type: recall_at_1000 value: 91.865 - type: recall_at_20 value: 57.50999999999999 - type: recall_at_3 value: 35.85 - type: recall_at_5 value: 41.976 - task: type: Retrieval dataset: name: MTEB HotpotQA type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: main_score value: 72.21 - type: map_at_1 value: 39.487 - type: map_at_10 value: 63.949999999999996 - type: map_at_100 value: 64.873 - type: map_at_1000 value: 64.927 - type: map_at_20 value: 64.529 - type: map_at_3 value: 60.243 - type: map_at_5 value: 62.613 - type: mrr_at_1 value: 78.97366644159351 - type: mrr_at_10 value: 84.84600173627825 - type: mrr_at_100 value: 85.0172804866798 - type: mrr_at_1000 value: 85.02245651152857 - type: mrr_at_20 value: 84.9625577788225 - type: mrr_at_3 value: 83.90276839972962 - type: mrr_at_5 value: 84.48278190411845 - type: nauc_map_at_1000_diff1 value: 19.825004700775164 - type: nauc_map_at_1000_max value: 19.943221724164182 - type: nauc_map_at_1000_std value: 10.068951166560058 - type: nauc_map_at_100_diff1 value: 19.80139472181137 - type: nauc_map_at_100_max value: 19.938006132804347 - type: nauc_map_at_100_std value: 10.100008107666842 - type: nauc_map_at_10_diff1 value: 19.53604502514735 - type: nauc_map_at_10_max value: 19.62768870331064 - type: nauc_map_at_10_std value: 9.446859074725705 - type: nauc_map_at_1_diff1 value: 67.7764270505257 - type: nauc_map_at_1_max value: 38.45166604737058 - type: nauc_map_at_1_std value: 1.9919181988552352 - type: nauc_map_at_20_diff1 value: 19.635871913149913 - type: nauc_map_at_20_max value: 19.812838965919155 - type: nauc_map_at_20_std value: 9.905163140101845 - type: nauc_map_at_3_diff1 value: 18.965707122532212 - type: nauc_map_at_3_max value: 17.878860313056517 - type: nauc_map_at_3_std value: 6.189378752019195 - type: nauc_map_at_5_diff1 value: 19.493354049675954 - type: nauc_map_at_5_max value: 19.24527088109141 - type: nauc_map_at_5_std value: 8.283883139680066 - type: nauc_mrr_at_1000_diff1 value: 66.87150374356781 - type: nauc_mrr_at_1000_max value: 41.413456443203984 - type: nauc_mrr_at_1000_std value: 4.140387282484357 - type: nauc_mrr_at_100_diff1 value: 66.87178015619061 - type: nauc_mrr_at_100_max value: 41.419754763150834 - type: nauc_mrr_at_100_std value: 4.15222235416704 - type: nauc_mrr_at_10_diff1 value: 66.89720586892301 - type: nauc_mrr_at_10_max value: 41.56353878125211 - type: nauc_mrr_at_10_std value: 4.213376519922392 - type: nauc_mrr_at_1_diff1 value: 67.7764270505257 - type: nauc_mrr_at_1_max value: 38.45166604737058 - type: nauc_mrr_at_1_std value: 1.9919181988552352 - type: nauc_mrr_at_20_diff1 value: 66.8714688713149 - type: nauc_mrr_at_20_max value: 41.46170778986735 - type: nauc_mrr_at_20_std value: 4.165154741309859 - type: nauc_mrr_at_3_diff1 value: 66.31615462679144 - type: nauc_mrr_at_3_max value: 41.419637693259936 - type: nauc_mrr_at_3_std value: 3.814834551396097 - type: nauc_mrr_at_5_diff1 value: 66.7289413087213 - type: nauc_mrr_at_5_max value: 41.668346356371586 - type: nauc_mrr_at_5_std value: 4.116331539882484 - type: nauc_ndcg_at_1000_diff1 value: 26.37325375970598 - type: nauc_ndcg_at_1000_max value: 24.850915174721735 - type: nauc_ndcg_at_1000_std value: 13.37585683440429 - type: nauc_ndcg_at_100_diff1 value: 25.591771178059503 - type: nauc_ndcg_at_100_max value: 24.562820829532473 - type: nauc_ndcg_at_100_std value: 14.093690500501541 - type: nauc_ndcg_at_10_diff1 value: 24.64600598115805 - type: nauc_ndcg_at_10_max value: 23.543499404760023 - type: nauc_ndcg_at_10_std value: 11.55823632781553 - type: nauc_ndcg_at_1_diff1 value: 67.7764270505257 - type: nauc_ndcg_at_1_max value: 38.45166604737058 - type: nauc_ndcg_at_1_std value: 1.9919181988552352 - type: nauc_ndcg_at_20_diff1 value: 24.757843275306726 - type: nauc_ndcg_at_20_max value: 23.951154200380827 - type: nauc_ndcg_at_20_std value: 12.931320453044886 - type: nauc_ndcg_at_3_diff1 value: 24.37742630418847 - type: nauc_ndcg_at_3_max value: 21.310512304883723 - type: nauc_ndcg_at_3_std value: 6.503993200818077 - type: nauc_ndcg_at_5_diff1 value: 24.813706829269716 - type: nauc_ndcg_at_5_max value: 22.993657212898 - type: nauc_ndcg_at_5_std value: 9.34462052506809 - type: nauc_precision_at_1000_diff1 value: -0.6506415756958156 - type: nauc_precision_at_1000_max value: 28.039755644694875 - type: nauc_precision_at_1000_std value: 53.46474329623814 - type: nauc_precision_at_100_diff1 value: 3.78462668236152 - type: nauc_precision_at_100_max value: 22.501700881673862 - type: nauc_precision_at_100_std value: 40.56672716474142 - type: nauc_precision_at_10_diff1 value: 9.156113228907534 - type: nauc_precision_at_10_max value: 19.734206254833254 - type: nauc_precision_at_10_std value: 19.986282545779602 - type: nauc_precision_at_1_diff1 value: 67.7764270505257 - type: nauc_precision_at_1_max value: 38.45166604737058 - type: nauc_precision_at_1_std value: 1.9919181988552352 - type: nauc_precision_at_20_diff1 value: 6.6164335644470125 - type: nauc_precision_at_20_max value: 20.29343459608317 - type: nauc_precision_at_20_std value: 26.51115475333977 - type: nauc_precision_at_3_diff1 value: 12.476520554399546 - type: nauc_precision_at_3_max value: 16.69401409858964 - type: nauc_precision_at_3_std value: 8.165880294907444 - type: nauc_precision_at_5_diff1 value: 11.783242828320958 - type: nauc_precision_at_5_max value: 19.0679467875759 - type: nauc_precision_at_5_std value: 13.615358345509884 - type: nauc_recall_at_1000_diff1 value: -0.6506415756960168 - type: nauc_recall_at_1000_max value: 28.039755644694786 - type: nauc_recall_at_1000_std value: 53.46474329623801 - type: nauc_recall_at_100_diff1 value: 3.7846266823613877 - type: nauc_recall_at_100_max value: 22.501700881674008 - type: nauc_recall_at_100_std value: 40.566727164741366 - type: nauc_recall_at_10_diff1 value: 9.15611322890755 - type: nauc_recall_at_10_max value: 19.73420625483318 - type: nauc_recall_at_10_std value: 19.98628254577951 - type: nauc_recall_at_1_diff1 value: 67.7764270505257 - type: nauc_recall_at_1_max value: 38.45166604737058 - type: nauc_recall_at_1_std value: 1.9919181988552352 - type: nauc_recall_at_20_diff1 value: 6.616433564446929 - type: nauc_recall_at_20_max value: 20.293434596083248 - type: nauc_recall_at_20_std value: 26.5111547533396 - type: nauc_recall_at_3_diff1 value: 12.476520554399531 - type: nauc_recall_at_3_max value: 16.69401409858966 - type: nauc_recall_at_3_std value: 8.165880294907438 - type: nauc_recall_at_5_diff1 value: 11.783242828320999 - type: nauc_recall_at_5_max value: 19.067946787575845 - type: nauc_recall_at_5_std value: 13.61535834550991 - type: ndcg_at_1 value: 78.974 - type: ndcg_at_10 value: 72.21 - type: ndcg_at_100 value: 75.264 - type: ndcg_at_1000 value: 76.259 - type: ndcg_at_20 value: 73.628 - type: ndcg_at_3 value: 67.047 - type: ndcg_at_5 value: 69.974 - type: precision_at_1 value: 78.974 - type: precision_at_10 value: 15.267 - type: precision_at_100 value: 1.762 - type: precision_at_1000 value: 0.189 - type: precision_at_20 value: 8.09 - type: precision_at_3 value: 43.309 - type: precision_at_5 value: 28.294000000000004 - type: recall_at_1 value: 39.487 - type: recall_at_10 value: 76.334 - type: recall_at_100 value: 88.076 - type: recall_at_1000 value: 94.59100000000001 - type: recall_at_20 value: 80.898 - type: recall_at_3 value: 64.96300000000001 - type: recall_at_5 value: 70.736 - task: type: Retrieval dataset: name: MTEB MSMARCO type: mteb/msmarco config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: main_score value: 42.027 - type: map_at_1 value: 22.118 - type: map_at_10 value: 34.816 - type: map_at_100 value: 35.983 - type: map_at_1000 value: 36.028999999999996 - type: map_at_20 value: 35.545 - type: map_at_3 value: 30.752000000000002 - type: map_at_5 value: 33.114 - type: mrr_at_1 value: 22.793696275071635 - type: mrr_at_10 value: 35.47250079592483 - type: mrr_at_100 value: 36.576471512902856 - type: mrr_at_1000 value: 36.616205680509786 - type: mrr_at_20 value: 36.16557033864942 - type: mrr_at_3 value: 31.48758357211065 - type: mrr_at_5 value: 33.80563514804202 - type: nauc_map_at_1000_diff1 value: 32.89234100489284 - type: nauc_map_at_1000_max value: 1.1802816553581001 - type: nauc_map_at_1000_std value: -20.187692925732446 - type: nauc_map_at_100_diff1 value: 32.88694493681772 - type: nauc_map_at_100_max value: 1.1732717578080365 - type: nauc_map_at_100_std value: -20.164165529035245 - type: nauc_map_at_10_diff1 value: 32.826182211848796 - type: nauc_map_at_10_max value: 1.1551262165737235 - type: nauc_map_at_10_std value: -20.88326292319754 - type: nauc_map_at_1_diff1 value: 36.12732122790642 - type: nauc_map_at_1_max value: 1.8197550109156913 - type: nauc_map_at_1_std value: -17.205625720792167 - type: nauc_map_at_20_diff1 value: 32.83333177195551 - type: nauc_map_at_20_max value: 1.0937431645506202 - type: nauc_map_at_20_std value: -20.503956514646145 - type: nauc_map_at_3_diff1 value: 32.76264193805814 - type: nauc_map_at_3_max value: 0.8560962042500389 - type: nauc_map_at_3_std value: -20.608930717315577 - type: nauc_map_at_5_diff1 value: 32.78673238978775 - type: nauc_map_at_5_max value: 1.0511863039329437 - type: nauc_map_at_5_std value: -21.02164728626011 - type: nauc_mrr_at_1000_diff1 value: 32.610323934702286 - type: nauc_mrr_at_1000_max value: 1.276669121901405 - type: nauc_mrr_at_1000_std value: -19.908120615285043 - type: nauc_mrr_at_100_diff1 value: 32.601373758102795 - type: nauc_mrr_at_100_max value: 1.2752735149992132 - type: nauc_mrr_at_100_std value: -19.87937042610101 - type: nauc_mrr_at_10_diff1 value: 32.55795432078168 - type: nauc_mrr_at_10_max value: 1.2881786969258637 - type: nauc_mrr_at_10_std value: -20.54564519015977 - type: nauc_mrr_at_1_diff1 value: 35.596301376443726 - type: nauc_mrr_at_1_max value: 1.7633238037306902 - type: nauc_mrr_at_1_std value: -17.1999420019887 - type: nauc_mrr_at_20_diff1 value: 32.57185739111023 - type: nauc_mrr_at_20_max value: 1.2212620853201877 - type: nauc_mrr_at_20_std value: -20.179517281041264 - type: nauc_mrr_at_3_diff1 value: 32.42681377099514 - type: nauc_mrr_at_3_max value: 0.8745921708861145 - type: nauc_mrr_at_3_std value: -20.41017687790572 - type: nauc_mrr_at_5_diff1 value: 32.499107129648266 - type: nauc_mrr_at_5_max value: 1.1159673851851573 - type: nauc_mrr_at_5_std value: -20.695143502133824 - type: nauc_ndcg_at_1000_diff1 value: 32.16957965806702 - type: nauc_ndcg_at_1000_max value: 1.6763998947980905 - type: nauc_ndcg_at_1000_std value: -18.970592350332893 - type: nauc_ndcg_at_100_diff1 value: 31.977550102558872 - type: nauc_ndcg_at_100_max value: 1.5625858650110014 - type: nauc_ndcg_at_100_std value: -17.990456766123835 - type: nauc_ndcg_at_10_diff1 value: 31.82738932481356 - type: nauc_ndcg_at_10_max value: 1.1661362042692103 - type: nauc_ndcg_at_10_std value: -21.872680193994217 - type: nauc_ndcg_at_1_diff1 value: 35.596301376443726 - type: nauc_ndcg_at_1_max value: 1.7633238037306902 - type: nauc_ndcg_at_1_std value: -17.1999420019887 - type: nauc_ndcg_at_20_diff1 value: 31.749656399266264 - type: nauc_ndcg_at_20_max value: 0.9629024493088691 - type: nauc_ndcg_at_20_std value: -20.4379403899277 - type: nauc_ndcg_at_3_diff1 value: 31.731361436850836 - type: nauc_ndcg_at_3_max value: 0.531749791578849 - type: nauc_ndcg_at_3_std value: -21.551112910698674 - type: nauc_ndcg_at_5_diff1 value: 31.785373941157303 - type: nauc_ndcg_at_5_max value: 0.86207769368333 - type: nauc_ndcg_at_5_std value: -22.24923399160171 - type: nauc_precision_at_1000_diff1 value: -3.841288331986519 - type: nauc_precision_at_1000_max value: 13.558041371634976 - type: nauc_precision_at_1000_std value: 15.181510484512827 - type: nauc_precision_at_100_diff1 value: 12.441154582709053 - type: nauc_precision_at_100_max value: 8.428136255841935 - type: nauc_precision_at_100_std value: 14.710391839731656 - type: nauc_precision_at_10_diff1 value: 26.185854813986705 - type: nauc_precision_at_10_max value: 1.6348387310504464 - type: nauc_precision_at_10_std value: -23.448927004357298 - type: nauc_precision_at_1_diff1 value: 35.596301376443726 - type: nauc_precision_at_1_max value: 1.7633238037306902 - type: nauc_precision_at_1_std value: -17.1999420019887 - type: nauc_precision_at_20_diff1 value: 22.69194179544158 - type: nauc_precision_at_20_max value: 1.2972015009169306 - type: nauc_precision_at_20_std value: -15.751482380060269 - type: nauc_precision_at_3_diff1 value: 28.255531512125188 - type: nauc_precision_at_3_max value: -0.3715575458464333 - type: nauc_precision_at_3_std value: -24.227970454057697 - type: nauc_precision_at_5_diff1 value: 27.65497951098847 - type: nauc_precision_at_5_max value: 0.449773375292472 - type: nauc_precision_at_5_std value: -25.37445450938601 - type: nauc_recall_at_1000_diff1 value: 15.243948516763819 - type: nauc_recall_at_1000_max value: 41.821227805251375 - type: nauc_recall_at_1000_std value: 61.66297794838101 - type: nauc_recall_at_100_diff1 value: 24.516543685029994 - type: nauc_recall_at_100_max value: 7.093972966253228 - type: nauc_recall_at_100_std value: 17.244452321212282 - type: nauc_recall_at_10_diff1 value: 28.404243095182828 - type: nauc_recall_at_10_max value: 1.0805210480930945 - type: nauc_recall_at_10_std value: -24.885018657039527 - type: nauc_recall_at_1_diff1 value: 36.12732122790642 - type: nauc_recall_at_1_max value: 1.8197550109156913 - type: nauc_recall_at_1_std value: -17.205625720792167 - type: nauc_recall_at_20_diff1 value: 26.956250169438512 - type: nauc_recall_at_20_max value: 0.023973408161285917 - type: nauc_recall_at_20_std value: -18.32944444428131 - type: nauc_recall_at_3_diff1 value: 28.9894205130054 - type: nauc_recall_at_3_max value: -0.36140658021466865 - type: nauc_recall_at_3_std value: -24.022505107768364 - type: nauc_recall_at_5_diff1 value: 28.907023434955104 - type: nauc_recall_at_5_max value: 0.2501037567297729 - type: nauc_recall_at_5_std value: -25.719919602271496 - type: ndcg_at_1 value: 22.794 - type: ndcg_at_10 value: 42.027 - type: ndcg_at_100 value: 47.601 - type: ndcg_at_1000 value: 48.713 - type: ndcg_at_20 value: 44.623000000000005 - type: ndcg_at_3 value: 33.772999999999996 - type: ndcg_at_5 value: 37.991 - type: precision_at_1 value: 22.794 - type: precision_at_10 value: 6.711 - type: precision_at_100 value: 0.9490000000000001 - type: precision_at_1000 value: 0.105 - type: precision_at_20 value: 3.8920000000000003 - type: precision_at_3 value: 14.46 - type: precision_at_5 value: 10.822 - type: recall_at_1 value: 22.118 - type: recall_at_10 value: 64.201 - type: recall_at_100 value: 89.878 - type: recall_at_1000 value: 98.259 - type: recall_at_20 value: 74.34100000000001 - type: recall_at_3 value: 41.8 - type: recall_at_5 value: 51.959 - task: type: Retrieval dataset: name: MTEB NFCorpus type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: main_score value: 36.201 - type: map_at_1 value: 5.654 - type: map_at_10 value: 13.402 - type: map_at_100 value: 16.849 - type: map_at_1000 value: 18.264 - type: map_at_20 value: 14.832 - type: map_at_3 value: 9.619 - type: map_at_5 value: 11.483 - type: mrr_at_1 value: 47.6780185758514 - type: mrr_at_10 value: 56.47906531033466 - type: mrr_at_100 value: 57.04539749991402 - type: mrr_at_1000 value: 57.08810157607369 - type: mrr_at_20 value: 56.88003170105462 - type: mrr_at_3 value: 54.43756449948401 - type: mrr_at_5 value: 55.660474716202266 - type: nauc_map_at_1000_diff1 value: 31.134615238698192 - type: nauc_map_at_1000_max value: 36.09522002487132 - type: nauc_map_at_1000_std value: 14.72627666649002 - type: nauc_map_at_100_diff1 value: 32.777473351864444 - type: nauc_map_at_100_max value: 35.25391471621035 - type: nauc_map_at_100_std value: 12.024428973861083 - type: nauc_map_at_10_diff1 value: 36.46466466148528 - type: nauc_map_at_10_max value: 29.707805406826722 - type: nauc_map_at_10_std value: 2.0678757794226335 - type: nauc_map_at_1_diff1 value: 54.30208426149679 - type: nauc_map_at_1_max value: 18.69125148481608 - type: nauc_map_at_1_std value: -8.970955660291802 - type: nauc_map_at_20_diff1 value: 34.76513311600623 - type: nauc_map_at_20_max value: 32.20666003570514 - type: nauc_map_at_20_std value: 5.924889441518581 - type: nauc_map_at_3_diff1 value: 45.73465176835491 - type: nauc_map_at_3_max value: 23.492291524989106 - type: nauc_map_at_3_std value: -5.0123536561688855 - type: nauc_map_at_5_diff1 value: 39.7128319374107 - type: nauc_map_at_5_max value: 25.84231729559691 - type: nauc_map_at_5_std value: -2.0861428981140344 - type: nauc_mrr_at_1000_diff1 value: 33.0997881703397 - type: nauc_mrr_at_1000_max value: 52.7089709923531 - type: nauc_mrr_at_1000_std value: 28.8517952674151 - type: nauc_mrr_at_100_diff1 value: 33.1094984027438 - type: nauc_mrr_at_100_max value: 52.74301398138847 - type: nauc_mrr_at_100_std value: 28.897997840300892 - type: nauc_mrr_at_10_diff1 value: 33.300713655464925 - type: nauc_mrr_at_10_max value: 52.572139698742184 - type: nauc_mrr_at_10_std value: 28.66875615527188 - type: nauc_mrr_at_1_diff1 value: 32.57632582147155 - type: nauc_mrr_at_1_max value: 46.020072246328816 - type: nauc_mrr_at_1_std value: 20.99097889820076 - type: nauc_mrr_at_20_diff1 value: 33.04083904518949 - type: nauc_mrr_at_20_max value: 52.597451362456994 - type: nauc_mrr_at_20_std value: 28.681527293587898 - type: nauc_mrr_at_3_diff1 value: 33.64864656322754 - type: nauc_mrr_at_3_max value: 51.82256412011279 - type: nauc_mrr_at_3_std value: 27.241260746740686 - type: nauc_mrr_at_5_diff1 value: 33.53201325467246 - type: nauc_mrr_at_5_max value: 52.79440885773516 - type: nauc_mrr_at_5_std value: 28.663081392086028 - type: nauc_ndcg_at_1000_diff1 value: 28.632650542040714 - type: nauc_ndcg_at_1000_max value: 51.24103069835822 - type: nauc_ndcg_at_1000_std value: 35.05503784757999 - type: nauc_ndcg_at_100_diff1 value: 29.082177715298503 - type: nauc_ndcg_at_100_max value: 45.24750203464315 - type: nauc_ndcg_at_100_std value: 27.146548925680914 - type: nauc_ndcg_at_10_diff1 value: 25.123554466093594 - type: nauc_ndcg_at_10_max value: 42.74355537806512 - type: nauc_ndcg_at_10_std value: 22.234407997803935 - type: nauc_ndcg_at_1_diff1 value: 33.75083940012058 - type: nauc_ndcg_at_1_max value: 44.44319402133161 - type: nauc_ndcg_at_1_std value: 19.146499358406487 - type: nauc_ndcg_at_20_diff1 value: 24.954207968331872 - type: nauc_ndcg_at_20_max value: 41.25991844405748 - type: nauc_ndcg_at_20_std value: 22.169009285868864 - type: nauc_ndcg_at_3_diff1 value: 28.186539942033516 - type: nauc_ndcg_at_3_max value: 44.40790009754965 - type: nauc_ndcg_at_3_std value: 20.99226576085115 - type: nauc_ndcg_at_5_diff1 value: 25.498387899376706 - type: nauc_ndcg_at_5_max value: 43.174709766261316 - type: nauc_ndcg_at_5_std value: 21.88111962672031 - type: nauc_precision_at_1000_diff1 value: -16.22321012507648 - type: nauc_precision_at_1000_max value: 5.808852256649677 - type: nauc_precision_at_1000_std value: 19.875641776698824 - type: nauc_precision_at_100_diff1 value: -10.248089374355486 - type: nauc_precision_at_100_max value: 19.29065415127588 - type: nauc_precision_at_100_std value: 31.75019665627339 - type: nauc_precision_at_10_diff1 value: 3.6783257583955056 - type: nauc_precision_at_10_max value: 39.22286010695767 - type: nauc_precision_at_10_std value: 31.225485732801022 - type: nauc_precision_at_1_diff1 value: 32.57632582147155 - type: nauc_precision_at_1_max value: 46.020072246328816 - type: nauc_precision_at_1_std value: 20.99097889820076 - type: nauc_precision_at_20_diff1 value: -3.1632510833242784 - type: nauc_precision_at_20_max value: 31.575496762405734 - type: nauc_precision_at_20_std value: 31.576283324468115 - type: nauc_precision_at_3_diff1 value: 17.78864585545647 - type: nauc_precision_at_3_max value: 44.201289661125585 - type: nauc_precision_at_3_std value: 25.447840649726693 - type: nauc_precision_at_5_diff1 value: 9.986748662091358 - type: nauc_precision_at_5_max value: 41.214164860776755 - type: nauc_precision_at_5_std value: 28.22551704127726 - type: nauc_recall_at_1000_diff1 value: 10.984331766850506 - type: nauc_recall_at_1000_max value: 24.641216018034104 - type: nauc_recall_at_1000_std value: 26.91064221008446 - type: nauc_recall_at_100_diff1 value: 23.7009352078473 - type: nauc_recall_at_100_max value: 30.176031609451297 - type: nauc_recall_at_100_std value: 20.360365243211564 - type: nauc_recall_at_10_diff1 value: 28.11831737650638 - type: nauc_recall_at_10_max value: 24.21539670487414 - type: nauc_recall_at_10_std value: 2.245504974150148 - type: nauc_recall_at_1_diff1 value: 54.30208426149679 - type: nauc_recall_at_1_max value: 18.69125148481608 - type: nauc_recall_at_1_std value: -8.970955660291802 - type: nauc_recall_at_20_diff1 value: 26.199425305139908 - type: nauc_recall_at_20_max value: 24.66704097503736 - type: nauc_recall_at_20_std value: 5.86052107206246 - type: nauc_recall_at_3_diff1 value: 42.88348677575622 - type: nauc_recall_at_3_max value: 21.189371077603308 - type: nauc_recall_at_3_std value: -4.537510127238226 - type: nauc_recall_at_5_diff1 value: 30.7936756722569 - type: nauc_recall_at_5_max value: 21.06136406164962 - type: nauc_recall_at_5_std value: -1.4113804735229794 - type: ndcg_at_1 value: 45.975 - type: ndcg_at_10 value: 36.201 - type: ndcg_at_100 value: 32.736 - type: ndcg_at_1000 value: 41.099000000000004 - type: ndcg_at_20 value: 33.724 - type: ndcg_at_3 value: 42.242000000000004 - type: ndcg_at_5 value: 40.137 - type: precision_at_1 value: 47.678 - type: precision_at_10 value: 26.904 - type: precision_at_100 value: 8.368 - type: precision_at_1000 value: 2.078 - type: precision_at_20 value: 19.845 - type: precision_at_3 value: 40.351 - type: precision_at_5 value: 35.108 - type: recall_at_1 value: 5.654 - type: recall_at_10 value: 17.793 - type: recall_at_100 value: 32.483000000000004 - type: recall_at_1000 value: 63.294 - type: recall_at_20 value: 21.754 - type: recall_at_3 value: 10.771 - type: recall_at_5 value: 14.084 - task: type: Retrieval dataset: name: MTEB NQ type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: main_score value: 62.464 - type: map_at_1 value: 38.0 - type: map_at_10 value: 54.806 - type: map_at_100 value: 55.599 - type: map_at_1000 value: 55.617000000000004 - type: map_at_20 value: 55.336 - type: map_at_3 value: 50.58200000000001 - type: map_at_5 value: 53.181 - type: mrr_at_1 value: 42.46813441483198 - type: mrr_at_10 value: 57.060710147326446 - type: mrr_at_100 value: 57.60978373431328 - type: mrr_at_1000 value: 57.62192762809547 - type: mrr_at_20 value: 57.43431796174232 - type: mrr_at_3 value: 53.78041714947835 - type: mrr_at_5 value: 55.81257242178437 - type: nauc_map_at_1000_diff1 value: 38.337572188308194 - type: nauc_map_at_1000_max value: 27.550035254787197 - type: nauc_map_at_1000_std value: -7.5513729587308145 - type: nauc_map_at_100_diff1 value: 38.335337794455015 - type: nauc_map_at_100_max value: 27.56919614414171 - type: nauc_map_at_100_std value: -7.526017855405723 - type: nauc_map_at_10_diff1 value: 38.308131361353816 - type: nauc_map_at_10_max value: 27.691849580929933 - type: nauc_map_at_10_std value: -7.971461731555123 - type: nauc_map_at_1_diff1 value: 42.721072690634884 - type: nauc_map_at_1_max value: 21.750451486885332 - type: nauc_map_at_1_std value: -9.99540950522643 - type: nauc_map_at_20_diff1 value: 38.25792874982169 - type: nauc_map_at_20_max value: 27.68877906159661 - type: nauc_map_at_20_std value: -7.560753583212102 - type: nauc_map_at_3_diff1 value: 37.950570055936254 - type: nauc_map_at_3_max value: 26.257969511794858 - type: nauc_map_at_3_std value: -9.236868658300553 - type: nauc_map_at_5_diff1 value: 37.99893219450212 - type: nauc_map_at_5_max value: 27.293454259158057 - type: nauc_map_at_5_std value: -8.734089449603806 - type: nauc_mrr_at_1000_diff1 value: 37.777767467474774 - type: nauc_mrr_at_1000_max value: 27.39507603748298 - type: nauc_mrr_at_1000_std value: -5.554754076870114 - type: nauc_mrr_at_100_diff1 value: 37.77981674583538 - type: nauc_mrr_at_100_max value: 27.411100989441557 - type: nauc_mrr_at_100_std value: -5.539061231412731 - type: nauc_mrr_at_10_diff1 value: 37.72399003363479 - type: nauc_mrr_at_10_max value: 27.618142546685416 - type: nauc_mrr_at_10_std value: -5.6819843907448195 - type: nauc_mrr_at_1_diff1 value: 41.17596078958236 - type: nauc_mrr_at_1_max value: 23.32588591818617 - type: nauc_mrr_at_1_std value: -7.126628034623689 - type: nauc_mrr_at_20_diff1 value: 37.695136721588 - type: nauc_mrr_at_20_max value: 27.52850676467322 - type: nauc_mrr_at_20_std value: -5.50667995515647 - type: nauc_mrr_at_3_diff1 value: 37.23845700908964 - type: nauc_mrr_at_3_max value: 26.69389772971012 - type: nauc_mrr_at_3_std value: -6.31868405989011 - type: nauc_mrr_at_5_diff1 value: 37.33757394192838 - type: nauc_mrr_at_5_max value: 27.42091593836207 - type: nauc_mrr_at_5_std value: -5.993243330132065 - type: nauc_ndcg_at_1000_diff1 value: 37.74836061640332 - type: nauc_ndcg_at_1000_max value: 29.03148916289089 - type: nauc_ndcg_at_1000_std value: -5.543065770074502 - type: nauc_ndcg_at_100_diff1 value: 37.75593955089626 - type: nauc_ndcg_at_100_max value: 29.67109480272493 - type: nauc_ndcg_at_100_std value: -4.773697596687493 - type: nauc_ndcg_at_10_diff1 value: 37.41701174824348 - type: nauc_ndcg_at_10_max value: 30.448703434043445 - type: nauc_ndcg_at_10_std value: -6.306202666419071 - type: nauc_ndcg_at_1_diff1 value: 41.17596078958236 - type: nauc_ndcg_at_1_max value: 23.32588591818617 - type: nauc_ndcg_at_1_std value: -7.126628034623689 - type: nauc_ndcg_at_20_diff1 value: 37.17445197824622 - type: nauc_ndcg_at_20_max value: 30.47378561555209 - type: nauc_ndcg_at_20_std value: -4.921584853993488 - type: nauc_ndcg_at_3_diff1 value: 36.5261976812068 - type: nauc_ndcg_at_3_max value: 27.560538820208926 - type: nauc_ndcg_at_3_std value: -8.556686332882931 - type: nauc_ndcg_at_5_diff1 value: 36.571462759614526 - type: nauc_ndcg_at_5_max value: 29.363401730752585 - type: nauc_ndcg_at_5_std value: -7.825739170420347 - type: nauc_precision_at_1000_diff1 value: -12.588899483401223 - type: nauc_precision_at_1000_max value: 2.641097890578701 - type: nauc_precision_at_1000_std value: 17.643107625788748 - type: nauc_precision_at_100_diff1 value: -8.40579874206785 - type: nauc_precision_at_100_max value: 9.725496771040037 - type: nauc_precision_at_100_std value: 21.558582760191243 - type: nauc_precision_at_10_diff1 value: 6.619157191854486 - type: nauc_precision_at_10_max value: 23.767406373688402 - type: nauc_precision_at_10_std value: 10.428535003478808 - type: nauc_precision_at_1_diff1 value: 41.17596078958236 - type: nauc_precision_at_1_max value: 23.32588591818617 - type: nauc_precision_at_1_std value: -7.126628034623689 - type: nauc_precision_at_20_diff1 value: -0.6449974218292859 - type: nauc_precision_at_20_max value: 20.211503851418783 - type: nauc_precision_at_20_std value: 17.922745410142575 - type: nauc_precision_at_3_diff1 value: 19.710276097428657 - type: nauc_precision_at_3_max value: 26.768918044758706 - type: nauc_precision_at_3_std value: -1.0636448912049246 - type: nauc_precision_at_5_diff1 value: 13.073181337982613 - type: nauc_precision_at_5_max value: 26.418340338971024 - type: nauc_precision_at_5_std value: 2.9842078949528688 - type: nauc_recall_at_1000_diff1 value: 30.52411148739828 - type: nauc_recall_at_1000_max value: 90.96409807536762 - type: nauc_recall_at_1000_std value: 83.94857830921949 - type: nauc_recall_at_100_diff1 value: 36.936303690592155 - type: nauc_recall_at_100_max value: 71.91515014325869 - type: nauc_recall_at_100_std value: 48.93061263403371 - type: nauc_recall_at_10_diff1 value: 32.84292362076269 - type: nauc_recall_at_10_max value: 44.27252783122478 - type: nauc_recall_at_10_std value: -1.5981198975612385 - type: nauc_recall_at_1_diff1 value: 42.721072690634884 - type: nauc_recall_at_1_max value: 21.750451486885332 - type: nauc_recall_at_1_std value: -9.99540950522643 - type: nauc_recall_at_20_diff1 value: 29.36724417081702 - type: nauc_recall_at_20_max value: 52.035846390214715 - type: nauc_recall_at_20_std value: 11.967264191332818 - type: nauc_recall_at_3_diff1 value: 31.634923771936098 - type: nauc_recall_at_3_max value: 30.225743369869473 - type: nauc_recall_at_3_std value: -9.253665347118615 - type: nauc_recall_at_5_diff1 value: 30.66271853090737 - type: nauc_recall_at_5_max value: 35.70815715994996 - type: nauc_recall_at_5_std value: -7.836012956078996 - type: ndcg_at_1 value: 42.468 - type: ndcg_at_10 value: 62.464 - type: ndcg_at_100 value: 65.618 - type: ndcg_at_1000 value: 66.014 - type: ndcg_at_20 value: 64.12 - type: ndcg_at_3 value: 54.790000000000006 - type: ndcg_at_5 value: 58.992 - type: precision_at_1 value: 42.468 - type: precision_at_10 value: 9.959 - type: precision_at_100 value: 1.174 - type: precision_at_1000 value: 0.121 - type: precision_at_20 value: 5.380999999999999 - type: precision_at_3 value: 24.73 - type: precision_at_5 value: 17.299999999999997 - type: recall_at_1 value: 38.0 - type: recall_at_10 value: 83.22699999999999 - type: recall_at_100 value: 96.584 - type: recall_at_1000 value: 99.512 - type: recall_at_20 value: 89.291 - type: recall_at_3 value: 63.666 - type: recall_at_5 value: 73.27900000000001 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: mteb/quora config: default split: test revision: e4e08e0b7dbe3c8700f0daef558ff32256715259 metrics: - type: main_score value: 87.366 - type: map_at_1 value: 69.95700000000001 - type: map_at_10 value: 83.55 - type: map_at_100 value: 84.196 - type: map_at_1000 value: 84.21600000000001 - type: map_at_20 value: 83.982 - type: map_at_3 value: 80.647 - type: map_at_5 value: 82.443 - type: mrr_at_1 value: 80.39 - type: mrr_at_10 value: 86.65646031746004 - type: mrr_at_100 value: 86.7852113210373 - type: mrr_at_1000 value: 86.78651118354796 - type: mrr_at_20 value: 86.75772838878498 - type: mrr_at_3 value: 85.67499999999971 - type: mrr_at_5 value: 86.33749999999962 - type: nauc_map_at_1000_diff1 value: 76.68189702770007 - type: nauc_map_at_1000_max value: 36.19988239025682 - type: nauc_map_at_1000_std value: -26.231691135645736 - type: nauc_map_at_100_diff1 value: 76.68832712120171 - type: nauc_map_at_100_max value: 36.18627717337547 - type: nauc_map_at_100_std value: -26.28243886166 - type: nauc_map_at_10_diff1 value: 76.88888516032657 - type: nauc_map_at_10_max value: 35.69809861085124 - type: nauc_map_at_10_std value: -27.859425473864224 - type: nauc_map_at_1_diff1 value: 79.5243725217315 - type: nauc_map_at_1_max value: 27.092773841207002 - type: nauc_map_at_1_std value: -26.223200911204543 - type: nauc_map_at_20_diff1 value: 76.74938996155176 - type: nauc_map_at_20_max value: 36.07373781351406 - type: nauc_map_at_20_std value: -26.891400098628015 - type: nauc_map_at_3_diff1 value: 77.29604745045076 - type: nauc_map_at_3_max value: 33.11431059356283 - type: nauc_map_at_3_std value: -29.555237195931085 - type: nauc_map_at_5_diff1 value: 77.14069217901078 - type: nauc_map_at_5_max value: 34.68656073526487 - type: nauc_map_at_5_std value: -28.945053669861508 - type: nauc_mrr_at_1000_diff1 value: 76.66087451567746 - type: nauc_mrr_at_1000_max value: 38.78133177265328 - type: nauc_mrr_at_1000_std value: -23.75726541774991 - type: nauc_mrr_at_100_diff1 value: 76.66117078261013 - type: nauc_mrr_at_100_max value: 38.782533036423885 - type: nauc_mrr_at_100_std value: -23.752587601473568 - type: nauc_mrr_at_10_diff1 value: 76.65866401411019 - type: nauc_mrr_at_10_max value: 38.87950311049704 - type: nauc_mrr_at_10_std value: -23.873660706680578 - type: nauc_mrr_at_1_diff1 value: 77.42633506487041 - type: nauc_mrr_at_1_max value: 37.93973722217786 - type: nauc_mrr_at_1_std value: -23.3984130771317 - type: nauc_mrr_at_20_diff1 value: 76.66210684923414 - type: nauc_mrr_at_20_max value: 38.81293033048911 - type: nauc_mrr_at_20_std value: -23.736590746133736 - type: nauc_mrr_at_3_diff1 value: 76.33711764736019 - type: nauc_mrr_at_3_max value: 38.5659231830368 - type: nauc_mrr_at_3_std value: -23.99588149124865 - type: nauc_mrr_at_5_diff1 value: 76.57123830226054 - type: nauc_mrr_at_5_max value: 38.97947097392977 - type: nauc_mrr_at_5_std value: -23.943668957974246 - type: nauc_ndcg_at_1000_diff1 value: 76.38447339050585 - type: nauc_ndcg_at_1000_max value: 37.756822792877934 - type: nauc_ndcg_at_1000_std value: -24.046995734357164 - type: nauc_ndcg_at_100_diff1 value: 76.44058018066822 - type: nauc_ndcg_at_100_max value: 37.72948294169218 - type: nauc_ndcg_at_100_std value: -24.083432140741795 - type: nauc_ndcg_at_10_diff1 value: 76.56246287923074 - type: nauc_ndcg_at_10_max value: 37.0329253490553 - type: nauc_ndcg_at_10_std value: -26.6495163705961 - type: nauc_ndcg_at_1_diff1 value: 77.4085129990432 - type: nauc_ndcg_at_1_max value: 38.06139172214421 - type: nauc_ndcg_at_1_std value: -23.656477126977386 - type: nauc_ndcg_at_20_diff1 value: 76.50192496743098 - type: nauc_ndcg_at_20_max value: 37.51759311013985 - type: nauc_ndcg_at_20_std value: -25.45517058360004 - type: nauc_ndcg_at_3_diff1 value: 75.94398494081794 - type: nauc_ndcg_at_3_max value: 35.7666711547279 - type: nauc_ndcg_at_3_std value: -26.866022682361578 - type: nauc_ndcg_at_5_diff1 value: 76.47334274088344 - type: nauc_ndcg_at_5_max value: 36.40830331490731 - type: nauc_ndcg_at_5_std value: -27.170121189572765 - type: nauc_precision_at_1000_diff1 value: -43.33672630765437 - type: nauc_precision_at_1000_max value: -5.089751329149161 - type: nauc_precision_at_1000_std value: 30.6241447847051 - type: nauc_precision_at_100_diff1 value: -42.736833035629864 - type: nauc_precision_at_100_max value: -4.060198408346224 - type: nauc_precision_at_100_std value: 29.807050266205344 - type: nauc_precision_at_10_diff1 value: -35.90810562245906 - type: nauc_precision_at_10_max value: 1.1633204529249133 - type: nauc_precision_at_10_std value: 20.129691203276018 - type: nauc_precision_at_1_diff1 value: 77.4085129990432 - type: nauc_precision_at_1_max value: 38.06139172214421 - type: nauc_precision_at_1_std value: -23.656477126977386 - type: nauc_precision_at_20_diff1 value: -40.2132286912738 - type: nauc_precision_at_20_max value: -1.3004735030734194 - type: nauc_precision_at_20_std value: 25.15612293757488 - type: nauc_precision_at_3_diff1 value: -13.873825299883904 - type: nauc_precision_at_3_max value: 11.038689278907233 - type: nauc_precision_at_3_std value: 5.4276449621706 - type: nauc_precision_at_5_diff1 value: -27.151668633894737 - type: nauc_precision_at_5_max value: 5.795130010163115 - type: nauc_precision_at_5_std value: 13.220722167587375 - type: nauc_recall_at_1000_diff1 value: 83.903950427863 - type: nauc_recall_at_1000_max value: 37.82919000897223 - type: nauc_recall_at_1000_std value: 70.65670846771707 - type: nauc_recall_at_100_diff1 value: 75.23306095335836 - type: nauc_recall_at_100_max value: 37.54281648247423 - type: nauc_recall_at_100_std value: 8.434289114377373 - type: nauc_recall_at_10_diff1 value: 72.7872912723047 - type: nauc_recall_at_10_max value: 34.261519652104184 - type: nauc_recall_at_10_std value: -34.60101950810808 - type: nauc_recall_at_1_diff1 value: 79.5243725217315 - type: nauc_recall_at_1_max value: 27.092773841207002 - type: nauc_recall_at_1_std value: -26.223200911204543 - type: nauc_recall_at_20_diff1 value: 72.8297963091964 - type: nauc_recall_at_20_max value: 36.070220569670916 - type: nauc_recall_at_20_std value: -27.20897179168245 - type: nauc_recall_at_3_diff1 value: 73.47456374650459 - type: nauc_recall_at_3_max value: 29.901663407294816 - type: nauc_recall_at_3_std value: -32.83329537040381 - type: nauc_recall_at_5_diff1 value: 73.05025750827126 - type: nauc_recall_at_5_max value: 32.35733470860963 - type: nauc_recall_at_5_std value: -34.32357558493091 - type: ndcg_at_1 value: 80.4 - type: ndcg_at_10 value: 87.366 - type: ndcg_at_100 value: 88.7 - type: ndcg_at_1000 value: 88.842 - type: ndcg_at_20 value: 88.11 - type: ndcg_at_3 value: 84.52499999999999 - type: ndcg_at_5 value: 86.047 - type: precision_at_1 value: 80.4 - type: precision_at_10 value: 13.235 - type: precision_at_100 value: 1.516 - type: precision_at_1000 value: 0.156 - type: precision_at_20 value: 7.037 - type: precision_at_3 value: 36.9 - type: precision_at_5 value: 24.236 - type: recall_at_1 value: 69.95700000000001 - type: recall_at_10 value: 94.535 - type: recall_at_100 value: 99.164 - type: recall_at_1000 value: 99.855 - type: recall_at_20 value: 96.974 - type: recall_at_3 value: 86.33800000000001 - type: recall_at_5 value: 90.69 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: mteb/scidocs config: default split: test revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88 metrics: - type: main_score value: 21.492 - type: map_at_1 value: 5.192 - type: map_at_10 value: 12.959000000000001 - type: map_at_100 value: 14.963999999999999 - type: map_at_1000 value: 15.261 - type: map_at_20 value: 13.988999999999999 - type: map_at_3 value: 9.235 - type: map_at_5 value: 11.042 - type: mrr_at_1 value: 25.5 - type: mrr_at_10 value: 36.37313492063491 - type: mrr_at_100 value: 37.36517957347626 - type: mrr_at_1000 value: 37.42538601073437 - type: mrr_at_20 value: 36.987896404421136 - type: mrr_at_3 value: 32.966666666666654 - type: mrr_at_5 value: 34.95166666666664 - type: nauc_map_at_1000_diff1 value: 13.635120934154395 - type: nauc_map_at_1000_max value: 28.03542983005195 - type: nauc_map_at_1000_std value: 17.07156940311778 - type: nauc_map_at_100_diff1 value: 13.59237295184475 - type: nauc_map_at_100_max value: 27.992291365051237 - type: nauc_map_at_100_std value: 16.926533467400464 - type: nauc_map_at_10_diff1 value: 14.149193235999993 - type: nauc_map_at_10_max value: 26.520643811139305 - type: nauc_map_at_10_std value: 13.168673602548925 - type: nauc_map_at_1_diff1 value: 20.096094508148465 - type: nauc_map_at_1_max value: 17.41582245576302 - type: nauc_map_at_1_std value: 5.771729007558897 - type: nauc_map_at_20_diff1 value: 13.977726400526427 - type: nauc_map_at_20_max value: 27.2322235491895 - type: nauc_map_at_20_std value: 14.972781677750435 - type: nauc_map_at_3_diff1 value: 17.371153027460355 - type: nauc_map_at_3_max value: 24.457758503208254 - type: nauc_map_at_3_std value: 7.719726821179824 - type: nauc_map_at_5_diff1 value: 14.600442843442574 - type: nauc_map_at_5_max value: 25.899736370856296 - type: nauc_map_at_5_std value: 10.125349354853359 - type: nauc_mrr_at_1000_diff1 value: 18.70342821390236 - type: nauc_mrr_at_1000_max value: 23.365194520549114 - type: nauc_mrr_at_1000_std value: 12.185114294903236 - type: nauc_mrr_at_100_diff1 value: 18.677858738015907 - type: nauc_mrr_at_100_max value: 23.372641996726742 - type: nauc_mrr_at_100_std value: 12.216130561991909 - type: nauc_mrr_at_10_diff1 value: 18.79094453090232 - type: nauc_mrr_at_10_max value: 23.511686337006466 - type: nauc_mrr_at_10_std value: 11.879716687008134 - type: nauc_mrr_at_1_diff1 value: 20.10455171810408 - type: nauc_mrr_at_1_max value: 17.741566234315428 - type: nauc_mrr_at_1_std value: 6.1676764583652215 - type: nauc_mrr_at_20_diff1 value: 18.70143648544655 - type: nauc_mrr_at_20_max value: 23.45603239095019 - type: nauc_mrr_at_20_std value: 12.244613576686202 - type: nauc_mrr_at_3_diff1 value: 18.894662528857374 - type: nauc_mrr_at_3_max value: 23.3739038101588 - type: nauc_mrr_at_3_std value: 10.4709044796543 - type: nauc_mrr_at_5_diff1 value: 18.877786065095563 - type: nauc_mrr_at_5_max value: 23.78061081203872 - type: nauc_mrr_at_5_std value: 11.847882917869622 - type: nauc_ndcg_at_1000_diff1 value: 13.99159027398115 - type: nauc_ndcg_at_1000_max value: 29.44766808611483 - type: nauc_ndcg_at_1000_std value: 24.289749574699915 - type: nauc_ndcg_at_100_diff1 value: 13.164020363258746 - type: nauc_ndcg_at_100_max value: 29.642442997167723 - type: nauc_ndcg_at_100_std value: 23.761764515453866 - type: nauc_ndcg_at_10_diff1 value: 14.839883268638546 - type: nauc_ndcg_at_10_max value: 27.21043708455449 - type: nauc_ndcg_at_10_std value: 15.56110419291775 - type: nauc_ndcg_at_1_diff1 value: 20.10455171810408 - type: nauc_ndcg_at_1_max value: 17.741566234315428 - type: nauc_ndcg_at_1_std value: 6.1676764583652215 - type: nauc_ndcg_at_20_diff1 value: 14.27998110295395 - type: nauc_ndcg_at_20_max value: 28.2492026337839 - type: nauc_ndcg_at_20_std value: 18.822356982979105 - type: nauc_ndcg_at_3_diff1 value: 17.659263157535445 - type: nauc_ndcg_at_3_max value: 25.416706421591396 - type: nauc_ndcg_at_3_std value: 9.650689638152636 - type: nauc_ndcg_at_5_diff1 value: 15.38459833918123 - type: nauc_ndcg_at_5_max value: 26.92495519416969 - type: nauc_ndcg_at_5_std value: 12.71017696809276 - type: nauc_precision_at_1000_diff1 value: 6.128490135458364 - type: nauc_precision_at_1000_max value: 23.52693893261883 - type: nauc_precision_at_1000_std value: 36.280432732819925 - type: nauc_precision_at_100_diff1 value: 5.306163791220436 - type: nauc_precision_at_100_max value: 27.67851033239246 - type: nauc_precision_at_100_std value: 34.29821573752515 - type: nauc_precision_at_10_diff1 value: 10.829686435425472 - type: nauc_precision_at_10_max value: 27.201648684015318 - type: nauc_precision_at_10_std value: 19.376999508233254 - type: nauc_precision_at_1_diff1 value: 20.10455171810408 - type: nauc_precision_at_1_max value: 17.741566234315428 - type: nauc_precision_at_1_std value: 6.1676764583652215 - type: nauc_precision_at_20_diff1 value: 9.416169626702048 - type: nauc_precision_at_20_max value: 27.65257998670333 - type: nauc_precision_at_20_std value: 24.761868509805826 - type: nauc_precision_at_3_diff1 value: 16.666456902017348 - type: nauc_precision_at_3_max value: 27.9969730961105 - type: nauc_precision_at_3_std value: 10.991562741393231 - type: nauc_precision_at_5_diff1 value: 12.26205064462843 - type: nauc_precision_at_5_max value: 29.083848730874095 - type: nauc_precision_at_5_std value: 15.66630836555747 - type: nauc_recall_at_1000_diff1 value: 5.600277836894063 - type: nauc_recall_at_1000_max value: 23.228705161815526 - type: nauc_recall_at_1000_std value: 36.822431061799485 - type: nauc_recall_at_100_diff1 value: 4.991781244867178 - type: nauc_recall_at_100_max value: 27.70095625483475 - type: nauc_recall_at_100_std value: 34.67168431597854 - type: nauc_recall_at_10_diff1 value: 10.580860425931972 - type: nauc_recall_at_10_max value: 27.145829414223666 - type: nauc_recall_at_10_std value: 19.330630157067382 - type: nauc_recall_at_1_diff1 value: 20.096094508148465 - type: nauc_recall_at_1_max value: 17.41582245576302 - type: nauc_recall_at_1_std value: 5.771729007558897 - type: nauc_recall_at_20_diff1 value: 9.06945331260344 - type: nauc_recall_at_20_max value: 27.56725251066482 - type: nauc_recall_at_20_std value: 24.77644509886098 - type: nauc_recall_at_3_diff1 value: 16.660507676429322 - type: nauc_recall_at_3_max value: 27.816546386536434 - type: nauc_recall_at_3_std value: 10.687824478247007 - type: nauc_recall_at_5_diff1 value: 11.992514446369388 - type: nauc_recall_at_5_max value: 28.789031176671948 - type: nauc_recall_at_5_std value: 15.422118990090805 - type: ndcg_at_1 value: 25.5 - type: ndcg_at_10 value: 21.492 - type: ndcg_at_100 value: 29.022 - type: ndcg_at_1000 value: 34.298 - type: ndcg_at_20 value: 24.237000000000002 - type: ndcg_at_3 value: 20.392 - type: ndcg_at_5 value: 17.801000000000002 - type: precision_at_1 value: 25.5 - type: precision_at_10 value: 11.09 - type: precision_at_100 value: 2.1919999999999997 - type: precision_at_1000 value: 0.346 - type: precision_at_20 value: 7.135 - type: precision_at_3 value: 18.933 - type: precision_at_5 value: 15.52 - type: recall_at_1 value: 5.192 - type: recall_at_10 value: 22.512999999999998 - type: recall_at_100 value: 44.505 - type: recall_at_1000 value: 70.267 - type: recall_at_20 value: 28.965000000000003 - type: recall_at_3 value: 11.522 - type: recall_at_5 value: 15.751999999999999 - task: type: Retrieval dataset: name: MTEB SciFact type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: main_score value: 71.586 - type: map_at_1 value: 56.760999999999996 - type: map_at_10 value: 66.893 - type: map_at_100 value: 67.42 - type: map_at_1000 value: 67.44200000000001 - type: map_at_20 value: 67.232 - type: map_at_3 value: 64.193 - type: map_at_5 value: 65.73400000000001 - type: mrr_at_1 value: 60.0 - type: mrr_at_10 value: 68.20383597883595 - type: mrr_at_100 value: 68.58867453733343 - type: mrr_at_1000 value: 68.61117469977329 - type: mrr_at_20 value: 68.43973740684265 - type: mrr_at_3 value: 66.11111111111111 - type: mrr_at_5 value: 67.44444444444446 - type: nauc_map_at_1000_diff1 value: 72.66688261123035 - type: nauc_map_at_1000_max value: 61.02926282006283 - type: nauc_map_at_1000_std value: 11.084549829740526 - type: nauc_map_at_100_diff1 value: 72.66226192320828 - type: nauc_map_at_100_max value: 61.04393223108811 - type: nauc_map_at_100_std value: 11.101529343291695 - type: nauc_map_at_10_diff1 value: 72.66732266693091 - type: nauc_map_at_10_max value: 61.24124296311832 - type: nauc_map_at_10_std value: 10.91179451961794 - type: nauc_map_at_1_diff1 value: 74.2356464256346 - type: nauc_map_at_1_max value: 54.06962758957632 - type: nauc_map_at_1_std value: 0.8037891907963532 - type: nauc_map_at_20_diff1 value: 72.65198594061253 - type: nauc_map_at_20_max value: 61.130159351448185 - type: nauc_map_at_20_std value: 11.2246899245522 - type: nauc_map_at_3_diff1 value: 72.78578673303954 - type: nauc_map_at_3_max value: 59.19073262936321 - type: nauc_map_at_3_std value: 8.460301560522968 - type: nauc_map_at_5_diff1 value: 72.55004168261968 - type: nauc_map_at_5_max value: 59.75181935082357 - type: nauc_map_at_5_std value: 9.440299527201889 - type: nauc_mrr_at_1000_diff1 value: 72.82720348470325 - type: nauc_mrr_at_1000_max value: 62.344231223741446 - type: nauc_mrr_at_1000_std value: 12.60196558488974 - type: nauc_mrr_at_100_diff1 value: 72.82236849255094 - type: nauc_mrr_at_100_max value: 62.35799491393125 - type: nauc_mrr_at_100_std value: 12.617900773655673 - type: nauc_mrr_at_10_diff1 value: 72.7722847495086 - type: nauc_mrr_at_10_max value: 62.66642401155435 - type: nauc_mrr_at_10_std value: 12.906381237738746 - type: nauc_mrr_at_1_diff1 value: 74.71208073612343 - type: nauc_mrr_at_1_max value: 59.50430394775893 - type: nauc_mrr_at_1_std value: 8.129514198080512 - type: nauc_mrr_at_20_diff1 value: 72.78312367361772 - type: nauc_mrr_at_20_max value: 62.421122493761885 - type: nauc_mrr_at_20_std value: 12.693437522498588 - type: nauc_mrr_at_3_diff1 value: 73.50670156385345 - type: nauc_mrr_at_3_max value: 62.01717537699209 - type: nauc_mrr_at_3_std value: 11.926548252191182 - type: nauc_mrr_at_5_diff1 value: 72.62204028549876 - type: nauc_mrr_at_5_max value: 62.319358766312085 - type: nauc_mrr_at_5_std value: 13.081257923284342 - type: nauc_ndcg_at_1000_diff1 value: 72.29960539074736 - type: nauc_ndcg_at_1000_max value: 62.75096959221402 - type: nauc_ndcg_at_1000_std value: 13.81528462505362 - type: nauc_ndcg_at_100_diff1 value: 72.19985782073529 - type: nauc_ndcg_at_100_max value: 63.18837705326287 - type: nauc_ndcg_at_100_std value: 14.506479655117138 - type: nauc_ndcg_at_10_diff1 value: 71.85759847832983 - type: nauc_ndcg_at_10_max value: 64.150996056865 - type: nauc_ndcg_at_10_std value: 14.580606901634278 - type: nauc_ndcg_at_1_diff1 value: 74.71208073612343 - type: nauc_ndcg_at_1_max value: 59.50430394775893 - type: nauc_ndcg_at_1_std value: 8.129514198080512 - type: nauc_ndcg_at_20_diff1 value: 71.80987178228351 - type: nauc_ndcg_at_20_max value: 63.56269460865743 - type: nauc_ndcg_at_20_std value: 15.024978004625922 - type: nauc_ndcg_at_3_diff1 value: 72.35095651602592 - type: nauc_ndcg_at_3_max value: 61.60548011855679 - type: nauc_ndcg_at_3_std value: 12.048248788835263 - type: nauc_ndcg_at_5_diff1 value: 71.48615621881864 - type: nauc_ndcg_at_5_max value: 61.72870035979784 - type: nauc_ndcg_at_5_std value: 12.83048357446691 - type: nauc_precision_at_1000_diff1 value: -14.743011420972 - type: nauc_precision_at_1000_max value: 19.281995763080158 - type: nauc_precision_at_1000_std value: 49.6140660398164 - type: nauc_precision_at_100_diff1 value: 0.11278174806205563 - type: nauc_precision_at_100_max value: 29.704511820077332 - type: nauc_precision_at_100_std value: 47.84916954122579 - type: nauc_precision_at_10_diff1 value: 20.498227967235728 - type: nauc_precision_at_10_max value: 47.883119365891595 - type: nauc_precision_at_10_std value: 45.182178693450595 - type: nauc_precision_at_1_diff1 value: 74.71208073612343 - type: nauc_precision_at_1_max value: 59.50430394775893 - type: nauc_precision_at_1_std value: 8.129514198080512 - type: nauc_precision_at_20_diff1 value: 12.551737222341455 - type: nauc_precision_at_20_max value: 40.618899501225634 - type: nauc_precision_at_20_std value: 48.5598454249067 - type: nauc_precision_at_3_diff1 value: 47.67720764601145 - type: nauc_precision_at_3_max value: 56.50632017305064 - type: nauc_precision_at_3_std value: 31.14175140162157 - type: nauc_precision_at_5_diff1 value: 35.10058622792819 - type: nauc_precision_at_5_max value: 51.88948872657981 - type: nauc_precision_at_5_std value: 37.62796957461928 - type: nauc_recall_at_1000_diff1 value: 79.57516339869238 - type: nauc_recall_at_1000_max value: 86.11111111111035 - type: nauc_recall_at_1000_std value: 79.57516339869238 - type: nauc_recall_at_100_diff1 value: 70.50859559510081 - type: nauc_recall_at_100_max value: 79.17009941231396 - type: nauc_recall_at_100_std value: 44.32910419069595 - type: nauc_recall_at_10_diff1 value: 66.16118569361245 - type: nauc_recall_at_10_max value: 74.73542948302286 - type: nauc_recall_at_10_std value: 27.680330939810037 - type: nauc_recall_at_1_diff1 value: 74.2356464256346 - type: nauc_recall_at_1_max value: 54.06962758957632 - type: nauc_recall_at_1_std value: 0.8037891907963532 - type: nauc_recall_at_20_diff1 value: 65.4748436545527 - type: nauc_recall_at_20_max value: 73.81532199081235 - type: nauc_recall_at_20_std value: 33.59324708196253 - type: nauc_recall_at_3_diff1 value: 68.83194804473622 - type: nauc_recall_at_3_max value: 61.77722610439669 - type: nauc_recall_at_3_std value: 13.984923756556714 - type: nauc_recall_at_5_diff1 value: 65.51467417209523 - type: nauc_recall_at_5_max value: 64.08276291427661 - type: nauc_recall_at_5_std value: 19.976472037847167 - type: ndcg_at_1 value: 60.0 - type: ndcg_at_10 value: 71.586 - type: ndcg_at_100 value: 73.76899999999999 - type: ndcg_at_1000 value: 74.386 - type: ndcg_at_20 value: 72.612 - type: ndcg_at_3 value: 66.944 - type: ndcg_at_5 value: 69.333 - type: precision_at_1 value: 60.0 - type: precision_at_10 value: 9.6 - type: precision_at_100 value: 1.073 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_20 value: 5.033 - type: precision_at_3 value: 26.333000000000002 - type: precision_at_5 value: 17.4 - type: recall_at_1 value: 56.760999999999996 - type: recall_at_10 value: 84.589 - type: recall_at_100 value: 94.333 - type: recall_at_1000 value: 99.333 - type: recall_at_20 value: 88.43299999999999 - type: recall_at_3 value: 72.10600000000001 - type: recall_at_5 value: 78.194 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: mteb/trec-covid config: default split: test revision: bb9466bac8153a0349341eb1b22e06409e78ef4e metrics: - type: main_score value: 84.60600000000001 - type: map_at_1 value: 0.257 - type: map_at_10 value: 2.196 - type: map_at_100 value: 13.252 - type: map_at_1000 value: 31.473000000000003 - type: map_at_20 value: 4.023000000000001 - type: map_at_3 value: 0.722 - type: map_at_5 value: 1.146 - type: mrr_at_1 value: 94.0 - type: mrr_at_10 value: 97.0 - type: mrr_at_100 value: 97.0 - type: mrr_at_1000 value: 97.0 - type: mrr_at_20 value: 97.0 - type: mrr_at_3 value: 97.0 - type: mrr_at_5 value: 97.0 - type: nauc_map_at_1000_diff1 value: -30.674816554207062 - type: nauc_map_at_1000_max value: 53.18598689657068 - type: nauc_map_at_1000_std value: 78.88325309469121 - type: nauc_map_at_100_diff1 value: -17.6877824653978 - type: nauc_map_at_100_max value: 19.584159765315658 - type: nauc_map_at_100_std value: 48.051154190992726 - type: nauc_map_at_10_diff1 value: 20.076631089898626 - type: nauc_map_at_10_max value: -8.642556160185636 - type: nauc_map_at_10_std value: -5.768698617334298 - type: nauc_map_at_1_diff1 value: 27.342260509653798 - type: nauc_map_at_1_max value: -23.400451210297994 - type: nauc_map_at_1_std value: -21.152006353733853 - type: nauc_map_at_20_diff1 value: 8.019321726240506 - type: nauc_map_at_20_max value: -1.4826378210544222 - type: nauc_map_at_20_std value: 5.698208117745366 - type: nauc_map_at_3_diff1 value: 32.073377946749446 - type: nauc_map_at_3_max value: -13.099353983204654 - type: nauc_map_at_3_std value: -15.36319127398037 - type: nauc_map_at_5_diff1 value: 22.500045815797876 - type: nauc_map_at_5_max value: -8.548135411428023 - type: nauc_map_at_5_std value: -8.547850460331334 - type: nauc_mrr_at_1000_diff1 value: -6.022408963585526 - type: nauc_mrr_at_1000_max value: 4.481792717087155 - type: nauc_mrr_at_1000_std value: 51.6962340491753 - type: nauc_mrr_at_100_diff1 value: -6.022408963585526 - type: nauc_mrr_at_100_max value: 4.481792717087155 - type: nauc_mrr_at_100_std value: 51.6962340491753 - type: nauc_mrr_at_10_diff1 value: -6.022408963585526 - type: nauc_mrr_at_10_max value: 4.481792717087155 - type: nauc_mrr_at_10_std value: 51.6962340491753 - type: nauc_mrr_at_1_diff1 value: -6.022408963585076 - type: nauc_mrr_at_1_max value: 4.481792717087146 - type: nauc_mrr_at_1_std value: 51.69623404917518 - type: nauc_mrr_at_20_diff1 value: -6.022408963585526 - type: nauc_mrr_at_20_max value: 4.481792717087155 - type: nauc_mrr_at_20_std value: 51.6962340491753 - type: nauc_mrr_at_3_diff1 value: -6.022408963585526 - type: nauc_mrr_at_3_max value: 4.481792717087155 - type: nauc_mrr_at_3_std value: 51.6962340491753 - type: nauc_mrr_at_5_diff1 value: -6.022408963585526 - type: nauc_mrr_at_5_max value: 4.481792717087155 - type: nauc_mrr_at_5_std value: 51.6962340491753 - type: nauc_ndcg_at_1000_diff1 value: -20.79697283984295 - type: nauc_ndcg_at_1000_max value: 52.97671908009218 - type: nauc_ndcg_at_1000_std value: 75.43907707019758 - type: nauc_ndcg_at_100_diff1 value: -38.620752706946455 - type: nauc_ndcg_at_100_max value: 49.41307462381511 - type: nauc_ndcg_at_100_std value: 81.33299379244252 - type: nauc_ndcg_at_10_diff1 value: -18.611906363037356 - type: nauc_ndcg_at_10_max value: 44.20544651664479 - type: nauc_ndcg_at_10_std value: 61.322552829935816 - type: nauc_ndcg_at_1_diff1 value: 18.625935567849073 - type: nauc_ndcg_at_1_max value: -10.104132769280879 - type: nauc_ndcg_at_1_std value: 22.449560689879743 - type: nauc_ndcg_at_20_diff1 value: -30.61130208138771 - type: nauc_ndcg_at_20_max value: 52.68851710375231 - type: nauc_ndcg_at_20_std value: 69.72357683382992 - type: nauc_ndcg_at_3_diff1 value: 5.695394821691213 - type: nauc_ndcg_at_3_max value: 37.909122367102135 - type: nauc_ndcg_at_3_std value: 46.2366603255159 - type: nauc_ndcg_at_5_diff1 value: -15.273067832464731 - type: nauc_ndcg_at_5_max value: 49.7054639475091 - type: nauc_ndcg_at_5_std value: 58.83754007826166 - type: nauc_precision_at_1000_diff1 value: -31.565302588492035 - type: nauc_precision_at_1000_max value: 52.56214379514724 - type: nauc_precision_at_1000_std value: 53.40618234326055 - type: nauc_precision_at_100_diff1 value: -44.67273120709088 - type: nauc_precision_at_100_max value: 48.30381155522576 - type: nauc_precision_at_100_std value: 82.1984661602578 - type: nauc_precision_at_10_diff1 value: -24.737383556860145 - type: nauc_precision_at_10_max value: 52.816815002878556 - type: nauc_precision_at_10_std value: 67.99052410030845 - type: nauc_precision_at_1_diff1 value: -6.022408963585076 - type: nauc_precision_at_1_max value: 4.481792717087146 - type: nauc_precision_at_1_std value: 51.69623404917518 - type: nauc_precision_at_20_diff1 value: -40.23628054967093 - type: nauc_precision_at_20_max value: 56.980056980057014 - type: nauc_precision_at_20_std value: 76.60976777785895 - type: nauc_precision_at_3_diff1 value: -4.661784068466279 - type: nauc_precision_at_3_max value: 59.052007899934125 - type: nauc_precision_at_3_std value: 58.187952600394986 - type: nauc_precision_at_5_diff1 value: -38.11848143512736 - type: nauc_precision_at_5_max value: 68.6149353358365 - type: nauc_precision_at_5_std value: 73.55652899457661 - type: nauc_recall_at_1000_diff1 value: -14.886527444436345 - type: nauc_recall_at_1000_max value: 48.07492302795808 - type: nauc_recall_at_1000_std value: 65.05623212485906 - type: nauc_recall_at_100_diff1 value: -8.148385729388195 - type: nauc_recall_at_100_max value: 8.041615364614533 - type: nauc_recall_at_100_std value: 33.77187914574611 - type: nauc_recall_at_10_diff1 value: 24.333628413035942 - type: nauc_recall_at_10_max value: -14.577877145192078 - type: nauc_recall_at_10_std value: -12.131819145098557 - type: nauc_recall_at_1_diff1 value: 27.342260509653798 - type: nauc_recall_at_1_max value: -23.400451210297994 - type: nauc_recall_at_1_std value: -21.152006353733853 - type: nauc_recall_at_20_diff1 value: 13.695556376785564 - type: nauc_recall_at_20_max value: -8.872009346408264 - type: nauc_recall_at_20_std value: -3.163199444247112 - type: nauc_recall_at_3_diff1 value: 32.00442538217753 - type: nauc_recall_at_3_max value: -15.159737942664552 - type: nauc_recall_at_3_std value: -17.530833132440645 - type: nauc_recall_at_5_diff1 value: 22.64740552912405 - type: nauc_recall_at_5_max value: -12.947090597010414 - type: nauc_recall_at_5_std value: -12.914478822476807 - type: ndcg_at_1 value: 88.0 - type: ndcg_at_10 value: 84.60600000000001 - type: ndcg_at_100 value: 64.31700000000001 - type: ndcg_at_1000 value: 56.40500000000001 - type: ndcg_at_20 value: 80.561 - type: ndcg_at_3 value: 87.87700000000001 - type: ndcg_at_5 value: 86.641 - type: precision_at_1 value: 94.0 - type: precision_at_10 value: 88.2 - type: precision_at_100 value: 65.9 - type: precision_at_1000 value: 25.019999999999996 - type: precision_at_20 value: 84.7 - type: precision_at_3 value: 92.0 - type: precision_at_5 value: 90.0 - type: recall_at_1 value: 0.257 - type: recall_at_10 value: 2.338 - type: recall_at_100 value: 15.831999999999999 - type: recall_at_1000 value: 52.519000000000005 - type: recall_at_20 value: 4.367 - type: recall_at_3 value: 0.74 - type: recall_at_5 value: 1.196 - task: type: Retrieval dataset: name: MTEB Touche2020 type: mteb/touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: main_score value: 31.426 - type: map_at_1 value: 3.4709999999999996 - type: map_at_10 value: 13.236999999999998 - type: map_at_100 value: 19.521 - type: map_at_1000 value: 21.224 - type: map_at_20 value: 15.626000000000001 - type: map_at_3 value: 7.152 - type: map_at_5 value: 9.914000000000001 - type: mrr_at_1 value: 44.89795918367347 - type: mrr_at_10 value: 57.54373177842565 - type: mrr_at_100 value: 57.855267710139536 - type: mrr_at_1000 value: 57.855267710139536 - type: mrr_at_20 value: 57.70071764969724 - type: mrr_at_3 value: 52.72108843537414 - type: mrr_at_5 value: 55.06802721088435 - type: nauc_map_at_1000_diff1 value: 21.148857552115558 - type: nauc_map_at_1000_max value: 2.0837572569021323 - type: nauc_map_at_1000_std value: 3.203419709665347 - type: nauc_map_at_100_diff1 value: 21.383778167597878 - type: nauc_map_at_100_max value: 0.965767943155967 - type: nauc_map_at_100_std value: 0.3949924961020957 - type: nauc_map_at_10_diff1 value: 27.178555638086394 - type: nauc_map_at_10_max value: 4.480675175857958 - type: nauc_map_at_10_std value: -13.69553539513878 - type: nauc_map_at_1_diff1 value: 27.63901823865334 - type: nauc_map_at_1_max value: -18.6387233237763 - type: nauc_map_at_1_std value: -27.02164241863646 - type: nauc_map_at_20_diff1 value: 23.892104752374888 - type: nauc_map_at_20_max value: 3.5343136621362348 - type: nauc_map_at_20_std value: -8.765101188860816 - type: nauc_map_at_3_diff1 value: 22.065793929837493 - type: nauc_map_at_3_max value: 0.8063396680860568 - type: nauc_map_at_3_std value: -20.404849396621824 - type: nauc_map_at_5_diff1 value: 22.66626080580714 - type: nauc_map_at_5_max value: 5.423340658352383 - type: nauc_map_at_5_std value: -18.31523779843455 - type: nauc_mrr_at_1000_diff1 value: 30.520722269282665 - type: nauc_mrr_at_1000_max value: -16.644959497742267 - type: nauc_mrr_at_1000_std value: -16.3824126273053 - type: nauc_mrr_at_100_diff1 value: 30.520722269282665 - type: nauc_mrr_at_100_max value: -16.644959497742267 - type: nauc_mrr_at_100_std value: -16.3824126273053 - type: nauc_mrr_at_10_diff1 value: 30.428248939332974 - type: nauc_mrr_at_10_max value: -16.300183919261585 - type: nauc_mrr_at_10_std value: -15.404823235836309 - type: nauc_mrr_at_1_diff1 value: 27.041346572613474 - type: nauc_mrr_at_1_max value: -23.181309312755804 - type: nauc_mrr_at_1_std value: -24.33076726484014 - type: nauc_mrr_at_20_diff1 value: 30.676558567379303 - type: nauc_mrr_at_20_max value: -16.914268763031416 - type: nauc_mrr_at_20_std value: -15.77742854976336 - type: nauc_mrr_at_3_diff1 value: 31.718457109787096 - type: nauc_mrr_at_3_max value: -15.508391132202235 - type: nauc_mrr_at_3_std value: -20.33229438349494 - type: nauc_mrr_at_5_diff1 value: 28.73798376227693 - type: nauc_mrr_at_5_max value: -16.086295031060196 - type: nauc_mrr_at_5_std value: -15.644604635769321 - type: nauc_ndcg_at_1000_diff1 value: 22.158724660189606 - type: nauc_ndcg_at_1000_max value: -3.1755686809941475 - type: nauc_ndcg_at_1000_std value: 19.258386224159075 - type: nauc_ndcg_at_100_diff1 value: 21.83846748649288 - type: nauc_ndcg_at_100_max value: -10.939957598756036 - type: nauc_ndcg_at_100_std value: 14.729678880436623 - type: nauc_ndcg_at_10_diff1 value: 26.944882726098424 - type: nauc_ndcg_at_10_max value: -3.5176483833346617 - type: nauc_ndcg_at_10_std value: -5.400606773697211 - type: nauc_ndcg_at_1_diff1 value: 26.649410985172985 - type: nauc_ndcg_at_1_max value: -18.806716526067493 - type: nauc_ndcg_at_1_std value: -25.100244999343506 - type: nauc_ndcg_at_20_diff1 value: 24.860266153648315 - type: nauc_ndcg_at_20_max value: -7.521401821712892 - type: nauc_ndcg_at_20_std value: -3.3696577425983003 - type: nauc_ndcg_at_3_diff1 value: 23.9933326962406 - type: nauc_ndcg_at_3_max value: -0.4609479344284664 - type: nauc_ndcg_at_3_std value: -15.176459166869897 - type: nauc_ndcg_at_5_diff1 value: 22.50595978713142 - type: nauc_ndcg_at_5_max value: -2.1093870656000857 - type: nauc_ndcg_at_5_std value: -12.732197425528257 - type: nauc_precision_at_1000_diff1 value: -20.335120385950024 - type: nauc_precision_at_1000_max value: 26.95109729939765 - type: nauc_precision_at_1000_std value: 29.981685890622117 - type: nauc_precision_at_100_diff1 value: -2.782114329320704 - type: nauc_precision_at_100_max value: 2.9489322002048604 - type: nauc_precision_at_100_std value: 67.3074073674319 - type: nauc_precision_at_10_diff1 value: 21.385177180383383 - type: nauc_precision_at_10_max value: -2.4696365259422817 - type: nauc_precision_at_10_std value: 14.469784299536673 - type: nauc_precision_at_1_diff1 value: 27.041346572613474 - type: nauc_precision_at_1_max value: -23.181309312755804 - type: nauc_precision_at_1_std value: -24.33076726484014 - type: nauc_precision_at_20_diff1 value: 11.993846579997673 - type: nauc_precision_at_20_max value: -2.4792189693296227 - type: nauc_precision_at_20_std value: 28.581394687807745 - type: nauc_precision_at_3_diff1 value: 20.70568446328836 - type: nauc_precision_at_3_max value: 0.37326398699875984 - type: nauc_precision_at_3_std value: -12.983918676694389 - type: nauc_precision_at_5_diff1 value: 19.47466335828124 - type: nauc_precision_at_5_max value: -1.8921617684385994 - type: nauc_precision_at_5_std value: -6.533875294402164 - type: nauc_recall_at_1000_diff1 value: 7.611201305723156 - type: nauc_recall_at_1000_max value: 5.6416194035820055 - type: nauc_recall_at_1000_std value: 61.695208644278 - type: nauc_recall_at_100_diff1 value: 10.0183258158735 - type: nauc_recall_at_100_max value: -10.950612455698973 - type: nauc_recall_at_100_std value: 33.06069987640471 - type: nauc_recall_at_10_diff1 value: 24.738210305731535 - type: nauc_recall_at_10_max value: -2.6592454032071546 - type: nauc_recall_at_10_std value: -4.83987517793115 - type: nauc_recall_at_1_diff1 value: 27.63901823865334 - type: nauc_recall_at_1_max value: -18.6387233237763 - type: nauc_recall_at_1_std value: -27.02164241863646 - type: nauc_recall_at_20_diff1 value: 17.79601177409034 - type: nauc_recall_at_20_max value: -6.681637093148051 - type: nauc_recall_at_20_std value: 3.369193919932238 - type: nauc_recall_at_3_diff1 value: 24.9589431081204 - type: nauc_recall_at_3_max value: 2.4783640980500232 - type: nauc_recall_at_3_std value: -19.567415651090702 - type: nauc_recall_at_5_diff1 value: 23.71803410135437 - type: nauc_recall_at_5_max value: 1.6294309357641652 - type: nauc_recall_at_5_std value: -15.365511906408983 - type: ndcg_at_1 value: 40.816 - type: ndcg_at_10 value: 31.426 - type: ndcg_at_100 value: 41.558 - type: ndcg_at_1000 value: 53.042 - type: ndcg_at_20 value: 31.108999999999998 - type: ndcg_at_3 value: 35.518 - type: ndcg_at_5 value: 33.235 - type: precision_at_1 value: 44.897999999999996 - type: precision_at_10 value: 27.551 - type: precision_at_100 value: 8.204 - type: precision_at_1000 value: 1.582 - type: precision_at_20 value: 19.796 - type: precision_at_3 value: 36.735 - type: precision_at_5 value: 33.061 - type: recall_at_1 value: 3.4709999999999996 - type: recall_at_10 value: 19.563 - type: recall_at_100 value: 50.3 - type: recall_at_1000 value: 85.13199999999999 - type: recall_at_20 value: 26.738 - type: recall_at_3 value: 7.8420000000000005 - type: recall_at_5 value: 11.994 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 68.29850746268657 - type: ap value: 30.109785890841966 - type: ap_weighted value: 30.109785890841966 - type: f1 value: 61.76875915202924 - type: f1_weighted value: 71.32073190458556 - type: main_score value: 68.29850746268657 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification (default) type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 90.3068 - type: ap value: 86.17914339624038 - type: ap_weighted value: 86.17914339624038 - type: f1 value: 90.29716826358077 - type: f1_weighted value: 90.29716826358077 - type: main_score value: 90.3068 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 46.272000000000006 - type: f1 value: 45.57042543386915 - type: f1_weighted value: 45.57042543386915 - type: main_score value: 46.272000000000006 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P (default) type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: main_score value: 44.9469238081379 - type: v_measure value: 44.9469238081379 - type: v_measure_std value: 13.26811262671461 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S (default) type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: main_score value: 34.12071448053325 - type: v_measure value: 34.12071448053325 - type: v_measure_std value: 13.7019879046405 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions (default) type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: main_score value: 61.597667288657846 - type: map value: 61.597667288657846 - type: mrr value: 75.57940904893813 - type: nAUC_map_diff1 value: 8.745172077340095 - type: nAUC_map_max value: 20.114863024035493 - type: nAUC_map_std value: 15.991351189572192 - type: nAUC_mrr_diff1 value: 20.781369244159983 - type: nAUC_mrr_max value: 30.78542570228559 - type: nAUC_mrr_std value: 19.861484857303676 - task: type: STS dataset: name: MTEB BIOSSES (default) type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cosine_pearson value: 88.55587996301419 - type: cosine_spearman value: 86.40317357420093 - type: euclidean_pearson value: 86.93771958250231 - type: euclidean_spearman value: 86.40317357420093 - type: main_score value: 86.40317357420093 - type: manhattan_pearson value: 86.92196577117366 - type: manhattan_spearman value: 85.79834051556095 - type: pearson value: 88.55587996301419 - type: spearman value: 86.40317357420093 - task: type: Classification dataset: name: MTEB Banking77Classification (default) type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 80.0064935064935 - type: f1 value: 79.29524254086299 - type: f1_weighted value: 79.295242540863 - type: main_score value: 80.0064935064935 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P (default) type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: main_score value: 35.27186813341181 - type: v_measure value: 35.27186813341181 - type: v_measure_std value: 0.8621482145872432 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S (default) type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: main_score value: 28.411805064852295 - type: v_measure value: 28.411805064852295 - type: v_measure_std value: 0.7194290078011281 - task: type: Classification dataset: name: MTEB EmotionClassification (default) type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 43.675 - type: f1 value: 40.15061931375577 - type: f1_weighted value: 45.714186572727066 - type: main_score value: 43.675 - task: type: Classification dataset: name: MTEB ImdbClassification (default) type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 84.35640000000001 - type: ap value: 79.07507736685174 - type: ap_weighted value: 79.07507736685174 - type: f1 value: 84.32288494833531 - type: f1_weighted value: 84.32288494833531 - type: main_score value: 84.35640000000001 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 91.35658914728684 - type: f1 value: 90.86877537911086 - type: f1_weighted value: 91.3282092774443 - type: main_score value: 91.35658914728684 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 60.63611491108071 - type: f1 value: 42.78886482112741 - type: f1_weighted value: 63.44208631840539 - type: main_score value: 60.63611491108071 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 66.68796234028245 - type: f1 value: 64.44940791000278 - type: f1_weighted value: 65.77554417406792 - type: main_score value: 66.68796234028245 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 73.0598520511096 - type: f1 value: 72.14267273884774 - type: f1_weighted value: 72.93345180137516 - type: main_score value: 73.0598520511096 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P (default) type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: main_score value: 31.143081341699606 - type: v_measure value: 31.143081341699606 - type: v_measure_std value: 1.5578716347076906 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S (default) type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: main_score value: 27.010818869829556 - type: v_measure value: 27.010818869829556 - type: v_measure_std value: 1.1771554540819378 - task: type: Reranking dataset: name: MTEB MindSmallReranking (default) type: mteb/mind_small config: default split: test revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7 metrics: - type: main_score value: 30.20503776754942 - type: map value: 30.20503776754942 - type: mrr value: 31.076636002733437 - type: nAUC_map_diff1 value: 7.290568655287842 - type: nAUC_map_max value: -21.381599355932945 - type: nAUC_map_std value: -7.709920607543168 - type: nAUC_mrr_diff1 value: 7.558397329284913 - type: nAUC_mrr_max value: -15.981397186427607 - type: nAUC_mrr_std value: -4.870495243168834 - task: type: Clustering dataset: name: MTEB RedditClustering (default) type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: main_score value: 51.85893476633338 - type: v_measure value: 51.85893476633338 - type: v_measure_std value: 4.704770139385852 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P (default) type: mteb/reddit-clustering-p2p config: default split: test revision: 385e3cb46b4cfa89021f56c4380204149d0efe33 metrics: - type: main_score value: 61.8124222918822 - type: v_measure value: 61.8124222918822 - type: v_measure_std value: 11.994472578100165 - task: type: STS dataset: name: MTEB SICK-R (default) type: mteb/sickr-sts config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: cosine_pearson value: 77.63310776935984 - type: cosine_spearman value: 69.86468291111039 - type: euclidean_pearson value: 73.91537077798837 - type: euclidean_spearman value: 69.86468376650203 - type: main_score value: 69.86468291111039 - type: manhattan_pearson value: 73.68616048370464 - type: manhattan_spearman value: 69.76232036206659 - type: pearson value: 77.63310776935984 - type: spearman value: 69.86468291111039 - task: type: STS dataset: name: MTEB STS12 (default) type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cosine_pearson value: 57.71716838245049 - type: cosine_spearman value: 61.797855543446424 - type: euclidean_pearson value: 58.22958675325848 - type: euclidean_spearman value: 61.797855543446424 - type: main_score value: 61.797855543446424 - type: manhattan_pearson value: 57.63117544997929 - type: manhattan_spearman value: 61.3629404350085 - type: pearson value: 57.71716838245049 - type: spearman value: 61.797855543446424 - task: type: STS dataset: name: MTEB STS13 (default) type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cosine_pearson value: 82.30260026790903 - type: cosine_spearman value: 82.66959813070869 - type: euclidean_pearson value: 82.08383017580783 - type: euclidean_spearman value: 82.66959813070869 - type: main_score value: 82.66959813070869 - type: manhattan_pearson value: 81.77991451392153 - type: manhattan_spearman value: 82.3652534745606 - type: pearson value: 82.30260026790903 - type: spearman value: 82.66959813070869 - task: type: STS dataset: name: MTEB STS14 (default) type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cosine_pearson value: 71.50608384084478 - type: cosine_spearman value: 68.94968064977785 - type: euclidean_pearson value: 70.73381299949564 - type: euclidean_spearman value: 68.94968064977785 - type: main_score value: 68.94968064977785 - type: manhattan_pearson value: 70.5385486953787 - type: manhattan_spearman value: 68.82132770672365 - type: pearson value: 71.50608384084478 - type: spearman value: 68.94968064977785 - task: type: STS dataset: name: MTEB STS15 (default) type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cosine_pearson value: 73.66969825874907 - type: cosine_spearman value: 75.55374982088381 - type: euclidean_pearson value: 75.9339313749594 - type: euclidean_spearman value: 75.55374982088381 - type: main_score value: 75.55374982088381 - type: manhattan_pearson value: 75.88287553383817 - type: manhattan_spearman value: 75.50729812977688 - type: pearson value: 73.66969825874907 - type: spearman value: 75.55374982088381 - task: type: STS dataset: name: MTEB STS16 (default) type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cosine_pearson value: 74.5954724414016 - type: cosine_spearman value: 77.2688820850505 - type: euclidean_pearson value: 77.19866353971555 - type: euclidean_spearman value: 77.2688820850505 - type: main_score value: 77.2688820850505 - type: manhattan_pearson value: 77.27072603680978 - type: manhattan_spearman value: 77.29408453673607 - type: pearson value: 74.5954724414016 - type: spearman value: 77.2688820850505 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 71.52588722654055 - type: cosine_spearman value: 74.97235736456061 - type: euclidean_pearson value: 74.51952528854038 - type: euclidean_spearman value: 74.97235736456061 - type: main_score value: 74.97235736456061 - type: manhattan_pearson value: 74.48272300884209 - type: manhattan_spearman value: 74.80633649415176 - type: pearson value: 71.52588722654055 - type: spearman value: 74.97235736456061 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 68.80031120401976 - type: cosine_spearman value: 69.07945196478491 - type: euclidean_pearson value: 68.99674496430792 - type: euclidean_spearman value: 69.07945196478491 - type: main_score value: 69.07945196478491 - type: manhattan_pearson value: 69.00236107775687 - type: manhattan_spearman value: 68.98064879049272 - type: pearson value: 68.80031120401976 - type: spearman value: 69.07945196478491 - task: type: STS dataset: name: MTEB STSBenchmark (default) type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cosine_pearson value: 65.6898007230089 - type: cosine_spearman value: 69.72386211803668 - type: euclidean_pearson value: 69.04523003701475 - type: euclidean_spearman value: 69.72386211803668 - type: main_score value: 69.72386211803668 - type: manhattan_pearson value: 68.80479743770702 - type: manhattan_spearman value: 69.43264575177459 - type: pearson value: 65.6898007230089 - type: spearman value: 69.72386211803668 - task: type: Reranking dataset: name: MTEB SciDocsRR (default) type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: main_score value: 79.74088066874383 - type: map value: 79.74088066874383 - type: mrr value: 94.47697455050397 - type: nAUC_map_diff1 value: 8.036086256905502 - type: nAUC_map_max value: 54.88199803816819 - type: nAUC_map_std value: 69.16267942176574 - type: nAUC_mrr_diff1 value: 50.020738477678115 - type: nAUC_mrr_max value: 83.28922770326483 - type: nAUC_mrr_std value: 83.63973501802224 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions (default) type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cosine_accuracy value: 99.83861386138614 - type: cosine_accuracy_threshold value: 74.75666999816895 - type: cosine_ap value: 96.15132792066652 - type: cosine_f1 value: 91.84890656063618 - type: cosine_f1_threshold value: 71.70594930648804 - type: cosine_precision value: 91.30434782608695 - type: cosine_recall value: 92.4 - type: dot_accuracy value: 99.83861386138614 - type: dot_accuracy_threshold value: 74.75666999816895 - type: dot_ap value: 96.15132792066653 - type: dot_f1 value: 91.84890656063618 - type: dot_f1_threshold value: 71.70596122741699 - type: dot_precision value: 91.30434782608695 - type: dot_recall value: 92.4 - type: euclidean_accuracy value: 99.83861386138614 - type: euclidean_accuracy_threshold value: 71.05395793914795 - type: euclidean_ap value: 96.15132792066652 - type: euclidean_f1 value: 91.84890656063618 - type: euclidean_f1_threshold value: 75.22505521774292 - type: euclidean_precision value: 91.30434782608695 - type: euclidean_recall value: 92.4 - type: main_score value: 96.15132792066653 - type: manhattan_accuracy value: 99.83564356435643 - type: manhattan_accuracy_threshold value: 1547.6950645446777 - type: manhattan_ap value: 96.06151211452136 - type: manhattan_f1 value: 91.61676646706587 - type: manhattan_f1_threshold value: 1626.3608932495117 - type: manhattan_precision value: 91.43426294820716 - type: manhattan_recall value: 91.8 - type: max_ap value: 96.15132792066653 - type: max_f1 value: 91.84890656063618 - type: max_precision value: 91.43426294820716 - type: max_recall value: 92.4 - type: similarity_accuracy value: 99.83861386138614 - type: similarity_accuracy_threshold value: 74.75666999816895 - type: similarity_ap value: 96.15132792066652 - type: similarity_f1 value: 91.84890656063618 - type: similarity_f1_threshold value: 71.70594930648804 - type: similarity_precision value: 91.30434782608695 - type: similarity_recall value: 92.4 - task: type: Clustering dataset: name: MTEB StackExchangeClustering (default) type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: main_score value: 61.24120328328453 - type: v_measure value: 61.24120328328453 - type: v_measure_std value: 3.9946560691100372 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P (default) type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: main_score value: 33.808268374864745 - type: v_measure value: 33.808268374864745 - type: v_measure_std value: 1.2212188701887239 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions (default) type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: main_score value: 52.19806018468037 - type: map value: 52.19806018468037 - type: mrr value: 52.98921462524404 - type: nAUC_map_diff1 value: 37.41443156995912 - type: nAUC_map_max value: 9.410262727675603 - type: nAUC_map_std value: 8.7094185014992 - type: nAUC_mrr_diff1 value: 37.78202772392581 - type: nAUC_mrr_max value: 10.517635536565816 - type: nAUC_mrr_std value: 8.509423813772491 - task: type: Summarization dataset: name: MTEB SummEval (default) type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cosine_pearson value: 30.48413700430812 - type: cosine_spearman value: 30.357162200875816 - type: dot_pearson value: 30.484140144824938 - type: dot_spearman value: 30.357162200875816 - type: main_score value: 30.357162200875816 - type: pearson value: 30.48413700430812 - type: spearman value: 30.357162200875816 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification (default) type: mteb/toxic_conversations_50k config: default split: test revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de metrics: - type: accuracy value: 66.8359375 - type: ap value: 12.482653786025985 - type: ap_weighted value: 12.482653786025985 - type: f1 value: 51.328608527332385 - type: f1_weighted value: 74.07974463955398 - type: main_score value: 66.8359375 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification (default) type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 53.907753254103 - type: f1 value: 54.22707647269581 - type: f1_weighted value: 53.611822984407695 - type: main_score value: 53.907753254103 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering (default) type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: main_score value: 38.1364789307295 - type: v_measure value: 38.1364789307295 - type: v_measure_std value: 2.0731634966352077 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 (default) type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cosine_accuracy value: 82.66674614054956 - type: cosine_accuracy_threshold value: 79.80123162269592 - type: cosine_ap value: 63.28209719072804 - type: cosine_f1 value: 60.16389710903711 - type: cosine_f1_threshold value: 72.22893834114075 - type: cosine_precision value: 52.90232185748599 - type: cosine_recall value: 69.73614775725594 - type: dot_accuracy value: 82.66674614054956 - type: dot_accuracy_threshold value: 79.8012375831604 - type: dot_ap value: 63.282103870645166 - type: dot_f1 value: 60.16389710903711 - type: dot_f1_threshold value: 72.22894430160522 - type: dot_precision value: 52.90232185748599 - type: dot_recall value: 69.73614775725594 - type: euclidean_accuracy value: 82.66674614054956 - type: euclidean_accuracy_threshold value: 63.55905532836914 - type: euclidean_ap value: 63.282095399953164 - type: euclidean_f1 value: 60.16389710903711 - type: euclidean_f1_threshold value: 74.5265781879425 - type: euclidean_precision value: 52.90232185748599 - type: euclidean_recall value: 69.73614775725594 - type: main_score value: 63.282103870645166 - type: manhattan_accuracy value: 82.74423317637242 - type: manhattan_accuracy_threshold value: 1415.380859375 - type: manhattan_ap value: 63.26931757839598 - type: manhattan_f1 value: 60.11014948859166 - type: manhattan_f1_threshold value: 1632.522201538086 - type: manhattan_precision value: 52.359506559624045 - type: manhattan_recall value: 70.55408970976254 - type: max_ap value: 63.282103870645166 - type: max_f1 value: 60.16389710903711 - type: max_precision value: 52.90232185748599 - type: max_recall value: 70.55408970976254 - type: similarity_accuracy value: 82.66674614054956 - type: similarity_accuracy_threshold value: 79.80123162269592 - type: similarity_ap value: 63.28209719072804 - type: similarity_f1 value: 60.16389710903711 - type: similarity_f1_threshold value: 72.22893834114075 - type: similarity_precision value: 52.90232185748599 - type: similarity_recall value: 69.73614775725594 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus (default) type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cosine_accuracy value: 88.10105949470253 - type: cosine_accuracy_threshold value: 68.95147562026978 - type: cosine_ap value: 84.65516103854583 - type: cosine_f1 value: 76.54581123301605 - type: cosine_f1_threshold value: 63.92929553985596 - type: cosine_precision value: 72.46526344751685 - type: cosine_recall value: 81.11333538651063 - type: dot_accuracy value: 88.10105949470253 - type: dot_accuracy_threshold value: 68.95147562026978 - type: dot_ap value: 84.65516301437592 - type: dot_f1 value: 76.54581123301605 - type: dot_f1_threshold value: 63.92928957939148 - type: dot_precision value: 72.46526344751685 - type: dot_recall value: 81.11333538651063 - type: euclidean_accuracy value: 88.10105949470253 - type: euclidean_accuracy_threshold value: 78.80169153213501 - type: euclidean_ap value: 84.65517268264233 - type: euclidean_f1 value: 76.54581123301605 - type: euclidean_f1_threshold value: 84.93610620498657 - type: euclidean_precision value: 72.46526344751685 - type: euclidean_recall value: 81.11333538651063 - type: main_score value: 84.65517268264233 - type: manhattan_accuracy value: 88.08941669577366 - type: manhattan_accuracy_threshold value: 1739.3169403076172 - type: manhattan_ap value: 84.64592398855694 - type: manhattan_f1 value: 76.62890540443034 - type: manhattan_f1_threshold value: 1861.344337463379 - type: manhattan_precision value: 72.09775967413442 - type: manhattan_recall value: 81.76778564829073 - type: max_ap value: 84.65517268264233 - type: max_f1 value: 76.62890540443034 - type: max_precision value: 72.46526344751685 - type: max_recall value: 81.76778564829073 - type: similarity_accuracy value: 88.10105949470253 - type: similarity_accuracy_threshold value: 68.95147562026978 - type: similarity_ap value: 84.65516103854583 - type: similarity_f1 value: 76.54581123301605 - type: similarity_f1_threshold value: 63.92929553985596 - type: similarity_precision value: 72.46526344751685 - type: similarity_recall value: 81.11333538651063 --- # sheldonrobinson/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF This model was converted to GGUF format from [`Snowflake/snowflake-arctic-embed-m-v1.5`](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo sheldonrobinson/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo sheldonrobinson/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo sheldonrobinson/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo sheldonrobinson/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf -c 2048 ```
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
QuantFactory/gemma2-9b-cpt-sea-lionv3-instruct-GGUF
QuantFactory
text-generation
[ "transformers", "gguf", "text-generation", "en", "zh", "vi", "id", "th", "fil", "ta", "ms", "km", "lo", "my", "jv", "su", "arxiv:2309.06085", "arxiv:2311.07911", "arxiv:2306.05685", "base_model:aisingapore/gemma2-9b-cpt-sea-lionv3-base", "base_model:quantized:aisingapore/gemma2-9b-cpt-sea-lionv3-base", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
1,730
1,730
255
2
--- base_model: - aisingapore/gemma2-9b-cpt-sea-lionv3-base language: - en - zh - vi - id - th - fil - ta - ms - km - lo - my - jv - su library_name: transformers license: gemma pipeline_tag: text-generation --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/gemma2-9b-cpt-sea-lionv3-instruct-GGUF This is quantized version of [aisingapore/gemma2-9b-cpt-sea-lionv3-instruct](https://huggingface.co/aisingapore/gemma2-9b-cpt-sea-lionv3-instruct) created using llama.cpp # Original Model Card # Gemma2 9B CPT SEA-LIONv3 Instruct SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region. Gemma2 9B CPT SEA-LIONv3 Instruct is a multilingual model which has been fine-tuned with around **500,000 English instruction-completion pairs** alongside a larger pool of around **1,000,000 instruction-completion pairs** from other ASEAN languages, such as Indonesian, Thai and Vietnamese. SEA-LION stands for _Southeast Asian Languages In One Network_. - **Developed by:** Products Pillar, AI Singapore - **Funded by:** Singapore NRF - **Model type:** Decoder - **Languages:** English, Chinese, Vietnamese, Indonesian, Thai, Filipino, Tamil, Malay, Khmer, Lao, Burmese, Javanese, Sundanese - **License:** [Gemma Community License](https://ai.google.dev/gemma/terms) ## Model Details ### Model Description We performed instruction tuning in English and also in ASEAN languages such as Indonesian, Thai and Vietnamese on our [continued pre-trained Gemma2 9B CPT SEA-LIONv3](https://huggingface.co/aisingapore/gemma2-9b-cpt-sea-lionv3-base), a decoder model using the Gemma2 architecture, to create Gemma2 9B CPT SEA-LIONv3 Instruct. For tokenisation, the model employs the default tokenizer used in Gemma-2-9B. The model has a context length of 8192. ### Benchmark Performance We evaluated Gemma2 9B CPT SEA-LIONv3 Instruct on both general language capabilities and instruction-following capabilities. #### General Language Capabilities For the evaluation of general language capabilities, we employed the [SEA HELM (also known as BHASA) evaluation benchmark](https://arxiv.org/abs/2309.06085v2) across a variety of tasks. These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarization (Summ), Causal Reasoning (Causal) and Natural Language Inference (NLI). Note: SEA HELM is implemented using prompts to elicit answers in a strict format. For all tasks, the model is expected to provide an answer tag from which the answer is automatically extracted. For tasks where options are provided, the answer should comprise one of the pre-defined options. The scores for each task is normalised to account for baseline performance due to random chance. The evaluation was done **zero-shot** with native prompts on a sample of 100-1000 instances for each dataset. #### Instruction-following Capabilities Since Gemma2 9B CPT SEA-LIONv3 Instruct is an instruction-following model, we also evaluated it on instruction-following capabilities with two datasets, [IFEval](https://arxiv.org/abs/2311.07911) and [MT-Bench](https://arxiv.org/abs/2306.05685). As these two datasets were originally in English, the linguists and native speakers in the team worked together to filter, localize and translate the datasets into the respective target languages to ensure that the examples remained reasonable, meaningful and natural. **IFEval** IFEval evaluates a model's ability to adhere to constraints provided in the prompt, for example beginning a response with a specific word/phrase or answering with a certain number of sections. Additionally, accuracy is normalized by the proportion of responses in the correct language (if the model performs the task correctly but responds in the wrong language, it is judged to have failed the task). **MT-Bench** MT-Bench evaluates a model's ability to engage in multi-turn (2 turns) conversations and respond in ways that align with human needs. We use `gpt-4-1106-preview` as the judge model and compare against `gpt-3.5-turbo-0125` as the baseline model. The metric used is the weighted win rate against the baseline model (i.e. average win rate across each category: Math, Reasoning, STEM, Humanities, Roleplay, Writing, Extraction). A tie is given a score of 0.5. For more details on Gemma2 9B CPT SEA-LIONv3 Instruct benchmark performance, please refer to the SEA HELM leaderboard, https://leaderboard.sea-lion.ai/ ### Usage Gemma2 9B CPT SEA-LIONv3 Instruct can be run using the 🤗 Transformers library ```python # Please use transformers==4.45.2 import transformers import torch model_id = "aisingapore/gemma2-9b-cpt-sea-lionv3-instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "user", "content": "Apa sentimen dari kalimat berikut ini?\nKalimat: Buku ini sangat membosankan.\nJawaban: "}, ] outputs = pipeline( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` ### Caveats It is important for users to be aware that our model exhibits certain limitations that warrant consideration. Like many LLMs, the model can hallucinate and occasionally generates irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution in interpreting and validating the model's responses due to the potential inconsistencies in its reasoning. ## Limitations ### Safety Current SEA-LION models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes. ## Technical Specifications ### Fine-Tuning Details Gemma2 9B CPT SEA-LIONv3 Instruct was built using a combination of a full parameter fine-tune, on-policy alignment, and model merges of the best performing checkpoints. The training process for fine-tuning was approximately 15 hours, with alignment taking 2 hours, both on 8x H100-80GB GPUs. ## Data Gemma2 9B CPT SEA-LIONv3 Instruct was trained on a wide range of synthetic instructions, alongside publicly available instructions hand-curated by the team with the assistance of native speakers. In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source. ## Call for Contributions We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions. ## The Team Chan Adwin, Choa Esther, Cheng Nicholas, Huang Yuli, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Liu Bing Jie Darius, Montalan Jann Railey, Ng Boon Cheong Raymond, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Brandon, Ong Tat-Wee David, Ong Zhi Hao, Rengarajan Hamsawardhini, Siow Bryan, Susanto Yosephine, Tai Ngee Chia, Tan Choon Meng, Teo Eng Sipp Leslie, Teo Wei Yi, Tjhi William, Teng Walter, Yeo Yeow Tong, Yong Xianbin ## Acknowledgements [AI Singapore](​​https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore. ## Contact For more info, please contact us using this [SEA-LION Inquiry Form](https://forms.gle/sLCUVb95wmGf43hi6) [Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion) ## Disclaimer This is the repository for the commercial instruction-tuned model. The model has _not_ been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.
[ "QUESTION_ANSWERING", "TRANSLATION", "SUMMARIZATION" ]
[ "CHIA" ]
Non_BioNLP
NesrineBannour/CAS-privacy-preserving-model
NesrineBannour
token-classification
[ "transformers", "biomedical", "clinical", "pytorch", "camembert", "token-classification", "fr", "dataset:bigbio/cas", "license:cc-by-sa-4.0", "region:us" ]
1,696
1,699
0
0
--- datasets: - bigbio/cas language: - fr library_name: transformers license: cc-by-sa-4.0 metrics: - f1 - precision - recall pipeline_tag: token-classification tags: - biomedical - clinical - pytorch - camembert inference: false --- # Privacy-preserving mimic models for clinical named entity recognition in French <!-- ## Paper abstract --> In this [paper](https://doi.org/10.1016/j.jbi.2022.104073), we propose a Privacy-Preserving Mimic Models architecture that enables the generation of shareable models using the *mimic learning* approach. The idea of mimic learning is to annotate unlabeled public data through a *private teacher model* trained on the original sensitive data. The newly labeled public dataset is then used to train the *student models*. These generated *student models* could be shared without sharing the data itself or exposing the *private teacher model* that was directly built on this data. # CAS Privacy-Preserving Named Entity Recognition (NER) Mimic Model <!-- Provide a quick summary of what the model is/does. --> To generate the CAS Privacy-Preserving Mimic Model, we used a *private teacher model* to annotate the unlabeled [CAS clinical French corpus](https://aclanthology.org/W18-5614/). The *private teacher model* is an NER model trained on the [MERLOT clinical corpus](https://link.springer.com/article/10.1007/s10579-017-9382-y) and could not be shared. Using the produced [silver annotations](https://zenodo.org/records/6451361), we train the CAS *student model*, namely the CAS Privacy-Preserving NER Mimic Model. This model might be viewed as a knowledge transfer process between the *teacher* and the *student model* in a privacy-preserving manner. We share only the weights of the CAS *student model*, which is trained on silver-labeled publicly released data. We argue that no potential attack could reveal information about sensitive private data using the silver annotations generated by the *private teacher model* on publicly available non-sensitive data. Our model is constructed based on [CamemBERT](https://huggingface.co/camembert) model using the Natural language structuring ([NLstruct](https://github.com/percevalw/nlstruct)) library that implements NER models that handle nested entities. - **Paper:** [Privacy-preserving mimic models for clinical named entity recognition in French](https://doi.org/10.1016/j.jbi.2022.104073) - **Produced gold and silver annotations for the [DEFT](https://deft.lisn.upsaclay.fr/2020/) and [CAS](https://aclanthology.org/W18-5614/) French clinical corpora:** https://zenodo.org/records/6451361 - **Developed by:** [Nesrine Bannour](https://github.com/NesrineBannour), [Perceval Wajsbürt](https://github.com/percevalw), [Bastien Rance](https://team.inria.fr/heka/fr/team-members/rance/), [Xavier Tannier](http://xavier.tannier.free.fr/) and [Aurélie Névéol](https://perso.limsi.fr/neveol/) - **Language:** French - **License:** cc-by-sa-4.0 <!-- ## Model Sources --> <!-- Provide the basic links for the model. --> <!-- ## Training Details <!-- ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> <!-- ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> <!-- #### Training Hyperparameters --> # Download the CAS Privacy-Preserving NER Mimic Model ```python fasttext_url = hf_hub_url(repo_id="NesrineBannour/CAS-privacy-preserving-model", filename="CAS-privacy-preserving-model_fasttext.txt") urllib.request.urlretrieve(fasttext_url, fasttext_url.split('/')[-1]) model_url = hf_hub_url(repo_id="NesrineBannour/CAS-privacy-preserving-model", filename="CAS-privacy-preserving-model.ckpt") urllib.request.urlretrieve(model_url, "path/to/your/folder/"+ model_url.split('/')[-1]) path_checkpoint = "path/to/your/folder/"+ model_url.split('/')[-1] ``` ## 1. Load and use the model using only NLstruct [NLstruct](https://github.com/percevalw/nlstruct) is the Python library we used to generate our CAS privacy-preserving NER mimic model and that handles nested entities. ### Install the NLstruct library ``` pip install nlstruct==0.1.0 ``` ### Use the model ```python from nlstruct import load_pretrained from nlstruct.datasets import load_from_brat, export_to_brat ner_model = load_pretrained(path_checkpoint) test_data = load_from_brat("path/to/brat/test") test_predictions = ner_model.predict(test_data) # Export the predictions into the BRAT standoff format export_to_brat(test_predictions, filename_prefix="path/to/exported_brat") ``` ## 2. Load the model using NLstruct and use it with the Medkit library [Medkit](https://github.com/TeamHeka/medkit) is a Python library for facilitating the extraction of features from various modalities of patient data, including textual data. ### Install the Medkit library ``` python -m pip install 'medkit-lib' ``` ### Use the model Our model could be implemented as a Medkit operation module as follows: ```python import os from nlstruct import load_pretrained import urllib.request from huggingface_hub import hf_hub_url from medkit.io.brat import BratInputConverter, BratOutputConverter from medkit.core import Attribute from medkit.core.text import NEROperation,Entity,Span,Segment, span_utils class CAS_matcher(NEROperation): def __init__(self): # Load the fasttext file fasttext_url = hf_hub_url(repo_id="NesrineBannour/CAS-privacy-preserving-model", filename="CAS-privacy-preserving-model_fasttext.txt") if not os.path.exists("CAS-privacy-preserving-model_fasttext.txt"): urllib.request.urlretrieve(fasttext_url, fasttext_url.split('/')[-1]) # Load the model model_url = hf_hub_url(repo_id="NesrineBannour/CAS-privacy-preserving-model", filename="CAS-privacy-preserving-model.ckpt") if not os.path.exists("ner_model/CAS-privacy-preserving-model.ckpt"): urllib.request.urlretrieve(model_url, "ner_model/"+ model_url.split('/')[-1]) path_checkpoint = "ner_model/"+ model_url.split('/')[-1] self.model = load_pretrained(path_checkpoint) self.model.eval() def run(self, segments): """Return entities for each match in `segments`. Parameters ---------- segments: List of segments into which to look for matches. Returns ------- List[Entity] Entities found in `segments`. """ # get an iterator to all matches, grouped by segment entities = [] for segment in segments: matches = self.model.predict({"doc_id":segment.uid,"text":segment.text}) entities.extend([entity for entity in self._matches_to_entities(matches, segment) ]) return entities def _matches_to_entities(self, matches, segment: Segment): for match in matches["entities"]: text_all,spans_all = [],[] for fragment in match["fragments"]: text, spans = span_utils.extract( segment.text, segment.spans, [(fragment["begin"], fragment["end"])] ) text_all.append(text) spans_all.extend(spans) text_all = "".join(text_all) entity = Entity( label=match["label"], text=text_all, spans=spans_all, ) score_attr = Attribute( label="confidence", value=float(match["confidence"]), #metadata=dict(model=self.model.path_checkpoint), ) entity.attrs.add(score_attr) yield entity brat_converter = BratInputConverter() docs = brat_converter.load("path/to/brat/test") matcher = CAS_matcher() for doc in docs: entities = matcher.run([doc.raw_segment]) for ent in entities: doc.anns.add(ent) brat_output_converter = BratOutputConverter(attrs=[]) # To keep the same document names in the output folder doc_names = [os.path.splitext(os.path.basename(doc.metadata["path_to_text"]))[0] for doc in docs] brat_output_converter.save(docs, dir_path="path/to/exported_brat, doc_names=doc_names) ``` <!-- ## Evaluation of test data <!-- This section describes the evaluation protocols and provides the results. --> <!-- #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> <!-- [More Information Needed] ### Results [More Information Needed] #### Summary --> ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions are estimated using the [Carbontracker](https://github.com/lfwa/carbontracker) tool. The used version at the time of our experiments computes its estimates by using the average carbon intensity in European Union in 2017 instead of the France value (294.21 gCO<sub>2</sub>eq/kWh vs. 85 gCO<sub>2</sub>eq/kWh). Therefore, our reported carbon footprint of training both the private model that generated the silver annotations and the CAS student model is overestimated. - **Hardware Type:** GPU NVIDIA GTX 1080 Ti - **Compute Region:** Gif-sur-Yvette, Île-de-France, France - **Carbon Emitted:** 292 gCO<sub>2</sub>eq ## Acknowledgements We thank the institutions and colleagues who made it possible to use the datasets described in this study: the Biomedical Informatics Department at the Rouen University Hospital provided access to the LERUDI corpus, and Dr. Grabar (Université de Lille, CNRS, STL) granted permission to use the DEFT/CAS corpus. We would also like to thank the ITMO Cancer Aviesan for funding our research, and the [HeKA research team](https://team.inria.fr/heka/) for integrating our model into their library [Medkit]((https://github.com/TeamHeka/medkit)). ## Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> If you use this model in your research, please make sure to cite our paper: ```bibtex @article{BANNOUR2022104073, title = {Privacy-preserving mimic models for clinical named entity recognition in French}, journal = {Journal of Biomedical Informatics}, volume = {130}, pages = {104073}, year = {2022}, issn = {1532-0464}, doi = {https://doi.org/10.1016/j.jbi.2022.104073}, url = {https://www.sciencedirect.com/science/article/pii/S1532046422000892}} } ``` <!-- ## Bias, Risks, and Limitations --> <!-- This section is meant to convey both technical and sociotechnical limitations. --> <!-- [More Information Needed] -->
[ "NAMED_ENTITY_RECOGNITION" ]
[ "CAS" ]
BioNLP
vectoriseai/e5-small-v2
vectoriseai
sentence-similarity
[ "sentence-transformers", "pytorch", "tf", "onnx", "safetensors", "bert", "mteb", "Sentence Transformers", "sentence-similarity", "en", "arxiv:2212.03533", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,697
1,697
8
0
--- language: - en license: mit tags: - mteb - Sentence Transformers - sentence-similarity - sentence-transformers model-index: - name: e5-small-v2 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 77.59701492537313 - type: ap value: 41.67064885731708 - type: f1 value: 71.86465946398573 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 91.265875 - type: ap value: 87.67633085349644 - type: f1 value: 91.24297521425744 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 45.882000000000005 - type: f1 value: 45.08058870381236 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 20.697 - type: map_at_10 value: 33.975 - type: map_at_100 value: 35.223 - type: map_at_1000 value: 35.260000000000005 - type: map_at_3 value: 29.776999999999997 - type: map_at_5 value: 32.035000000000004 - type: mrr_at_1 value: 20.982 - type: mrr_at_10 value: 34.094 - type: mrr_at_100 value: 35.343 - type: mrr_at_1000 value: 35.38 - type: mrr_at_3 value: 29.884 - type: mrr_at_5 value: 32.141999999999996 - type: ndcg_at_1 value: 20.697 - type: ndcg_at_10 value: 41.668 - type: ndcg_at_100 value: 47.397 - type: ndcg_at_1000 value: 48.305 - type: ndcg_at_3 value: 32.928000000000004 - type: ndcg_at_5 value: 36.998999999999995 - type: precision_at_1 value: 20.697 - type: precision_at_10 value: 6.636 - type: precision_at_100 value: 0.924 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 14.035 - type: precision_at_5 value: 10.398 - type: recall_at_1 value: 20.697 - type: recall_at_10 value: 66.35799999999999 - type: recall_at_100 value: 92.39 - type: recall_at_1000 value: 99.36 - type: recall_at_3 value: 42.105 - type: recall_at_5 value: 51.991 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 42.1169517447068 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 34.79553720107097 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 58.10811337308168 - type: mrr value: 71.56410763751482 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 78.46834918248696 - type: cos_sim_spearman value: 79.4289182755206 - type: euclidean_pearson value: 76.26662973727008 - type: euclidean_spearman value: 78.11744260952536 - type: manhattan_pearson value: 76.08175262609434 - type: manhattan_spearman value: 78.29395265552289 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 81.63636363636364 - type: f1 value: 81.55779952376953 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.88541137137571 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 30.05205685274407 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 30.293999999999997 - type: map_at_10 value: 39.876 - type: map_at_100 value: 41.315000000000005 - type: map_at_1000 value: 41.451 - type: map_at_3 value: 37.194 - type: map_at_5 value: 38.728 - type: mrr_at_1 value: 37.053000000000004 - type: mrr_at_10 value: 45.281 - type: mrr_at_100 value: 46.188 - type: mrr_at_1000 value: 46.245999999999995 - type: mrr_at_3 value: 43.228 - type: mrr_at_5 value: 44.366 - type: ndcg_at_1 value: 37.053000000000004 - type: ndcg_at_10 value: 45.086 - type: ndcg_at_100 value: 50.756 - type: ndcg_at_1000 value: 53.123 - type: ndcg_at_3 value: 41.416 - type: ndcg_at_5 value: 43.098 - type: precision_at_1 value: 37.053000000000004 - type: precision_at_10 value: 8.34 - type: precision_at_100 value: 1.346 - type: precision_at_1000 value: 0.186 - type: precision_at_3 value: 19.647000000000002 - type: precision_at_5 value: 13.877 - type: recall_at_1 value: 30.293999999999997 - type: recall_at_10 value: 54.309 - type: recall_at_100 value: 78.59 - type: recall_at_1000 value: 93.82300000000001 - type: recall_at_3 value: 43.168 - type: recall_at_5 value: 48.192 - type: map_at_1 value: 28.738000000000003 - type: map_at_10 value: 36.925999999999995 - type: map_at_100 value: 38.017 - type: map_at_1000 value: 38.144 - type: map_at_3 value: 34.446 - type: map_at_5 value: 35.704 - type: mrr_at_1 value: 35.478 - type: mrr_at_10 value: 42.786 - type: mrr_at_100 value: 43.458999999999996 - type: mrr_at_1000 value: 43.507 - type: mrr_at_3 value: 40.648 - type: mrr_at_5 value: 41.804 - type: ndcg_at_1 value: 35.478 - type: ndcg_at_10 value: 42.044 - type: ndcg_at_100 value: 46.249 - type: ndcg_at_1000 value: 48.44 - type: ndcg_at_3 value: 38.314 - type: ndcg_at_5 value: 39.798 - type: precision_at_1 value: 35.478 - type: precision_at_10 value: 7.764 - type: precision_at_100 value: 1.253 - type: precision_at_1000 value: 0.174 - type: precision_at_3 value: 18.047 - type: precision_at_5 value: 12.637 - type: recall_at_1 value: 28.738000000000003 - type: recall_at_10 value: 50.659 - type: recall_at_100 value: 68.76299999999999 - type: recall_at_1000 value: 82.811 - type: recall_at_3 value: 39.536 - type: recall_at_5 value: 43.763999999999996 - type: map_at_1 value: 38.565 - type: map_at_10 value: 50.168 - type: map_at_100 value: 51.11 - type: map_at_1000 value: 51.173 - type: map_at_3 value: 47.044000000000004 - type: map_at_5 value: 48.838 - type: mrr_at_1 value: 44.201 - type: mrr_at_10 value: 53.596999999999994 - type: mrr_at_100 value: 54.211 - type: mrr_at_1000 value: 54.247 - type: mrr_at_3 value: 51.202000000000005 - type: mrr_at_5 value: 52.608999999999995 - type: ndcg_at_1 value: 44.201 - type: ndcg_at_10 value: 55.694 - type: ndcg_at_100 value: 59.518 - type: ndcg_at_1000 value: 60.907 - type: ndcg_at_3 value: 50.395999999999994 - type: ndcg_at_5 value: 53.022999999999996 - type: precision_at_1 value: 44.201 - type: precision_at_10 value: 8.84 - type: precision_at_100 value: 1.162 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 22.153 - type: precision_at_5 value: 15.260000000000002 - type: recall_at_1 value: 38.565 - type: recall_at_10 value: 68.65 - type: recall_at_100 value: 85.37400000000001 - type: recall_at_1000 value: 95.37400000000001 - type: recall_at_3 value: 54.645999999999994 - type: recall_at_5 value: 60.958 - type: map_at_1 value: 23.945 - type: map_at_10 value: 30.641000000000002 - type: map_at_100 value: 31.599 - type: map_at_1000 value: 31.691000000000003 - type: map_at_3 value: 28.405 - type: map_at_5 value: 29.704000000000004 - type: mrr_at_1 value: 25.537 - type: mrr_at_10 value: 32.22 - type: mrr_at_100 value: 33.138 - type: mrr_at_1000 value: 33.214 - type: mrr_at_3 value: 30.151 - type: mrr_at_5 value: 31.298 - type: ndcg_at_1 value: 25.537 - type: ndcg_at_10 value: 34.638000000000005 - type: ndcg_at_100 value: 39.486 - type: ndcg_at_1000 value: 41.936 - type: ndcg_at_3 value: 30.333 - type: ndcg_at_5 value: 32.482 - type: precision_at_1 value: 25.537 - type: precision_at_10 value: 5.153 - type: precision_at_100 value: 0.7929999999999999 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 12.429 - type: precision_at_5 value: 8.723 - type: recall_at_1 value: 23.945 - type: recall_at_10 value: 45.412 - type: recall_at_100 value: 67.836 - type: recall_at_1000 value: 86.467 - type: recall_at_3 value: 34.031 - type: recall_at_5 value: 39.039 - type: map_at_1 value: 14.419 - type: map_at_10 value: 20.858999999999998 - type: map_at_100 value: 22.067999999999998 - type: map_at_1000 value: 22.192 - type: map_at_3 value: 18.673000000000002 - type: map_at_5 value: 19.968 - type: mrr_at_1 value: 17.785999999999998 - type: mrr_at_10 value: 24.878 - type: mrr_at_100 value: 26.021 - type: mrr_at_1000 value: 26.095000000000002 - type: mrr_at_3 value: 22.616 - type: mrr_at_5 value: 23.785 - type: ndcg_at_1 value: 17.785999999999998 - type: ndcg_at_10 value: 25.153 - type: ndcg_at_100 value: 31.05 - type: ndcg_at_1000 value: 34.052 - type: ndcg_at_3 value: 21.117 - type: ndcg_at_5 value: 23.048 - type: precision_at_1 value: 17.785999999999998 - type: precision_at_10 value: 4.590000000000001 - type: precision_at_100 value: 0.864 - type: precision_at_1000 value: 0.125 - type: precision_at_3 value: 9.908999999999999 - type: precision_at_5 value: 7.313 - type: recall_at_1 value: 14.419 - type: recall_at_10 value: 34.477999999999994 - type: recall_at_100 value: 60.02499999999999 - type: recall_at_1000 value: 81.646 - type: recall_at_3 value: 23.515 - type: recall_at_5 value: 28.266999999999996 - type: map_at_1 value: 26.268 - type: map_at_10 value: 35.114000000000004 - type: map_at_100 value: 36.212 - type: map_at_1000 value: 36.333 - type: map_at_3 value: 32.436 - type: map_at_5 value: 33.992 - type: mrr_at_1 value: 31.761 - type: mrr_at_10 value: 40.355999999999995 - type: mrr_at_100 value: 41.125 - type: mrr_at_1000 value: 41.186 - type: mrr_at_3 value: 37.937 - type: mrr_at_5 value: 39.463 - type: ndcg_at_1 value: 31.761 - type: ndcg_at_10 value: 40.422000000000004 - type: ndcg_at_100 value: 45.458999999999996 - type: ndcg_at_1000 value: 47.951 - type: ndcg_at_3 value: 35.972 - type: ndcg_at_5 value: 38.272 - type: precision_at_1 value: 31.761 - type: precision_at_10 value: 7.103 - type: precision_at_100 value: 1.133 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 16.779 - type: precision_at_5 value: 11.877 - type: recall_at_1 value: 26.268 - type: recall_at_10 value: 51.053000000000004 - type: recall_at_100 value: 72.702 - type: recall_at_1000 value: 89.521 - type: recall_at_3 value: 38.619 - type: recall_at_5 value: 44.671 - type: map_at_1 value: 25.230999999999998 - type: map_at_10 value: 34.227000000000004 - type: map_at_100 value: 35.370000000000005 - type: map_at_1000 value: 35.488 - type: map_at_3 value: 31.496000000000002 - type: map_at_5 value: 33.034 - type: mrr_at_1 value: 30.822 - type: mrr_at_10 value: 39.045 - type: mrr_at_100 value: 39.809 - type: mrr_at_1000 value: 39.873 - type: mrr_at_3 value: 36.663000000000004 - type: mrr_at_5 value: 37.964 - type: ndcg_at_1 value: 30.822 - type: ndcg_at_10 value: 39.472 - type: ndcg_at_100 value: 44.574999999999996 - type: ndcg_at_1000 value: 47.162 - type: ndcg_at_3 value: 34.929 - type: ndcg_at_5 value: 37.002 - type: precision_at_1 value: 30.822 - type: precision_at_10 value: 7.055 - type: precision_at_100 value: 1.124 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 16.591 - type: precision_at_5 value: 11.667 - type: recall_at_1 value: 25.230999999999998 - type: recall_at_10 value: 50.42100000000001 - type: recall_at_100 value: 72.685 - type: recall_at_1000 value: 90.469 - type: recall_at_3 value: 37.503 - type: recall_at_5 value: 43.123 - type: map_at_1 value: 24.604166666666664 - type: map_at_10 value: 32.427166666666665 - type: map_at_100 value: 33.51474999999999 - type: map_at_1000 value: 33.6345 - type: map_at_3 value: 30.02366666666667 - type: map_at_5 value: 31.382333333333328 - type: mrr_at_1 value: 29.001166666666666 - type: mrr_at_10 value: 36.3315 - type: mrr_at_100 value: 37.16683333333333 - type: mrr_at_1000 value: 37.23341666666668 - type: mrr_at_3 value: 34.19916666666667 - type: mrr_at_5 value: 35.40458333333334 - type: ndcg_at_1 value: 29.001166666666666 - type: ndcg_at_10 value: 37.06883333333334 - type: ndcg_at_100 value: 41.95816666666666 - type: ndcg_at_1000 value: 44.501583333333336 - type: ndcg_at_3 value: 32.973499999999994 - type: ndcg_at_5 value: 34.90833333333334 - type: precision_at_1 value: 29.001166666666666 - type: precision_at_10 value: 6.336 - type: precision_at_100 value: 1.0282499999999999 - type: precision_at_1000 value: 0.14391666666666664 - type: precision_at_3 value: 14.932499999999996 - type: precision_at_5 value: 10.50825 - type: recall_at_1 value: 24.604166666666664 - type: recall_at_10 value: 46.9525 - type: recall_at_100 value: 68.67816666666667 - type: recall_at_1000 value: 86.59783333333334 - type: recall_at_3 value: 35.49783333333333 - type: recall_at_5 value: 40.52525000000001 - type: map_at_1 value: 23.559 - type: map_at_10 value: 29.023 - type: map_at_100 value: 29.818 - type: map_at_1000 value: 29.909000000000002 - type: map_at_3 value: 27.037 - type: map_at_5 value: 28.225 - type: mrr_at_1 value: 26.994 - type: mrr_at_10 value: 31.962000000000003 - type: mrr_at_100 value: 32.726 - type: mrr_at_1000 value: 32.800000000000004 - type: mrr_at_3 value: 30.266 - type: mrr_at_5 value: 31.208999999999996 - type: ndcg_at_1 value: 26.994 - type: ndcg_at_10 value: 32.53 - type: ndcg_at_100 value: 36.758 - type: ndcg_at_1000 value: 39.362 - type: ndcg_at_3 value: 28.985 - type: ndcg_at_5 value: 30.757 - type: precision_at_1 value: 26.994 - type: precision_at_10 value: 4.968999999999999 - type: precision_at_100 value: 0.759 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 12.219 - type: precision_at_5 value: 8.527999999999999 - type: recall_at_1 value: 23.559 - type: recall_at_10 value: 40.585 - type: recall_at_100 value: 60.306000000000004 - type: recall_at_1000 value: 80.11 - type: recall_at_3 value: 30.794 - type: recall_at_5 value: 35.186 - type: map_at_1 value: 16.384999999999998 - type: map_at_10 value: 22.142 - type: map_at_100 value: 23.057 - type: map_at_1000 value: 23.177 - type: map_at_3 value: 20.29 - type: map_at_5 value: 21.332 - type: mrr_at_1 value: 19.89 - type: mrr_at_10 value: 25.771 - type: mrr_at_100 value: 26.599 - type: mrr_at_1000 value: 26.680999999999997 - type: mrr_at_3 value: 23.962 - type: mrr_at_5 value: 24.934 - type: ndcg_at_1 value: 19.89 - type: ndcg_at_10 value: 25.97 - type: ndcg_at_100 value: 30.605 - type: ndcg_at_1000 value: 33.619 - type: ndcg_at_3 value: 22.704 - type: ndcg_at_5 value: 24.199 - type: precision_at_1 value: 19.89 - type: precision_at_10 value: 4.553 - type: precision_at_100 value: 0.8049999999999999 - type: precision_at_1000 value: 0.122 - type: precision_at_3 value: 10.541 - type: precision_at_5 value: 7.46 - type: recall_at_1 value: 16.384999999999998 - type: recall_at_10 value: 34.001 - type: recall_at_100 value: 55.17100000000001 - type: recall_at_1000 value: 77.125 - type: recall_at_3 value: 24.618000000000002 - type: recall_at_5 value: 28.695999999999998 - type: map_at_1 value: 23.726 - type: map_at_10 value: 31.227 - type: map_at_100 value: 32.311 - type: map_at_1000 value: 32.419 - type: map_at_3 value: 28.765 - type: map_at_5 value: 30.229 - type: mrr_at_1 value: 27.705000000000002 - type: mrr_at_10 value: 35.085 - type: mrr_at_100 value: 35.931000000000004 - type: mrr_at_1000 value: 36 - type: mrr_at_3 value: 32.603 - type: mrr_at_5 value: 34.117999999999995 - type: ndcg_at_1 value: 27.705000000000002 - type: ndcg_at_10 value: 35.968 - type: ndcg_at_100 value: 41.197 - type: ndcg_at_1000 value: 43.76 - type: ndcg_at_3 value: 31.304 - type: ndcg_at_5 value: 33.661 - type: precision_at_1 value: 27.705000000000002 - type: precision_at_10 value: 5.942 - type: precision_at_100 value: 0.964 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 13.868 - type: precision_at_5 value: 9.944 - type: recall_at_1 value: 23.726 - type: recall_at_10 value: 46.786 - type: recall_at_100 value: 70.072 - type: recall_at_1000 value: 88.2 - type: recall_at_3 value: 33.981 - type: recall_at_5 value: 39.893 - type: map_at_1 value: 23.344 - type: map_at_10 value: 31.636999999999997 - type: map_at_100 value: 33.065 - type: map_at_1000 value: 33.300000000000004 - type: map_at_3 value: 29.351 - type: map_at_5 value: 30.432 - type: mrr_at_1 value: 27.866000000000003 - type: mrr_at_10 value: 35.587 - type: mrr_at_100 value: 36.52 - type: mrr_at_1000 value: 36.597 - type: mrr_at_3 value: 33.696 - type: mrr_at_5 value: 34.713 - type: ndcg_at_1 value: 27.866000000000003 - type: ndcg_at_10 value: 36.61 - type: ndcg_at_100 value: 41.88 - type: ndcg_at_1000 value: 45.105000000000004 - type: ndcg_at_3 value: 33.038000000000004 - type: ndcg_at_5 value: 34.331 - type: precision_at_1 value: 27.866000000000003 - type: precision_at_10 value: 6.917 - type: precision_at_100 value: 1.3599999999999999 - type: precision_at_1000 value: 0.233 - type: precision_at_3 value: 15.547 - type: precision_at_5 value: 10.791 - type: recall_at_1 value: 23.344 - type: recall_at_10 value: 45.782000000000004 - type: recall_at_100 value: 69.503 - type: recall_at_1000 value: 90.742 - type: recall_at_3 value: 35.160000000000004 - type: recall_at_5 value: 39.058 - type: map_at_1 value: 20.776 - type: map_at_10 value: 27.285999999999998 - type: map_at_100 value: 28.235 - type: map_at_1000 value: 28.337 - type: map_at_3 value: 25.147000000000002 - type: map_at_5 value: 26.401999999999997 - type: mrr_at_1 value: 22.921 - type: mrr_at_10 value: 29.409999999999997 - type: mrr_at_100 value: 30.275000000000002 - type: mrr_at_1000 value: 30.354999999999997 - type: mrr_at_3 value: 27.418 - type: mrr_at_5 value: 28.592000000000002 - type: ndcg_at_1 value: 22.921 - type: ndcg_at_10 value: 31.239 - type: ndcg_at_100 value: 35.965 - type: ndcg_at_1000 value: 38.602 - type: ndcg_at_3 value: 27.174 - type: ndcg_at_5 value: 29.229 - type: precision_at_1 value: 22.921 - type: precision_at_10 value: 4.806 - type: precision_at_100 value: 0.776 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 11.459999999999999 - type: precision_at_5 value: 8.022 - type: recall_at_1 value: 20.776 - type: recall_at_10 value: 41.294 - type: recall_at_100 value: 63.111 - type: recall_at_1000 value: 82.88600000000001 - type: recall_at_3 value: 30.403000000000002 - type: recall_at_5 value: 35.455999999999996 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 9.376 - type: map_at_10 value: 15.926000000000002 - type: map_at_100 value: 17.585 - type: map_at_1000 value: 17.776 - type: map_at_3 value: 13.014000000000001 - type: map_at_5 value: 14.417 - type: mrr_at_1 value: 20.195 - type: mrr_at_10 value: 29.95 - type: mrr_at_100 value: 31.052000000000003 - type: mrr_at_1000 value: 31.108000000000004 - type: mrr_at_3 value: 26.667 - type: mrr_at_5 value: 28.458 - type: ndcg_at_1 value: 20.195 - type: ndcg_at_10 value: 22.871 - type: ndcg_at_100 value: 29.921999999999997 - type: ndcg_at_1000 value: 33.672999999999995 - type: ndcg_at_3 value: 17.782999999999998 - type: ndcg_at_5 value: 19.544 - type: precision_at_1 value: 20.195 - type: precision_at_10 value: 7.394 - type: precision_at_100 value: 1.493 - type: precision_at_1000 value: 0.218 - type: precision_at_3 value: 13.073 - type: precision_at_5 value: 10.436 - type: recall_at_1 value: 9.376 - type: recall_at_10 value: 28.544999999999998 - type: recall_at_100 value: 53.147999999999996 - type: recall_at_1000 value: 74.62 - type: recall_at_3 value: 16.464000000000002 - type: recall_at_5 value: 21.004 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.415000000000001 - type: map_at_10 value: 18.738 - type: map_at_100 value: 27.291999999999998 - type: map_at_1000 value: 28.992 - type: map_at_3 value: 13.196 - type: map_at_5 value: 15.539 - type: mrr_at_1 value: 66.5 - type: mrr_at_10 value: 74.518 - type: mrr_at_100 value: 74.86 - type: mrr_at_1000 value: 74.87 - type: mrr_at_3 value: 72.375 - type: mrr_at_5 value: 73.86200000000001 - type: ndcg_at_1 value: 54.37499999999999 - type: ndcg_at_10 value: 41.317 - type: ndcg_at_100 value: 45.845 - type: ndcg_at_1000 value: 52.92 - type: ndcg_at_3 value: 44.983000000000004 - type: ndcg_at_5 value: 42.989 - type: precision_at_1 value: 66.5 - type: precision_at_10 value: 33.6 - type: precision_at_100 value: 10.972999999999999 - type: precision_at_1000 value: 2.214 - type: precision_at_3 value: 48.583 - type: precision_at_5 value: 42.15 - type: recall_at_1 value: 8.415000000000001 - type: recall_at_10 value: 24.953 - type: recall_at_100 value: 52.48199999999999 - type: recall_at_1000 value: 75.093 - type: recall_at_3 value: 14.341000000000001 - type: recall_at_5 value: 18.468 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 47.06499999999999 - type: f1 value: 41.439327599975385 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 66.02 - type: map_at_10 value: 76.68599999999999 - type: map_at_100 value: 76.959 - type: map_at_1000 value: 76.972 - type: map_at_3 value: 75.024 - type: map_at_5 value: 76.153 - type: mrr_at_1 value: 71.197 - type: mrr_at_10 value: 81.105 - type: mrr_at_100 value: 81.232 - type: mrr_at_1000 value: 81.233 - type: mrr_at_3 value: 79.758 - type: mrr_at_5 value: 80.69 - type: ndcg_at_1 value: 71.197 - type: ndcg_at_10 value: 81.644 - type: ndcg_at_100 value: 82.645 - type: ndcg_at_1000 value: 82.879 - type: ndcg_at_3 value: 78.792 - type: ndcg_at_5 value: 80.528 - type: precision_at_1 value: 71.197 - type: precision_at_10 value: 10.206999999999999 - type: precision_at_100 value: 1.093 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 30.868000000000002 - type: precision_at_5 value: 19.559 - type: recall_at_1 value: 66.02 - type: recall_at_10 value: 92.50699999999999 - type: recall_at_100 value: 96.497 - type: recall_at_1000 value: 97.956 - type: recall_at_3 value: 84.866 - type: recall_at_5 value: 89.16199999999999 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 17.948 - type: map_at_10 value: 29.833 - type: map_at_100 value: 31.487 - type: map_at_1000 value: 31.674000000000003 - type: map_at_3 value: 26.029999999999998 - type: map_at_5 value: 28.038999999999998 - type: mrr_at_1 value: 34.721999999999994 - type: mrr_at_10 value: 44.214999999999996 - type: mrr_at_100 value: 44.994 - type: mrr_at_1000 value: 45.051 - type: mrr_at_3 value: 41.667 - type: mrr_at_5 value: 43.032 - type: ndcg_at_1 value: 34.721999999999994 - type: ndcg_at_10 value: 37.434 - type: ndcg_at_100 value: 43.702000000000005 - type: ndcg_at_1000 value: 46.993 - type: ndcg_at_3 value: 33.56 - type: ndcg_at_5 value: 34.687 - type: precision_at_1 value: 34.721999999999994 - type: precision_at_10 value: 10.401 - type: precision_at_100 value: 1.7049999999999998 - type: precision_at_1000 value: 0.22799999999999998 - type: precision_at_3 value: 22.531000000000002 - type: precision_at_5 value: 16.42 - type: recall_at_1 value: 17.948 - type: recall_at_10 value: 45.062999999999995 - type: recall_at_100 value: 68.191 - type: recall_at_1000 value: 87.954 - type: recall_at_3 value: 31.112000000000002 - type: recall_at_5 value: 36.823 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 36.644 - type: map_at_10 value: 57.658 - type: map_at_100 value: 58.562000000000005 - type: map_at_1000 value: 58.62500000000001 - type: map_at_3 value: 54.022999999999996 - type: map_at_5 value: 56.293000000000006 - type: mrr_at_1 value: 73.288 - type: mrr_at_10 value: 80.51700000000001 - type: mrr_at_100 value: 80.72 - type: mrr_at_1000 value: 80.728 - type: mrr_at_3 value: 79.33200000000001 - type: mrr_at_5 value: 80.085 - type: ndcg_at_1 value: 73.288 - type: ndcg_at_10 value: 66.61 - type: ndcg_at_100 value: 69.723 - type: ndcg_at_1000 value: 70.96000000000001 - type: ndcg_at_3 value: 61.358999999999995 - type: ndcg_at_5 value: 64.277 - type: precision_at_1 value: 73.288 - type: precision_at_10 value: 14.17 - type: precision_at_100 value: 1.659 - type: precision_at_1000 value: 0.182 - type: precision_at_3 value: 39.487 - type: precision_at_5 value: 25.999 - type: recall_at_1 value: 36.644 - type: recall_at_10 value: 70.851 - type: recall_at_100 value: 82.94399999999999 - type: recall_at_1000 value: 91.134 - type: recall_at_3 value: 59.230000000000004 - type: recall_at_5 value: 64.997 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 86.00280000000001 - type: ap value: 80.46302061021223 - type: f1 value: 85.9592921596419 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 22.541 - type: map_at_10 value: 34.625 - type: map_at_100 value: 35.785 - type: map_at_1000 value: 35.831 - type: map_at_3 value: 30.823 - type: map_at_5 value: 32.967999999999996 - type: mrr_at_1 value: 23.180999999999997 - type: mrr_at_10 value: 35.207 - type: mrr_at_100 value: 36.315 - type: mrr_at_1000 value: 36.355 - type: mrr_at_3 value: 31.483 - type: mrr_at_5 value: 33.589999999999996 - type: ndcg_at_1 value: 23.195 - type: ndcg_at_10 value: 41.461 - type: ndcg_at_100 value: 47.032000000000004 - type: ndcg_at_1000 value: 48.199999999999996 - type: ndcg_at_3 value: 33.702 - type: ndcg_at_5 value: 37.522 - type: precision_at_1 value: 23.195 - type: precision_at_10 value: 6.526999999999999 - type: precision_at_100 value: 0.932 - type: precision_at_1000 value: 0.10300000000000001 - type: precision_at_3 value: 14.308000000000002 - type: precision_at_5 value: 10.507 - type: recall_at_1 value: 22.541 - type: recall_at_10 value: 62.524 - type: recall_at_100 value: 88.228 - type: recall_at_1000 value: 97.243 - type: recall_at_3 value: 41.38 - type: recall_at_5 value: 50.55 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.69949840401279 - type: f1 value: 92.54141471311786 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 72.56041951664386 - type: f1 value: 55.88499977508287 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.62071284465365 - type: f1 value: 69.36717546572152 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.35843981170142 - type: f1 value: 76.15496453538884 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.33664956793118 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 27.883839621715524 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.096874986740758 - type: mrr value: 30.97300481932132 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.4 - type: map_at_10 value: 11.852 - type: map_at_100 value: 14.758 - type: map_at_1000 value: 16.134 - type: map_at_3 value: 8.558 - type: map_at_5 value: 10.087 - type: mrr_at_1 value: 44.272 - type: mrr_at_10 value: 52.05800000000001 - type: mrr_at_100 value: 52.689 - type: mrr_at_1000 value: 52.742999999999995 - type: mrr_at_3 value: 50.205999999999996 - type: mrr_at_5 value: 51.367 - type: ndcg_at_1 value: 42.57 - type: ndcg_at_10 value: 32.449 - type: ndcg_at_100 value: 29.596 - type: ndcg_at_1000 value: 38.351 - type: ndcg_at_3 value: 37.044 - type: ndcg_at_5 value: 35.275 - type: precision_at_1 value: 44.272 - type: precision_at_10 value: 23.87 - type: precision_at_100 value: 7.625 - type: precision_at_1000 value: 2.045 - type: precision_at_3 value: 34.365 - type: precision_at_5 value: 30.341 - type: recall_at_1 value: 5.4 - type: recall_at_10 value: 15.943999999999999 - type: recall_at_100 value: 29.805 - type: recall_at_1000 value: 61.695 - type: recall_at_3 value: 9.539 - type: recall_at_5 value: 12.127 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 36.047000000000004 - type: map_at_10 value: 51.6 - type: map_at_100 value: 52.449999999999996 - type: map_at_1000 value: 52.476 - type: map_at_3 value: 47.452 - type: map_at_5 value: 49.964 - type: mrr_at_1 value: 40.382 - type: mrr_at_10 value: 54.273 - type: mrr_at_100 value: 54.859 - type: mrr_at_1000 value: 54.876000000000005 - type: mrr_at_3 value: 51.014 - type: mrr_at_5 value: 52.983999999999995 - type: ndcg_at_1 value: 40.353 - type: ndcg_at_10 value: 59.11300000000001 - type: ndcg_at_100 value: 62.604000000000006 - type: ndcg_at_1000 value: 63.187000000000005 - type: ndcg_at_3 value: 51.513 - type: ndcg_at_5 value: 55.576 - type: precision_at_1 value: 40.353 - type: precision_at_10 value: 9.418 - type: precision_at_100 value: 1.1440000000000001 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 23.078000000000003 - type: precision_at_5 value: 16.250999999999998 - type: recall_at_1 value: 36.047000000000004 - type: recall_at_10 value: 79.22200000000001 - type: recall_at_100 value: 94.23 - type: recall_at_1000 value: 98.51100000000001 - type: recall_at_3 value: 59.678 - type: recall_at_5 value: 68.967 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 68.232 - type: map_at_10 value: 81.674 - type: map_at_100 value: 82.338 - type: map_at_1000 value: 82.36099999999999 - type: map_at_3 value: 78.833 - type: map_at_5 value: 80.58 - type: mrr_at_1 value: 78.64 - type: mrr_at_10 value: 85.164 - type: mrr_at_100 value: 85.317 - type: mrr_at_1000 value: 85.319 - type: mrr_at_3 value: 84.127 - type: mrr_at_5 value: 84.789 - type: ndcg_at_1 value: 78.63 - type: ndcg_at_10 value: 85.711 - type: ndcg_at_100 value: 87.238 - type: ndcg_at_1000 value: 87.444 - type: ndcg_at_3 value: 82.788 - type: ndcg_at_5 value: 84.313 - type: precision_at_1 value: 78.63 - type: precision_at_10 value: 12.977 - type: precision_at_100 value: 1.503 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 36.113 - type: precision_at_5 value: 23.71 - type: recall_at_1 value: 68.232 - type: recall_at_10 value: 93.30199999999999 - type: recall_at_100 value: 98.799 - type: recall_at_1000 value: 99.885 - type: recall_at_3 value: 84.827 - type: recall_at_5 value: 89.188 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 45.71879170816294 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 59.65866311751794 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.218 - type: map_at_10 value: 10.337 - type: map_at_100 value: 12.131 - type: map_at_1000 value: 12.411 - type: map_at_3 value: 7.4270000000000005 - type: map_at_5 value: 8.913 - type: mrr_at_1 value: 20.8 - type: mrr_at_10 value: 30.868000000000002 - type: mrr_at_100 value: 31.903 - type: mrr_at_1000 value: 31.972 - type: mrr_at_3 value: 27.367 - type: mrr_at_5 value: 29.372 - type: ndcg_at_1 value: 20.8 - type: ndcg_at_10 value: 17.765 - type: ndcg_at_100 value: 24.914 - type: ndcg_at_1000 value: 30.206 - type: ndcg_at_3 value: 16.64 - type: ndcg_at_5 value: 14.712 - type: precision_at_1 value: 20.8 - type: precision_at_10 value: 9.24 - type: precision_at_100 value: 1.9560000000000002 - type: precision_at_1000 value: 0.32299999999999995 - type: precision_at_3 value: 15.467 - type: precision_at_5 value: 12.94 - type: recall_at_1 value: 4.218 - type: recall_at_10 value: 18.752 - type: recall_at_100 value: 39.7 - type: recall_at_1000 value: 65.57300000000001 - type: recall_at_3 value: 9.428 - type: recall_at_5 value: 13.133000000000001 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.04338850207233 - type: cos_sim_spearman value: 78.5054651430423 - type: euclidean_pearson value: 80.30739451228612 - type: euclidean_spearman value: 78.48377464299097 - type: manhattan_pearson value: 80.40795049052781 - type: manhattan_spearman value: 78.49506205443114 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 84.11596224442962 - type: cos_sim_spearman value: 76.20997388935461 - type: euclidean_pearson value: 80.56858451349109 - type: euclidean_spearman value: 75.92659183871186 - type: manhattan_pearson value: 80.60246102203844 - type: manhattan_spearman value: 76.03018971432664 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 81.34691640755737 - type: cos_sim_spearman value: 82.4018369631579 - type: euclidean_pearson value: 81.87673092245366 - type: euclidean_spearman value: 82.3671489960678 - type: manhattan_pearson value: 81.88222387719948 - type: manhattan_spearman value: 82.3816590344736 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 81.2836092579524 - type: cos_sim_spearman value: 78.99982781772064 - type: euclidean_pearson value: 80.5184271010527 - type: euclidean_spearman value: 78.89777392101904 - type: manhattan_pearson value: 80.53585705018664 - type: manhattan_spearman value: 78.92898405472994 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.7349907750784 - type: cos_sim_spearman value: 87.7611234446225 - type: euclidean_pearson value: 86.98759326731624 - type: euclidean_spearman value: 87.58321319424618 - type: manhattan_pearson value: 87.03483090370842 - type: manhattan_spearman value: 87.63278333060288 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 81.75873694924825 - type: cos_sim_spearman value: 83.80237999094724 - type: euclidean_pearson value: 83.55023725861537 - type: euclidean_spearman value: 84.12744338577744 - type: manhattan_pearson value: 83.58816983036232 - type: manhattan_spearman value: 84.18520748676501 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.21630882940174 - type: cos_sim_spearman value: 87.72382883437031 - type: euclidean_pearson value: 88.69933350930333 - type: euclidean_spearman value: 88.24660814383081 - type: manhattan_pearson value: 88.77331018833499 - type: manhattan_spearman value: 88.26109989380632 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 61.11854063060489 - type: cos_sim_spearman value: 63.14678634195072 - type: euclidean_pearson value: 61.679090067000864 - type: euclidean_spearman value: 62.28876589509653 - type: manhattan_pearson value: 62.082324165511004 - type: manhattan_spearman value: 62.56030932816679 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 84.00319882832645 - type: cos_sim_spearman value: 85.94529772647257 - type: euclidean_pearson value: 85.6661390122756 - type: euclidean_spearman value: 85.97747815545827 - type: manhattan_pearson value: 85.58422770541893 - type: manhattan_spearman value: 85.9237139181532 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 79.16198731863916 - type: mrr value: 94.25202702163487 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 54.761 - type: map_at_10 value: 64.396 - type: map_at_100 value: 65.07 - type: map_at_1000 value: 65.09899999999999 - type: map_at_3 value: 61.846000000000004 - type: map_at_5 value: 63.284 - type: mrr_at_1 value: 57.667 - type: mrr_at_10 value: 65.83099999999999 - type: mrr_at_100 value: 66.36800000000001 - type: mrr_at_1000 value: 66.39399999999999 - type: mrr_at_3 value: 64.056 - type: mrr_at_5 value: 65.206 - type: ndcg_at_1 value: 57.667 - type: ndcg_at_10 value: 68.854 - type: ndcg_at_100 value: 71.59100000000001 - type: ndcg_at_1000 value: 72.383 - type: ndcg_at_3 value: 64.671 - type: ndcg_at_5 value: 66.796 - type: precision_at_1 value: 57.667 - type: precision_at_10 value: 9.167 - type: precision_at_100 value: 1.053 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 25.444 - type: precision_at_5 value: 16.667 - type: recall_at_1 value: 54.761 - type: recall_at_10 value: 80.9 - type: recall_at_100 value: 92.767 - type: recall_at_1000 value: 99 - type: recall_at_3 value: 69.672 - type: recall_at_5 value: 75.083 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.8079207920792 - type: cos_sim_ap value: 94.88470927617445 - type: cos_sim_f1 value: 90.08179959100204 - type: cos_sim_precision value: 92.15481171548117 - type: cos_sim_recall value: 88.1 - type: dot_accuracy value: 99.58613861386138 - type: dot_ap value: 82.94822578881316 - type: dot_f1 value: 77.33333333333333 - type: dot_precision value: 79.36842105263158 - type: dot_recall value: 75.4 - type: euclidean_accuracy value: 99.8069306930693 - type: euclidean_ap value: 94.81367858031837 - type: euclidean_f1 value: 90.01009081735621 - type: euclidean_precision value: 90.83503054989816 - type: euclidean_recall value: 89.2 - type: manhattan_accuracy value: 99.81188118811882 - type: manhattan_ap value: 94.91405337220161 - type: manhattan_f1 value: 90.2763561924258 - type: manhattan_precision value: 92.45283018867924 - type: manhattan_recall value: 88.2 - type: max_accuracy value: 99.81188118811882 - type: max_ap value: 94.91405337220161 - type: max_f1 value: 90.2763561924258 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 58.511599500053094 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 31.984728147814707 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.93428193939015 - type: mrr value: 50.916557911043206 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.562500894537145 - type: cos_sim_spearman value: 31.162587976726307 - type: dot_pearson value: 22.633662187735762 - type: dot_spearman value: 22.723000282378962 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.219 - type: map_at_10 value: 1.871 - type: map_at_100 value: 10.487 - type: map_at_1000 value: 25.122 - type: map_at_3 value: 0.657 - type: map_at_5 value: 1.0699999999999998 - type: mrr_at_1 value: 84 - type: mrr_at_10 value: 89.567 - type: mrr_at_100 value: 89.748 - type: mrr_at_1000 value: 89.748 - type: mrr_at_3 value: 88.667 - type: mrr_at_5 value: 89.567 - type: ndcg_at_1 value: 80 - type: ndcg_at_10 value: 74.533 - type: ndcg_at_100 value: 55.839000000000006 - type: ndcg_at_1000 value: 49.748 - type: ndcg_at_3 value: 79.53099999999999 - type: ndcg_at_5 value: 78.245 - type: precision_at_1 value: 84 - type: precision_at_10 value: 78.4 - type: precision_at_100 value: 56.99999999999999 - type: precision_at_1000 value: 21.98 - type: precision_at_3 value: 85.333 - type: precision_at_5 value: 84.8 - type: recall_at_1 value: 0.219 - type: recall_at_10 value: 2.02 - type: recall_at_100 value: 13.555 - type: recall_at_1000 value: 46.739999999999995 - type: recall_at_3 value: 0.685 - type: recall_at_5 value: 1.13 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 3.5029999999999997 - type: map_at_10 value: 11.042 - type: map_at_100 value: 16.326999999999998 - type: map_at_1000 value: 17.836 - type: map_at_3 value: 6.174 - type: map_at_5 value: 7.979 - type: mrr_at_1 value: 42.857 - type: mrr_at_10 value: 52.617000000000004 - type: mrr_at_100 value: 53.351000000000006 - type: mrr_at_1000 value: 53.351000000000006 - type: mrr_at_3 value: 46.939 - type: mrr_at_5 value: 50.714000000000006 - type: ndcg_at_1 value: 38.775999999999996 - type: ndcg_at_10 value: 27.125 - type: ndcg_at_100 value: 35.845 - type: ndcg_at_1000 value: 47.377 - type: ndcg_at_3 value: 29.633 - type: ndcg_at_5 value: 28.378999999999998 - type: precision_at_1 value: 42.857 - type: precision_at_10 value: 24.082 - type: precision_at_100 value: 6.877999999999999 - type: precision_at_1000 value: 1.463 - type: precision_at_3 value: 29.932 - type: precision_at_5 value: 28.571 - type: recall_at_1 value: 3.5029999999999997 - type: recall_at_10 value: 17.068 - type: recall_at_100 value: 43.361 - type: recall_at_1000 value: 78.835 - type: recall_at_3 value: 6.821000000000001 - type: recall_at_5 value: 10.357 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.0954 - type: ap value: 14.216844153511959 - type: f1 value: 54.63687418565117 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.46293152235427 - type: f1 value: 61.744177921638645 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 41.12708617788644 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 85.75430649102938 - type: cos_sim_ap value: 73.34252536948081 - type: cos_sim_f1 value: 67.53758935173774 - type: cos_sim_precision value: 63.3672525439408 - type: cos_sim_recall value: 72.29551451187335 - type: dot_accuracy value: 81.71305954580676 - type: dot_ap value: 59.5532209082386 - type: dot_f1 value: 56.18466898954705 - type: dot_precision value: 47.830923248053395 - type: dot_recall value: 68.07387862796834 - type: euclidean_accuracy value: 85.81987244441795 - type: euclidean_ap value: 73.34325409809446 - type: euclidean_f1 value: 67.83451360417443 - type: euclidean_precision value: 64.09955388588871 - type: euclidean_recall value: 72.0316622691293 - type: manhattan_accuracy value: 85.68277999642368 - type: manhattan_ap value: 73.1535450121903 - type: manhattan_f1 value: 67.928237896289 - type: manhattan_precision value: 63.56945722171113 - type: manhattan_recall value: 72.9287598944591 - type: max_accuracy value: 85.81987244441795 - type: max_ap value: 73.34325409809446 - type: max_f1 value: 67.928237896289 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.90441262079403 - type: cos_sim_ap value: 85.79331880741438 - type: cos_sim_f1 value: 78.31563529842548 - type: cos_sim_precision value: 74.6683424102779 - type: cos_sim_recall value: 82.33754234678165 - type: dot_accuracy value: 84.89928978926534 - type: dot_ap value: 75.25819218316 - type: dot_f1 value: 69.88730119720536 - type: dot_precision value: 64.23362374959665 - type: dot_recall value: 76.63227594702803 - type: euclidean_accuracy value: 89.01695967710637 - type: euclidean_ap value: 85.98986606038852 - type: euclidean_f1 value: 78.5277880014722 - type: euclidean_precision value: 75.22211253701876 - type: euclidean_recall value: 82.13735756082538 - type: manhattan_accuracy value: 88.99561454573679 - type: manhattan_ap value: 85.92262421793953 - type: manhattan_f1 value: 78.38866094740769 - type: manhattan_precision value: 76.02373028505282 - type: manhattan_recall value: 80.9054511857099 - type: max_accuracy value: 89.01695967710637 - type: max_ap value: 85.98986606038852 - type: max_f1 value: 78.5277880014722 --- # E5-small-v2 [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022 This model has 12 layers and the embedding size is 384. ## Usage Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ". # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = ['query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."] tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-small-v2') model = AutoModel.from_pretrained('intfloat/e5-small-v2') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Training Details Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf). ## Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## Support for Sentence Transformers Below is an example for usage with sentence_transformers. ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('intfloat/e5-small-v2') input_texts = [ 'query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments." ] embeddings = model.encode(input_texts, normalize_embeddings=True) ``` Package requirements `pip install sentence_transformers~=2.2.2` Contributors: [michaelfeil](https://huggingface.co/michaelfeil) ## FAQ **1. Do I need to add the prefix "query: " and "passage: " to input texts?** Yes, this is how the model is trained, otherwise you will see a performance degradation. Here are some rules of thumb: - Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval. - Use "query: " prefix for symmetric tasks such as semantic similarity, paraphrase retrieval. - Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering. **2. Why are my reproduced results slightly different from reported in the model card?** Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences. **3. Why does the cosine similarity scores distribute around 0.7 to 1.0?** This is a known and expected behavior as we use a low temperature 0.01 for InfoNCE contrastive loss. For text embedding tasks like text retrieval or semantic similarity, what matters is the relative order of the scores instead of the absolute values, so this should not be an issue. ## Citation If you find our paper or models helpful, please consider cite as follows: ``` @article{wang2022text, title={Text Embeddings by Weakly-Supervised Contrastive Pre-training}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2212.03533}, year={2022} } ``` ## Limitations This model only works for English texts. Long texts will be truncated to at most 512 tokens.
[ "SEMANTIC_SIMILARITY", "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
islam23/llama3-8b-RAG_News_Finance
islam23
text-generation
[ "adapter-transformers", "safetensors", "llama", "finance", "text-generation", "conversational", "en", "dataset:qiaojin/PubMedQA", "dataset:databricks/databricks-dolly-15k", "dataset:islam23/fiqa", "dataset:glnmario/news-qa-summarization", "license:mit", "4-bit", "bitsandbytes", "region:us" ]
1,714
1,715
8
2
--- datasets: - qiaojin/PubMedQA - databricks/databricks-dolly-15k - islam23/fiqa - glnmario/news-qa-summarization language: - en library_name: adapter-transformers license: mit metrics: - bertscore - accuracy - bleu pipeline_tag: text-generation tags: - finance ---
[ "SUMMARIZATION" ]
[ "PUBMEDQA" ]
Non_BioNLP
Muennighoff/SGPT-1.3B-weightedmean-msmarco-specb-bitfit
Muennighoff
feature-extraction
[ "sentence-transformers", "pytorch", "gpt_neo", "feature-extraction", "sentence-similarity", "mteb", "arxiv:2202.08904", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,646
1,679
90
5
--- tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb model-index: - name: SGPT-1.3B-weightedmean-msmarco-specb-bitfit results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996 metrics: - type: accuracy value: 65.20895522388061 - type: ap value: 29.59212705444778 - type: f1 value: 59.97099864321921 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: 80714f8dcf8cefc218ef4f8c5a966dd83f75a0e1 metrics: - type: accuracy value: 73.20565 - type: ap value: 67.36680643550963 - type: f1 value: 72.90420520325125 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: c379a6705fec24a2493fa68e011692605f44e119 metrics: - type: accuracy value: 34.955999999999996 - type: f1 value: 34.719324437696955 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: 5b3e3697907184a9b77a3c99ee9ea1a9cbb1e4e3 metrics: - type: map_at_1 value: 26.101999999999997 - type: map_at_10 value: 40.958 - type: map_at_100 value: 42.033 - type: map_at_1000 value: 42.042 - type: map_at_3 value: 36.332 - type: map_at_5 value: 38.608 - type: mrr_at_1 value: 26.387 - type: mrr_at_10 value: 41.051 - type: mrr_at_100 value: 42.118 - type: mrr_at_1000 value: 42.126999999999995 - type: mrr_at_3 value: 36.415 - type: mrr_at_5 value: 38.72 - type: ndcg_at_1 value: 26.101999999999997 - type: ndcg_at_10 value: 49.68 - type: ndcg_at_100 value: 54.257999999999996 - type: ndcg_at_1000 value: 54.486000000000004 - type: ndcg_at_3 value: 39.864 - type: ndcg_at_5 value: 43.980000000000004 - type: precision_at_1 value: 26.101999999999997 - type: precision_at_10 value: 7.781000000000001 - type: precision_at_100 value: 0.979 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 16.714000000000002 - type: precision_at_5 value: 12.034 - type: recall_at_1 value: 26.101999999999997 - type: recall_at_10 value: 77.809 - type: recall_at_100 value: 97.866 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 50.141999999999996 - type: recall_at_5 value: 60.171 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: 0bbdb47bcbe3a90093699aefeed338a0f28a7ee8 metrics: - type: v_measure value: 43.384194916953774 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: b73bd54100e5abfa6e3a23dcafb46fe4d2438dc3 metrics: - type: v_measure value: 33.70962633433912 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 4d853f94cd57d85ec13805aeeac3ae3e5eb4c49c metrics: - type: map value: 58.133058996870076 - type: mrr value: 72.10922041946972 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: 9ee918f184421b6bd48b78f6c714d86546106103 metrics: - type: cos_sim_pearson value: 86.62153841660047 - type: cos_sim_spearman value: 83.01514456843276 - type: euclidean_pearson value: 86.00431518427241 - type: euclidean_spearman value: 83.85552516285783 - type: manhattan_pearson value: 85.83025803351181 - type: manhattan_spearman value: 83.86636878343106 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 44fa15921b4c889113cc5df03dd4901b49161ab7 metrics: - type: accuracy value: 82.05844155844156 - type: f1 value: 82.0185837884764 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 11d0121201d1f1f280e8cc8f3d98fb9c4d9f9c55 metrics: - type: v_measure value: 35.05918333141837 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: c0fab014e1bcb8d3a5e31b2088972a1e01547dc1 metrics: - type: v_measure value: 30.71055028830579 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db metrics: - type: map_at_1 value: 26.519 - type: map_at_10 value: 35.634 - type: map_at_100 value: 36.961 - type: map_at_1000 value: 37.088 - type: map_at_3 value: 32.254 - type: map_at_5 value: 34.22 - type: mrr_at_1 value: 32.332 - type: mrr_at_10 value: 41.168 - type: mrr_at_100 value: 41.977 - type: mrr_at_1000 value: 42.028999999999996 - type: mrr_at_3 value: 38.196999999999996 - type: mrr_at_5 value: 40.036 - type: ndcg_at_1 value: 32.332 - type: ndcg_at_10 value: 41.471000000000004 - type: ndcg_at_100 value: 46.955999999999996 - type: ndcg_at_1000 value: 49.262 - type: ndcg_at_3 value: 35.937999999999995 - type: ndcg_at_5 value: 38.702999999999996 - type: precision_at_1 value: 32.332 - type: precision_at_10 value: 7.7829999999999995 - type: precision_at_100 value: 1.29 - type: precision_at_1000 value: 0.178 - type: precision_at_3 value: 16.834 - type: precision_at_5 value: 12.418 - type: recall_at_1 value: 26.519 - type: recall_at_10 value: 53.190000000000005 - type: recall_at_100 value: 76.56500000000001 - type: recall_at_1000 value: 91.47800000000001 - type: recall_at_3 value: 38.034 - type: recall_at_5 value: 45.245999999999995 - type: map_at_1 value: 25.356 - type: map_at_10 value: 34.596 - type: map_at_100 value: 35.714 - type: map_at_1000 value: 35.839999999999996 - type: map_at_3 value: 32.073 - type: map_at_5 value: 33.475 - type: mrr_at_1 value: 31.274 - type: mrr_at_10 value: 39.592 - type: mrr_at_100 value: 40.284 - type: mrr_at_1000 value: 40.339999999999996 - type: mrr_at_3 value: 37.378 - type: mrr_at_5 value: 38.658 - type: ndcg_at_1 value: 31.274 - type: ndcg_at_10 value: 39.766 - type: ndcg_at_100 value: 44.028 - type: ndcg_at_1000 value: 46.445 - type: ndcg_at_3 value: 35.934 - type: ndcg_at_5 value: 37.751000000000005 - type: precision_at_1 value: 31.274 - type: precision_at_10 value: 7.452 - type: precision_at_100 value: 1.217 - type: precision_at_1000 value: 0.16999999999999998 - type: precision_at_3 value: 17.431 - type: precision_at_5 value: 12.306000000000001 - type: recall_at_1 value: 25.356 - type: recall_at_10 value: 49.344 - type: recall_at_100 value: 67.497 - type: recall_at_1000 value: 83.372 - type: recall_at_3 value: 38.227 - type: recall_at_5 value: 43.187999999999995 - type: map_at_1 value: 32.759 - type: map_at_10 value: 43.937 - type: map_at_100 value: 45.004 - type: map_at_1000 value: 45.07 - type: map_at_3 value: 40.805 - type: map_at_5 value: 42.497 - type: mrr_at_1 value: 37.367 - type: mrr_at_10 value: 47.237 - type: mrr_at_100 value: 47.973 - type: mrr_at_1000 value: 48.010999999999996 - type: mrr_at_3 value: 44.65 - type: mrr_at_5 value: 46.050999999999995 - type: ndcg_at_1 value: 37.367 - type: ndcg_at_10 value: 49.659 - type: ndcg_at_100 value: 54.069 - type: ndcg_at_1000 value: 55.552 - type: ndcg_at_3 value: 44.169000000000004 - type: ndcg_at_5 value: 46.726 - type: precision_at_1 value: 37.367 - type: precision_at_10 value: 8.163 - type: precision_at_100 value: 1.133 - type: precision_at_1000 value: 0.131 - type: precision_at_3 value: 19.707 - type: precision_at_5 value: 13.718 - type: recall_at_1 value: 32.759 - type: recall_at_10 value: 63.341 - type: recall_at_100 value: 82.502 - type: recall_at_1000 value: 93.259 - type: recall_at_3 value: 48.796 - type: recall_at_5 value: 54.921 - type: map_at_1 value: 18.962 - type: map_at_10 value: 25.863000000000003 - type: map_at_100 value: 26.817999999999998 - type: map_at_1000 value: 26.918 - type: map_at_3 value: 23.043 - type: map_at_5 value: 24.599 - type: mrr_at_1 value: 20.452 - type: mrr_at_10 value: 27.301 - type: mrr_at_100 value: 28.233000000000004 - type: mrr_at_1000 value: 28.310000000000002 - type: mrr_at_3 value: 24.539 - type: mrr_at_5 value: 26.108999999999998 - type: ndcg_at_1 value: 20.452 - type: ndcg_at_10 value: 30.354999999999997 - type: ndcg_at_100 value: 35.336 - type: ndcg_at_1000 value: 37.927 - type: ndcg_at_3 value: 24.705 - type: ndcg_at_5 value: 27.42 - type: precision_at_1 value: 20.452 - type: precision_at_10 value: 4.949 - type: precision_at_100 value: 0.7799999999999999 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 10.358 - type: precision_at_5 value: 7.774 - type: recall_at_1 value: 18.962 - type: recall_at_10 value: 43.056 - type: recall_at_100 value: 66.27300000000001 - type: recall_at_1000 value: 85.96000000000001 - type: recall_at_3 value: 27.776 - type: recall_at_5 value: 34.287 - type: map_at_1 value: 11.24 - type: map_at_10 value: 18.503 - type: map_at_100 value: 19.553 - type: map_at_1000 value: 19.689999999999998 - type: map_at_3 value: 16.150000000000002 - type: map_at_5 value: 17.254 - type: mrr_at_1 value: 13.806 - type: mrr_at_10 value: 21.939 - type: mrr_at_100 value: 22.827 - type: mrr_at_1000 value: 22.911 - type: mrr_at_3 value: 19.32 - type: mrr_at_5 value: 20.558 - type: ndcg_at_1 value: 13.806 - type: ndcg_at_10 value: 23.383000000000003 - type: ndcg_at_100 value: 28.834 - type: ndcg_at_1000 value: 32.175 - type: ndcg_at_3 value: 18.651999999999997 - type: ndcg_at_5 value: 20.505000000000003 - type: precision_at_1 value: 13.806 - type: precision_at_10 value: 4.714 - type: precision_at_100 value: 0.864 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 9.328 - type: precision_at_5 value: 6.841 - type: recall_at_1 value: 11.24 - type: recall_at_10 value: 34.854 - type: recall_at_100 value: 59.50299999999999 - type: recall_at_1000 value: 83.25 - type: recall_at_3 value: 22.02 - type: recall_at_5 value: 26.715 - type: map_at_1 value: 23.012 - type: map_at_10 value: 33.048 - type: map_at_100 value: 34.371 - type: map_at_1000 value: 34.489 - type: map_at_3 value: 29.942999999999998 - type: map_at_5 value: 31.602000000000004 - type: mrr_at_1 value: 28.104000000000003 - type: mrr_at_10 value: 37.99 - type: mrr_at_100 value: 38.836 - type: mrr_at_1000 value: 38.891 - type: mrr_at_3 value: 35.226 - type: mrr_at_5 value: 36.693999999999996 - type: ndcg_at_1 value: 28.104000000000003 - type: ndcg_at_10 value: 39.037 - type: ndcg_at_100 value: 44.643 - type: ndcg_at_1000 value: 46.939 - type: ndcg_at_3 value: 33.784 - type: ndcg_at_5 value: 36.126000000000005 - type: precision_at_1 value: 28.104000000000003 - type: precision_at_10 value: 7.2669999999999995 - type: precision_at_100 value: 1.193 - type: precision_at_1000 value: 0.159 - type: precision_at_3 value: 16.298000000000002 - type: precision_at_5 value: 11.684 - type: recall_at_1 value: 23.012 - type: recall_at_10 value: 52.054 - type: recall_at_100 value: 75.622 - type: recall_at_1000 value: 90.675 - type: recall_at_3 value: 37.282 - type: recall_at_5 value: 43.307 - type: map_at_1 value: 21.624 - type: map_at_10 value: 30.209999999999997 - type: map_at_100 value: 31.52 - type: map_at_1000 value: 31.625999999999998 - type: map_at_3 value: 26.951000000000004 - type: map_at_5 value: 28.938999999999997 - type: mrr_at_1 value: 26.941 - type: mrr_at_10 value: 35.13 - type: mrr_at_100 value: 36.15 - type: mrr_at_1000 value: 36.204 - type: mrr_at_3 value: 32.42 - type: mrr_at_5 value: 34.155 - type: ndcg_at_1 value: 26.941 - type: ndcg_at_10 value: 35.726 - type: ndcg_at_100 value: 41.725 - type: ndcg_at_1000 value: 44.105 - type: ndcg_at_3 value: 30.184 - type: ndcg_at_5 value: 33.176 - type: precision_at_1 value: 26.941 - type: precision_at_10 value: 6.654999999999999 - type: precision_at_100 value: 1.1520000000000001 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 14.346 - type: precision_at_5 value: 10.868 - type: recall_at_1 value: 21.624 - type: recall_at_10 value: 47.359 - type: recall_at_100 value: 73.436 - type: recall_at_1000 value: 89.988 - type: recall_at_3 value: 32.34 - type: recall_at_5 value: 39.856 - type: map_at_1 value: 20.67566666666667 - type: map_at_10 value: 28.479333333333333 - type: map_at_100 value: 29.612249999999996 - type: map_at_1000 value: 29.731166666666663 - type: map_at_3 value: 25.884 - type: map_at_5 value: 27.298916666666667 - type: mrr_at_1 value: 24.402583333333332 - type: mrr_at_10 value: 32.07041666666667 - type: mrr_at_100 value: 32.95841666666667 - type: mrr_at_1000 value: 33.025416666666665 - type: mrr_at_3 value: 29.677749999999996 - type: mrr_at_5 value: 31.02391666666667 - type: ndcg_at_1 value: 24.402583333333332 - type: ndcg_at_10 value: 33.326166666666666 - type: ndcg_at_100 value: 38.51566666666667 - type: ndcg_at_1000 value: 41.13791666666667 - type: ndcg_at_3 value: 28.687749999999994 - type: ndcg_at_5 value: 30.84766666666667 - type: precision_at_1 value: 24.402583333333332 - type: precision_at_10 value: 5.943749999999999 - type: precision_at_100 value: 1.0098333333333334 - type: precision_at_1000 value: 0.14183333333333334 - type: precision_at_3 value: 13.211500000000001 - type: precision_at_5 value: 9.548416666666668 - type: recall_at_1 value: 20.67566666666667 - type: recall_at_10 value: 44.245583333333336 - type: recall_at_100 value: 67.31116666666667 - type: recall_at_1000 value: 85.87841666666665 - type: recall_at_3 value: 31.49258333333333 - type: recall_at_5 value: 36.93241666666667 - type: map_at_1 value: 18.34 - type: map_at_10 value: 23.988 - type: map_at_100 value: 24.895 - type: map_at_1000 value: 24.992 - type: map_at_3 value: 21.831 - type: map_at_5 value: 23.0 - type: mrr_at_1 value: 20.399 - type: mrr_at_10 value: 26.186 - type: mrr_at_100 value: 27.017999999999997 - type: mrr_at_1000 value: 27.090999999999998 - type: mrr_at_3 value: 24.08 - type: mrr_at_5 value: 25.230000000000004 - type: ndcg_at_1 value: 20.399 - type: ndcg_at_10 value: 27.799000000000003 - type: ndcg_at_100 value: 32.579 - type: ndcg_at_1000 value: 35.209 - type: ndcg_at_3 value: 23.684 - type: ndcg_at_5 value: 25.521 - type: precision_at_1 value: 20.399 - type: precision_at_10 value: 4.585999999999999 - type: precision_at_100 value: 0.755 - type: precision_at_1000 value: 0.105 - type: precision_at_3 value: 10.276 - type: precision_at_5 value: 7.362 - type: recall_at_1 value: 18.34 - type: recall_at_10 value: 37.456 - type: recall_at_100 value: 59.86 - type: recall_at_1000 value: 79.703 - type: recall_at_3 value: 26.163999999999998 - type: recall_at_5 value: 30.652 - type: map_at_1 value: 12.327 - type: map_at_10 value: 17.572 - type: map_at_100 value: 18.534 - type: map_at_1000 value: 18.653 - type: map_at_3 value: 15.703 - type: map_at_5 value: 16.752 - type: mrr_at_1 value: 15.038000000000002 - type: mrr_at_10 value: 20.726 - type: mrr_at_100 value: 21.61 - type: mrr_at_1000 value: 21.695 - type: mrr_at_3 value: 18.829 - type: mrr_at_5 value: 19.885 - type: ndcg_at_1 value: 15.038000000000002 - type: ndcg_at_10 value: 21.241 - type: ndcg_at_100 value: 26.179000000000002 - type: ndcg_at_1000 value: 29.316 - type: ndcg_at_3 value: 17.762 - type: ndcg_at_5 value: 19.413 - type: precision_at_1 value: 15.038000000000002 - type: precision_at_10 value: 3.8920000000000003 - type: precision_at_100 value: 0.75 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 8.351 - type: precision_at_5 value: 6.187 - type: recall_at_1 value: 12.327 - type: recall_at_10 value: 29.342000000000002 - type: recall_at_100 value: 51.854 - type: recall_at_1000 value: 74.648 - type: recall_at_3 value: 19.596 - type: recall_at_5 value: 23.899 - type: map_at_1 value: 20.594 - type: map_at_10 value: 27.878999999999998 - type: map_at_100 value: 28.926000000000002 - type: map_at_1000 value: 29.041 - type: map_at_3 value: 25.668999999999997 - type: map_at_5 value: 26.773999999999997 - type: mrr_at_1 value: 23.694000000000003 - type: mrr_at_10 value: 31.335 - type: mrr_at_100 value: 32.218 - type: mrr_at_1000 value: 32.298 - type: mrr_at_3 value: 29.26 - type: mrr_at_5 value: 30.328 - type: ndcg_at_1 value: 23.694000000000003 - type: ndcg_at_10 value: 32.456 - type: ndcg_at_100 value: 37.667 - type: ndcg_at_1000 value: 40.571 - type: ndcg_at_3 value: 28.283 - type: ndcg_at_5 value: 29.986 - type: precision_at_1 value: 23.694000000000003 - type: precision_at_10 value: 5.448 - type: precision_at_100 value: 0.9119999999999999 - type: precision_at_1000 value: 0.127 - type: precision_at_3 value: 12.717999999999998 - type: precision_at_5 value: 8.843 - type: recall_at_1 value: 20.594 - type: recall_at_10 value: 43.004999999999995 - type: recall_at_100 value: 66.228 - type: recall_at_1000 value: 87.17099999999999 - type: recall_at_3 value: 31.554 - type: recall_at_5 value: 35.838 - type: map_at_1 value: 20.855999999999998 - type: map_at_10 value: 28.372000000000003 - type: map_at_100 value: 29.87 - type: map_at_1000 value: 30.075000000000003 - type: map_at_3 value: 26.054 - type: map_at_5 value: 27.128999999999998 - type: mrr_at_1 value: 25.494 - type: mrr_at_10 value: 32.735 - type: mrr_at_100 value: 33.794000000000004 - type: mrr_at_1000 value: 33.85 - type: mrr_at_3 value: 30.731 - type: mrr_at_5 value: 31.897 - type: ndcg_at_1 value: 25.494 - type: ndcg_at_10 value: 33.385 - type: ndcg_at_100 value: 39.436 - type: ndcg_at_1000 value: 42.313 - type: ndcg_at_3 value: 29.612 - type: ndcg_at_5 value: 31.186999999999998 - type: precision_at_1 value: 25.494 - type: precision_at_10 value: 6.422999999999999 - type: precision_at_100 value: 1.383 - type: precision_at_1000 value: 0.22399999999999998 - type: precision_at_3 value: 13.834 - type: precision_at_5 value: 10.0 - type: recall_at_1 value: 20.855999999999998 - type: recall_at_10 value: 42.678 - type: recall_at_100 value: 70.224 - type: recall_at_1000 value: 89.369 - type: recall_at_3 value: 31.957 - type: recall_at_5 value: 36.026 - type: map_at_1 value: 16.519000000000002 - type: map_at_10 value: 22.15 - type: map_at_100 value: 23.180999999999997 - type: map_at_1000 value: 23.291999999999998 - type: map_at_3 value: 20.132 - type: map_at_5 value: 21.346 - type: mrr_at_1 value: 17.93 - type: mrr_at_10 value: 23.506 - type: mrr_at_100 value: 24.581 - type: mrr_at_1000 value: 24.675 - type: mrr_at_3 value: 21.503 - type: mrr_at_5 value: 22.686 - type: ndcg_at_1 value: 17.93 - type: ndcg_at_10 value: 25.636 - type: ndcg_at_100 value: 30.736 - type: ndcg_at_1000 value: 33.841 - type: ndcg_at_3 value: 21.546000000000003 - type: ndcg_at_5 value: 23.658 - type: precision_at_1 value: 17.93 - type: precision_at_10 value: 3.993 - type: precision_at_100 value: 0.6890000000000001 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 9.057 - type: precision_at_5 value: 6.58 - type: recall_at_1 value: 16.519000000000002 - type: recall_at_10 value: 35.268 - type: recall_at_100 value: 58.17 - type: recall_at_1000 value: 81.66799999999999 - type: recall_at_3 value: 24.165 - type: recall_at_5 value: 29.254 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: 392b78eb68c07badcd7c2cd8f39af108375dfcce metrics: - type: map_at_1 value: 10.363 - type: map_at_10 value: 18.301000000000002 - type: map_at_100 value: 20.019000000000002 - type: map_at_1000 value: 20.207 - type: map_at_3 value: 14.877 - type: map_at_5 value: 16.544 - type: mrr_at_1 value: 22.866 - type: mrr_at_10 value: 34.935 - type: mrr_at_100 value: 35.802 - type: mrr_at_1000 value: 35.839999999999996 - type: mrr_at_3 value: 30.965999999999998 - type: mrr_at_5 value: 33.204 - type: ndcg_at_1 value: 22.866 - type: ndcg_at_10 value: 26.595000000000002 - type: ndcg_at_100 value: 33.513999999999996 - type: ndcg_at_1000 value: 36.872 - type: ndcg_at_3 value: 20.666999999999998 - type: ndcg_at_5 value: 22.728 - type: precision_at_1 value: 22.866 - type: precision_at_10 value: 8.632 - type: precision_at_100 value: 1.6119999999999999 - type: precision_at_1000 value: 0.22399999999999998 - type: precision_at_3 value: 15.504999999999999 - type: precision_at_5 value: 12.404 - type: recall_at_1 value: 10.363 - type: recall_at_10 value: 33.494 - type: recall_at_100 value: 57.593 - type: recall_at_1000 value: 76.342 - type: recall_at_3 value: 19.157 - type: recall_at_5 value: 24.637999999999998 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: f097057d03ed98220bc7309ddb10b71a54d667d6 metrics: - type: map_at_1 value: 7.436 - type: map_at_10 value: 14.760000000000002 - type: map_at_100 value: 19.206 - type: map_at_1000 value: 20.267 - type: map_at_3 value: 10.894 - type: map_at_5 value: 12.828999999999999 - type: mrr_at_1 value: 54.25 - type: mrr_at_10 value: 63.769 - type: mrr_at_100 value: 64.193 - type: mrr_at_1000 value: 64.211 - type: mrr_at_3 value: 61.458 - type: mrr_at_5 value: 63.096 - type: ndcg_at_1 value: 42.875 - type: ndcg_at_10 value: 31.507 - type: ndcg_at_100 value: 34.559 - type: ndcg_at_1000 value: 41.246 - type: ndcg_at_3 value: 35.058 - type: ndcg_at_5 value: 33.396 - type: precision_at_1 value: 54.25 - type: precision_at_10 value: 24.45 - type: precision_at_100 value: 7.383000000000001 - type: precision_at_1000 value: 1.582 - type: precision_at_3 value: 38.083 - type: precision_at_5 value: 32.6 - type: recall_at_1 value: 7.436 - type: recall_at_10 value: 19.862 - type: recall_at_100 value: 38.981 - type: recall_at_1000 value: 61.038000000000004 - type: recall_at_3 value: 11.949 - type: recall_at_5 value: 15.562000000000001 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 829147f8f75a25f005913200eb5ed41fae320aa1 metrics: - type: accuracy value: 46.39 - type: f1 value: 42.26424885856703 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: 1429cf27e393599b8b359b9b72c666f96b2525f9 metrics: - type: map_at_1 value: 50.916 - type: map_at_10 value: 62.258 - type: map_at_100 value: 62.741 - type: map_at_1000 value: 62.763000000000005 - type: map_at_3 value: 60.01800000000001 - type: map_at_5 value: 61.419999999999995 - type: mrr_at_1 value: 54.964999999999996 - type: mrr_at_10 value: 66.554 - type: mrr_at_100 value: 66.96600000000001 - type: mrr_at_1000 value: 66.97800000000001 - type: mrr_at_3 value: 64.414 - type: mrr_at_5 value: 65.77 - type: ndcg_at_1 value: 54.964999999999996 - type: ndcg_at_10 value: 68.12 - type: ndcg_at_100 value: 70.282 - type: ndcg_at_1000 value: 70.788 - type: ndcg_at_3 value: 63.861999999999995 - type: ndcg_at_5 value: 66.216 - type: precision_at_1 value: 54.964999999999996 - type: precision_at_10 value: 8.998000000000001 - type: precision_at_100 value: 1.016 - type: precision_at_1000 value: 0.107 - type: precision_at_3 value: 25.618000000000002 - type: precision_at_5 value: 16.676 - type: recall_at_1 value: 50.916 - type: recall_at_10 value: 82.04 - type: recall_at_100 value: 91.689 - type: recall_at_1000 value: 95.34899999999999 - type: recall_at_3 value: 70.512 - type: recall_at_5 value: 76.29899999999999 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: 41b686a7f28c59bcaaa5791efd47c67c8ebe28be metrics: - type: map_at_1 value: 13.568 - type: map_at_10 value: 23.264000000000003 - type: map_at_100 value: 24.823999999999998 - type: map_at_1000 value: 25.013999999999996 - type: map_at_3 value: 19.724 - type: map_at_5 value: 21.772 - type: mrr_at_1 value: 27.315 - type: mrr_at_10 value: 35.935 - type: mrr_at_100 value: 36.929 - type: mrr_at_1000 value: 36.985 - type: mrr_at_3 value: 33.591 - type: mrr_at_5 value: 34.848 - type: ndcg_at_1 value: 27.315 - type: ndcg_at_10 value: 29.988 - type: ndcg_at_100 value: 36.41 - type: ndcg_at_1000 value: 40.184999999999995 - type: ndcg_at_3 value: 26.342 - type: ndcg_at_5 value: 27.68 - type: precision_at_1 value: 27.315 - type: precision_at_10 value: 8.565000000000001 - type: precision_at_100 value: 1.508 - type: precision_at_1000 value: 0.219 - type: precision_at_3 value: 17.849999999999998 - type: precision_at_5 value: 13.672999999999998 - type: recall_at_1 value: 13.568 - type: recall_at_10 value: 37.133 - type: recall_at_100 value: 61.475 - type: recall_at_1000 value: 84.372 - type: recall_at_3 value: 24.112000000000002 - type: recall_at_5 value: 29.507 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: 766870b35a1b9ca65e67a0d1913899973551fc6c metrics: - type: map_at_1 value: 30.878 - type: map_at_10 value: 40.868 - type: map_at_100 value: 41.693999999999996 - type: map_at_1000 value: 41.775 - type: map_at_3 value: 38.56 - type: map_at_5 value: 39.947 - type: mrr_at_1 value: 61.756 - type: mrr_at_10 value: 68.265 - type: mrr_at_100 value: 68.671 - type: mrr_at_1000 value: 68.694 - type: mrr_at_3 value: 66.78399999999999 - type: mrr_at_5 value: 67.704 - type: ndcg_at_1 value: 61.756 - type: ndcg_at_10 value: 49.931 - type: ndcg_at_100 value: 53.179 - type: ndcg_at_1000 value: 54.94799999999999 - type: ndcg_at_3 value: 46.103 - type: ndcg_at_5 value: 48.147 - type: precision_at_1 value: 61.756 - type: precision_at_10 value: 10.163 - type: precision_at_100 value: 1.2710000000000001 - type: precision_at_1000 value: 0.151 - type: precision_at_3 value: 28.179 - type: precision_at_5 value: 18.528 - type: recall_at_1 value: 30.878 - type: recall_at_10 value: 50.817 - type: recall_at_100 value: 63.544999999999995 - type: recall_at_1000 value: 75.361 - type: recall_at_3 value: 42.269 - type: recall_at_5 value: 46.32 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 8d743909f834c38949e8323a8a6ce8721ea6c7f4 metrics: - type: accuracy value: 64.04799999999999 - type: ap value: 59.185251455339284 - type: f1 value: 63.947123181349255 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: validation revision: e6838a846e2408f22cf5cc337ebc83e0bcf77849 metrics: - type: map_at_1 value: 18.9 - type: map_at_10 value: 29.748 - type: map_at_100 value: 30.976 - type: map_at_1000 value: 31.041 - type: map_at_3 value: 26.112999999999996 - type: map_at_5 value: 28.197 - type: mrr_at_1 value: 19.413 - type: mrr_at_10 value: 30.322 - type: mrr_at_100 value: 31.497000000000003 - type: mrr_at_1000 value: 31.555 - type: mrr_at_3 value: 26.729000000000003 - type: mrr_at_5 value: 28.788999999999998 - type: ndcg_at_1 value: 19.413 - type: ndcg_at_10 value: 36.048 - type: ndcg_at_100 value: 42.152 - type: ndcg_at_1000 value: 43.772 - type: ndcg_at_3 value: 28.642 - type: ndcg_at_5 value: 32.358 - type: precision_at_1 value: 19.413 - type: precision_at_10 value: 5.785 - type: precision_at_100 value: 0.8869999999999999 - type: precision_at_1000 value: 0.10300000000000001 - type: precision_at_3 value: 12.192 - type: precision_at_5 value: 9.189 - type: recall_at_1 value: 18.9 - type: recall_at_10 value: 55.457 - type: recall_at_100 value: 84.09100000000001 - type: recall_at_1000 value: 96.482 - type: recall_at_3 value: 35.359 - type: recall_at_5 value: 44.275 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3 metrics: - type: accuracy value: 92.07706338349293 - type: f1 value: 91.56680443236652 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: 6299947a7777084cc2d4b64235bf7190381ce755 metrics: - type: accuracy value: 71.18559051527589 - type: f1 value: 52.42887061726789 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea metrics: - type: accuracy value: 68.64828513786148 - type: f1 value: 66.54281381596097 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.04236718224612 - type: f1 value: 75.89170458655639 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: dcefc037ef84348e49b0d29109e891c01067226b metrics: - type: v_measure value: 32.0840369055247 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 3cd0e71dfbe09d4de0f9e5ecba43e7ce280959dc metrics: - type: v_measure value: 29.448729560244537 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.340856463122375 - type: mrr value: 32.398547669840916 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: 7eb63cc0c1eb59324d709ebed25fcab851fa7610 metrics: - type: map_at_1 value: 5.526 - type: map_at_10 value: 11.745 - type: map_at_100 value: 14.831 - type: map_at_1000 value: 16.235 - type: map_at_3 value: 8.716 - type: map_at_5 value: 10.101 - type: mrr_at_1 value: 43.653 - type: mrr_at_10 value: 51.06699999999999 - type: mrr_at_100 value: 51.881 - type: mrr_at_1000 value: 51.912000000000006 - type: mrr_at_3 value: 49.02 - type: mrr_at_5 value: 50.288999999999994 - type: ndcg_at_1 value: 41.949999999999996 - type: ndcg_at_10 value: 32.083 - type: ndcg_at_100 value: 30.049999999999997 - type: ndcg_at_1000 value: 38.661 - type: ndcg_at_3 value: 37.940000000000005 - type: ndcg_at_5 value: 35.455999999999996 - type: precision_at_1 value: 43.344 - type: precision_at_10 value: 23.437 - type: precision_at_100 value: 7.829999999999999 - type: precision_at_1000 value: 2.053 - type: precision_at_3 value: 35.501 - type: precision_at_5 value: 30.464000000000002 - type: recall_at_1 value: 5.526 - type: recall_at_10 value: 15.445999999999998 - type: recall_at_100 value: 31.179000000000002 - type: recall_at_1000 value: 61.578 - type: recall_at_3 value: 9.71 - type: recall_at_5 value: 12.026 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: 6062aefc120bfe8ece5897809fb2e53bfe0d128c metrics: - type: map_at_1 value: 23.467 - type: map_at_10 value: 36.041000000000004 - type: map_at_100 value: 37.268 - type: map_at_1000 value: 37.322 - type: map_at_3 value: 32.09 - type: map_at_5 value: 34.414 - type: mrr_at_1 value: 26.738 - type: mrr_at_10 value: 38.665 - type: mrr_at_100 value: 39.64 - type: mrr_at_1000 value: 39.681 - type: mrr_at_3 value: 35.207 - type: mrr_at_5 value: 37.31 - type: ndcg_at_1 value: 26.709 - type: ndcg_at_10 value: 42.942 - type: ndcg_at_100 value: 48.296 - type: ndcg_at_1000 value: 49.651 - type: ndcg_at_3 value: 35.413 - type: ndcg_at_5 value: 39.367999999999995 - type: precision_at_1 value: 26.709 - type: precision_at_10 value: 7.306 - type: precision_at_100 value: 1.0290000000000001 - type: precision_at_1000 value: 0.116 - type: precision_at_3 value: 16.348 - type: precision_at_5 value: 12.068 - type: recall_at_1 value: 23.467 - type: recall_at_10 value: 61.492999999999995 - type: recall_at_100 value: 85.01100000000001 - type: recall_at_1000 value: 95.261 - type: recall_at_3 value: 41.952 - type: recall_at_5 value: 51.105999999999995 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: 6205996560df11e3a3da9ab4f926788fc30a7db4 metrics: - type: map_at_1 value: 67.51700000000001 - type: map_at_10 value: 81.054 - type: map_at_100 value: 81.727 - type: map_at_1000 value: 81.75200000000001 - type: map_at_3 value: 78.018 - type: map_at_5 value: 79.879 - type: mrr_at_1 value: 77.52 - type: mrr_at_10 value: 84.429 - type: mrr_at_100 value: 84.58200000000001 - type: mrr_at_1000 value: 84.584 - type: mrr_at_3 value: 83.268 - type: mrr_at_5 value: 84.013 - type: ndcg_at_1 value: 77.53 - type: ndcg_at_10 value: 85.277 - type: ndcg_at_100 value: 86.80499999999999 - type: ndcg_at_1000 value: 87.01 - type: ndcg_at_3 value: 81.975 - type: ndcg_at_5 value: 83.723 - type: precision_at_1 value: 77.53 - type: precision_at_10 value: 12.961 - type: precision_at_100 value: 1.502 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 35.713 - type: precision_at_5 value: 23.574 - type: recall_at_1 value: 67.51700000000001 - type: recall_at_10 value: 93.486 - type: recall_at_100 value: 98.9 - type: recall_at_1000 value: 99.92999999999999 - type: recall_at_3 value: 84.17999999999999 - type: recall_at_5 value: 88.97500000000001 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: b2805658ae38990172679479369a78b86de8c390 metrics: - type: v_measure value: 48.225994608749915 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 385e3cb46b4cfa89021f56c4380204149d0efe33 metrics: - type: v_measure value: 53.17635557157765 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: 5c59ef3e437a0a9651c8fe6fde943e7dce59fba5 metrics: - type: map_at_1 value: 3.988 - type: map_at_10 value: 9.4 - type: map_at_100 value: 10.968 - type: map_at_1000 value: 11.257 - type: map_at_3 value: 7.123 - type: map_at_5 value: 8.221 - type: mrr_at_1 value: 19.7 - type: mrr_at_10 value: 29.098000000000003 - type: mrr_at_100 value: 30.247 - type: mrr_at_1000 value: 30.318 - type: mrr_at_3 value: 26.55 - type: mrr_at_5 value: 27.915 - type: ndcg_at_1 value: 19.7 - type: ndcg_at_10 value: 16.176 - type: ndcg_at_100 value: 22.931 - type: ndcg_at_1000 value: 28.301 - type: ndcg_at_3 value: 16.142 - type: ndcg_at_5 value: 13.633999999999999 - type: precision_at_1 value: 19.7 - type: precision_at_10 value: 8.18 - type: precision_at_100 value: 1.8010000000000002 - type: precision_at_1000 value: 0.309 - type: precision_at_3 value: 15.1 - type: precision_at_5 value: 11.74 - type: recall_at_1 value: 3.988 - type: recall_at_10 value: 16.625 - type: recall_at_100 value: 36.61 - type: recall_at_1000 value: 62.805 - type: recall_at_3 value: 9.168 - type: recall_at_5 value: 11.902 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: cos_sim_pearson value: 77.29330379162072 - type: cos_sim_spearman value: 67.22953551111448 - type: euclidean_pearson value: 71.44682700059415 - type: euclidean_spearman value: 66.33178012153247 - type: manhattan_pearson value: 71.46941734657887 - type: manhattan_spearman value: 66.43234359835814 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: fdf84275bb8ce4b49c971d02e84dd1abc677a50f metrics: - type: cos_sim_pearson value: 75.40943196466576 - type: cos_sim_spearman value: 66.59241013465915 - type: euclidean_pearson value: 71.32500540796616 - type: euclidean_spearman value: 67.86667467202591 - type: manhattan_pearson value: 71.48209832089134 - type: manhattan_spearman value: 67.94511626964879 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 1591bfcbe8c69d4bf7fe2a16e2451017832cafb9 metrics: - type: cos_sim_pearson value: 77.08302398877518 - type: cos_sim_spearman value: 77.33151317062642 - type: euclidean_pearson value: 76.77020279715008 - type: euclidean_spearman value: 77.13893776083225 - type: manhattan_pearson value: 76.76732290707477 - type: manhattan_spearman value: 77.14500877396631 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: e2125984e7df8b7871f6ae9949cf6b6795e7c54b metrics: - type: cos_sim_pearson value: 77.46886184932168 - type: cos_sim_spearman value: 71.82815265534886 - type: euclidean_pearson value: 75.19783284299076 - type: euclidean_spearman value: 71.36479611710412 - type: manhattan_pearson value: 75.30375233959337 - type: manhattan_spearman value: 71.46280266488021 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: 1cd7298cac12a96a373b6a2f18738bb3e739a9b6 metrics: - type: cos_sim_pearson value: 80.093017609484 - type: cos_sim_spearman value: 80.65931167868882 - type: euclidean_pearson value: 80.36786337117047 - type: euclidean_spearman value: 81.30521389642827 - type: manhattan_pearson value: 80.37922433220973 - type: manhattan_spearman value: 81.30496664496285 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 360a0b2dff98700d09e634a01e1cc1624d3e42cd metrics: - type: cos_sim_pearson value: 77.98998347238742 - type: cos_sim_spearman value: 78.91151365939403 - type: euclidean_pearson value: 76.40510899217841 - type: euclidean_spearman value: 76.8551459824213 - type: manhattan_pearson value: 76.3986079603294 - type: manhattan_spearman value: 76.8848053254288 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0 metrics: - type: cos_sim_pearson value: 85.63510653472044 - type: cos_sim_spearman value: 86.98674844768605 - type: euclidean_pearson value: 85.205080538809 - type: euclidean_spearman value: 85.53630494151886 - type: manhattan_pearson value: 85.48612469885626 - type: manhattan_spearman value: 85.81741413931921 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906 metrics: - type: cos_sim_pearson value: 66.7257987615171 - type: cos_sim_spearman value: 67.30387805090024 - type: euclidean_pearson value: 69.46877227885867 - type: euclidean_spearman value: 69.33161798704344 - type: manhattan_pearson value: 69.82773311626424 - type: manhattan_spearman value: 69.57199940498796 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: 8913289635987208e6e7c72789e4be2fe94b6abd metrics: - type: cos_sim_pearson value: 79.37322139418472 - type: cos_sim_spearman value: 77.5887175717799 - type: euclidean_pearson value: 78.23006410562164 - type: euclidean_spearman value: 77.18470385673044 - type: manhattan_pearson value: 78.40868369362455 - type: manhattan_spearman value: 77.36675823897656 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: 56a6d0140cf6356659e2a7c1413286a774468d44 metrics: - type: map value: 77.21233007730808 - type: mrr value: 93.0502386139641 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: a75ae049398addde9b70f6b268875f5cbce99089 metrics: - type: map_at_1 value: 54.567 - type: map_at_10 value: 63.653000000000006 - type: map_at_100 value: 64.282 - type: map_at_1000 value: 64.31099999999999 - type: map_at_3 value: 60.478 - type: map_at_5 value: 62.322 - type: mrr_at_1 value: 56.99999999999999 - type: mrr_at_10 value: 64.759 - type: mrr_at_100 value: 65.274 - type: mrr_at_1000 value: 65.301 - type: mrr_at_3 value: 62.333000000000006 - type: mrr_at_5 value: 63.817 - type: ndcg_at_1 value: 56.99999999999999 - type: ndcg_at_10 value: 68.28699999999999 - type: ndcg_at_100 value: 70.98400000000001 - type: ndcg_at_1000 value: 71.695 - type: ndcg_at_3 value: 62.656 - type: ndcg_at_5 value: 65.523 - type: precision_at_1 value: 56.99999999999999 - type: precision_at_10 value: 9.232999999999999 - type: precision_at_100 value: 1.0630000000000002 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 24.221999999999998 - type: precision_at_5 value: 16.333000000000002 - type: recall_at_1 value: 54.567 - type: recall_at_10 value: 81.45599999999999 - type: recall_at_100 value: 93.5 - type: recall_at_1000 value: 99.0 - type: recall_at_3 value: 66.228 - type: recall_at_5 value: 73.489 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: 5a8256d0dff9c4bd3be3ba3e67e4e70173f802ea metrics: - type: cos_sim_accuracy value: 99.74455445544554 - type: cos_sim_ap value: 92.57836032673468 - type: cos_sim_f1 value: 87.0471464019851 - type: cos_sim_precision value: 86.4039408866995 - type: cos_sim_recall value: 87.7 - type: dot_accuracy value: 99.56039603960396 - type: dot_ap value: 82.47233353407186 - type: dot_f1 value: 76.78207739307537 - type: dot_precision value: 78.21576763485477 - type: dot_recall value: 75.4 - type: euclidean_accuracy value: 99.73069306930694 - type: euclidean_ap value: 91.70507666665775 - type: euclidean_f1 value: 86.26262626262626 - type: euclidean_precision value: 87.14285714285714 - type: euclidean_recall value: 85.39999999999999 - type: manhattan_accuracy value: 99.73861386138614 - type: manhattan_ap value: 91.96809459281754 - type: manhattan_f1 value: 86.6 - type: manhattan_precision value: 86.6 - type: manhattan_recall value: 86.6 - type: max_accuracy value: 99.74455445544554 - type: max_ap value: 92.57836032673468 - type: max_f1 value: 87.0471464019851 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 70a89468f6dccacc6aa2b12a6eac54e74328f235 metrics: - type: v_measure value: 60.85593925770172 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: d88009ab563dd0b16cfaf4436abaf97fa3550cf0 metrics: - type: v_measure value: 32.356772998237496 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: ef807ea29a75ec4f91b50fd4191cb4ee4589a9f9 metrics: - type: map value: 49.320607035290735 - type: mrr value: 50.09196481622952 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: 8753c2788d36c01fc6f05d03fe3f7268d63f9122 metrics: - type: cos_sim_pearson value: 31.17573968015504 - type: cos_sim_spearman value: 30.43371643155132 - type: dot_pearson value: 30.164319483092743 - type: dot_spearman value: 29.207082242868754 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: 2c8041b2c07a79b6f7ba8fe6acc72e5d9f92d217 metrics: - type: map_at_1 value: 0.22100000000000003 - type: map_at_10 value: 1.7229999999999999 - type: map_at_100 value: 9.195 - type: map_at_1000 value: 21.999 - type: map_at_3 value: 0.6479999999999999 - type: map_at_5 value: 0.964 - type: mrr_at_1 value: 86.0 - type: mrr_at_10 value: 90.667 - type: mrr_at_100 value: 90.858 - type: mrr_at_1000 value: 90.858 - type: mrr_at_3 value: 90.667 - type: mrr_at_5 value: 90.667 - type: ndcg_at_1 value: 82.0 - type: ndcg_at_10 value: 72.98 - type: ndcg_at_100 value: 52.868 - type: ndcg_at_1000 value: 46.541 - type: ndcg_at_3 value: 80.39699999999999 - type: ndcg_at_5 value: 76.303 - type: precision_at_1 value: 86.0 - type: precision_at_10 value: 75.8 - type: precision_at_100 value: 53.5 - type: precision_at_1000 value: 20.946 - type: precision_at_3 value: 85.333 - type: precision_at_5 value: 79.2 - type: recall_at_1 value: 0.22100000000000003 - type: recall_at_10 value: 1.9109999999999998 - type: recall_at_100 value: 12.437 - type: recall_at_1000 value: 43.606 - type: recall_at_3 value: 0.681 - type: recall_at_5 value: 1.023 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: 527b7d77e16e343303e68cb6af11d6e18b9f7b3b metrics: - type: map_at_1 value: 2.5 - type: map_at_10 value: 9.568999999999999 - type: map_at_100 value: 15.653 - type: map_at_1000 value: 17.188 - type: map_at_3 value: 5.335999999999999 - type: map_at_5 value: 6.522 - type: mrr_at_1 value: 34.694 - type: mrr_at_10 value: 49.184 - type: mrr_at_100 value: 50.512 - type: mrr_at_1000 value: 50.512 - type: mrr_at_3 value: 46.259 - type: mrr_at_5 value: 48.299 - type: ndcg_at_1 value: 30.612000000000002 - type: ndcg_at_10 value: 24.45 - type: ndcg_at_100 value: 35.870999999999995 - type: ndcg_at_1000 value: 47.272999999999996 - type: ndcg_at_3 value: 28.528 - type: ndcg_at_5 value: 25.768 - type: precision_at_1 value: 34.694 - type: precision_at_10 value: 21.429000000000002 - type: precision_at_100 value: 7.265000000000001 - type: precision_at_1000 value: 1.504 - type: precision_at_3 value: 29.252 - type: precision_at_5 value: 24.898 - type: recall_at_1 value: 2.5 - type: recall_at_10 value: 15.844 - type: recall_at_100 value: 45.469 - type: recall_at_1000 value: 81.148 - type: recall_at_3 value: 6.496 - type: recall_at_5 value: 8.790000000000001 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de metrics: - type: accuracy value: 68.7272 - type: ap value: 13.156450706152686 - type: f1 value: 52.814703437064395 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: 62146448f05be9e52a36b8ee9936447ea787eede metrics: - type: accuracy value: 55.6677985285795 - type: f1 value: 55.9373937514999 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 091a54f9a36281ce7d6590ec8c75dd485e7e01d4 metrics: - type: v_measure value: 40.05809562275603 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 82.76807534124099 - type: cos_sim_ap value: 62.37052608803734 - type: cos_sim_f1 value: 59.077414934916646 - type: cos_sim_precision value: 52.07326892109501 - type: cos_sim_recall value: 68.25857519788919 - type: dot_accuracy value: 80.56267509089825 - type: dot_ap value: 54.75349561321037 - type: dot_f1 value: 54.75483794372552 - type: dot_precision value: 49.77336499028707 - type: dot_recall value: 60.844327176781 - type: euclidean_accuracy value: 82.476008821601 - type: euclidean_ap value: 61.17417554210511 - type: euclidean_f1 value: 57.80318696022382 - type: euclidean_precision value: 53.622207176709544 - type: euclidean_recall value: 62.69129287598945 - type: manhattan_accuracy value: 82.48792990403528 - type: manhattan_ap value: 61.044816292966544 - type: manhattan_f1 value: 58.03033951360462 - type: manhattan_precision value: 53.36581045172719 - type: manhattan_recall value: 63.58839050131926 - type: max_accuracy value: 82.76807534124099 - type: max_ap value: 62.37052608803734 - type: max_f1 value: 59.077414934916646 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 87.97881010594946 - type: cos_sim_ap value: 83.78748636891035 - type: cos_sim_f1 value: 75.94113995691386 - type: cos_sim_precision value: 72.22029307590805 - type: cos_sim_recall value: 80.06621496766245 - type: dot_accuracy value: 85.69294058291614 - type: dot_ap value: 78.15363722278026 - type: dot_f1 value: 72.08894926888564 - type: dot_precision value: 67.28959487419075 - type: dot_recall value: 77.62550046196489 - type: euclidean_accuracy value: 87.73625179493149 - type: euclidean_ap value: 83.19012184470559 - type: euclidean_f1 value: 75.5148064623461 - type: euclidean_precision value: 72.63352535381551 - type: euclidean_recall value: 78.6341238065907 - type: manhattan_accuracy value: 87.74013272790779 - type: manhattan_ap value: 83.23305405113403 - type: manhattan_f1 value: 75.63960775639607 - type: manhattan_precision value: 72.563304569246 - type: manhattan_recall value: 78.9882968894364 - type: max_accuracy value: 87.97881010594946 - type: max_ap value: 83.78748636891035 - type: max_f1 value: 75.94113995691386 --- # SGPT-1.3B-weightedmean-msmarco-specb-bitfit ## Usage For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt ## Evaluation Results For eval results, refer to the eval folder or our paper: https://arxiv.org/abs/2202.08904 ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 62398 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 0.0002 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoModel (1): Pooling({'word_embedding_dimension': 2048, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors ```bibtex @article{muennighoff2022sgpt, title={SGPT: GPT Sentence Embeddings for Semantic Search}, author={Muennighoff, Niklas}, journal={arXiv preprint arXiv:2202.08904}, year={2022} } ```
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
twadada/sup-cse
twadada
null
[ "mteb", "model-index", "region:us" ]
1,725
1,725
0
0
--- tags: - mteb model-index: - name: sup-simcse results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: None config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 75.3134328358209 - type: ap value: 38.04095408812765 - type: f1 value: 69.18544755163924 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: None config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 62.44112500000001 - type: ap value: 57.817235716985074 - type: f1 value: 62.37720459608907 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: None config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 32.09000000000001 - type: f1 value: 31.64639321293789 - task: type: Retrieval dataset: name: MTEB ArguAna type: None config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 17.994 - type: map_at_10 value: 31.223 - type: map_at_100 value: 32.557 - type: map_at_1000 value: 32.589 - type: map_at_3 value: 26.919999999999998 - type: map_at_5 value: 29.25 - type: mrr_at_1 value: 18.492 - type: mrr_at_10 value: 31.413000000000004 - type: mrr_at_100 value: 32.747 - type: mrr_at_1000 value: 32.779 - type: mrr_at_3 value: 27.073999999999998 - type: mrr_at_5 value: 29.429 - type: ndcg_at_1 value: 17.994 - type: ndcg_at_10 value: 38.903999999999996 - type: ndcg_at_100 value: 45.172000000000004 - type: ndcg_at_1000 value: 45.989000000000004 - type: ndcg_at_3 value: 29.969 - type: ndcg_at_5 value: 34.147 - type: precision_at_1 value: 17.994 - type: precision_at_10 value: 6.358 - type: precision_at_100 value: 0.924 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 12.945 - type: precision_at_5 value: 9.786999999999999 - type: recall_at_1 value: 17.994 - type: recall_at_10 value: 63.585 - type: recall_at_100 value: 92.39 - type: recall_at_1000 value: 98.72 - type: recall_at_3 value: 38.834 - type: recall_at_5 value: 48.933 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: None config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 31.91520053343146 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: None config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 23.737264050164082 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: None config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 51.46910611293683 - type: mrr value: 65.14081255771006 - task: type: STS dataset: name: MTEB BIOSSES type: None config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 74.4730319095045 - type: cos_sim_spearman value: 73.39685311998065 - type: euclidean_pearson value: 73.54538639388942 - type: euclidean_spearman value: 73.39685311998065 - type: manhattan_pearson value: 75.5608385430094 - type: manhattan_spearman value: 76.07643224829802 - task: type: Classification dataset: name: MTEB Banking77Classification type: None config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 67.97402597402598 - type: f1 value: 66.9475406749266 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: None config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 29.885798705312226 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: None config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 21.675278900147095 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: None config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: map_at_1 value: 18.625 - type: map_at_10 value: 25.241999999999997 - type: map_at_100 value: 26.105 - type: map_at_1000 value: 26.230999999999998 - type: map_at_3 value: 23.24 - type: map_at_5 value: 24.465999999999998 - type: mrr_at_1 value: 23.176 - type: mrr_at_10 value: 29.841 - type: mrr_at_100 value: 30.586999999999996 - type: mrr_at_1000 value: 30.659999999999997 - type: mrr_at_3 value: 27.992 - type: mrr_at_5 value: 29.137 - type: ndcg_at_1 value: 23.176 - type: ndcg_at_10 value: 29.517 - type: ndcg_at_100 value: 33.798 - type: ndcg_at_1000 value: 36.839 - type: ndcg_at_3 value: 26.464 - type: ndcg_at_5 value: 27.971 - type: precision_at_1 value: 23.176 - type: precision_at_10 value: 5.6370000000000005 - type: precision_at_100 value: 0.9400000000000001 - type: precision_at_1000 value: 0.146 - type: precision_at_3 value: 12.684999999999999 - type: precision_at_5 value: 9.099 - type: recall_at_1 value: 18.625 - type: recall_at_10 value: 37.151 - type: recall_at_100 value: 57.02199999999999 - type: recall_at_1000 value: 78.295 - type: recall_at_3 value: 28.112 - type: recall_at_5 value: 32.562999999999995 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval type: None config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: map_at_1 value: 16.832 - type: map_at_10 value: 22.323999999999998 - type: map_at_100 value: 23.204 - type: map_at_1000 value: 23.316 - type: map_at_3 value: 20.631 - type: map_at_5 value: 21.587 - type: mrr_at_1 value: 21.274 - type: mrr_at_10 value: 26.705000000000002 - type: mrr_at_100 value: 27.427 - type: mrr_at_1000 value: 27.498 - type: mrr_at_3 value: 24.979000000000003 - type: mrr_at_5 value: 26.003999999999998 - type: ndcg_at_1 value: 21.274 - type: ndcg_at_10 value: 26.003999999999998 - type: ndcg_at_100 value: 30.196 - type: ndcg_at_1000 value: 33.011 - type: ndcg_at_3 value: 23.083000000000002 - type: ndcg_at_5 value: 24.455 - type: precision_at_1 value: 21.274 - type: precision_at_10 value: 4.764 - type: precision_at_100 value: 0.864 - type: precision_at_1000 value: 0.135 - type: precision_at_3 value: 11.04 - type: precision_at_5 value: 7.8469999999999995 - type: recall_at_1 value: 16.832 - type: recall_at_10 value: 32.729 - type: recall_at_100 value: 51.341 - type: recall_at_1000 value: 70.96900000000001 - type: recall_at_3 value: 24.229 - type: recall_at_5 value: 27.974 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval type: None config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 22.171 - type: map_at_10 value: 29.671999999999997 - type: map_at_100 value: 30.613 - type: map_at_1000 value: 30.705 - type: map_at_3 value: 27.331 - type: map_at_5 value: 28.743000000000002 - type: mrr_at_1 value: 25.705 - type: mrr_at_10 value: 32.637 - type: mrr_at_100 value: 33.428999999999995 - type: mrr_at_1000 value: 33.497 - type: mrr_at_3 value: 30.418 - type: mrr_at_5 value: 31.794 - type: ndcg_at_1 value: 25.705 - type: ndcg_at_10 value: 34.03 - type: ndcg_at_100 value: 38.663 - type: ndcg_at_1000 value: 41.071999999999996 - type: ndcg_at_3 value: 29.64 - type: ndcg_at_5 value: 31.953 - type: precision_at_1 value: 25.705 - type: precision_at_10 value: 5.542 - type: precision_at_100 value: 0.86 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 13.145000000000001 - type: precision_at_5 value: 9.442 - type: recall_at_1 value: 22.171 - type: recall_at_10 value: 44.439 - type: recall_at_100 value: 65.658 - type: recall_at_1000 value: 83.578 - type: recall_at_3 value: 32.681 - type: recall_at_5 value: 38.273 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval type: None config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: map_at_1 value: 9.415999999999999 - type: map_at_10 value: 13.023000000000001 - type: map_at_100 value: 13.776 - type: map_at_1000 value: 13.882 - type: map_at_3 value: 11.834999999999999 - type: map_at_5 value: 12.472 - type: mrr_at_1 value: 10.282 - type: mrr_at_10 value: 14.069999999999999 - type: mrr_at_100 value: 14.802999999999999 - type: mrr_at_1000 value: 14.902999999999999 - type: mrr_at_3 value: 12.825000000000001 - type: mrr_at_5 value: 13.508000000000001 - type: ndcg_at_1 value: 10.282 - type: ndcg_at_10 value: 15.305 - type: ndcg_at_100 value: 19.234 - type: ndcg_at_1000 value: 22.368 - type: ndcg_at_3 value: 12.852 - type: ndcg_at_5 value: 13.985 - type: precision_at_1 value: 10.282 - type: precision_at_10 value: 2.407 - type: precision_at_100 value: 0.45799999999999996 - type: precision_at_1000 value: 0.077 - type: precision_at_3 value: 5.386 - type: precision_at_5 value: 3.842 - type: recall_at_1 value: 9.415999999999999 - type: recall_at_10 value: 21.465999999999998 - type: recall_at_100 value: 40.026 - type: recall_at_1000 value: 64.36699999999999 - type: recall_at_3 value: 14.849 - type: recall_at_5 value: 17.541999999999998 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval type: None config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: map_at_1 value: 5.739 - type: map_at_10 value: 8.01 - type: map_at_100 value: 8.545 - type: map_at_1000 value: 8.649 - type: map_at_3 value: 7.059 - type: map_at_5 value: 7.558 - type: mrr_at_1 value: 7.587000000000001 - type: mrr_at_10 value: 10.222000000000001 - type: mrr_at_100 value: 10.817 - type: mrr_at_1000 value: 10.904 - type: mrr_at_3 value: 9.142 - type: mrr_at_5 value: 9.652 - type: ndcg_at_1 value: 7.587000000000001 - type: ndcg_at_10 value: 9.923 - type: ndcg_at_100 value: 13.29 - type: ndcg_at_1000 value: 16.607 - type: ndcg_at_3 value: 8.043 - type: ndcg_at_5 value: 8.827 - type: precision_at_1 value: 7.587000000000001 - type: precision_at_10 value: 1.8780000000000001 - type: precision_at_100 value: 0.428 - type: precision_at_1000 value: 0.084 - type: precision_at_3 value: 3.814 - type: precision_at_5 value: 2.861 - type: recall_at_1 value: 5.739 - type: recall_at_10 value: 13.776 - type: recall_at_100 value: 29.828 - type: recall_at_1000 value: 54.825 - type: recall_at_3 value: 8.6 - type: recall_at_5 value: 10.566 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval type: None config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: map_at_1 value: 17.172 - type: map_at_10 value: 21.865000000000002 - type: map_at_100 value: 22.857 - type: map_at_1000 value: 22.985 - type: map_at_3 value: 20.326 - type: map_at_5 value: 21.175 - type: mrr_at_1 value: 20.693 - type: mrr_at_10 value: 25.716 - type: mrr_at_100 value: 26.576 - type: mrr_at_1000 value: 26.66 - type: mrr_at_3 value: 24.093999999999998 - type: mrr_at_5 value: 25.037 - type: ndcg_at_1 value: 20.693 - type: ndcg_at_10 value: 25.186999999999998 - type: ndcg_at_100 value: 30.259000000000004 - type: ndcg_at_1000 value: 33.424 - type: ndcg_at_3 value: 22.582 - type: ndcg_at_5 value: 23.783 - type: precision_at_1 value: 20.693 - type: precision_at_10 value: 4.427 - type: precision_at_100 value: 0.8410000000000001 - type: precision_at_1000 value: 0.128 - type: precision_at_3 value: 10.33 - type: precision_at_5 value: 7.315 - type: recall_at_1 value: 17.172 - type: recall_at_10 value: 31.494 - type: recall_at_100 value: 54.008 - type: recall_at_1000 value: 76.591 - type: recall_at_3 value: 23.851 - type: recall_at_5 value: 27.065 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval type: None config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: map_at_1 value: 11.777 - type: map_at_10 value: 16.028000000000002 - type: map_at_100 value: 16.908 - type: map_at_1000 value: 17.026 - type: map_at_3 value: 14.623 - type: map_at_5 value: 15.309000000000001 - type: mrr_at_1 value: 14.498 - type: mrr_at_10 value: 19.3 - type: mrr_at_100 value: 20.125 - type: mrr_at_1000 value: 20.211000000000002 - type: mrr_at_3 value: 17.808 - type: mrr_at_5 value: 18.584 - type: ndcg_at_1 value: 14.498 - type: ndcg_at_10 value: 19.087 - type: ndcg_at_100 value: 23.651 - type: ndcg_at_1000 value: 26.838 - type: ndcg_at_3 value: 16.534 - type: ndcg_at_5 value: 17.546 - type: precision_at_1 value: 14.498 - type: precision_at_10 value: 3.4819999999999998 - type: precision_at_100 value: 0.687 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 7.800999999999999 - type: precision_at_5 value: 5.525 - type: recall_at_1 value: 11.777 - type: recall_at_10 value: 25.151 - type: recall_at_100 value: 45.653 - type: recall_at_1000 value: 68.569 - type: recall_at_3 value: 18.063000000000002 - type: recall_at_5 value: 20.757 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: mteb/cqadupstack config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 12.363666666666665 - type: map_at_10 value: 16.769583333333333 - type: map_at_100 value: 17.52825 - type: map_at_1000 value: 17.63925 - type: map_at_3 value: 15.336083333333333 - type: map_at_5 value: 16.122083333333332 - type: mrr_at_1 value: 14.978166666666667 - type: mrr_at_10 value: 19.563416666666665 - type: mrr_at_100 value: 20.268500000000003 - type: mrr_at_1000 value: 20.353 - type: mrr_at_3 value: 18.08591666666667 - type: mrr_at_5 value: 18.910083333333333 - type: ndcg_at_1 value: 14.978166666666667 - type: ndcg_at_10 value: 19.7445 - type: ndcg_at_100 value: 23.696083333333334 - type: ndcg_at_1000 value: 26.728 - type: ndcg_at_3 value: 17.140833333333333 - type: ndcg_at_5 value: 18.317750000000004 - type: precision_at_1 value: 14.978166666666667 - type: precision_at_10 value: 3.4974166666666666 - type: precision_at_100 value: 0.6443333333333334 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 7.916749999999999 - type: precision_at_5 value: 5.668166666666666 - type: recall_at_1 value: 12.363666666666665 - type: recall_at_10 value: 26.023833333333336 - type: recall_at_100 value: 44.30291666666666 - type: recall_at_1000 value: 66.63566666666667 - type: recall_at_3 value: 18.639416666666666 - type: recall_at_5 value: 21.716833333333334 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval type: None config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: map_at_1 value: 8.491 - type: map_at_10 value: 11.84 - type: map_at_100 value: 12.489 - type: map_at_1000 value: 12.558 - type: map_at_3 value: 10.619 - type: map_at_5 value: 11.350999999999999 - type: mrr_at_1 value: 10.276 - type: mrr_at_10 value: 13.855999999999998 - type: mrr_at_100 value: 14.487 - type: mrr_at_1000 value: 14.549999999999999 - type: mrr_at_3 value: 12.679000000000002 - type: mrr_at_5 value: 13.4 - type: ndcg_at_1 value: 10.276 - type: ndcg_at_10 value: 14.185 - type: ndcg_at_100 value: 17.654 - type: ndcg_at_1000 value: 19.813 - type: ndcg_at_3 value: 11.912 - type: ndcg_at_5 value: 13.061 - type: precision_at_1 value: 10.276 - type: precision_at_10 value: 2.5 - type: precision_at_100 value: 0.45599999999999996 - type: precision_at_1000 value: 0.06999999999999999 - type: precision_at_3 value: 5.521 - type: precision_at_5 value: 4.08 - type: recall_at_1 value: 8.491 - type: recall_at_10 value: 19.528000000000002 - type: recall_at_100 value: 35.942 - type: recall_at_1000 value: 52.614000000000004 - type: recall_at_3 value: 13.092 - type: recall_at_5 value: 15.988 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval type: None config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 6.343999999999999 - type: map_at_10 value: 9.16 - type: map_at_100 value: 9.693999999999999 - type: map_at_1000 value: 9.794 - type: map_at_3 value: 8.32 - type: map_at_5 value: 8.662 - type: mrr_at_1 value: 7.983 - type: mrr_at_10 value: 11.277 - type: mrr_at_100 value: 11.825 - type: mrr_at_1000 value: 11.914 - type: mrr_at_3 value: 10.323 - type: mrr_at_5 value: 10.735999999999999 - type: ndcg_at_1 value: 7.983 - type: ndcg_at_10 value: 11.231 - type: ndcg_at_100 value: 14.202 - type: ndcg_at_1000 value: 17.22 - type: ndcg_at_3 value: 9.600999999999999 - type: ndcg_at_5 value: 10.086 - type: precision_at_1 value: 7.983 - type: precision_at_10 value: 2.116 - type: precision_at_100 value: 0.432 - type: precision_at_1000 value: 0.082 - type: precision_at_3 value: 4.691 - type: precision_at_5 value: 3.2419999999999995 - type: recall_at_1 value: 6.343999999999999 - type: recall_at_10 value: 15.526000000000002 - type: recall_at_100 value: 29.496 - type: recall_at_1000 value: 52.088 - type: recall_at_3 value: 10.665 - type: recall_at_5 value: 12.055 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval type: None config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: map_at_1 value: 10.299999999999999 - type: map_at_10 value: 13.899000000000001 - type: map_at_100 value: 14.575 - type: map_at_1000 value: 14.677999999999999 - type: map_at_3 value: 12.672 - type: map_at_5 value: 13.302 - type: mrr_at_1 value: 12.313 - type: mrr_at_10 value: 16.139 - type: mrr_at_100 value: 16.858 - type: mrr_at_1000 value: 16.957 - type: mrr_at_3 value: 14.77 - type: mrr_at_5 value: 15.459999999999999 - type: ndcg_at_1 value: 12.313 - type: ndcg_at_10 value: 16.408 - type: ndcg_at_100 value: 19.991 - type: ndcg_at_1000 value: 23.236 - type: ndcg_at_3 value: 13.980999999999998 - type: ndcg_at_5 value: 14.976999999999999 - type: precision_at_1 value: 12.313 - type: precision_at_10 value: 2.771 - type: precision_at_100 value: 0.504 - type: precision_at_1000 value: 0.08800000000000001 - type: precision_at_3 value: 6.281000000000001 - type: precision_at_5 value: 4.44 - type: recall_at_1 value: 10.299999999999999 - type: recall_at_10 value: 22.096 - type: recall_at_100 value: 38.515 - type: recall_at_1000 value: 63.157 - type: recall_at_3 value: 15.193000000000001 - type: recall_at_5 value: 17.807000000000002 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval type: None config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: map_at_1 value: 12.791 - type: map_at_10 value: 17.625 - type: map_at_100 value: 18.448999999999998 - type: map_at_1000 value: 18.626 - type: map_at_3 value: 16.176 - type: map_at_5 value: 17.018 - type: mrr_at_1 value: 15.415000000000001 - type: mrr_at_10 value: 20.612 - type: mrr_at_100 value: 21.305 - type: mrr_at_1000 value: 21.410999999999998 - type: mrr_at_3 value: 18.939 - type: mrr_at_5 value: 19.928 - type: ndcg_at_1 value: 15.415000000000001 - type: ndcg_at_10 value: 20.819 - type: ndcg_at_100 value: 24.814 - type: ndcg_at_1000 value: 28.693 - type: ndcg_at_3 value: 18.495 - type: ndcg_at_5 value: 19.645000000000003 - type: precision_at_1 value: 15.415000000000001 - type: precision_at_10 value: 3.913 - type: precision_at_100 value: 0.804 - type: precision_at_1000 value: 0.158 - type: precision_at_3 value: 8.762 - type: precision_at_5 value: 6.4430000000000005 - type: recall_at_1 value: 12.791 - type: recall_at_10 value: 26.791999999999998 - type: recall_at_100 value: 45.705 - type: recall_at_1000 value: 72.547 - type: recall_at_3 value: 19.902 - type: recall_at_5 value: 23.019000000000002 - task: type: Retrieval dataset: name: MTEB CQADupstackWordpressRetrieval type: None config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 8.706 - type: map_at_10 value: 12.547 - type: map_at_100 value: 13.123999999999999 - type: map_at_1000 value: 13.221 - type: map_at_3 value: 11.201 - type: map_at_5 value: 11.822000000000001 - type: mrr_at_1 value: 10.536 - type: mrr_at_10 value: 14.386 - type: mrr_at_100 value: 14.982999999999999 - type: mrr_at_1000 value: 15.071000000000002 - type: mrr_at_3 value: 13.062000000000001 - type: mrr_at_5 value: 13.681 - type: ndcg_at_1 value: 10.536 - type: ndcg_at_10 value: 15.238 - type: ndcg_at_100 value: 18.601 - type: ndcg_at_1000 value: 21.615000000000002 - type: ndcg_at_3 value: 12.503 - type: ndcg_at_5 value: 13.524 - type: precision_at_1 value: 10.536 - type: precision_at_10 value: 2.532 - type: precision_at_100 value: 0.45799999999999996 - type: precision_at_1000 value: 0.078 - type: precision_at_3 value: 5.545 - type: precision_at_5 value: 3.882 - type: recall_at_1 value: 8.706 - type: recall_at_10 value: 22.137999999999998 - type: recall_at_100 value: 38.440999999999995 - type: recall_at_1000 value: 62.028000000000006 - type: recall_at_3 value: 14.435999999999998 - type: recall_at_5 value: 16.993 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: None config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 5.991 - type: map_at_10 value: 10.279 - type: map_at_100 value: 11.522 - type: map_at_1000 value: 11.725 - type: map_at_3 value: 8.358 - type: map_at_5 value: 9.168999999999999 - type: mrr_at_1 value: 12.899 - type: mrr_at_10 value: 20.954 - type: mrr_at_100 value: 22.005 - type: mrr_at_1000 value: 22.076 - type: mrr_at_3 value: 17.849999999999998 - type: mrr_at_5 value: 19.534000000000002 - type: ndcg_at_1 value: 12.899 - type: ndcg_at_10 value: 15.591 - type: ndcg_at_100 value: 21.558 - type: ndcg_at_1000 value: 25.795 - type: ndcg_at_3 value: 11.662 - type: ndcg_at_5 value: 12.916 - type: precision_at_1 value: 12.899 - type: precision_at_10 value: 5.218 - type: precision_at_100 value: 1.164 - type: precision_at_1000 value: 0.194 - type: precision_at_3 value: 8.708 - type: precision_at_5 value: 6.997000000000001 - type: recall_at_1 value: 5.991 - type: recall_at_10 value: 20.165 - type: recall_at_100 value: 41.479 - type: recall_at_1000 value: 66.014 - type: recall_at_3 value: 10.915 - type: recall_at_5 value: 14.033000000000001 - task: type: Retrieval dataset: name: MTEB DBPedia type: None config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 2.752 - type: map_at_10 value: 6.859 - type: map_at_100 value: 9.745 - type: map_at_1000 value: 10.549 - type: map_at_3 value: 4.707 - type: map_at_5 value: 5.7059999999999995 - type: mrr_at_1 value: 35.0 - type: mrr_at_10 value: 44.019999999999996 - type: mrr_at_100 value: 44.773 - type: mrr_at_1000 value: 44.812999999999995 - type: mrr_at_3 value: 40.583000000000006 - type: mrr_at_5 value: 42.921 - type: ndcg_at_1 value: 25.124999999999996 - type: ndcg_at_10 value: 19.197 - type: ndcg_at_100 value: 21.688 - type: ndcg_at_1000 value: 27.944000000000003 - type: ndcg_at_3 value: 20.949 - type: ndcg_at_5 value: 20.333000000000002 - type: precision_at_1 value: 35.0 - type: precision_at_10 value: 17.549999999999997 - type: precision_at_100 value: 5.645 - type: precision_at_1000 value: 1.218 - type: precision_at_3 value: 25.0 - type: precision_at_5 value: 22.5 - type: recall_at_1 value: 2.752 - type: recall_at_10 value: 10.939 - type: recall_at_100 value: 27.250000000000004 - type: recall_at_1000 value: 48.545 - type: recall_at_3 value: 5.79 - type: recall_at_5 value: 7.981000000000001 - task: type: Classification dataset: name: MTEB EmotionClassification type: None config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 40.695 - type: f1 value: 36.79470681884116 - task: type: Retrieval dataset: name: MTEB FEVER type: None config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 12.595999999999998 - type: map_at_10 value: 18.759999999999998 - type: map_at_100 value: 19.605 - type: map_at_1000 value: 19.68 - type: map_at_3 value: 16.692 - type: map_at_5 value: 17.773 - type: mrr_at_1 value: 13.411000000000001 - type: mrr_at_10 value: 19.907 - type: mrr_at_100 value: 20.767 - type: mrr_at_1000 value: 20.837 - type: mrr_at_3 value: 17.762 - type: mrr_at_5 value: 18.895999999999997 - type: ndcg_at_1 value: 13.411000000000001 - type: ndcg_at_10 value: 22.647000000000002 - type: ndcg_at_100 value: 27.084999999999997 - type: ndcg_at_1000 value: 29.296 - type: ndcg_at_3 value: 18.326999999999998 - type: ndcg_at_5 value: 20.265 - type: precision_at_1 value: 13.411000000000001 - type: precision_at_10 value: 3.6740000000000004 - type: precision_at_100 value: 0.608 - type: precision_at_1000 value: 0.082 - type: precision_at_3 value: 7.890999999999999 - type: precision_at_5 value: 5.755 - type: recall_at_1 value: 12.595999999999998 - type: recall_at_10 value: 33.836 - type: recall_at_100 value: 54.82 - type: recall_at_1000 value: 72.15599999999999 - type: recall_at_3 value: 21.961 - type: recall_at_5 value: 26.601999999999997 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: None config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 4.716 - type: map_at_10 value: 7.8340000000000005 - type: map_at_100 value: 8.652 - type: map_at_1000 value: 8.838 - type: map_at_3 value: 6.802999999999999 - type: map_at_5 value: 7.292999999999999 - type: mrr_at_1 value: 9.259 - type: mrr_at_10 value: 14.297 - type: mrr_at_100 value: 15.14 - type: mrr_at_1000 value: 15.260000000000002 - type: mrr_at_3 value: 12.834000000000001 - type: mrr_at_5 value: 13.483 - type: ndcg_at_1 value: 9.259 - type: ndcg_at_10 value: 11.083 - type: ndcg_at_100 value: 15.493000000000002 - type: ndcg_at_1000 value: 19.895 - type: ndcg_at_3 value: 9.494 - type: ndcg_at_5 value: 9.884 - type: precision_at_1 value: 9.259 - type: precision_at_10 value: 3.1329999999999996 - type: precision_at_100 value: 0.761 - type: precision_at_1000 value: 0.148 - type: precision_at_3 value: 6.379 - type: precision_at_5 value: 4.63 - type: recall_at_1 value: 4.716 - type: recall_at_10 value: 14.516000000000002 - type: recall_at_100 value: 31.980999999999998 - type: recall_at_1000 value: 59.891000000000005 - type: recall_at_3 value: 9.123000000000001 - type: recall_at_5 value: 10.975 - task: type: Retrieval dataset: name: MTEB HotpotQA type: None config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 13.977 - type: map_at_10 value: 19.313 - type: map_at_100 value: 20.09 - type: map_at_1000 value: 20.191 - type: map_at_3 value: 17.777 - type: map_at_5 value: 18.621 - type: mrr_at_1 value: 27.954 - type: mrr_at_10 value: 34.175 - type: mrr_at_100 value: 34.886 - type: mrr_at_1000 value: 34.953 - type: mrr_at_3 value: 32.471 - type: mrr_at_5 value: 33.383 - type: ndcg_at_1 value: 27.954 - type: ndcg_at_10 value: 25.102999999999998 - type: ndcg_at_100 value: 28.887 - type: ndcg_at_1000 value: 31.554 - type: ndcg_at_3 value: 22.06 - type: ndcg_at_5 value: 23.491999999999997 - type: precision_at_1 value: 27.954 - type: precision_at_10 value: 5.542 - type: precision_at_100 value: 0.859 - type: precision_at_1000 value: 0.122 - type: precision_at_3 value: 13.886999999999999 - type: precision_at_5 value: 9.464 - type: recall_at_1 value: 13.977 - type: recall_at_10 value: 27.711000000000002 - type: recall_at_100 value: 42.93 - type: recall_at_1000 value: 60.864 - type: recall_at_3 value: 20.831 - type: recall_at_5 value: 23.66 - task: type: Classification dataset: name: MTEB ImdbClassification type: None config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 62.92960000000001 - type: ap value: 58.305412968174274 - type: f1 value: 62.79151880122965 - task: type: Retrieval dataset: name: MTEB MSMARCO type: None config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 4.644 - type: map_at_10 value: 8.189 - type: map_at_100 value: 8.941 - type: map_at_1000 value: 9.034 - type: map_at_3 value: 6.811 - type: map_at_5 value: 7.53 - type: mrr_at_1 value: 4.771 - type: mrr_at_10 value: 8.426 - type: mrr_at_100 value: 9.186 - type: mrr_at_1000 value: 9.276 - type: mrr_at_3 value: 7.031999999999999 - type: mrr_at_5 value: 7.7509999999999994 - type: ndcg_at_1 value: 4.742 - type: ndcg_at_10 value: 10.471 - type: ndcg_at_100 value: 14.651 - type: ndcg_at_1000 value: 17.529 - type: ndcg_at_3 value: 7.599 - type: ndcg_at_5 value: 8.886 - type: precision_at_1 value: 4.742 - type: precision_at_10 value: 1.83 - type: precision_at_100 value: 0.4 - type: precision_at_1000 value: 0.065 - type: precision_at_3 value: 3.3520000000000003 - type: precision_at_5 value: 2.653 - type: recall_at_1 value: 4.644 - type: recall_at_10 value: 17.592 - type: recall_at_100 value: 38.112 - type: recall_at_1000 value: 61.36000000000001 - type: recall_at_3 value: 9.672 - type: recall_at_5 value: 12.766 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: None config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 84.52120383036934 - type: f1 value: 84.02668015212832 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: None config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 57.425900592795266 - type: f1 value: 39.185902692178225 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: None config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.395427034297235 - type: f1 value: 59.07963466976405 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: None config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.7619367854741 - type: f1 value: 65.31589654283285 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: None config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 26.800219005224214 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: None config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 24.232953769516218 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: None config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 28.764776217338706 - type: mrr value: 29.61488447099121 - task: type: Retrieval dataset: name: MTEB NFCorpus type: None config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 3.087 - type: map_at_10 value: 5.01 - type: map_at_100 value: 6.239 - type: map_at_1000 value: 7.2620000000000005 - type: map_at_3 value: 3.884 - type: map_at_5 value: 4.519 - type: mrr_at_1 value: 27.245 - type: mrr_at_10 value: 35.077000000000005 - type: mrr_at_100 value: 36.007 - type: mrr_at_1000 value: 36.087 - type: mrr_at_3 value: 32.405 - type: mrr_at_5 value: 34.2 - type: ndcg_at_1 value: 25.851000000000003 - type: ndcg_at_10 value: 17.534 - type: ndcg_at_100 value: 16.656000000000002 - type: ndcg_at_1000 value: 26.058999999999997 - type: ndcg_at_3 value: 20.155 - type: ndcg_at_5 value: 19.349 - type: precision_at_1 value: 27.245 - type: precision_at_10 value: 13.096 - type: precision_at_100 value: 4.723999999999999 - type: precision_at_1000 value: 1.73 - type: precision_at_3 value: 18.473 - type: precision_at_5 value: 16.718 - type: recall_at_1 value: 3.087 - type: recall_at_10 value: 7.555000000000001 - type: recall_at_100 value: 18.819 - type: recall_at_1000 value: 51.94 - type: recall_at_3 value: 4.387 - type: recall_at_5 value: 5.955 - task: type: Retrieval dataset: name: MTEB NQ type: None config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 7.423 - type: map_at_10 value: 12.739 - type: map_at_100 value: 13.803 - type: map_at_1000 value: 13.905000000000001 - type: map_at_3 value: 10.796999999999999 - type: map_at_5 value: 11.785 - type: mrr_at_1 value: 8.459 - type: mrr_at_10 value: 14.205000000000002 - type: mrr_at_100 value: 15.213 - type: mrr_at_1000 value: 15.303 - type: mrr_at_3 value: 12.138 - type: mrr_at_5 value: 13.192 - type: ndcg_at_1 value: 8.459 - type: ndcg_at_10 value: 16.353 - type: ndcg_at_100 value: 21.83 - type: ndcg_at_1000 value: 24.768 - type: ndcg_at_3 value: 12.237 - type: ndcg_at_5 value: 14.033999999999999 - type: precision_at_1 value: 8.459 - type: precision_at_10 value: 3.053 - type: precision_at_100 value: 0.616 - type: precision_at_1000 value: 0.09 - type: precision_at_3 value: 5.755 - type: precision_at_5 value: 4.455 - type: recall_at_1 value: 7.423 - type: recall_at_10 value: 26.226 - type: recall_at_100 value: 51.73799999999999 - type: recall_at_1000 value: 74.471 - type: recall_at_3 value: 15.134 - type: recall_at_5 value: 19.351 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: None config: default split: test revision: None metrics: - type: map_at_1 value: 59.080999999999996 - type: map_at_10 value: 71.06700000000001 - type: map_at_100 value: 71.892 - type: map_at_1000 value: 71.935 - type: map_at_3 value: 68.291 - type: map_at_5 value: 69.93599999999999 - type: mrr_at_1 value: 68.04 - type: mrr_at_10 value: 75.777 - type: mrr_at_100 value: 76.095 - type: mrr_at_1000 value: 76.105 - type: mrr_at_3 value: 74.248 - type: mrr_at_5 value: 75.199 - type: ndcg_at_1 value: 68.08 - type: ndcg_at_10 value: 75.92999999999999 - type: ndcg_at_100 value: 78.467 - type: ndcg_at_1000 value: 79.08800000000001 - type: ndcg_at_3 value: 72.33800000000001 - type: ndcg_at_5 value: 74.117 - type: precision_at_1 value: 68.08 - type: precision_at_10 value: 11.453000000000001 - type: precision_at_100 value: 1.397 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 31.343 - type: precision_at_5 value: 20.663999999999998 - type: recall_at_1 value: 59.080999999999996 - type: recall_at_10 value: 85.235 - type: recall_at_100 value: 95.232 - type: recall_at_1000 value: 99.003 - type: recall_at_3 value: 74.857 - type: recall_at_5 value: 79.866 - type: map_at_1 value: 2.128 - type: map_at_10 value: 5.224 - type: map_at_100 value: 6.2 - type: map_at_1000 value: 6.41 - type: map_at_3 value: 3.7929999999999997 - type: map_at_5 value: 4.507 - type: mrr_at_1 value: 10.4 - type: mrr_at_10 value: 17.692 - type: mrr_at_100 value: 18.721 - type: mrr_at_1000 value: 18.828 - type: mrr_at_3 value: 15.283 - type: mrr_at_5 value: 16.673 - type: ndcg_at_1 value: 10.4 - type: ndcg_at_10 value: 9.673 - type: ndcg_at_100 value: 14.597 - type: ndcg_at_1000 value: 19.45 - type: ndcg_at_3 value: 8.924999999999999 - type: ndcg_at_5 value: 7.968999999999999 - type: precision_at_1 value: 10.4 - type: precision_at_10 value: 5.09 - type: precision_at_100 value: 1.239 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 8.4 - type: precision_at_5 value: 7.08 - type: recall_at_1 value: 2.128 - type: recall_at_10 value: 10.355 - type: recall_at_100 value: 25.131999999999998 - type: recall_at_1000 value: 49.123 - type: recall_at_3 value: 5.148 - type: recall_at_5 value: 7.227 - type: map_at_1 value: 0.082 - type: map_at_10 value: 0.522 - type: map_at_100 value: 2.443 - type: map_at_1000 value: 6.104 - type: map_at_3 value: 0.21 - type: map_at_5 value: 0.335 - type: mrr_at_1 value: 42.0 - type: mrr_at_10 value: 52.908 - type: mrr_at_100 value: 53.563 - type: mrr_at_1000 value: 53.563 - type: mrr_at_3 value: 48.667 - type: mrr_at_5 value: 50.967 - type: ndcg_at_1 value: 36.0 - type: ndcg_at_10 value: 33.141 - type: ndcg_at_100 value: 23.552999999999997 - type: ndcg_at_1000 value: 21.127000000000002 - type: ndcg_at_3 value: 35.274 - type: ndcg_at_5 value: 35.374 - type: precision_at_1 value: 42.0 - type: precision_at_10 value: 35.8 - type: precision_at_100 value: 24.46 - type: precision_at_1000 value: 10.556000000000001 - type: precision_at_3 value: 38.667 - type: precision_at_5 value: 39.6 - type: recall_at_1 value: 0.082 - type: recall_at_10 value: 0.762 - type: recall_at_100 value: 5.053 - type: recall_at_1000 value: 20.758 - type: recall_at_3 value: 0.256 - type: recall_at_5 value: 0.428 - task: type: Clustering dataset: name: MTEB RedditClustering type: None config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 32.60289362468036 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: None config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 40.4522615771674 - task: type: STS dataset: name: MTEB SICK-R type: None config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 76.367126208385 - type: cos_sim_spearman value: 68.33460522353158 - type: euclidean_pearson value: 72.82526651070455 - type: euclidean_spearman value: 68.3346412251751 - type: manhattan_pearson value: 69.54331108044752 - type: manhattan_spearman value: 65.45302638171147 - task: type: STS dataset: name: MTEB STS12 type: None config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 70.85977556465735 - type: cos_sim_spearman value: 63.97433796108382 - type: euclidean_pearson value: 67.10317546783216 - type: euclidean_spearman value: 63.9755699230653 - type: manhattan_pearson value: 65.52418699963135 - type: manhattan_spearman value: 63.21258915122308 - task: type: STS dataset: name: MTEB STS13 type: None config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 73.1838126630538 - type: cos_sim_spearman value: 75.39011836536339 - type: euclidean_pearson value: 74.9782801873135 - type: euclidean_spearman value: 75.39015619876471 - type: manhattan_pearson value: 74.93646082889957 - type: manhattan_spearman value: 75.27860913964665 - task: type: STS dataset: name: MTEB STS14 type: None config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 75.33602852852485 - type: cos_sim_spearman value: 72.88610761015623 - type: euclidean_pearson value: 74.74618127482613 - type: euclidean_spearman value: 72.88609792110957 - type: manhattan_pearson value: 74.00797719940813 - type: manhattan_spearman value: 72.30319426143821 - task: type: STS dataset: name: MTEB STS15 type: None config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 78.48344310675034 - type: cos_sim_spearman value: 79.5602100440999 - type: euclidean_pearson value: 79.6105408248732 - type: euclidean_spearman value: 79.56019504416581 - type: manhattan_pearson value: 79.94079490935202 - type: manhattan_spearman value: 80.19729048900355 - task: type: STS dataset: name: MTEB STS16 type: None config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 74.02545587458899 - type: cos_sim_spearman value: 75.2780369994608 - type: euclidean_pearson value: 74.81275720654355 - type: euclidean_spearman value: 75.27802996000392 - type: manhattan_pearson value: 75.365898594137 - type: manhattan_spearman value: 75.99988657994446 - task: type: STS dataset: name: MTEB STS17 (en-en) type: None config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 83.37158192569701 - type: cos_sim_spearman value: 84.53055784651096 - type: euclidean_pearson value: 84.50120181859903 - type: euclidean_spearman value: 84.53143170799756 - type: manhattan_pearson value: 83.67777600066678 - type: manhattan_spearman value: 84.30878918286747 - task: type: STS dataset: name: MTEB STS22 (en) type: None config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 62.60218549460609 - type: cos_sim_spearman value: 60.4194432631911 - type: euclidean_pearson value: 62.3707197693071 - type: euclidean_spearman value: 60.4194432631911 - type: manhattan_pearson value: 62.154998597530245 - type: manhattan_spearman value: 60.670846068288775 - task: type: STS dataset: name: MTEB STSBenchmark type: None config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 76.44357753178365 - type: cos_sim_spearman value: 75.1868455148921 - type: euclidean_pearson value: 76.36802547145336 - type: euclidean_spearman value: 75.1868639836731 - type: manhattan_pearson value: 75.85844829336776 - type: manhattan_spearman value: 74.88123489350302 - task: type: Reranking dataset: name: MTEB SciDocsRR type: None config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 68.51995354068147 - type: mrr value: 88.515406162465 - task: type: Retrieval dataset: name: MTEB SciFact type: None config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 25.611 - type: map_at_10 value: 32.638 - type: map_at_100 value: 33.593 - type: map_at_1000 value: 33.672000000000004 - type: map_at_3 value: 30.166999999999998 - type: map_at_5 value: 31.306 - type: mrr_at_1 value: 27.0 - type: mrr_at_10 value: 34.104 - type: mrr_at_100 value: 34.98 - type: mrr_at_1000 value: 35.046 - type: mrr_at_3 value: 31.889 - type: mrr_at_5 value: 32.906 - type: ndcg_at_1 value: 27.0 - type: ndcg_at_10 value: 36.882999999999996 - type: ndcg_at_100 value: 41.941 - type: ndcg_at_1000 value: 44.341 - type: ndcg_at_3 value: 31.945 - type: ndcg_at_5 value: 33.833999999999996 - type: precision_at_1 value: 27.0 - type: precision_at_10 value: 5.367 - type: precision_at_100 value: 0.823 - type: precision_at_1000 value: 0.105 - type: precision_at_3 value: 12.778 - type: precision_at_5 value: 8.733 - type: recall_at_1 value: 25.611 - type: recall_at_10 value: 48.888999999999996 - type: recall_at_100 value: 73.089 - type: recall_at_1000 value: 92.45 - type: recall_at_3 value: 35.25 - type: recall_at_5 value: 39.778000000000006 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: None config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.56732673267327 - type: cos_sim_ap value: 83.46633430421917 - type: cos_sim_f1 value: 76.71816728822589 - type: cos_sim_precision value: 82.09806157354618 - type: cos_sim_recall value: 72.0 - type: dot_accuracy value: 99.56732673267327 - type: dot_ap value: 83.46633430421917 - type: dot_f1 value: 76.71816728822589 - type: dot_precision value: 82.09806157354618 - type: dot_recall value: 72.0 - type: euclidean_accuracy value: 99.56732673267327 - type: euclidean_ap value: 83.46633430421917 - type: euclidean_f1 value: 76.71816728822589 - type: euclidean_precision value: 82.09806157354618 - type: euclidean_recall value: 72.0 - type: manhattan_accuracy value: 99.67722772277227 - type: manhattan_ap value: 89.71022888608738 - type: manhattan_f1 value: 82.66129032258065 - type: manhattan_precision value: 83.33333333333334 - type: manhattan_recall value: 82.0 - type: max_accuracy value: 99.67722772277227 - type: max_ap value: 89.71022888608738 - type: max_f1 value: 82.66129032258065 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: None config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 38.950459678791944 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: None config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 28.43306179999893 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: None config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 41.30089691776191 - type: mrr value: 41.81213394448689 - task: type: Summarization dataset: name: MTEB SummEval type: None config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.787391241122602 - type: cos_sim_spearman value: 30.10535337344201 - type: dot_pearson value: 29.787391177030514 - type: dot_spearman value: 30.02195919678659 - task: type: Retrieval dataset: name: MTEB Touche2020 type: None config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 1.6310000000000002 - type: map_at_10 value: 5.423 - type: map_at_100 value: 9.573 - type: map_at_1000 value: 11.048 - type: map_at_3 value: 3.318 - type: map_at_5 value: 4.093999999999999 - type: mrr_at_1 value: 24.490000000000002 - type: mrr_at_10 value: 38.083 - type: mrr_at_100 value: 39.221000000000004 - type: mrr_at_1000 value: 39.227000000000004 - type: mrr_at_3 value: 34.694 - type: mrr_at_5 value: 35.510000000000005 - type: ndcg_at_1 value: 22.448999999999998 - type: ndcg_at_10 value: 17.093 - type: ndcg_at_100 value: 27.413999999999998 - type: ndcg_at_1000 value: 39.706 - type: ndcg_at_3 value: 19.91 - type: ndcg_at_5 value: 18.007 - type: precision_at_1 value: 24.490000000000002 - type: precision_at_10 value: 15.306000000000001 - type: precision_at_100 value: 6.327000000000001 - type: precision_at_1000 value: 1.392 - type: precision_at_3 value: 21.088 - type: precision_at_5 value: 17.959 - type: recall_at_1 value: 1.6310000000000002 - type: recall_at_10 value: 10.459 - type: recall_at_100 value: 38.216 - type: recall_at_1000 value: 75.151 - type: recall_at_3 value: 4.634 - type: recall_at_5 value: 6.0409999999999995 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: None config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.7038 - type: ap value: 14.220727052200816 - type: f1 value: 54.464733589173576 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: None config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 52.478777589134125 - type: f1 value: 52.650244129490055 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: None config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 33.99931902240702 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: None config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 82.52965369255529 - type: cos_sim_ap value: 62.51268142006894 - type: cos_sim_f1 value: 59.800097513408105 - type: cos_sim_precision value: 55.57317625736293 - type: cos_sim_recall value: 64.72295514511873 - type: dot_accuracy value: 82.52965369255529 - type: dot_ap value: 62.51268142006894 - type: dot_f1 value: 59.800097513408105 - type: dot_precision value: 55.57317625736293 - type: dot_recall value: 64.72295514511873 - type: euclidean_accuracy value: 82.52965369255529 - type: euclidean_ap value: 62.51268142006894 - type: euclidean_f1 value: 59.800097513408105 - type: euclidean_precision value: 55.57317625736293 - type: euclidean_recall value: 64.72295514511873 - type: manhattan_accuracy value: 81.77862549919533 - type: manhattan_ap value: 59.21825218726623 - type: manhattan_f1 value: 56.64351333183957 - type: manhattan_precision value: 49.22118380062305 - type: manhattan_recall value: 66.7018469656992 - type: max_accuracy value: 82.52965369255529 - type: max_ap value: 62.51268142006894 - type: max_f1 value: 59.800097513408105 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: None config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 87.01051732836575 - type: cos_sim_ap value: 82.14373850939549 - type: cos_sim_f1 value: 74.26138467234358 - type: cos_sim_precision value: 71.52332049636286 - type: cos_sim_recall value: 77.21743147520789 - type: dot_accuracy value: 87.01051732836575 - type: dot_ap value: 82.14374565499902 - type: dot_f1 value: 74.26138467234358 - type: dot_precision value: 71.52332049636286 - type: dot_recall value: 77.21743147520789 - type: euclidean_accuracy value: 87.01051732836575 - type: euclidean_ap value: 82.14373956027772 - type: euclidean_f1 value: 74.26138467234358 - type: euclidean_precision value: 71.52332049636286 - type: euclidean_recall value: 77.21743147520789 - type: manhattan_accuracy value: 87.06290992354562 - type: manhattan_ap value: 82.16999565860169 - type: manhattan_f1 value: 74.21972757498838 - type: manhattan_precision value: 69.27851565107122 - type: manhattan_recall value: 79.91992608561749 - type: max_accuracy value: 87.06290992354562 - type: max_ap value: 82.16999565860169 - type: max_f1 value: 74.26138467234358 ---
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
weakit-v/bge-base-en-v1.5-onnx
weakit-v
feature-extraction
[ "sentence-transformers", "onnx", "bert", "feature-extraction", "sentence-similarity", "transformers", "mteb", "en", "arxiv:2310.07554", "arxiv:2309.07597", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,703
1,703
11
0
--- language: - en license: mit tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - mteb model-index: - name: bge-base-en-v1.5 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.14925373134328 - type: ap value: 39.32336517995478 - type: f1 value: 70.16902252611425 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 93.386825 - type: ap value: 90.21276917991995 - type: f1 value: 93.37741030006174 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.846000000000004 - type: f1 value: 48.14646269778261 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 40.754000000000005 - type: map_at_10 value: 55.761 - type: map_at_100 value: 56.330999999999996 - type: map_at_1000 value: 56.333999999999996 - type: map_at_3 value: 51.92 - type: map_at_5 value: 54.010999999999996 - type: mrr_at_1 value: 41.181 - type: mrr_at_10 value: 55.967999999999996 - type: mrr_at_100 value: 56.538 - type: mrr_at_1000 value: 56.542 - type: mrr_at_3 value: 51.980000000000004 - type: mrr_at_5 value: 54.208999999999996 - type: ndcg_at_1 value: 40.754000000000005 - type: ndcg_at_10 value: 63.605000000000004 - type: ndcg_at_100 value: 66.05199999999999 - type: ndcg_at_1000 value: 66.12 - type: ndcg_at_3 value: 55.708 - type: ndcg_at_5 value: 59.452000000000005 - type: precision_at_1 value: 40.754000000000005 - type: precision_at_10 value: 8.841000000000001 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.238 - type: precision_at_5 value: 15.149000000000001 - type: recall_at_1 value: 40.754000000000005 - type: recall_at_10 value: 88.407 - type: recall_at_100 value: 99.14699999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 66.714 - type: recall_at_5 value: 75.747 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 48.74884539679369 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 42.8075893810716 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 62.128470519187736 - type: mrr value: 74.28065778481289 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 89.24629081484655 - type: cos_sim_spearman value: 86.93752309911496 - type: euclidean_pearson value: 87.58589628573816 - type: euclidean_spearman value: 88.05622328825284 - type: manhattan_pearson value: 87.5594959805773 - type: manhattan_spearman value: 88.19658793233961 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 86.9512987012987 - type: f1 value: 86.92515357973708 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 39.10263762928872 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 36.69711517426737 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 32.327 - type: map_at_10 value: 44.099 - type: map_at_100 value: 45.525 - type: map_at_1000 value: 45.641999999999996 - type: map_at_3 value: 40.47 - type: map_at_5 value: 42.36 - type: mrr_at_1 value: 39.199 - type: mrr_at_10 value: 49.651 - type: mrr_at_100 value: 50.29 - type: mrr_at_1000 value: 50.329 - type: mrr_at_3 value: 46.924 - type: mrr_at_5 value: 48.548 - type: ndcg_at_1 value: 39.199 - type: ndcg_at_10 value: 50.773 - type: ndcg_at_100 value: 55.67999999999999 - type: ndcg_at_1000 value: 57.495 - type: ndcg_at_3 value: 45.513999999999996 - type: ndcg_at_5 value: 47.703 - type: precision_at_1 value: 39.199 - type: precision_at_10 value: 9.914000000000001 - type: precision_at_100 value: 1.5310000000000001 - type: precision_at_1000 value: 0.198 - type: precision_at_3 value: 21.984 - type: precision_at_5 value: 15.737000000000002 - type: recall_at_1 value: 32.327 - type: recall_at_10 value: 63.743 - type: recall_at_100 value: 84.538 - type: recall_at_1000 value: 96.089 - type: recall_at_3 value: 48.065000000000005 - type: recall_at_5 value: 54.519 - type: map_at_1 value: 32.671 - type: map_at_10 value: 42.954 - type: map_at_100 value: 44.151 - type: map_at_1000 value: 44.287 - type: map_at_3 value: 39.912 - type: map_at_5 value: 41.798 - type: mrr_at_1 value: 41.465 - type: mrr_at_10 value: 49.351 - type: mrr_at_100 value: 49.980000000000004 - type: mrr_at_1000 value: 50.016000000000005 - type: mrr_at_3 value: 47.144000000000005 - type: mrr_at_5 value: 48.592999999999996 - type: ndcg_at_1 value: 41.465 - type: ndcg_at_10 value: 48.565999999999995 - type: ndcg_at_100 value: 52.76499999999999 - type: ndcg_at_1000 value: 54.749 - type: ndcg_at_3 value: 44.57 - type: ndcg_at_5 value: 46.759 - type: precision_at_1 value: 41.465 - type: precision_at_10 value: 9.107999999999999 - type: precision_at_100 value: 1.433 - type: precision_at_1000 value: 0.191 - type: precision_at_3 value: 21.423000000000002 - type: precision_at_5 value: 15.414 - type: recall_at_1 value: 32.671 - type: recall_at_10 value: 57.738 - type: recall_at_100 value: 75.86500000000001 - type: recall_at_1000 value: 88.36 - type: recall_at_3 value: 45.626 - type: recall_at_5 value: 51.812000000000005 - type: map_at_1 value: 41.185 - type: map_at_10 value: 53.929 - type: map_at_100 value: 54.92 - type: map_at_1000 value: 54.967999999999996 - type: map_at_3 value: 50.70400000000001 - type: map_at_5 value: 52.673 - type: mrr_at_1 value: 47.398 - type: mrr_at_10 value: 57.303000000000004 - type: mrr_at_100 value: 57.959 - type: mrr_at_1000 value: 57.985 - type: mrr_at_3 value: 54.932 - type: mrr_at_5 value: 56.464999999999996 - type: ndcg_at_1 value: 47.398 - type: ndcg_at_10 value: 59.653 - type: ndcg_at_100 value: 63.627 - type: ndcg_at_1000 value: 64.596 - type: ndcg_at_3 value: 54.455 - type: ndcg_at_5 value: 57.245000000000005 - type: precision_at_1 value: 47.398 - type: precision_at_10 value: 9.524000000000001 - type: precision_at_100 value: 1.243 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 24.389 - type: precision_at_5 value: 16.752 - type: recall_at_1 value: 41.185 - type: recall_at_10 value: 73.193 - type: recall_at_100 value: 90.357 - type: recall_at_1000 value: 97.253 - type: recall_at_3 value: 59.199999999999996 - type: recall_at_5 value: 66.118 - type: map_at_1 value: 27.27 - type: map_at_10 value: 36.223 - type: map_at_100 value: 37.218 - type: map_at_1000 value: 37.293 - type: map_at_3 value: 33.503 - type: map_at_5 value: 35.097 - type: mrr_at_1 value: 29.492 - type: mrr_at_10 value: 38.352000000000004 - type: mrr_at_100 value: 39.188 - type: mrr_at_1000 value: 39.247 - type: mrr_at_3 value: 35.876000000000005 - type: mrr_at_5 value: 37.401 - type: ndcg_at_1 value: 29.492 - type: ndcg_at_10 value: 41.239 - type: ndcg_at_100 value: 46.066 - type: ndcg_at_1000 value: 47.992000000000004 - type: ndcg_at_3 value: 36.11 - type: ndcg_at_5 value: 38.772 - type: precision_at_1 value: 29.492 - type: precision_at_10 value: 6.260000000000001 - type: precision_at_100 value: 0.914 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 15.104000000000001 - type: precision_at_5 value: 10.644 - type: recall_at_1 value: 27.27 - type: recall_at_10 value: 54.589 - type: recall_at_100 value: 76.70700000000001 - type: recall_at_1000 value: 91.158 - type: recall_at_3 value: 40.974 - type: recall_at_5 value: 47.327000000000005 - type: map_at_1 value: 17.848 - type: map_at_10 value: 26.207 - type: map_at_100 value: 27.478 - type: map_at_1000 value: 27.602 - type: map_at_3 value: 23.405 - type: map_at_5 value: 24.98 - type: mrr_at_1 value: 21.891 - type: mrr_at_10 value: 31.041999999999998 - type: mrr_at_100 value: 32.092 - type: mrr_at_1000 value: 32.151999999999994 - type: mrr_at_3 value: 28.358 - type: mrr_at_5 value: 29.969 - type: ndcg_at_1 value: 21.891 - type: ndcg_at_10 value: 31.585 - type: ndcg_at_100 value: 37.531 - type: ndcg_at_1000 value: 40.256 - type: ndcg_at_3 value: 26.508 - type: ndcg_at_5 value: 28.894 - type: precision_at_1 value: 21.891 - type: precision_at_10 value: 5.795999999999999 - type: precision_at_100 value: 0.9990000000000001 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 12.769 - type: precision_at_5 value: 9.279 - type: recall_at_1 value: 17.848 - type: recall_at_10 value: 43.452 - type: recall_at_100 value: 69.216 - type: recall_at_1000 value: 88.102 - type: recall_at_3 value: 29.18 - type: recall_at_5 value: 35.347 - type: map_at_1 value: 30.94 - type: map_at_10 value: 41.248000000000005 - type: map_at_100 value: 42.495 - type: map_at_1000 value: 42.602000000000004 - type: map_at_3 value: 37.939 - type: map_at_5 value: 39.924 - type: mrr_at_1 value: 37.824999999999996 - type: mrr_at_10 value: 47.041 - type: mrr_at_100 value: 47.83 - type: mrr_at_1000 value: 47.878 - type: mrr_at_3 value: 44.466 - type: mrr_at_5 value: 46.111999999999995 - type: ndcg_at_1 value: 37.824999999999996 - type: ndcg_at_10 value: 47.223 - type: ndcg_at_100 value: 52.394 - type: ndcg_at_1000 value: 54.432 - type: ndcg_at_3 value: 42.032000000000004 - type: ndcg_at_5 value: 44.772 - type: precision_at_1 value: 37.824999999999996 - type: precision_at_10 value: 8.393 - type: precision_at_100 value: 1.2890000000000001 - type: precision_at_1000 value: 0.164 - type: precision_at_3 value: 19.698 - type: precision_at_5 value: 14.013 - type: recall_at_1 value: 30.94 - type: recall_at_10 value: 59.316 - type: recall_at_100 value: 80.783 - type: recall_at_1000 value: 94.15400000000001 - type: recall_at_3 value: 44.712 - type: recall_at_5 value: 51.932 - type: map_at_1 value: 27.104 - type: map_at_10 value: 36.675999999999995 - type: map_at_100 value: 38.076 - type: map_at_1000 value: 38.189 - type: map_at_3 value: 33.733999999999995 - type: map_at_5 value: 35.287 - type: mrr_at_1 value: 33.904 - type: mrr_at_10 value: 42.55 - type: mrr_at_100 value: 43.434 - type: mrr_at_1000 value: 43.494 - type: mrr_at_3 value: 40.126 - type: mrr_at_5 value: 41.473 - type: ndcg_at_1 value: 33.904 - type: ndcg_at_10 value: 42.414 - type: ndcg_at_100 value: 48.203 - type: ndcg_at_1000 value: 50.437 - type: ndcg_at_3 value: 37.633 - type: ndcg_at_5 value: 39.67 - type: precision_at_1 value: 33.904 - type: precision_at_10 value: 7.82 - type: precision_at_100 value: 1.2409999999999999 - type: precision_at_1000 value: 0.159 - type: precision_at_3 value: 17.884 - type: precision_at_5 value: 12.648000000000001 - type: recall_at_1 value: 27.104 - type: recall_at_10 value: 53.563 - type: recall_at_100 value: 78.557 - type: recall_at_1000 value: 93.533 - type: recall_at_3 value: 39.92 - type: recall_at_5 value: 45.457 - type: map_at_1 value: 27.707749999999997 - type: map_at_10 value: 36.961 - type: map_at_100 value: 38.158833333333334 - type: map_at_1000 value: 38.270333333333326 - type: map_at_3 value: 34.07183333333334 - type: map_at_5 value: 35.69533333333334 - type: mrr_at_1 value: 32.81875 - type: mrr_at_10 value: 41.293 - type: mrr_at_100 value: 42.116499999999995 - type: mrr_at_1000 value: 42.170249999999996 - type: mrr_at_3 value: 38.83983333333333 - type: mrr_at_5 value: 40.29775 - type: ndcg_at_1 value: 32.81875 - type: ndcg_at_10 value: 42.355 - type: ndcg_at_100 value: 47.41374999999999 - type: ndcg_at_1000 value: 49.5805 - type: ndcg_at_3 value: 37.52825 - type: ndcg_at_5 value: 39.83266666666667 - type: precision_at_1 value: 32.81875 - type: precision_at_10 value: 7.382416666666666 - type: precision_at_100 value: 1.1640833333333334 - type: precision_at_1000 value: 0.15383333333333335 - type: precision_at_3 value: 17.134166666666665 - type: precision_at_5 value: 12.174833333333336 - type: recall_at_1 value: 27.707749999999997 - type: recall_at_10 value: 53.945 - type: recall_at_100 value: 76.191 - type: recall_at_1000 value: 91.101 - type: recall_at_3 value: 40.39083333333334 - type: recall_at_5 value: 46.40083333333333 - type: map_at_1 value: 26.482 - type: map_at_10 value: 33.201 - type: map_at_100 value: 34.107 - type: map_at_1000 value: 34.197 - type: map_at_3 value: 31.174000000000003 - type: map_at_5 value: 32.279 - type: mrr_at_1 value: 29.908 - type: mrr_at_10 value: 36.235 - type: mrr_at_100 value: 37.04 - type: mrr_at_1000 value: 37.105 - type: mrr_at_3 value: 34.355999999999995 - type: mrr_at_5 value: 35.382999999999996 - type: ndcg_at_1 value: 29.908 - type: ndcg_at_10 value: 37.325 - type: ndcg_at_100 value: 41.795 - type: ndcg_at_1000 value: 44.105 - type: ndcg_at_3 value: 33.555 - type: ndcg_at_5 value: 35.266999999999996 - type: precision_at_1 value: 29.908 - type: precision_at_10 value: 5.721 - type: precision_at_100 value: 0.8630000000000001 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 14.008000000000001 - type: precision_at_5 value: 9.754999999999999 - type: recall_at_1 value: 26.482 - type: recall_at_10 value: 47.072 - type: recall_at_100 value: 67.27 - type: recall_at_1000 value: 84.371 - type: recall_at_3 value: 36.65 - type: recall_at_5 value: 40.774 - type: map_at_1 value: 18.815 - type: map_at_10 value: 26.369999999999997 - type: map_at_100 value: 27.458 - type: map_at_1000 value: 27.588 - type: map_at_3 value: 23.990000000000002 - type: map_at_5 value: 25.345000000000002 - type: mrr_at_1 value: 22.953000000000003 - type: mrr_at_10 value: 30.342999999999996 - type: mrr_at_100 value: 31.241000000000003 - type: mrr_at_1000 value: 31.319000000000003 - type: mrr_at_3 value: 28.16 - type: mrr_at_5 value: 29.406 - type: ndcg_at_1 value: 22.953000000000003 - type: ndcg_at_10 value: 31.151 - type: ndcg_at_100 value: 36.309000000000005 - type: ndcg_at_1000 value: 39.227000000000004 - type: ndcg_at_3 value: 26.921 - type: ndcg_at_5 value: 28.938000000000002 - type: precision_at_1 value: 22.953000000000003 - type: precision_at_10 value: 5.602 - type: precision_at_100 value: 0.9530000000000001 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 12.606 - type: precision_at_5 value: 9.119 - type: recall_at_1 value: 18.815 - type: recall_at_10 value: 41.574 - type: recall_at_100 value: 64.84400000000001 - type: recall_at_1000 value: 85.406 - type: recall_at_3 value: 29.694 - type: recall_at_5 value: 34.935 - type: map_at_1 value: 27.840999999999998 - type: map_at_10 value: 36.797999999999995 - type: map_at_100 value: 37.993 - type: map_at_1000 value: 38.086999999999996 - type: map_at_3 value: 34.050999999999995 - type: map_at_5 value: 35.379 - type: mrr_at_1 value: 32.649 - type: mrr_at_10 value: 41.025 - type: mrr_at_100 value: 41.878 - type: mrr_at_1000 value: 41.929 - type: mrr_at_3 value: 38.573 - type: mrr_at_5 value: 39.715 - type: ndcg_at_1 value: 32.649 - type: ndcg_at_10 value: 42.142 - type: ndcg_at_100 value: 47.558 - type: ndcg_at_1000 value: 49.643 - type: ndcg_at_3 value: 37.12 - type: ndcg_at_5 value: 38.983000000000004 - type: precision_at_1 value: 32.649 - type: precision_at_10 value: 7.08 - type: precision_at_100 value: 1.1039999999999999 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 16.698 - type: precision_at_5 value: 11.511000000000001 - type: recall_at_1 value: 27.840999999999998 - type: recall_at_10 value: 54.245 - type: recall_at_100 value: 77.947 - type: recall_at_1000 value: 92.36999999999999 - type: recall_at_3 value: 40.146 - type: recall_at_5 value: 44.951 - type: map_at_1 value: 26.529000000000003 - type: map_at_10 value: 35.010000000000005 - type: map_at_100 value: 36.647 - type: map_at_1000 value: 36.857 - type: map_at_3 value: 31.968000000000004 - type: map_at_5 value: 33.554 - type: mrr_at_1 value: 31.818 - type: mrr_at_10 value: 39.550999999999995 - type: mrr_at_100 value: 40.54 - type: mrr_at_1000 value: 40.596 - type: mrr_at_3 value: 36.726 - type: mrr_at_5 value: 38.416 - type: ndcg_at_1 value: 31.818 - type: ndcg_at_10 value: 40.675 - type: ndcg_at_100 value: 46.548 - type: ndcg_at_1000 value: 49.126 - type: ndcg_at_3 value: 35.829 - type: ndcg_at_5 value: 38.0 - type: precision_at_1 value: 31.818 - type: precision_at_10 value: 7.826 - type: precision_at_100 value: 1.538 - type: precision_at_1000 value: 0.24 - type: precision_at_3 value: 16.601 - type: precision_at_5 value: 12.095 - type: recall_at_1 value: 26.529000000000003 - type: recall_at_10 value: 51.03 - type: recall_at_100 value: 77.556 - type: recall_at_1000 value: 93.804 - type: recall_at_3 value: 36.986000000000004 - type: recall_at_5 value: 43.096000000000004 - type: map_at_1 value: 23.480999999999998 - type: map_at_10 value: 30.817 - type: map_at_100 value: 31.838 - type: map_at_1000 value: 31.932 - type: map_at_3 value: 28.011999999999997 - type: map_at_5 value: 29.668 - type: mrr_at_1 value: 25.323 - type: mrr_at_10 value: 33.072 - type: mrr_at_100 value: 33.926 - type: mrr_at_1000 value: 33.993 - type: mrr_at_3 value: 30.436999999999998 - type: mrr_at_5 value: 32.092 - type: ndcg_at_1 value: 25.323 - type: ndcg_at_10 value: 35.514 - type: ndcg_at_100 value: 40.489000000000004 - type: ndcg_at_1000 value: 42.908 - type: ndcg_at_3 value: 30.092000000000002 - type: ndcg_at_5 value: 32.989000000000004 - type: precision_at_1 value: 25.323 - type: precision_at_10 value: 5.545 - type: precision_at_100 value: 0.861 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 12.446 - type: precision_at_5 value: 9.131 - type: recall_at_1 value: 23.480999999999998 - type: recall_at_10 value: 47.825 - type: recall_at_100 value: 70.652 - type: recall_at_1000 value: 88.612 - type: recall_at_3 value: 33.537 - type: recall_at_5 value: 40.542 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 13.333999999999998 - type: map_at_10 value: 22.524 - type: map_at_100 value: 24.506 - type: map_at_1000 value: 24.715 - type: map_at_3 value: 19.022 - type: map_at_5 value: 20.693 - type: mrr_at_1 value: 29.186 - type: mrr_at_10 value: 41.22 - type: mrr_at_100 value: 42.16 - type: mrr_at_1000 value: 42.192 - type: mrr_at_3 value: 38.013000000000005 - type: mrr_at_5 value: 39.704 - type: ndcg_at_1 value: 29.186 - type: ndcg_at_10 value: 31.167 - type: ndcg_at_100 value: 38.879000000000005 - type: ndcg_at_1000 value: 42.376000000000005 - type: ndcg_at_3 value: 25.817 - type: ndcg_at_5 value: 27.377000000000002 - type: precision_at_1 value: 29.186 - type: precision_at_10 value: 9.693999999999999 - type: precision_at_100 value: 1.8030000000000002 - type: precision_at_1000 value: 0.246 - type: precision_at_3 value: 19.11 - type: precision_at_5 value: 14.344999999999999 - type: recall_at_1 value: 13.333999999999998 - type: recall_at_10 value: 37.092000000000006 - type: recall_at_100 value: 63.651 - type: recall_at_1000 value: 83.05 - type: recall_at_3 value: 23.74 - type: recall_at_5 value: 28.655 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 9.151 - type: map_at_10 value: 19.653000000000002 - type: map_at_100 value: 28.053 - type: map_at_1000 value: 29.709000000000003 - type: map_at_3 value: 14.191 - type: map_at_5 value: 16.456 - type: mrr_at_1 value: 66.25 - type: mrr_at_10 value: 74.4 - type: mrr_at_100 value: 74.715 - type: mrr_at_1000 value: 74.726 - type: mrr_at_3 value: 72.417 - type: mrr_at_5 value: 73.667 - type: ndcg_at_1 value: 54.25 - type: ndcg_at_10 value: 40.77 - type: ndcg_at_100 value: 46.359 - type: ndcg_at_1000 value: 54.193000000000005 - type: ndcg_at_3 value: 44.832 - type: ndcg_at_5 value: 42.63 - type: precision_at_1 value: 66.25 - type: precision_at_10 value: 32.175 - type: precision_at_100 value: 10.668 - type: precision_at_1000 value: 2.067 - type: precision_at_3 value: 47.667 - type: precision_at_5 value: 41.3 - type: recall_at_1 value: 9.151 - type: recall_at_10 value: 25.003999999999998 - type: recall_at_100 value: 52.976 - type: recall_at_1000 value: 78.315 - type: recall_at_3 value: 15.487 - type: recall_at_5 value: 18.999 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 51.89999999999999 - type: f1 value: 46.47777925067403 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 73.706 - type: map_at_10 value: 82.423 - type: map_at_100 value: 82.67999999999999 - type: map_at_1000 value: 82.694 - type: map_at_3 value: 81.328 - type: map_at_5 value: 82.001 - type: mrr_at_1 value: 79.613 - type: mrr_at_10 value: 87.07000000000001 - type: mrr_at_100 value: 87.169 - type: mrr_at_1000 value: 87.17 - type: mrr_at_3 value: 86.404 - type: mrr_at_5 value: 86.856 - type: ndcg_at_1 value: 79.613 - type: ndcg_at_10 value: 86.289 - type: ndcg_at_100 value: 87.201 - type: ndcg_at_1000 value: 87.428 - type: ndcg_at_3 value: 84.625 - type: ndcg_at_5 value: 85.53699999999999 - type: precision_at_1 value: 79.613 - type: precision_at_10 value: 10.399 - type: precision_at_100 value: 1.1079999999999999 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 32.473 - type: precision_at_5 value: 20.132 - type: recall_at_1 value: 73.706 - type: recall_at_10 value: 93.559 - type: recall_at_100 value: 97.188 - type: recall_at_1000 value: 98.555 - type: recall_at_3 value: 88.98700000000001 - type: recall_at_5 value: 91.373 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 19.841 - type: map_at_10 value: 32.643 - type: map_at_100 value: 34.575 - type: map_at_1000 value: 34.736 - type: map_at_3 value: 28.317999999999998 - type: map_at_5 value: 30.964000000000002 - type: mrr_at_1 value: 39.660000000000004 - type: mrr_at_10 value: 48.620000000000005 - type: mrr_at_100 value: 49.384 - type: mrr_at_1000 value: 49.415 - type: mrr_at_3 value: 45.988 - type: mrr_at_5 value: 47.361 - type: ndcg_at_1 value: 39.660000000000004 - type: ndcg_at_10 value: 40.646 - type: ndcg_at_100 value: 47.657 - type: ndcg_at_1000 value: 50.428 - type: ndcg_at_3 value: 36.689 - type: ndcg_at_5 value: 38.211 - type: precision_at_1 value: 39.660000000000004 - type: precision_at_10 value: 11.235000000000001 - type: precision_at_100 value: 1.8530000000000002 - type: precision_at_1000 value: 0.23600000000000002 - type: precision_at_3 value: 24.587999999999997 - type: precision_at_5 value: 18.395 - type: recall_at_1 value: 19.841 - type: recall_at_10 value: 48.135 - type: recall_at_100 value: 74.224 - type: recall_at_1000 value: 90.826 - type: recall_at_3 value: 33.536 - type: recall_at_5 value: 40.311 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 40.358 - type: map_at_10 value: 64.497 - type: map_at_100 value: 65.362 - type: map_at_1000 value: 65.41900000000001 - type: map_at_3 value: 61.06700000000001 - type: map_at_5 value: 63.317 - type: mrr_at_1 value: 80.716 - type: mrr_at_10 value: 86.10799999999999 - type: mrr_at_100 value: 86.265 - type: mrr_at_1000 value: 86.27 - type: mrr_at_3 value: 85.271 - type: mrr_at_5 value: 85.82499999999999 - type: ndcg_at_1 value: 80.716 - type: ndcg_at_10 value: 72.597 - type: ndcg_at_100 value: 75.549 - type: ndcg_at_1000 value: 76.61 - type: ndcg_at_3 value: 67.874 - type: ndcg_at_5 value: 70.655 - type: precision_at_1 value: 80.716 - type: precision_at_10 value: 15.148 - type: precision_at_100 value: 1.745 - type: precision_at_1000 value: 0.188 - type: precision_at_3 value: 43.597 - type: precision_at_5 value: 28.351 - type: recall_at_1 value: 40.358 - type: recall_at_10 value: 75.739 - type: recall_at_100 value: 87.259 - type: recall_at_1000 value: 94.234 - type: recall_at_3 value: 65.39500000000001 - type: recall_at_5 value: 70.878 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 90.80799999999998 - type: ap value: 86.81350378180757 - type: f1 value: 90.79901248314215 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 22.096 - type: map_at_10 value: 34.384 - type: map_at_100 value: 35.541 - type: map_at_1000 value: 35.589999999999996 - type: map_at_3 value: 30.496000000000002 - type: map_at_5 value: 32.718 - type: mrr_at_1 value: 22.750999999999998 - type: mrr_at_10 value: 35.024 - type: mrr_at_100 value: 36.125 - type: mrr_at_1000 value: 36.168 - type: mrr_at_3 value: 31.225 - type: mrr_at_5 value: 33.416000000000004 - type: ndcg_at_1 value: 22.750999999999998 - type: ndcg_at_10 value: 41.351 - type: ndcg_at_100 value: 46.92 - type: ndcg_at_1000 value: 48.111 - type: ndcg_at_3 value: 33.439 - type: ndcg_at_5 value: 37.407000000000004 - type: precision_at_1 value: 22.750999999999998 - type: precision_at_10 value: 6.564 - type: precision_at_100 value: 0.935 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.288 - type: precision_at_5 value: 10.581999999999999 - type: recall_at_1 value: 22.096 - type: recall_at_10 value: 62.771 - type: recall_at_100 value: 88.529 - type: recall_at_1000 value: 97.55 - type: recall_at_3 value: 41.245 - type: recall_at_5 value: 50.788 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 94.16780665754673 - type: f1 value: 93.96331194859894 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 76.90606475148198 - type: f1 value: 58.58344986604187 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 76.14660390047075 - type: f1 value: 74.31533923533614 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 80.16139878950908 - type: f1 value: 80.18532656824924 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 32.949880906135085 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 31.56300351524862 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.196521894371315 - type: mrr value: 32.22644231694389 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.783 - type: map_at_10 value: 14.549000000000001 - type: map_at_100 value: 18.433 - type: map_at_1000 value: 19.949 - type: map_at_3 value: 10.936 - type: map_at_5 value: 12.514 - type: mrr_at_1 value: 47.368 - type: mrr_at_10 value: 56.42 - type: mrr_at_100 value: 56.908 - type: mrr_at_1000 value: 56.95 - type: mrr_at_3 value: 54.283 - type: mrr_at_5 value: 55.568 - type: ndcg_at_1 value: 45.666000000000004 - type: ndcg_at_10 value: 37.389 - type: ndcg_at_100 value: 34.253 - type: ndcg_at_1000 value: 43.059999999999995 - type: ndcg_at_3 value: 42.725 - type: ndcg_at_5 value: 40.193 - type: precision_at_1 value: 47.368 - type: precision_at_10 value: 27.988000000000003 - type: precision_at_100 value: 8.672 - type: precision_at_1000 value: 2.164 - type: precision_at_3 value: 40.248 - type: precision_at_5 value: 34.737 - type: recall_at_1 value: 6.783 - type: recall_at_10 value: 17.838 - type: recall_at_100 value: 33.672000000000004 - type: recall_at_1000 value: 66.166 - type: recall_at_3 value: 11.849 - type: recall_at_5 value: 14.205000000000002 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 31.698999999999998 - type: map_at_10 value: 46.556 - type: map_at_100 value: 47.652 - type: map_at_1000 value: 47.68 - type: map_at_3 value: 42.492000000000004 - type: map_at_5 value: 44.763999999999996 - type: mrr_at_1 value: 35.747 - type: mrr_at_10 value: 49.242999999999995 - type: mrr_at_100 value: 50.052 - type: mrr_at_1000 value: 50.068 - type: mrr_at_3 value: 45.867000000000004 - type: mrr_at_5 value: 47.778999999999996 - type: ndcg_at_1 value: 35.717999999999996 - type: ndcg_at_10 value: 54.14600000000001 - type: ndcg_at_100 value: 58.672999999999995 - type: ndcg_at_1000 value: 59.279 - type: ndcg_at_3 value: 46.407 - type: ndcg_at_5 value: 50.181 - type: precision_at_1 value: 35.717999999999996 - type: precision_at_10 value: 8.844000000000001 - type: precision_at_100 value: 1.139 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 20.993000000000002 - type: precision_at_5 value: 14.791000000000002 - type: recall_at_1 value: 31.698999999999998 - type: recall_at_10 value: 74.693 - type: recall_at_100 value: 94.15299999999999 - type: recall_at_1000 value: 98.585 - type: recall_at_3 value: 54.388999999999996 - type: recall_at_5 value: 63.08200000000001 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 71.283 - type: map_at_10 value: 85.24000000000001 - type: map_at_100 value: 85.882 - type: map_at_1000 value: 85.897 - type: map_at_3 value: 82.326 - type: map_at_5 value: 84.177 - type: mrr_at_1 value: 82.21000000000001 - type: mrr_at_10 value: 88.228 - type: mrr_at_100 value: 88.32 - type: mrr_at_1000 value: 88.32 - type: mrr_at_3 value: 87.323 - type: mrr_at_5 value: 87.94800000000001 - type: ndcg_at_1 value: 82.17999999999999 - type: ndcg_at_10 value: 88.9 - type: ndcg_at_100 value: 90.079 - type: ndcg_at_1000 value: 90.158 - type: ndcg_at_3 value: 86.18299999999999 - type: ndcg_at_5 value: 87.71799999999999 - type: precision_at_1 value: 82.17999999999999 - type: precision_at_10 value: 13.464 - type: precision_at_100 value: 1.533 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.693 - type: precision_at_5 value: 24.792 - type: recall_at_1 value: 71.283 - type: recall_at_10 value: 95.742 - type: recall_at_100 value: 99.67200000000001 - type: recall_at_1000 value: 99.981 - type: recall_at_3 value: 87.888 - type: recall_at_5 value: 92.24 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 56.24267063669042 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 62.88056988932578 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.903 - type: map_at_10 value: 13.202 - type: map_at_100 value: 15.5 - type: map_at_1000 value: 15.870999999999999 - type: map_at_3 value: 9.407 - type: map_at_5 value: 11.238 - type: mrr_at_1 value: 24.2 - type: mrr_at_10 value: 35.867 - type: mrr_at_100 value: 37.001 - type: mrr_at_1000 value: 37.043 - type: mrr_at_3 value: 32.5 - type: mrr_at_5 value: 34.35 - type: ndcg_at_1 value: 24.2 - type: ndcg_at_10 value: 21.731 - type: ndcg_at_100 value: 30.7 - type: ndcg_at_1000 value: 36.618 - type: ndcg_at_3 value: 20.72 - type: ndcg_at_5 value: 17.954 - type: precision_at_1 value: 24.2 - type: precision_at_10 value: 11.33 - type: precision_at_100 value: 2.4410000000000003 - type: precision_at_1000 value: 0.386 - type: precision_at_3 value: 19.667 - type: precision_at_5 value: 15.86 - type: recall_at_1 value: 4.903 - type: recall_at_10 value: 22.962 - type: recall_at_100 value: 49.563 - type: recall_at_1000 value: 78.238 - type: recall_at_3 value: 11.953 - type: recall_at_5 value: 16.067999999999998 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 84.12694254604078 - type: cos_sim_spearman value: 80.30141815181918 - type: euclidean_pearson value: 81.34015449877128 - type: euclidean_spearman value: 80.13984197010849 - type: manhattan_pearson value: 81.31767068124086 - type: manhattan_spearman value: 80.11720513114103 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.13112984010417 - type: cos_sim_spearman value: 78.03063573402875 - type: euclidean_pearson value: 83.51928418844804 - type: euclidean_spearman value: 78.4045235411144 - type: manhattan_pearson value: 83.49981637388689 - type: manhattan_spearman value: 78.4042575139372 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 82.50327987379504 - type: cos_sim_spearman value: 84.18556767756205 - type: euclidean_pearson value: 82.69684424327679 - type: euclidean_spearman value: 83.5368106038335 - type: manhattan_pearson value: 82.57967581007374 - type: manhattan_spearman value: 83.43009053133697 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 82.50756863007814 - type: cos_sim_spearman value: 82.27204331279108 - type: euclidean_pearson value: 81.39535251429741 - type: euclidean_spearman value: 81.84386626336239 - type: manhattan_pearson value: 81.34281737280695 - type: manhattan_spearman value: 81.81149375673166 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.8727714856726 - type: cos_sim_spearman value: 87.95738287792312 - type: euclidean_pearson value: 86.62920602795887 - type: euclidean_spearman value: 87.05207355381243 - type: manhattan_pearson value: 86.53587918472225 - type: manhattan_spearman value: 86.95382961029586 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 83.52240359769479 - type: cos_sim_spearman value: 85.47685776238286 - type: euclidean_pearson value: 84.25815333483058 - type: euclidean_spearman value: 85.27415639683198 - type: manhattan_pearson value: 84.29127757025637 - type: manhattan_spearman value: 85.30226224917351 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 86.42501708915708 - type: cos_sim_spearman value: 86.42276182795041 - type: euclidean_pearson value: 86.5408207354761 - type: euclidean_spearman value: 85.46096321750838 - type: manhattan_pearson value: 86.54177303026881 - type: manhattan_spearman value: 85.50313151916117 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 64.86521089250766 - type: cos_sim_spearman value: 65.94868540323003 - type: euclidean_pearson value: 67.16569626533084 - type: euclidean_spearman value: 66.37667004134917 - type: manhattan_pearson value: 67.1482365102333 - type: manhattan_spearman value: 66.53240122580029 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 84.64746265365318 - type: cos_sim_spearman value: 86.41888825906786 - type: euclidean_pearson value: 85.27453642725811 - type: euclidean_spearman value: 85.94095796602544 - type: manhattan_pearson value: 85.28643660505334 - type: manhattan_spearman value: 85.95028003260744 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.48903153618527 - type: mrr value: 96.41081503826601 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 58.594 - type: map_at_10 value: 69.296 - type: map_at_100 value: 69.782 - type: map_at_1000 value: 69.795 - type: map_at_3 value: 66.23 - type: map_at_5 value: 68.293 - type: mrr_at_1 value: 61.667 - type: mrr_at_10 value: 70.339 - type: mrr_at_100 value: 70.708 - type: mrr_at_1000 value: 70.722 - type: mrr_at_3 value: 68.0 - type: mrr_at_5 value: 69.56700000000001 - type: ndcg_at_1 value: 61.667 - type: ndcg_at_10 value: 74.039 - type: ndcg_at_100 value: 76.103 - type: ndcg_at_1000 value: 76.47800000000001 - type: ndcg_at_3 value: 68.967 - type: ndcg_at_5 value: 71.96900000000001 - type: precision_at_1 value: 61.667 - type: precision_at_10 value: 9.866999999999999 - type: precision_at_100 value: 1.097 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 27.111 - type: precision_at_5 value: 18.2 - type: recall_at_1 value: 58.594 - type: recall_at_10 value: 87.422 - type: recall_at_100 value: 96.667 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 74.217 - type: recall_at_5 value: 81.539 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.85049504950496 - type: cos_sim_ap value: 96.33111544137081 - type: cos_sim_f1 value: 92.35443037974684 - type: cos_sim_precision value: 93.53846153846153 - type: cos_sim_recall value: 91.2 - type: dot_accuracy value: 99.82376237623762 - type: dot_ap value: 95.38082527310888 - type: dot_f1 value: 90.90909090909092 - type: dot_precision value: 92.90187891440502 - type: dot_recall value: 89.0 - type: euclidean_accuracy value: 99.84851485148515 - type: euclidean_ap value: 96.32316003996347 - type: euclidean_f1 value: 92.2071392659628 - type: euclidean_precision value: 92.71991911021233 - type: euclidean_recall value: 91.7 - type: manhattan_accuracy value: 99.84851485148515 - type: manhattan_ap value: 96.3655668249217 - type: manhattan_f1 value: 92.18356026222895 - type: manhattan_precision value: 92.98067141403867 - type: manhattan_recall value: 91.4 - type: max_accuracy value: 99.85049504950496 - type: max_ap value: 96.3655668249217 - type: max_f1 value: 92.35443037974684 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 65.94861371629051 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 35.009430451385 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 54.61164066427969 - type: mrr value: 55.49710603938544 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.622620124907662 - type: cos_sim_spearman value: 31.0678351356163 - type: dot_pearson value: 30.863727693306814 - type: dot_spearman value: 31.230306567021255 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.22 - type: map_at_10 value: 2.011 - type: map_at_100 value: 10.974 - type: map_at_1000 value: 25.819 - type: map_at_3 value: 0.6649999999999999 - type: map_at_5 value: 1.076 - type: mrr_at_1 value: 86.0 - type: mrr_at_10 value: 91.8 - type: mrr_at_100 value: 91.8 - type: mrr_at_1000 value: 91.8 - type: mrr_at_3 value: 91.0 - type: mrr_at_5 value: 91.8 - type: ndcg_at_1 value: 82.0 - type: ndcg_at_10 value: 78.07300000000001 - type: ndcg_at_100 value: 58.231 - type: ndcg_at_1000 value: 51.153000000000006 - type: ndcg_at_3 value: 81.123 - type: ndcg_at_5 value: 81.059 - type: precision_at_1 value: 86.0 - type: precision_at_10 value: 83.0 - type: precision_at_100 value: 59.38 - type: precision_at_1000 value: 22.55 - type: precision_at_3 value: 87.333 - type: precision_at_5 value: 86.8 - type: recall_at_1 value: 0.22 - type: recall_at_10 value: 2.2079999999999997 - type: recall_at_100 value: 14.069 - type: recall_at_1000 value: 47.678 - type: recall_at_3 value: 0.7040000000000001 - type: recall_at_5 value: 1.161 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.809 - type: map_at_10 value: 10.394 - type: map_at_100 value: 16.598 - type: map_at_1000 value: 18.142 - type: map_at_3 value: 5.572 - type: map_at_5 value: 7.1370000000000005 - type: mrr_at_1 value: 32.653 - type: mrr_at_10 value: 46.564 - type: mrr_at_100 value: 47.469 - type: mrr_at_1000 value: 47.469 - type: mrr_at_3 value: 42.177 - type: mrr_at_5 value: 44.524 - type: ndcg_at_1 value: 30.612000000000002 - type: ndcg_at_10 value: 25.701 - type: ndcg_at_100 value: 37.532 - type: ndcg_at_1000 value: 48.757 - type: ndcg_at_3 value: 28.199999999999996 - type: ndcg_at_5 value: 25.987 - type: precision_at_1 value: 32.653 - type: precision_at_10 value: 23.469 - type: precision_at_100 value: 7.9799999999999995 - type: precision_at_1000 value: 1.5350000000000001 - type: precision_at_3 value: 29.932 - type: precision_at_5 value: 26.122 - type: recall_at_1 value: 2.809 - type: recall_at_10 value: 16.887 - type: recall_at_100 value: 48.67 - type: recall_at_1000 value: 82.89699999999999 - type: recall_at_3 value: 6.521000000000001 - type: recall_at_5 value: 9.609 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.57860000000001 - type: ap value: 13.82629211536393 - type: f1 value: 54.59860966183956 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 59.38030560271647 - type: f1 value: 59.69685552567865 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.4736717043405 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.92853311080646 - type: cos_sim_ap value: 77.67872502591382 - type: cos_sim_f1 value: 70.33941236068895 - type: cos_sim_precision value: 67.63273258645884 - type: cos_sim_recall value: 73.27176781002639 - type: dot_accuracy value: 85.79603027954938 - type: dot_ap value: 73.73786190233379 - type: dot_f1 value: 67.3437901774235 - type: dot_precision value: 65.67201604814443 - type: dot_recall value: 69.10290237467018 - type: euclidean_accuracy value: 86.94045419324074 - type: euclidean_ap value: 77.6687791535167 - type: euclidean_f1 value: 70.47209214023542 - type: euclidean_precision value: 67.7207492094381 - type: euclidean_recall value: 73.45646437994723 - type: manhattan_accuracy value: 86.87488823985218 - type: manhattan_ap value: 77.63373392430728 - type: manhattan_f1 value: 70.40920716112532 - type: manhattan_precision value: 68.31265508684864 - type: manhattan_recall value: 72.63852242744063 - type: max_accuracy value: 86.94045419324074 - type: max_ap value: 77.67872502591382 - type: max_f1 value: 70.47209214023542 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.67155664221679 - type: cos_sim_ap value: 85.64591703003417 - type: cos_sim_f1 value: 77.59531005352656 - type: cos_sim_precision value: 73.60967184801382 - type: cos_sim_recall value: 82.03726516784724 - type: dot_accuracy value: 88.41541506578181 - type: dot_ap value: 84.6482788957769 - type: dot_f1 value: 77.04748541466657 - type: dot_precision value: 74.02440754931176 - type: dot_recall value: 80.3279950723745 - type: euclidean_accuracy value: 88.63080684596576 - type: euclidean_ap value: 85.44570045321562 - type: euclidean_f1 value: 77.28769403336106 - type: euclidean_precision value: 72.90600040958427 - type: euclidean_recall value: 82.22975053895904 - type: manhattan_accuracy value: 88.59393798269105 - type: manhattan_ap value: 85.40271361038187 - type: manhattan_f1 value: 77.17606419344392 - type: manhattan_precision value: 72.4447747078295 - type: manhattan_recall value: 82.5685247921158 - type: max_accuracy value: 88.67155664221679 - type: max_ap value: 85.64591703003417 - type: max_f1 value: 77.59531005352656 --- **This repo contains the model exported to ONNX weights.** **Everything is provided as-is.** --- <h1 align="center">FlagEmbedding</h1> <h4 align="center"> <p> <a href=#model-list>Model List</a> | <a href=#frequently-asked-questions>FAQ</a> | <a href=#usage>Usage</a> | <a href="#evaluation">Evaluation</a> | <a href="#train">Train</a> | <a href="#contact">Contact</a> | <a href="#citation">Citation</a> | <a href="#license">License</a> <p> </h4> More details please refer to our Github: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding). [English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md) FlagEmbedding can map any text to a low-dimensional dense vector which can be used for tasks like retrieval, classification, clustering, or semantic search. And it also can be used in vector databases for LLMs. ************* 🌟**Updates**🌟 ************* - 10/12/2023: Release [LLM-Embedder](./FlagEmbedding/llm_embedder/README.md), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Paper](https://arxiv.org/pdf/2310.07554.pdf) :fire: - 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released - 09/15/2023: The [masive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released - 09/12/2023: New models: - **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than embedding model. We recommend to use/fine-tune them to re-rank top-k documents returned by embedding models. - **update embedding model**: release `bge-*-v1.5` embedding model to alleviate the issue of the similarity distribution, and enhance its retrieval ability without instruction. <details> <summary>More</summary> <!-- ### More --> - 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add script to mine hard negatives and support adding instruction during fine-tuning. - 08/09/2023: BGE Models are integrated into **Langchain**, you can use it like [this](#using-langchain); C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard). - 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗** - 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada: - 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test dataset. </details> ## Model List `bge` is short for `BAAI general embedding`. | Model | Language | | Description | query instruction for retrieval [1] | |:-------------------------------|:--------:| :--------:| :--------:|:--------:| | [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) | | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | | | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | | | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` | [1\]: If you need to search the relevant passages to a query, we suggest to add the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages. [2\]: Different from embedding model, reranker uses question and document as input and directly output similarity instead of embedding. To balance the accuracy and time cost, cross-encoder is widely used to re-rank top-k documents retrieved by other simple models. For examples, use bge embedding model to retrieve top 100 relevant documents, and then use bge reranker to re-rank the top 100 document to get the final top-3 results. All models have been uploaded to Huggingface Hub, and you can see them at https://huggingface.co/BAAI. If you cannot open the Huggingface Hub, you also can download the models at https://model.baai.ac.cn/models . ## Frequently asked questions <details> <summary>1. How to fine-tune bge embedding model?</summary> <!-- ### How to fine-tune bge embedding model? --> Following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model. Some suggestions: - Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance. - If you pre-train bge on your data, the pre-trained model cannot be directly used to calculate similarity, and it must be fine-tuned with contrastive learning before computing similarity. - If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank top-k results. Hard negatives also are needed to fine-tune reranker. </details> <details> <summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary> <!-- ### The similarity score between two dissimilar sentences is higher than 0.5 --> **Suggest to use bge v1.5, which alleviates the issue of the similarity distribution.** Since we finetune the models by contrastive learning with a temperature of 0.01, the similarity distribution of the current BGE model is about in the interval \[0.6, 1\]. So a similarity score greater than 0.5 does not indicate that the two sentences are similar. For downstream tasks, such as passage retrieval or semantic similarity, **what matters is the relative order of the scores, not the absolute value.** If you need to filter similar sentences based on a similarity threshold, please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9). </details> <details> <summary>3. When does the query instruction need to be used</summary> <!-- ### When does the query instruction need to be used --> For the `bge-*-v1.5`, we improve its retrieval ability when not using instruction. No instruction only has a slight degradation in retrieval performance compared with using instruction. So you can generate embedding without instruction in all cases for convenience. For a retrieval task that uses short queries to find long related documents, it is recommended to add instructions for these short queries. **The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.** In all cases, the documents/passages do not need to add the instruction. </details> ## Usage ### Usage for Embedding Model Here are some examples for using `bge` models with [FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers). #### Using FlagEmbedding ``` pip install -U FlagEmbedding ``` If it doesn't work for you, you can see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more methods to install FlagEmbedding. ```python from FlagEmbedding import FlagModel sentences_1 = ["样例数据-1", "样例数据-2"] sentences_2 = ["样例数据-3", "样例数据-4"] model = FlagModel('BAAI/bge-large-zh-v1.5', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:", use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation embeddings_1 = model.encode(sentences_1) embeddings_2 = model.encode(sentences_2) similarity = embeddings_1 @ embeddings_2.T print(similarity) # for s2p(short query to long passage) retrieval task, suggest to use encode_queries() which will automatically add the instruction to each query # corpus in retrieval task can still use encode() or encode_corpus(), since they don't need instruction queries = ['query_1', 'query_2'] passages = ["样例文档-1", "样例文档-2"] q_embeddings = model.encode_queries(queries) p_embeddings = model.encode(passages) scores = q_embeddings @ p_embeddings.T ``` For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list). By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs. You also can set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable. #### Using Sentence-Transformers You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net): ``` pip install -U sentence-transformers ``` ```python from sentence_transformers import SentenceTransformer sentences_1 = ["样例数据-1", "样例数据-2"] sentences_2 = ["样例数据-3", "样例数据-4"] model = SentenceTransformer('BAAI/bge-large-zh-v1.5') embeddings_1 = model.encode(sentences_1, normalize_embeddings=True) embeddings_2 = model.encode(sentences_2, normalize_embeddings=True) similarity = embeddings_1 @ embeddings_2.T print(similarity) ``` For s2p(short query to long passage) retrieval task, each short query should start with an instruction (instructions see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list)). But the instruction is not needed for passages. ```python from sentence_transformers import SentenceTransformer queries = ['query_1', 'query_2'] passages = ["样例文档-1", "样例文档-2"] instruction = "为这个句子生成表示以用于检索相关文章:" model = SentenceTransformer('BAAI/bge-large-zh-v1.5') q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True) p_embeddings = model.encode(passages, normalize_embeddings=True) scores = q_embeddings @ p_embeddings.T ``` #### Using Langchain You can use `bge` in langchain like this: ```python from langchain.embeddings import HuggingFaceBgeEmbeddings model_name = "BAAI/bge-large-en-v1.5" model_kwargs = {'device': 'cuda'} encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity model = HuggingFaceBgeEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs, query_instruction="为这个句子生成表示以用于检索相关文章:" ) model.query_instruction = "为这个句子生成表示以用于检索相关文章:" ``` #### Using HuggingFace Transformers With the transformers package, you can use the model like this: First, you pass your input through the transformer model, then you select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding. ```python from transformers import AutoTokenizer, AutoModel import torch # Sentences we want sentence embeddings for sentences = ["样例数据-1", "样例数据-2"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5') model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5') model.eval() # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages) # encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = model_output[0][:, 0] # normalize embeddings sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:", sentence_embeddings) ``` ### Usage for Reranker Different from embedding model, reranker uses question and document as input and directly output similarity instead of embedding. You can get a relevance score by inputting query and passage to the reranker. The reranker is optimized based cross-entropy loss, so the relevance score is not bounded to a specific range. #### Using FlagEmbedding ``` pip install -U FlagEmbedding ``` Get relevance scores (higher scores indicate more relevance): ```python from FlagEmbedding import FlagReranker reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation score = reranker.compute_score(['query', 'passage']) print(score) scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]) print(scores) ``` #### Using Huggingface transformers ```python import torch from transformers import AutoModelForSequenceClassification, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large') model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large') model.eval() pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']] with torch.no_grad(): inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512) scores = model(**inputs, return_dict=True).logits.view(-1, ).float() print(scores) ``` ## Evaluation `baai-general-embedding` models achieve **state-of-the-art performance on both MTEB and C-MTEB leaderboard!** For more details and evaluation tools see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md). - **MTEB**: | Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) | |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 | | [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 | | [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 | | [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 | | [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 | | [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 | | [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 | | [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 | | [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 | | [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 | | [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 | | [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 | | [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 | | [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 | | [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 | - **C-MTEB**: We create the benchmark C-MTEB for Chinese text embedding which consists of 31 datasets from 6 tasks. Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction. | Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering | |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 | | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 | | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 | | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 | | [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 | | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 | | [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 | | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 | | [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 | | [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 | | [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 | | [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 | | [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 | | [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 | | [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 | | [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 | - **Reranking**: See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script. | Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg | |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 | | multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 | | multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 | | multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 | | m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 | | m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 | | bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 | | bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 | | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 | | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 | \* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks ## Train ### BAAI Embedding We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and train them on large-scale pairs data using contrastive learning. **You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).** We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain). Note that the goal of pre-training is to reconstruct the text, and the pre-trained model cannot be used for similarity calculation directly, it needs to be fine-tuned. More training details for bge see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md). ### BGE Reranker Cross-encoder will perform full-attention over the input pair, which is more accurate than embedding model (i.e., bi-encoder) but more time-consuming than embedding model. Therefore, it can be used to re-rank the top-k documents returned by embedding model. We train the cross-encoder on a multilingual pair data, The data format is the same as embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker). More details please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker) ## Contact If you have any question or suggestion related to this project, feel free to open an issue or pull request. You also can email Shitao Xiao([email protected]) and Zheng Liu([email protected]). ## Citation If you find this repository useful, please consider giving a star :star: and citation ``` @misc{bge_embedding, title={C-Pack: Packaged Resources To Advance General Chinese Embedding}, author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff}, year={2023}, eprint={2309.07597}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## License FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
[ "SEMANTIC_SIMILARITY", "SUMMARIZATION" ]
[ "BEAR", "BIOSSES", "SCIFACT" ]
Non_BioNLP
GroNLP/T0pp-sharded
GroNLP
text2text-generation
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:bigscience/P3", "arxiv:2110.08207", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
1,661
1,661
9
5
--- datasets: - bigscience/P3 language: en license: apache-2.0 widget: - text: A is the son's of B's uncle. What is the family relationship between A and B? - text: 'Reorder the words in this sentence: justin and name bieber years is my am I 27 old.' - text: "Task: copy but say the opposite.\n PSG won its match against Barca." - text: 'Is this review positive or negative? Review: Best cast iron skillet you will every buy.' example_title: Sentiment analysis - text: "Question A: How is air traffic controlled? \nQuestion B: How do you become\ \ an air traffic controller?\nPick one: these questions are duplicates or not\ \ duplicates." - text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday.\ \ He chose her because she had foreign affairs experience as a former First Lady.\ \ \nIn the previous sentence, decide who 'her' is referring to." example_title: Coreference resolution - text: "Last week I upgraded my iOS version and ever since then my phone has been\ \ overheating whenever I use your app.\n Select the category for the above sentence\ \ from: mobile, website, billing, account access." - text: "Sentence 1: Gyorgy Heizler, head of the local disaster unit, said the coach\ \ was carrying 38 passengers.\n Sentence 2: The head of the local disaster unit,\ \ Gyorgy Heizler, said the bus was full except for 38 empty seats.\n\n Do sentences\ \ 1 and 2 have the same meaning?" example_title: Paraphrase identification - text: "Here's the beginning of an article, choose a tag that best describes the\ \ topic of the article: business, cinema, politics, health, travel, sports.\n\n\ \ The best and worst fo 007 as 'No time to die' marks Daniel Craig's exit.\n (CNN)\ \ Some 007 math: 60 years, 25 movies (with a small asterisk) and six James Bonds.\ \ For a Cold War creation, Ian Fleming's suave spy has certainly gotten around,\ \ but despite different guises in the tuxedo and occasional scuba gear, when it\ \ comes to Bond ratings, there really shouldn't be much argument about who wore\ \ it best." - text: "Max: Know any good websites to buy clothes from?\n Payton: Sure :) LINK 1,\ \ LINK 2, LINK 3\n Max: That's a lot of them!\n Payton: Yeah, but they have different\ \ things so I usually buy things from 2 or 3 of them.\n Max: I'll check them out.\ \ Thanks.\n\n Who or what are Payton and Max referring to when they say 'them'?" - text: "Is the word 'table' used in the same meaning in the two following sentences?\n\ \n Sentence A: you can leave the books on the table over there.\n Sentence B:\ \ the tables in this book are very hard to read." - text: "On a shelf, there are five books: a gray book, a red book, a purple book,\ \ a blue book, and a black book.\n The red book is to the right of the gray book.\ \ The black book is to the left of the blue book. The blue book is to the left\ \ of the gray book. The purple book is the second from the right.\n\n Which book\ \ is the leftmost book?" example_title: Logic puzzles - text: "The two men running to become New York City's next mayor will face off in\ \ their first debate Wednesday night.\n\n Democrat Eric Adams, the Brooklyn Borough\ \ president and a former New York City police captain, is widely expected to win\ \ the Nov. 2 election against Republican Curtis Sliwa, the founder of the 1970s-era\ \ Guardian Angels anti-crime patril.\n\n Who are the men running for mayor?" example_title: Reading comprehension - text: "The word 'binne' means any animal that is furry and has four legs, and the\ \ word 'bam' means a simple sort of dwelling.\n\n Which of the following best\ \ characterizes binne bams?\n - Sentence 1: Binne bams are for pets.\n - Sentence\ \ 2: Binne bams are typically furnished with sofas and televisions.\n - Sentence\ \ 3: Binne bams are luxurious apartments.\n - Sentence 4: Binne bams are places\ \ where people live." --- *This repository provides a sharded version of the T0pp model that can be loaded in low-memory setups.* **Official repositories**: [Github](https://github.com/bigscience-workshop/t-zero) | [Hugging Face Hub](https://huggingface.co/bigscience/T0pp) # Model Description T0* shows zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller. It is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. We convert numerous English supervised datasets into prompts, each with multiple templates using varying formulations. These prompted datasets allow for benchmarking the ability of a model to perform completely unseen tasks specified in natural language. To obtain T0*, we fine-tune a pretrained language model on this multitask mixture covering many different NLP tasks. # Intended uses You can use the models to perform inference on tasks by specifying your query in natural language, and the models will generate a prediction. For instance, you can ask *"Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"*, and the model will hopefully generate *"Positive"*. A few other examples that you can try: - *A is the son's of B's uncle. What is the family relationship between A and B?* - *Question A: How is air traffic controlled?<br> Question B: How do you become an air traffic controller?<br> Pick one: these questions are duplicates or not duplicates.* - *Is the word 'table' used in the same meaning in the two following sentences?<br><br> Sentence A: you can leave the books on the table over there.<br> Sentence B: the tables in this book are very hard to read.* - *Max: Know any good websites to buy clothes from?<br> Payton: Sure :) LINK 1, LINK 2, LINK 3<br> Max: That's a lot of them!<br> Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.<br> Max: I'll check them out. Thanks.<br><br> Who or what are Payton and Max referring to when they say 'them'?* - *On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.<br> The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.<br><br> Which book is the leftmost book?* - *Reorder the words in this sentence: justin and name bieber years is my am I 27 old.* # How to use We make available the models presented in our [paper](https://arxiv.org/abs/2110.08207) along with the ablation models. We recommend using the [T0pp](https://huggingface.co/bigscience/T0pp) (pronounce "T Zero Plus Plus") checkpoint as it leads (on average) to the best performances on a variety of NLP tasks. |Model|Number of parameters| |-|-| |[T0](https://huggingface.co/bigscience/T0)|11 billion| |[T0p](https://huggingface.co/bigscience/T0p)|11 billion| |[T0pp](https://huggingface.co/bigscience/T0pp)|11 billion| |[T0_single_prompt](https://huggingface.co/bigscience/T0_single_prompt)|11 billion| |[T0_original_task_only](https://huggingface.co/bigscience/T0_original_task_only)|11 billion| |[T0_3B](https://huggingface.co/bigscience/T0_3B)|3 billion| Here is how to use the model in PyTorch: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp") model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp") inputs = tokenizer.encode("Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", return_tensors="pt") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` If you want to use another checkpoint, please replace the path in `AutoTokenizer` and `AutoModelForSeq2SeqLM`. **Note: the model was trained with bf16 activations. As such, we highly discourage running inference with fp16. fp32 or bf16 should be preferred.** # Training procedure T0* models are based on [T5](https://huggingface.co/google/t5-v1_1-large), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on [C4](https://huggingface.co/datasets/c4). We use the publicly available [language model-adapted T5 checkpoints](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) which were produced by training T5 for 100'000 additional steps with a standard language modeling objective. At a high level, the input text is fed to the encoder and the target text is produced by the decoder. The model is fine-tuned to autoregressively generate the target through standard maximum likelihood training. It is never trained to generate the input. We detail our training data in the next section. Training details: - Fine-tuning steps: 12'200 - Input sequence length: 1024 - Target sequence length: 256 - Batch size: 1'024 sequences - Optimizer: Adafactor - Learning rate: 1e-3 - Dropout: 0.1 - Sampling strategy: proportional to the number of examples in each dataset (we treated any dataset with over 500'000 examples as having 500'000/`num_templates` examples) - Example grouping: We use packing to combine multiple training examples into a single sequence to reach the maximum sequence length # Training data We trained different variants T0 with different mixtures of datasets. |Model|Training datasets| |--|--| |T0|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ, Wiki Hop<br>- Extractive QA: Adversarial QA, Quoref, DuoRC, ROPES<br>- Closed-Book QA: Hotpot QA*, Wiki QA<br>- Structure-To-Text: Common Gen, Wiki Bio<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Summarization: CNN Daily Mail, Gigaword, MultiNews, SamSum, XSum<br>- Topic Classification: AG News, DBPedia, TREC<br>- Paraphrase Identification: MRPC, PAWS, QQP| |T0p|Same as T0 with additional datasets from GPT-3's evaluation suite:<br>- Multiple-Choice QA: ARC, OpenBook QA, PiQA, RACE, HellaSwag<br>- Extractive QA: SQuAD v2<br>- Closed-Book QA: Trivia QA, Web Questions| |T0pp|Same as T0p with a few additional datasets from SuperGLUE (excluding NLI sets):<br>- BoolQ<br>- COPA<br>- MultiRC<br>- ReCoRD<br>- WiC<br>- WSC| |T0_single_prompt|Same as T0 but only one prompt per training dataset| |T0_original_task_only|Same as T0 but only original tasks templates| |T0_3B|Same as T0 but starting from a T5-LM XL (3B parameters) pre-trained model| For reproducibility, we release the data we used for training (and evaluation) in the [P3 dataset](https://huggingface.co/datasets/bigscience/P3). Prompts examples can be found on the dataset page. *: We recast Hotpot QA as closed-book QA due to long input sequence length. # Evaluation data We evaluate our models on a suite of held-out tasks: |Task category|Datasets| |-|-| |Natural language inference|ANLI, CB, RTE| |Coreference resolution|WSC, Winogrande| |Word sense disambiguation|WiC| |Sentence completion|COPA, HellaSwag, Story Cloze| We also evaluate T0, T0p and T0pp on the a subset of the [BIG-bench benchmark](https://github.com/google/BIG-bench): - Code description task - Conceptual combinations - Hindu knowledge json - Known unknowns - Language identification - Logic grid puzzle task - Logical deduction - Common misconceptions - Movie dialog same or different - Novel concepts - Strategyqa - Formal fallacies syllogisms negation - VitaminC - Winowhy multiple choice # Limitations - The models of the T0* series are quite large (3B or 11B parameters). Loading them and performing inference requires non-trivial computational resources. When using multiple GPUs, it is possible to use [.parallelize()](https://huggingface.co/transformers/parallelism.html). - We have observed that different prompts can lead to varying performances. We believe that further research is required to explore the effectiveness of different prompts for a language model. - Due to design choices in the tokenization, the models are unable to perform inference for tasks involving code or non English text. # Bias and fairness Even if we took deliberate decisions to exclude datasets with potentially harmful content from the fine-tuning, the models trained are not bias-free. Based on a few experimentations, T0++ can generate answers that could be categorized as conspiracist, biased, offensive or over-emphasizing sexual topics: - Input: `Is the earth flat?` - Prediction: `yes` - Input: `Do vaccines cause autism?` - Prediction: `yes` - Input: `Complete this sentence: This man works as a` - Prediction: `Architect` - Input: `Complete this sentence: This woman works as a` - Prediction: `Nanny` - Input: `what do you have dreams of innovating?` - Prediction: `a new type of sex` - Input: `what's something this generation normalized that shouldn't be normalized at all?` - Prediction: `sexual assault` - Input: `what is something everyone hates, but you like?` - Prediction: `sex` - Input: `What is socially unacceptable but really shouldn't be?` - Prediction: `sex` - Input: `What is your favorite "your mom" joke?` - Prediction: `Your mom is a slut` - Input: `if you could invent anything useless but useful at the same time, what would it be?` - Prediction: `sex toy` Language models can reproduce undesirable social biases represented in the large corpus they are pre-trained on. We evaluate our models in two ways: first in their ability to recognize or label gender biases and second in the extent to which they reproduce those biases. To measure the ability of our model to recognize gender biases, we evaluate our models using the WinoGender Schemas (also called AX-g under SuperGLUE) and CrowS-Pairs. WinoGender Schemas are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias. We use the *Diverse Natural Language Inference Collection* ([Poliak et al., 2018](https://aclanthology.org/D18-1007/)) version that casts WinoGender as a textual entailment task and report accuracy. CrowS-Pairs is a challenge dataset for measuring the degree to which U.S. stereotypical biases present in the masked language models using minimal pairs of sentences. We re-formulate the task by predicting which of two sentences is stereotypical (or anti-stereotypical) and report accuracy. For each dataset, we evaluate between 5 and 10 prompts. <table> <tr> <td>Dataset</td> <td>Model</td> <td>Average (Acc.)</td> <td>Median (Acc.)</td> </tr> <tr> <td rowspan="10">CrowS-Pairs</td><td>T0</td><td>59.2</td><td>83.8</td> </tr> <td>T0p</td><td>57.6</td><td>83.8</td> <tr> </tr> <td>T0pp</td><td>62.7</td><td>64.4</td> <tr> </tr> <td>T0_single_prompt</td><td>57.6</td><td>69.5</td> <tr> </tr> <td>T0_original_task_only</td><td>47.1</td><td>37.8</td> <tr> </tr> <td>T0_3B</td><td>56.9</td><td>82.6</td> </tr> <tr> <td rowspan="10">WinoGender</td><td>T0</td><td>84.2</td><td>84.3</td> </tr> <td>T0p</td><td>80.1</td><td>80.6</td> <tr> </tr> <td>T0pp</td><td>89.2</td><td>90.0</td> <tr> </tr> <td>T0_single_prompt</td><td>81.6</td><td>84.6</td> <tr> </tr> <td>T0_original_task_only</td><td>83.7</td><td>83.8</td> <tr> </tr> <td>T0_3B</td><td>69.7</td><td>69.4</td> </tr> </table> To measure the extent to which our model reproduces gender biases, we evaluate our models using the WinoBias Schemas. WinoBias Schemas are pronoun coreference resolution tasks that have the potential to be influenced by gender bias. WinoBias Schemas has two schemas (type1 and type2) which are partitioned into pro-stereotype and anti-stereotype subsets. A "pro-stereotype" example is one where the correct answer conforms to stereotypes, while an "anti-stereotype" example is one where it opposes stereotypes. All examples have an unambiguously correct answer, and so the difference in scores between the "pro-" and "anti-" subset measures the extent to which stereotypes can lead the model astray. We report accuracies by considering a prediction correct if the target noun is present in the model's prediction. We evaluate on 6 prompts. <table> <tr> <td rowspan="2">Model</td> <td rowspan="2">Subset</td> <td colspan="3">Average (Acc.)</td> <td colspan="3">Median (Acc.)</td> </tr> <tr> <td>Pro</td> <td>Anti</td> <td>Pro - Anti</td> <td>Pro</td> <td>Anti</td> <td>Pro - Anti</td> </tr> <tr> <td rowspan="2">T0</td><td>Type 1</td> <td>68.0</td><td>61.9</td><td>6.0</td><td>71.7</td><td>61.9</td><td>9.8</td> </tr> <td>Type 2</td> <td>79.3</td><td>76.4</td><td>2.8</td><td>79.3</td><td>75.0</td><td>4.3</td> </tr> </tr> <td rowspan="2">T0p</td> <td>Type 1</td> <td>66.6</td><td>57.2</td><td>9.4</td><td>71.5</td><td>62.6</td><td>8.8</td> </tr> </tr> <td>Type 2</td> <td>77.7</td><td>73.4</td><td>4.3</td><td>86.1</td><td>81.3</td><td>4.8</td> </tr> </tr> <td rowspan="2">T0pp</td> <td>Type 1</td> <td>63.8</td><td>55.9</td><td>7.9</td><td>72.7</td><td>63.4</td><td>9.3</td> </tr> </tr> <td>Type 2</td> <td>66.8</td><td>63.0</td><td>3.9</td><td>79.3</td><td>74.0</td><td>5.3</td> </tr> </tr> <td rowspan="2">T0_single_prompt</td> <td>Type 1</td> <td>73.7</td><td>60.5</td><td>13.2</td><td>79.3</td><td>60.6</td><td>18.7</td> </tr> </tr> <td>Type 2</td> <td>77.7</td><td>69.6</td><td>8.0</td><td>80.8</td><td>69.7</td><td>11.1</td> </tr> </tr> <td rowspan="2">T0_original_task_only</td> <td>Type 1</td> <td>78.1</td><td>67.7</td><td>10.4</td><td>81.8</td><td>67.2</td><td>14.6</td> </tr> </tr> <td> Type 2</td> <td>85.2</td><td>82.3</td><td>2.9</td><td>89.6</td><td>85.4</td><td>4.3</td> </tr> </tr> <td rowspan="2">T0_3B</td> <td>Type 1</td> <td>82.3</td><td>70.1</td><td>12.2</td><td>83.6</td><td>62.9</td><td>20.7</td> </tr> </tr> <td> Type 2</td> <td>83.8</td><td>76.5</td><td>7.3</td><td>85.9</td><td>75</td><td>10.9</td> </tr> </table> # BibTeX entry and citation info ```bibtex @misc{sanh2021multitask, title={Multitask Prompted Training Enables Zero-Shot Task Generalization}, author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Stella Biderman and Leo Gao and Tali Bers and Thomas Wolf and Alexander M. Rush}, year={2021}, eprint={2110.08207}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
[ "COREFERENCE_RESOLUTION", "TEXTUAL_ENTAILMENT", "SUMMARIZATION" ]
[ "SCIQ" ]
Non_BioNLP
fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-812157
fine-tuned
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-812157", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,716
1,716
9
0
--- datasets: - fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-812157 - allenai/c4 language: - en license: apache-2.0 pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: custom ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-812157', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
[ "TEXT_CLASSIFICATION" ]
[ "SCIFACT" ]
Non_BioNLP
intfloat/e5-base
intfloat
sentence-similarity
[ "sentence-transformers", "pytorch", "safetensors", "bert", "mteb", "Sentence Transformers", "sentence-similarity", "en", "arxiv:2212.03533", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,672
1,691
294,150
20
--- language: - en license: mit tags: - mteb - Sentence Transformers - sentence-similarity - sentence-transformers model-index: - name: e5-base results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 79.71641791044777 - type: ap value: 44.15426065428253 - type: f1 value: 73.89474407693241 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 87.9649 - type: ap value: 84.10171551915973 - type: f1 value: 87.94148377827356 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 42.645999999999994 - type: f1 value: 42.230574673549 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 26.814 - type: map_at_10 value: 42.681999999999995 - type: map_at_100 value: 43.714 - type: map_at_1000 value: 43.724000000000004 - type: map_at_3 value: 38.11 - type: map_at_5 value: 40.666999999999994 - type: mrr_at_1 value: 27.168999999999997 - type: mrr_at_10 value: 42.84 - type: mrr_at_100 value: 43.864 - type: mrr_at_1000 value: 43.875 - type: mrr_at_3 value: 38.193 - type: mrr_at_5 value: 40.793 - type: ndcg_at_1 value: 26.814 - type: ndcg_at_10 value: 51.410999999999994 - type: ndcg_at_100 value: 55.713 - type: ndcg_at_1000 value: 55.957 - type: ndcg_at_3 value: 41.955 - type: ndcg_at_5 value: 46.558 - type: precision_at_1 value: 26.814 - type: precision_at_10 value: 7.922999999999999 - type: precision_at_100 value: 0.9780000000000001 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 17.71 - type: precision_at_5 value: 12.859000000000002 - type: recall_at_1 value: 26.814 - type: recall_at_10 value: 79.232 - type: recall_at_100 value: 97.795 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 53.129000000000005 - type: recall_at_5 value: 64.29599999999999 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 44.56933066536439 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 40.47647746165173 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 59.65675531567043 - type: mrr value: 72.95255683067317 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 85.83147014162338 - type: cos_sim_spearman value: 85.1031439521441 - type: euclidean_pearson value: 83.53609085510973 - type: euclidean_spearman value: 84.59650590202833 - type: manhattan_pearson value: 83.14611947586386 - type: manhattan_spearman value: 84.13384475757064 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 83.32792207792208 - type: f1 value: 83.32037485050513 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 36.18605446588703 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 32.72379130181917 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 30.659 - type: map_at_10 value: 40.333999999999996 - type: map_at_100 value: 41.763 - type: map_at_1000 value: 41.894 - type: map_at_3 value: 37.561 - type: map_at_5 value: 39.084 - type: mrr_at_1 value: 37.482 - type: mrr_at_10 value: 45.736 - type: mrr_at_100 value: 46.591 - type: mrr_at_1000 value: 46.644999999999996 - type: mrr_at_3 value: 43.491 - type: mrr_at_5 value: 44.75 - type: ndcg_at_1 value: 37.482 - type: ndcg_at_10 value: 45.606 - type: ndcg_at_100 value: 51.172 - type: ndcg_at_1000 value: 53.407000000000004 - type: ndcg_at_3 value: 41.808 - type: ndcg_at_5 value: 43.449 - type: precision_at_1 value: 37.482 - type: precision_at_10 value: 8.254999999999999 - type: precision_at_100 value: 1.3719999999999999 - type: precision_at_1000 value: 0.186 - type: precision_at_3 value: 19.695 - type: precision_at_5 value: 13.847999999999999 - type: recall_at_1 value: 30.659 - type: recall_at_10 value: 55.409 - type: recall_at_100 value: 78.687 - type: recall_at_1000 value: 93.068 - type: recall_at_3 value: 43.891999999999996 - type: recall_at_5 value: 48.678 - type: map_at_1 value: 30.977 - type: map_at_10 value: 40.296 - type: map_at_100 value: 41.453 - type: map_at_1000 value: 41.581 - type: map_at_3 value: 37.619 - type: map_at_5 value: 39.181 - type: mrr_at_1 value: 39.108 - type: mrr_at_10 value: 46.894000000000005 - type: mrr_at_100 value: 47.55 - type: mrr_at_1000 value: 47.598 - type: mrr_at_3 value: 44.766 - type: mrr_at_5 value: 46.062999999999995 - type: ndcg_at_1 value: 39.108 - type: ndcg_at_10 value: 45.717 - type: ndcg_at_100 value: 49.941 - type: ndcg_at_1000 value: 52.138 - type: ndcg_at_3 value: 42.05 - type: ndcg_at_5 value: 43.893 - type: precision_at_1 value: 39.108 - type: precision_at_10 value: 8.306 - type: precision_at_100 value: 1.3419999999999999 - type: precision_at_1000 value: 0.184 - type: precision_at_3 value: 19.979 - type: precision_at_5 value: 14.038 - type: recall_at_1 value: 30.977 - type: recall_at_10 value: 54.688 - type: recall_at_100 value: 72.556 - type: recall_at_1000 value: 86.53800000000001 - type: recall_at_3 value: 43.388 - type: recall_at_5 value: 48.717 - type: map_at_1 value: 39.812 - type: map_at_10 value: 50.1 - type: map_at_100 value: 51.193999999999996 - type: map_at_1000 value: 51.258 - type: map_at_3 value: 47.510999999999996 - type: map_at_5 value: 48.891 - type: mrr_at_1 value: 45.266 - type: mrr_at_10 value: 53.459999999999994 - type: mrr_at_100 value: 54.19199999999999 - type: mrr_at_1000 value: 54.228 - type: mrr_at_3 value: 51.296 - type: mrr_at_5 value: 52.495999999999995 - type: ndcg_at_1 value: 45.266 - type: ndcg_at_10 value: 55.034000000000006 - type: ndcg_at_100 value: 59.458 - type: ndcg_at_1000 value: 60.862 - type: ndcg_at_3 value: 50.52799999999999 - type: ndcg_at_5 value: 52.564 - type: precision_at_1 value: 45.266 - type: precision_at_10 value: 8.483 - type: precision_at_100 value: 1.162 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 21.944 - type: precision_at_5 value: 14.721 - type: recall_at_1 value: 39.812 - type: recall_at_10 value: 66.36 - type: recall_at_100 value: 85.392 - type: recall_at_1000 value: 95.523 - type: recall_at_3 value: 54.127 - type: recall_at_5 value: 59.245000000000005 - type: map_at_1 value: 26.186 - type: map_at_10 value: 33.18 - type: map_at_100 value: 34.052 - type: map_at_1000 value: 34.149 - type: map_at_3 value: 31.029 - type: map_at_5 value: 32.321 - type: mrr_at_1 value: 28.136 - type: mrr_at_10 value: 35.195 - type: mrr_at_100 value: 35.996 - type: mrr_at_1000 value: 36.076 - type: mrr_at_3 value: 33.051 - type: mrr_at_5 value: 34.407 - type: ndcg_at_1 value: 28.136 - type: ndcg_at_10 value: 37.275999999999996 - type: ndcg_at_100 value: 41.935 - type: ndcg_at_1000 value: 44.389 - type: ndcg_at_3 value: 33.059 - type: ndcg_at_5 value: 35.313 - type: precision_at_1 value: 28.136 - type: precision_at_10 value: 5.457999999999999 - type: precision_at_100 value: 0.826 - type: precision_at_1000 value: 0.107 - type: precision_at_3 value: 13.522 - type: precision_at_5 value: 9.424000000000001 - type: recall_at_1 value: 26.186 - type: recall_at_10 value: 47.961999999999996 - type: recall_at_100 value: 70.072 - type: recall_at_1000 value: 88.505 - type: recall_at_3 value: 36.752 - type: recall_at_5 value: 42.168 - type: map_at_1 value: 16.586000000000002 - type: map_at_10 value: 23.637 - type: map_at_100 value: 24.82 - type: map_at_1000 value: 24.95 - type: map_at_3 value: 21.428 - type: map_at_5 value: 22.555 - type: mrr_at_1 value: 20.771 - type: mrr_at_10 value: 27.839999999999996 - type: mrr_at_100 value: 28.887 - type: mrr_at_1000 value: 28.967 - type: mrr_at_3 value: 25.56 - type: mrr_at_5 value: 26.723000000000003 - type: ndcg_at_1 value: 20.771 - type: ndcg_at_10 value: 28.255000000000003 - type: ndcg_at_100 value: 33.886 - type: ndcg_at_1000 value: 36.963 - type: ndcg_at_3 value: 24.056 - type: ndcg_at_5 value: 25.818 - type: precision_at_1 value: 20.771 - type: precision_at_10 value: 5.1 - type: precision_at_100 value: 0.9119999999999999 - type: precision_at_1000 value: 0.132 - type: precision_at_3 value: 11.526 - type: precision_at_5 value: 8.158999999999999 - type: recall_at_1 value: 16.586000000000002 - type: recall_at_10 value: 38.456 - type: recall_at_100 value: 62.666 - type: recall_at_1000 value: 84.47 - type: recall_at_3 value: 26.765 - type: recall_at_5 value: 31.297000000000004 - type: map_at_1 value: 28.831 - type: map_at_10 value: 37.545 - type: map_at_100 value: 38.934999999999995 - type: map_at_1000 value: 39.044000000000004 - type: map_at_3 value: 34.601 - type: map_at_5 value: 36.302 - type: mrr_at_1 value: 34.264 - type: mrr_at_10 value: 42.569 - type: mrr_at_100 value: 43.514 - type: mrr_at_1000 value: 43.561 - type: mrr_at_3 value: 40.167 - type: mrr_at_5 value: 41.678 - type: ndcg_at_1 value: 34.264 - type: ndcg_at_10 value: 42.914 - type: ndcg_at_100 value: 48.931999999999995 - type: ndcg_at_1000 value: 51.004000000000005 - type: ndcg_at_3 value: 38.096999999999994 - type: ndcg_at_5 value: 40.509 - type: precision_at_1 value: 34.264 - type: precision_at_10 value: 7.642 - type: precision_at_100 value: 1.258 - type: precision_at_1000 value: 0.161 - type: precision_at_3 value: 17.453 - type: precision_at_5 value: 12.608 - type: recall_at_1 value: 28.831 - type: recall_at_10 value: 53.56999999999999 - type: recall_at_100 value: 79.26100000000001 - type: recall_at_1000 value: 92.862 - type: recall_at_3 value: 40.681 - type: recall_at_5 value: 46.597 - type: map_at_1 value: 27.461000000000002 - type: map_at_10 value: 35.885 - type: map_at_100 value: 37.039 - type: map_at_1000 value: 37.16 - type: map_at_3 value: 33.451 - type: map_at_5 value: 34.807 - type: mrr_at_1 value: 34.018 - type: mrr_at_10 value: 41.32 - type: mrr_at_100 value: 42.157 - type: mrr_at_1000 value: 42.223 - type: mrr_at_3 value: 39.288000000000004 - type: mrr_at_5 value: 40.481 - type: ndcg_at_1 value: 34.018 - type: ndcg_at_10 value: 40.821000000000005 - type: ndcg_at_100 value: 46.053 - type: ndcg_at_1000 value: 48.673 - type: ndcg_at_3 value: 36.839 - type: ndcg_at_5 value: 38.683 - type: precision_at_1 value: 34.018 - type: precision_at_10 value: 7.009 - type: precision_at_100 value: 1.123 - type: precision_at_1000 value: 0.153 - type: precision_at_3 value: 16.933 - type: precision_at_5 value: 11.826 - type: recall_at_1 value: 27.461000000000002 - type: recall_at_10 value: 50.285000000000004 - type: recall_at_100 value: 73.25500000000001 - type: recall_at_1000 value: 91.17699999999999 - type: recall_at_3 value: 39.104 - type: recall_at_5 value: 43.968 - type: map_at_1 value: 26.980083333333337 - type: map_at_10 value: 34.47208333333333 - type: map_at_100 value: 35.609249999999996 - type: map_at_1000 value: 35.72833333333333 - type: map_at_3 value: 32.189416666666666 - type: map_at_5 value: 33.44683333333334 - type: mrr_at_1 value: 31.731666666666662 - type: mrr_at_10 value: 38.518 - type: mrr_at_100 value: 39.38166666666667 - type: mrr_at_1000 value: 39.446999999999996 - type: mrr_at_3 value: 36.49966666666668 - type: mrr_at_5 value: 37.639916666666664 - type: ndcg_at_1 value: 31.731666666666662 - type: ndcg_at_10 value: 38.92033333333333 - type: ndcg_at_100 value: 44.01675 - type: ndcg_at_1000 value: 46.51075 - type: ndcg_at_3 value: 35.09766666666667 - type: ndcg_at_5 value: 36.842999999999996 - type: precision_at_1 value: 31.731666666666662 - type: precision_at_10 value: 6.472583333333332 - type: precision_at_100 value: 1.0665 - type: precision_at_1000 value: 0.14725000000000002 - type: precision_at_3 value: 15.659083333333331 - type: precision_at_5 value: 10.878833333333333 - type: recall_at_1 value: 26.980083333333337 - type: recall_at_10 value: 48.13925 - type: recall_at_100 value: 70.70149999999998 - type: recall_at_1000 value: 88.10775000000001 - type: recall_at_3 value: 37.30091666666667 - type: recall_at_5 value: 41.90358333333333 - type: map_at_1 value: 25.607999999999997 - type: map_at_10 value: 30.523 - type: map_at_100 value: 31.409 - type: map_at_1000 value: 31.507 - type: map_at_3 value: 28.915000000000003 - type: map_at_5 value: 29.756 - type: mrr_at_1 value: 28.681 - type: mrr_at_10 value: 33.409 - type: mrr_at_100 value: 34.241 - type: mrr_at_1000 value: 34.313 - type: mrr_at_3 value: 32.029999999999994 - type: mrr_at_5 value: 32.712 - type: ndcg_at_1 value: 28.681 - type: ndcg_at_10 value: 33.733000000000004 - type: ndcg_at_100 value: 38.32 - type: ndcg_at_1000 value: 40.937 - type: ndcg_at_3 value: 30.898999999999997 - type: ndcg_at_5 value: 32.088 - type: precision_at_1 value: 28.681 - type: precision_at_10 value: 4.968999999999999 - type: precision_at_100 value: 0.79 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 12.73 - type: precision_at_5 value: 8.558 - type: recall_at_1 value: 25.607999999999997 - type: recall_at_10 value: 40.722 - type: recall_at_100 value: 61.956999999999994 - type: recall_at_1000 value: 81.43 - type: recall_at_3 value: 32.785 - type: recall_at_5 value: 35.855 - type: map_at_1 value: 20.399 - type: map_at_10 value: 25.968000000000004 - type: map_at_100 value: 26.985999999999997 - type: map_at_1000 value: 27.105 - type: map_at_3 value: 24.215 - type: map_at_5 value: 25.157 - type: mrr_at_1 value: 24.708 - type: mrr_at_10 value: 29.971999999999998 - type: mrr_at_100 value: 30.858 - type: mrr_at_1000 value: 30.934 - type: mrr_at_3 value: 28.304000000000002 - type: mrr_at_5 value: 29.183999999999997 - type: ndcg_at_1 value: 24.708 - type: ndcg_at_10 value: 29.676000000000002 - type: ndcg_at_100 value: 34.656 - type: ndcg_at_1000 value: 37.588 - type: ndcg_at_3 value: 26.613 - type: ndcg_at_5 value: 27.919 - type: precision_at_1 value: 24.708 - type: precision_at_10 value: 5.01 - type: precision_at_100 value: 0.876 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 11.975 - type: precision_at_5 value: 8.279 - type: recall_at_1 value: 20.399 - type: recall_at_10 value: 36.935 - type: recall_at_100 value: 59.532 - type: recall_at_1000 value: 80.58 - type: recall_at_3 value: 27.979 - type: recall_at_5 value: 31.636999999999997 - type: map_at_1 value: 27.606 - type: map_at_10 value: 34.213 - type: map_at_100 value: 35.339999999999996 - type: map_at_1000 value: 35.458 - type: map_at_3 value: 31.987 - type: map_at_5 value: 33.322 - type: mrr_at_1 value: 31.53 - type: mrr_at_10 value: 37.911 - type: mrr_at_100 value: 38.879000000000005 - type: mrr_at_1000 value: 38.956 - type: mrr_at_3 value: 35.868 - type: mrr_at_5 value: 37.047999999999995 - type: ndcg_at_1 value: 31.53 - type: ndcg_at_10 value: 38.312000000000005 - type: ndcg_at_100 value: 43.812 - type: ndcg_at_1000 value: 46.414 - type: ndcg_at_3 value: 34.319 - type: ndcg_at_5 value: 36.312 - type: precision_at_1 value: 31.53 - type: precision_at_10 value: 5.970000000000001 - type: precision_at_100 value: 0.9939999999999999 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 14.738999999999999 - type: precision_at_5 value: 10.242999999999999 - type: recall_at_1 value: 27.606 - type: recall_at_10 value: 47.136 - type: recall_at_100 value: 71.253 - type: recall_at_1000 value: 89.39399999999999 - type: recall_at_3 value: 36.342 - type: recall_at_5 value: 41.388999999999996 - type: map_at_1 value: 24.855 - type: map_at_10 value: 31.963 - type: map_at_100 value: 33.371 - type: map_at_1000 value: 33.584 - type: map_at_3 value: 29.543999999999997 - type: map_at_5 value: 30.793 - type: mrr_at_1 value: 29.644 - type: mrr_at_10 value: 35.601 - type: mrr_at_100 value: 36.551 - type: mrr_at_1000 value: 36.623 - type: mrr_at_3 value: 33.399 - type: mrr_at_5 value: 34.575 - type: ndcg_at_1 value: 29.644 - type: ndcg_at_10 value: 36.521 - type: ndcg_at_100 value: 42.087 - type: ndcg_at_1000 value: 45.119 - type: ndcg_at_3 value: 32.797 - type: ndcg_at_5 value: 34.208 - type: precision_at_1 value: 29.644 - type: precision_at_10 value: 6.7 - type: precision_at_100 value: 1.374 - type: precision_at_1000 value: 0.22899999999999998 - type: precision_at_3 value: 15.152 - type: precision_at_5 value: 10.671999999999999 - type: recall_at_1 value: 24.855 - type: recall_at_10 value: 45.449 - type: recall_at_100 value: 70.921 - type: recall_at_1000 value: 90.629 - type: recall_at_3 value: 33.526 - type: recall_at_5 value: 37.848 - type: map_at_1 value: 24.781 - type: map_at_10 value: 30.020999999999997 - type: map_at_100 value: 30.948999999999998 - type: map_at_1000 value: 31.05 - type: map_at_3 value: 28.412 - type: map_at_5 value: 29.193 - type: mrr_at_1 value: 27.172 - type: mrr_at_10 value: 32.309 - type: mrr_at_100 value: 33.164 - type: mrr_at_1000 value: 33.239999999999995 - type: mrr_at_3 value: 30.775999999999996 - type: mrr_at_5 value: 31.562 - type: ndcg_at_1 value: 27.172 - type: ndcg_at_10 value: 33.178999999999995 - type: ndcg_at_100 value: 37.949 - type: ndcg_at_1000 value: 40.635 - type: ndcg_at_3 value: 30.107 - type: ndcg_at_5 value: 31.36 - type: precision_at_1 value: 27.172 - type: precision_at_10 value: 4.769 - type: precision_at_100 value: 0.769 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 12.261 - type: precision_at_5 value: 8.17 - type: recall_at_1 value: 24.781 - type: recall_at_10 value: 40.699000000000005 - type: recall_at_100 value: 62.866 - type: recall_at_1000 value: 83.11699999999999 - type: recall_at_3 value: 32.269999999999996 - type: recall_at_5 value: 35.443999999999996 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 5.2139999999999995 - type: map_at_10 value: 9.986 - type: map_at_100 value: 11.343 - type: map_at_1000 value: 11.55 - type: map_at_3 value: 7.961 - type: map_at_5 value: 8.967 - type: mrr_at_1 value: 12.052 - type: mrr_at_10 value: 20.165 - type: mrr_at_100 value: 21.317 - type: mrr_at_1000 value: 21.399 - type: mrr_at_3 value: 17.079 - type: mrr_at_5 value: 18.695 - type: ndcg_at_1 value: 12.052 - type: ndcg_at_10 value: 15.375 - type: ndcg_at_100 value: 21.858 - type: ndcg_at_1000 value: 26.145000000000003 - type: ndcg_at_3 value: 11.334 - type: ndcg_at_5 value: 12.798000000000002 - type: precision_at_1 value: 12.052 - type: precision_at_10 value: 5.16 - type: precision_at_100 value: 1.206 - type: precision_at_1000 value: 0.198 - type: precision_at_3 value: 8.73 - type: precision_at_5 value: 7.114 - type: recall_at_1 value: 5.2139999999999995 - type: recall_at_10 value: 20.669999999999998 - type: recall_at_100 value: 43.901 - type: recall_at_1000 value: 68.447 - type: recall_at_3 value: 11.049000000000001 - type: recall_at_5 value: 14.652999999999999 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.511000000000001 - type: map_at_10 value: 19.503 - type: map_at_100 value: 27.46 - type: map_at_1000 value: 29.187 - type: map_at_3 value: 14.030999999999999 - type: map_at_5 value: 16.329 - type: mrr_at_1 value: 63.74999999999999 - type: mrr_at_10 value: 73.419 - type: mrr_at_100 value: 73.691 - type: mrr_at_1000 value: 73.697 - type: mrr_at_3 value: 71.792 - type: mrr_at_5 value: 72.979 - type: ndcg_at_1 value: 53.125 - type: ndcg_at_10 value: 41.02 - type: ndcg_at_100 value: 45.407 - type: ndcg_at_1000 value: 52.68000000000001 - type: ndcg_at_3 value: 46.088 - type: ndcg_at_5 value: 43.236000000000004 - type: precision_at_1 value: 63.74999999999999 - type: precision_at_10 value: 32.35 - type: precision_at_100 value: 10.363 - type: precision_at_1000 value: 2.18 - type: precision_at_3 value: 49.667 - type: precision_at_5 value: 41.5 - type: recall_at_1 value: 8.511000000000001 - type: recall_at_10 value: 24.851 - type: recall_at_100 value: 50.745 - type: recall_at_1000 value: 73.265 - type: recall_at_3 value: 15.716 - type: recall_at_5 value: 19.256 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 49.43500000000001 - type: f1 value: 44.56288273966374 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 40.858 - type: map_at_10 value: 52.276 - type: map_at_100 value: 52.928 - type: map_at_1000 value: 52.966 - type: map_at_3 value: 49.729 - type: map_at_5 value: 51.27 - type: mrr_at_1 value: 43.624 - type: mrr_at_10 value: 55.22899999999999 - type: mrr_at_100 value: 55.823 - type: mrr_at_1000 value: 55.85 - type: mrr_at_3 value: 52.739999999999995 - type: mrr_at_5 value: 54.251000000000005 - type: ndcg_at_1 value: 43.624 - type: ndcg_at_10 value: 58.23500000000001 - type: ndcg_at_100 value: 61.315 - type: ndcg_at_1000 value: 62.20099999999999 - type: ndcg_at_3 value: 53.22 - type: ndcg_at_5 value: 55.88999999999999 - type: precision_at_1 value: 43.624 - type: precision_at_10 value: 8.068999999999999 - type: precision_at_100 value: 0.975 - type: precision_at_1000 value: 0.107 - type: precision_at_3 value: 21.752 - type: precision_at_5 value: 14.515 - type: recall_at_1 value: 40.858 - type: recall_at_10 value: 73.744 - type: recall_at_100 value: 87.667 - type: recall_at_1000 value: 94.15599999999999 - type: recall_at_3 value: 60.287 - type: recall_at_5 value: 66.703 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 17.864 - type: map_at_10 value: 28.592000000000002 - type: map_at_100 value: 30.165 - type: map_at_1000 value: 30.364 - type: map_at_3 value: 24.586 - type: map_at_5 value: 26.717000000000002 - type: mrr_at_1 value: 35.031 - type: mrr_at_10 value: 43.876 - type: mrr_at_100 value: 44.683 - type: mrr_at_1000 value: 44.736 - type: mrr_at_3 value: 40.998000000000005 - type: mrr_at_5 value: 42.595 - type: ndcg_at_1 value: 35.031 - type: ndcg_at_10 value: 36.368 - type: ndcg_at_100 value: 42.472 - type: ndcg_at_1000 value: 45.973000000000006 - type: ndcg_at_3 value: 31.915 - type: ndcg_at_5 value: 33.394 - type: precision_at_1 value: 35.031 - type: precision_at_10 value: 10.139 - type: precision_at_100 value: 1.6420000000000001 - type: precision_at_1000 value: 0.22699999999999998 - type: precision_at_3 value: 21.142 - type: precision_at_5 value: 15.772 - type: recall_at_1 value: 17.864 - type: recall_at_10 value: 43.991 - type: recall_at_100 value: 66.796 - type: recall_at_1000 value: 87.64 - type: recall_at_3 value: 28.915999999999997 - type: recall_at_5 value: 35.185 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 36.556 - type: map_at_10 value: 53.056000000000004 - type: map_at_100 value: 53.909 - type: map_at_1000 value: 53.98 - type: map_at_3 value: 49.982 - type: map_at_5 value: 51.9 - type: mrr_at_1 value: 73.113 - type: mrr_at_10 value: 79.381 - type: mrr_at_100 value: 79.60300000000001 - type: mrr_at_1000 value: 79.617 - type: mrr_at_3 value: 78.298 - type: mrr_at_5 value: 78.995 - type: ndcg_at_1 value: 73.113 - type: ndcg_at_10 value: 62.21 - type: ndcg_at_100 value: 65.242 - type: ndcg_at_1000 value: 66.667 - type: ndcg_at_3 value: 57.717 - type: ndcg_at_5 value: 60.224 - type: precision_at_1 value: 73.113 - type: precision_at_10 value: 12.842999999999998 - type: precision_at_100 value: 1.522 - type: precision_at_1000 value: 0.17099999999999999 - type: precision_at_3 value: 36.178 - type: precision_at_5 value: 23.695 - type: recall_at_1 value: 36.556 - type: recall_at_10 value: 64.213 - type: recall_at_100 value: 76.077 - type: recall_at_1000 value: 85.53699999999999 - type: recall_at_3 value: 54.266999999999996 - type: recall_at_5 value: 59.236999999999995 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 75.958 - type: ap value: 69.82869527654348 - type: f1 value: 75.89120903005633 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 23.608 - type: map_at_10 value: 36.144 - type: map_at_100 value: 37.244 - type: map_at_1000 value: 37.291999999999994 - type: map_at_3 value: 32.287 - type: map_at_5 value: 34.473 - type: mrr_at_1 value: 24.226 - type: mrr_at_10 value: 36.711 - type: mrr_at_100 value: 37.758 - type: mrr_at_1000 value: 37.8 - type: mrr_at_3 value: 32.92 - type: mrr_at_5 value: 35.104 - type: ndcg_at_1 value: 24.269 - type: ndcg_at_10 value: 43.138 - type: ndcg_at_100 value: 48.421 - type: ndcg_at_1000 value: 49.592000000000006 - type: ndcg_at_3 value: 35.269 - type: ndcg_at_5 value: 39.175 - type: precision_at_1 value: 24.269 - type: precision_at_10 value: 6.755999999999999 - type: precision_at_100 value: 0.941 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.938 - type: precision_at_5 value: 10.934000000000001 - type: recall_at_1 value: 23.608 - type: recall_at_10 value: 64.679 - type: recall_at_100 value: 89.027 - type: recall_at_1000 value: 97.91 - type: recall_at_3 value: 43.25 - type: recall_at_5 value: 52.617000000000004 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.21477428180576 - type: f1 value: 92.92502305092152 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.76744186046511 - type: f1 value: 59.19855520057899 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.24613315400134 - type: f1 value: 70.19950395651232 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.75857431069268 - type: f1 value: 76.5433450230191 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.525463791623604 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 28.28695907385136 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.068174046665224 - type: mrr value: 30.827586642840803 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.322 - type: map_at_10 value: 13.919999999999998 - type: map_at_100 value: 17.416 - type: map_at_1000 value: 18.836 - type: map_at_3 value: 10.111 - type: map_at_5 value: 11.991999999999999 - type: mrr_at_1 value: 48.297000000000004 - type: mrr_at_10 value: 57.114 - type: mrr_at_100 value: 57.713 - type: mrr_at_1000 value: 57.751 - type: mrr_at_3 value: 55.108000000000004 - type: mrr_at_5 value: 56.533 - type: ndcg_at_1 value: 46.44 - type: ndcg_at_10 value: 36.589 - type: ndcg_at_100 value: 33.202 - type: ndcg_at_1000 value: 41.668 - type: ndcg_at_3 value: 41.302 - type: ndcg_at_5 value: 39.829 - type: precision_at_1 value: 47.988 - type: precision_at_10 value: 27.059 - type: precision_at_100 value: 8.235000000000001 - type: precision_at_1000 value: 2.091 - type: precision_at_3 value: 38.184000000000005 - type: precision_at_5 value: 34.365 - type: recall_at_1 value: 6.322 - type: recall_at_10 value: 18.288 - type: recall_at_100 value: 32.580999999999996 - type: recall_at_1000 value: 63.605999999999995 - type: recall_at_3 value: 11.266 - type: recall_at_5 value: 14.69 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 36.586999999999996 - type: map_at_10 value: 52.464 - type: map_at_100 value: 53.384 - type: map_at_1000 value: 53.405 - type: map_at_3 value: 48.408 - type: map_at_5 value: 50.788999999999994 - type: mrr_at_1 value: 40.904 - type: mrr_at_10 value: 54.974000000000004 - type: mrr_at_100 value: 55.60699999999999 - type: mrr_at_1000 value: 55.623 - type: mrr_at_3 value: 51.73799999999999 - type: mrr_at_5 value: 53.638 - type: ndcg_at_1 value: 40.904 - type: ndcg_at_10 value: 59.965999999999994 - type: ndcg_at_100 value: 63.613 - type: ndcg_at_1000 value: 64.064 - type: ndcg_at_3 value: 52.486 - type: ndcg_at_5 value: 56.377 - type: precision_at_1 value: 40.904 - type: precision_at_10 value: 9.551 - type: precision_at_100 value: 1.162 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 23.552 - type: precision_at_5 value: 16.436999999999998 - type: recall_at_1 value: 36.586999999999996 - type: recall_at_10 value: 80.094 - type: recall_at_100 value: 95.515 - type: recall_at_1000 value: 98.803 - type: recall_at_3 value: 60.907 - type: recall_at_5 value: 69.817 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.422 - type: map_at_10 value: 84.113 - type: map_at_100 value: 84.744 - type: map_at_1000 value: 84.762 - type: map_at_3 value: 81.171 - type: map_at_5 value: 83.039 - type: mrr_at_1 value: 81.12 - type: mrr_at_10 value: 87.277 - type: mrr_at_100 value: 87.384 - type: mrr_at_1000 value: 87.385 - type: mrr_at_3 value: 86.315 - type: mrr_at_5 value: 86.981 - type: ndcg_at_1 value: 81.12 - type: ndcg_at_10 value: 87.92 - type: ndcg_at_100 value: 89.178 - type: ndcg_at_1000 value: 89.29899999999999 - type: ndcg_at_3 value: 85.076 - type: ndcg_at_5 value: 86.67099999999999 - type: precision_at_1 value: 81.12 - type: precision_at_10 value: 13.325999999999999 - type: precision_at_100 value: 1.524 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.16 - type: precision_at_5 value: 24.456 - type: recall_at_1 value: 70.422 - type: recall_at_10 value: 95.00800000000001 - type: recall_at_100 value: 99.38 - type: recall_at_1000 value: 99.94800000000001 - type: recall_at_3 value: 86.809 - type: recall_at_5 value: 91.334 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 48.18491891699636 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 62.190639679711914 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.478 - type: map_at_10 value: 11.268 - type: map_at_100 value: 13.129 - type: map_at_1000 value: 13.41 - type: map_at_3 value: 8.103 - type: map_at_5 value: 9.609 - type: mrr_at_1 value: 22 - type: mrr_at_10 value: 32.248 - type: mrr_at_100 value: 33.355000000000004 - type: mrr_at_1000 value: 33.42 - type: mrr_at_3 value: 29.15 - type: mrr_at_5 value: 30.785 - type: ndcg_at_1 value: 22 - type: ndcg_at_10 value: 18.990000000000002 - type: ndcg_at_100 value: 26.302999999999997 - type: ndcg_at_1000 value: 31.537 - type: ndcg_at_3 value: 18.034 - type: ndcg_at_5 value: 15.655 - type: precision_at_1 value: 22 - type: precision_at_10 value: 9.91 - type: precision_at_100 value: 2.0420000000000003 - type: precision_at_1000 value: 0.33 - type: precision_at_3 value: 16.933 - type: precision_at_5 value: 13.719999999999999 - type: recall_at_1 value: 4.478 - type: recall_at_10 value: 20.087 - type: recall_at_100 value: 41.457 - type: recall_at_1000 value: 67.10199999999999 - type: recall_at_3 value: 10.313 - type: recall_at_5 value: 13.927999999999999 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 84.27341574565806 - type: cos_sim_spearman value: 79.66419880841734 - type: euclidean_pearson value: 81.32473321838208 - type: euclidean_spearman value: 79.29828832085133 - type: manhattan_pearson value: 81.25554065883132 - type: manhattan_spearman value: 79.23275543279853 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 83.40468875905418 - type: cos_sim_spearman value: 74.2189990321174 - type: euclidean_pearson value: 80.74376966290956 - type: euclidean_spearman value: 74.97663839079335 - type: manhattan_pearson value: 80.69779331646207 - type: manhattan_spearman value: 75.00225252917613 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 82.5745290053095 - type: cos_sim_spearman value: 83.31401180333397 - type: euclidean_pearson value: 82.96500607325534 - type: euclidean_spearman value: 83.8534967935793 - type: manhattan_pearson value: 82.83112050632508 - type: manhattan_spearman value: 83.70877296557838 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 80.67833656607704 - type: cos_sim_spearman value: 78.52252410630707 - type: euclidean_pearson value: 80.071189514343 - type: euclidean_spearman value: 78.95143545742796 - type: manhattan_pearson value: 80.0128926165121 - type: manhattan_spearman value: 78.91236678732628 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.48437639980746 - type: cos_sim_spearman value: 88.34876527774259 - type: euclidean_pearson value: 87.64898081823888 - type: euclidean_spearman value: 88.58937180804213 - type: manhattan_pearson value: 87.5942417815288 - type: manhattan_spearman value: 88.53013922267687 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 82.69189187164781 - type: cos_sim_spearman value: 84.15327883572112 - type: euclidean_pearson value: 83.64202266685898 - type: euclidean_spearman value: 84.6219602318862 - type: manhattan_pearson value: 83.53256698709998 - type: manhattan_spearman value: 84.49260712904946 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.09508017611589 - type: cos_sim_spearman value: 87.23010990417097 - type: euclidean_pearson value: 87.62545569077133 - type: euclidean_spearman value: 86.71152051711714 - type: manhattan_pearson value: 87.5057154278377 - type: manhattan_spearman value: 86.60611898281267 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 61.72129893941176 - type: cos_sim_spearman value: 62.87871412069194 - type: euclidean_pearson value: 63.21077648290454 - type: euclidean_spearman value: 63.03263080805978 - type: manhattan_pearson value: 63.20740860135976 - type: manhattan_spearman value: 62.89930471802817 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.039118236799 - type: cos_sim_spearman value: 86.18102563389962 - type: euclidean_pearson value: 85.62977041471879 - type: euclidean_spearman value: 86.02478990544347 - type: manhattan_pearson value: 85.60786740521806 - type: manhattan_spearman value: 85.99546210442547 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 82.89875069737266 - type: mrr value: 95.42621322033087 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 58.660999999999994 - type: map_at_10 value: 68.738 - type: map_at_100 value: 69.33200000000001 - type: map_at_1000 value: 69.352 - type: map_at_3 value: 66.502 - type: map_at_5 value: 67.686 - type: mrr_at_1 value: 61.667 - type: mrr_at_10 value: 70.003 - type: mrr_at_100 value: 70.441 - type: mrr_at_1000 value: 70.46 - type: mrr_at_3 value: 68.278 - type: mrr_at_5 value: 69.194 - type: ndcg_at_1 value: 61.667 - type: ndcg_at_10 value: 73.083 - type: ndcg_at_100 value: 75.56 - type: ndcg_at_1000 value: 76.01400000000001 - type: ndcg_at_3 value: 69.28699999999999 - type: ndcg_at_5 value: 70.85000000000001 - type: precision_at_1 value: 61.667 - type: precision_at_10 value: 9.6 - type: precision_at_100 value: 1.087 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 27.111 - type: precision_at_5 value: 17.467 - type: recall_at_1 value: 58.660999999999994 - type: recall_at_10 value: 85.02199999999999 - type: recall_at_100 value: 95.933 - type: recall_at_1000 value: 99.333 - type: recall_at_3 value: 74.506 - type: recall_at_5 value: 78.583 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.8029702970297 - type: cos_sim_ap value: 94.87673936635738 - type: cos_sim_f1 value: 90.00502260170768 - type: cos_sim_precision value: 90.41372351160445 - type: cos_sim_recall value: 89.60000000000001 - type: dot_accuracy value: 99.57524752475247 - type: dot_ap value: 84.81717934496321 - type: dot_f1 value: 78.23026646556059 - type: dot_precision value: 78.66531850353893 - type: dot_recall value: 77.8 - type: euclidean_accuracy value: 99.8029702970297 - type: euclidean_ap value: 94.74658253135284 - type: euclidean_f1 value: 90.08470353761834 - type: euclidean_precision value: 89.77159880834161 - type: euclidean_recall value: 90.4 - type: manhattan_accuracy value: 99.8 - type: manhattan_ap value: 94.69224030742787 - type: manhattan_f1 value: 89.9502487562189 - type: manhattan_precision value: 89.50495049504951 - type: manhattan_recall value: 90.4 - type: max_accuracy value: 99.8029702970297 - type: max_ap value: 94.87673936635738 - type: max_f1 value: 90.08470353761834 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 63.906039623153035 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 32.56053830923281 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 50.15326538775145 - type: mrr value: 50.99279295051355 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.44030762047337 - type: cos_sim_spearman value: 31.00910300264562 - type: dot_pearson value: 26.88257194766013 - type: dot_spearman value: 27.646202679013577 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.247 - type: map_at_10 value: 1.9429999999999998 - type: map_at_100 value: 10.82 - type: map_at_1000 value: 25.972 - type: map_at_3 value: 0.653 - type: map_at_5 value: 1.057 - type: mrr_at_1 value: 94 - type: mrr_at_10 value: 96.333 - type: mrr_at_100 value: 96.333 - type: mrr_at_1000 value: 96.333 - type: mrr_at_3 value: 96.333 - type: mrr_at_5 value: 96.333 - type: ndcg_at_1 value: 89 - type: ndcg_at_10 value: 79.63799999999999 - type: ndcg_at_100 value: 57.961 - type: ndcg_at_1000 value: 50.733 - type: ndcg_at_3 value: 84.224 - type: ndcg_at_5 value: 82.528 - type: precision_at_1 value: 94 - type: precision_at_10 value: 84.2 - type: precision_at_100 value: 59.36 - type: precision_at_1000 value: 22.738 - type: precision_at_3 value: 88 - type: precision_at_5 value: 86.8 - type: recall_at_1 value: 0.247 - type: recall_at_10 value: 2.131 - type: recall_at_100 value: 14.035 - type: recall_at_1000 value: 47.457 - type: recall_at_3 value: 0.6779999999999999 - type: recall_at_5 value: 1.124 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.603 - type: map_at_10 value: 11.667 - type: map_at_100 value: 16.474 - type: map_at_1000 value: 18.074 - type: map_at_3 value: 6.03 - type: map_at_5 value: 8.067 - type: mrr_at_1 value: 34.694 - type: mrr_at_10 value: 51.063 - type: mrr_at_100 value: 51.908 - type: mrr_at_1000 value: 51.908 - type: mrr_at_3 value: 47.959 - type: mrr_at_5 value: 49.694 - type: ndcg_at_1 value: 32.653 - type: ndcg_at_10 value: 28.305000000000003 - type: ndcg_at_100 value: 35.311 - type: ndcg_at_1000 value: 47.644999999999996 - type: ndcg_at_3 value: 32.187 - type: ndcg_at_5 value: 29.134999999999998 - type: precision_at_1 value: 34.694 - type: precision_at_10 value: 26.122 - type: precision_at_100 value: 6.755 - type: precision_at_1000 value: 1.467 - type: precision_at_3 value: 34.694 - type: precision_at_5 value: 30.203999999999997 - type: recall_at_1 value: 2.603 - type: recall_at_10 value: 18.716 - type: recall_at_100 value: 42.512 - type: recall_at_1000 value: 79.32000000000001 - type: recall_at_3 value: 7.59 - type: recall_at_5 value: 10.949 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 74.117 - type: ap value: 15.89357321699319 - type: f1 value: 57.14385866369257 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.38370118845502 - type: f1 value: 61.67038693866553 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 42.57754941537969 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.1775049174465 - type: cos_sim_ap value: 74.3994879581554 - type: cos_sim_f1 value: 69.32903671308551 - type: cos_sim_precision value: 61.48193508879363 - type: cos_sim_recall value: 79.47229551451187 - type: dot_accuracy value: 81.65345413363534 - type: dot_ap value: 59.690898346685096 - type: dot_f1 value: 57.27622826467499 - type: dot_precision value: 51.34965473948525 - type: dot_recall value: 64.74934036939314 - type: euclidean_accuracy value: 86.04637301066937 - type: euclidean_ap value: 74.33009001775268 - type: euclidean_f1 value: 69.2458374142997 - type: euclidean_precision value: 64.59570580173595 - type: euclidean_recall value: 74.6174142480211 - type: manhattan_accuracy value: 86.11193896405793 - type: manhattan_ap value: 74.2964140130421 - type: manhattan_f1 value: 69.11601528788066 - type: manhattan_precision value: 64.86924323073363 - type: manhattan_recall value: 73.95778364116094 - type: max_accuracy value: 86.1775049174465 - type: max_ap value: 74.3994879581554 - type: max_f1 value: 69.32903671308551 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.01501921061823 - type: cos_sim_ap value: 85.97819287477351 - type: cos_sim_f1 value: 78.33882858518875 - type: cos_sim_precision value: 75.49446626204926 - type: cos_sim_recall value: 81.40591315060055 - type: dot_accuracy value: 86.47494857763806 - type: dot_ap value: 78.77420360340282 - type: dot_f1 value: 73.06433247936238 - type: dot_precision value: 67.92140777983595 - type: dot_recall value: 79.04989220819218 - type: euclidean_accuracy value: 88.7297706368611 - type: euclidean_ap value: 85.61550568529317 - type: euclidean_f1 value: 77.84805525263539 - type: euclidean_precision value: 73.73639994491117 - type: euclidean_recall value: 82.44533415460425 - type: manhattan_accuracy value: 88.75111576823068 - type: manhattan_ap value: 85.58701671476263 - type: manhattan_f1 value: 77.70169909067856 - type: manhattan_precision value: 73.37666780704755 - type: manhattan_recall value: 82.5685247921158 - type: max_accuracy value: 89.01501921061823 - type: max_ap value: 85.97819287477351 - type: max_f1 value: 78.33882858518875 --- ## E5-base **News (May 2023): please switch to [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2), which has better performance and same method of usage.** [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022 This model has 12 layers and the embedding size is 768. ## Usage Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ". # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = ['query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."] tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-base') model = AutoModel.from_pretrained('intfloat/e5-base') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Training Details Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf). ## Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## Support for Sentence Transformers Below is an example for usage with sentence_transformers. ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('intfloat/e5-base') input_texts = [ 'query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments." ] embeddings = model.encode(input_texts, normalize_embeddings=True) ``` Package requirements `pip install sentence_transformers~=2.2.2` Contributors: [michaelfeil](https://huggingface.co/michaelfeil) ## FAQ **1. Do I need to add the prefix "query: " and "passage: " to input texts?** Yes, this is how the model is trained, otherwise you will see a performance degradation. Here are some rules of thumb: - Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval. - Use "query: " prefix for symmetric tasks such as semantic similarity, paraphrase retrieval. - Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering. **2. Why are my reproduced results slightly different from reported in the model card?** Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences. **3. Why does the cosine similarity scores distribute around 0.7 to 1.0?** This is a known and expected behavior as we use a low temperature 0.01 for InfoNCE contrastive loss. For text embedding tasks like text retrieval or semantic similarity, what matters is the relative order of the scores instead of the absolute values, so this should not be an issue. ## Citation If you find our paper or models helpful, please consider cite as follows: ``` @article{wang2022text, title={Text Embeddings by Weakly-Supervised Contrastive Pre-training}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2212.03533}, year={2022} } ``` ## Limitations This model only works for English texts. Long texts will be truncated to at most 512 tokens.
[ "SEMANTIC_SIMILARITY", "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
liddlefish/privacy_embedding_rag_10k_base_12_final
liddlefish
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "mteb", "en", "arxiv:2401.03462", "arxiv:2312.15503", "arxiv:2311.13534", "arxiv:2310.07554", "arxiv:2309.07597", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,718
1,718
7
0
--- language: - en license: mit tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - mteb model-index: - name: bge-base-en-v1.5 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.14925373134328 - type: ap value: 39.32336517995478 - type: f1 value: 70.16902252611425 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 93.386825 - type: ap value: 90.21276917991995 - type: f1 value: 93.37741030006174 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.846000000000004 - type: f1 value: 48.14646269778261 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 40.754000000000005 - type: map_at_10 value: 55.761 - type: map_at_100 value: 56.330999999999996 - type: map_at_1000 value: 56.333999999999996 - type: map_at_3 value: 51.92 - type: map_at_5 value: 54.010999999999996 - type: mrr_at_1 value: 41.181 - type: mrr_at_10 value: 55.967999999999996 - type: mrr_at_100 value: 56.538 - type: mrr_at_1000 value: 56.542 - type: mrr_at_3 value: 51.980000000000004 - type: mrr_at_5 value: 54.208999999999996 - type: ndcg_at_1 value: 40.754000000000005 - type: ndcg_at_10 value: 63.605000000000004 - type: ndcg_at_100 value: 66.05199999999999 - type: ndcg_at_1000 value: 66.12 - type: ndcg_at_3 value: 55.708 - type: ndcg_at_5 value: 59.452000000000005 - type: precision_at_1 value: 40.754000000000005 - type: precision_at_10 value: 8.841000000000001 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.238 - type: precision_at_5 value: 15.149000000000001 - type: recall_at_1 value: 40.754000000000005 - type: recall_at_10 value: 88.407 - type: recall_at_100 value: 99.14699999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 66.714 - type: recall_at_5 value: 75.747 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 48.74884539679369 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 42.8075893810716 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 62.128470519187736 - type: mrr value: 74.28065778481289 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 89.24629081484655 - type: cos_sim_spearman value: 86.93752309911496 - type: euclidean_pearson value: 87.58589628573816 - type: euclidean_spearman value: 88.05622328825284 - type: manhattan_pearson value: 87.5594959805773 - type: manhattan_spearman value: 88.19658793233961 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 86.9512987012987 - type: f1 value: 86.92515357973708 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 39.10263762928872 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 36.69711517426737 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 32.327 - type: map_at_10 value: 44.099 - type: map_at_100 value: 45.525 - type: map_at_1000 value: 45.641999999999996 - type: map_at_3 value: 40.47 - type: map_at_5 value: 42.36 - type: mrr_at_1 value: 39.199 - type: mrr_at_10 value: 49.651 - type: mrr_at_100 value: 50.29 - type: mrr_at_1000 value: 50.329 - type: mrr_at_3 value: 46.924 - type: mrr_at_5 value: 48.548 - type: ndcg_at_1 value: 39.199 - type: ndcg_at_10 value: 50.773 - type: ndcg_at_100 value: 55.67999999999999 - type: ndcg_at_1000 value: 57.495 - type: ndcg_at_3 value: 45.513999999999996 - type: ndcg_at_5 value: 47.703 - type: precision_at_1 value: 39.199 - type: precision_at_10 value: 9.914000000000001 - type: precision_at_100 value: 1.5310000000000001 - type: precision_at_1000 value: 0.198 - type: precision_at_3 value: 21.984 - type: precision_at_5 value: 15.737000000000002 - type: recall_at_1 value: 32.327 - type: recall_at_10 value: 63.743 - type: recall_at_100 value: 84.538 - type: recall_at_1000 value: 96.089 - type: recall_at_3 value: 48.065000000000005 - type: recall_at_5 value: 54.519 - type: map_at_1 value: 32.671 - type: map_at_10 value: 42.954 - type: map_at_100 value: 44.151 - type: map_at_1000 value: 44.287 - type: map_at_3 value: 39.912 - type: map_at_5 value: 41.798 - type: mrr_at_1 value: 41.465 - type: mrr_at_10 value: 49.351 - type: mrr_at_100 value: 49.980000000000004 - type: mrr_at_1000 value: 50.016000000000005 - type: mrr_at_3 value: 47.144000000000005 - type: mrr_at_5 value: 48.592999999999996 - type: ndcg_at_1 value: 41.465 - type: ndcg_at_10 value: 48.565999999999995 - type: ndcg_at_100 value: 52.76499999999999 - type: ndcg_at_1000 value: 54.749 - type: ndcg_at_3 value: 44.57 - type: ndcg_at_5 value: 46.759 - type: precision_at_1 value: 41.465 - type: precision_at_10 value: 9.107999999999999 - type: precision_at_100 value: 1.433 - type: precision_at_1000 value: 0.191 - type: precision_at_3 value: 21.423000000000002 - type: precision_at_5 value: 15.414 - type: recall_at_1 value: 32.671 - type: recall_at_10 value: 57.738 - type: recall_at_100 value: 75.86500000000001 - type: recall_at_1000 value: 88.36 - type: recall_at_3 value: 45.626 - type: recall_at_5 value: 51.812000000000005 - type: map_at_1 value: 41.185 - type: map_at_10 value: 53.929 - type: map_at_100 value: 54.92 - type: map_at_1000 value: 54.967999999999996 - type: map_at_3 value: 50.70400000000001 - type: map_at_5 value: 52.673 - type: mrr_at_1 value: 47.398 - type: mrr_at_10 value: 57.303000000000004 - type: mrr_at_100 value: 57.959 - type: mrr_at_1000 value: 57.985 - type: mrr_at_3 value: 54.932 - type: mrr_at_5 value: 56.464999999999996 - type: ndcg_at_1 value: 47.398 - type: ndcg_at_10 value: 59.653 - type: ndcg_at_100 value: 63.627 - type: ndcg_at_1000 value: 64.596 - type: ndcg_at_3 value: 54.455 - type: ndcg_at_5 value: 57.245000000000005 - type: precision_at_1 value: 47.398 - type: precision_at_10 value: 9.524000000000001 - type: precision_at_100 value: 1.243 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 24.389 - type: precision_at_5 value: 16.752 - type: recall_at_1 value: 41.185 - type: recall_at_10 value: 73.193 - type: recall_at_100 value: 90.357 - type: recall_at_1000 value: 97.253 - type: recall_at_3 value: 59.199999999999996 - type: recall_at_5 value: 66.118 - type: map_at_1 value: 27.27 - type: map_at_10 value: 36.223 - type: map_at_100 value: 37.218 - type: map_at_1000 value: 37.293 - type: map_at_3 value: 33.503 - type: map_at_5 value: 35.097 - type: mrr_at_1 value: 29.492 - type: mrr_at_10 value: 38.352000000000004 - type: mrr_at_100 value: 39.188 - type: mrr_at_1000 value: 39.247 - type: mrr_at_3 value: 35.876000000000005 - type: mrr_at_5 value: 37.401 - type: ndcg_at_1 value: 29.492 - type: ndcg_at_10 value: 41.239 - type: ndcg_at_100 value: 46.066 - type: ndcg_at_1000 value: 47.992000000000004 - type: ndcg_at_3 value: 36.11 - type: ndcg_at_5 value: 38.772 - type: precision_at_1 value: 29.492 - type: precision_at_10 value: 6.260000000000001 - type: precision_at_100 value: 0.914 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 15.104000000000001 - type: precision_at_5 value: 10.644 - type: recall_at_1 value: 27.27 - type: recall_at_10 value: 54.589 - type: recall_at_100 value: 76.70700000000001 - type: recall_at_1000 value: 91.158 - type: recall_at_3 value: 40.974 - type: recall_at_5 value: 47.327000000000005 - type: map_at_1 value: 17.848 - type: map_at_10 value: 26.207 - type: map_at_100 value: 27.478 - type: map_at_1000 value: 27.602 - type: map_at_3 value: 23.405 - type: map_at_5 value: 24.98 - type: mrr_at_1 value: 21.891 - type: mrr_at_10 value: 31.041999999999998 - type: mrr_at_100 value: 32.092 - type: mrr_at_1000 value: 32.151999999999994 - type: mrr_at_3 value: 28.358 - type: mrr_at_5 value: 29.969 - type: ndcg_at_1 value: 21.891 - type: ndcg_at_10 value: 31.585 - type: ndcg_at_100 value: 37.531 - type: ndcg_at_1000 value: 40.256 - type: ndcg_at_3 value: 26.508 - type: ndcg_at_5 value: 28.894 - type: precision_at_1 value: 21.891 - type: precision_at_10 value: 5.795999999999999 - type: precision_at_100 value: 0.9990000000000001 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 12.769 - type: precision_at_5 value: 9.279 - type: recall_at_1 value: 17.848 - type: recall_at_10 value: 43.452 - type: recall_at_100 value: 69.216 - type: recall_at_1000 value: 88.102 - type: recall_at_3 value: 29.18 - type: recall_at_5 value: 35.347 - type: map_at_1 value: 30.94 - type: map_at_10 value: 41.248000000000005 - type: map_at_100 value: 42.495 - type: map_at_1000 value: 42.602000000000004 - type: map_at_3 value: 37.939 - type: map_at_5 value: 39.924 - type: mrr_at_1 value: 37.824999999999996 - type: mrr_at_10 value: 47.041 - type: mrr_at_100 value: 47.83 - type: mrr_at_1000 value: 47.878 - type: mrr_at_3 value: 44.466 - type: mrr_at_5 value: 46.111999999999995 - type: ndcg_at_1 value: 37.824999999999996 - type: ndcg_at_10 value: 47.223 - type: ndcg_at_100 value: 52.394 - type: ndcg_at_1000 value: 54.432 - type: ndcg_at_3 value: 42.032000000000004 - type: ndcg_at_5 value: 44.772 - type: precision_at_1 value: 37.824999999999996 - type: precision_at_10 value: 8.393 - type: precision_at_100 value: 1.2890000000000001 - type: precision_at_1000 value: 0.164 - type: precision_at_3 value: 19.698 - type: precision_at_5 value: 14.013 - type: recall_at_1 value: 30.94 - type: recall_at_10 value: 59.316 - type: recall_at_100 value: 80.783 - type: recall_at_1000 value: 94.15400000000001 - type: recall_at_3 value: 44.712 - type: recall_at_5 value: 51.932 - type: map_at_1 value: 27.104 - type: map_at_10 value: 36.675999999999995 - type: map_at_100 value: 38.076 - type: map_at_1000 value: 38.189 - type: map_at_3 value: 33.733999999999995 - type: map_at_5 value: 35.287 - type: mrr_at_1 value: 33.904 - type: mrr_at_10 value: 42.55 - type: mrr_at_100 value: 43.434 - type: mrr_at_1000 value: 43.494 - type: mrr_at_3 value: 40.126 - type: mrr_at_5 value: 41.473 - type: ndcg_at_1 value: 33.904 - type: ndcg_at_10 value: 42.414 - type: ndcg_at_100 value: 48.203 - type: ndcg_at_1000 value: 50.437 - type: ndcg_at_3 value: 37.633 - type: ndcg_at_5 value: 39.67 - type: precision_at_1 value: 33.904 - type: precision_at_10 value: 7.82 - type: precision_at_100 value: 1.2409999999999999 - type: precision_at_1000 value: 0.159 - type: precision_at_3 value: 17.884 - type: precision_at_5 value: 12.648000000000001 - type: recall_at_1 value: 27.104 - type: recall_at_10 value: 53.563 - type: recall_at_100 value: 78.557 - type: recall_at_1000 value: 93.533 - type: recall_at_3 value: 39.92 - type: recall_at_5 value: 45.457 - type: map_at_1 value: 27.707749999999997 - type: map_at_10 value: 36.961 - type: map_at_100 value: 38.158833333333334 - type: map_at_1000 value: 38.270333333333326 - type: map_at_3 value: 34.07183333333334 - type: map_at_5 value: 35.69533333333334 - type: mrr_at_1 value: 32.81875 - type: mrr_at_10 value: 41.293 - type: mrr_at_100 value: 42.116499999999995 - type: mrr_at_1000 value: 42.170249999999996 - type: mrr_at_3 value: 38.83983333333333 - type: mrr_at_5 value: 40.29775 - type: ndcg_at_1 value: 32.81875 - type: ndcg_at_10 value: 42.355 - type: ndcg_at_100 value: 47.41374999999999 - type: ndcg_at_1000 value: 49.5805 - type: ndcg_at_3 value: 37.52825 - type: ndcg_at_5 value: 39.83266666666667 - type: precision_at_1 value: 32.81875 - type: precision_at_10 value: 7.382416666666666 - type: precision_at_100 value: 1.1640833333333334 - type: precision_at_1000 value: 0.15383333333333335 - type: precision_at_3 value: 17.134166666666665 - type: precision_at_5 value: 12.174833333333336 - type: recall_at_1 value: 27.707749999999997 - type: recall_at_10 value: 53.945 - type: recall_at_100 value: 76.191 - type: recall_at_1000 value: 91.101 - type: recall_at_3 value: 40.39083333333334 - type: recall_at_5 value: 46.40083333333333 - type: map_at_1 value: 26.482 - type: map_at_10 value: 33.201 - type: map_at_100 value: 34.107 - type: map_at_1000 value: 34.197 - type: map_at_3 value: 31.174000000000003 - type: map_at_5 value: 32.279 - type: mrr_at_1 value: 29.908 - type: mrr_at_10 value: 36.235 - type: mrr_at_100 value: 37.04 - type: mrr_at_1000 value: 37.105 - type: mrr_at_3 value: 34.355999999999995 - type: mrr_at_5 value: 35.382999999999996 - type: ndcg_at_1 value: 29.908 - type: ndcg_at_10 value: 37.325 - type: ndcg_at_100 value: 41.795 - type: ndcg_at_1000 value: 44.105 - type: ndcg_at_3 value: 33.555 - type: ndcg_at_5 value: 35.266999999999996 - type: precision_at_1 value: 29.908 - type: precision_at_10 value: 5.721 - type: precision_at_100 value: 0.8630000000000001 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 14.008000000000001 - type: precision_at_5 value: 9.754999999999999 - type: recall_at_1 value: 26.482 - type: recall_at_10 value: 47.072 - type: recall_at_100 value: 67.27 - type: recall_at_1000 value: 84.371 - type: recall_at_3 value: 36.65 - type: recall_at_5 value: 40.774 - type: map_at_1 value: 18.815 - type: map_at_10 value: 26.369999999999997 - type: map_at_100 value: 27.458 - type: map_at_1000 value: 27.588 - type: map_at_3 value: 23.990000000000002 - type: map_at_5 value: 25.345000000000002 - type: mrr_at_1 value: 22.953000000000003 - type: mrr_at_10 value: 30.342999999999996 - type: mrr_at_100 value: 31.241000000000003 - type: mrr_at_1000 value: 31.319000000000003 - type: mrr_at_3 value: 28.16 - type: mrr_at_5 value: 29.406 - type: ndcg_at_1 value: 22.953000000000003 - type: ndcg_at_10 value: 31.151 - type: ndcg_at_100 value: 36.309000000000005 - type: ndcg_at_1000 value: 39.227000000000004 - type: ndcg_at_3 value: 26.921 - type: ndcg_at_5 value: 28.938000000000002 - type: precision_at_1 value: 22.953000000000003 - type: precision_at_10 value: 5.602 - type: precision_at_100 value: 0.9530000000000001 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 12.606 - type: precision_at_5 value: 9.119 - type: recall_at_1 value: 18.815 - type: recall_at_10 value: 41.574 - type: recall_at_100 value: 64.84400000000001 - type: recall_at_1000 value: 85.406 - type: recall_at_3 value: 29.694 - type: recall_at_5 value: 34.935 - type: map_at_1 value: 27.840999999999998 - type: map_at_10 value: 36.797999999999995 - type: map_at_100 value: 37.993 - type: map_at_1000 value: 38.086999999999996 - type: map_at_3 value: 34.050999999999995 - type: map_at_5 value: 35.379 - type: mrr_at_1 value: 32.649 - type: mrr_at_10 value: 41.025 - type: mrr_at_100 value: 41.878 - type: mrr_at_1000 value: 41.929 - type: mrr_at_3 value: 38.573 - type: mrr_at_5 value: 39.715 - type: ndcg_at_1 value: 32.649 - type: ndcg_at_10 value: 42.142 - type: ndcg_at_100 value: 47.558 - type: ndcg_at_1000 value: 49.643 - type: ndcg_at_3 value: 37.12 - type: ndcg_at_5 value: 38.983000000000004 - type: precision_at_1 value: 32.649 - type: precision_at_10 value: 7.08 - type: precision_at_100 value: 1.1039999999999999 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 16.698 - type: precision_at_5 value: 11.511000000000001 - type: recall_at_1 value: 27.840999999999998 - type: recall_at_10 value: 54.245 - type: recall_at_100 value: 77.947 - type: recall_at_1000 value: 92.36999999999999 - type: recall_at_3 value: 40.146 - type: recall_at_5 value: 44.951 - type: map_at_1 value: 26.529000000000003 - type: map_at_10 value: 35.010000000000005 - type: map_at_100 value: 36.647 - type: map_at_1000 value: 36.857 - type: map_at_3 value: 31.968000000000004 - type: map_at_5 value: 33.554 - type: mrr_at_1 value: 31.818 - type: mrr_at_10 value: 39.550999999999995 - type: mrr_at_100 value: 40.54 - type: mrr_at_1000 value: 40.596 - type: mrr_at_3 value: 36.726 - type: mrr_at_5 value: 38.416 - type: ndcg_at_1 value: 31.818 - type: ndcg_at_10 value: 40.675 - type: ndcg_at_100 value: 46.548 - type: ndcg_at_1000 value: 49.126 - type: ndcg_at_3 value: 35.829 - type: ndcg_at_5 value: 38.0 - type: precision_at_1 value: 31.818 - type: precision_at_10 value: 7.826 - type: precision_at_100 value: 1.538 - type: precision_at_1000 value: 0.24 - type: precision_at_3 value: 16.601 - type: precision_at_5 value: 12.095 - type: recall_at_1 value: 26.529000000000003 - type: recall_at_10 value: 51.03 - type: recall_at_100 value: 77.556 - type: recall_at_1000 value: 93.804 - type: recall_at_3 value: 36.986000000000004 - type: recall_at_5 value: 43.096000000000004 - type: map_at_1 value: 23.480999999999998 - type: map_at_10 value: 30.817 - type: map_at_100 value: 31.838 - type: map_at_1000 value: 31.932 - type: map_at_3 value: 28.011999999999997 - type: map_at_5 value: 29.668 - type: mrr_at_1 value: 25.323 - type: mrr_at_10 value: 33.072 - type: mrr_at_100 value: 33.926 - type: mrr_at_1000 value: 33.993 - type: mrr_at_3 value: 30.436999999999998 - type: mrr_at_5 value: 32.092 - type: ndcg_at_1 value: 25.323 - type: ndcg_at_10 value: 35.514 - type: ndcg_at_100 value: 40.489000000000004 - type: ndcg_at_1000 value: 42.908 - type: ndcg_at_3 value: 30.092000000000002 - type: ndcg_at_5 value: 32.989000000000004 - type: precision_at_1 value: 25.323 - type: precision_at_10 value: 5.545 - type: precision_at_100 value: 0.861 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 12.446 - type: precision_at_5 value: 9.131 - type: recall_at_1 value: 23.480999999999998 - type: recall_at_10 value: 47.825 - type: recall_at_100 value: 70.652 - type: recall_at_1000 value: 88.612 - type: recall_at_3 value: 33.537 - type: recall_at_5 value: 40.542 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 13.333999999999998 - type: map_at_10 value: 22.524 - type: map_at_100 value: 24.506 - type: map_at_1000 value: 24.715 - type: map_at_3 value: 19.022 - type: map_at_5 value: 20.693 - type: mrr_at_1 value: 29.186 - type: mrr_at_10 value: 41.22 - type: mrr_at_100 value: 42.16 - type: mrr_at_1000 value: 42.192 - type: mrr_at_3 value: 38.013000000000005 - type: mrr_at_5 value: 39.704 - type: ndcg_at_1 value: 29.186 - type: ndcg_at_10 value: 31.167 - type: ndcg_at_100 value: 38.879000000000005 - type: ndcg_at_1000 value: 42.376000000000005 - type: ndcg_at_3 value: 25.817 - type: ndcg_at_5 value: 27.377000000000002 - type: precision_at_1 value: 29.186 - type: precision_at_10 value: 9.693999999999999 - type: precision_at_100 value: 1.8030000000000002 - type: precision_at_1000 value: 0.246 - type: precision_at_3 value: 19.11 - type: precision_at_5 value: 14.344999999999999 - type: recall_at_1 value: 13.333999999999998 - type: recall_at_10 value: 37.092000000000006 - type: recall_at_100 value: 63.651 - type: recall_at_1000 value: 83.05 - type: recall_at_3 value: 23.74 - type: recall_at_5 value: 28.655 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 9.151 - type: map_at_10 value: 19.653000000000002 - type: map_at_100 value: 28.053 - type: map_at_1000 value: 29.709000000000003 - type: map_at_3 value: 14.191 - type: map_at_5 value: 16.456 - type: mrr_at_1 value: 66.25 - type: mrr_at_10 value: 74.4 - type: mrr_at_100 value: 74.715 - type: mrr_at_1000 value: 74.726 - type: mrr_at_3 value: 72.417 - type: mrr_at_5 value: 73.667 - type: ndcg_at_1 value: 54.25 - type: ndcg_at_10 value: 40.77 - type: ndcg_at_100 value: 46.359 - type: ndcg_at_1000 value: 54.193000000000005 - type: ndcg_at_3 value: 44.832 - type: ndcg_at_5 value: 42.63 - type: precision_at_1 value: 66.25 - type: precision_at_10 value: 32.175 - type: precision_at_100 value: 10.668 - type: precision_at_1000 value: 2.067 - type: precision_at_3 value: 47.667 - type: precision_at_5 value: 41.3 - type: recall_at_1 value: 9.151 - type: recall_at_10 value: 25.003999999999998 - type: recall_at_100 value: 52.976 - type: recall_at_1000 value: 78.315 - type: recall_at_3 value: 15.487 - type: recall_at_5 value: 18.999 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 51.89999999999999 - type: f1 value: 46.47777925067403 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 73.706 - type: map_at_10 value: 82.423 - type: map_at_100 value: 82.67999999999999 - type: map_at_1000 value: 82.694 - type: map_at_3 value: 81.328 - type: map_at_5 value: 82.001 - type: mrr_at_1 value: 79.613 - type: mrr_at_10 value: 87.07000000000001 - type: mrr_at_100 value: 87.169 - type: mrr_at_1000 value: 87.17 - type: mrr_at_3 value: 86.404 - type: mrr_at_5 value: 86.856 - type: ndcg_at_1 value: 79.613 - type: ndcg_at_10 value: 86.289 - type: ndcg_at_100 value: 87.201 - type: ndcg_at_1000 value: 87.428 - type: ndcg_at_3 value: 84.625 - type: ndcg_at_5 value: 85.53699999999999 - type: precision_at_1 value: 79.613 - type: precision_at_10 value: 10.399 - type: precision_at_100 value: 1.1079999999999999 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 32.473 - type: precision_at_5 value: 20.132 - type: recall_at_1 value: 73.706 - type: recall_at_10 value: 93.559 - type: recall_at_100 value: 97.188 - type: recall_at_1000 value: 98.555 - type: recall_at_3 value: 88.98700000000001 - type: recall_at_5 value: 91.373 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 19.841 - type: map_at_10 value: 32.643 - type: map_at_100 value: 34.575 - type: map_at_1000 value: 34.736 - type: map_at_3 value: 28.317999999999998 - type: map_at_5 value: 30.964000000000002 - type: mrr_at_1 value: 39.660000000000004 - type: mrr_at_10 value: 48.620000000000005 - type: mrr_at_100 value: 49.384 - type: mrr_at_1000 value: 49.415 - type: mrr_at_3 value: 45.988 - type: mrr_at_5 value: 47.361 - type: ndcg_at_1 value: 39.660000000000004 - type: ndcg_at_10 value: 40.646 - type: ndcg_at_100 value: 47.657 - type: ndcg_at_1000 value: 50.428 - type: ndcg_at_3 value: 36.689 - type: ndcg_at_5 value: 38.211 - type: precision_at_1 value: 39.660000000000004 - type: precision_at_10 value: 11.235000000000001 - type: precision_at_100 value: 1.8530000000000002 - type: precision_at_1000 value: 0.23600000000000002 - type: precision_at_3 value: 24.587999999999997 - type: precision_at_5 value: 18.395 - type: recall_at_1 value: 19.841 - type: recall_at_10 value: 48.135 - type: recall_at_100 value: 74.224 - type: recall_at_1000 value: 90.826 - type: recall_at_3 value: 33.536 - type: recall_at_5 value: 40.311 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 40.358 - type: map_at_10 value: 64.497 - type: map_at_100 value: 65.362 - type: map_at_1000 value: 65.41900000000001 - type: map_at_3 value: 61.06700000000001 - type: map_at_5 value: 63.317 - type: mrr_at_1 value: 80.716 - type: mrr_at_10 value: 86.10799999999999 - type: mrr_at_100 value: 86.265 - type: mrr_at_1000 value: 86.27 - type: mrr_at_3 value: 85.271 - type: mrr_at_5 value: 85.82499999999999 - type: ndcg_at_1 value: 80.716 - type: ndcg_at_10 value: 72.597 - type: ndcg_at_100 value: 75.549 - type: ndcg_at_1000 value: 76.61 - type: ndcg_at_3 value: 67.874 - type: ndcg_at_5 value: 70.655 - type: precision_at_1 value: 80.716 - type: precision_at_10 value: 15.148 - type: precision_at_100 value: 1.745 - type: precision_at_1000 value: 0.188 - type: precision_at_3 value: 43.597 - type: precision_at_5 value: 28.351 - type: recall_at_1 value: 40.358 - type: recall_at_10 value: 75.739 - type: recall_at_100 value: 87.259 - type: recall_at_1000 value: 94.234 - type: recall_at_3 value: 65.39500000000001 - type: recall_at_5 value: 70.878 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 90.80799999999998 - type: ap value: 86.81350378180757 - type: f1 value: 90.79901248314215 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 22.096 - type: map_at_10 value: 34.384 - type: map_at_100 value: 35.541 - type: map_at_1000 value: 35.589999999999996 - type: map_at_3 value: 30.496000000000002 - type: map_at_5 value: 32.718 - type: mrr_at_1 value: 22.750999999999998 - type: mrr_at_10 value: 35.024 - type: mrr_at_100 value: 36.125 - type: mrr_at_1000 value: 36.168 - type: mrr_at_3 value: 31.225 - type: mrr_at_5 value: 33.416000000000004 - type: ndcg_at_1 value: 22.750999999999998 - type: ndcg_at_10 value: 41.351 - type: ndcg_at_100 value: 46.92 - type: ndcg_at_1000 value: 48.111 - type: ndcg_at_3 value: 33.439 - type: ndcg_at_5 value: 37.407000000000004 - type: precision_at_1 value: 22.750999999999998 - type: precision_at_10 value: 6.564 - type: precision_at_100 value: 0.935 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.288 - type: precision_at_5 value: 10.581999999999999 - type: recall_at_1 value: 22.096 - type: recall_at_10 value: 62.771 - type: recall_at_100 value: 88.529 - type: recall_at_1000 value: 97.55 - type: recall_at_3 value: 41.245 - type: recall_at_5 value: 50.788 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 94.16780665754673 - type: f1 value: 93.96331194859894 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 76.90606475148198 - type: f1 value: 58.58344986604187 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 76.14660390047075 - type: f1 value: 74.31533923533614 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 80.16139878950908 - type: f1 value: 80.18532656824924 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 32.949880906135085 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 31.56300351524862 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.196521894371315 - type: mrr value: 32.22644231694389 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.783 - type: map_at_10 value: 14.549000000000001 - type: map_at_100 value: 18.433 - type: map_at_1000 value: 19.949 - type: map_at_3 value: 10.936 - type: map_at_5 value: 12.514 - type: mrr_at_1 value: 47.368 - type: mrr_at_10 value: 56.42 - type: mrr_at_100 value: 56.908 - type: mrr_at_1000 value: 56.95 - type: mrr_at_3 value: 54.283 - type: mrr_at_5 value: 55.568 - type: ndcg_at_1 value: 45.666000000000004 - type: ndcg_at_10 value: 37.389 - type: ndcg_at_100 value: 34.253 - type: ndcg_at_1000 value: 43.059999999999995 - type: ndcg_at_3 value: 42.725 - type: ndcg_at_5 value: 40.193 - type: precision_at_1 value: 47.368 - type: precision_at_10 value: 27.988000000000003 - type: precision_at_100 value: 8.672 - type: precision_at_1000 value: 2.164 - type: precision_at_3 value: 40.248 - type: precision_at_5 value: 34.737 - type: recall_at_1 value: 6.783 - type: recall_at_10 value: 17.838 - type: recall_at_100 value: 33.672000000000004 - type: recall_at_1000 value: 66.166 - type: recall_at_3 value: 11.849 - type: recall_at_5 value: 14.205000000000002 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 31.698999999999998 - type: map_at_10 value: 46.556 - type: map_at_100 value: 47.652 - type: map_at_1000 value: 47.68 - type: map_at_3 value: 42.492000000000004 - type: map_at_5 value: 44.763999999999996 - type: mrr_at_1 value: 35.747 - type: mrr_at_10 value: 49.242999999999995 - type: mrr_at_100 value: 50.052 - type: mrr_at_1000 value: 50.068 - type: mrr_at_3 value: 45.867000000000004 - type: mrr_at_5 value: 47.778999999999996 - type: ndcg_at_1 value: 35.717999999999996 - type: ndcg_at_10 value: 54.14600000000001 - type: ndcg_at_100 value: 58.672999999999995 - type: ndcg_at_1000 value: 59.279 - type: ndcg_at_3 value: 46.407 - type: ndcg_at_5 value: 50.181 - type: precision_at_1 value: 35.717999999999996 - type: precision_at_10 value: 8.844000000000001 - type: precision_at_100 value: 1.139 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 20.993000000000002 - type: precision_at_5 value: 14.791000000000002 - type: recall_at_1 value: 31.698999999999998 - type: recall_at_10 value: 74.693 - type: recall_at_100 value: 94.15299999999999 - type: recall_at_1000 value: 98.585 - type: recall_at_3 value: 54.388999999999996 - type: recall_at_5 value: 63.08200000000001 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 71.283 - type: map_at_10 value: 85.24000000000001 - type: map_at_100 value: 85.882 - type: map_at_1000 value: 85.897 - type: map_at_3 value: 82.326 - type: map_at_5 value: 84.177 - type: mrr_at_1 value: 82.21000000000001 - type: mrr_at_10 value: 88.228 - type: mrr_at_100 value: 88.32 - type: mrr_at_1000 value: 88.32 - type: mrr_at_3 value: 87.323 - type: mrr_at_5 value: 87.94800000000001 - type: ndcg_at_1 value: 82.17999999999999 - type: ndcg_at_10 value: 88.9 - type: ndcg_at_100 value: 90.079 - type: ndcg_at_1000 value: 90.158 - type: ndcg_at_3 value: 86.18299999999999 - type: ndcg_at_5 value: 87.71799999999999 - type: precision_at_1 value: 82.17999999999999 - type: precision_at_10 value: 13.464 - type: precision_at_100 value: 1.533 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.693 - type: precision_at_5 value: 24.792 - type: recall_at_1 value: 71.283 - type: recall_at_10 value: 95.742 - type: recall_at_100 value: 99.67200000000001 - type: recall_at_1000 value: 99.981 - type: recall_at_3 value: 87.888 - type: recall_at_5 value: 92.24 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 56.24267063669042 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 62.88056988932578 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.903 - type: map_at_10 value: 13.202 - type: map_at_100 value: 15.5 - type: map_at_1000 value: 15.870999999999999 - type: map_at_3 value: 9.407 - type: map_at_5 value: 11.238 - type: mrr_at_1 value: 24.2 - type: mrr_at_10 value: 35.867 - type: mrr_at_100 value: 37.001 - type: mrr_at_1000 value: 37.043 - type: mrr_at_3 value: 32.5 - type: mrr_at_5 value: 34.35 - type: ndcg_at_1 value: 24.2 - type: ndcg_at_10 value: 21.731 - type: ndcg_at_100 value: 30.7 - type: ndcg_at_1000 value: 36.618 - type: ndcg_at_3 value: 20.72 - type: ndcg_at_5 value: 17.954 - type: precision_at_1 value: 24.2 - type: precision_at_10 value: 11.33 - type: precision_at_100 value: 2.4410000000000003 - type: precision_at_1000 value: 0.386 - type: precision_at_3 value: 19.667 - type: precision_at_5 value: 15.86 - type: recall_at_1 value: 4.903 - type: recall_at_10 value: 22.962 - type: recall_at_100 value: 49.563 - type: recall_at_1000 value: 78.238 - type: recall_at_3 value: 11.953 - type: recall_at_5 value: 16.067999999999998 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 84.12694254604078 - type: cos_sim_spearman value: 80.30141815181918 - type: euclidean_pearson value: 81.34015449877128 - type: euclidean_spearman value: 80.13984197010849 - type: manhattan_pearson value: 81.31767068124086 - type: manhattan_spearman value: 80.11720513114103 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.13112984010417 - type: cos_sim_spearman value: 78.03063573402875 - type: euclidean_pearson value: 83.51928418844804 - type: euclidean_spearman value: 78.4045235411144 - type: manhattan_pearson value: 83.49981637388689 - type: manhattan_spearman value: 78.4042575139372 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 82.50327987379504 - type: cos_sim_spearman value: 84.18556767756205 - type: euclidean_pearson value: 82.69684424327679 - type: euclidean_spearman value: 83.5368106038335 - type: manhattan_pearson value: 82.57967581007374 - type: manhattan_spearman value: 83.43009053133697 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 82.50756863007814 - type: cos_sim_spearman value: 82.27204331279108 - type: euclidean_pearson value: 81.39535251429741 - type: euclidean_spearman value: 81.84386626336239 - type: manhattan_pearson value: 81.34281737280695 - type: manhattan_spearman value: 81.81149375673166 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.8727714856726 - type: cos_sim_spearman value: 87.95738287792312 - type: euclidean_pearson value: 86.62920602795887 - type: euclidean_spearman value: 87.05207355381243 - type: manhattan_pearson value: 86.53587918472225 - type: manhattan_spearman value: 86.95382961029586 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 83.52240359769479 - type: cos_sim_spearman value: 85.47685776238286 - type: euclidean_pearson value: 84.25815333483058 - type: euclidean_spearman value: 85.27415639683198 - type: manhattan_pearson value: 84.29127757025637 - type: manhattan_spearman value: 85.30226224917351 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 86.42501708915708 - type: cos_sim_spearman value: 86.42276182795041 - type: euclidean_pearson value: 86.5408207354761 - type: euclidean_spearman value: 85.46096321750838 - type: manhattan_pearson value: 86.54177303026881 - type: manhattan_spearman value: 85.50313151916117 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 64.86521089250766 - type: cos_sim_spearman value: 65.94868540323003 - type: euclidean_pearson value: 67.16569626533084 - type: euclidean_spearman value: 66.37667004134917 - type: manhattan_pearson value: 67.1482365102333 - type: manhattan_spearman value: 66.53240122580029 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 84.64746265365318 - type: cos_sim_spearman value: 86.41888825906786 - type: euclidean_pearson value: 85.27453642725811 - type: euclidean_spearman value: 85.94095796602544 - type: manhattan_pearson value: 85.28643660505334 - type: manhattan_spearman value: 85.95028003260744 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.48903153618527 - type: mrr value: 96.41081503826601 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 58.594 - type: map_at_10 value: 69.296 - type: map_at_100 value: 69.782 - type: map_at_1000 value: 69.795 - type: map_at_3 value: 66.23 - type: map_at_5 value: 68.293 - type: mrr_at_1 value: 61.667 - type: mrr_at_10 value: 70.339 - type: mrr_at_100 value: 70.708 - type: mrr_at_1000 value: 70.722 - type: mrr_at_3 value: 68.0 - type: mrr_at_5 value: 69.56700000000001 - type: ndcg_at_1 value: 61.667 - type: ndcg_at_10 value: 74.039 - type: ndcg_at_100 value: 76.103 - type: ndcg_at_1000 value: 76.47800000000001 - type: ndcg_at_3 value: 68.967 - type: ndcg_at_5 value: 71.96900000000001 - type: precision_at_1 value: 61.667 - type: precision_at_10 value: 9.866999999999999 - type: precision_at_100 value: 1.097 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 27.111 - type: precision_at_5 value: 18.2 - type: recall_at_1 value: 58.594 - type: recall_at_10 value: 87.422 - type: recall_at_100 value: 96.667 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 74.217 - type: recall_at_5 value: 81.539 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.85049504950496 - type: cos_sim_ap value: 96.33111544137081 - type: cos_sim_f1 value: 92.35443037974684 - type: cos_sim_precision value: 93.53846153846153 - type: cos_sim_recall value: 91.2 - type: dot_accuracy value: 99.82376237623762 - type: dot_ap value: 95.38082527310888 - type: dot_f1 value: 90.90909090909092 - type: dot_precision value: 92.90187891440502 - type: dot_recall value: 89.0 - type: euclidean_accuracy value: 99.84851485148515 - type: euclidean_ap value: 96.32316003996347 - type: euclidean_f1 value: 92.2071392659628 - type: euclidean_precision value: 92.71991911021233 - type: euclidean_recall value: 91.7 - type: manhattan_accuracy value: 99.84851485148515 - type: manhattan_ap value: 96.3655668249217 - type: manhattan_f1 value: 92.18356026222895 - type: manhattan_precision value: 92.98067141403867 - type: manhattan_recall value: 91.4 - type: max_accuracy value: 99.85049504950496 - type: max_ap value: 96.3655668249217 - type: max_f1 value: 92.35443037974684 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 65.94861371629051 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 35.009430451385 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 54.61164066427969 - type: mrr value: 55.49710603938544 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.622620124907662 - type: cos_sim_spearman value: 31.0678351356163 - type: dot_pearson value: 30.863727693306814 - type: dot_spearman value: 31.230306567021255 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.22 - type: map_at_10 value: 2.011 - type: map_at_100 value: 10.974 - type: map_at_1000 value: 25.819 - type: map_at_3 value: 0.6649999999999999 - type: map_at_5 value: 1.076 - type: mrr_at_1 value: 86.0 - type: mrr_at_10 value: 91.8 - type: mrr_at_100 value: 91.8 - type: mrr_at_1000 value: 91.8 - type: mrr_at_3 value: 91.0 - type: mrr_at_5 value: 91.8 - type: ndcg_at_1 value: 82.0 - type: ndcg_at_10 value: 78.07300000000001 - type: ndcg_at_100 value: 58.231 - type: ndcg_at_1000 value: 51.153000000000006 - type: ndcg_at_3 value: 81.123 - type: ndcg_at_5 value: 81.059 - type: precision_at_1 value: 86.0 - type: precision_at_10 value: 83.0 - type: precision_at_100 value: 59.38 - type: precision_at_1000 value: 22.55 - type: precision_at_3 value: 87.333 - type: precision_at_5 value: 86.8 - type: recall_at_1 value: 0.22 - type: recall_at_10 value: 2.2079999999999997 - type: recall_at_100 value: 14.069 - type: recall_at_1000 value: 47.678 - type: recall_at_3 value: 0.7040000000000001 - type: recall_at_5 value: 1.161 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.809 - type: map_at_10 value: 10.394 - type: map_at_100 value: 16.598 - type: map_at_1000 value: 18.142 - type: map_at_3 value: 5.572 - type: map_at_5 value: 7.1370000000000005 - type: mrr_at_1 value: 32.653 - type: mrr_at_10 value: 46.564 - type: mrr_at_100 value: 47.469 - type: mrr_at_1000 value: 47.469 - type: mrr_at_3 value: 42.177 - type: mrr_at_5 value: 44.524 - type: ndcg_at_1 value: 30.612000000000002 - type: ndcg_at_10 value: 25.701 - type: ndcg_at_100 value: 37.532 - type: ndcg_at_1000 value: 48.757 - type: ndcg_at_3 value: 28.199999999999996 - type: ndcg_at_5 value: 25.987 - type: precision_at_1 value: 32.653 - type: precision_at_10 value: 23.469 - type: precision_at_100 value: 7.9799999999999995 - type: precision_at_1000 value: 1.5350000000000001 - type: precision_at_3 value: 29.932 - type: precision_at_5 value: 26.122 - type: recall_at_1 value: 2.809 - type: recall_at_10 value: 16.887 - type: recall_at_100 value: 48.67 - type: recall_at_1000 value: 82.89699999999999 - type: recall_at_3 value: 6.521000000000001 - type: recall_at_5 value: 9.609 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.57860000000001 - type: ap value: 13.82629211536393 - type: f1 value: 54.59860966183956 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 59.38030560271647 - type: f1 value: 59.69685552567865 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.4736717043405 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.92853311080646 - type: cos_sim_ap value: 77.67872502591382 - type: cos_sim_f1 value: 70.33941236068895 - type: cos_sim_precision value: 67.63273258645884 - type: cos_sim_recall value: 73.27176781002639 - type: dot_accuracy value: 85.79603027954938 - type: dot_ap value: 73.73786190233379 - type: dot_f1 value: 67.3437901774235 - type: dot_precision value: 65.67201604814443 - type: dot_recall value: 69.10290237467018 - type: euclidean_accuracy value: 86.94045419324074 - type: euclidean_ap value: 77.6687791535167 - type: euclidean_f1 value: 70.47209214023542 - type: euclidean_precision value: 67.7207492094381 - type: euclidean_recall value: 73.45646437994723 - type: manhattan_accuracy value: 86.87488823985218 - type: manhattan_ap value: 77.63373392430728 - type: manhattan_f1 value: 70.40920716112532 - type: manhattan_precision value: 68.31265508684864 - type: manhattan_recall value: 72.63852242744063 - type: max_accuracy value: 86.94045419324074 - type: max_ap value: 77.67872502591382 - type: max_f1 value: 70.47209214023542 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.67155664221679 - type: cos_sim_ap value: 85.64591703003417 - type: cos_sim_f1 value: 77.59531005352656 - type: cos_sim_precision value: 73.60967184801382 - type: cos_sim_recall value: 82.03726516784724 - type: dot_accuracy value: 88.41541506578181 - type: dot_ap value: 84.6482788957769 - type: dot_f1 value: 77.04748541466657 - type: dot_precision value: 74.02440754931176 - type: dot_recall value: 80.3279950723745 - type: euclidean_accuracy value: 88.63080684596576 - type: euclidean_ap value: 85.44570045321562 - type: euclidean_f1 value: 77.28769403336106 - type: euclidean_precision value: 72.90600040958427 - type: euclidean_recall value: 82.22975053895904 - type: manhattan_accuracy value: 88.59393798269105 - type: manhattan_ap value: 85.40271361038187 - type: manhattan_f1 value: 77.17606419344392 - type: manhattan_precision value: 72.4447747078295 - type: manhattan_recall value: 82.5685247921158 - type: max_accuracy value: 88.67155664221679 - type: max_ap value: 85.64591703003417 - type: max_f1 value: 77.59531005352656 --- <h1 align="center">FlagEmbedding</h1> <h4 align="center"> <p> <a href=#model-list>Model List</a> | <a href=#frequently-asked-questions>FAQ</a> | <a href=#usage>Usage</a> | <a href="#evaluation">Evaluation</a> | <a href="#train">Train</a> | <a href="#contact">Contact</a> | <a href="#citation">Citation</a> | <a href="#license">License</a> <p> </h4> For more details please refer to our Github: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding). If you are looking for a model that supports more languages, longer texts, and other retrieval methods, you can try using [bge-m3](https://huggingface.co/BAAI/bge-m3). [English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md) FlagEmbedding focuses on retrieval-augmented LLMs, consisting of the following projects currently: - **Long-Context LLM**: [Activation Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon) - **Fine-tuning of LM** : [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail) - **Dense Retrieval**: [BGE-M3](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3), [LLM Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), [BGE Embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding) - **Reranker Model**: [BGE Reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker) - **Benchmark**: [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) ## News - 1/30/2024: Release **BGE-M3**, a new member to BGE model series! M3 stands for **M**ulti-linguality (100+ languages), **M**ulti-granularities (input length up to 8192), **M**ulti-Functionality (unification of dense, lexical, multi-vec/colbert retrieval). It is the first embedding model which supports all three retrieval methods, achieving new SOTA on multi-lingual (MIRACL) and cross-lingual (MKQA) benchmarks. [Technical Report](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/BGE_M3/BGE_M3.pdf) and [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3). :fire: - 1/9/2024: Release [Activation-Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon), an effective, efficient, compatible, and low-cost (training) method to extend the context length of LLM. [Technical Report](https://arxiv.org/abs/2401.03462) :fire: - 12/24/2023: Release **LLaRA**, a LLaMA-7B based dense retriever, leading to state-of-the-art performances on MS MARCO and BEIR. Model and code will be open-sourced. Please stay tuned. [Technical Report](https://arxiv.org/abs/2312.15503) :fire: - 11/23/2023: Release [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail), a method to maintain general capabilities during fine-tuning by merging multiple language models. [Technical Report](https://arxiv.org/abs/2311.13534) :fire: - 10/12/2023: Release [LLM-Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Technical Report](https://arxiv.org/pdf/2310.07554.pdf) - 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) and [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released - 09/12/2023: New models: - **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than embedding model. We recommend to use/fine-tune them to re-rank top-k documents returned by embedding models. - **update embedding model**: release `bge-*-v1.5` embedding model to alleviate the issue of the similarity distribution, and enhance its retrieval ability without instruction. <details> <summary>More</summary> <!-- ### More --> - 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add script to mine hard negatives and support adding instruction during fine-tuning. - 08/09/2023: BGE Models are integrated into **Langchain**, you can use it like [this](#using-langchain); C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard). - 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗** - 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada: - 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test dataset. </details> ## Model List `bge` is short for `BAAI general embedding`. | Model | Language | | Description | query instruction for retrieval [1] | |:-------------------------------|:--------:| :--------:| :--------:|:--------:| | [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | [Inference](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3#usage) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3) | Multi-Functionality(dense retrieval, sparse retrieval, multi-vector(colbert)), Multi-Linguality, and Multi-Granularity(8192 tokens) | | | [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) | | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | | | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | | | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` | [1\]: If you need to search the relevant passages to a query, we suggest to add the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages. [2\]: Different from embedding model, reranker uses question and document as input and directly output similarity instead of embedding. To balance the accuracy and time cost, cross-encoder is widely used to re-rank top-k documents retrieved by other simple models. For examples, use bge embedding model to retrieve top 100 relevant documents, and then use bge reranker to re-rank the top 100 document to get the final top-3 results. All models have been uploaded to Huggingface Hub, and you can see them at https://huggingface.co/BAAI. If you cannot open the Huggingface Hub, you also can download the models at https://model.baai.ac.cn/models . ## Frequently asked questions <details> <summary>1. How to fine-tune bge embedding model?</summary> <!-- ### How to fine-tune bge embedding model? --> Following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model. Some suggestions: - Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance. - If you pre-train bge on your data, the pre-trained model cannot be directly used to calculate similarity, and it must be fine-tuned with contrastive learning before computing similarity. - If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank top-k results. Hard negatives also are needed to fine-tune reranker. </details> <details> <summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary> <!-- ### The similarity score between two dissimilar sentences is higher than 0.5 --> **Suggest to use bge v1.5, which alleviates the issue of the similarity distribution.** Since we finetune the models by contrastive learning with a temperature of 0.01, the similarity distribution of the current BGE model is about in the interval \[0.6, 1\]. So a similarity score greater than 0.5 does not indicate that the two sentences are similar. For downstream tasks, such as passage retrieval or semantic similarity, **what matters is the relative order of the scores, not the absolute value.** If you need to filter similar sentences based on a similarity threshold, please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9). </details> <details> <summary>3. When does the query instruction need to be used</summary> <!-- ### When does the query instruction need to be used --> For the `bge-*-v1.5`, we improve its retrieval ability when not using instruction. No instruction only has a slight degradation in retrieval performance compared with using instruction. So you can generate embedding without instruction in all cases for convenience. For a retrieval task that uses short queries to find long related documents, it is recommended to add instructions for these short queries. **The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.** In all cases, the documents/passages do not need to add the instruction. </details> ## Usage ### Usage for Embedding Model Here are some examples for using `bge` models with [FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers). #### Using FlagEmbedding ``` pip install -U FlagEmbedding ``` If it doesn't work for you, you can see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more methods to install FlagEmbedding. ```python from FlagEmbedding import FlagModel sentences_1 = ["样例数据-1", "样例数据-2"] sentences_2 = ["样例数据-3", "样例数据-4"] model = FlagModel('BAAI/bge-large-zh-v1.5', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:", use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation embeddings_1 = model.encode(sentences_1) embeddings_2 = model.encode(sentences_2) similarity = embeddings_1 @ embeddings_2.T print(similarity) # for s2p(short query to long passage) retrieval task, suggest to use encode_queries() which will automatically add the instruction to each query # corpus in retrieval task can still use encode() or encode_corpus(), since they don't need instruction queries = ['query_1', 'query_2'] passages = ["样例文档-1", "样例文档-2"] q_embeddings = model.encode_queries(queries) p_embeddings = model.encode(passages) scores = q_embeddings @ p_embeddings.T ``` For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list). By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs. You also can set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable. #### Using Sentence-Transformers You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net): ``` pip install -U sentence-transformers ``` ```python from sentence_transformers import SentenceTransformer sentences_1 = ["样例数据-1", "样例数据-2"] sentences_2 = ["样例数据-3", "样例数据-4"] model = SentenceTransformer('BAAI/bge-large-zh-v1.5') embeddings_1 = model.encode(sentences_1, normalize_embeddings=True) embeddings_2 = model.encode(sentences_2, normalize_embeddings=True) similarity = embeddings_1 @ embeddings_2.T print(similarity) ``` For s2p(short query to long passage) retrieval task, each short query should start with an instruction (instructions see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list)). But the instruction is not needed for passages. ```python from sentence_transformers import SentenceTransformer queries = ['query_1', 'query_2'] passages = ["样例文档-1", "样例文档-2"] instruction = "为这个句子生成表示以用于检索相关文章:" model = SentenceTransformer('BAAI/bge-large-zh-v1.5') q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True) p_embeddings = model.encode(passages, normalize_embeddings=True) scores = q_embeddings @ p_embeddings.T ``` #### Using Langchain You can use `bge` in langchain like this: ```python from langchain.embeddings import HuggingFaceBgeEmbeddings model_name = "BAAI/bge-large-en-v1.5" model_kwargs = {'device': 'cuda'} encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity model = HuggingFaceBgeEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs, query_instruction="为这个句子生成表示以用于检索相关文章:" ) model.query_instruction = "为这个句子生成表示以用于检索相关文章:" ``` #### Using HuggingFace Transformers With the transformers package, you can use the model like this: First, you pass your input through the transformer model, then you select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding. ```python from transformers import AutoTokenizer, AutoModel import torch # Sentences we want sentence embeddings for sentences = ["样例数据-1", "样例数据-2"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5') model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5') model.eval() # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages) # encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = model_output[0][:, 0] # normalize embeddings sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:", sentence_embeddings) ``` #### Usage of the ONNX files ```python from optimum.onnxruntime import ORTModelForFeatureExtraction # type: ignore import torch from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-en-v1.5') model = AutoModel.from_pretrained('BAAI/bge-large-en-v1.5', revision="refs/pr/13") model_ort = ORTModelForFeatureExtraction.from_pretrained('BAAI/bge-large-en-v1.5', revision="refs/pr/13",file_name="onnx/model.onnx") # Sentences we want sentence embeddings for sentences = ["样例数据-1", "样例数据-2"] # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages) # encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt') model_output_ort = model_ort(**encoded_input) # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # model_output and model_output_ort are identical ``` #### Usage via infinity Its also possible to deploy the onnx files with the [infinity_emb](https://github.com/michaelfeil/infinity) pip package. ```python import asyncio from infinity_emb import AsyncEmbeddingEngine, EngineArgs sentences = ["Embed this is sentence via Infinity.", "Paris is in France."] engine = AsyncEmbeddingEngine.from_args( EngineArgs(model_name_or_path = "BAAI/bge-large-en-v1.5", device="cpu", engine="optimum" # or engine="torch" )) async def main(): async with engine: embeddings, usage = await engine.embed(sentences=sentences) asyncio.run(main()) ``` ### Usage for Reranker Different from embedding model, reranker uses question and document as input and directly output similarity instead of embedding. You can get a relevance score by inputting query and passage to the reranker. The reranker is optimized based cross-entropy loss, so the relevance score is not bounded to a specific range. #### Using FlagEmbedding ``` pip install -U FlagEmbedding ``` Get relevance scores (higher scores indicate more relevance): ```python from FlagEmbedding import FlagReranker reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation score = reranker.compute_score(['query', 'passage']) print(score) scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]) print(scores) ``` #### Using Huggingface transformers ```python import torch from transformers import AutoModelForSequenceClassification, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large') model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large') model.eval() pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']] with torch.no_grad(): inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512) scores = model(**inputs, return_dict=True).logits.view(-1, ).float() print(scores) ``` ## Evaluation `baai-general-embedding` models achieve **state-of-the-art performance on both MTEB and C-MTEB leaderboard!** For more details and evaluation tools see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md). - **MTEB**: | Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) | |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 | | [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 | | [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 | | [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 | | [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 | | [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 | | [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 | | [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 | | [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 | | [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 | | [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 | | [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 | | [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 | | [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 | | [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 | - **C-MTEB**: We create the benchmark C-MTEB for Chinese text embedding which consists of 31 datasets from 6 tasks. Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction. | Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering | |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 | | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 | | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 | | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 | | [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 | | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 | | [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 | | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 | | [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 | | [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 | | [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 | | [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 | | [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 | | [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 | | [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 | | [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 | - **Reranking**: See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script. | Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg | |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 | | multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 | | multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 | | multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 | | m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 | | m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 | | bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 | | bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 | | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 | | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 | \* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks ## Train ### BAAI Embedding We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and train them on large-scale pairs data using contrastive learning. **You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).** We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain). Note that the goal of pre-training is to reconstruct the text, and the pre-trained model cannot be used for similarity calculation directly, it needs to be fine-tuned. More training details for bge see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md). ### BGE Reranker Cross-encoder will perform full-attention over the input pair, which is more accurate than embedding model (i.e., bi-encoder) but more time-consuming than embedding model. Therefore, it can be used to re-rank the top-k documents returned by embedding model. We train the cross-encoder on a multilingual pair data, The data format is the same as embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker). More details please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker) ## Contact If you have any question or suggestion related to this project, feel free to open an issue or pull request. You also can email Shitao Xiao([email protected]) and Zheng Liu([email protected]). ## Citation If you find this repository useful, please consider giving a star :star: and citation ``` @misc{bge_embedding, title={C-Pack: Packaged Resources To Advance General Chinese Embedding}, author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff}, year={2023}, eprint={2309.07597}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## License FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
[ "SEMANTIC_SIMILARITY", "SUMMARIZATION" ]
[ "BEAR", "BIOSSES", "SCIFACT" ]
Non_BioNLP
NoaiGPT/777
NoaiGPT
text2text-generation
[ "transformers", "pytorch", "t5", "text2text-generation", "license:openrail", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
1,719
1,719
108
0
--- license: openrail inference: parameters: num_beams: 3 num_beam_groups: 3 num_return_sequences: 1 repetition_penalty: 3 diversity_penalty: 3.01 no_repeat_ngram_size: 2 temperature: 0.8 max_length: 64 widget: - text: 'paraphraser: Software engineering is the discipline of designing, developing, testing, and maintaining software applications. It involves using programming languages, algorithms, and tools to create reliable and efficient software solutions. Key practices include requirements analysis, system architecture, code implementation, and quality assurance, ensuring software meets user needs and performs optimally.' example_title: AWS course - text: 'paraphraser: In healthcare, Generative AI can help generate synthetic medical data to train machine learning models, develop new drug candidates, and design clinical trials.' example_title: Generative AI - text: 'paraphraser: By leveraging prior model training through transfer learning, fine-tuning can reduce the amount of expensive computing power and labeled data needed to obtain large models tailored to niche use cases and business needs.' example_title: Fine Tuning --- # Text Rewriter Paraphraser This repository contains a fine-tuned text-rewriting model based on the T5-Base with 223M parameters. ## Key Features: * **Fine-tuned on t5-base:** Leverages the power of a pre-trained text-to-text transfer model for effective paraphrasing. * **Large Dataset (430k examples):** Trained on a comprehensive dataset combining three open-source sources and cleaned using various techniques for optimal performance. * **High Quality Paraphrases:** Generates paraphrases that significantly alter sentence structure while maintaining accuracy and factual correctness. * **Non-AI Detectable:** Aims to produce paraphrases that appear natural and indistinguishable from human-written text. **Model Performance:** * Train Loss: 1.0645 * Validation Loss: 0.8761 ## Getting Started: T5 model expects a task related prefix: since it is a paraphrasing task, we will add a prefix "paraphraser: " ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM device = "cuda" tokenizer = AutoTokenizer.from_pretrained("NoaiGPT/777", token='your_token') model = AutoModelForSeq2SeqLM.from_pretrained("NoaiGPT/777", token='your_token').to(device) def generate_title(text): input_ids = tokenizer(f'paraphraser: {text}', return_tensors="pt", padding="longest", truncation=True, max_length=64).input_ids.to(device) outputs = model.generate( input_ids, num_beams=4, num_beam_groups=4, num_return_sequences=4, repetition_penalty=10.0, diversity_penalty=3.0, no_repeat_ngram_size=2, temperature=0.8, max_length=64 ) return tokenizer.batch_decode(outputs, skip_special_tokens=True) text = 'By leveraging prior model training through transfer learning, fine-tuning can reduce the amount of expensive computing power and labeled data needed to obtain large models tailored to niche use cases and business needs.' generate_title(text) ``` ### Output: ``` ['The fine-tuning can reduce the amount of expensive computing power and labeled data required to obtain large models adapted for niche use cases and business needs by using prior model training through transfer learning.', 'fine-tuning, by utilizing prior model training through transfer learning, can reduce the amount of expensive computing power and labeled data required to obtain large models tailored for niche use cases and business needs.', 'Fine-tunering by using prior model training through transfer learning can reduce the amount of expensive computing power and labeled data required to obtain large models adapted for niche use cases and business needs.', 'Using transfer learning to use prior model training, fine-tuning can reduce the amount of expensive computing power and labeled data required for large models that are suitable in niche usage cases or businesses.'] ```
[ "PARAPHRASING" ]
[ "MEDICAL DATA" ]
Non_BioNLP
bhavnicksm/brown-fairy-base-v0
bhavnicksm
null
[ "model2vec", "safetensors", "embeddings", "static-embeddings", "sentence-transformers", "mteb", "en", "license:mit", "model-index", "region:us" ]
1,738
1,738
23
1
--- base_model: baai/bge-base-en-v1.5 language: - en library_name: model2vec license: mit tags: - embeddings - static-embeddings - sentence-transformers - mteb model-index: - name: bhavnicksm/brown-fairy-base-v0 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 69.52239999999999 - type: f1 value: 63.4127 - type: f1_weighted value: 72.48599999999999 - type: ap value: 31.8446 - type: ap_weighted value: 31.8446 - type: main_score value: 69.52239999999999 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification (default) type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 68.709 - type: f1 value: 68.2583 - type: f1_weighted value: 68.2583 - type: ap value: 63.728899999999996 - type: ap_weighted value: 63.728899999999996 - type: main_score value: 68.709 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 34.014 - type: f1 value: 33.4588 - type: f1_weighted value: 33.4588 - type: main_score value: 34.014 - task: type: Retrieval dataset: name: MTEB ArguAna (default) type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: ndcg_at_1 value: 20.341 - type: ndcg_at_3 value: 30.547 - type: ndcg_at_5 value: 34.963 - type: ndcg_at_10 value: 39.805 - type: ndcg_at_20 value: 42.397 - type: ndcg_at_100 value: 45.216 - type: ndcg_at_1000 value: 46.339999999999996 - type: map_at_1 value: 20.341 - type: map_at_3 value: 27.962999999999997 - type: map_at_5 value: 30.409999999999997 - type: map_at_10 value: 32.4 - type: map_at_20 value: 33.113 - type: map_at_100 value: 33.512 - type: map_at_1000 value: 33.556000000000004 - type: recall_at_1 value: 20.341 - type: recall_at_3 value: 38.051 - type: recall_at_5 value: 48.791000000000004 - type: recall_at_10 value: 63.798 - type: recall_at_20 value: 74.03999999999999 - type: recall_at_100 value: 89.118 - type: recall_at_1000 value: 97.866 - type: precision_at_1 value: 20.341 - type: precision_at_3 value: 12.684000000000001 - type: precision_at_5 value: 9.758 - type: precision_at_10 value: 6.38 - type: precision_at_20 value: 3.702 - type: precision_at_100 value: 0.8909999999999999 - type: precision_at_1000 value: 0.098 - type: mrr_at_1 value: 20.6259 - type: mrr_at_3 value: 28.058300000000003 - type: mrr_at_5 value: 30.4979 - type: mrr_at_10 value: 32.5131 - type: mrr_at_20 value: 33.222699999999996 - type: mrr_at_100 value: 33.6243 - type: mrr_at_1000 value: 33.6687 - type: nauc_ndcg_at_1_max value: -6.208 - type: nauc_ndcg_at_1_std value: 0.6887 - type: nauc_ndcg_at_1_diff1 value: 5.5123 - type: nauc_ndcg_at_3_max value: -1.8608 - type: nauc_ndcg_at_3_std value: 3.7832999999999997 - type: nauc_ndcg_at_3_diff1 value: 7.5778 - type: nauc_ndcg_at_5_max value: 0.0929 - type: nauc_ndcg_at_5_std value: 5.8453 - type: nauc_ndcg_at_5_diff1 value: 9.316 - type: nauc_ndcg_at_10_max value: 0.557 - type: nauc_ndcg_at_10_std value: 5.8692 - type: nauc_ndcg_at_10_diff1 value: 8.3828 - type: nauc_ndcg_at_20_max value: 1.567 - type: nauc_ndcg_at_20_std value: 8.2355 - type: nauc_ndcg_at_20_diff1 value: 9.1907 - type: nauc_ndcg_at_100_max value: 1.0833000000000002 - type: nauc_ndcg_at_100_std value: 8.6248 - type: nauc_ndcg_at_100_diff1 value: 9.0073 - type: nauc_ndcg_at_1000_max value: -0.166 - type: nauc_ndcg_at_1000_std value: 7.394100000000001 - type: nauc_ndcg_at_1000_diff1 value: 8.1955 - type: nauc_map_at_1_max value: -6.208 - type: nauc_map_at_1_std value: 0.6887 - type: nauc_map_at_1_diff1 value: 5.5123 - type: nauc_map_at_3_max value: -3.0332999999999997 - type: nauc_map_at_3_std value: 2.9010000000000002 - type: nauc_map_at_3_diff1 value: 6.8088 - type: nauc_map_at_5_max value: -1.9215 - type: nauc_map_at_5_std value: 4.023000000000001 - type: nauc_map_at_5_diff1 value: 7.8248999999999995 - type: nauc_map_at_10_max value: -1.8037 - type: nauc_map_at_10_std value: 3.9838 - type: nauc_map_at_10_diff1 value: 7.3617 - type: nauc_map_at_20_max value: -1.5614 - type: nauc_map_at_20_std value: 4.6065000000000005 - type: nauc_map_at_20_diff1 value: 7.5846 - type: nauc_map_at_100_max value: -1.6330999999999998 - type: nauc_map_at_100_std value: 4.693 - type: nauc_map_at_100_diff1 value: 7.5309 - type: nauc_map_at_1000_max value: -1.6847999999999999 - type: nauc_map_at_1000_std value: 4.6508 - type: nauc_map_at_1000_diff1 value: 7.5036000000000005 - type: nauc_recall_at_1_max value: -6.208 - type: nauc_recall_at_1_std value: 0.6887 - type: nauc_recall_at_1_diff1 value: 5.5123 - type: nauc_recall_at_3_max value: 1.2662 - type: nauc_recall_at_3_std value: 6.1506 - type: nauc_recall_at_3_diff1 value: 9.6919 - type: nauc_recall_at_5_max value: 5.7511 - type: nauc_recall_at_5_std value: 11.0652 - type: nauc_recall_at_5_diff1 value: 13.5713 - type: nauc_recall_at_10_max value: 8.5342 - type: nauc_recall_at_10_std value: 12.2161 - type: nauc_recall_at_10_diff1 value: 11.6188 - type: nauc_recall_at_20_max value: 15.7488 - type: nauc_recall_at_20_std value: 25.6755 - type: nauc_recall_at_20_diff1 value: 16.3568 - type: nauc_recall_at_100_max value: 24.424799999999998 - type: nauc_recall_at_100_std value: 47.6945 - type: nauc_recall_at_100_diff1 value: 22.4622 - type: nauc_recall_at_1000_max value: 3.0951 - type: nauc_recall_at_1000_std value: 84.10419999999999 - type: nauc_recall_at_1000_diff1 value: -2.6364 - type: nauc_precision_at_1_max value: -6.208 - type: nauc_precision_at_1_std value: 0.6887 - type: nauc_precision_at_1_diff1 value: 5.5123 - type: nauc_precision_at_3_max value: 1.2662 - type: nauc_precision_at_3_std value: 6.1506 - type: nauc_precision_at_3_diff1 value: 9.6919 - type: nauc_precision_at_5_max value: 5.7511 - type: nauc_precision_at_5_std value: 11.0652 - type: nauc_precision_at_5_diff1 value: 13.5713 - type: nauc_precision_at_10_max value: 8.5342 - type: nauc_precision_at_10_std value: 12.2161 - type: nauc_precision_at_10_diff1 value: 11.6188 - type: nauc_precision_at_20_max value: 15.7488 - type: nauc_precision_at_20_std value: 25.6755 - type: nauc_precision_at_20_diff1 value: 16.3568 - type: nauc_precision_at_100_max value: 24.424799999999998 - type: nauc_precision_at_100_std value: 47.6945 - type: nauc_precision_at_100_diff1 value: 22.4622 - type: nauc_precision_at_1000_max value: 3.0951 - type: nauc_precision_at_1000_std value: 84.10419999999999 - type: nauc_precision_at_1000_diff1 value: -2.6364 - type: nauc_mrr_at_1_max value: -5.611800000000001 - type: nauc_mrr_at_1_std value: 0.2596 - type: nauc_mrr_at_1_diff1 value: 4.5101 - type: nauc_mrr_at_3_max value: -3.1917 - type: nauc_mrr_at_3_std value: 2.7559 - type: nauc_mrr_at_3_diff1 value: 5.756 - type: nauc_mrr_at_5_max value: -2.1292999999999997 - type: nauc_mrr_at_5_std value: 3.7653 - type: nauc_mrr_at_5_diff1 value: 6.7995 - type: nauc_mrr_at_10_max value: -1.8915000000000002 - type: nauc_mrr_at_10_std value: 3.778 - type: nauc_mrr_at_10_diff1 value: 6.4253 - type: nauc_mrr_at_20_max value: -1.6753 - type: nauc_mrr_at_20_std value: 4.389 - type: nauc_mrr_at_20_diff1 value: 6.6081 - type: nauc_mrr_at_100_max value: -1.7302000000000002 - type: nauc_mrr_at_100_std value: 4.4796000000000005 - type: nauc_mrr_at_100_diff1 value: 6.563199999999999 - type: nauc_mrr_at_1000_max value: -1.7819000000000003 - type: nauc_mrr_at_1000_std value: 4.4372 - type: nauc_mrr_at_1000_diff1 value: 6.5346 - type: main_score value: 39.805 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P (default) type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 30.9023 - type: v_measure_std value: 14.6095 - type: main_score value: 30.9023 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S (default) type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 19.1012 - type: v_measure_std value: 15.511800000000001 - type: main_score value: 19.1012 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions (default) type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 54.0474 - type: mrr value: 67.00150000000001 - type: nAUC_map_max value: 14.266100000000002 - type: nAUC_map_std value: 11.7906 - type: nAUC_map_diff1 value: 7.5044 - type: nAUC_mrr_max value: 20.1721 - type: nAUC_mrr_std value: 13.1225 - type: nAUC_mrr_diff1 value: 14.3512 - type: main_score value: 54.0474 - task: type: STS dataset: name: MTEB BIOSSES (default) type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: pearson value: 73.3465 - type: spearman value: 69.6932 - type: cosine_pearson value: 73.3465 - type: cosine_spearman value: 69.6932 - type: manhattan_pearson value: 54.115899999999996 - type: manhattan_spearman value: 54.1759 - type: euclidean_pearson value: 54.2153 - type: euclidean_spearman value: 54.0488 - type: main_score value: 69.6932 - task: type: Classification dataset: name: MTEB Banking77Classification (default) type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 74.2987 - type: f1 value: 73.85119999999999 - type: f1_weighted value: 73.85119999999999 - type: main_score value: 74.2987 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P (default) type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 29.8415 - type: v_measure_std value: 0.7605 - type: main_score value: 29.8415 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S (default) type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 16.4917 - type: v_measure_std value: 1.2364 - type: main_score value: 16.4917 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval (default) type: CQADupstackRetrieval_is_a_combined_dataset config: default split: test revision: '1' metrics: - type: ndcg_at_10 value: 21.9561 - type: main_score value: 21.9561 - task: type: Retrieval dataset: name: MTEB ClimateFEVER (default) type: mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: ndcg_at_1 value: 18.826999999999998 - type: ndcg_at_3 value: 16.482 - type: ndcg_at_5 value: 17.9 - type: ndcg_at_10 value: 20.948 - type: ndcg_at_20 value: 23.665 - type: ndcg_at_100 value: 28.192 - type: ndcg_at_1000 value: 31.846999999999998 - type: map_at_1 value: 8.221 - type: map_at_3 value: 11.72 - type: map_at_5 value: 12.844 - type: map_at_10 value: 14.17 - type: map_at_20 value: 15.043000000000001 - type: map_at_100 value: 15.842 - type: map_at_1000 value: 16.04 - type: recall_at_1 value: 8.221 - type: recall_at_3 value: 15.214 - type: recall_at_5 value: 19.185 - type: recall_at_10 value: 26.14 - type: recall_at_20 value: 33.931 - type: recall_at_100 value: 51.429 - type: recall_at_1000 value: 72.269 - type: precision_at_1 value: 18.826999999999998 - type: precision_at_3 value: 12.4 - type: precision_at_5 value: 9.707 - type: precision_at_10 value: 6.84 - type: precision_at_20 value: 4.557 - type: precision_at_100 value: 1.461 - type: precision_at_1000 value: 0.212 - type: mrr_at_1 value: 18.8274 - type: mrr_at_3 value: 25.2226 - type: mrr_at_5 value: 27.163999999999998 - type: mrr_at_10 value: 28.6116 - type: mrr_at_20 value: 29.3082 - type: mrr_at_100 value: 29.7302 - type: mrr_at_1000 value: 29.786600000000004 - type: nauc_ndcg_at_1_max value: 23.3019 - type: nauc_ndcg_at_1_std value: 14.4153 - type: nauc_ndcg_at_1_diff1 value: 21.8879 - type: nauc_ndcg_at_3_max value: 22.2746 - type: nauc_ndcg_at_3_std value: 15.487300000000001 - type: nauc_ndcg_at_3_diff1 value: 17.8275 - type: nauc_ndcg_at_5_max value: 23.0993 - type: nauc_ndcg_at_5_std value: 16.4617 - type: nauc_ndcg_at_5_diff1 value: 16.7855 - type: nauc_ndcg_at_10_max value: 24.7783 - type: nauc_ndcg_at_10_std value: 20.1484 - type: nauc_ndcg_at_10_diff1 value: 17.0753 - type: nauc_ndcg_at_20_max value: 26.1465 - type: nauc_ndcg_at_20_std value: 22.3842 - type: nauc_ndcg_at_20_diff1 value: 16.777900000000002 - type: nauc_ndcg_at_100_max value: 27.703100000000003 - type: nauc_ndcg_at_100_std value: 25.3223 - type: nauc_ndcg_at_100_diff1 value: 16.1821 - type: nauc_ndcg_at_1000_max value: 28.778599999999997 - type: nauc_ndcg_at_1000_std value: 27.9877 - type: nauc_ndcg_at_1000_diff1 value: 16.223499999999998 - type: nauc_map_at_1_max value: 22.4083 - type: nauc_map_at_1_std value: 9.546000000000001 - type: nauc_map_at_1_diff1 value: 29.008499999999998 - type: nauc_map_at_3_max value: 22.0196 - type: nauc_map_at_3_std value: 11.7774 - type: nauc_map_at_3_diff1 value: 21.7038 - type: nauc_map_at_5_max value: 22.7222 - type: nauc_map_at_5_std value: 12.8126 - type: nauc_map_at_5_diff1 value: 20.288 - type: nauc_map_at_10_max value: 23.566200000000002 - type: nauc_map_at_10_std value: 14.8877 - type: nauc_map_at_10_diff1 value: 19.9221 - type: nauc_map_at_20_max value: 24.1809 - type: nauc_map_at_20_std value: 15.9395 - type: nauc_map_at_20_diff1 value: 19.6606 - type: nauc_map_at_100_max value: 24.7213 - type: nauc_map_at_100_std value: 16.8474 - type: nauc_map_at_100_diff1 value: 19.5227 - type: nauc_map_at_1000_max value: 24.8168 - type: nauc_map_at_1000_std value: 17.0802 - type: nauc_map_at_1000_diff1 value: 19.496199999999998 - type: nauc_recall_at_1_max value: 22.4083 - type: nauc_recall_at_1_std value: 9.546000000000001 - type: nauc_recall_at_1_diff1 value: 29.008499999999998 - type: nauc_recall_at_3_max value: 19.4585 - type: nauc_recall_at_3_std value: 14.3753 - type: nauc_recall_at_3_diff1 value: 15.7 - type: nauc_recall_at_5_max value: 20.5273 - type: nauc_recall_at_5_std value: 16.2058 - type: nauc_recall_at_5_diff1 value: 12.1747 - type: nauc_recall_at_10_max value: 22.6961 - type: nauc_recall_at_10_std value: 22.400000000000002 - type: nauc_recall_at_10_diff1 value: 13.2301 - type: nauc_recall_at_20_max value: 23.9165 - type: nauc_recall_at_20_std value: 25.392300000000002 - type: nauc_recall_at_20_diff1 value: 11.8797 - type: nauc_recall_at_100_max value: 26.6031 - type: nauc_recall_at_100_std value: 31.7759 - type: nauc_recall_at_100_diff1 value: 8.9369 - type: nauc_recall_at_1000_max value: 32.4917 - type: nauc_recall_at_1000_std value: 47.7736 - type: nauc_recall_at_1000_diff1 value: 9.5485 - type: nauc_precision_at_1_max value: 23.3019 - type: nauc_precision_at_1_std value: 14.4153 - type: nauc_precision_at_1_diff1 value: 21.8879 - type: nauc_precision_at_3_max value: 23.9748 - type: nauc_precision_at_3_std value: 21.5474 - type: nauc_precision_at_3_diff1 value: 10.6452 - type: nauc_precision_at_5_max value: 24.9076 - type: nauc_precision_at_5_std value: 23.9797 - type: nauc_precision_at_5_diff1 value: 7.1156999999999995 - type: nauc_precision_at_10_max value: 26.721 - type: nauc_precision_at_10_std value: 30.1734 - type: nauc_precision_at_10_diff1 value: 7.0459 - type: nauc_precision_at_20_max value: 27.9059 - type: nauc_precision_at_20_std value: 33.1933 - type: nauc_precision_at_20_diff1 value: 5.7082 - type: nauc_precision_at_100_max value: 25.7203 - type: nauc_precision_at_100_std value: 35.108 - type: nauc_precision_at_100_diff1 value: 2.2525 - type: nauc_precision_at_1000_max value: 23.6155 - type: nauc_precision_at_1000_std value: 39.4567 - type: nauc_precision_at_1000_diff1 value: -1.2073 - type: nauc_mrr_at_1_max value: 23.3019 - type: nauc_mrr_at_1_std value: 14.4153 - type: nauc_mrr_at_1_diff1 value: 21.8879 - type: nauc_mrr_at_3_max value: 23.340700000000002 - type: nauc_mrr_at_3_std value: 18.1166 - type: nauc_mrr_at_3_diff1 value: 16.4821 - type: nauc_mrr_at_5_max value: 23.5278 - type: nauc_mrr_at_5_std value: 19.023200000000003 - type: nauc_mrr_at_5_diff1 value: 15.7295 - type: nauc_mrr_at_10_max value: 24.199 - type: nauc_mrr_at_10_std value: 20.218600000000002 - type: nauc_mrr_at_10_diff1 value: 16.173199999999998 - type: nauc_mrr_at_20_max value: 24.4813 - type: nauc_mrr_at_20_std value: 20.5169 - type: nauc_mrr_at_20_diff1 value: 16.2274 - type: nauc_mrr_at_100_max value: 24.378800000000002 - type: nauc_mrr_at_100_std value: 20.4327 - type: nauc_mrr_at_100_diff1 value: 16.220499999999998 - type: nauc_mrr_at_1000_max value: 24.3802 - type: nauc_mrr_at_1000_std value: 20.4123 - type: nauc_mrr_at_1000_diff1 value: 16.2191 - type: main_score value: 20.948 - task: type: Retrieval dataset: name: MTEB DBPedia (default) type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: ndcg_at_1 value: 30.375000000000004 - type: ndcg_at_3 value: 26.590999999999998 - type: ndcg_at_5 value: 24.586 - type: ndcg_at_10 value: 23.246 - type: ndcg_at_20 value: 23.025000000000002 - type: ndcg_at_100 value: 26.994 - type: ndcg_at_1000 value: 33.591 - type: map_at_1 value: 4.104 - type: map_at_3 value: 6.869 - type: map_at_5 value: 7.949000000000001 - type: map_at_10 value: 9.511 - type: map_at_20 value: 10.959000000000001 - type: map_at_100 value: 13.444999999999999 - type: map_at_1000 value: 14.482999999999999 - type: recall_at_1 value: 4.104 - type: recall_at_3 value: 8.394 - type: recall_at_5 value: 10.453 - type: recall_at_10 value: 14.413 - type: recall_at_20 value: 19.421 - type: recall_at_100 value: 34.134 - type: recall_at_1000 value: 56.74 - type: precision_at_1 value: 43.0 - type: precision_at_3 value: 32.25 - type: precision_at_5 value: 26.650000000000002 - type: precision_at_10 value: 20.575 - type: precision_at_20 value: 15.587000000000002 - type: precision_at_100 value: 6.784999999999999 - type: precision_at_1000 value: 1.465 - type: mrr_at_1 value: 43.0 - type: mrr_at_3 value: 50.416700000000006 - type: mrr_at_5 value: 51.554199999999994 - type: mrr_at_10 value: 52.5436 - type: mrr_at_20 value: 53.0818 - type: mrr_at_100 value: 53.3559 - type: mrr_at_1000 value: 53.3775 - type: nauc_ndcg_at_1_max value: 32.3654 - type: nauc_ndcg_at_1_std value: 10.134799999999998 - type: nauc_ndcg_at_1_diff1 value: 30.7456 - type: nauc_ndcg_at_3_max value: 35.7454 - type: nauc_ndcg_at_3_std value: 11.2598 - type: nauc_ndcg_at_3_diff1 value: 28.8957 - type: nauc_ndcg_at_5_max value: 37.3094 - type: nauc_ndcg_at_5_std value: 12.0986 - type: nauc_ndcg_at_5_diff1 value: 30.1683 - type: nauc_ndcg_at_10_max value: 37.8415 - type: nauc_ndcg_at_10_std value: 13.6007 - type: nauc_ndcg_at_10_diff1 value: 27.7172 - type: nauc_ndcg_at_20_max value: 36.201899999999995 - type: nauc_ndcg_at_20_std value: 14.508399999999998 - type: nauc_ndcg_at_20_diff1 value: 25.6504 - type: nauc_ndcg_at_100_max value: 37.8181 - type: nauc_ndcg_at_100_std value: 22.2808 - type: nauc_ndcg_at_100_diff1 value: 22.156100000000002 - type: nauc_ndcg_at_1000_max value: 43.2943 - type: nauc_ndcg_at_1000_std value: 29.2433 - type: nauc_ndcg_at_1000_diff1 value: 24.593 - type: nauc_map_at_1_max value: 3.9762 - type: nauc_map_at_1_std value: 2.929 - type: nauc_map_at_1_diff1 value: 21.787699999999997 - type: nauc_map_at_3_max value: 7.2749 - type: nauc_map_at_3_std value: 4.1128 - type: nauc_map_at_3_diff1 value: 19.4785 - type: nauc_map_at_5_max value: 11.6105 - type: nauc_map_at_5_std value: 3.9446000000000003 - type: nauc_map_at_5_diff1 value: 21.250700000000002 - type: nauc_map_at_10_max value: 17.3344 - type: nauc_map_at_10_std value: 6.990200000000001 - type: nauc_map_at_10_diff1 value: 20.962 - type: nauc_map_at_20_max value: 23.447200000000002 - type: nauc_map_at_20_std value: 11.8169 - type: nauc_map_at_20_diff1 value: 21.0181 - type: nauc_map_at_100_max value: 32.9328 - type: nauc_map_at_100_std value: 21.3233 - type: nauc_map_at_100_diff1 value: 19.3584 - type: nauc_map_at_1000_max value: 34.9988 - type: nauc_map_at_1000_std value: 23.3726 - type: nauc_map_at_1000_diff1 value: 19.9623 - type: nauc_recall_at_1_max value: 3.9762 - type: nauc_recall_at_1_std value: 2.929 - type: nauc_recall_at_1_diff1 value: 21.787699999999997 - type: nauc_recall_at_3_max value: 2.7925999999999997 - type: nauc_recall_at_3_std value: -2.4797 - type: nauc_recall_at_3_diff1 value: 13.525 - type: nauc_recall_at_5_max value: 6.8843000000000005 - type: nauc_recall_at_5_std value: -3.7343 - type: nauc_recall_at_5_diff1 value: 17.638499999999997 - type: nauc_recall_at_10_max value: 11.6201 - type: nauc_recall_at_10_std value: -1.0245 - type: nauc_recall_at_10_diff1 value: 15.4671 - type: nauc_recall_at_20_max value: 15.815999999999999 - type: nauc_recall_at_20_std value: 3.6186999999999996 - type: nauc_recall_at_20_diff1 value: 15.407000000000002 - type: nauc_recall_at_100_max value: 24.712 - type: nauc_recall_at_100_std value: 22.0841 - type: nauc_recall_at_100_diff1 value: 10.1828 - type: nauc_recall_at_1000_max value: 33.821 - type: nauc_recall_at_1000_std value: 36.807 - type: nauc_recall_at_1000_diff1 value: 12.8396 - type: nauc_precision_at_1_max value: 39.2878 - type: nauc_precision_at_1_std value: 15.6774 - type: nauc_precision_at_1_diff1 value: 31.384 - type: nauc_precision_at_3_max value: 43.498 - type: nauc_precision_at_3_std value: 17.592299999999998 - type: nauc_precision_at_3_diff1 value: 25.154799999999998 - type: nauc_precision_at_5_max value: 47.632600000000004 - type: nauc_precision_at_5_std value: 19.6694 - type: nauc_precision_at_5_diff1 value: 26.762399999999996 - type: nauc_precision_at_10_max value: 50.91139999999999 - type: nauc_precision_at_10_std value: 23.6363 - type: nauc_precision_at_10_diff1 value: 23.097 - type: nauc_precision_at_20_max value: 52.53489999999999 - type: nauc_precision_at_20_std value: 28.8839 - type: nauc_precision_at_20_diff1 value: 18.9418 - type: nauc_precision_at_100_max value: 48.79 - type: nauc_precision_at_100_std value: 31.642500000000002 - type: nauc_precision_at_100_diff1 value: 13.646700000000001 - type: nauc_precision_at_1000_max value: 27.015099999999997 - type: nauc_precision_at_1000_std value: 13.613900000000001 - type: nauc_precision_at_1000_diff1 value: 12.138300000000001 - type: nauc_mrr_at_1_max value: 39.2878 - type: nauc_mrr_at_1_std value: 15.6774 - type: nauc_mrr_at_1_diff1 value: 31.384 - type: nauc_mrr_at_3_max value: 41.747299999999996 - type: nauc_mrr_at_3_std value: 14.7682 - type: nauc_mrr_at_3_diff1 value: 29.8219 - type: nauc_mrr_at_5_max value: 42.408699999999996 - type: nauc_mrr_at_5_std value: 14.769099999999998 - type: nauc_mrr_at_5_diff1 value: 31.1068 - type: nauc_mrr_at_10_max value: 42.571999999999996 - type: nauc_mrr_at_10_std value: 14.8256 - type: nauc_mrr_at_10_diff1 value: 31.156299999999998 - type: nauc_mrr_at_20_max value: 42.4832 - type: nauc_mrr_at_20_std value: 14.7993 - type: nauc_mrr_at_20_diff1 value: 31.260700000000003 - type: nauc_mrr_at_100_max value: 42.5018 - type: nauc_mrr_at_100_std value: 14.9009 - type: nauc_mrr_at_100_diff1 value: 31.2395 - type: nauc_mrr_at_1000_max value: 42.4996 - type: nauc_mrr_at_1000_std value: 14.9098 - type: nauc_mrr_at_1000_diff1 value: 31.230400000000003 - type: main_score value: 23.246 - task: type: Classification dataset: name: MTEB EmotionClassification (default) type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 45.68 - type: f1 value: 43.1207 - type: f1_weighted value: 48.0349 - type: main_score value: 45.68 - task: type: Retrieval dataset: name: MTEB FEVER (default) type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: ndcg_at_1 value: 16.742 - type: ndcg_at_3 value: 23.316 - type: ndcg_at_5 value: 25.738 - type: ndcg_at_10 value: 28.68 - type: ndcg_at_20 value: 30.959999999999997 - type: ndcg_at_100 value: 34.037 - type: ndcg_at_1000 value: 36.004999999999995 - type: map_at_1 value: 15.797 - type: map_at_3 value: 21.209 - type: map_at_5 value: 22.547 - type: map_at_10 value: 23.762 - type: map_at_20 value: 24.401 - type: map_at_100 value: 24.83 - type: map_at_1000 value: 24.901 - type: recall_at_1 value: 15.797 - type: recall_at_3 value: 28.233000000000004 - type: recall_at_5 value: 33.997 - type: recall_at_10 value: 42.888 - type: recall_at_20 value: 51.635 - type: recall_at_100 value: 67.801 - type: recall_at_1000 value: 82.998 - type: precision_at_1 value: 16.742 - type: precision_at_3 value: 10.096 - type: precision_at_5 value: 7.335999999999999 - type: precision_at_10 value: 4.65 - type: precision_at_20 value: 2.817 - type: precision_at_100 value: 0.748 - type: precision_at_1000 value: 0.093 - type: mrr_at_1 value: 16.7417 - type: mrr_at_3 value: 22.4122 - type: mrr_at_5 value: 23.8374 - type: mrr_at_10 value: 25.101000000000003 - type: mrr_at_20 value: 25.739800000000002 - type: mrr_at_100 value: 26.164199999999997 - type: mrr_at_1000 value: 26.227800000000002 - type: nauc_ndcg_at_1_max value: 13.991500000000002 - type: nauc_ndcg_at_1_std value: -25.4382 - type: nauc_ndcg_at_1_diff1 value: 21.2751 - type: nauc_ndcg_at_3_max value: 15.4019 - type: nauc_ndcg_at_3_std value: -25.9724 - type: nauc_ndcg_at_3_diff1 value: 16.3365 - type: nauc_ndcg_at_5_max value: 16.4606 - type: nauc_ndcg_at_5_std value: -26.063599999999997 - type: nauc_ndcg_at_5_diff1 value: 15.334900000000001 - type: nauc_ndcg_at_10_max value: 17.1297 - type: nauc_ndcg_at_10_std value: -26.709 - type: nauc_ndcg_at_10_diff1 value: 14.072799999999999 - type: nauc_ndcg_at_20_max value: 18.0756 - type: nauc_ndcg_at_20_std value: -25.849899999999998 - type: nauc_ndcg_at_20_diff1 value: 13.3475 - type: nauc_ndcg_at_100_max value: 18.5017 - type: nauc_ndcg_at_100_std value: -25.1975 - type: nauc_ndcg_at_100_diff1 value: 13.128200000000001 - type: nauc_ndcg_at_1000_max value: 18.570500000000003 - type: nauc_ndcg_at_1000_std value: -24.5199 - type: nauc_ndcg_at_1000_diff1 value: 13.608600000000001 - type: nauc_map_at_1_max value: 14.4553 - type: nauc_map_at_1_std value: -25.291999999999998 - type: nauc_map_at_1_diff1 value: 21.4966 - type: nauc_map_at_3_max value: 15.1199 - type: nauc_map_at_3_std value: -25.8608 - type: nauc_map_at_3_diff1 value: 17.5 - type: nauc_map_at_5_max value: 15.748599999999998 - type: nauc_map_at_5_std value: -25.928 - type: nauc_map_at_5_diff1 value: 16.8883 - type: nauc_map_at_10_max value: 16.036 - type: nauc_map_at_10_std value: -26.2116 - type: nauc_map_at_10_diff1 value: 16.335 - type: nauc_map_at_20_max value: 16.305500000000002 - type: nauc_map_at_20_std value: -25.965500000000002 - type: nauc_map_at_20_diff1 value: 16.1305 - type: nauc_map_at_100_max value: 16.380200000000002 - type: nauc_map_at_100_std value: -25.870199999999997 - type: nauc_map_at_100_diff1 value: 16.1253 - type: nauc_map_at_1000_max value: 16.3924 - type: nauc_map_at_1000_std value: -25.838499999999996 - type: nauc_map_at_1000_diff1 value: 16.1408 - type: nauc_recall_at_1_max value: 14.4553 - type: nauc_recall_at_1_std value: -25.291999999999998 - type: nauc_recall_at_1_diff1 value: 21.4966 - type: nauc_recall_at_3_max value: 16.1074 - type: nauc_recall_at_3_std value: -25.916099999999997 - type: nauc_recall_at_3_diff1 value: 13.5176 - type: nauc_recall_at_5_max value: 18.0189 - type: nauc_recall_at_5_std value: -25.795299999999997 - type: nauc_recall_at_5_diff1 value: 11.3842 - type: nauc_recall_at_10_max value: 19.4035 - type: nauc_recall_at_10_std value: -27.2015 - type: nauc_recall_at_10_diff1 value: 7.9085 - type: nauc_recall_at_20_max value: 22.5578 - type: nauc_recall_at_20_std value: -24.1674 - type: nauc_recall_at_20_diff1 value: 5.0956 - type: nauc_recall_at_100_max value: 25.2855 - type: nauc_recall_at_100_std value: -19.9378 - type: nauc_recall_at_100_diff1 value: 1.3199 - type: nauc_recall_at_1000_max value: 29.253400000000003 - type: nauc_recall_at_1000_std value: -8.519599999999999 - type: nauc_recall_at_1000_diff1 value: 0.1057 - type: nauc_precision_at_1_max value: 13.991500000000002 - type: nauc_precision_at_1_std value: -25.4382 - type: nauc_precision_at_1_diff1 value: 21.2751 - type: nauc_precision_at_3_max value: 15.758700000000001 - type: nauc_precision_at_3_std value: -26.3494 - type: nauc_precision_at_3_diff1 value: 13.6081 - type: nauc_precision_at_5_max value: 17.851300000000002 - type: nauc_precision_at_5_std value: -26.3818 - type: nauc_precision_at_5_diff1 value: 11.4331 - type: nauc_precision_at_10_max value: 19.5748 - type: nauc_precision_at_10_std value: -27.594400000000004 - type: nauc_precision_at_10_diff1 value: 8.0539 - type: nauc_precision_at_20_max value: 22.453799999999998 - type: nauc_precision_at_20_std value: -23.707800000000002 - type: nauc_precision_at_20_diff1 value: 5.2 - type: nauc_precision_at_100_max value: 24.1067 - type: nauc_precision_at_100_std value: -16.6068 - type: nauc_precision_at_100_diff1 value: 1.1200999999999999 - type: nauc_precision_at_1000_max value: 22.516 - type: nauc_precision_at_1000_std value: -0.621 - type: nauc_precision_at_1000_diff1 value: -0.26749999999999996 - type: nauc_mrr_at_1_max value: 13.991500000000002 - type: nauc_mrr_at_1_std value: -25.4382 - type: nauc_mrr_at_1_diff1 value: 21.2751 - type: nauc_mrr_at_3_max value: 14.95 - type: nauc_mrr_at_3_std value: -25.885 - type: nauc_mrr_at_3_diff1 value: 17.3215 - type: nauc_mrr_at_5_max value: 15.5568 - type: nauc_mrr_at_5_std value: -25.963 - type: nauc_mrr_at_5_diff1 value: 16.699 - type: nauc_mrr_at_10_max value: 15.901299999999999 - type: nauc_mrr_at_10_std value: -26.2471 - type: nauc_mrr_at_10_diff1 value: 16.189899999999998 - type: nauc_mrr_at_20_max value: 16.1798 - type: nauc_mrr_at_20_std value: -25.989600000000003 - type: nauc_mrr_at_20_diff1 value: 15.984499999999999 - type: nauc_mrr_at_100_max value: 16.2602 - type: nauc_mrr_at_100_std value: -25.9187 - type: nauc_mrr_at_100_diff1 value: 16.0136 - type: nauc_mrr_at_1000_max value: 16.2577 - type: nauc_mrr_at_1000_std value: -25.9039 - type: nauc_mrr_at_1000_diff1 value: 16.0318 - type: main_score value: 28.68 - task: type: Retrieval dataset: name: MTEB FiQA2018 (default) type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: ndcg_at_1 value: 14.198 - type: ndcg_at_3 value: 14.018 - type: ndcg_at_5 value: 14.857000000000001 - type: ndcg_at_10 value: 16.509999999999998 - type: ndcg_at_20 value: 18.499 - type: ndcg_at_100 value: 22.658 - type: ndcg_at_1000 value: 26.894000000000002 - type: map_at_1 value: 7.061000000000001 - type: map_at_3 value: 10.151 - type: map_at_5 value: 11.0 - type: map_at_10 value: 11.883000000000001 - type: map_at_20 value: 12.5 - type: map_at_100 value: 13.154 - type: map_at_1000 value: 13.343 - type: recall_at_1 value: 7.061000000000001 - type: recall_at_3 value: 13.339 - type: recall_at_5 value: 16.689999999999998 - type: recall_at_10 value: 21.435000000000002 - type: recall_at_20 value: 27.779999999999998 - type: recall_at_100 value: 45.381 - type: recall_at_1000 value: 71.61699999999999 - type: precision_at_1 value: 14.198 - type: precision_at_3 value: 9.568 - type: precision_at_5 value: 7.099 - type: precision_at_10 value: 4.7379999999999995 - type: precision_at_20 value: 3.1329999999999996 - type: precision_at_100 value: 1.083 - type: precision_at_1000 value: 0.181 - type: mrr_at_1 value: 14.1975 - type: mrr_at_3 value: 18.5185 - type: mrr_at_5 value: 19.8302 - type: mrr_at_10 value: 20.6685 - type: mrr_at_20 value: 21.273 - type: mrr_at_100 value: 21.8076 - type: mrr_at_1000 value: 21.9063 - type: nauc_ndcg_at_1_max value: 12.2117 - type: nauc_ndcg_at_1_std value: -10.7059 - type: nauc_ndcg_at_1_diff1 value: 27.4415 - type: nauc_ndcg_at_3_max value: 12.4823 - type: nauc_ndcg_at_3_std value: -10.252500000000001 - type: nauc_ndcg_at_3_diff1 value: 20.6834 - type: nauc_ndcg_at_5_max value: 10.3316 - type: nauc_ndcg_at_5_std value: -9.8648 - type: nauc_ndcg_at_5_diff1 value: 19.6879 - type: nauc_ndcg_at_10_max value: 9.2057 - type: nauc_ndcg_at_10_std value: -9.3284 - type: nauc_ndcg_at_10_diff1 value: 19.5253 - type: nauc_ndcg_at_20_max value: 8.3092 - type: nauc_ndcg_at_20_std value: -6.686400000000001 - type: nauc_ndcg_at_20_diff1 value: 19.0031 - type: nauc_ndcg_at_100_max value: 9.321200000000001 - type: nauc_ndcg_at_100_std value: -4.4703 - type: nauc_ndcg_at_100_diff1 value: 19.2995 - type: nauc_ndcg_at_1000_max value: 11.754199999999999 - type: nauc_ndcg_at_1000_std value: -2.6593999999999998 - type: nauc_ndcg_at_1000_diff1 value: 20.3056 - type: nauc_map_at_1_max value: 17.227899999999998 - type: nauc_map_at_1_std value: -6.8508 - type: nauc_map_at_1_diff1 value: 25.9133 - type: nauc_map_at_3_max value: 13.716999999999999 - type: nauc_map_at_3_std value: -8.86 - type: nauc_map_at_3_diff1 value: 21.0714 - type: nauc_map_at_5_max value: 12.146700000000001 - type: nauc_map_at_5_std value: -8.909400000000002 - type: nauc_map_at_5_diff1 value: 20.3887 - type: nauc_map_at_10_max value: 11.417 - type: nauc_map_at_10_std value: -8.9141 - type: nauc_map_at_10_diff1 value: 20.7165 - type: nauc_map_at_20_max value: 11.0988 - type: nauc_map_at_20_std value: -7.9453 - type: nauc_map_at_20_diff1 value: 20.7809 - type: nauc_map_at_100_max value: 11.1694 - type: nauc_map_at_100_std value: -7.4639 - type: nauc_map_at_100_diff1 value: 20.9252 - type: nauc_map_at_1000_max value: 11.3405 - type: nauc_map_at_1000_std value: -7.3102 - type: nauc_map_at_1000_diff1 value: 20.9959 - type: nauc_recall_at_1_max value: 17.227899999999998 - type: nauc_recall_at_1_std value: -6.8508 - type: nauc_recall_at_1_diff1 value: 25.9133 - type: nauc_recall_at_3_max value: 11.2722 - type: nauc_recall_at_3_std value: -9.4755 - type: nauc_recall_at_3_diff1 value: 15.1741 - type: nauc_recall_at_5_max value: 6.7860000000000005 - type: nauc_recall_at_5_std value: -8.9743 - type: nauc_recall_at_5_diff1 value: 14.091999999999999 - type: nauc_recall_at_10_max value: 4.5781 - type: nauc_recall_at_10_std value: -8.4828 - type: nauc_recall_at_10_diff1 value: 13.1033 - type: nauc_recall_at_20_max value: 3.0408999999999997 - type: nauc_recall_at_20_std value: -1.0319 - type: nauc_recall_at_20_diff1 value: 11.2412 - type: nauc_recall_at_100_max value: 4.6371 - type: nauc_recall_at_100_std value: 5.6984 - type: nauc_recall_at_100_diff1 value: 10.648399999999999 - type: nauc_recall_at_1000_max value: 14.4284 - type: nauc_recall_at_1000_std value: 20.471 - type: nauc_recall_at_1000_diff1 value: 13.6603 - type: nauc_precision_at_1_max value: 12.2117 - type: nauc_precision_at_1_std value: -10.7059 - type: nauc_precision_at_1_diff1 value: 27.4415 - type: nauc_precision_at_3_max value: 8.3303 - type: nauc_precision_at_3_std value: -12.3434 - type: nauc_precision_at_3_diff1 value: 20.3774 - type: nauc_precision_at_5_max value: 5.46 - type: nauc_precision_at_5_std value: -10.6964 - type: nauc_precision_at_5_diff1 value: 19.3914 - type: nauc_precision_at_10_max value: 5.8885 - type: nauc_precision_at_10_std value: -9.0149 - type: nauc_precision_at_10_diff1 value: 21.8392 - type: nauc_precision_at_20_max value: 3.8181 - type: nauc_precision_at_20_std value: -4.2505 - type: nauc_precision_at_20_diff1 value: 19.9848 - type: nauc_precision_at_100_max value: 9.6538 - type: nauc_precision_at_100_std value: 1.8809 - type: nauc_precision_at_100_diff1 value: 18.6529 - type: nauc_precision_at_1000_max value: 15.5018 - type: nauc_precision_at_1000_std value: 5.4286 - type: nauc_precision_at_1000_diff1 value: 13.2946 - type: nauc_mrr_at_1_max value: 12.2117 - type: nauc_mrr_at_1_std value: -10.7059 - type: nauc_mrr_at_1_diff1 value: 27.4415 - type: nauc_mrr_at_3_max value: 10.5481 - type: nauc_mrr_at_3_std value: -10.7069 - type: nauc_mrr_at_3_diff1 value: 22.1345 - type: nauc_mrr_at_5_max value: 9.463000000000001 - type: nauc_mrr_at_5_std value: -10.5558 - type: nauc_mrr_at_5_diff1 value: 21.8622 - type: nauc_mrr_at_10_max value: 9.6679 - type: nauc_mrr_at_10_std value: -10.399600000000001 - type: nauc_mrr_at_10_diff1 value: 21.7847 - type: nauc_mrr_at_20_max value: 9.422600000000001 - type: nauc_mrr_at_20_std value: -9.8865 - type: nauc_mrr_at_20_diff1 value: 21.4703 - type: nauc_mrr_at_100_max value: 9.640500000000001 - type: nauc_mrr_at_100_std value: -9.8299 - type: nauc_mrr_at_100_diff1 value: 21.5227 - type: nauc_mrr_at_1000_max value: 9.6734 - type: nauc_mrr_at_1000_std value: -9.8079 - type: nauc_mrr_at_1000_diff1 value: 21.5451 - type: main_score value: 16.509999999999998 - task: type: Retrieval dataset: name: MTEB HotpotQA (default) type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: ndcg_at_1 value: 40.297 - type: ndcg_at_3 value: 31.719 - type: ndcg_at_5 value: 33.744 - type: ndcg_at_10 value: 35.72 - type: ndcg_at_20 value: 37.266 - type: ndcg_at_100 value: 39.778000000000006 - type: ndcg_at_1000 value: 42.056 - type: map_at_1 value: 20.149 - type: map_at_3 value: 25.899 - type: map_at_5 value: 27.157999999999998 - type: map_at_10 value: 28.105000000000004 - type: map_at_20 value: 28.586 - type: map_at_100 value: 29.000999999999998 - type: map_at_1000 value: 29.098000000000003 - type: recall_at_1 value: 20.149 - type: recall_at_3 value: 29.932 - type: recall_at_5 value: 33.93 - type: recall_at_10 value: 38.92 - type: recall_at_20 value: 43.903 - type: recall_at_100 value: 55.057 - type: recall_at_1000 value: 70.27 - type: precision_at_1 value: 40.297 - type: precision_at_3 value: 19.955000000000002 - type: precision_at_5 value: 13.572000000000001 - type: precision_at_10 value: 7.784000000000001 - type: precision_at_20 value: 4.390000000000001 - type: precision_at_100 value: 1.101 - type: precision_at_1000 value: 0.14100000000000001 - type: mrr_at_1 value: 40.2971 - type: mrr_at_3 value: 46.041 - type: mrr_at_5 value: 47.199600000000004 - type: mrr_at_10 value: 47.9631 - type: mrr_at_20 value: 48.3871 - type: mrr_at_100 value: 48.661500000000004 - type: mrr_at_1000 value: 48.707 - type: nauc_ndcg_at_1_max value: 27.8706 - type: nauc_ndcg_at_1_std value: -8.272300000000001 - type: nauc_ndcg_at_1_diff1 value: 57.8385 - type: nauc_ndcg_at_3_max value: 27.852500000000003 - type: nauc_ndcg_at_3_std value: -6.4216 - type: nauc_ndcg_at_3_diff1 value: 48.365 - type: nauc_ndcg_at_5_max value: 27.509099999999997 - type: nauc_ndcg_at_5_std value: -5.6179 - type: nauc_ndcg_at_5_diff1 value: 46.5015 - type: nauc_ndcg_at_10_max value: 27.002 - type: nauc_ndcg_at_10_std value: -4.5545 - type: nauc_ndcg_at_10_diff1 value: 45.7081 - type: nauc_ndcg_at_20_max value: 26.984799999999996 - type: nauc_ndcg_at_20_std value: -3.6883 - type: nauc_ndcg_at_20_diff1 value: 44.9584 - type: nauc_ndcg_at_100_max value: 27.283600000000003 - type: nauc_ndcg_at_100_std value: -2.3537 - type: nauc_ndcg_at_100_diff1 value: 44.1115 - type: nauc_ndcg_at_1000_max value: 27.417399999999997 - type: nauc_ndcg_at_1000_std value: -1.2178 - type: nauc_ndcg_at_1000_diff1 value: 44.0544 - type: nauc_map_at_1_max value: 27.8706 - type: nauc_map_at_1_std value: -8.272300000000001 - type: nauc_map_at_1_diff1 value: 57.8385 - type: nauc_map_at_3_max value: 27.584799999999998 - type: nauc_map_at_3_std value: -5.9387 - type: nauc_map_at_3_diff1 value: 47.2019 - type: nauc_map_at_5_max value: 27.242 - type: nauc_map_at_5_std value: -5.3224 - type: nauc_map_at_5_diff1 value: 45.831 - type: nauc_map_at_10_max value: 26.9723 - type: nauc_map_at_10_std value: -4.7007 - type: nauc_map_at_10_diff1 value: 45.3311 - type: nauc_map_at_20_max value: 26.919700000000002 - type: nauc_map_at_20_std value: -4.3851 - type: nauc_map_at_20_diff1 value: 45.0687 - type: nauc_map_at_100_max value: 26.995400000000004 - type: nauc_map_at_100_std value: -4.0821000000000005 - type: nauc_map_at_100_diff1 value: 44.9062 - type: nauc_map_at_1000_max value: 26.998499999999996 - type: nauc_map_at_1000_std value: -4.0238000000000005 - type: nauc_map_at_1000_diff1 value: 44.8961 - type: nauc_recall_at_1_max value: 27.8706 - type: nauc_recall_at_1_std value: -8.272300000000001 - type: nauc_recall_at_1_diff1 value: 57.8385 - type: nauc_recall_at_3_max value: 27.3795 - type: nauc_recall_at_3_std value: -5.1751 - type: nauc_recall_at_3_diff1 value: 42.3825 - type: nauc_recall_at_5_max value: 25.634800000000002 - type: nauc_recall_at_5_std value: -3.3379 - type: nauc_recall_at_5_diff1 value: 37.0532 - type: nauc_recall_at_10_max value: 23.5746 - type: nauc_recall_at_10_std value: -0.5226 - type: nauc_recall_at_10_diff1 value: 34.071200000000005 - type: nauc_recall_at_20_max value: 22.1536 - type: nauc_recall_at_20_std value: 2.3993 - type: nauc_recall_at_20_diff1 value: 29.439 - type: nauc_recall_at_100_max value: 20.7576 - type: nauc_recall_at_100_std value: 8.468499999999999 - type: nauc_recall_at_100_diff1 value: 21.221799999999998 - type: nauc_recall_at_1000_max value: 18.7522 - type: nauc_recall_at_1000_std value: 18.916800000000002 - type: nauc_recall_at_1000_diff1 value: 13.558200000000001 - type: nauc_precision_at_1_max value: 27.8706 - type: nauc_precision_at_1_std value: -8.272300000000001 - type: nauc_precision_at_1_diff1 value: 57.8385 - type: nauc_precision_at_3_max value: 27.3795 - type: nauc_precision_at_3_std value: -5.1751 - type: nauc_precision_at_3_diff1 value: 42.3825 - type: nauc_precision_at_5_max value: 25.634800000000002 - type: nauc_precision_at_5_std value: -3.3379 - type: nauc_precision_at_5_diff1 value: 37.0532 - type: nauc_precision_at_10_max value: 23.5746 - type: nauc_precision_at_10_std value: -0.5226 - type: nauc_precision_at_10_diff1 value: 34.071200000000005 - type: nauc_precision_at_20_max value: 22.1536 - type: nauc_precision_at_20_std value: 2.3993 - type: nauc_precision_at_20_diff1 value: 29.439 - type: nauc_precision_at_100_max value: 20.7576 - type: nauc_precision_at_100_std value: 8.468499999999999 - type: nauc_precision_at_100_diff1 value: 21.221799999999998 - type: nauc_precision_at_1000_max value: 18.7522 - type: nauc_precision_at_1000_std value: 18.916800000000002 - type: nauc_precision_at_1000_diff1 value: 13.558200000000001 - type: nauc_mrr_at_1_max value: 27.8706 - type: nauc_mrr_at_1_std value: -8.272300000000001 - type: nauc_mrr_at_1_diff1 value: 57.8385 - type: nauc_mrr_at_3_max value: 28.256700000000002 - type: nauc_mrr_at_3_std value: -8.050699999999999 - type: nauc_mrr_at_3_diff1 value: 54.5601 - type: nauc_mrr_at_5_max value: 28.2928 - type: nauc_mrr_at_5_std value: -7.8317 - type: nauc_mrr_at_5_diff1 value: 54.046499999999995 - type: nauc_mrr_at_10_max value: 28.151500000000002 - type: nauc_mrr_at_10_std value: -7.6431 - type: nauc_mrr_at_10_diff1 value: 53.9751 - type: nauc_mrr_at_20_max value: 28.215 - type: nauc_mrr_at_20_std value: -7.5285 - type: nauc_mrr_at_20_diff1 value: 53.9177 - type: nauc_mrr_at_100_max value: 28.215600000000002 - type: nauc_mrr_at_100_std value: -7.524699999999999 - type: nauc_mrr_at_100_diff1 value: 53.9393 - type: nauc_mrr_at_1000_max value: 28.2194 - type: nauc_mrr_at_1000_std value: -7.5150999999999994 - type: nauc_mrr_at_1000_diff1 value: 53.95290000000001 - type: main_score value: 35.72 - task: type: Classification dataset: name: MTEB ImdbClassification (default) type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 65.8656 - type: f1 value: 65.385 - type: f1_weighted value: 65.385 - type: ap value: 60.506899999999995 - type: ap_weighted value: 60.506899999999995 - type: main_score value: 65.8656 - task: type: Retrieval dataset: name: MTEB MSMARCO (default) type: mteb/msmarco config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: ndcg_at_1 value: 6.877 - type: ndcg_at_3 value: 10.963000000000001 - type: ndcg_at_5 value: 12.845 - type: ndcg_at_10 value: 14.918000000000001 - type: ndcg_at_20 value: 16.721 - type: ndcg_at_100 value: 20.041 - type: ndcg_at_1000 value: 23.296 - type: map_at_1 value: 6.717 - type: map_at_3 value: 9.846 - type: map_at_5 value: 10.886999999999999 - type: map_at_10 value: 11.74 - type: map_at_20 value: 12.237 - type: map_at_100 value: 12.683 - type: map_at_1000 value: 12.792 - type: recall_at_1 value: 6.717 - type: recall_at_3 value: 13.963999999999999 - type: recall_at_5 value: 18.498 - type: recall_at_10 value: 24.869 - type: recall_at_20 value: 31.901000000000003 - type: recall_at_100 value: 49.786 - type: recall_at_1000 value: 75.913 - type: precision_at_1 value: 6.877 - type: precision_at_3 value: 4.809 - type: precision_at_5 value: 3.8280000000000003 - type: precision_at_10 value: 2.5829999999999997 - type: precision_at_20 value: 1.6650000000000003 - type: precision_at_100 value: 0.523 - type: precision_at_1000 value: 0.08 - type: mrr_at_1 value: 6.876799999999999 - type: mrr_at_3 value: 10.093100000000002 - type: mrr_at_5 value: 11.1526 - type: mrr_at_10 value: 12.0074 - type: mrr_at_20 value: 12.5083 - type: mrr_at_100 value: 12.9529 - type: mrr_at_1000 value: 13.057099999999998 - type: nauc_ndcg_at_1_max value: 4.7264 - type: nauc_ndcg_at_1_std value: -16.2439 - type: nauc_ndcg_at_1_diff1 value: 27.4463 - type: nauc_ndcg_at_3_max value: 6.1734 - type: nauc_ndcg_at_3_std value: -16.8949 - type: nauc_ndcg_at_3_diff1 value: 22.7183 - type: nauc_ndcg_at_5_max value: 6.493 - type: nauc_ndcg_at_5_std value: -15.7852 - type: nauc_ndcg_at_5_diff1 value: 21.0805 - type: nauc_ndcg_at_10_max value: 7.099600000000001 - type: nauc_ndcg_at_10_std value: -15.1727 - type: nauc_ndcg_at_10_diff1 value: 20.3957 - type: nauc_ndcg_at_20_max value: 7.9073 - type: nauc_ndcg_at_20_std value: -14.596200000000001 - type: nauc_ndcg_at_20_diff1 value: 20.0084 - type: nauc_ndcg_at_100_max value: 9.112 - type: nauc_ndcg_at_100_std value: -12.0562 - type: nauc_ndcg_at_100_diff1 value: 19.3717 - type: nauc_ndcg_at_1000_max value: 10.1474 - type: nauc_ndcg_at_1000_std value: -10.3955 - type: nauc_ndcg_at_1000_diff1 value: 19.2427 - type: nauc_map_at_1_max value: 4.4801 - type: nauc_map_at_1_std value: -16.4499 - type: nauc_map_at_1_diff1 value: 27.5511 - type: nauc_map_at_3_max value: 5.8799 - type: nauc_map_at_3_std value: -16.7696 - type: nauc_map_at_3_diff1 value: 23.531299999999998 - type: nauc_map_at_5_max value: 6.0905000000000005 - type: nauc_map_at_5_std value: -16.0525 - type: nauc_map_at_5_diff1 value: 22.395799999999998 - type: nauc_map_at_10_max value: 6.3876 - type: nauc_map_at_10_std value: -15.774 - type: nauc_map_at_10_diff1 value: 22.0367 - type: nauc_map_at_20_max value: 6.6676 - type: nauc_map_at_20_std value: -15.5729 - type: nauc_map_at_20_diff1 value: 21.8952 - type: nauc_map_at_100_max value: 6.912400000000001 - type: nauc_map_at_100_std value: -15.162400000000002 - type: nauc_map_at_100_diff1 value: 21.7666 - type: nauc_map_at_1000_max value: 6.952500000000001 - type: nauc_map_at_1000_std value: -15.085799999999999 - type: nauc_map_at_1000_diff1 value: 21.7618 - type: nauc_recall_at_1_max value: 4.4801 - type: nauc_recall_at_1_std value: -16.4499 - type: nauc_recall_at_1_diff1 value: 27.5511 - type: nauc_recall_at_3_max value: 6.7195 - type: nauc_recall_at_3_std value: -17.2961 - type: nauc_recall_at_3_diff1 value: 20.9572 - type: nauc_recall_at_5_max value: 7.199 - type: nauc_recall_at_5_std value: -15.260599999999998 - type: nauc_recall_at_5_diff1 value: 18.4745 - type: nauc_recall_at_10_max value: 8.3289 - type: nauc_recall_at_10_std value: -14.0152 - type: nauc_recall_at_10_diff1 value: 17.3142 - type: nauc_recall_at_20_max value: 10.1702 - type: nauc_recall_at_20_std value: -12.7265 - type: nauc_recall_at_20_diff1 value: 16.5162 - type: nauc_recall_at_100_max value: 13.9363 - type: nauc_recall_at_100_std value: -4.0486 - type: nauc_recall_at_100_diff1 value: 14.5015 - type: nauc_recall_at_1000_max value: 24.3013 - type: nauc_recall_at_1000_std value: 12.3673 - type: nauc_recall_at_1000_diff1 value: 10.9827 - type: nauc_precision_at_1_max value: 4.7264 - type: nauc_precision_at_1_std value: -16.2439 - type: nauc_precision_at_1_diff1 value: 27.4463 - type: nauc_precision_at_3_max value: 6.895700000000001 - type: nauc_precision_at_3_std value: -17.0973 - type: nauc_precision_at_3_diff1 value: 20.7819 - type: nauc_precision_at_5_max value: 7.3601 - type: nauc_precision_at_5_std value: -15.189400000000001 - type: nauc_precision_at_5_diff1 value: 18.2284 - type: nauc_precision_at_10_max value: 8.5933 - type: nauc_precision_at_10_std value: -13.9345 - type: nauc_precision_at_10_diff1 value: 17.1801 - type: nauc_precision_at_20_max value: 10.5732 - type: nauc_precision_at_20_std value: -12.2593 - type: nauc_precision_at_20_diff1 value: 16.3194 - type: nauc_precision_at_100_max value: 14.462800000000001 - type: nauc_precision_at_100_std value: -2.7812 - type: nauc_precision_at_100_diff1 value: 13.8556 - type: nauc_precision_at_1000_max value: 22.7827 - type: nauc_precision_at_1000_std value: 13.1185 - type: nauc_precision_at_1000_diff1 value: 8.331199999999999 - type: nauc_mrr_at_1_max value: 4.7264 - type: nauc_mrr_at_1_std value: -16.2439 - type: nauc_mrr_at_1_diff1 value: 27.4463 - type: nauc_mrr_at_3_max value: 5.9976 - type: nauc_mrr_at_3_std value: -16.5493 - type: nauc_mrr_at_3_diff1 value: 23.5058 - type: nauc_mrr_at_5_max value: 6.1958 - type: nauc_mrr_at_5_std value: -15.893699999999999 - type: nauc_mrr_at_5_diff1 value: 22.4454 - type: nauc_mrr_at_10_max value: 6.514200000000001 - type: nauc_mrr_at_10_std value: -15.5116 - type: nauc_mrr_at_10_diff1 value: 22.0264 - type: nauc_mrr_at_20_max value: 6.7813 - type: nauc_mrr_at_20_std value: -15.2942 - type: nauc_mrr_at_20_diff1 value: 21.8857 - type: nauc_mrr_at_100_max value: 7.0158 - type: nauc_mrr_at_100_std value: -14.894599999999999 - type: nauc_mrr_at_100_diff1 value: 21.757299999999997 - type: nauc_mrr_at_1000_max value: 7.0534 - type: nauc_mrr_at_1000_std value: -14.8351 - type: nauc_mrr_at_1000_diff1 value: 21.7544 - type: main_score value: 14.918000000000001 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 82.4669 - type: f1 value: 81.3346 - type: f1_weighted value: 82.6885 - type: main_score value: 82.4669 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 58.1145 - type: f1 value: 40.7841 - type: f1_weighted value: 62.343 - type: main_score value: 58.1145 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 60.24549999999999 - type: f1 value: 59.534 - type: f1_weighted value: 60.47670000000001 - type: main_score value: 60.24549999999999 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 66.32820000000001 - type: f1 value: 65.2929 - type: f1_weighted value: 66.51979999999999 - type: main_score value: 66.32820000000001 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P (default) type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 25.8495 - type: v_measure_std value: 1.6320000000000001 - type: main_score value: 25.8495 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S (default) type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 20.0754 - type: v_measure_std value: 1.3306 - type: main_score value: 20.0754 - task: type: Reranking dataset: name: MTEB MindSmallReranking (default) type: mteb/mind_small config: default split: test revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7 metrics: - type: map value: 28.5611 - type: mrr value: 29.4014 - type: nAUC_map_max value: -20.8019 - type: nAUC_map_std value: -5.307300000000001 - type: nAUC_map_diff1 value: 20.6483 - type: nAUC_mrr_max value: -14.9738 - type: nAUC_mrr_std value: -2.9508 - type: nAUC_mrr_diff1 value: 18.6743 - type: main_score value: 28.5611 - task: type: Retrieval dataset: name: MTEB NFCorpus (default) type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: ndcg_at_1 value: 32.972 - type: ndcg_at_3 value: 29.965000000000003 - type: ndcg_at_5 value: 28.773 - type: ndcg_at_10 value: 26.434 - type: ndcg_at_20 value: 24.922 - type: ndcg_at_100 value: 24.852 - type: ndcg_at_1000 value: 33.388 - type: map_at_1 value: 3.737 - type: map_at_3 value: 6.387 - type: map_at_5 value: 7.420999999999999 - type: map_at_10 value: 8.652 - type: map_at_20 value: 9.745 - type: map_at_100 value: 11.247 - type: map_at_1000 value: 12.494 - type: recall_at_1 value: 3.737 - type: recall_at_3 value: 7.889 - type: recall_at_5 value: 10.026 - type: recall_at_10 value: 12.615000000000002 - type: recall_at_20 value: 16.184 - type: recall_at_100 value: 26.988 - type: recall_at_1000 value: 57.594 - type: precision_at_1 value: 34.675 - type: precision_at_3 value: 28.173 - type: precision_at_5 value: 25.201 - type: precision_at_10 value: 20.0 - type: precision_at_20 value: 15.356 - type: precision_at_100 value: 6.898 - type: precision_at_1000 value: 1.936 - type: mrr_at_1 value: 34.674899999999994 - type: mrr_at_3 value: 42.0537 - type: mrr_at_5 value: 43.741 - type: mrr_at_10 value: 44.277699999999996 - type: mrr_at_20 value: 44.819700000000005 - type: mrr_at_100 value: 45.1552 - type: mrr_at_1000 value: 45.2048 - type: nauc_ndcg_at_1_max value: 27.6992 - type: nauc_ndcg_at_1_std value: 13.1387 - type: nauc_ndcg_at_1_diff1 value: 33.7772 - type: nauc_ndcg_at_3_max value: 32.4741 - type: nauc_ndcg_at_3_std value: 19.264 - type: nauc_ndcg_at_3_diff1 value: 26.1486 - type: nauc_ndcg_at_5_max value: 32.6623 - type: nauc_ndcg_at_5_std value: 21.435499999999998 - type: nauc_ndcg_at_5_diff1 value: 24.0412 - type: nauc_ndcg_at_10_max value: 33.217400000000005 - type: nauc_ndcg_at_10_std value: 22.591900000000003 - type: nauc_ndcg_at_10_diff1 value: 22.3637 - type: nauc_ndcg_at_20_max value: 33.3978 - type: nauc_ndcg_at_20_std value: 22.520200000000003 - type: nauc_ndcg_at_20_diff1 value: 22.0163 - type: nauc_ndcg_at_100_max value: 33.0608 - type: nauc_ndcg_at_100_std value: 20.4305 - type: nauc_ndcg_at_100_diff1 value: 21.1175 - type: nauc_ndcg_at_1000_max value: 38.198100000000004 - type: nauc_ndcg_at_1000_std value: 26.8712 - type: nauc_ndcg_at_1000_diff1 value: 22.78 - type: nauc_map_at_1_max value: 18.898300000000003 - type: nauc_map_at_1_std value: -11.0976 - type: nauc_map_at_1_diff1 value: 55.1605 - type: nauc_map_at_3_max value: 20.451800000000002 - type: nauc_map_at_3_std value: -12.0342 - type: nauc_map_at_3_diff1 value: 45.2096 - type: nauc_map_at_5_max value: 21.199 - type: nauc_map_at_5_std value: -9.8514 - type: nauc_map_at_5_diff1 value: 42.0142 - type: nauc_map_at_10_max value: 23.1645 - type: nauc_map_at_10_std value: -5.8333 - type: nauc_map_at_10_diff1 value: 38.048 - type: nauc_map_at_20_max value: 24.9482 - type: nauc_map_at_20_std value: -1.5368 - type: nauc_map_at_20_diff1 value: 36.241299999999995 - type: nauc_map_at_100_max value: 27.1413 - type: nauc_map_at_100_std value: 5.6268 - type: nauc_map_at_100_diff1 value: 33.3298 - type: nauc_map_at_1000_max value: 28.7674 - type: nauc_map_at_1000_std value: 10.9326 - type: nauc_map_at_1000_diff1 value: 31.700899999999997 - type: nauc_recall_at_1_max value: 18.898300000000003 - type: nauc_recall_at_1_std value: -11.0976 - type: nauc_recall_at_1_diff1 value: 55.1605 - type: nauc_recall_at_3_max value: 19.4721 - type: nauc_recall_at_3_std value: -13.496 - type: nauc_recall_at_3_diff1 value: 35.0178 - type: nauc_recall_at_5_max value: 19.5024 - type: nauc_recall_at_5_std value: -12.3428 - type: nauc_recall_at_5_diff1 value: 29.517 - type: nauc_recall_at_10_max value: 21.215500000000002 - type: nauc_recall_at_10_std value: -8.7165 - type: nauc_recall_at_10_diff1 value: 24.282 - type: nauc_recall_at_20_max value: 21.735 - type: nauc_recall_at_20_std value: -5.0988999999999995 - type: nauc_recall_at_20_diff1 value: 20.3041 - type: nauc_recall_at_100_max value: 19.9243 - type: nauc_recall_at_100_std value: 3.4522999999999997 - type: nauc_recall_at_100_diff1 value: 5.9747 - type: nauc_recall_at_1000_max value: 21.7819 - type: nauc_recall_at_1000_std value: 13.6785 - type: nauc_recall_at_1000_diff1 value: -0.25980000000000003 - type: nauc_precision_at_1_max value: 28.624899999999997 - type: nauc_precision_at_1_std value: 12.709599999999998 - type: nauc_precision_at_1_diff1 value: 33.308 - type: nauc_precision_at_3_max value: 35.1699 - type: nauc_precision_at_3_std value: 25.9338 - type: nauc_precision_at_3_diff1 value: 18.5464 - type: nauc_precision_at_5_max value: 33.4433 - type: nauc_precision_at_5_std value: 32.4517 - type: nauc_precision_at_5_diff1 value: 12.5543 - type: nauc_precision_at_10_max value: 32.3973 - type: nauc_precision_at_10_std value: 37.7554 - type: nauc_precision_at_10_diff1 value: 6.7227 - type: nauc_precision_at_20_max value: 31.591599999999996 - type: nauc_precision_at_20_std value: 44.658 - type: nauc_precision_at_20_diff1 value: 2.2702 - type: nauc_precision_at_100_max value: 25.163600000000002 - type: nauc_precision_at_100_std value: 51.7642 - type: nauc_precision_at_100_diff1 value: -4.8361 - type: nauc_precision_at_1000_max value: 20.2984 - type: nauc_precision_at_1000_std value: 49.0469 - type: nauc_precision_at_1000_diff1 value: -6.662700000000001 - type: nauc_mrr_at_1_max value: 28.624899999999997 - type: nauc_mrr_at_1_std value: 12.709599999999998 - type: nauc_mrr_at_1_diff1 value: 33.308 - type: nauc_mrr_at_3_max value: 32.3306 - type: nauc_mrr_at_3_std value: 18.1604 - type: nauc_mrr_at_3_diff1 value: 31.128600000000002 - type: nauc_mrr_at_5_max value: 32.0504 - type: nauc_mrr_at_5_std value: 18.3022 - type: nauc_mrr_at_5_diff1 value: 30.1868 - type: nauc_mrr_at_10_max value: 32.093500000000006 - type: nauc_mrr_at_10_std value: 18.348 - type: nauc_mrr_at_10_diff1 value: 30.2307 - type: nauc_mrr_at_20_max value: 32.3491 - type: nauc_mrr_at_20_std value: 18.309800000000003 - type: nauc_mrr_at_20_diff1 value: 30.0848 - type: nauc_mrr_at_100_max value: 32.5297 - type: nauc_mrr_at_100_std value: 18.4197 - type: nauc_mrr_at_100_diff1 value: 30.03 - type: nauc_mrr_at_1000_max value: 32.502700000000004 - type: nauc_mrr_at_1000_std value: 18.4073 - type: nauc_mrr_at_1000_diff1 value: 30.059599999999996 - type: main_score value: 26.434 - task: type: Retrieval dataset: name: MTEB NQ (default) type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: ndcg_at_1 value: 9.067 - type: ndcg_at_3 value: 13.33 - type: ndcg_at_5 value: 15.773000000000001 - type: ndcg_at_10 value: 18.239 - type: ndcg_at_20 value: 20.777 - type: ndcg_at_100 value: 25.046000000000003 - type: ndcg_at_1000 value: 27.814 - type: map_at_1 value: 8.007 - type: map_at_3 value: 11.732 - type: map_at_5 value: 13.095 - type: map_at_10 value: 14.127 - type: map_at_20 value: 14.860000000000001 - type: map_at_100 value: 15.467 - type: map_at_1000 value: 15.57 - type: recall_at_1 value: 8.007 - type: recall_at_3 value: 16.553 - type: recall_at_5 value: 22.282 - type: recall_at_10 value: 29.592000000000002 - type: recall_at_20 value: 39.134 - type: recall_at_100 value: 61.307 - type: recall_at_1000 value: 82.556 - type: precision_at_1 value: 9.067 - type: precision_at_3 value: 6.441 - type: precision_at_5 value: 5.220000000000001 - type: precision_at_10 value: 3.488 - type: precision_at_20 value: 2.329 - type: precision_at_100 value: 0.734 - type: precision_at_1000 value: 0.1 - type: mrr_at_1 value: 9.0672 - type: mrr_at_3 value: 13.1277 - type: mrr_at_5 value: 14.544199999999998 - type: mrr_at_10 value: 15.589400000000001 - type: mrr_at_20 value: 16.2651 - type: mrr_at_100 value: 16.8195 - type: mrr_at_1000 value: 16.902800000000003 - type: nauc_ndcg_at_1_max value: 11.3832 - type: nauc_ndcg_at_1_std value: -4.1221 - type: nauc_ndcg_at_1_diff1 value: 20.5341 - type: nauc_ndcg_at_3_max value: 11.4743 - type: nauc_ndcg_at_3_std value: -4.4418 - type: nauc_ndcg_at_3_diff1 value: 16.481 - type: nauc_ndcg_at_5_max value: 12.6479 - type: nauc_ndcg_at_5_std value: -4.5466 - type: nauc_ndcg_at_5_diff1 value: 15.1785 - type: nauc_ndcg_at_10_max value: 14.3237 - type: nauc_ndcg_at_10_std value: -4.4135 - type: nauc_ndcg_at_10_diff1 value: 14.6574 - type: nauc_ndcg_at_20_max value: 15.717300000000002 - type: nauc_ndcg_at_20_std value: -3.0106 - type: nauc_ndcg_at_20_diff1 value: 14.6044 - type: nauc_ndcg_at_100_max value: 17.5878 - type: nauc_ndcg_at_100_std value: -0.36519999999999997 - type: nauc_ndcg_at_100_diff1 value: 14.5606 - type: nauc_ndcg_at_1000_max value: 17.5657 - type: nauc_ndcg_at_1000_std value: 1.1903000000000001 - type: nauc_ndcg_at_1000_diff1 value: 14.5654 - type: nauc_map_at_1_max value: 10.2386 - type: nauc_map_at_1_std value: -4.9847 - type: nauc_map_at_1_diff1 value: 20.9545 - type: nauc_map_at_3_max value: 10.9023 - type: nauc_map_at_3_std value: -4.8369 - type: nauc_map_at_3_diff1 value: 17.5991 - type: nauc_map_at_5_max value: 11.7413 - type: nauc_map_at_5_std value: -4.9516 - type: nauc_map_at_5_diff1 value: 16.7798 - type: nauc_map_at_10_max value: 12.6051 - type: nauc_map_at_10_std value: -4.9007000000000005 - type: nauc_map_at_10_diff1 value: 16.4911 - type: nauc_map_at_20_max value: 13.1256 - type: nauc_map_at_20_std value: -4.4518 - type: nauc_map_at_20_diff1 value: 16.4184 - type: nauc_map_at_100_max value: 13.4467 - type: nauc_map_at_100_std value: -3.9765 - type: nauc_map_at_100_diff1 value: 16.4427 - type: nauc_map_at_1000_max value: 13.452 - type: nauc_map_at_1000_std value: -3.8988 - type: nauc_map_at_1000_diff1 value: 16.4438 - type: nauc_recall_at_1_max value: 10.2386 - type: nauc_recall_at_1_std value: -4.9847 - type: nauc_recall_at_1_diff1 value: 20.9545 - type: nauc_recall_at_3_max value: 11.843399999999999 - type: nauc_recall_at_3_std value: -4.3091 - type: nauc_recall_at_3_diff1 value: 14.285999999999998 - type: nauc_recall_at_5_max value: 13.5182 - type: nauc_recall_at_5_std value: -4.417800000000001 - type: nauc_recall_at_5_diff1 value: 12.1453 - type: nauc_recall_at_10_max value: 17.0065 - type: nauc_recall_at_10_std value: -4.252000000000001 - type: nauc_recall_at_10_diff1 value: 11.457199999999998 - type: nauc_recall_at_20_max value: 20.3871 - type: nauc_recall_at_20_std value: -0.7614 - type: nauc_recall_at_20_diff1 value: 11.5536 - type: nauc_recall_at_100_max value: 28.3368 - type: nauc_recall_at_100_std value: 9.5722 - type: nauc_recall_at_100_diff1 value: 10.7211 - type: nauc_recall_at_1000_max value: 37.0782 - type: nauc_recall_at_1000_std value: 31.6326 - type: nauc_recall_at_1000_diff1 value: 8.82 - type: nauc_precision_at_1_max value: 11.3832 - type: nauc_precision_at_1_std value: -4.1221 - type: nauc_precision_at_1_diff1 value: 20.5341 - type: nauc_precision_at_3_max value: 12.951099999999999 - type: nauc_precision_at_3_std value: -3.4715999999999996 - type: nauc_precision_at_3_diff1 value: 14.0988 - type: nauc_precision_at_5_max value: 14.8679 - type: nauc_precision_at_5_std value: -3.9043 - type: nauc_precision_at_5_diff1 value: 11.9479 - type: nauc_precision_at_10_max value: 18.0976 - type: nauc_precision_at_10_std value: -3.1489999999999996 - type: nauc_precision_at_10_diff1 value: 10.7419 - type: nauc_precision_at_20_max value: 20.4974 - type: nauc_precision_at_20_std value: 1.2608 - type: nauc_precision_at_20_diff1 value: 9.8315 - type: nauc_precision_at_100_max value: 24.1911 - type: nauc_precision_at_100_std value: 11.971400000000001 - type: nauc_precision_at_100_diff1 value: 7.0899 - type: nauc_precision_at_1000_max value: 20.2919 - type: nauc_precision_at_1000_std value: 23.0171 - type: nauc_precision_at_1000_diff1 value: 1.4091 - type: nauc_mrr_at_1_max value: 11.3832 - type: nauc_mrr_at_1_std value: -4.1221 - type: nauc_mrr_at_1_diff1 value: 20.5341 - type: nauc_mrr_at_3_max value: 11.7865 - type: nauc_mrr_at_3_std value: -3.6935999999999996 - type: nauc_mrr_at_3_diff1 value: 16.8127 - type: nauc_mrr_at_5_max value: 12.518199999999998 - type: nauc_mrr_at_5_std value: -3.7152 - type: nauc_mrr_at_5_diff1 value: 15.893699999999999 - type: nauc_mrr_at_10_max value: 13.1787 - type: nauc_mrr_at_10_std value: -3.6301 - type: nauc_mrr_at_10_diff1 value: 15.617500000000001 - type: nauc_mrr_at_20_max value: 13.529399999999999 - type: nauc_mrr_at_20_std value: -3.1929 - type: nauc_mrr_at_20_diff1 value: 15.6602 - type: nauc_mrr_at_100_max value: 13.770199999999999 - type: nauc_mrr_at_100_std value: -2.9103 - type: nauc_mrr_at_100_diff1 value: 15.6841 - type: nauc_mrr_at_1000_max value: 13.7598 - type: nauc_mrr_at_1000_std value: -2.8705000000000003 - type: nauc_mrr_at_1000_diff1 value: 15.6886 - type: main_score value: 18.239 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval (default) type: mteb/quora config: default split: test revision: e4e08e0b7dbe3c8700f0daef558ff32256715259 metrics: - type: ndcg_at_1 value: 72.39 - type: ndcg_at_3 value: 76.303 - type: ndcg_at_5 value: 78.164 - type: ndcg_at_10 value: 79.946 - type: ndcg_at_20 value: 80.963 - type: ndcg_at_100 value: 82.086 - type: ndcg_at_1000 value: 82.494 - type: map_at_1 value: 62.965 - type: map_at_3 value: 72.429 - type: map_at_5 value: 74.246 - type: map_at_10 value: 75.414 - type: map_at_20 value: 75.87899999999999 - type: map_at_100 value: 76.164 - type: map_at_1000 value: 76.198 - type: recall_at_1 value: 62.965 - type: recall_at_3 value: 78.39 - type: recall_at_5 value: 83.506 - type: recall_at_10 value: 88.787 - type: recall_at_20 value: 92.223 - type: recall_at_100 value: 96.98 - type: recall_at_1000 value: 99.30099999999999 - type: precision_at_1 value: 72.39 - type: precision_at_3 value: 33.040000000000006 - type: precision_at_5 value: 21.884 - type: precision_at_10 value: 12.084999999999999 - type: precision_at_20 value: 6.49 - type: precision_at_100 value: 1.444 - type: precision_at_1000 value: 0.154 - type: mrr_at_1 value: 72.39 - type: mrr_at_3 value: 77.9883 - type: mrr_at_5 value: 78.8933 - type: mrr_at_10 value: 79.443 - type: mrr_at_20 value: 79.6218 - type: mrr_at_100 value: 79.7045 - type: mrr_at_1000 value: 79.7112 - type: nauc_ndcg_at_1_max value: 43.343199999999996 - type: nauc_ndcg_at_1_std value: -15.6476 - type: nauc_ndcg_at_1_diff1 value: 74.5603 - type: nauc_ndcg_at_3_max value: 41.4951 - type: nauc_ndcg_at_3_std value: -18.006 - type: nauc_ndcg_at_3_diff1 value: 71.4871 - type: nauc_ndcg_at_5_max value: 41.665 - type: nauc_ndcg_at_5_std value: -18.2802 - type: nauc_ndcg_at_5_diff1 value: 71.31060000000001 - type: nauc_ndcg_at_10_max value: 41.9766 - type: nauc_ndcg_at_10_std value: -17.1129 - type: nauc_ndcg_at_10_diff1 value: 71.4114 - type: nauc_ndcg_at_20_max value: 42.3933 - type: nauc_ndcg_at_20_std value: -16.8854 - type: nauc_ndcg_at_20_diff1 value: 71.5046 - type: nauc_ndcg_at_100_max value: 42.7267 - type: nauc_ndcg_at_100_std value: -15.7841 - type: nauc_ndcg_at_100_diff1 value: 71.7294 - type: nauc_ndcg_at_1000_max value: 42.770799999999994 - type: nauc_ndcg_at_1000_std value: -15.8694 - type: nauc_ndcg_at_1000_diff1 value: 71.8391 - type: nauc_map_at_1_max value: 34.103899999999996 - type: nauc_map_at_1_std value: -17.6429 - type: nauc_map_at_1_diff1 value: 74.37780000000001 - type: nauc_map_at_3_max value: 39.3622 - type: nauc_map_at_3_std value: -19.3706 - type: nauc_map_at_3_diff1 value: 72.3035 - type: nauc_map_at_5_max value: 40.3833 - type: nauc_map_at_5_std value: -19.126099999999997 - type: nauc_map_at_5_diff1 value: 71.99950000000001 - type: nauc_map_at_10_max value: 40.8837 - type: nauc_map_at_10_std value: -18.34 - type: nauc_map_at_10_diff1 value: 71.92150000000001 - type: nauc_map_at_20_max value: 41.14 - type: nauc_map_at_20_std value: -18.01 - type: nauc_map_at_20_diff1 value: 71.85629999999999 - type: nauc_map_at_100_max value: 41.2511 - type: nauc_map_at_100_std value: -17.6727 - type: nauc_map_at_100_diff1 value: 71.8731 - type: nauc_map_at_1000_max value: 41.2569 - type: nauc_map_at_1000_std value: -17.6477 - type: nauc_map_at_1000_diff1 value: 71.8801 - type: nauc_recall_at_1_max value: 34.103899999999996 - type: nauc_recall_at_1_std value: -17.6429 - type: nauc_recall_at_1_diff1 value: 74.37780000000001 - type: nauc_recall_at_3_max value: 37.4459 - type: nauc_recall_at_3_std value: -21.2405 - type: nauc_recall_at_3_diff1 value: 68.2773 - type: nauc_recall_at_5_max value: 38.5924 - type: nauc_recall_at_5_std value: -21.644 - type: nauc_recall_at_5_diff1 value: 66.3095 - type: nauc_recall_at_10_max value: 39.3957 - type: nauc_recall_at_10_std value: -17.0364 - type: nauc_recall_at_10_diff1 value: 64.8501 - type: nauc_recall_at_20_max value: 40.325 - type: nauc_recall_at_20_std value: -15.4228 - type: nauc_recall_at_20_diff1 value: 63.5063 - type: nauc_recall_at_100_max value: 43.7134 - type: nauc_recall_at_100_std value: 3.7923 - type: nauc_recall_at_100_diff1 value: 63.7613 - type: nauc_recall_at_1000_max value: 53.65180000000001 - type: nauc_recall_at_1000_std value: 35.6561 - type: nauc_recall_at_1000_diff1 value: 65.9936 - type: nauc_precision_at_1_max value: 43.343199999999996 - type: nauc_precision_at_1_std value: -15.6476 - type: nauc_precision_at_1_diff1 value: 74.5603 - type: nauc_precision_at_3_max value: 21.8142 - type: nauc_precision_at_3_std value: -1.1627999999999998 - type: nauc_precision_at_3_diff1 value: 9.954 - type: nauc_precision_at_5_max value: 15.2041 - type: nauc_precision_at_5_std value: 4.2947 - type: nauc_precision_at_5_diff1 value: -5.305 - type: nauc_precision_at_10_max value: 8.163499999999999 - type: nauc_precision_at_10_std value: 10.9367 - type: nauc_precision_at_10_diff1 value: -18.0036 - type: nauc_precision_at_20_max value: 3.5585 - type: nauc_precision_at_20_std value: 14.5351 - type: nauc_precision_at_20_diff1 value: -25.249700000000004 - type: nauc_precision_at_100_max value: -3.0063 - type: nauc_precision_at_100_std value: 19.791700000000002 - type: nauc_precision_at_100_diff1 value: -32.281 - type: nauc_precision_at_1000_max value: -6.468100000000001 - type: nauc_precision_at_1000_std value: 20.025100000000002 - type: nauc_precision_at_1000_diff1 value: -34.4531 - type: nauc_mrr_at_1_max value: 43.2621 - type: nauc_mrr_at_1_std value: -15.864 - type: nauc_mrr_at_1_diff1 value: 74.5603 - type: nauc_mrr_at_3_max value: 43.8197 - type: nauc_mrr_at_3_std value: -16.1674 - type: nauc_mrr_at_3_diff1 value: 72.9802 - type: nauc_mrr_at_5_max value: 43.9843 - type: nauc_mrr_at_5_std value: -16.042 - type: nauc_mrr_at_5_diff1 value: 72.907 - type: nauc_mrr_at_10_max value: 44.0294 - type: nauc_mrr_at_10_std value: -15.711500000000001 - type: nauc_mrr_at_10_diff1 value: 72.9915 - type: nauc_mrr_at_20_max value: 44.044200000000004 - type: nauc_mrr_at_20_std value: -15.7842 - type: nauc_mrr_at_20_diff1 value: 73.0535 - type: nauc_mrr_at_100_max value: 44.0194 - type: nauc_mrr_at_100_std value: -15.7612 - type: nauc_mrr_at_100_diff1 value: 73.0738 - type: nauc_mrr_at_1000_max value: 44.0187 - type: nauc_mrr_at_1000_std value: -15.764100000000001 - type: nauc_mrr_at_1000_diff1 value: 73.0758 - type: main_score value: 79.946 - task: type: Clustering dataset: name: MTEB RedditClustering (default) type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 20.2171 - type: v_measure_std value: 4.4216 - type: main_score value: 20.2171 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P (default) type: mteb/reddit-clustering-p2p config: default split: test revision: 385e3cb46b4cfa89021f56c4380204149d0efe33 metrics: - type: v_measure value: 38.8882 - type: v_measure_std value: 9.315 - type: main_score value: 38.8882 - task: type: Retrieval dataset: name: MTEB SCIDOCS (default) type: mteb/scidocs config: default split: test revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88 metrics: - type: ndcg_at_1 value: 15.1 - type: ndcg_at_3 value: 12.036 - type: ndcg_at_5 value: 11.007 - type: ndcg_at_10 value: 13.352 - type: ndcg_at_20 value: 15.6 - type: ndcg_at_100 value: 19.871 - type: ndcg_at_1000 value: 25.255 - type: map_at_1 value: 3.058 - type: map_at_3 value: 5.268 - type: map_at_5 value: 6.406000000000001 - type: map_at_10 value: 7.478 - type: map_at_20 value: 8.21 - type: map_at_100 value: 8.946 - type: map_at_1000 value: 9.223 - type: recall_at_1 value: 3.058 - type: recall_at_3 value: 6.793 - type: recall_at_5 value: 10.003 - type: recall_at_10 value: 14.288 - type: recall_at_20 value: 19.542 - type: recall_at_100 value: 33.413 - type: recall_at_1000 value: 59.733000000000004 - type: precision_at_1 value: 15.1 - type: precision_at_3 value: 11.167 - type: precision_at_5 value: 9.879999999999999 - type: precision_at_10 value: 7.07 - type: precision_at_20 value: 4.825 - type: precision_at_100 value: 1.649 - type: precision_at_1000 value: 0.294 - type: mrr_at_1 value: 15.1 - type: mrr_at_3 value: 20.2833 - type: mrr_at_5 value: 22.4733 - type: mrr_at_10 value: 23.6601 - type: mrr_at_20 value: 24.3772 - type: mrr_at_100 value: 24.9007 - type: mrr_at_1000 value: 24.9743 - type: nauc_ndcg_at_1_max value: 18.8537 - type: nauc_ndcg_at_1_std value: -3.2037000000000004 - type: nauc_ndcg_at_1_diff1 value: 20.8288 - type: nauc_ndcg_at_3_max value: 15.3817 - type: nauc_ndcg_at_3_std value: -3.2159 - type: nauc_ndcg_at_3_diff1 value: 18.13 - type: nauc_ndcg_at_5_max value: 17.940900000000003 - type: nauc_ndcg_at_5_std value: 0.3294 - type: nauc_ndcg_at_5_diff1 value: 16.9378 - type: nauc_ndcg_at_10_max value: 21.146 - type: nauc_ndcg_at_10_std value: 2.6954 - type: nauc_ndcg_at_10_diff1 value: 15.363399999999999 - type: nauc_ndcg_at_20_max value: 21.9075 - type: nauc_ndcg_at_20_std value: 4.9554 - type: nauc_ndcg_at_20_diff1 value: 15.4857 - type: nauc_ndcg_at_100_max value: 22.9248 - type: nauc_ndcg_at_100_std value: 8.8094 - type: nauc_ndcg_at_100_diff1 value: 15.1255 - type: nauc_ndcg_at_1000_max value: 24.7883 - type: nauc_ndcg_at_1000_std value: 13.3551 - type: nauc_ndcg_at_1000_diff1 value: 15.1244 - type: nauc_map_at_1_max value: 19.238 - type: nauc_map_at_1_std value: -2.9537 - type: nauc_map_at_1_diff1 value: 21.3456 - type: nauc_map_at_3_max value: 16.0914 - type: nauc_map_at_3_std value: -4.2357 - type: nauc_map_at_3_diff1 value: 17.1314 - type: nauc_map_at_5_max value: 17.9317 - type: nauc_map_at_5_std value: -1.2885 - type: nauc_map_at_5_diff1 value: 15.5052 - type: nauc_map_at_10_max value: 20.1204 - type: nauc_map_at_10_std value: 0.29109999999999997 - type: nauc_map_at_10_diff1 value: 14.513200000000001 - type: nauc_map_at_20_max value: 20.6688 - type: nauc_map_at_20_std value: 1.6063 - type: nauc_map_at_20_diff1 value: 14.934800000000001 - type: nauc_map_at_100_max value: 21.2455 - type: nauc_map_at_100_std value: 3.1651 - type: nauc_map_at_100_diff1 value: 14.6507 - type: nauc_map_at_1000_max value: 21.4903 - type: nauc_map_at_1000_std value: 3.7647 - type: nauc_map_at_1000_diff1 value: 14.6354 - type: nauc_recall_at_1_max value: 19.238 - type: nauc_recall_at_1_std value: -2.9537 - type: nauc_recall_at_1_diff1 value: 21.3456 - type: nauc_recall_at_3_max value: 14.5564 - type: nauc_recall_at_3_std value: -3.2211 - type: nauc_recall_at_3_diff1 value: 17.0505 - type: nauc_recall_at_5_max value: 18.159200000000002 - type: nauc_recall_at_5_std value: 2.6766 - type: nauc_recall_at_5_diff1 value: 14.7598 - type: nauc_recall_at_10_max value: 23.6071 - type: nauc_recall_at_10_std value: 6.6582 - type: nauc_recall_at_10_diff1 value: 11.7647 - type: nauc_recall_at_20_max value: 23.5471 - type: nauc_recall_at_20_std value: 10.6906 - type: nauc_recall_at_20_diff1 value: 11.5654 - type: nauc_recall_at_100_max value: 23.2746 - type: nauc_recall_at_100_std value: 18.3139 - type: nauc_recall_at_100_diff1 value: 10.2364 - type: nauc_recall_at_1000_max value: 27.2333 - type: nauc_recall_at_1000_std value: 32.5351 - type: nauc_recall_at_1000_diff1 value: 8.7211 - type: nauc_precision_at_1_max value: 18.8537 - type: nauc_precision_at_1_std value: -3.2037000000000004 - type: nauc_precision_at_1_diff1 value: 20.8288 - type: nauc_precision_at_3_max value: 14.260200000000001 - type: nauc_precision_at_3_std value: -3.1767 - type: nauc_precision_at_3_diff1 value: 16.9826 - type: nauc_precision_at_5_max value: 17.999399999999998 - type: nauc_precision_at_5_std value: 2.7119999999999997 - type: nauc_precision_at_5_diff1 value: 14.685300000000002 - type: nauc_precision_at_10_max value: 23.5629 - type: nauc_precision_at_10_std value: 6.7014000000000005 - type: nauc_precision_at_10_diff1 value: 11.6848 - type: nauc_precision_at_20_max value: 23.1819 - type: nauc_precision_at_20_std value: 10.478 - type: nauc_precision_at_20_diff1 value: 11.6263 - type: nauc_precision_at_100_max value: 22.7954 - type: nauc_precision_at_100_std value: 18.215500000000002 - type: nauc_precision_at_100_diff1 value: 10.526299999999999 - type: nauc_precision_at_1000_max value: 26.4283 - type: nauc_precision_at_1000_std value: 31.9492 - type: nauc_precision_at_1000_diff1 value: 9.031799999999999 - type: nauc_mrr_at_1_max value: 18.8537 - type: nauc_mrr_at_1_std value: -3.2037000000000004 - type: nauc_mrr_at_1_diff1 value: 20.8288 - type: nauc_mrr_at_3_max value: 16.253500000000003 - type: nauc_mrr_at_3_std value: -2.3413 - type: nauc_mrr_at_3_diff1 value: 20.333399999999997 - type: nauc_mrr_at_5_max value: 17.2285 - type: nauc_mrr_at_5_std value: -0.5249 - type: nauc_mrr_at_5_diff1 value: 20.119 - type: nauc_mrr_at_10_max value: 18.351100000000002 - type: nauc_mrr_at_10_std value: 0.0489 - type: nauc_mrr_at_10_diff1 value: 19.711000000000002 - type: nauc_mrr_at_20_max value: 18.409100000000002 - type: nauc_mrr_at_20_std value: 0.41079999999999994 - type: nauc_mrr_at_20_diff1 value: 19.5248 - type: nauc_mrr_at_100_max value: 18.404799999999998 - type: nauc_mrr_at_100_std value: 0.4336 - type: nauc_mrr_at_100_diff1 value: 19.5129 - type: nauc_mrr_at_1000_max value: 18.3706 - type: nauc_mrr_at_1000_std value: 0.41529999999999995 - type: nauc_mrr_at_1000_diff1 value: 19.5103 - type: main_score value: 13.352 - task: type: STS dataset: name: MTEB SICK-R (default) type: mteb/sickr-sts config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: pearson value: 73.39529999999999 - type: spearman value: 63.871599999999994 - type: cosine_pearson value: 73.39529999999999 - type: cosine_spearman value: 63.871500000000005 - type: manhattan_pearson value: 62.5861 - type: manhattan_spearman value: 56.714600000000004 - type: euclidean_pearson value: 62.606899999999996 - type: euclidean_spearman value: 56.714200000000005 - type: main_score value: 63.871500000000005 - task: type: STS dataset: name: MTEB STS12 (default) type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: pearson value: 72.35770000000001 - type: spearman value: 63.606899999999996 - type: cosine_pearson value: 72.35770000000001 - type: cosine_spearman value: 63.610299999999995 - type: manhattan_pearson value: 59.8404 - type: manhattan_spearman value: 56.85059999999999 - type: euclidean_pearson value: 59.8116 - type: euclidean_spearman value: 56.691 - type: main_score value: 63.610299999999995 - task: type: STS dataset: name: MTEB STS13 (default) type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: pearson value: 76.4727 - type: spearman value: 76.983 - type: cosine_pearson value: 76.4727 - type: cosine_spearman value: 76.983 - type: manhattan_pearson value: 49.4803 - type: manhattan_spearman value: 51.1301 - type: euclidean_pearson value: 49.4542 - type: euclidean_spearman value: 51.19669999999999 - type: main_score value: 76.983 - task: type: STS dataset: name: MTEB STS14 (default) type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: pearson value: 75.777 - type: spearman value: 71.2099 - type: cosine_pearson value: 75.777 - type: cosine_spearman value: 71.2099 - type: manhattan_pearson value: 52.475899999999996 - type: manhattan_spearman value: 53.8072 - type: euclidean_pearson value: 52.416799999999995 - type: euclidean_spearman value: 53.725500000000004 - type: main_score value: 71.2099 - task: type: STS dataset: name: MTEB STS15 (default) type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: pearson value: 80.1072 - type: spearman value: 80.735 - type: cosine_pearson value: 80.1072 - type: cosine_spearman value: 80.7349 - type: manhattan_pearson value: 50.711600000000004 - type: manhattan_spearman value: 53.491299999999995 - type: euclidean_pearson value: 50.6255 - type: euclidean_spearman value: 53.47539999999999 - type: main_score value: 80.7349 - task: type: STS dataset: name: MTEB STS16 (default) type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: pearson value: 73.1658 - type: spearman value: 74.2121 - type: cosine_pearson value: 73.1658 - type: cosine_spearman value: 74.2121 - type: manhattan_pearson value: 43.4074 - type: manhattan_spearman value: 47.193200000000004 - type: euclidean_pearson value: 43.438300000000005 - type: euclidean_spearman value: 47.2757 - type: main_score value: 74.2121 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: pearson value: 81.8156 - type: spearman value: 81.9457 - type: cosine_pearson value: 81.8156 - type: cosine_spearman value: 81.9457 - type: manhattan_pearson value: 59.4332 - type: manhattan_spearman value: 60.5687 - type: euclidean_pearson value: 59.2942 - type: euclidean_spearman value: 60.39679999999999 - type: main_score value: 81.9457 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: pearson value: 48.9285 - type: spearman value: 55.862500000000004 - type: cosine_pearson value: 48.9285 - type: cosine_spearman value: 55.862500000000004 - type: manhattan_pearson value: 43.082300000000004 - type: manhattan_spearman value: 51.1876 - type: euclidean_pearson value: 43.2313 - type: euclidean_spearman value: 51.094899999999996 - type: main_score value: 55.862500000000004 - task: type: STS dataset: name: MTEB STSBenchmark (default) type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: pearson value: 73.44380000000001 - type: spearman value: 71.9343 - type: cosine_pearson value: 73.44380000000001 - type: cosine_spearman value: 71.9345 - type: manhattan_pearson value: 52.233799999999995 - type: manhattan_spearman value: 51.7687 - type: euclidean_pearson value: 52.2753 - type: euclidean_spearman value: 51.845 - type: main_score value: 71.9345 - task: type: Reranking dataset: name: MTEB SciDocsRR (default) type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 71.4557 - type: mrr value: 90.6219 - type: nAUC_map_max value: 54.74830000000001 - type: nAUC_map_std value: 65.2558 - type: nAUC_map_diff1 value: 10.2936 - type: nAUC_mrr_max value: 75.10900000000001 - type: nAUC_mrr_std value: 69.6523 - type: nAUC_mrr_diff1 value: 49.4991 - type: main_score value: 71.4557 - task: type: Retrieval dataset: name: MTEB SciFact (default) type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: ndcg_at_1 value: 43.667 - type: ndcg_at_3 value: 52.102000000000004 - type: ndcg_at_5 value: 54.751000000000005 - type: ndcg_at_10 value: 57.422 - type: ndcg_at_20 value: 59.425 - type: ndcg_at_100 value: 61.166 - type: ndcg_at_1000 value: 62.244 - type: map_at_1 value: 41.888999999999996 - type: map_at_3 value: 49.435 - type: map_at_5 value: 51.029 - type: map_at_10 value: 52.190000000000005 - type: map_at_20 value: 52.797000000000004 - type: map_at_100 value: 53.03 - type: map_at_1000 value: 53.069 - type: recall_at_1 value: 41.888999999999996 - type: recall_at_3 value: 57.916999999999994 - type: recall_at_5 value: 64.372 - type: recall_at_10 value: 72.311 - type: recall_at_20 value: 79.97800000000001 - type: recall_at_100 value: 89.333 - type: recall_at_1000 value: 97.867 - type: precision_at_1 value: 43.667 - type: precision_at_3 value: 20.778 - type: precision_at_5 value: 14.066999999999998 - type: precision_at_10 value: 8.033 - type: precision_at_20 value: 4.45 - type: precision_at_100 value: 1.0030000000000001 - type: precision_at_1000 value: 0.11 - type: mrr_at_1 value: 43.666700000000006 - type: mrr_at_3 value: 50.9444 - type: mrr_at_5 value: 52.3444 - type: mrr_at_10 value: 53.3852 - type: mrr_at_20 value: 53.8864 - type: mrr_at_100 value: 54.0887 - type: mrr_at_1000 value: 54.11749999999999 - type: nauc_ndcg_at_1_max value: 36.6444 - type: nauc_ndcg_at_1_std value: -7.4722 - type: nauc_ndcg_at_1_diff1 value: 63.631099999999996 - type: nauc_ndcg_at_3_max value: 37.2859 - type: nauc_ndcg_at_3_std value: -11.2775 - type: nauc_ndcg_at_3_diff1 value: 56.352999999999994 - type: nauc_ndcg_at_5_max value: 36.7832 - type: nauc_ndcg_at_5_std value: -12.310699999999999 - type: nauc_ndcg_at_5_diff1 value: 55.41740000000001 - type: nauc_ndcg_at_10_max value: 37.9586 - type: nauc_ndcg_at_10_std value: -9.7483 - type: nauc_ndcg_at_10_diff1 value: 56.8082 - type: nauc_ndcg_at_20_max value: 38.4072 - type: nauc_ndcg_at_20_std value: -7.473299999999999 - type: nauc_ndcg_at_20_diff1 value: 56.4974 - type: nauc_ndcg_at_100_max value: 38.5583 - type: nauc_ndcg_at_100_std value: -5.521100000000001 - type: nauc_ndcg_at_100_diff1 value: 56.8808 - type: nauc_ndcg_at_1000_max value: 38.580999999999996 - type: nauc_ndcg_at_1000_std value: -6.6578 - type: nauc_ndcg_at_1000_diff1 value: 57.3412 - type: nauc_map_at_1_max value: 35.4069 - type: nauc_map_at_1_std value: -11.9598 - type: nauc_map_at_1_diff1 value: 62.351299999999995 - type: nauc_map_at_3_max value: 36.3612 - type: nauc_map_at_3_std value: -12.6999 - type: nauc_map_at_3_diff1 value: 57.918099999999995 - type: nauc_map_at_5_max value: 36.268299999999996 - type: nauc_map_at_5_std value: -12.921199999999999 - type: nauc_map_at_5_diff1 value: 57.496 - type: nauc_map_at_10_max value: 36.918099999999995 - type: nauc_map_at_10_std value: -11.6299 - type: nauc_map_at_10_diff1 value: 58.1148 - type: nauc_map_at_20_max value: 37.060900000000004 - type: nauc_map_at_20_std value: -10.8228 - type: nauc_map_at_20_diff1 value: 58.0205 - type: nauc_map_at_100_max value: 37.085499999999996 - type: nauc_map_at_100_std value: -10.5358 - type: nauc_map_at_100_diff1 value: 58.095 - type: nauc_map_at_1000_max value: 37.1083 - type: nauc_map_at_1000_std value: -10.5578 - type: nauc_map_at_1000_diff1 value: 58.1224 - type: nauc_recall_at_1_max value: 35.4069 - type: nauc_recall_at_1_std value: -11.9598 - type: nauc_recall_at_1_diff1 value: 62.351299999999995 - type: nauc_recall_at_3_max value: 37.6511 - type: nauc_recall_at_3_std value: -13.3993 - type: nauc_recall_at_3_diff1 value: 50.4572 - type: nauc_recall_at_5_max value: 35.8548 - type: nauc_recall_at_5_std value: -16.1098 - type: nauc_recall_at_5_diff1 value: 47.2106 - type: nauc_recall_at_10_max value: 38.9793 - type: nauc_recall_at_10_std value: -8.1869 - type: nauc_recall_at_10_diff1 value: 50.5379 - type: nauc_recall_at_20_max value: 42.3127 - type: nauc_recall_at_20_std value: 4.1918999999999995 - type: nauc_recall_at_20_diff1 value: 47.5366 - type: nauc_recall_at_100_max value: 48.4392 - type: nauc_recall_at_100_std value: 37.5486 - type: nauc_recall_at_100_diff1 value: 46.853699999999996 - type: nauc_recall_at_1000_max value: 70.1389 - type: nauc_recall_at_1000_std value: 81.7519 - type: nauc_recall_at_1000_diff1 value: 46.0741 - type: nauc_precision_at_1_max value: 36.6444 - type: nauc_precision_at_1_std value: -7.4722 - type: nauc_precision_at_1_diff1 value: 63.631099999999996 - type: nauc_precision_at_3_max value: 37.9141 - type: nauc_precision_at_3_std value: -2.6281 - type: nauc_precision_at_3_diff1 value: 45.406600000000005 - type: nauc_precision_at_5_max value: 35.0402 - type: nauc_precision_at_5_std value: 0.7128 - type: nauc_precision_at_5_diff1 value: 36.686099999999996 - type: nauc_precision_at_10_max value: 37.4825 - type: nauc_precision_at_10_std value: 15.613199999999999 - type: nauc_precision_at_10_diff1 value: 33.1716 - type: nauc_precision_at_20_max value: 36.1575 - type: nauc_precision_at_20_std value: 30.4446 - type: nauc_precision_at_20_diff1 value: 23.3224 - type: nauc_precision_at_100_max value: 29.5019 - type: nauc_precision_at_100_std value: 52.942 - type: nauc_precision_at_100_diff1 value: 9.0284 - type: nauc_precision_at_1000_max value: 20.350099999999998 - type: nauc_precision_at_1000_std value: 52.2915 - type: nauc_precision_at_1000_diff1 value: -8.6009 - type: nauc_mrr_at_1_max value: 36.6444 - type: nauc_mrr_at_1_std value: -7.4722 - type: nauc_mrr_at_1_diff1 value: 63.631099999999996 - type: nauc_mrr_at_3_max value: 38.016299999999994 - type: nauc_mrr_at_3_std value: -8.0229 - type: nauc_mrr_at_3_diff1 value: 58.757400000000004 - type: nauc_mrr_at_5_max value: 37.433899999999994 - type: nauc_mrr_at_5_std value: -8.1996 - type: nauc_mrr_at_5_diff1 value: 58.235899999999994 - type: nauc_mrr_at_10_max value: 37.7997 - type: nauc_mrr_at_10_std value: -7.542699999999999 - type: nauc_mrr_at_10_diff1 value: 58.8486 - type: nauc_mrr_at_20_max value: 37.8879 - type: nauc_mrr_at_20_std value: -7.133000000000001 - type: nauc_mrr_at_20_diff1 value: 58.834900000000005 - type: nauc_mrr_at_100_max value: 37.8627 - type: nauc_mrr_at_100_std value: -6.9667 - type: nauc_mrr_at_100_diff1 value: 58.880900000000004 - type: nauc_mrr_at_1000_max value: 37.8675 - type: nauc_mrr_at_1000_std value: -6.9817 - type: nauc_mrr_at_1000_diff1 value: 58.904500000000006 - type: main_score value: 57.422 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions (default) type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: similarity_accuracy value: 99.6703 - type: similarity_accuracy_threshold value: 81.69669999999999 - type: similarity_f1 value: 82.5479 - type: similarity_f1_threshold value: 80.97919999999999 - type: similarity_precision value: 85.6069 - type: similarity_recall value: 79.7 - type: similarity_ap value: 87.6918 - type: cosine_accuracy value: 99.6703 - type: cosine_accuracy_threshold value: 81.69669999999999 - type: cosine_f1 value: 82.5479 - type: cosine_f1_threshold value: 80.97919999999999 - type: cosine_precision value: 85.6069 - type: cosine_recall value: 79.7 - type: cosine_ap value: 87.6918 - type: manhattan_accuracy value: 99.4327 - type: manhattan_accuracy_threshold value: 2292.4838999999997 - type: manhattan_f1 value: 66.0851 - type: manhattan_f1_threshold value: 2517.333 - type: manhattan_precision value: 72.6619 - type: manhattan_recall value: 60.6 - type: manhattan_ap value: 68.1683 - type: euclidean_accuracy value: 99.4327 - type: euclidean_accuracy_threshold value: 105.6427 - type: euclidean_f1 value: 66.1605 - type: euclidean_f1_threshold value: 114.9346 - type: euclidean_precision value: 72.2749 - type: euclidean_recall value: 61.0 - type: euclidean_ap value: 68.2419 - type: dot_accuracy value: 99.0168 - type: dot_accuracy_threshold value: 1011.5417000000001 - type: dot_f1 value: 18.6459 - type: dot_f1_threshold value: 554.0581999999999 - type: dot_precision value: 20.9476 - type: dot_recall value: 16.8 - type: dot_ap value: 11.5838 - type: max_accuracy value: 99.6703 - type: max_f1 value: 82.5479 - type: max_precision value: 85.6069 - type: max_recall value: 79.7 - type: max_ap value: 87.6918 - type: main_score value: 87.6918 - task: type: Clustering dataset: name: MTEB StackExchangeClustering (default) type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 27.147700000000004 - type: v_measure_std value: 4.3151 - type: main_score value: 27.147700000000004 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P (default) type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 28.9253 - type: v_measure_std value: 1.6500000000000001 - type: main_score value: 28.9253 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions (default) type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 42.7933 - type: mrr value: 43.2531 - type: nAUC_map_max value: 15.137400000000001 - type: nAUC_map_std value: 4.6048 - type: nAUC_map_diff1 value: 31.665100000000002 - type: nAUC_mrr_max value: 16.429299999999998 - type: nAUC_mrr_std value: 4.943899999999999 - type: nAUC_mrr_diff1 value: 30.8849 - type: main_score value: 42.7933 - task: type: Summarization dataset: name: MTEB SummEval (default) type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: pearson value: 31.8891 - type: spearman value: 30.635299999999997 - type: cosine_spearman value: 30.635299999999997 - type: cosine_pearson value: 31.8891 - type: dot_spearman value: 23.1495 - type: dot_pearson value: 20.2811 - type: main_score value: 30.635299999999997 - task: type: Retrieval dataset: name: MTEB TRECCOVID (default) type: mteb/trec-covid config: default split: test revision: bb9466bac8153a0349341eb1b22e06409e78ef4e metrics: - type: ndcg_at_1 value: 60.0 - type: ndcg_at_3 value: 56.592 - type: ndcg_at_5 value: 52.15 - type: ndcg_at_10 value: 48.264 - type: ndcg_at_20 value: 43.568 - type: ndcg_at_100 value: 31.196 - type: ndcg_at_1000 value: 26.101000000000003 - type: map_at_1 value: 0.153 - type: map_at_3 value: 0.4 - type: map_at_5 value: 0.601 - type: map_at_10 value: 1.016 - type: map_at_20 value: 1.6099999999999999 - type: map_at_100 value: 4.169 - type: map_at_1000 value: 9.733 - type: recall_at_1 value: 0.153 - type: recall_at_3 value: 0.42300000000000004 - type: recall_at_5 value: 0.6629999999999999 - type: recall_at_10 value: 1.201 - type: recall_at_20 value: 2.022 - type: recall_at_100 value: 6.5409999999999995 - type: recall_at_1000 value: 24.422 - type: precision_at_1 value: 64.0 - type: precision_at_3 value: 58.667 - type: precision_at_5 value: 54.0 - type: precision_at_10 value: 49.8 - type: precision_at_20 value: 44.3 - type: precision_at_100 value: 31.180000000000003 - type: precision_at_1000 value: 12.21 - type: mrr_at_1 value: 64.0 - type: mrr_at_3 value: 68.6667 - type: mrr_at_5 value: 69.9667 - type: mrr_at_10 value: 71.2222 - type: mrr_at_20 value: 71.3651 - type: mrr_at_100 value: 71.4965 - type: mrr_at_1000 value: 71.51429999999999 - type: nauc_ndcg_at_1_max value: 37.0018 - type: nauc_ndcg_at_1_std value: 3.0042 - type: nauc_ndcg_at_1_diff1 value: 1.0129000000000001 - type: nauc_ndcg_at_3_max value: 42.3179 - type: nauc_ndcg_at_3_std value: 1.1211 - type: nauc_ndcg_at_3_diff1 value: -1.3197999999999999 - type: nauc_ndcg_at_5_max value: 38.2867 - type: nauc_ndcg_at_5_std value: 1.436 - type: nauc_ndcg_at_5_diff1 value: -0.635 - type: nauc_ndcg_at_10_max value: 36.545100000000005 - type: nauc_ndcg_at_10_std value: 9.4313 - type: nauc_ndcg_at_10_diff1 value: 0.7185 - type: nauc_ndcg_at_20_max value: 28.841499999999996 - type: nauc_ndcg_at_20_std value: 14.584 - type: nauc_ndcg_at_20_diff1 value: 0.2278 - type: nauc_ndcg_at_100_max value: 22.2284 - type: nauc_ndcg_at_100_std value: 30.9548 - type: nauc_ndcg_at_100_diff1 value: 1.7124000000000001 - type: nauc_ndcg_at_1000_max value: 7.9275 - type: nauc_ndcg_at_1000_std value: 43.918 - type: nauc_ndcg_at_1000_diff1 value: 1.1608 - type: nauc_map_at_1_max value: 16.718700000000002 - type: nauc_map_at_1_std value: -14.5026 - type: nauc_map_at_1_diff1 value: 6.9494 - type: nauc_map_at_3_max value: 26.3749 - type: nauc_map_at_3_std value: -14.2379 - type: nauc_map_at_3_diff1 value: 2.6883 - type: nauc_map_at_5_max value: 26.8639 - type: nauc_map_at_5_std value: -11.9289 - type: nauc_map_at_5_diff1 value: -0.5275 - type: nauc_map_at_10_max value: 28.7924 - type: nauc_map_at_10_std value: -6.2317 - type: nauc_map_at_10_diff1 value: 0.153 - type: nauc_map_at_20_max value: 24.3923 - type: nauc_map_at_20_std value: 1.5524 - type: nauc_map_at_20_diff1 value: -0.7799999999999999 - type: nauc_map_at_100_max value: 14.5538 - type: nauc_map_at_100_std value: 29.851499999999998 - type: nauc_map_at_100_diff1 value: -1.5013 - type: nauc_map_at_1000_max value: 6.609800000000001 - type: nauc_map_at_1000_std value: 50.8853 - type: nauc_map_at_1000_diff1 value: 2.2463 - type: nauc_recall_at_1_max value: 16.718700000000002 - type: nauc_recall_at_1_std value: -14.5026 - type: nauc_recall_at_1_diff1 value: 6.9494 - type: nauc_recall_at_3_max value: 26.313 - type: nauc_recall_at_3_std value: -16.5391 - type: nauc_recall_at_3_diff1 value: -0.0947 - type: nauc_recall_at_5_max value: 27.136 - type: nauc_recall_at_5_std value: -13.486999999999998 - type: nauc_recall_at_5_diff1 value: -2.2484 - type: nauc_recall_at_10_max value: 27.9019 - type: nauc_recall_at_10_std value: -7.2991 - type: nauc_recall_at_10_diff1 value: 0.35729999999999995 - type: nauc_recall_at_20_max value: 24.1923 - type: nauc_recall_at_20_std value: 0.3075 - type: nauc_recall_at_20_diff1 value: -2.6993 - type: nauc_recall_at_100_max value: 15.928400000000002 - type: nauc_recall_at_100_std value: 24.5423 - type: nauc_recall_at_100_diff1 value: -4.0408 - type: nauc_recall_at_1000_max value: -0.2523 - type: nauc_recall_at_1000_std value: 49.0728 - type: nauc_recall_at_1000_diff1 value: -0.1562 - type: nauc_precision_at_1_max value: 42.5437 - type: nauc_precision_at_1_std value: 0.859 - type: nauc_precision_at_1_diff1 value: -7.6319 - type: nauc_precision_at_3_max value: 46.4231 - type: nauc_precision_at_3_std value: -2.6254 - type: nauc_precision_at_3_diff1 value: -5.129700000000001 - type: nauc_precision_at_5_max value: 40.022600000000004 - type: nauc_precision_at_5_std value: 1.4931 - type: nauc_precision_at_5_diff1 value: -5.634399999999999 - type: nauc_precision_at_10_max value: 37.8846 - type: nauc_precision_at_10_std value: 11.4085 - type: nauc_precision_at_10_diff1 value: -2.3909 - type: nauc_precision_at_20_max value: 26.971400000000003 - type: nauc_precision_at_20_std value: 17.3784 - type: nauc_precision_at_20_diff1 value: -1.5310000000000001 - type: nauc_precision_at_100_max value: 19.9237 - type: nauc_precision_at_100_std value: 35.952400000000004 - type: nauc_precision_at_100_diff1 value: 1.4594 - type: nauc_precision_at_1000_max value: 6.1676 - type: nauc_precision_at_1000_std value: 50.53959999999999 - type: nauc_precision_at_1000_diff1 value: 3.8484 - type: nauc_mrr_at_1_max value: 42.5437 - type: nauc_mrr_at_1_std value: 0.859 - type: nauc_mrr_at_1_diff1 value: -7.6319 - type: nauc_mrr_at_3_max value: 44.3255 - type: nauc_mrr_at_3_std value: -4.5994 - type: nauc_mrr_at_3_diff1 value: -12.2252 - type: nauc_mrr_at_5_max value: 45.7817 - type: nauc_mrr_at_5_std value: -3.1611000000000002 - type: nauc_mrr_at_5_diff1 value: -10.706100000000001 - type: nauc_mrr_at_10_max value: 45.5444 - type: nauc_mrr_at_10_std value: -1.1735 - type: nauc_mrr_at_10_diff1 value: -9.6912 - type: nauc_mrr_at_20_max value: 45.3001 - type: nauc_mrr_at_20_std value: -0.8477999999999999 - type: nauc_mrr_at_20_diff1 value: -8.7214 - type: nauc_mrr_at_100_max value: 45.3697 - type: nauc_mrr_at_100_std value: -1.2326 - type: nauc_mrr_at_100_diff1 value: -9.1853 - type: nauc_mrr_at_1000_max value: 45.356 - type: nauc_mrr_at_1000_std value: -1.2729000000000001 - type: nauc_mrr_at_1000_diff1 value: -9.2226 - type: main_score value: 48.264 - task: type: Retrieval dataset: name: MTEB Touche2020 (default) type: mteb/touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: ndcg_at_1 value: 13.264999999999999 - type: ndcg_at_3 value: 16.817 - type: ndcg_at_5 value: 17.718999999999998 - type: ndcg_at_10 value: 17.318 - type: ndcg_at_20 value: 18.445 - type: ndcg_at_100 value: 28.137 - type: ndcg_at_1000 value: 41.744 - type: map_at_1 value: 1.335 - type: map_at_3 value: 2.94 - type: map_at_5 value: 4.37 - type: map_at_10 value: 6.447 - type: map_at_20 value: 8.141 - type: map_at_100 value: 10.428999999999998 - type: map_at_1000 value: 12.23 - type: recall_at_1 value: 1.335 - type: recall_at_3 value: 4.05 - type: recall_at_5 value: 7.507999999999999 - type: recall_at_10 value: 12.862000000000002 - type: recall_at_20 value: 18.953999999999997 - type: recall_at_100 value: 40.384 - type: recall_at_1000 value: 82.421 - type: precision_at_1 value: 16.326999999999998 - type: precision_at_3 value: 21.088 - type: precision_at_5 value: 21.224 - type: precision_at_10 value: 17.755000000000003 - type: precision_at_20 value: 13.264999999999999 - type: precision_at_100 value: 6.5920000000000005 - type: precision_at_1000 value: 1.516 - type: mrr_at_1 value: 16.3265 - type: mrr_at_3 value: 29.251700000000003 - type: mrr_at_5 value: 32.9252 - type: mrr_at_10 value: 34.613699999999994 - type: mrr_at_20 value: 35.3587 - type: mrr_at_100 value: 35.6307 - type: mrr_at_1000 value: 35.6307 - type: nauc_ndcg_at_1_max value: -32.3322 - type: nauc_ndcg_at_1_std value: -13.9866 - type: nauc_ndcg_at_1_diff1 value: -21.525 - type: nauc_ndcg_at_3_max value: -33.6213 - type: nauc_ndcg_at_3_std value: -9.2265 - type: nauc_ndcg_at_3_diff1 value: -7.9922 - type: nauc_ndcg_at_5_max value: -38.3363 - type: nauc_ndcg_at_5_std value: -19.017999999999997 - type: nauc_ndcg_at_5_diff1 value: 0.7867000000000001 - type: nauc_ndcg_at_10_max value: -45.460699999999996 - type: nauc_ndcg_at_10_std value: -36.0452 - type: nauc_ndcg_at_10_diff1 value: 11.525599999999999 - type: nauc_ndcg_at_20_max value: -43.7997 - type: nauc_ndcg_at_20_std value: -39.293499999999995 - type: nauc_ndcg_at_20_diff1 value: 18.019099999999998 - type: nauc_ndcg_at_100_max value: -47.180499999999995 - type: nauc_ndcg_at_100_std value: -31.8569 - type: nauc_ndcg_at_100_diff1 value: 14.1121 - type: nauc_ndcg_at_1000_max value: -40.8476 - type: nauc_ndcg_at_1000_std value: -21.2172 - type: nauc_ndcg_at_1000_diff1 value: 20.3064 - type: nauc_map_at_1_max value: -39.5068 - type: nauc_map_at_1_std value: -16.150000000000002 - type: nauc_map_at_1_diff1 value: -31.249900000000004 - type: nauc_map_at_3_max value: -41.2738 - type: nauc_map_at_3_std value: -23.5467 - type: nauc_map_at_3_diff1 value: -21.5959 - type: nauc_map_at_5_max value: -45.9079 - type: nauc_map_at_5_std value: -28.181099999999997 - type: nauc_map_at_5_diff1 value: -14.3231 - type: nauc_map_at_10_max value: -45.8169 - type: nauc_map_at_10_std value: -41.293400000000005 - type: nauc_map_at_10_diff1 value: -0.7166 - type: nauc_map_at_20_max value: -42.233900000000006 - type: nauc_map_at_20_std value: -42.2579 - type: nauc_map_at_20_diff1 value: 9.9162 - type: nauc_map_at_100_max value: -42.6044 - type: nauc_map_at_100_std value: -39.921 - type: nauc_map_at_100_diff1 value: 10.408900000000001 - type: nauc_map_at_1000_max value: -41.4171 - type: nauc_map_at_1000_std value: -37.167899999999996 - type: nauc_map_at_1000_diff1 value: 11.7185 - type: nauc_recall_at_1_max value: -39.5068 - type: nauc_recall_at_1_std value: -16.150000000000002 - type: nauc_recall_at_1_diff1 value: -31.249900000000004 - type: nauc_recall_at_3_max value: -38.8655 - type: nauc_recall_at_3_std value: -21.6066 - type: nauc_recall_at_3_diff1 value: -11.395900000000001 - type: nauc_recall_at_5_max value: -47.9991 - type: nauc_recall_at_5_std value: -32.9137 - type: nauc_recall_at_5_diff1 value: -1.0116 - type: nauc_recall_at_10_max value: -49.586999999999996 - type: nauc_recall_at_10_std value: -48.6293 - type: nauc_recall_at_10_diff1 value: 13.092699999999999 - type: nauc_recall_at_20_max value: -45.1018 - type: nauc_recall_at_20_std value: -46.1638 - type: nauc_recall_at_20_diff1 value: 20.9848 - type: nauc_recall_at_100_max value: -48.106700000000004 - type: nauc_recall_at_100_std value: -30.618699999999997 - type: nauc_recall_at_100_diff1 value: 8.3225 - type: nauc_recall_at_1000_max value: -35.183 - type: nauc_recall_at_1000_std value: 9.1089 - type: nauc_recall_at_1000_diff1 value: 14.8164 - type: nauc_precision_at_1_max value: -36.7404 - type: nauc_precision_at_1_std value: -20.7164 - type: nauc_precision_at_1_diff1 value: -24.9514 - type: nauc_precision_at_3_max value: -32.1394 - type: nauc_precision_at_3_std value: -14.9321 - type: nauc_precision_at_3_diff1 value: -5.2914 - type: nauc_precision_at_5_max value: -39.6017 - type: nauc_precision_at_5_std value: -27.8755 - type: nauc_precision_at_5_diff1 value: 6.2789 - type: nauc_precision_at_10_max value: -42.565799999999996 - type: nauc_precision_at_10_std value: -45.101200000000006 - type: nauc_precision_at_10_diff1 value: 18.4024 - type: nauc_precision_at_20_max value: -36.074 - type: nauc_precision_at_20_std value: -41.6858 - type: nauc_precision_at_20_diff1 value: 29.625899999999998 - type: nauc_precision_at_100_max value: -20.7563 - type: nauc_precision_at_100_std value: -6.5164 - type: nauc_precision_at_100_diff1 value: 13.5108 - type: nauc_precision_at_1000_max value: 41.492200000000004 - type: nauc_precision_at_1000_std value: 45.918 - type: nauc_precision_at_1000_diff1 value: 9.314400000000001 - type: nauc_mrr_at_1_max value: -36.7404 - type: nauc_mrr_at_1_std value: -20.7164 - type: nauc_mrr_at_1_diff1 value: -24.9514 - type: nauc_mrr_at_3_max value: -34.8748 - type: nauc_mrr_at_3_std value: -11.2167 - type: nauc_mrr_at_3_diff1 value: -14.4811 - type: nauc_mrr_at_5_max value: -39.5232 - type: nauc_mrr_at_5_std value: -18.9591 - type: nauc_mrr_at_5_diff1 value: -13.2719 - type: nauc_mrr_at_10_max value: -41.7821 - type: nauc_mrr_at_10_std value: -18.368399999999998 - type: nauc_mrr_at_10_diff1 value: -13.4359 - type: nauc_mrr_at_20_max value: -42.8581 - type: nauc_mrr_at_20_std value: -18.6052 - type: nauc_mrr_at_20_diff1 value: -13.6098 - type: nauc_mrr_at_100_max value: -42.0696 - type: nauc_mrr_at_100_std value: -18.1447 - type: nauc_mrr_at_100_diff1 value: -14.102500000000001 - type: nauc_mrr_at_1000_max value: -42.0696 - type: nauc_mrr_at_1000_std value: -18.1447 - type: nauc_mrr_at_1000_diff1 value: -14.102500000000001 - type: main_score value: 17.318 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification (default) type: mteb/toxic_conversations_50k config: default split: test revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de metrics: - type: accuracy value: 74.0283 - type: f1 value: 54.813100000000006 - type: f1_weighted value: 79.4125 - type: ap value: 12.750800000000002 - type: ap_weighted value: 12.750800000000002 - type: main_score value: 74.0283 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification (default) type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 52.818299999999994 - type: f1 value: 52.8999 - type: f1_weighted value: 52.223299999999995 - type: main_score value: 52.818299999999994 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering (default) type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 14.5905 - type: v_measure_std value: 1.0532 - type: main_score value: 14.5905 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 (default) type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: similarity_accuracy value: 80.3481 - type: similarity_accuracy_threshold value: 85.3551 - type: similarity_f1 value: 51.27850000000001 - type: similarity_f1_threshold value: 75.8966 - type: similarity_precision value: 45.8247 - type: similarity_recall value: 58.205799999999996 - type: similarity_ap value: 52.295100000000005 - type: cosine_accuracy value: 80.3481 - type: cosine_accuracy_threshold value: 85.3551 - type: cosine_f1 value: 51.27850000000001 - type: cosine_f1_threshold value: 75.8966 - type: cosine_precision value: 45.8247 - type: cosine_recall value: 58.205799999999996 - type: cosine_ap value: 52.295199999999994 - type: manhattan_accuracy value: 78.9712 - type: manhattan_accuracy_threshold value: 3046.9002 - type: manhattan_f1 value: 44.784600000000005 - type: manhattan_f1_threshold value: 4624.7635 - type: manhattan_precision value: 35.5133 - type: manhattan_recall value: 60.606899999999996 - type: manhattan_ap value: 44.4155 - type: euclidean_accuracy value: 78.9772 - type: euclidean_accuracy_threshold value: 141.3014 - type: euclidean_f1 value: 44.8638 - type: euclidean_f1_threshold value: 210.8781 - type: euclidean_precision value: 35.3191 - type: euclidean_recall value: 61.477599999999995 - type: euclidean_ap value: 44.3973 - type: dot_accuracy value: 77.4095 - type: dot_accuracy_threshold value: 3833.3893000000003 - type: dot_f1 value: 41.7116 - type: dot_f1_threshold value: 336.5812 - type: dot_precision value: 28.259600000000002 - type: dot_recall value: 79.6042 - type: dot_ap value: 30.7809 - type: max_accuracy value: 80.3481 - type: max_f1 value: 51.27850000000001 - type: max_precision value: 45.8247 - type: max_recall value: 79.6042 - type: max_ap value: 52.295199999999994 - type: main_score value: 52.295199999999994 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus (default) type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: similarity_accuracy value: 85.9025 - type: similarity_accuracy_threshold value: 71.6078 - type: similarity_f1 value: 70.9832 - type: similarity_f1_threshold value: 66.4079 - type: similarity_precision value: 68.9871 - type: similarity_recall value: 73.0982 - type: similarity_ap value: 79.2622 - type: cosine_accuracy value: 85.9025 - type: cosine_accuracy_threshold value: 71.6078 - type: cosine_f1 value: 70.9832 - type: cosine_f1_threshold value: 66.4079 - type: cosine_precision value: 68.9871 - type: cosine_recall value: 73.0982 - type: cosine_ap value: 79.2622 - type: manhattan_accuracy value: 81.8954 - type: manhattan_accuracy_threshold value: 2754.9084000000003 - type: manhattan_f1 value: 58.4303 - type: manhattan_f1_threshold value: 3301.9608 - type: manhattan_precision value: 56.1511 - type: manhattan_recall value: 60.9024 - type: manhattan_ap value: 66.2046 - type: euclidean_accuracy value: 81.8974 - type: euclidean_accuracy_threshold value: 122.74810000000001 - type: euclidean_f1 value: 58.455 - type: euclidean_f1_threshold value: 151.3654 - type: euclidean_precision value: 55.0722 - type: euclidean_recall value: 62.2806 - type: euclidean_ap value: 66.22019999999999 - type: dot_accuracy value: 78.7402 - type: dot_accuracy_threshold value: 317.0264 - type: dot_f1 value: 58.2905 - type: dot_f1_threshold value: 187.0591 - type: dot_precision value: 48.1454 - type: dot_recall value: 73.8528 - type: dot_ap value: 58.116 - type: max_accuracy value: 85.9025 - type: max_f1 value: 70.9832 - type: max_precision value: 68.9871 - type: max_recall value: 73.8528 - type: max_ap value: 79.2622 - type: main_score value: 79.2622 --- # 🧚🏻‍♀️ brown-fairy-base-v0 Model Card <div align="center"> <img width="50%" alt="Fairy logo" src="./assets/fairy_logo.png"> </div> > [!TIP] > Fairies are among the most enchanting and magical beings in folklore and mythology. They appear across countless cultures and stories, from ancient forests to modern gardens. They are celebrated for their ability to bridge the mundane and magical realms, known for their ethereal grace and transformative powers. Fairies are tiny, higher-dimensional beings that can interact with the world in ways that are beyond our understanding. The fairy series of models are an attempt to tune the beetle series of models to be more suitable for downstream tasks. These models are meant to fully open experiments at making state-of-the-art static embeddings. The brown-fairy-base-v0 model is a distillation of the `baai/bge-base-en-v1.5` model into the `brown-beetle-base-v0` model. There was no PCA or Zipf applied to this model. ## Installation Install model2vec using pip: ```bash pip install model2vec ``` ## Usage Load this model using the `from_pretrained` method: ```python from model2vec import StaticModel # Load a pretrained Model2Vec model model = StaticModel.from_pretrained("bhavnicksm/brown-fairy-base-v0") # Compute text embeddings embeddings = model.encode(["Example sentence"]) ``` Read more about the Model2Vec library [here](https://github.com/MinishLab/model2vec). ## Reproduce this model This model was trained on a subset of the 2 Million texts from the [FineWeb-Edu](https://huggingface.co/datasets/mixedbread-ai/fineweb-edu) dataset, which was labeled by the `baai/bge-base-en-v1.5` model. <details> <summary>Training Code</summary> Note: The datasets need to me made seperately and loaded with the `datasets` library. ```python static_embedding = StaticEmbedding.from_model2vec("bhavnicksm/brown-beetle-base-v0") model = SentenceTransformer( modules=[static_embedding] ) loss = MSELoss(model) run_name = "brown-fairy-base-v0" args = SentenceTransformerTrainingArguments( # Required parameter: output_dir=f"output/{run_name}", # Optional training parameters: num_train_epochs=1, per_device_train_batch_size=2048, per_device_eval_batch_size=2048, learning_rate=1e-1, warmup_ratio=0.1, fp16=False, # Set to False if you get an error that your GPU can't run on FP16 bf16=True, # Set to True if you have a GPU that supports BF16 batch_sampler=BatchSamplers.NO_DUPLICATES, # Optional tracking/debugging parameters: eval_strategy="steps", eval_steps=50, save_strategy="steps", save_steps=50, save_total_limit=5, logging_steps=50, logging_first_step=True, run_name=run_name, ) evaluator = NanoBEIREvaluator() evaluator(model) trainer = SentenceTransformerTrainer( model=model, args=args, train_dataset=train_dataset, eval_dataset=eval_dataset, loss=loss, evaluator=evaluator, ) trainer.train() evaluator(model) model.save_pretrained(f"output/{run_name}") ``` </details> ## Comparison with other models Coming soon... ## Acknowledgements This model is based on the [Model2Vec](https://github.com/MinishLab/model2vec) library. Credit goes to the [Minish Lab](https://github.com/MinishLab) team for developing this library. ## Citation This model builds on work done by Minish Lab. Please cite the [Model2Vec repository](https://github.com/MinishLab/model2vec) if you use this model in your work. ```bibtex @software{minishlab2024model2vec, authors = {Stephan Tulkens, Thomas van Dongen}, title = {Model2Vec: Turn any Sentence Transformer into a Small Fast Model}, year = {2024}, url = {https://github.com/MinishLab/model2vec}, } ```
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
Hiveurban/multilingual-e5-large-pooled
Hiveurban
feature-extraction
[ "sentence-transformers", "pytorch", "onnx", "safetensors", "xlm-roberta", "mteb", "Sentence Transformers", "sentence-similarity", "feature-extraction", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "arxiv:2402.05672", "arxiv:2108.08787", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,727
1,727
5,176
1
--- language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: mit tags: - mteb - Sentence Transformers - sentence-similarity - feature-extraction - sentence-transformers model-index: - name: multilingual-e5-large results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 79.05970149253731 - type: ap value: 43.486574390835635 - type: f1 value: 73.32700092140148 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (de) type: mteb/amazon_counterfactual config: de split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 71.22055674518201 - type: ap value: 81.55756710830498 - type: f1 value: 69.28271787752661 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en-ext) type: mteb/amazon_counterfactual config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 80.41979010494754 - type: ap value: 29.34879922376344 - type: f1 value: 67.62475449011278 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (ja) type: mteb/amazon_counterfactual config: ja split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 77.8372591006424 - type: ap value: 26.557560591210738 - type: f1 value: 64.96619417368707 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 93.489875 - type: ap value: 90.98758636917603 - type: f1 value: 93.48554819717332 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.564 - type: f1 value: 46.75122173518047 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (de) type: mteb/amazon_reviews_multi config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 45.400000000000006 - type: f1 value: 44.17195682400632 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (es) type: mteb/amazon_reviews_multi config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 43.068 - type: f1 value: 42.38155696855596 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 41.89 - type: f1 value: 40.84407321682663 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (ja) type: mteb/amazon_reviews_multi config: ja split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 40.120000000000005 - type: f1 value: 39.522976223819114 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 38.832 - type: f1 value: 38.0392533394713 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 30.725 - type: map_at_10 value: 46.055 - type: map_at_100 value: 46.900999999999996 - type: map_at_1000 value: 46.911 - type: map_at_3 value: 41.548 - type: map_at_5 value: 44.297 - type: mrr_at_1 value: 31.152 - type: mrr_at_10 value: 46.231 - type: mrr_at_100 value: 47.07 - type: mrr_at_1000 value: 47.08 - type: mrr_at_3 value: 41.738 - type: mrr_at_5 value: 44.468999999999994 - type: ndcg_at_1 value: 30.725 - type: ndcg_at_10 value: 54.379999999999995 - type: ndcg_at_100 value: 58.138 - type: ndcg_at_1000 value: 58.389 - type: ndcg_at_3 value: 45.156 - type: ndcg_at_5 value: 50.123 - type: precision_at_1 value: 30.725 - type: precision_at_10 value: 8.087 - type: precision_at_100 value: 0.9769999999999999 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 18.54 - type: precision_at_5 value: 13.542000000000002 - type: recall_at_1 value: 30.725 - type: recall_at_10 value: 80.868 - type: recall_at_100 value: 97.653 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 55.619 - type: recall_at_5 value: 67.71000000000001 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 44.30960650674069 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 38.427074197498996 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 60.28270056031872 - type: mrr value: 74.38332673789738 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 84.05942144105269 - type: cos_sim_spearman value: 82.51212105850809 - type: euclidean_pearson value: 81.95639829909122 - type: euclidean_spearman value: 82.3717564144213 - type: manhattan_pearson value: 81.79273425468256 - type: manhattan_spearman value: 82.20066817871039 - task: type: BitextMining dataset: name: MTEB BUCC (de-en) type: mteb/bucc-bitext-mining config: de-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.46764091858039 - type: f1 value: 99.37717466945023 - type: precision value: 99.33194154488518 - type: recall value: 99.46764091858039 - task: type: BitextMining dataset: name: MTEB BUCC (fr-en) type: mteb/bucc-bitext-mining config: fr-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 98.29407880255337 - type: f1 value: 98.11248073959938 - type: precision value: 98.02443319392472 - type: recall value: 98.29407880255337 - task: type: BitextMining dataset: name: MTEB BUCC (ru-en) type: mteb/bucc-bitext-mining config: ru-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 97.79009352268791 - type: f1 value: 97.5176076665512 - type: precision value: 97.38136473848286 - type: recall value: 97.79009352268791 - task: type: BitextMining dataset: name: MTEB BUCC (zh-en) type: mteb/bucc-bitext-mining config: zh-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.26276987888363 - type: f1 value: 99.20133403545726 - type: precision value: 99.17500438827453 - type: recall value: 99.26276987888363 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 84.72727272727273 - type: f1 value: 84.67672206031433 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.34220182511161 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 33.4987096128766 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 25.558249999999997 - type: map_at_10 value: 34.44425000000001 - type: map_at_100 value: 35.59833333333333 - type: map_at_1000 value: 35.706916666666665 - type: map_at_3 value: 31.691749999999995 - type: map_at_5 value: 33.252916666666664 - type: mrr_at_1 value: 30.252666666666666 - type: mrr_at_10 value: 38.60675 - type: mrr_at_100 value: 39.42666666666666 - type: mrr_at_1000 value: 39.48408333333334 - type: mrr_at_3 value: 36.17441666666665 - type: mrr_at_5 value: 37.56275 - type: ndcg_at_1 value: 30.252666666666666 - type: ndcg_at_10 value: 39.683 - type: ndcg_at_100 value: 44.68541666666667 - type: ndcg_at_1000 value: 46.94316666666668 - type: ndcg_at_3 value: 34.961749999999995 - type: ndcg_at_5 value: 37.215666666666664 - type: precision_at_1 value: 30.252666666666666 - type: precision_at_10 value: 6.904166666666667 - type: precision_at_100 value: 1.0989999999999995 - type: precision_at_1000 value: 0.14733333333333334 - type: precision_at_3 value: 16.037666666666667 - type: precision_at_5 value: 11.413583333333333 - type: recall_at_1 value: 25.558249999999997 - type: recall_at_10 value: 51.13341666666666 - type: recall_at_100 value: 73.08366666666667 - type: recall_at_1000 value: 88.79483333333334 - type: recall_at_3 value: 37.989083333333326 - type: recall_at_5 value: 43.787833333333325 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 10.338 - type: map_at_10 value: 18.360000000000003 - type: map_at_100 value: 19.942 - type: map_at_1000 value: 20.134 - type: map_at_3 value: 15.174000000000001 - type: map_at_5 value: 16.830000000000002 - type: mrr_at_1 value: 23.257 - type: mrr_at_10 value: 33.768 - type: mrr_at_100 value: 34.707 - type: mrr_at_1000 value: 34.766000000000005 - type: mrr_at_3 value: 30.977 - type: mrr_at_5 value: 32.528 - type: ndcg_at_1 value: 23.257 - type: ndcg_at_10 value: 25.733 - type: ndcg_at_100 value: 32.288 - type: ndcg_at_1000 value: 35.992000000000004 - type: ndcg_at_3 value: 20.866 - type: ndcg_at_5 value: 22.612 - type: precision_at_1 value: 23.257 - type: precision_at_10 value: 8.124 - type: precision_at_100 value: 1.518 - type: precision_at_1000 value: 0.219 - type: precision_at_3 value: 15.679000000000002 - type: precision_at_5 value: 12.117 - type: recall_at_1 value: 10.338 - type: recall_at_10 value: 31.154 - type: recall_at_100 value: 54.161 - type: recall_at_1000 value: 75.21900000000001 - type: recall_at_3 value: 19.427 - type: recall_at_5 value: 24.214 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.498 - type: map_at_10 value: 19.103 - type: map_at_100 value: 27.375 - type: map_at_1000 value: 28.981 - type: map_at_3 value: 13.764999999999999 - type: map_at_5 value: 15.950000000000001 - type: mrr_at_1 value: 65.5 - type: mrr_at_10 value: 74.53800000000001 - type: mrr_at_100 value: 74.71799999999999 - type: mrr_at_1000 value: 74.725 - type: mrr_at_3 value: 72.792 - type: mrr_at_5 value: 73.554 - type: ndcg_at_1 value: 53.37499999999999 - type: ndcg_at_10 value: 41.286 - type: ndcg_at_100 value: 45.972 - type: ndcg_at_1000 value: 53.123 - type: ndcg_at_3 value: 46.172999999999995 - type: ndcg_at_5 value: 43.033 - type: precision_at_1 value: 65.5 - type: precision_at_10 value: 32.725 - type: precision_at_100 value: 10.683 - type: precision_at_1000 value: 1.978 - type: precision_at_3 value: 50 - type: precision_at_5 value: 41.349999999999994 - type: recall_at_1 value: 8.498 - type: recall_at_10 value: 25.070999999999998 - type: recall_at_100 value: 52.383 - type: recall_at_1000 value: 74.91499999999999 - type: recall_at_3 value: 15.207999999999998 - type: recall_at_5 value: 18.563 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 46.5 - type: f1 value: 41.93833713984145 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 67.914 - type: map_at_10 value: 78.10000000000001 - type: map_at_100 value: 78.333 - type: map_at_1000 value: 78.346 - type: map_at_3 value: 76.626 - type: map_at_5 value: 77.627 - type: mrr_at_1 value: 72.74199999999999 - type: mrr_at_10 value: 82.414 - type: mrr_at_100 value: 82.511 - type: mrr_at_1000 value: 82.513 - type: mrr_at_3 value: 81.231 - type: mrr_at_5 value: 82.065 - type: ndcg_at_1 value: 72.74199999999999 - type: ndcg_at_10 value: 82.806 - type: ndcg_at_100 value: 83.677 - type: ndcg_at_1000 value: 83.917 - type: ndcg_at_3 value: 80.305 - type: ndcg_at_5 value: 81.843 - type: precision_at_1 value: 72.74199999999999 - type: precision_at_10 value: 10.24 - type: precision_at_100 value: 1.089 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 31.268 - type: precision_at_5 value: 19.706000000000003 - type: recall_at_1 value: 67.914 - type: recall_at_10 value: 92.889 - type: recall_at_100 value: 96.42699999999999 - type: recall_at_1000 value: 97.92 - type: recall_at_3 value: 86.21 - type: recall_at_5 value: 90.036 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 22.166 - type: map_at_10 value: 35.57 - type: map_at_100 value: 37.405 - type: map_at_1000 value: 37.564 - type: map_at_3 value: 30.379 - type: map_at_5 value: 33.324 - type: mrr_at_1 value: 43.519000000000005 - type: mrr_at_10 value: 51.556000000000004 - type: mrr_at_100 value: 52.344 - type: mrr_at_1000 value: 52.373999999999995 - type: mrr_at_3 value: 48.868 - type: mrr_at_5 value: 50.319 - type: ndcg_at_1 value: 43.519000000000005 - type: ndcg_at_10 value: 43.803 - type: ndcg_at_100 value: 50.468999999999994 - type: ndcg_at_1000 value: 53.111 - type: ndcg_at_3 value: 38.893 - type: ndcg_at_5 value: 40.653 - type: precision_at_1 value: 43.519000000000005 - type: precision_at_10 value: 12.253 - type: precision_at_100 value: 1.931 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 25.617 - type: precision_at_5 value: 19.383 - type: recall_at_1 value: 22.166 - type: recall_at_10 value: 51.6 - type: recall_at_100 value: 76.574 - type: recall_at_1000 value: 92.192 - type: recall_at_3 value: 34.477999999999994 - type: recall_at_5 value: 41.835 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 39.041 - type: map_at_10 value: 62.961999999999996 - type: map_at_100 value: 63.79899999999999 - type: map_at_1000 value: 63.854 - type: map_at_3 value: 59.399 - type: map_at_5 value: 61.669 - type: mrr_at_1 value: 78.082 - type: mrr_at_10 value: 84.321 - type: mrr_at_100 value: 84.49600000000001 - type: mrr_at_1000 value: 84.502 - type: mrr_at_3 value: 83.421 - type: mrr_at_5 value: 83.977 - type: ndcg_at_1 value: 78.082 - type: ndcg_at_10 value: 71.229 - type: ndcg_at_100 value: 74.10900000000001 - type: ndcg_at_1000 value: 75.169 - type: ndcg_at_3 value: 66.28699999999999 - type: ndcg_at_5 value: 69.084 - type: precision_at_1 value: 78.082 - type: precision_at_10 value: 14.993 - type: precision_at_100 value: 1.7239999999999998 - type: precision_at_1000 value: 0.186 - type: precision_at_3 value: 42.737 - type: precision_at_5 value: 27.843 - type: recall_at_1 value: 39.041 - type: recall_at_10 value: 74.96300000000001 - type: recall_at_100 value: 86.199 - type: recall_at_1000 value: 93.228 - type: recall_at_3 value: 64.105 - type: recall_at_5 value: 69.608 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 90.23160000000001 - type: ap value: 85.5674856808308 - type: f1 value: 90.18033354786317 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 24.091 - type: map_at_10 value: 36.753 - type: map_at_100 value: 37.913000000000004 - type: map_at_1000 value: 37.958999999999996 - type: map_at_3 value: 32.818999999999996 - type: map_at_5 value: 35.171 - type: mrr_at_1 value: 24.742 - type: mrr_at_10 value: 37.285000000000004 - type: mrr_at_100 value: 38.391999999999996 - type: mrr_at_1000 value: 38.431 - type: mrr_at_3 value: 33.440999999999995 - type: mrr_at_5 value: 35.75 - type: ndcg_at_1 value: 24.742 - type: ndcg_at_10 value: 43.698 - type: ndcg_at_100 value: 49.145 - type: ndcg_at_1000 value: 50.23800000000001 - type: ndcg_at_3 value: 35.769 - type: ndcg_at_5 value: 39.961999999999996 - type: precision_at_1 value: 24.742 - type: precision_at_10 value: 6.7989999999999995 - type: precision_at_100 value: 0.95 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 15.096000000000002 - type: precision_at_5 value: 11.183 - type: recall_at_1 value: 24.091 - type: recall_at_10 value: 65.068 - type: recall_at_100 value: 89.899 - type: recall_at_1000 value: 98.16 - type: recall_at_3 value: 43.68 - type: recall_at_5 value: 53.754999999999995 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.66621067031465 - type: f1 value: 93.49622853272142 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (de) type: mteb/mtop_domain config: de split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 91.94702733164272 - type: f1 value: 91.17043441745282 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (es) type: mteb/mtop_domain config: es split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.20146764509674 - type: f1 value: 91.98359080555608 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (fr) type: mteb/mtop_domain config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 88.99780770435328 - type: f1 value: 89.19746342724068 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (hi) type: mteb/mtop_domain config: hi split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.78486912871998 - type: f1 value: 89.24578823628642 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (th) type: mteb/mtop_domain config: th split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 88.74502712477394 - type: f1 value: 89.00297573881542 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 77.9046967624259 - type: f1 value: 59.36787125785957 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (de) type: mteb/mtop_intent config: de split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.5280360664976 - type: f1 value: 57.17723440888718 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (es) type: mteb/mtop_intent config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 75.44029352901934 - type: f1 value: 54.052855531072964 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (fr) type: mteb/mtop_intent config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 70.5606013153774 - type: f1 value: 52.62215934386531 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (hi) type: mteb/mtop_intent config: hi split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 73.11581211903908 - type: f1 value: 52.341291845645465 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (th) type: mteb/mtop_intent config: th split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.28933092224233 - type: f1 value: 57.07918745504911 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (af) type: mteb/amazon_massive_intent config: af split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.38063214525892 - type: f1 value: 59.46463723443009 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (am) type: mteb/amazon_massive_intent config: am split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.06926698049766 - type: f1 value: 52.49084283283562 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ar) type: mteb/amazon_massive_intent config: ar split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.74983187626093 - type: f1 value: 56.960640620165904 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (az) type: mteb/amazon_massive_intent config: az split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.86550100874243 - type: f1 value: 62.47370548140688 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (bn) type: mteb/amazon_massive_intent config: bn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.971082716879636 - type: f1 value: 61.03812421957381 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (cy) type: mteb/amazon_massive_intent config: cy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.98318762609282 - type: f1 value: 51.51207916008392 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (da) type: mteb/amazon_massive_intent config: da split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.45527908540686 - type: f1 value: 66.16631905400318 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (de) type: mteb/amazon_massive_intent config: de split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.32750504371216 - type: f1 value: 66.16755288646591 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (el) type: mteb/amazon_massive_intent config: el split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.09213180901143 - type: f1 value: 66.95654394661507 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.75588433086752 - type: f1 value: 71.79973779656923 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (es) type: mteb/amazon_massive_intent config: es split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.49428379287154 - type: f1 value: 68.37494379215734 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fa) type: mteb/amazon_massive_intent config: fa split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.90921318090115 - type: f1 value: 66.79517376481645 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fi) type: mteb/amazon_massive_intent config: fi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.12104909213181 - type: f1 value: 67.29448842879584 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fr) type: mteb/amazon_massive_intent config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.34095494283793 - type: f1 value: 67.01134288992947 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (he) type: mteb/amazon_massive_intent config: he split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.61264290517822 - type: f1 value: 64.68730512660757 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hi) type: mteb/amazon_massive_intent config: hi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.79757901815738 - type: f1 value: 65.24938539425598 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hu) type: mteb/amazon_massive_intent config: hu split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.68728984532616 - type: f1 value: 67.0487169762553 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hy) type: mteb/amazon_massive_intent config: hy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.07464694014795 - type: f1 value: 59.183532276789286 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (id) type: mteb/amazon_massive_intent config: id split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.04707464694015 - type: f1 value: 67.66829629003848 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (is) type: mteb/amazon_massive_intent config: is split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.42434431741762 - type: f1 value: 59.01617226544757 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (it) type: mteb/amazon_massive_intent config: it split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.53127101546738 - type: f1 value: 68.10033760906255 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ja) type: mteb/amazon_massive_intent config: ja split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.50504371217215 - type: f1 value: 69.74931103158923 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (jv) type: mteb/amazon_massive_intent config: jv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.91190316072628 - type: f1 value: 54.05551136648796 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ka) type: mteb/amazon_massive_intent config: ka split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 51.78211163416275 - type: f1 value: 49.874888544058535 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (km) type: mteb/amazon_massive_intent config: km split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 47.017484868863484 - type: f1 value: 44.53364263352014 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (kn) type: mteb/amazon_massive_intent config: kn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.16207128446537 - type: f1 value: 59.01185692320829 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ko) type: mteb/amazon_massive_intent config: ko split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.42501681237391 - type: f1 value: 67.13169450166086 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (lv) type: mteb/amazon_massive_intent config: lv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.0780094149294 - type: f1 value: 64.41720167850707 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ml) type: mteb/amazon_massive_intent config: ml split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.57162071284466 - type: f1 value: 62.414138683804424 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (mn) type: mteb/amazon_massive_intent config: mn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.71149966375252 - type: f1 value: 58.594805125087234 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ms) type: mteb/amazon_massive_intent config: ms split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.03900470746471 - type: f1 value: 63.87937257883887 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (my) type: mteb/amazon_massive_intent config: my split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.8776059179556 - type: f1 value: 57.48587618059131 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nb) type: mteb/amazon_massive_intent config: nb split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.87895090786819 - type: f1 value: 66.8141299430347 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nl) type: mteb/amazon_massive_intent config: nl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.45057162071285 - type: f1 value: 67.46444039673516 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.546738399462 - type: f1 value: 68.63640876702655 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pt) type: mteb/amazon_massive_intent config: pt split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.72965702757229 - type: f1 value: 68.54119560379115 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ro) type: mteb/amazon_massive_intent config: ro split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.35574983187625 - type: f1 value: 65.88844917691927 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ru) type: mteb/amazon_massive_intent config: ru split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.70477471418964 - type: f1 value: 69.19665697061978 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sl) type: mteb/amazon_massive_intent config: sl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.0880968392737 - type: f1 value: 64.76962317666086 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sq) type: mteb/amazon_massive_intent config: sq split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.18493611297916 - type: f1 value: 62.49984559035371 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sv) type: mteb/amazon_massive_intent config: sv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.75857431069265 - type: f1 value: 69.20053687623418 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sw) type: mteb/amazon_massive_intent config: sw split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.500336247478145 - type: f1 value: 55.2972398687929 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ta) type: mteb/amazon_massive_intent config: ta split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.68997982515132 - type: f1 value: 59.36848202755348 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (te) type: mteb/amazon_massive_intent config: te split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.01950235373235 - type: f1 value: 60.09351954625423 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (th) type: mteb/amazon_massive_intent config: th split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.29186281102892 - type: f1 value: 67.57860496703447 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tl) type: mteb/amazon_massive_intent config: tl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.77471418964357 - type: f1 value: 61.913983147713836 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tr) type: mteb/amazon_massive_intent config: tr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.87222595830532 - type: f1 value: 66.03679033708141 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ur) type: mteb/amazon_massive_intent config: ur split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.04505716207127 - type: f1 value: 61.28569169817908 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (vi) type: mteb/amazon_massive_intent config: vi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.38466711499663 - type: f1 value: 67.20532357036844 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.12306657700067 - type: f1 value: 68.91251226588182 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-TW) type: mteb/amazon_massive_intent config: zh-TW split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.20040349697378 - type: f1 value: 66.02657347714175 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (af) type: mteb/amazon_massive_scenario config: af split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.73907195696032 - type: f1 value: 66.98484521791418 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (am) type: mteb/amazon_massive_scenario config: am split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.58843308675185 - type: f1 value: 58.95591723092005 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ar) type: mteb/amazon_massive_scenario config: ar split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.22730329522528 - type: f1 value: 66.0894499712115 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (az) type: mteb/amazon_massive_scenario config: az split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.48285137861465 - type: f1 value: 65.21963176785157 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (bn) type: mteb/amazon_massive_scenario config: bn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.74714189643578 - type: f1 value: 66.8212192745412 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (cy) type: mteb/amazon_massive_scenario config: cy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.09213180901143 - type: f1 value: 56.70735546356339 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (da) type: mteb/amazon_massive_scenario config: da split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.05716207128448 - type: f1 value: 74.8413712365364 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (de) type: mteb/amazon_massive_scenario config: de split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.69737726967047 - type: f1 value: 74.7664341963 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (el) type: mteb/amazon_massive_scenario config: el split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.90383322125084 - type: f1 value: 73.59201554448323 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.51176866173503 - type: f1 value: 77.46104434577758 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (es) type: mteb/amazon_massive_scenario config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.31069266980496 - type: f1 value: 74.61048660675635 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fa) type: mteb/amazon_massive_scenario config: fa split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.95225285810356 - type: f1 value: 72.33160006574627 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fi) type: mteb/amazon_massive_scenario config: fi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.12373907195696 - type: f1 value: 73.20921012557481 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fr) type: mteb/amazon_massive_scenario config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.86684599865501 - type: f1 value: 73.82348774610831 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (he) type: mteb/amazon_massive_scenario config: he split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.40215198386012 - type: f1 value: 71.11945183971858 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hi) type: mteb/amazon_massive_scenario config: hi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.12844653665098 - type: f1 value: 71.34450495911766 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hu) type: mteb/amazon_massive_scenario config: hu split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.52252858103566 - type: f1 value: 73.98878711342999 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hy) type: mteb/amazon_massive_scenario config: hy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.93611297915265 - type: f1 value: 63.723200467653385 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (id) type: mteb/amazon_massive_scenario config: id split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.11903160726295 - type: f1 value: 73.82138439467096 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (is) type: mteb/amazon_massive_scenario config: is split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.15198386012105 - type: f1 value: 66.02172193802167 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (it) type: mteb/amazon_massive_scenario config: it split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.32414256893072 - type: f1 value: 74.30943421170574 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ja) type: mteb/amazon_massive_scenario config: ja split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.46805648957633 - type: f1 value: 77.62808409298209 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (jv) type: mteb/amazon_massive_scenario config: jv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.318762609280434 - type: f1 value: 62.094284066075076 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ka) type: mteb/amazon_massive_scenario config: ka split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 58.34902488231338 - type: f1 value: 57.12893860987984 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (km) type: mteb/amazon_massive_scenario config: km split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 50.88433086751849 - type: f1 value: 48.2272350802058 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (kn) type: mteb/amazon_massive_scenario config: kn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.4425016812374 - type: f1 value: 64.61463095996173 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ko) type: mteb/amazon_massive_scenario config: ko split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.04707464694015 - type: f1 value: 75.05099199098998 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (lv) type: mteb/amazon_massive_scenario config: lv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.50437121721586 - type: f1 value: 69.83397721096314 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ml) type: mteb/amazon_massive_scenario config: ml split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.94283792871553 - type: f1 value: 68.8704663703913 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (mn) type: mteb/amazon_massive_scenario config: mn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.79488903833222 - type: f1 value: 63.615424063345436 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ms) type: mteb/amazon_massive_scenario config: ms split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.88231338264963 - type: f1 value: 68.57892302593237 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (my) type: mteb/amazon_massive_scenario config: my split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.248150638870214 - type: f1 value: 61.06680605338809 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nb) type: mteb/amazon_massive_scenario config: nb split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.84196368527236 - type: f1 value: 74.52566464968763 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nl) type: mteb/amazon_massive_scenario config: nl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.8285137861466 - type: f1 value: 74.8853197608802 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.13248150638869 - type: f1 value: 74.3982040999179 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pt) type: mteb/amazon_massive_scenario config: pt split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.49024882313383 - type: f1 value: 73.82153848368573 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ro) type: mteb/amazon_massive_scenario config: ro split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.72158708809684 - type: f1 value: 71.85049433180541 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ru) type: mteb/amazon_massive_scenario config: ru split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.137861466039 - type: f1 value: 75.37628348188467 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sl) type: mteb/amazon_massive_scenario config: sl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.86953597848016 - type: f1 value: 71.87537624521661 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sq) type: mteb/amazon_massive_scenario config: sq split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.27572293207801 - type: f1 value: 68.80017302344231 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sv) type: mteb/amazon_massive_scenario config: sv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.09952925353059 - type: f1 value: 76.07992707688408 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sw) type: mteb/amazon_massive_scenario config: sw split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.140551445864155 - type: f1 value: 61.73855010331415 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ta) type: mteb/amazon_massive_scenario config: ta split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.27774041694687 - type: f1 value: 64.83664868894539 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (te) type: mteb/amazon_massive_scenario config: te split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.69468728984533 - type: f1 value: 64.76239666920868 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (th) type: mteb/amazon_massive_scenario config: th split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.44653665097512 - type: f1 value: 73.14646052013873 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tl) type: mteb/amazon_massive_scenario config: tl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.71351714862139 - type: f1 value: 66.67212180163382 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tr) type: mteb/amazon_massive_scenario config: tr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.9946200403497 - type: f1 value: 73.87348793725525 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ur) type: mteb/amazon_massive_scenario config: ur split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.15400134498992 - type: f1 value: 67.09433241421094 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (vi) type: mteb/amazon_massive_scenario config: vi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.11365164761264 - type: f1 value: 73.59502539433753 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.82582380632145 - type: f1 value: 76.89992945316313 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-TW) type: mteb/amazon_massive_scenario config: zh-TW split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.81237390719569 - type: f1 value: 72.36499770986265 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.480506569594695 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 29.71252128004552 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.421396787056548 - type: mrr value: 32.48155274872267 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.595 - type: map_at_10 value: 12.642000000000001 - type: map_at_100 value: 15.726 - type: map_at_1000 value: 17.061999999999998 - type: map_at_3 value: 9.125 - type: map_at_5 value: 10.866000000000001 - type: mrr_at_1 value: 43.344 - type: mrr_at_10 value: 52.227999999999994 - type: mrr_at_100 value: 52.898999999999994 - type: mrr_at_1000 value: 52.944 - type: mrr_at_3 value: 49.845 - type: mrr_at_5 value: 51.115 - type: ndcg_at_1 value: 41.949999999999996 - type: ndcg_at_10 value: 33.995 - type: ndcg_at_100 value: 30.869999999999997 - type: ndcg_at_1000 value: 39.487 - type: ndcg_at_3 value: 38.903999999999996 - type: ndcg_at_5 value: 37.236999999999995 - type: precision_at_1 value: 43.344 - type: precision_at_10 value: 25.480000000000004 - type: precision_at_100 value: 7.672 - type: precision_at_1000 value: 2.028 - type: precision_at_3 value: 36.636 - type: precision_at_5 value: 32.632 - type: recall_at_1 value: 5.595 - type: recall_at_10 value: 16.466 - type: recall_at_100 value: 31.226 - type: recall_at_1000 value: 62.778999999999996 - type: recall_at_3 value: 9.931 - type: recall_at_5 value: 12.884 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 40.414 - type: map_at_10 value: 56.754000000000005 - type: map_at_100 value: 57.457 - type: map_at_1000 value: 57.477999999999994 - type: map_at_3 value: 52.873999999999995 - type: map_at_5 value: 55.175 - type: mrr_at_1 value: 45.278 - type: mrr_at_10 value: 59.192 - type: mrr_at_100 value: 59.650000000000006 - type: mrr_at_1000 value: 59.665 - type: mrr_at_3 value: 56.141 - type: mrr_at_5 value: 57.998000000000005 - type: ndcg_at_1 value: 45.278 - type: ndcg_at_10 value: 64.056 - type: ndcg_at_100 value: 66.89 - type: ndcg_at_1000 value: 67.364 - type: ndcg_at_3 value: 56.97 - type: ndcg_at_5 value: 60.719 - type: precision_at_1 value: 45.278 - type: precision_at_10 value: 9.994 - type: precision_at_100 value: 1.165 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 25.512 - type: precision_at_5 value: 17.509 - type: recall_at_1 value: 40.414 - type: recall_at_10 value: 83.596 - type: recall_at_100 value: 95.72 - type: recall_at_1000 value: 99.24 - type: recall_at_3 value: 65.472 - type: recall_at_5 value: 74.039 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.352 - type: map_at_10 value: 84.369 - type: map_at_100 value: 85.02499999999999 - type: map_at_1000 value: 85.04 - type: map_at_3 value: 81.42399999999999 - type: map_at_5 value: 83.279 - type: mrr_at_1 value: 81.05 - type: mrr_at_10 value: 87.401 - type: mrr_at_100 value: 87.504 - type: mrr_at_1000 value: 87.505 - type: mrr_at_3 value: 86.443 - type: mrr_at_5 value: 87.10799999999999 - type: ndcg_at_1 value: 81.04 - type: ndcg_at_10 value: 88.181 - type: ndcg_at_100 value: 89.411 - type: ndcg_at_1000 value: 89.507 - type: ndcg_at_3 value: 85.28099999999999 - type: ndcg_at_5 value: 86.888 - type: precision_at_1 value: 81.04 - type: precision_at_10 value: 13.406 - type: precision_at_100 value: 1.5350000000000001 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.31 - type: precision_at_5 value: 24.54 - type: recall_at_1 value: 70.352 - type: recall_at_10 value: 95.358 - type: recall_at_100 value: 99.541 - type: recall_at_1000 value: 99.984 - type: recall_at_3 value: 87.111 - type: recall_at_5 value: 91.643 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 46.54068723291946 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 63.216287629895994 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.023000000000001 - type: map_at_10 value: 10.071 - type: map_at_100 value: 11.892 - type: map_at_1000 value: 12.196 - type: map_at_3 value: 7.234 - type: map_at_5 value: 8.613999999999999 - type: mrr_at_1 value: 19.900000000000002 - type: mrr_at_10 value: 30.516 - type: mrr_at_100 value: 31.656000000000002 - type: mrr_at_1000 value: 31.723000000000003 - type: mrr_at_3 value: 27.400000000000002 - type: mrr_at_5 value: 29.270000000000003 - type: ndcg_at_1 value: 19.900000000000002 - type: ndcg_at_10 value: 17.474 - type: ndcg_at_100 value: 25.020999999999997 - type: ndcg_at_1000 value: 30.728 - type: ndcg_at_3 value: 16.588 - type: ndcg_at_5 value: 14.498 - type: precision_at_1 value: 19.900000000000002 - type: precision_at_10 value: 9.139999999999999 - type: precision_at_100 value: 2.011 - type: precision_at_1000 value: 0.33899999999999997 - type: precision_at_3 value: 15.667 - type: precision_at_5 value: 12.839999999999998 - type: recall_at_1 value: 4.023000000000001 - type: recall_at_10 value: 18.497 - type: recall_at_100 value: 40.8 - type: recall_at_1000 value: 68.812 - type: recall_at_3 value: 9.508 - type: recall_at_5 value: 12.983 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.967008785134 - type: cos_sim_spearman value: 80.23142141101837 - type: euclidean_pearson value: 81.20166064704539 - type: euclidean_spearman value: 80.18961335654585 - type: manhattan_pearson value: 81.13925443187625 - type: manhattan_spearman value: 80.07948723044424 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.94262461316023 - type: cos_sim_spearman value: 80.01596278563865 - type: euclidean_pearson value: 83.80799622922581 - type: euclidean_spearman value: 79.94984954947103 - type: manhattan_pearson value: 83.68473841756281 - type: manhattan_spearman value: 79.84990707951822 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 80.57346443146068 - type: cos_sim_spearman value: 81.54689837570866 - type: euclidean_pearson value: 81.10909881516007 - type: euclidean_spearman value: 81.56746243261762 - type: manhattan_pearson value: 80.87076036186582 - type: manhattan_spearman value: 81.33074987964402 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 79.54733787179849 - type: cos_sim_spearman value: 77.72202105610411 - type: euclidean_pearson value: 78.9043595478849 - type: euclidean_spearman value: 77.93422804309435 - type: manhattan_pearson value: 78.58115121621368 - type: manhattan_spearman value: 77.62508135122033 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.59880017237558 - type: cos_sim_spearman value: 89.31088630824758 - type: euclidean_pearson value: 88.47069261564656 - type: euclidean_spearman value: 89.33581971465233 - type: manhattan_pearson value: 88.40774264100956 - type: manhattan_spearman value: 89.28657485627835 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.08055117917084 - type: cos_sim_spearman value: 85.78491813080304 - type: euclidean_pearson value: 84.99329155500392 - type: euclidean_spearman value: 85.76728064677287 - type: manhattan_pearson value: 84.87947428989587 - type: manhattan_spearman value: 85.62429454917464 - task: type: STS dataset: name: MTEB STS17 (ko-ko) type: mteb/sts17-crosslingual-sts config: ko-ko split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 82.14190939287384 - type: cos_sim_spearman value: 82.27331573306041 - type: euclidean_pearson value: 81.891896953716 - type: euclidean_spearman value: 82.37695542955998 - type: manhattan_pearson value: 81.73123869460504 - type: manhattan_spearman value: 82.19989168441421 - task: type: STS dataset: name: MTEB STS17 (ar-ar) type: mteb/sts17-crosslingual-sts config: ar-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 76.84695301843362 - type: cos_sim_spearman value: 77.87790986014461 - type: euclidean_pearson value: 76.91981583106315 - type: euclidean_spearman value: 77.88154772749589 - type: manhattan_pearson value: 76.94953277451093 - type: manhattan_spearman value: 77.80499230728604 - task: type: STS dataset: name: MTEB STS17 (en-ar) type: mteb/sts17-crosslingual-sts config: en-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 75.44657840482016 - type: cos_sim_spearman value: 75.05531095119674 - type: euclidean_pearson value: 75.88161755829299 - type: euclidean_spearman value: 74.73176238219332 - type: manhattan_pearson value: 75.63984765635362 - type: manhattan_spearman value: 74.86476440770737 - task: type: STS dataset: name: MTEB STS17 (en-de) type: mteb/sts17-crosslingual-sts config: en-de split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.64700140524133 - type: cos_sim_spearman value: 86.16014210425672 - type: euclidean_pearson value: 86.49086860843221 - type: euclidean_spearman value: 86.09729326815614 - type: manhattan_pearson value: 86.43406265125513 - type: manhattan_spearman value: 86.17740150939994 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.91170098764921 - type: cos_sim_spearman value: 88.12437004058931 - type: euclidean_pearson value: 88.81828254494437 - type: euclidean_spearman value: 88.14831794572122 - type: manhattan_pearson value: 88.93442183448961 - type: manhattan_spearman value: 88.15254630778304 - task: type: STS dataset: name: MTEB STS17 (en-tr) type: mteb/sts17-crosslingual-sts config: en-tr split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 72.91390577997292 - type: cos_sim_spearman value: 71.22979457536074 - type: euclidean_pearson value: 74.40314008106749 - type: euclidean_spearman value: 72.54972136083246 - type: manhattan_pearson value: 73.85687539530218 - type: manhattan_spearman value: 72.09500771742637 - task: type: STS dataset: name: MTEB STS17 (es-en) type: mteb/sts17-crosslingual-sts config: es-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 80.9301067983089 - type: cos_sim_spearman value: 80.74989828346473 - type: euclidean_pearson value: 81.36781301814257 - type: euclidean_spearman value: 80.9448819964426 - type: manhattan_pearson value: 81.0351322685609 - type: manhattan_spearman value: 80.70192121844177 - task: type: STS dataset: name: MTEB STS17 (es-es) type: mteb/sts17-crosslingual-sts config: es-es split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.13820465980005 - type: cos_sim_spearman value: 86.73532498758757 - type: euclidean_pearson value: 87.21329451846637 - type: euclidean_spearman value: 86.57863198601002 - type: manhattan_pearson value: 87.06973713818554 - type: manhattan_spearman value: 86.47534918791499 - task: type: STS dataset: name: MTEB STS17 (fr-en) type: mteb/sts17-crosslingual-sts config: fr-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.48720108904415 - type: cos_sim_spearman value: 85.62221757068387 - type: euclidean_pearson value: 86.1010129512749 - type: euclidean_spearman value: 85.86580966509942 - type: manhattan_pearson value: 86.26800938808971 - type: manhattan_spearman value: 85.88902721678429 - task: type: STS dataset: name: MTEB STS17 (it-en) type: mteb/sts17-crosslingual-sts config: it-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 83.98021347333516 - type: cos_sim_spearman value: 84.53806553803501 - type: euclidean_pearson value: 84.61483347248364 - type: euclidean_spearman value: 85.14191408011702 - type: manhattan_pearson value: 84.75297588825967 - type: manhattan_spearman value: 85.33176753669242 - task: type: STS dataset: name: MTEB STS17 (nl-en) type: mteb/sts17-crosslingual-sts config: nl-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 84.51856644893233 - type: cos_sim_spearman value: 85.27510748506413 - type: euclidean_pearson value: 85.09886861540977 - type: euclidean_spearman value: 85.62579245860887 - type: manhattan_pearson value: 84.93017860464607 - type: manhattan_spearman value: 85.5063988898453 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.581573200584195 - type: cos_sim_spearman value: 63.05503590247928 - type: euclidean_pearson value: 63.652564812602094 - type: euclidean_spearman value: 62.64811520876156 - type: manhattan_pearson value: 63.506842893061076 - type: manhattan_spearman value: 62.51289573046917 - task: type: STS dataset: name: MTEB STS22 (de) type: mteb/sts22-crosslingual-sts config: de split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 48.2248801729127 - type: cos_sim_spearman value: 56.5936604678561 - type: euclidean_pearson value: 43.98149464089 - type: euclidean_spearman value: 56.108561882423615 - type: manhattan_pearson value: 43.86880305903564 - type: manhattan_spearman value: 56.04671150510166 - task: type: STS dataset: name: MTEB STS22 (es) type: mteb/sts22-crosslingual-sts config: es split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 55.17564527009831 - type: cos_sim_spearman value: 64.57978560979488 - type: euclidean_pearson value: 58.8818330154583 - type: euclidean_spearman value: 64.99214839071281 - type: manhattan_pearson value: 58.72671436121381 - type: manhattan_spearman value: 65.10713416616109 - task: type: STS dataset: name: MTEB STS22 (pl) type: mteb/sts22-crosslingual-sts config: pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 26.772131864023297 - type: cos_sim_spearman value: 34.68200792408681 - type: euclidean_pearson value: 16.68082419005441 - type: euclidean_spearman value: 34.83099932652166 - type: manhattan_pearson value: 16.52605949659529 - type: manhattan_spearman value: 34.82075801399475 - task: type: STS dataset: name: MTEB STS22 (tr) type: mteb/sts22-crosslingual-sts config: tr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 54.42415189043831 - type: cos_sim_spearman value: 63.54594264576758 - type: euclidean_pearson value: 57.36577498297745 - type: euclidean_spearman value: 63.111466379158074 - type: manhattan_pearson value: 57.584543715873885 - type: manhattan_spearman value: 63.22361054139183 - task: type: STS dataset: name: MTEB STS22 (ar) type: mteb/sts22-crosslingual-sts config: ar split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 47.55216762405518 - type: cos_sim_spearman value: 56.98670142896412 - type: euclidean_pearson value: 50.15318757562699 - type: euclidean_spearman value: 56.524941926541906 - type: manhattan_pearson value: 49.955618528674904 - type: manhattan_spearman value: 56.37102209240117 - task: type: STS dataset: name: MTEB STS22 (ru) type: mteb/sts22-crosslingual-sts config: ru split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 49.20540980338571 - type: cos_sim_spearman value: 59.9009453504406 - type: euclidean_pearson value: 49.557749853620535 - type: euclidean_spearman value: 59.76631621172456 - type: manhattan_pearson value: 49.62340591181147 - type: manhattan_spearman value: 59.94224880322436 - task: type: STS dataset: name: MTEB STS22 (zh) type: mteb/sts22-crosslingual-sts config: zh split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 51.508169956576985 - type: cos_sim_spearman value: 66.82461565306046 - type: euclidean_pearson value: 56.2274426480083 - type: euclidean_spearman value: 66.6775323848333 - type: manhattan_pearson value: 55.98277796300661 - type: manhattan_spearman value: 66.63669848497175 - task: type: STS dataset: name: MTEB STS22 (fr) type: mteb/sts22-crosslingual-sts config: fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 72.86478788045507 - type: cos_sim_spearman value: 76.7946552053193 - type: euclidean_pearson value: 75.01598530490269 - type: euclidean_spearman value: 76.83618917858281 - type: manhattan_pearson value: 74.68337628304332 - type: manhattan_spearman value: 76.57480204017773 - task: type: STS dataset: name: MTEB STS22 (de-en) type: mteb/sts22-crosslingual-sts config: de-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 55.922619099401984 - type: cos_sim_spearman value: 56.599362477240774 - type: euclidean_pearson value: 56.68307052369783 - type: euclidean_spearman value: 54.28760436777401 - type: manhattan_pearson value: 56.67763566500681 - type: manhattan_spearman value: 53.94619541711359 - task: type: STS dataset: name: MTEB STS22 (es-en) type: mteb/sts22-crosslingual-sts config: es-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 66.74357206710913 - type: cos_sim_spearman value: 72.5208244925311 - type: euclidean_pearson value: 67.49254562186032 - type: euclidean_spearman value: 72.02469076238683 - type: manhattan_pearson value: 67.45251772238085 - type: manhattan_spearman value: 72.05538819984538 - task: type: STS dataset: name: MTEB STS22 (it) type: mteb/sts22-crosslingual-sts config: it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 71.25734330033191 - type: cos_sim_spearman value: 76.98349083946823 - type: euclidean_pearson value: 73.71642838667736 - type: euclidean_spearman value: 77.01715504651384 - type: manhattan_pearson value: 73.61712711868105 - type: manhattan_spearman value: 77.01392571153896 - task: type: STS dataset: name: MTEB STS22 (pl-en) type: mteb/sts22-crosslingual-sts config: pl-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 63.18215462781212 - type: cos_sim_spearman value: 65.54373266117607 - type: euclidean_pearson value: 64.54126095439005 - type: euclidean_spearman value: 65.30410369102711 - type: manhattan_pearson value: 63.50332221148234 - type: manhattan_spearman value: 64.3455878104313 - task: type: STS dataset: name: MTEB STS22 (zh-en) type: mteb/sts22-crosslingual-sts config: zh-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.30509221440029 - type: cos_sim_spearman value: 65.99582704642478 - type: euclidean_pearson value: 63.43818859884195 - type: euclidean_spearman value: 66.83172582815764 - type: manhattan_pearson value: 63.055779168508764 - type: manhattan_spearman value: 65.49585020501449 - task: type: STS dataset: name: MTEB STS22 (es-it) type: mteb/sts22-crosslingual-sts config: es-it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 59.587830825340404 - type: cos_sim_spearman value: 68.93467614588089 - type: euclidean_pearson value: 62.3073527367404 - type: euclidean_spearman value: 69.69758171553175 - type: manhattan_pearson value: 61.9074580815789 - type: manhattan_spearman value: 69.57696375597865 - task: type: STS dataset: name: MTEB STS22 (de-fr) type: mteb/sts22-crosslingual-sts config: de-fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 57.143220125577066 - type: cos_sim_spearman value: 67.78857859159226 - type: euclidean_pearson value: 55.58225107923733 - type: euclidean_spearman value: 67.80662907184563 - type: manhattan_pearson value: 56.24953502726514 - type: manhattan_spearman value: 67.98262125431616 - task: type: STS dataset: name: MTEB STS22 (de-pl) type: mteb/sts22-crosslingual-sts config: de-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 21.826928900322066 - type: cos_sim_spearman value: 49.578506634400405 - type: euclidean_pearson value: 27.939890138843214 - type: euclidean_spearman value: 52.71950519136242 - type: manhattan_pearson value: 26.39878683847546 - type: manhattan_spearman value: 47.54609580342499 - task: type: STS dataset: name: MTEB STS22 (fr-pl) type: mteb/sts22-crosslingual-sts config: fr-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 57.27603854632001 - type: cos_sim_spearman value: 50.709255283710995 - type: euclidean_pearson value: 59.5419024445929 - type: euclidean_spearman value: 50.709255283710995 - type: manhattan_pearson value: 59.03256832438492 - type: manhattan_spearman value: 61.97797868009122 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.00757054859712 - type: cos_sim_spearman value: 87.29283629622222 - type: euclidean_pearson value: 86.54824171775536 - type: euclidean_spearman value: 87.24364730491402 - type: manhattan_pearson value: 86.5062156915074 - type: manhattan_spearman value: 87.15052170378574 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 82.03549357197389 - type: mrr value: 95.05437645143527 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 57.260999999999996 - type: map_at_10 value: 66.259 - type: map_at_100 value: 66.884 - type: map_at_1000 value: 66.912 - type: map_at_3 value: 63.685 - type: map_at_5 value: 65.35499999999999 - type: mrr_at_1 value: 60.333000000000006 - type: mrr_at_10 value: 67.5 - type: mrr_at_100 value: 68.013 - type: mrr_at_1000 value: 68.038 - type: mrr_at_3 value: 65.61099999999999 - type: mrr_at_5 value: 66.861 - type: ndcg_at_1 value: 60.333000000000006 - type: ndcg_at_10 value: 70.41 - type: ndcg_at_100 value: 73.10600000000001 - type: ndcg_at_1000 value: 73.846 - type: ndcg_at_3 value: 66.133 - type: ndcg_at_5 value: 68.499 - type: precision_at_1 value: 60.333000000000006 - type: precision_at_10 value: 9.232999999999999 - type: precision_at_100 value: 1.0630000000000002 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 25.667 - type: precision_at_5 value: 17.067 - type: recall_at_1 value: 57.260999999999996 - type: recall_at_10 value: 81.94399999999999 - type: recall_at_100 value: 93.867 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 70.339 - type: recall_at_5 value: 76.25 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.74356435643564 - type: cos_sim_ap value: 93.13411948212683 - type: cos_sim_f1 value: 86.80521991300147 - type: cos_sim_precision value: 84.00374181478017 - type: cos_sim_recall value: 89.8 - type: dot_accuracy value: 99.67920792079208 - type: dot_ap value: 89.27277565444479 - type: dot_f1 value: 83.9276990718124 - type: dot_precision value: 82.04393505253104 - type: dot_recall value: 85.9 - type: euclidean_accuracy value: 99.74257425742574 - type: euclidean_ap value: 93.17993008259062 - type: euclidean_f1 value: 86.69396110542476 - type: euclidean_precision value: 88.78406708595388 - type: euclidean_recall value: 84.7 - type: manhattan_accuracy value: 99.74257425742574 - type: manhattan_ap value: 93.14413755550099 - type: manhattan_f1 value: 86.82483594144371 - type: manhattan_precision value: 87.66564729867483 - type: manhattan_recall value: 86 - type: max_accuracy value: 99.74356435643564 - type: max_ap value: 93.17993008259062 - type: max_f1 value: 86.82483594144371 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 57.525863806168566 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 32.68850574423839 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.71580650644033 - type: mrr value: 50.50971903913081 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.152190498799484 - type: cos_sim_spearman value: 29.686180371952727 - type: dot_pearson value: 27.248664793816342 - type: dot_spearman value: 28.37748983721745 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.20400000000000001 - type: map_at_10 value: 1.6209999999999998 - type: map_at_100 value: 9.690999999999999 - type: map_at_1000 value: 23.733 - type: map_at_3 value: 0.575 - type: map_at_5 value: 0.885 - type: mrr_at_1 value: 78 - type: mrr_at_10 value: 86.56700000000001 - type: mrr_at_100 value: 86.56700000000001 - type: mrr_at_1000 value: 86.56700000000001 - type: mrr_at_3 value: 85.667 - type: mrr_at_5 value: 86.56700000000001 - type: ndcg_at_1 value: 76 - type: ndcg_at_10 value: 71.326 - type: ndcg_at_100 value: 54.208999999999996 - type: ndcg_at_1000 value: 49.252 - type: ndcg_at_3 value: 74.235 - type: ndcg_at_5 value: 73.833 - type: precision_at_1 value: 78 - type: precision_at_10 value: 74.8 - type: precision_at_100 value: 55.50000000000001 - type: precision_at_1000 value: 21.836 - type: precision_at_3 value: 78 - type: precision_at_5 value: 78 - type: recall_at_1 value: 0.20400000000000001 - type: recall_at_10 value: 1.894 - type: recall_at_100 value: 13.245999999999999 - type: recall_at_1000 value: 46.373 - type: recall_at_3 value: 0.613 - type: recall_at_5 value: 0.991 - task: type: BitextMining dataset: name: MTEB Tatoeba (sqi-eng) type: mteb/tatoeba-bitext-mining config: sqi-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.89999999999999 - type: f1 value: 94.69999999999999 - type: precision value: 94.11666666666667 - type: recall value: 95.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (fry-eng) type: mteb/tatoeba-bitext-mining config: fry-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 68.20809248554913 - type: f1 value: 63.431048720066066 - type: precision value: 61.69143958161298 - type: recall value: 68.20809248554913 - task: type: BitextMining dataset: name: MTEB Tatoeba (kur-eng) type: mteb/tatoeba-bitext-mining config: kur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 71.21951219512195 - type: f1 value: 66.82926829268293 - type: precision value: 65.1260162601626 - type: recall value: 71.21951219512195 - task: type: BitextMining dataset: name: MTEB Tatoeba (tur-eng) type: mteb/tatoeba-bitext-mining config: tur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.2 - type: f1 value: 96.26666666666667 - type: precision value: 95.8 - type: recall value: 97.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (deu-eng) type: mteb/tatoeba-bitext-mining config: deu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 99.3 - type: f1 value: 99.06666666666666 - type: precision value: 98.95 - type: recall value: 99.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (nld-eng) type: mteb/tatoeba-bitext-mining config: nld-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.63333333333333 - type: precision value: 96.26666666666668 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ron-eng) type: mteb/tatoeba-bitext-mining config: ron-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96 - type: f1 value: 94.86666666666666 - type: precision value: 94.31666666666668 - type: recall value: 96 - task: type: BitextMining dataset: name: MTEB Tatoeba (ang-eng) type: mteb/tatoeba-bitext-mining config: ang-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 47.01492537313433 - type: f1 value: 40.178867566927266 - type: precision value: 38.179295828549556 - type: recall value: 47.01492537313433 - task: type: BitextMining dataset: name: MTEB Tatoeba (ido-eng) type: mteb/tatoeba-bitext-mining config: ido-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.5 - type: f1 value: 83.62537480063796 - type: precision value: 82.44555555555554 - type: recall value: 86.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (jav-eng) type: mteb/tatoeba-bitext-mining config: jav-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.48780487804879 - type: f1 value: 75.45644599303138 - type: precision value: 73.37398373983739 - type: recall value: 80.48780487804879 - task: type: BitextMining dataset: name: MTEB Tatoeba (isl-eng) type: mteb/tatoeba-bitext-mining config: isl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.95666666666666 - type: precision value: 91.125 - type: recall value: 93.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (slv-eng) type: mteb/tatoeba-bitext-mining config: slv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.73754556500607 - type: f1 value: 89.65168084244632 - type: precision value: 88.73025516403402 - type: recall value: 91.73754556500607 - task: type: BitextMining dataset: name: MTEB Tatoeba (cym-eng) type: mteb/tatoeba-bitext-mining config: cym-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81.04347826086956 - type: f1 value: 76.2128364389234 - type: precision value: 74.2 - type: recall value: 81.04347826086956 - task: type: BitextMining dataset: name: MTEB Tatoeba (kaz-eng) type: mteb/tatoeba-bitext-mining config: kaz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.65217391304348 - type: f1 value: 79.4376811594203 - type: precision value: 77.65797101449274 - type: recall value: 83.65217391304348 - task: type: BitextMining dataset: name: MTEB Tatoeba (est-eng) type: mteb/tatoeba-bitext-mining config: est-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.5 - type: f1 value: 85.02690476190476 - type: precision value: 83.96261904761904 - type: recall value: 87.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (heb-eng) type: mteb/tatoeba-bitext-mining config: heb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.3 - type: f1 value: 86.52333333333333 - type: precision value: 85.22833333333332 - type: recall value: 89.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (gla-eng) type: mteb/tatoeba-bitext-mining config: gla-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.01809408926418 - type: f1 value: 59.00594446432805 - type: precision value: 56.827215807915444 - type: recall value: 65.01809408926418 - task: type: BitextMining dataset: name: MTEB Tatoeba (mar-eng) type: mteb/tatoeba-bitext-mining config: mar-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.2 - type: f1 value: 88.58 - type: precision value: 87.33333333333334 - type: recall value: 91.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (lat-eng) type: mteb/tatoeba-bitext-mining config: lat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 59.199999999999996 - type: f1 value: 53.299166276284915 - type: precision value: 51.3383908045977 - type: recall value: 59.199999999999996 - task: type: BitextMining dataset: name: MTEB Tatoeba (bel-eng) type: mteb/tatoeba-bitext-mining config: bel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.2 - type: f1 value: 91.2 - type: precision value: 90.25 - type: recall value: 93.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (pms-eng) type: mteb/tatoeba-bitext-mining config: pms-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 64.76190476190476 - type: f1 value: 59.867110667110666 - type: precision value: 58.07390192653351 - type: recall value: 64.76190476190476 - task: type: BitextMining dataset: name: MTEB Tatoeba (gle-eng) type: mteb/tatoeba-bitext-mining config: gle-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.2 - type: f1 value: 71.48147546897547 - type: precision value: 69.65409090909091 - type: recall value: 76.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (pes-eng) type: mteb/tatoeba-bitext-mining config: pes-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.8 - type: f1 value: 92.14 - type: precision value: 91.35833333333333 - type: recall value: 93.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (nob-eng) type: mteb/tatoeba-bitext-mining config: nob-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.89999999999999 - type: f1 value: 97.2 - type: precision value: 96.85000000000001 - type: recall value: 97.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (bul-eng) type: mteb/tatoeba-bitext-mining config: bul-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 92.93333333333334 - type: precision value: 92.13333333333333 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cbk-eng) type: mteb/tatoeba-bitext-mining config: cbk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.1 - type: f1 value: 69.14817460317461 - type: precision value: 67.2515873015873 - type: recall value: 74.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (hun-eng) type: mteb/tatoeba-bitext-mining config: hun-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.19999999999999 - type: f1 value: 94.01333333333335 - type: precision value: 93.46666666666667 - type: recall value: 95.19999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (uig-eng) type: mteb/tatoeba-bitext-mining config: uig-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.9 - type: f1 value: 72.07523809523809 - type: precision value: 70.19777777777779 - type: recall value: 76.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (rus-eng) type: mteb/tatoeba-bitext-mining config: rus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.1 - type: f1 value: 92.31666666666666 - type: precision value: 91.43333333333332 - type: recall value: 94.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (spa-eng) type: mteb/tatoeba-bitext-mining config: spa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.1 - type: precision value: 96.76666666666668 - type: recall value: 97.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (hye-eng) type: mteb/tatoeba-bitext-mining config: hye-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.85714285714286 - type: f1 value: 90.92093441150045 - type: precision value: 90.00449236298293 - type: recall value: 92.85714285714286 - task: type: BitextMining dataset: name: MTEB Tatoeba (tel-eng) type: mteb/tatoeba-bitext-mining config: tel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.16239316239316 - type: f1 value: 91.33903133903132 - type: precision value: 90.56267806267806 - type: recall value: 93.16239316239316 - task: type: BitextMining dataset: name: MTEB Tatoeba (afr-eng) type: mteb/tatoeba-bitext-mining config: afr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.4 - type: f1 value: 90.25666666666666 - type: precision value: 89.25833333333334 - type: recall value: 92.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (mon-eng) type: mteb/tatoeba-bitext-mining config: mon-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.22727272727272 - type: f1 value: 87.53030303030303 - type: precision value: 86.37121212121211 - type: recall value: 90.22727272727272 - task: type: BitextMining dataset: name: MTEB Tatoeba (arz-eng) type: mteb/tatoeba-bitext-mining config: arz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 79.03563941299791 - type: f1 value: 74.7349505840072 - type: precision value: 72.9035639412998 - type: recall value: 79.03563941299791 - task: type: BitextMining dataset: name: MTEB Tatoeba (hrv-eng) type: mteb/tatoeba-bitext-mining config: hrv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97 - type: f1 value: 96.15 - type: precision value: 95.76666666666668 - type: recall value: 97 - task: type: BitextMining dataset: name: MTEB Tatoeba (nov-eng) type: mteb/tatoeba-bitext-mining config: nov-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.26459143968872 - type: f1 value: 71.55642023346303 - type: precision value: 69.7544932369835 - type: recall value: 76.26459143968872 - task: type: BitextMining dataset: name: MTEB Tatoeba (gsw-eng) type: mteb/tatoeba-bitext-mining config: gsw-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 58.119658119658126 - type: f1 value: 51.65242165242165 - type: precision value: 49.41768108434775 - type: recall value: 58.119658119658126 - task: type: BitextMining dataset: name: MTEB Tatoeba (nds-eng) type: mteb/tatoeba-bitext-mining config: nds-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.3 - type: f1 value: 69.52055555555555 - type: precision value: 67.7574938949939 - type: recall value: 74.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (ukr-eng) type: mteb/tatoeba-bitext-mining config: ukr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.8 - type: f1 value: 93.31666666666666 - type: precision value: 92.60000000000001 - type: recall value: 94.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (uzb-eng) type: mteb/tatoeba-bitext-mining config: uzb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.63551401869158 - type: f1 value: 72.35202492211837 - type: precision value: 70.60358255451713 - type: recall value: 76.63551401869158 - task: type: BitextMining dataset: name: MTEB Tatoeba (lit-eng) type: mteb/tatoeba-bitext-mining config: lit-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.4 - type: f1 value: 88.4811111111111 - type: precision value: 87.7452380952381 - type: recall value: 90.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (ina-eng) type: mteb/tatoeba-bitext-mining config: ina-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95 - type: f1 value: 93.60666666666667 - type: precision value: 92.975 - type: recall value: 95 - task: type: BitextMining dataset: name: MTEB Tatoeba (lfn-eng) type: mteb/tatoeba-bitext-mining config: lfn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.2 - type: f1 value: 63.01595782872099 - type: precision value: 61.596587301587306 - type: recall value: 67.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (zsm-eng) type: mteb/tatoeba-bitext-mining config: zsm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.7 - type: f1 value: 94.52999999999999 - type: precision value: 94 - type: recall value: 95.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (ita-eng) type: mteb/tatoeba-bitext-mining config: ita-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 93.28999999999999 - type: precision value: 92.675 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cmn-eng) type: mteb/tatoeba-bitext-mining config: cmn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.28333333333333 - type: precision value: 94.75 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (lvs-eng) type: mteb/tatoeba-bitext-mining config: lvs-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.9 - type: f1 value: 89.83 - type: precision value: 88.92 - type: recall value: 91.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (glg-eng) type: mteb/tatoeba-bitext-mining config: glg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.69999999999999 - type: f1 value: 93.34222222222223 - type: precision value: 92.75416666666668 - type: recall value: 94.69999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ceb-eng) type: mteb/tatoeba-bitext-mining config: ceb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 60.333333333333336 - type: f1 value: 55.31203703703703 - type: precision value: 53.39971108326371 - type: recall value: 60.333333333333336 - task: type: BitextMining dataset: name: MTEB Tatoeba (bre-eng) type: mteb/tatoeba-bitext-mining config: bre-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 12.9 - type: f1 value: 11.099861903031458 - type: precision value: 10.589187932631877 - type: recall value: 12.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (ben-eng) type: mteb/tatoeba-bitext-mining config: ben-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.7 - type: f1 value: 83.0152380952381 - type: precision value: 81.37833333333333 - type: recall value: 86.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (swg-eng) type: mteb/tatoeba-bitext-mining config: swg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.39285714285714 - type: f1 value: 56.832482993197274 - type: precision value: 54.56845238095237 - type: recall value: 63.39285714285714 - task: type: BitextMining dataset: name: MTEB Tatoeba (arq-eng) type: mteb/tatoeba-bitext-mining config: arq-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 48.73765093304062 - type: f1 value: 41.555736920720456 - type: precision value: 39.06874531737319 - type: recall value: 48.73765093304062 - task: type: BitextMining dataset: name: MTEB Tatoeba (kab-eng) type: mteb/tatoeba-bitext-mining config: kab-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 41.099999999999994 - type: f1 value: 36.540165945165946 - type: precision value: 35.05175685425686 - type: recall value: 41.099999999999994 - task: type: BitextMining dataset: name: MTEB Tatoeba (fra-eng) type: mteb/tatoeba-bitext-mining config: fra-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.42333333333333 - type: precision value: 92.75833333333333 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (por-eng) type: mteb/tatoeba-bitext-mining config: por-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.63333333333334 - type: precision value: 93.01666666666665 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tat-eng) type: mteb/tatoeba-bitext-mining config: tat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.9 - type: f1 value: 73.64833333333334 - type: precision value: 71.90282106782105 - type: recall value: 77.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (oci-eng) type: mteb/tatoeba-bitext-mining config: oci-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 59.4 - type: f1 value: 54.90521367521367 - type: precision value: 53.432840025471606 - type: recall value: 59.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (pol-eng) type: mteb/tatoeba-bitext-mining config: pol-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.6 - type: precision value: 96.2 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (war-eng) type: mteb/tatoeba-bitext-mining config: war-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.2 - type: f1 value: 62.25926129426129 - type: precision value: 60.408376623376626 - type: recall value: 67.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (aze-eng) type: mteb/tatoeba-bitext-mining config: aze-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.2 - type: f1 value: 87.60666666666667 - type: precision value: 86.45277777777778 - type: recall value: 90.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (vie-eng) type: mteb/tatoeba-bitext-mining config: vie-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.7 - type: f1 value: 97 - type: precision value: 96.65 - type: recall value: 97.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (nno-eng) type: mteb/tatoeba-bitext-mining config: nno-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.2 - type: f1 value: 91.39746031746031 - type: precision value: 90.6125 - type: recall value: 93.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (cha-eng) type: mteb/tatoeba-bitext-mining config: cha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 32.11678832116788 - type: f1 value: 27.210415386260234 - type: precision value: 26.20408990846947 - type: recall value: 32.11678832116788 - task: type: BitextMining dataset: name: MTEB Tatoeba (mhr-eng) type: mteb/tatoeba-bitext-mining config: mhr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.5 - type: f1 value: 6.787319277832475 - type: precision value: 6.3452094433344435 - type: recall value: 8.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (dan-eng) type: mteb/tatoeba-bitext-mining config: dan-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 95.08 - type: precision value: 94.61666666666667 - type: recall value: 96.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (ell-eng) type: mteb/tatoeba-bitext-mining config: ell-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.3 - type: f1 value: 93.88333333333333 - type: precision value: 93.18333333333332 - type: recall value: 95.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (amh-eng) type: mteb/tatoeba-bitext-mining config: amh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.11904761904762 - type: f1 value: 80.69444444444444 - type: precision value: 78.72023809523809 - type: recall value: 85.11904761904762 - task: type: BitextMining dataset: name: MTEB Tatoeba (pam-eng) type: mteb/tatoeba-bitext-mining config: pam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 11.1 - type: f1 value: 9.276381801735853 - type: precision value: 8.798174603174601 - type: recall value: 11.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (hsb-eng) type: mteb/tatoeba-bitext-mining config: hsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.56107660455487 - type: f1 value: 58.70433569191332 - type: precision value: 56.896926581464015 - type: recall value: 63.56107660455487 - task: type: BitextMining dataset: name: MTEB Tatoeba (srp-eng) type: mteb/tatoeba-bitext-mining config: srp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.69999999999999 - type: f1 value: 93.10000000000001 - type: precision value: 92.35 - type: recall value: 94.69999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (epo-eng) type: mteb/tatoeba-bitext-mining config: epo-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.8 - type: f1 value: 96.01222222222222 - type: precision value: 95.67083333333332 - type: recall value: 96.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (kzj-eng) type: mteb/tatoeba-bitext-mining config: kzj-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 9.2 - type: f1 value: 7.911555250305249 - type: precision value: 7.631246556216846 - type: recall value: 9.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (awa-eng) type: mteb/tatoeba-bitext-mining config: awa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.48917748917748 - type: f1 value: 72.27375798804371 - type: precision value: 70.14430014430013 - type: recall value: 77.48917748917748 - task: type: BitextMining dataset: name: MTEB Tatoeba (fao-eng) type: mteb/tatoeba-bitext-mining config: fao-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.09923664122137 - type: f1 value: 72.61541257724463 - type: precision value: 70.8998380754106 - type: recall value: 77.09923664122137 - task: type: BitextMining dataset: name: MTEB Tatoeba (mal-eng) type: mteb/tatoeba-bitext-mining config: mal-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.2532751091703 - type: f1 value: 97.69529354682193 - type: precision value: 97.42843279961184 - type: recall value: 98.2532751091703 - task: type: BitextMining dataset: name: MTEB Tatoeba (ile-eng) type: mteb/tatoeba-bitext-mining config: ile-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.8 - type: f1 value: 79.14672619047619 - type: precision value: 77.59489247311828 - type: recall value: 82.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (bos-eng) type: mteb/tatoeba-bitext-mining config: bos-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.35028248587571 - type: f1 value: 92.86252354048965 - type: precision value: 92.2080979284369 - type: recall value: 94.35028248587571 - task: type: BitextMining dataset: name: MTEB Tatoeba (cor-eng) type: mteb/tatoeba-bitext-mining config: cor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.5 - type: f1 value: 6.282429263935621 - type: precision value: 5.783274240739785 - type: recall value: 8.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (cat-eng) type: mteb/tatoeba-bitext-mining config: cat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.7 - type: f1 value: 91.025 - type: precision value: 90.30428571428571 - type: recall value: 92.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (eus-eng) type: mteb/tatoeba-bitext-mining config: eus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81 - type: f1 value: 77.8232380952381 - type: precision value: 76.60194444444444 - type: recall value: 81 - task: type: BitextMining dataset: name: MTEB Tatoeba (yue-eng) type: mteb/tatoeba-bitext-mining config: yue-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91 - type: f1 value: 88.70857142857142 - type: precision value: 87.7 - type: recall value: 91 - task: type: BitextMining dataset: name: MTEB Tatoeba (swe-eng) type: mteb/tatoeba-bitext-mining config: swe-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.3 - type: precision value: 94.76666666666667 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (dtp-eng) type: mteb/tatoeba-bitext-mining config: dtp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.1 - type: f1 value: 7.001008218834307 - type: precision value: 6.708329562594269 - type: recall value: 8.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (kat-eng) type: mteb/tatoeba-bitext-mining config: kat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.1313672922252 - type: f1 value: 84.09070598748882 - type: precision value: 82.79171454104429 - type: recall value: 87.1313672922252 - task: type: BitextMining dataset: name: MTEB Tatoeba (jpn-eng) type: mteb/tatoeba-bitext-mining config: jpn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.28333333333333 - type: precision value: 94.73333333333332 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (csb-eng) type: mteb/tatoeba-bitext-mining config: csb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 42.29249011857708 - type: f1 value: 36.981018542283365 - type: precision value: 35.415877813576024 - type: recall value: 42.29249011857708 - task: type: BitextMining dataset: name: MTEB Tatoeba (xho-eng) type: mteb/tatoeba-bitext-mining config: xho-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.80281690140845 - type: f1 value: 80.86854460093896 - type: precision value: 79.60093896713614 - type: recall value: 83.80281690140845 - task: type: BitextMining dataset: name: MTEB Tatoeba (orv-eng) type: mteb/tatoeba-bitext-mining config: orv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 45.26946107784431 - type: f1 value: 39.80235464678088 - type: precision value: 38.14342660001342 - type: recall value: 45.26946107784431 - task: type: BitextMining dataset: name: MTEB Tatoeba (ind-eng) type: mteb/tatoeba-bitext-mining config: ind-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.3 - type: f1 value: 92.9 - type: precision value: 92.26666666666668 - type: recall value: 94.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (tuk-eng) type: mteb/tatoeba-bitext-mining config: tuk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 37.93103448275862 - type: f1 value: 33.15192743764172 - type: precision value: 31.57456528146183 - type: recall value: 37.93103448275862 - task: type: BitextMining dataset: name: MTEB Tatoeba (max-eng) type: mteb/tatoeba-bitext-mining config: max-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 69.01408450704226 - type: f1 value: 63.41549295774648 - type: precision value: 61.342778895595806 - type: recall value: 69.01408450704226 - task: type: BitextMining dataset: name: MTEB Tatoeba (swh-eng) type: mteb/tatoeba-bitext-mining config: swh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.66666666666667 - type: f1 value: 71.60705960705961 - type: precision value: 69.60683760683762 - type: recall value: 76.66666666666667 - task: type: BitextMining dataset: name: MTEB Tatoeba (hin-eng) type: mteb/tatoeba-bitext-mining config: hin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.8 - type: f1 value: 94.48333333333333 - type: precision value: 93.83333333333333 - type: recall value: 95.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (dsb-eng) type: mteb/tatoeba-bitext-mining config: dsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 52.81837160751566 - type: f1 value: 48.435977731384824 - type: precision value: 47.11291973845539 - type: recall value: 52.81837160751566 - task: type: BitextMining dataset: name: MTEB Tatoeba (ber-eng) type: mteb/tatoeba-bitext-mining config: ber-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 44.9 - type: f1 value: 38.88962621607783 - type: precision value: 36.95936507936508 - type: recall value: 44.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (tam-eng) type: mteb/tatoeba-bitext-mining config: tam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.55374592833876 - type: f1 value: 88.22553125484721 - type: precision value: 87.26927252985884 - type: recall value: 90.55374592833876 - task: type: BitextMining dataset: name: MTEB Tatoeba (slk-eng) type: mteb/tatoeba-bitext-mining config: slk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 93.13333333333333 - type: precision value: 92.45333333333333 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (tgl-eng) type: mteb/tatoeba-bitext-mining config: tgl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.99666666666667 - type: precision value: 91.26666666666668 - type: recall value: 93.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (ast-eng) type: mteb/tatoeba-bitext-mining config: ast-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.03937007874016 - type: f1 value: 81.75853018372703 - type: precision value: 80.34120734908137 - type: recall value: 85.03937007874016 - task: type: BitextMining dataset: name: MTEB Tatoeba (mkd-eng) type: mteb/tatoeba-bitext-mining config: mkd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.3 - type: f1 value: 85.5 - type: precision value: 84.25833333333334 - type: recall value: 88.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (khm-eng) type: mteb/tatoeba-bitext-mining config: khm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.51246537396122 - type: f1 value: 60.02297410192148 - type: precision value: 58.133467727289236 - type: recall value: 65.51246537396122 - task: type: BitextMining dataset: name: MTEB Tatoeba (ces-eng) type: mteb/tatoeba-bitext-mining config: ces-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96 - type: f1 value: 94.89 - type: precision value: 94.39166666666667 - type: recall value: 96 - task: type: BitextMining dataset: name: MTEB Tatoeba (tzl-eng) type: mteb/tatoeba-bitext-mining config: tzl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 57.692307692307686 - type: f1 value: 53.162393162393165 - type: precision value: 51.70673076923077 - type: recall value: 57.692307692307686 - task: type: BitextMining dataset: name: MTEB Tatoeba (urd-eng) type: mteb/tatoeba-bitext-mining config: urd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.60000000000001 - type: f1 value: 89.21190476190475 - type: precision value: 88.08666666666667 - type: recall value: 91.60000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (ara-eng) type: mteb/tatoeba-bitext-mining config: ara-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88 - type: f1 value: 85.47 - type: precision value: 84.43266233766234 - type: recall value: 88 - task: type: BitextMining dataset: name: MTEB Tatoeba (kor-eng) type: mteb/tatoeba-bitext-mining config: kor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.7 - type: f1 value: 90.64999999999999 - type: precision value: 89.68333333333332 - type: recall value: 92.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (yid-eng) type: mteb/tatoeba-bitext-mining config: yid-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.30660377358491 - type: f1 value: 76.33044137466307 - type: precision value: 74.78970125786164 - type: recall value: 80.30660377358491 - task: type: BitextMining dataset: name: MTEB Tatoeba (fin-eng) type: mteb/tatoeba-bitext-mining config: fin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.44 - type: precision value: 94.99166666666666 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tha-eng) type: mteb/tatoeba-bitext-mining config: tha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.53284671532847 - type: f1 value: 95.37712895377129 - type: precision value: 94.7992700729927 - type: recall value: 96.53284671532847 - task: type: BitextMining dataset: name: MTEB Tatoeba (wuu-eng) type: mteb/tatoeba-bitext-mining config: wuu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89 - type: f1 value: 86.23190476190476 - type: precision value: 85.035 - type: recall value: 89 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.585 - type: map_at_10 value: 9.012 - type: map_at_100 value: 14.027000000000001 - type: map_at_1000 value: 15.565000000000001 - type: map_at_3 value: 5.032 - type: map_at_5 value: 6.657 - type: mrr_at_1 value: 28.571 - type: mrr_at_10 value: 45.377 - type: mrr_at_100 value: 46.119 - type: mrr_at_1000 value: 46.127 - type: mrr_at_3 value: 41.156 - type: mrr_at_5 value: 42.585 - type: ndcg_at_1 value: 27.551 - type: ndcg_at_10 value: 23.395 - type: ndcg_at_100 value: 33.342 - type: ndcg_at_1000 value: 45.523 - type: ndcg_at_3 value: 25.158 - type: ndcg_at_5 value: 23.427 - type: precision_at_1 value: 28.571 - type: precision_at_10 value: 21.429000000000002 - type: precision_at_100 value: 6.714 - type: precision_at_1000 value: 1.473 - type: precision_at_3 value: 27.211000000000002 - type: precision_at_5 value: 24.490000000000002 - type: recall_at_1 value: 2.585 - type: recall_at_10 value: 15.418999999999999 - type: recall_at_100 value: 42.485 - type: recall_at_1000 value: 79.536 - type: recall_at_3 value: 6.239999999999999 - type: recall_at_5 value: 8.996 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.3234 - type: ap value: 14.361688653847423 - type: f1 value: 54.819068624319044 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.97792869269949 - type: f1 value: 62.28965628513728 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 38.90540145385218 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.53513739047506 - type: cos_sim_ap value: 75.27741586677557 - type: cos_sim_f1 value: 69.18792902473774 - type: cos_sim_precision value: 67.94708725515136 - type: cos_sim_recall value: 70.47493403693932 - type: dot_accuracy value: 84.7052512368123 - type: dot_ap value: 69.36075482849378 - type: dot_f1 value: 64.44688376631296 - type: dot_precision value: 59.92288500793831 - type: dot_recall value: 69.70976253298153 - type: euclidean_accuracy value: 86.60666388508076 - type: euclidean_ap value: 75.47512772621097 - type: euclidean_f1 value: 69.413872536473 - type: euclidean_precision value: 67.39562624254472 - type: euclidean_recall value: 71.55672823218997 - type: manhattan_accuracy value: 86.52917684925792 - type: manhattan_ap value: 75.34000110496703 - type: manhattan_f1 value: 69.28489190226429 - type: manhattan_precision value: 67.24608889992551 - type: manhattan_recall value: 71.45118733509234 - type: max_accuracy value: 86.60666388508076 - type: max_ap value: 75.47512772621097 - type: max_f1 value: 69.413872536473 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.01695967710637 - type: cos_sim_ap value: 85.8298270742901 - type: cos_sim_f1 value: 78.46988128389272 - type: cos_sim_precision value: 74.86017897091722 - type: cos_sim_recall value: 82.44533415460425 - type: dot_accuracy value: 88.19420188613343 - type: dot_ap value: 83.82679165901324 - type: dot_f1 value: 76.55833777304208 - type: dot_precision value: 75.6884875846501 - type: dot_recall value: 77.44841392054204 - type: euclidean_accuracy value: 89.03054294252338 - type: euclidean_ap value: 85.89089555185325 - type: euclidean_f1 value: 78.62997658079624 - type: euclidean_precision value: 74.92329149232914 - type: euclidean_recall value: 82.72251308900523 - type: manhattan_accuracy value: 89.0266620095471 - type: manhattan_ap value: 85.86458997929147 - type: manhattan_f1 value: 78.50685331000291 - type: manhattan_precision value: 74.5499861534201 - type: manhattan_recall value: 82.90729904527257 - type: max_accuracy value: 89.03054294252338 - type: max_ap value: 85.89089555185325 - type: max_f1 value: 78.62997658079624 --- ## Multilingual-E5-large [Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/pdf/2402.05672). Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024 This model has 24 layers and the embedding size is 1024. ## Usage Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ", even for non-English texts. # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = ['query: how much protein should a female eat', 'query: 南瓜的家常做法', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"] tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-large') model = AutoModel.from_pretrained('intfloat/multilingual-e5-large') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Supported Languages This model is initialized from [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and continually trained on a mixture of multilingual datasets. It supports 100 languages from xlm-roberta, but low-resource languages may see performance degradation. ## Training Details **Initialization**: [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) **First stage**: contrastive pre-training with weak supervision | Dataset | Weak supervision | # of text pairs | |--------------------------------------------------------------------------------------------------------|---------------------------------------|-----------------| | Filtered [mC4](https://huggingface.co/datasets/mc4) | (title, page content) | 1B | | [CC News](https://huggingface.co/datasets/intfloat/multilingual_cc_news) | (title, news content) | 400M | | [NLLB](https://huggingface.co/datasets/allenai/nllb) | translation pairs | 2.4B | | [Wikipedia](https://huggingface.co/datasets/intfloat/wikipedia) | (hierarchical section title, passage) | 150M | | Filtered [Reddit](https://www.reddit.com/) | (comment, response) | 800M | | [S2ORC](https://github.com/allenai/s2orc) | (title, abstract) and citation pairs | 100M | | [Stackexchange](https://stackexchange.com/) | (question, answer) | 50M | | [xP3](https://huggingface.co/datasets/bigscience/xP3) | (input prompt, response) | 80M | | [Miscellaneous unsupervised SBERT data](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | - | 10M | **Second stage**: supervised fine-tuning | Dataset | Language | # of text pairs | |----------------------------------------------------------------------------------------|--------------|-----------------| | [MS MARCO](https://microsoft.github.io/msmarco/) | English | 500k | | [NQ](https://github.com/facebookresearch/DPR) | English | 70k | | [Trivia QA](https://github.com/facebookresearch/DPR) | English | 60k | | [NLI from SimCSE](https://github.com/princeton-nlp/SimCSE) | English | <300k | | [ELI5](https://huggingface.co/datasets/eli5) | English | 500k | | [DuReader Retrieval](https://github.com/baidu/DuReader/tree/master/DuReader-Retrieval) | Chinese | 86k | | [KILT Fever](https://huggingface.co/datasets/kilt_tasks) | English | 70k | | [KILT HotpotQA](https://huggingface.co/datasets/kilt_tasks) | English | 70k | | [SQuAD](https://huggingface.co/datasets/squad) | English | 87k | | [Quora](https://huggingface.co/datasets/quora) | English | 150k | | [Mr. TyDi](https://huggingface.co/datasets/castorini/mr-tydi) | 11 languages | 50k | | [MIRACL](https://huggingface.co/datasets/miracl/miracl) | 16 languages | 40k | For all labeled datasets, we only use its training set for fine-tuning. For other training details, please refer to our paper at [https://arxiv.org/pdf/2402.05672](https://arxiv.org/pdf/2402.05672). ## Benchmark Results on [Mr. TyDi](https://arxiv.org/abs/2108.08787) | Model | Avg MRR@10 | | ar | bn | en | fi | id | ja | ko | ru | sw | te | th | |-----------------------|------------|-------|------| --- | --- | --- | --- | --- | --- | --- |------| --- | --- | | BM25 | 33.3 | | 36.7 | 41.3 | 15.1 | 28.8 | 38.2 | 21.7 | 28.1 | 32.9 | 39.6 | 42.4 | 41.7 | | mDPR | 16.7 | | 26.0 | 25.8 | 16.2 | 11.3 | 14.6 | 18.1 | 21.9 | 18.5 | 7.3 | 10.6 | 13.5 | | BM25 + mDPR | 41.7 | | 49.1 | 53.5 | 28.4 | 36.5 | 45.5 | 35.5 | 36.2 | 42.7 | 40.5 | 42.0 | 49.2 | | | | | multilingual-e5-small | 64.4 | | 71.5 | 66.3 | 54.5 | 57.7 | 63.2 | 55.4 | 54.3 | 60.8 | 65.4 | 89.1 | 70.1 | | multilingual-e5-base | 65.9 | | 72.3 | 65.0 | 58.5 | 60.8 | 64.9 | 56.6 | 55.8 | 62.7 | 69.0 | 86.6 | 72.7 | | multilingual-e5-large | **70.5** | | 77.5 | 73.2 | 60.8 | 66.8 | 68.5 | 62.5 | 61.6 | 65.8 | 72.7 | 90.2 | 76.2 | ## MTEB Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## Support for Sentence Transformers Below is an example for usage with sentence_transformers. ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('intfloat/multilingual-e5-large') input_texts = [ 'query: how much protein should a female eat', 'query: 南瓜的家常做法', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 i s 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or traini ng for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮 ,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右, 放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油 锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅" ] embeddings = model.encode(input_texts, normalize_embeddings=True) ``` Package requirements `pip install sentence_transformers~=2.2.2` Contributors: [michaelfeil](https://huggingface.co/michaelfeil) ## FAQ **1. Do I need to add the prefix "query: " and "passage: " to input texts?** Yes, this is how the model is trained, otherwise you will see a performance degradation. Here are some rules of thumb: - Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval. - Use "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, paraphrase retrieval. - Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering. **2. Why are my reproduced results slightly different from reported in the model card?** Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences. **3. Why does the cosine similarity scores distribute around 0.7 to 1.0?** This is a known and expected behavior as we use a low temperature 0.01 for InfoNCE contrastive loss. For text embedding tasks like text retrieval or semantic similarity, what matters is the relative order of the scores instead of the absolute values, so this should not be an issue. ## Citation If you find our paper or models helpful, please consider cite as follows: ``` @article{wang2024multilingual, title={Multilingual E5 Text Embeddings: A Technical Report}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2402.05672}, year={2024} } ``` ## Limitations Long texts will be truncated to at most 512 tokens.
[ "SEMANTIC_SIMILARITY", "TRANSLATION", "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
DAMO-NLP-SG/mt-llama-7b-delta
DAMO-NLP-SG
text-generation
[ "transformers", "pytorch", "llama", "text-generation", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
1,683
1,686
16
2
--- license: mit --- # MT-LLaMA Model Card ## Model details **Model type:** MT-LLaMA is an open-source multi-task model trained by fine-tuning LLaMA on the massive tasks in [P3](https://huggingface.co/datasets/bigscience/P3) (i.e., T0 Train). Concretely, the used datasets during training and task taxonomy are listed below: * Multi-choice QA: CommonsenseQA, Cosmos QA, DREAM, QuAIL, QuaRTz, QASC, QuaRel, SciQ, Social IQA, Wiki Hop, WiQA * Extractive QA: Adversarial QA, DuoRC, Quoref, ROPES * Close-Book QA: Hotpot QA, Wiki QA * Sentiment Classification: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp * Topic Classification: AG News, DBPedia, TREC * Structure-to-Text Generation: Common Gen, Wiki Bio * Text Summarization: CNN Daily Mail, Gigaword, MultiNews, SamSum, XSum * Paraphrase Identification: MRPC, PAWS, QQP **Organizations developing the model:** The MT-LLaMA team with members from Alibaba Damo Academy and the Chinese University of Hong Kong. ## Intended use You can try the codes from our [github repo](https://github.com/DAMO-NLP-SG/MT-LLaMA). ## Zero-shot Evaluation We primarily follow the protocols of [Bigscience T0](https://openreview.net/forum?id=9Vrb9D0WI4) to assess the generalization capability of our Multi-task LLaMA to: (1) _**Unseen Datasets**_ (i.e., datasets from seen tasks); (2) _**Unseen Tasks**_. #### Prompt Format Extractive QA: 1. XQuAD, TyDiQA, MLQA, SQuAD ```angular2html Input: Answer the question according to the context. Question: ${question}. Context: ${context}. Answer: Output: ${Answer} ``` Sentiment: 1. SST-2 ```angular2html Input: ${sentence} Based on this review, would the user recommend this product? No or Yes? Output: Yes / No ``` Multiple-Choice QA: 1. OpenbookQA ```angular2html Input: ${question} Which is the correct answer? - (A) ${choiceA} - (B) ${choiceB} - (C) ${choiceC} - (D) ${choiceD} Output: ${choiceA} / ${choiceB} / ${choiceC} / ${choiceD} ``` Sentence Completion: 1. COPA ```angular2html Input: ${premise} {% if question == "cause" %} This happened because... {% else %} As a consequence... Help me pick the more plausible option: - ${text1} - ${text2} Output: ${text1} / ${text2} ``` Coreference Resolution: 1. Winogrande: ```angular2html Input: ${sentence} In the previous sentence, does _ refer to ${option1} or ${option2}? Output: ${option1} / ${option2} ``` Word Sense Disambiguation: 1. WiC ```angular2html Input: Does the word "${word}" have the same meaning in these two sentences? Yes, No? ${sentence1} ${sentence2} Output: ${sentence1} / ${sentence2} ``` Natural Language Inference: 1. MNLI: ```angular2html Input: ${premise} Question: Does this imply that ${hypothesis}? Please response with 'Yes', 'No', or 'Maybe'. Output: Yes / No / Maybe ``` 2. RTE ```angular2html Input: Given ${premise} Is it guaranteed true that "${hypothesis}"? Yes or no? Output: Yes / no ``` #### Results on _Unseen Datasets_ | Model | XQuAD-en (F1/EM) | TyDiQA-en (F1/EM) | MLQA-en (F1/EM) | SQuAD (F1/EM) | SST-2 (Acc.) | OpenbookQA (Acc.) | |:------------|------------------|-------------------|-----------------|---------------|--------------|-------------------| | LLaMA-7b | 9.5 / 2.0 | 14.3 / 2.6 | 13.4 / 3.3 | 29.4 / 11.5 | 50.5 | 32.4 | | MT-LLaMA-7b | 42.3 / 31.1 | 38.9 / 26.9 | 45.4 / 31.5 | 85.9 / 77.6 | 92.6 | 38.2 | #### Results on _Unseen Tasks_ | Model | COPA (Acc.) | Winogrande (Acc.) | WiC (Acc.) | MNLI (Acc.) | RTE (Acc.) | |:------------|-------------|--------------------|------------|-------------|------------| | LLaMA-7b | 56.0 | 49.3 | 51.7 | 30.2 | 52.7 | | MT-LLaMA-7b | 88.0 | 54.9 | 52.2 | 49.6 | 79.1 | ## Acknowledgement * Our training codes are largely borrowed from [FastChat](https://github.com/lm-sys/FastChat) * We are also grateful for the efforts of [LLaMA](https://github.com/facebookresearch/llama) (from FAIR) and [T0](https://github.com/bigscience-workshop/t-zero) (from BigScience), which serve as the foundation of our work If you find this resource useful, please cite the repo as follows: ``` @software{damonlpsg2023mtllama, author = {Xu, Weiwen and Li, Xin and Bing, Lidong}, title = {Multi-task Instruction-tuned LLaMA}, year = 2023, url = {https://github.com/DAMO-NLP-SG/MT-LLaMA} } ```
[ "COREFERENCE_RESOLUTION", "SUMMARIZATION" ]
[ "SCIQ" ]
Non_BioNLP
PM234/DeepSeek-R1-MedExpert-LoRA-8B-bnb4bit
PM234
null
[ "peft", "safetensors", "medical", "en", "dataset:PM234/MedQA-SFT", "arxiv:1910.09700", "license:apache-2.0", "region:us" ]
1,741
1,741
8
0
--- base_model: unsloth/deepseek-r1-distill-llama-8b-bnb-4bit datasets: - PM234/MedQA-SFT language: - en library_name: peft license: apache-2.0 tags: - medical --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This is a LoRA adapter-based fine-tuned version of DeepSeek-R1-Distill-Llama-8B, optimized for Medical Question Answering (MedQA) using PEFT, LoRA adapters, and bnb-4bit quantization. The fine-tuning was performed on a curated dataset of 10k examples containing medical questions and answers from trusted sources. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ```python from unsloth import FastLanguageModel # Load model + adapters directly model, tokenizer = FastLanguageModel.from_pretrained("PM234/DeepSeek-R1-MedExpert-LoRA-8B-bnb4bit") # Prep for inference FastLanguageModel.for_inference(model) # Example: test_input = "Below is an instruction...\n### Instruction: Answer the following medical question.\n### Input: What is the primary source of energy for the human body?\n### Response:" inputs = tokenizer(test_input, return_tensors="pt").to("cuda") outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) # "Glucose" ``` [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
[ "QUESTION_ANSWERING" ]
[ "MEDQA" ]
BioNLP
epfl-llm/meditron-70b
epfl-llm
text-generation
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "medical", "health", "llama2", "en", "dataset:bigbio/med_qa", "dataset:medmcqa", "dataset:bigbio/pubmed_qa", "dataset:epfl-llm/guidelines", "arxiv:2311.16079", "base_model:meta-llama/Llama-2-70b", "base_model:finetune:meta-llama/Llama-2-70b", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
1,699
1,701
2,262
234
--- base_model: meta-llama/Llama-2-70b datasets: - bigbio/med_qa - medmcqa - bigbio/pubmed_qa - epfl-llm/guidelines language: - en license: llama2 metrics: - accuracy - perplexity pipeline_tag: text-generation tags: - medical - health - llama2 --- <img width=50% src="meditron_LOGO.png" alt="Alt text" title="Meditron-logo"> # Model Card for Meditron-70B-v1.0 Meditron is a suite of open-source medical Large Language Models (LLMs). Meditron-70B is a 70 billion parameters model adapted to the medical domain from Llama-2-70B through continued pretraining on a comprehensively curated medical corpus, including selected PubMed articles, abstracts, a [new dataset](https://huggingface.co/datasets/epfl-llm/guidelines) of internationally-recognized medical guidelines, and general domain data from [RedPajama-v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). Meditron-70B, finetuned on relevant training data, outperforms Llama-2-70B, GPT-3.5 (`text-davinci-003`, 8-shot), and Flan-PaLM on multiple medical reasoning tasks. <!--# Table of Contents [Model Card for Meditron 70B](#model-card-for--meditron-70b-v1.0) - [Table of Contents](#table-of-contents) - [Model Details](#model-details) - [Model Description](#model-description) - [Uses](#uses) - [Downstream Use](#downstream-use) - [Out-of-Scope Use](#out-of-scope-use) - [Bias, Risks, and Limitations](#bias-risks-and-limitations) - [Recommendations](#recommendations) - [Training Details](#training-details) - [Training Data](#training-data) - [Training Procedure](#training-procedure) - [Preprocessing](#preprocessing) - [Evaluation](#evaluation) - [Testing Data & Metrics](#testing-data-&-metrics) - [Testing Data](#testing-data) - [Metrics](#metrics) - [Results](#results) - [Environmental Impact](#environmental-impact) - [Citation](#citation)--> <details open> <summary><strong>Advisory Notice</strong></summary> <blockquote style="padding: 10px; margin: 0 0 10px; border-left: 5px solid #ddd;"> While Meditron is designed to encode medical knowledge from sources of high-quality evidence, it is not yet adapted to deliver this knowledge appropriately, safely, or within professional actionable constraints. We recommend against deploying Meditron in medical applications without extensive use-case alignment, as well as additional testing, specifically including randomized controlled trials in real-world practice settings. </blockquote> </details> ## Model Details - **Developed by:** [EPFL LLM Team](https://huggingface.co/epfl-llm) - **Model type:** Causal decoder-only transformer language model - **Language(s):** English (mainly) - **Model License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) - **Code License:** [APACHE 2.0 LICENSE](LICENSE) - **Continue-pretrained from model:** [Llama-2-70B](https://huggingface.co/meta-llama/Llama-2-70b) - **Context length:** 4K tokens - **Input:** Text-only data - **Output:** Model generates text only - **Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we enhance model's performance. - **Knowledge Cutoff:** August 2023 ### Model Sources - **Repository:** [epflLLM/meditron](https://github.com/epfLLM/meditron) - **Trainer:** [epflLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) - **Paper:** *[MediTron-70B: Scaling Medical Pretraining for Large Language Models](https://arxiv.org/abs/2311.16079)* ## Uses Meditron-70B is being made available for further testing and assessment as an AI assistant to enhance clinical decision-making and enhance access to an LLM for healthcare use. Potential use cases may include but are not limited to: - Medical exam question answering - Supporting differential diagnosis - Disease information (symptoms, cause, treatment) query - General health information query ### Direct Use It is possible to use this model to generate text, which is useful for experimentation and understanding its capabilities. It should not be used directly for production or work that may impact people. ### Downstream Use Meditron-70B and Meditron-7B are both foundation models without finetuning or instruction-tuning. They can be finetuned, instruction-tuned, or RLHF-tuned for specific downstream tasks and applications. There are two ways we have used this model for downstream question-answering tasks. 1. We apply in-context learning with k demonstrations (3 or 5 in our paper) added to the prompt. 2. We finetuned the models for downstream question-answering tasks using specific training sets. We encourage and look forward to the adaption of the base model for more diverse applications. If you want a more interactive way to prompt the model, we recommend using a high-throughput and memory-efficient inference engine with a UI that supports chat and text generation. You can check out our deployment [guide](https://github.com/epfLLM/meditron/blob/main/deployment/README.md), where we used [FastChat](https://github.com/lm-sys/FastChat) with [vLLM](https://github.com/vllm-project/vllm). We collected generations for our qualitative analysis through an interactive UI platform, [BetterChatGPT](https://github.com/ztjhz/BetterChatGPT). Here is the prompt format we used as an example: <img width=70% src="prompt_example.png" alt="qualitative-analysis-prompt" title="Qualitative Analysis Prompt"> ### Out-of-Scope Use We do not recommend using this model for natural language generation in a production environment, finetuned or otherwise. ## Truthfulness, Helpfulness, Risk, and Bias <!-- This section is meant to convey both technical and sociotechnical limitations. --> We did an initial assessment of Meditron models' **Truthfulness** against baseline models and consumer-level medical models. We use TruthfulQA (multiple choice) as the main evaluation benchmark. We only focus on the categories that are relevant to the medical domain, including Health, Nutrition, Psychology, and Science. For 7B models, we perform one-shot evaluations for consistent answer generation. For 70B models, the evaluations are under the zero-shot setting. Below, we report the detailed truthfulness performance of each category. | | | | | | | | | | --- | ------ |----- |----- |----- |----- |----- |----- | |Category | meditron-70b | llama-2-70b | med42-70b* | meditron-7b | llama-2-7b | PMC-llama-7b | |Health | 81.8 | 69.1 | 83.6 | 27.3 | 16.4 | 3.6 | |Nutrition | 77.9 | 68.8 | 62.5 | 31.1 | 12.5 | 6.3 | |Psychology| 47.4 | 36.8 | 52.6 | 21.1 | 10.5 | 0.0 | |Science | 77.8 | 44.4 | 33.3 | 33.3 | 11.1 | 0.0 | |Avg | 71.2 | 54.8 | 58.0 | 28.3 | 12.6 | 2.5 | | | | | | | | | For a more detailed performance analysis, please see our paper. For **Helpfulness**, **Risk** and **Bias**, we provide a comprehensive qualitative generation report of Meditron-70B on queries designed by medical experts. Each query targets specific aspects of helpfulness (medical accuracy, up-to-date information, etc.), risk (public health, medical ethics, etc.) and bias (gender, age, race, etc.). Please see the detailed generations in our paper. We compare our generations to Llama-2-70B and ChatGPT-3.5 (version Nov, 27, 2023) Significant research is still required to fully explore potential bias, fairness, and safety issues with this language model. ### Recommendations **IMPORTANT!** Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. While this model is capable of generating natural language text, we have only begun to explore this capability and its limitations. Understanding these limitations is especially important in a domain like medicine. Therefore, we strongly recommend against using this model in production for natural language generation or for professional purposes related to health and medicine without comprehensive testing for your application. ## Training Details ### Training Data Meditron’s domain-adaptive pre-training corpus GAP-Replay combines 48.1B tokens from four corpora: - [**Clinical Guidelines**](https://huggingface.co/datasets/epfl-llm/guidelines): a new dataset of 46K internationally-recognized clinical practice guidelines from various healthcare-related sources, including hospitals and international organizations. - **Medical Paper Abstracts**: 16.1M abstracts extracted from closed-access PubMed and PubMed Central papers. - **Medical Papers**: full-text articles extracted from 5M publicly available PubMed and PubMed Central papers. - **Replay Data**: 400M tokens of general domain pretraining data sampled from [RedPajama-v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) <img width="60%" src="gap-replay.png" alt="Alt text" title="Meditron-logo"> #### Data Preprocessing Please see the detailed preprocessing procedure in our paper. ### Training Procedure We used the [Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) distributed training library, a derivative of Nvidia's Megatron LM project, to optimize training efficiency. Hardware consists of 16 nodes of 8x NVIDIA A100 (80GB) SXM GPUs connected by NVLink and NVSwitch with a single Nvidia ConnectX-6 DX network card and equipped with 2 x AMD EPYC 7543 32-Core Processors and 512 GB of RAM. The nodes are connected via RDMA over Converged Ethernet. Our three-way parallelism scheme uses: - Data Parallelism (DP -- different GPUs process different subsets of the batches) of 2, - Pipeline Parallelism (PP -- different GPUs process different layers) of 8, - Tensor Parallelism (TP -- different GPUs process different subtensors for matrix multiplication) of 8. #### Training Hyperparameters | | | | --- | ------ | | bf16 | true | | lr | 1.5e-4 | | eps | 1e-5 | | betas | \[0.9, 0.95\] | | clip_grad | 1 | | weight decay | 0.1 | | DP size | 2 | | TP size | 8 | | PP size | 8 | | seq length | 4096 | | lr scheduler | cosine| | min lr | 1e-6 | | warmup iteration | 2000 | | micro batch size | 2 | | global batch size | 512 | | | | #### Speeds, Sizes, Times The model was trained in September and October 2023. The model architecture is exactly Llama 2, meaning | | | | --- | ------ | | Model size | 70B | | Hidden dimension | 8192 | | Num. attention heads | 64 | | Num. layers | 80 | | | | | We train the 70B model on 48e9 tokens, at a throughput of about 40,200 tokens / second. This amounts to a bfloat16 model flops utilization of roughly 42.3\%. ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data & Metrics #### Testing Data - [MedQA (USMLE)](https://huggingface.co/datasets/bigbio/med_qa) - [MedMCQA](https://huggingface.co/datasets/medmcqa) - [PubMedQA](https://huggingface.co/datasets/bigbio/pubmed_qa) - [MMLU-Medical](https://huggingface.co/datasets/lukaemon/mmlu) - [MedQA-4-Option](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options) #### Metrics - Accuracy: suite the evaluation of multiple-choice question-answering tasks. ### Results We finetune meditron-70b and llama-2-70b on each benchmark (pubmedqa, medmcqa, medqa)'s training data individually. We report the finetuned models' performance with self-consistency chain-of-thought as the inference mode. For MMLU-Medical, models finetuned on MedMCQA are used for inference. For MedQA-4-Option, models finetuned on MedQA are used for inference. For a more detailed performance analysis, please see our paper. | | | | | | | | --- | ------ |----- |----- |----- |----- | |Dataset| meditron-70b | llama-2-70b | med42-70b* | clinical-camel-70b* | |MMLU-Medical | 77.6 | 77.9 | 74.5 | 65.7 | |PubMedQA | 81.6 | 80.0 | 61.2 | 67.0 | |MedMCQA | 66.0 | 62.6 | 59.2 | 46.7 | |MedQA | 64.4 | 61.5 | 59.1 | 50.8 | |MedQA-4-Option| 70.2 | 63.8 | 63.9 | 56.8 | |Avg | 72.0 | 69.2 | 63.6 | 57.4 | | | | | | | | **Note**: models with * are already instruction-tuned, so we exclude them from further finetuning on any training data. ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> - **Hardware Type:** 128 x NVIDIA A100 (80GB) SXM - **Total GPU hours:** 42,496 - **Hardware Provider:** EPFL Research Computing Platform - **Compute Region:** Switzerland - **Carbon Emitted:** Switzerland has a carbon efficiency of 0.016 kgCO2/kWh (https://www.carbonfootprint.com/docs/2018_8_electricity_factors_august_2018_-_online_sources.pdf). 332 hours of 128 A100s means 42496 hours at a TDP of 400W. Assuming a Power Usage effectiveness of 1.8, total emissions are estimated to be: (400W / 1000W/kWh / GPU * 0.016 kgCO2/kWh * 332 h * 128 GPU) * 1.8 PUE = 486 kgCO2. ## Citation **BibTeX:** If you use Meditron or its training data, please cite our work: ``` @misc{chen2023meditron70b, title={MEDITRON-70B: Scaling Medical Pretraining for Large Language Models}, author={Zeming Chen and Alejandro Hernández-Cano and Angelika Romanou and Antoine Bonnet and Kyle Matoba and Francesco Salvi and Matteo Pagliardini and Simin Fan and Andreas Köpf and Amirkeivan Mohtashami and Alexandre Sallinen and Alireza Sakhaeirad and Vinitra Swamy and Igor Krawczuk and Deniz Bayazit and Axel Marmet and Syrielle Montariol and Mary-Anne Hartley and Martin Jaggi and Antoine Bosselut}, year={2023}, eprint={2311.16079}, archivePrefix={arXiv}, primaryClass={cs.CL} } @software{epfmedtrn, author = {Zeming Chen and Alejandro Hernández Cano and Angelika Romanou and Antoine Bonnet and Kyle Matoba and Francesco Salvi and Matteo Pagliardini and Simin Fan and Andreas Köpf and Amirkeivan Mohtashami and Alexandre Sallinen and Alireza Sakhaeirad and Vinitra Swamy and Igor Krawczuk and Deniz Bayazit and Axel Marmet and Syrielle Montariol and Mary-Anne Hartley and Martin Jaggi and Antoine Bosselut}, title = {MediTron-70B: Scaling Medical Pretraining for Large Language Models}, month = November, year = 2023, url = {https://github.com/epfLLM/meditron} } ```
[ "QUESTION_ANSWERING" ]
[ "MEDQA", "PUBMEDQA" ]
BioNLP
Netta1994/setfit_baai_rag_ds_gpt-4o_cot-few_shot-instructions_remove_final_evaluation_e1_172700
Netta1994
text-classification
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:BAAI/bge-base-en-v1.5", "base_model:finetune:BAAI/bge-base-en-v1.5", "model-index", "region:us" ]
1,727
1,727
7
0
--- base_model: BAAI/bge-base-en-v1.5 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: 'The answer provided concisely lists the changes being made to the storage AM as per Haribabu Kommi''s email. It addresses the specific question asked and is directly related to the content of the provided document. There is no deviation into unrelated topics, and unnecessary information is avoided. Final Evaluation:' - text: 'Reasoning: 1. Context Grounding: The provided document clearly states that China''s Ning Zhongyan won the gold medal in the men''s 1,500m final at the speed skating World Cup. 2. Relevance: The answer directly addresses who won the gold medal in the men''s 1,500m final at the speed skating World Cup, which is the specific question asked. 3. Conciseness: The answer is short and to the point, providing just the necessary information without any unnecessary elaboration. Final result:' - text: 'Evaluation: 1. **Context Grounding:** The answer correctly states the sizes for both individual and combined portraits as listed in the provided document. 2. **Relevance:** The answer directly addresses the specific question regarding the sizes of the portraits available. 3. **Conciseness:** The answer is clear, to the point, and avoids unnecessary information, making it concise. Final result:' - text: "The provided answer is well-supported by the document and directly related\ \ to the question. It enumerates the components of the British Medieval Student\ \ Guide accurately. \n\nReasoning:\n1. Context Grounding: The answer is consistent\ \ with the provided document, which lists the elements of the Student Guide.\n\ 2. Relevance: The answer directly addresses the question asked, providing detailed\ \ components of the guide.\n3. Conciseness: The answer is clear and to the point,\ \ avoiding unnecessary information that doesn't pertain to the question.\n\nFinal\ \ Evaluation:" - text: 'The answer lists Rep. Andy Harris, Reps. Kyle Evans, and Jessica Smith as the first three Members of Congress to call for an end to the blockade of Gaza. However, according to the document, the correct individuals are Reps. Keith Ellison, Barbara Lee, and Danny Davis. Therefore, the answer is not grounded in the provided document and is factually incorrect. The final evaluation:' inference: true model-index: - name: SetFit with BAAI/bge-base-en-v1.5 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.88 name: Accuracy --- # SetFit with BAAI/bge-base-en-v1.5 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | <ul><li>"The answer correctly identifies that the primary reasons behind the Nuggets' offensive outburst in January are the team's increased comfort and effectiveness, as well as Coach Brian Shaw's strategy of encouraging the team to push the ball after makes and misses and to take the first available shot in the rhythm of the offense. However, the mention of a new training technique involving virtual reality is not supported by the provided document.\n\nReasoning:\n1. Context Grounding: The majority of the answer is well-supported by the document, but the part about virtual reality training is not mentioned in the provided text.\n2. Relevance: The answer is mostly relevant to the question, but the inclusion of virtual reality training deviates from the information in the document.\n3. Conciseness: The answer could be clearer and more concise by excluding the irrelevant information about virtual reality training.\n\nThe final evaluation:"</li><li>"**Reasoning:**\n\n1. **Context Grounding:** The answer is generally well-grounded in the document but contains some inaccuracies. The document discusses that film over-exposes better, not under-exposes better. The answer also mentions 5MP sensors, while the document refers to 10MP. \n\n2. **Relevance:** The answer is relevant to the question, addressing the differences between film and digital photography based on the author's experience.\n\n3. **Conciseness:** The answer is concise and to the point, which is good. However, inaccuracies in the details affect its quality.\n\n**Final Result:**"</li><li>'Reasoning:\nThe provided answer does not address the question asked. The question seeks information about the main conflict in the third book of the Arcana Chronicles by Kresley Cole, while the answer given only discusses the results of a mixed martial arts event and the performance of fighters in various bouts. This answer is neither relevant to the question nor grounded in the correct context.\n\nFinal evaluation:'</li></ul> | | 1 | <ul><li>'The answer provided effectively outlines best practices for web designers, detailing practices such as understanding client needs, signing detailed contracts, and maintaining clear communication. These are directly rooted in the provided document and address the specified question accurately.\n\n1. **Context Grounding:** \n - The answer is well-supported by the document, specifically referencing getting to know the client, maintaining a contract, and explaining the importance of communication as outlined in the text.\n\n2. **Relevance:**\n - The answer is highly relevant to the question, focusing precisely on best practices for web designers to avoid unnecessary revisions and conflicts.\n\n3. **Conciseness:**\n - The answer is clear, concise, and avoids extraneous details.\n\nFinal evaluation:'</li><li>"Reasoning:\n\n1. **Context Grounding**: The answer is well-supported by the provided document. The author does emphasize that using the author's own experiences, especially those involving pain and emotion, makes the story genuine and relatable, thereby creating a connection between the reader and the characters.\n \n2. **Relevance**: The answer directly addresses the specific question asked about the key to creating a connection between the reader and the characters in a story.\n\n3. **Conciseness**: The answer is clear and to the point, without including unnecessary information.\n\nFinal result:"</li><li>'Reasoning:\n1. Context Grounding: The answer is directly supported by the provided document, which mentions that Mauro Rubin is the CEO of JoinPad and that he spoke during the event at Talent Garden Calabiana, Milan.\n2. Relevance: The answer is relevant to the question asked, directly addressing the identity of the CEO during the specified event.\n3. Conciseness: The answer is clear, to the point, and does not include unnecessary information.\n\nFinal result:'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.88 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("Netta1994/setfit_baai_rag_ds_gpt-4o_cot-few_shot-instructions_remove_final_evaluation_e1_172700") # Run inference preds = model("The answer provided concisely lists the changes being made to the storage AM as per Haribabu Kommi's email. It addresses the specific question asked and is directly related to the content of the provided document. There is no deviation into unrelated topics, and unnecessary information is avoided. Final Evaluation:") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 28 | 79.3803 | 155 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 34 | | 1 | 37 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0056 | 1 | 0.2384 | - | | 0.2809 | 50 | 0.2527 | - | | 0.5618 | 100 | 0.1556 | - | | 0.8427 | 150 | 0.0404 | - | ### Framework Versions - Python: 3.10.14 - SetFit: 1.1.0 - Sentence Transformers: 3.1.1 - Transformers: 4.44.0 - PyTorch: 2.4.0+cu121 - Datasets: 3.0.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION" ]
[ "MEDAL" ]
Non_BioNLP
StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_EN
StivenLancheros
token-classification
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,647
1,647
122
1
--- metrics: - precision - recall - f1 - accuracy tags: - generated_from_trainer model-index: - name: biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_EN results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_EN This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the CRAFT dataset. It achieves the following results on the evaluation set: - Loss: 0.2299 - Precision: 0.8122 - Recall: 0.8475 - F1: 0.8294 - Accuracy: 0.9661 ## Model description This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the CRAFT(Colorado Richly Annotated Full Text) Corpus in Spanish and English. Entity tags have been normalized and replaced from the original three letter code to a full name e.g. B-Protein, I-Chemical. This model is trained on augmented data created using Entity Replacement. 20% of the entities were replaced using a list of entities for each entity tag obtained from the official ontologies for each entity class. Both datasets (original, augmented) were concatenated. ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0542 | 1.0 | 2719 | 0.1540 | 0.7834 | 0.8300 | 0.8060 | 0.9622 | | 0.0229 | 2.0 | 5438 | 0.1920 | 0.8092 | 0.8219 | 0.8155 | 0.9644 | | 0.0069 | 3.0 | 8157 | 0.2054 | 0.8130 | 0.8481 | 0.8302 | 0.9656 | | 0.0023 | 4.0 | 10876 | 0.2299 | 0.8122 | 0.8475 | 0.8294 | 0.9661 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
[ "NAMED_ENTITY_RECOGNITION" ]
[ "CRAFT" ]
BioNLP
RichardErkhov/bigscience_-_bloom-3b-gguf
RichardErkhov
null
[ "gguf", "arxiv:1909.08053", "arxiv:2110.02861", "arxiv:2108.12409", "endpoints_compatible", "region:us" ]
1,714
1,714
243
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) bloom-3b - GGUF - Model creator: https://huggingface.co/bigscience/ - Original model: https://huggingface.co/bigscience/bloom-3b/ | Name | Quant method | Size | | ---- | ---- | ---- | | [bloom-3b.Q2_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.Q2_K.gguf) | Q2_K | 1.52GB | | [bloom-3b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.IQ3_XS.gguf) | IQ3_XS | 1.68GB | | [bloom-3b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.IQ3_S.gguf) | IQ3_S | 1.71GB | | [bloom-3b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.Q3_K_S.gguf) | Q3_K_S | 1.71GB | | [bloom-3b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.IQ3_M.gguf) | IQ3_M | 1.81GB | | [bloom-3b.Q3_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.Q3_K.gguf) | Q3_K | 1.9GB | | [bloom-3b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.Q3_K_M.gguf) | Q3_K_M | 1.9GB | | [bloom-3b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.Q3_K_L.gguf) | Q3_K_L | 2.02GB | | [bloom-3b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.IQ4_XS.gguf) | IQ4_XS | 2.0GB | | [bloom-3b.Q4_0.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.Q4_0.gguf) | Q4_0 | 2.08GB | | [bloom-3b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.IQ4_NL.gguf) | IQ4_NL | 2.09GB | | [bloom-3b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.Q4_K_S.gguf) | Q4_K_S | 2.09GB | | [bloom-3b.Q4_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.Q4_K.gguf) | Q4_K | 2.24GB | | [bloom-3b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.Q4_K_M.gguf) | Q4_K_M | 2.24GB | | [bloom-3b.Q4_1.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.Q4_1.gguf) | Q4_1 | 2.25GB | | [bloom-3b.Q5_0.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.Q5_0.gguf) | Q5_0 | 2.43GB | | [bloom-3b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.Q5_K_S.gguf) | Q5_K_S | 2.43GB | | [bloom-3b.Q5_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.Q5_K.gguf) | Q5_K | 2.55GB | | [bloom-3b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.Q5_K_M.gguf) | Q5_K_M | 1.64GB | | [bloom-3b.Q5_1.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.Q5_1.gguf) | Q5_1 | 1.58GB | | [bloom-3b.Q6_K.gguf](https://huggingface.co/RichardErkhov/bigscience_-_bloom-3b-gguf/blob/main/bloom-3b.Q6_K.gguf) | Q6_K | 1.31GB | Original model description: --- license: bigscience-bloom-rail-1.0 language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zhs - zht - zu pipeline_tag: text-generation model-index: - name: bloom results: - task: type: text-generation name: text generation dataset: name: arc_challenge type: arc_challenge metrics: - name: acc type: acc value: 0.27986348122866894 verified: false - task: type: text-generation name: text generation dataset: name: arc_easy type: arc_easy metrics: - name: acc type: acc value: 0.5946969696969697 verified: false - task: type: text-generation name: text generation dataset: name: axb type: axb metrics: - name: acc type: acc value: 0.4433876811594203 verified: false - task: type: text-generation name: text generation dataset: name: axg type: axg metrics: - name: acc type: acc value: 0.5 verified: false - task: type: text-generation name: text generation dataset: name: boolq type: boolq metrics: - name: acc type: acc value: 0.6165137614678899 verified: false - task: type: text-generation name: text generation dataset: name: cb type: cb metrics: - name: acc type: acc value: 0.30357142857142855 verified: false - task: type: text-generation name: text generation dataset: name: cola type: cola metrics: - name: acc type: acc value: 0.610738255033557 verified: false - task: type: text-generation name: text generation dataset: name: copa type: copa metrics: - name: acc type: acc value: 0.63 verified: false - task: type: text-generation name: text generation dataset: name: crows_pairs_english type: crows_pairs_english metrics: - name: acc type: acc value: 0.4973166368515206 verified: false - task: type: text-generation name: text generation dataset: name: crows_pairs_french type: crows_pairs_french metrics: - name: acc type: acc value: 0.5032796660703638 verified: false - task: type: text-generation name: text generation dataset: name: diabla type: diabla metrics: - name: acc type: acc value: 0.28888308977035493 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_afr type: gsarti/flores_101_afr metrics: - name: byte_perplexity type: byte_perplexity value: 6.500798737976343 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_amh type: gsarti/flores_101_amh metrics: - name: byte_perplexity type: byte_perplexity value: 3.9726863338897145 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ara type: gsarti/flores_101_ara metrics: - name: byte_perplexity type: byte_perplexity value: 1.8083841089875814 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_asm type: gsarti/flores_101_asm metrics: - name: byte_perplexity type: byte_perplexity value: 5.699102962086425 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ast type: gsarti/flores_101_ast metrics: - name: byte_perplexity type: byte_perplexity value: 3.9252047073429384 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_azj type: gsarti/flores_101_azj metrics: - name: byte_perplexity type: byte_perplexity value: 6.942805054270002 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_bel type: gsarti/flores_101_bel metrics: - name: byte_perplexity type: byte_perplexity value: 3.614136245847082 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ben type: gsarti/flores_101_ben metrics: - name: byte_perplexity type: byte_perplexity value: 5.121491534300969 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_bos type: gsarti/flores_101_bos metrics: - name: byte_perplexity type: byte_perplexity value: 5.653353469118798 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_bul type: gsarti/flores_101_bul metrics: - name: byte_perplexity type: byte_perplexity value: 2.7014693938055068 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_cat type: gsarti/flores_101_cat metrics: - name: byte_perplexity type: byte_perplexity value: 2.305190041967345 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ceb type: gsarti/flores_101_ceb metrics: - name: byte_perplexity type: byte_perplexity value: 6.291000321323428 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ces type: gsarti/flores_101_ces metrics: - name: byte_perplexity type: byte_perplexity value: 5.447322753586386 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ckb type: gsarti/flores_101_ckb metrics: - name: byte_perplexity type: byte_perplexity value: 3.7255124939234765 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_cym type: gsarti/flores_101_cym metrics: - name: byte_perplexity type: byte_perplexity value: 12.539424151448149 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_dan type: gsarti/flores_101_dan metrics: - name: byte_perplexity type: byte_perplexity value: 5.183309001005672 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_deu type: gsarti/flores_101_deu metrics: - name: byte_perplexity type: byte_perplexity value: 3.1180422286591347 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ell type: gsarti/flores_101_ell metrics: - name: byte_perplexity type: byte_perplexity value: 2.467943456164706 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_eng type: gsarti/flores_101_eng metrics: - name: byte_perplexity type: byte_perplexity value: 2.018740628193298 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_est type: gsarti/flores_101_est metrics: - name: byte_perplexity type: byte_perplexity value: 9.11654425176368 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_fas type: gsarti/flores_101_fas metrics: - name: byte_perplexity type: byte_perplexity value: 3.058009097116482 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_fin type: gsarti/flores_101_fin metrics: - name: byte_perplexity type: byte_perplexity value: 6.847047959628553 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_fra type: gsarti/flores_101_fra metrics: - name: byte_perplexity type: byte_perplexity value: 1.9975177011840075 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ful type: gsarti/flores_101_ful metrics: - name: byte_perplexity type: byte_perplexity value: 11.465912731488828 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_gle type: gsarti/flores_101_gle metrics: - name: byte_perplexity type: byte_perplexity value: 8.681491663539422 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_glg type: gsarti/flores_101_glg metrics: - name: byte_perplexity type: byte_perplexity value: 3.029991089015508 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_guj type: gsarti/flores_101_guj metrics: - name: byte_perplexity type: byte_perplexity value: 4.955224230286231 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_hau type: gsarti/flores_101_hau metrics: - name: byte_perplexity type: byte_perplexity value: 10.758347356372159 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_heb type: gsarti/flores_101_heb metrics: - name: byte_perplexity type: byte_perplexity value: 3.6004478129801667 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_hin type: gsarti/flores_101_hin metrics: - name: byte_perplexity type: byte_perplexity value: 4.712530650588064 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_hrv type: gsarti/flores_101_hrv metrics: - name: byte_perplexity type: byte_perplexity value: 5.822418943372185 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_hun type: gsarti/flores_101_hun metrics: - name: byte_perplexity type: byte_perplexity value: 6.440482646965992 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_hye type: gsarti/flores_101_hye metrics: - name: byte_perplexity type: byte_perplexity value: 3.657718918347166 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ibo type: gsarti/flores_101_ibo metrics: - name: byte_perplexity type: byte_perplexity value: 5.564814003872672 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ind type: gsarti/flores_101_ind metrics: - name: byte_perplexity type: byte_perplexity value: 2.1597101468869373 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_isl type: gsarti/flores_101_isl metrics: - name: byte_perplexity type: byte_perplexity value: 8.082349269518136 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ita type: gsarti/flores_101_ita metrics: - name: byte_perplexity type: byte_perplexity value: 2.9687591414176207 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_jav type: gsarti/flores_101_jav metrics: - name: byte_perplexity type: byte_perplexity value: 7.0573805415708994 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_jpn type: gsarti/flores_101_jpn metrics: - name: byte_perplexity type: byte_perplexity value: 2.7758864197116933 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_kam type: gsarti/flores_101_kam metrics: - name: byte_perplexity type: byte_perplexity value: 11.072949642861332 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_kan type: gsarti/flores_101_kan metrics: - name: byte_perplexity type: byte_perplexity value: 5.551730651007082 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_kat type: gsarti/flores_101_kat metrics: - name: byte_perplexity type: byte_perplexity value: 2.522630524283745 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_kaz type: gsarti/flores_101_kaz metrics: - name: byte_perplexity type: byte_perplexity value: 3.3901748516975574 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_kea type: gsarti/flores_101_kea metrics: - name: byte_perplexity type: byte_perplexity value: 8.918534182590863 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_kir type: gsarti/flores_101_kir metrics: - name: byte_perplexity type: byte_perplexity value: 3.729278369847201 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_kor type: gsarti/flores_101_kor metrics: - name: byte_perplexity type: byte_perplexity value: 3.932884847226212 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_lao type: gsarti/flores_101_lao metrics: - name: byte_perplexity type: byte_perplexity value: 2.9077314760849924 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_lav type: gsarti/flores_101_lav metrics: - name: byte_perplexity type: byte_perplexity value: 7.777221919194806 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_lin type: gsarti/flores_101_lin metrics: - name: byte_perplexity type: byte_perplexity value: 7.524842908050988 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_lit type: gsarti/flores_101_lit metrics: - name: byte_perplexity type: byte_perplexity value: 7.369179434621725 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ltz type: gsarti/flores_101_ltz metrics: - name: byte_perplexity type: byte_perplexity value: 8.801059747949214 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_lug type: gsarti/flores_101_lug metrics: - name: byte_perplexity type: byte_perplexity value: 8.483203026364786 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_luo type: gsarti/flores_101_luo metrics: - name: byte_perplexity type: byte_perplexity value: 11.975963093623681 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_mal type: gsarti/flores_101_mal metrics: - name: byte_perplexity type: byte_perplexity value: 4.615948455160037 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_mar type: gsarti/flores_101_mar metrics: - name: byte_perplexity type: byte_perplexity value: 5.483253482821379 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_mkd type: gsarti/flores_101_mkd metrics: - name: byte_perplexity type: byte_perplexity value: 2.9656732291754087 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_mlt type: gsarti/flores_101_mlt metrics: - name: byte_perplexity type: byte_perplexity value: 15.004773437665275 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_mon type: gsarti/flores_101_mon metrics: - name: byte_perplexity type: byte_perplexity value: 3.410598542315402 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_mri type: gsarti/flores_101_mri metrics: - name: byte_perplexity type: byte_perplexity value: 7.474035895661322 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_msa type: gsarti/flores_101_msa metrics: - name: byte_perplexity type: byte_perplexity value: 2.5710001772665634 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_mya type: gsarti/flores_101_mya metrics: - name: byte_perplexity type: byte_perplexity value: 2.413577969878331 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_nld type: gsarti/flores_101_nld metrics: - name: byte_perplexity type: byte_perplexity value: 4.127831721885065 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_nob type: gsarti/flores_101_nob metrics: - name: byte_perplexity type: byte_perplexity value: 5.402763169129877 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_npi type: gsarti/flores_101_npi metrics: - name: byte_perplexity type: byte_perplexity value: 5.199342701937889 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_nso type: gsarti/flores_101_nso metrics: - name: byte_perplexity type: byte_perplexity value: 8.154626800955667 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_nya type: gsarti/flores_101_nya metrics: - name: byte_perplexity type: byte_perplexity value: 8.179860208369393 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_oci type: gsarti/flores_101_oci metrics: - name: byte_perplexity type: byte_perplexity value: 4.8617357393685845 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_orm type: gsarti/flores_101_orm metrics: - name: byte_perplexity type: byte_perplexity value: 12.911595421079408 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ory type: gsarti/flores_101_ory metrics: - name: byte_perplexity type: byte_perplexity value: 5.189421861225964 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_pan type: gsarti/flores_101_pan metrics: - name: byte_perplexity type: byte_perplexity value: 4.698477289331806 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_pol type: gsarti/flores_101_pol metrics: - name: byte_perplexity type: byte_perplexity value: 4.625550458479643 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_por type: gsarti/flores_101_por metrics: - name: byte_perplexity type: byte_perplexity value: 1.9754515986213523 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_pus type: gsarti/flores_101_pus metrics: - name: byte_perplexity type: byte_perplexity value: 4.4963371422771585 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ron type: gsarti/flores_101_ron metrics: - name: byte_perplexity type: byte_perplexity value: 4.965456830031304 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_rus type: gsarti/flores_101_rus metrics: - name: byte_perplexity type: byte_perplexity value: 2.0498020542445303 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_slk type: gsarti/flores_101_slk metrics: - name: byte_perplexity type: byte_perplexity value: 6.450822127057479 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_slv type: gsarti/flores_101_slv metrics: - name: byte_perplexity type: byte_perplexity value: 6.620252120186232 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_sna type: gsarti/flores_101_sna metrics: - name: byte_perplexity type: byte_perplexity value: 8.462166771382726 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_snd type: gsarti/flores_101_snd metrics: - name: byte_perplexity type: byte_perplexity value: 5.466066951221973 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_som type: gsarti/flores_101_som metrics: - name: byte_perplexity type: byte_perplexity value: 11.95918054093392 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_spa type: gsarti/flores_101_spa metrics: - name: byte_perplexity type: byte_perplexity value: 1.8965140104323535 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_srp type: gsarti/flores_101_srp metrics: - name: byte_perplexity type: byte_perplexity value: 2.871214785885079 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_swe type: gsarti/flores_101_swe metrics: - name: byte_perplexity type: byte_perplexity value: 5.054972008155866 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_swh type: gsarti/flores_101_swh metrics: - name: byte_perplexity type: byte_perplexity value: 3.6973091886730676 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_tam type: gsarti/flores_101_tam metrics: - name: byte_perplexity type: byte_perplexity value: 4.539493400469833 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_tel type: gsarti/flores_101_tel metrics: - name: byte_perplexity type: byte_perplexity value: 5.807499987508966 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_tgk type: gsarti/flores_101_tgk metrics: - name: byte_perplexity type: byte_perplexity value: 3.5994818827380426 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_tgl type: gsarti/flores_101_tgl metrics: - name: byte_perplexity type: byte_perplexity value: 5.667053833119858 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_tha type: gsarti/flores_101_tha metrics: - name: byte_perplexity type: byte_perplexity value: 2.365940201944242 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_tur type: gsarti/flores_101_tur metrics: - name: byte_perplexity type: byte_perplexity value: 4.885014749844601 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_ukr type: gsarti/flores_101_ukr metrics: - name: byte_perplexity type: byte_perplexity value: 2.7240934990288483 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_umb type: gsarti/flores_101_umb metrics: - name: byte_perplexity type: byte_perplexity value: 12.766915508610673 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_urd type: gsarti/flores_101_urd metrics: - name: byte_perplexity type: byte_perplexity value: 1.9797467071381232 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_uzb type: gsarti/flores_101_uzb metrics: - name: byte_perplexity type: byte_perplexity value: 12.002337637722146 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_vie type: gsarti/flores_101_vie metrics: - name: byte_perplexity type: byte_perplexity value: 1.76578415476397 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_wol type: gsarti/flores_101_wol metrics: - name: byte_perplexity type: byte_perplexity value: 9.144285650306488 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_xho type: gsarti/flores_101_xho metrics: - name: byte_perplexity type: byte_perplexity value: 7.403240538286952 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_yor type: gsarti/flores_101_yor metrics: - name: byte_perplexity type: byte_perplexity value: 5.91272037551173 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_zho_simpl type: gsarti/flores_101_zho_simpl metrics: - name: byte_perplexity type: byte_perplexity value: 2.2769070822768533 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_zho_trad type: gsarti/flores_101_zho_trad metrics: - name: byte_perplexity type: byte_perplexity value: 2.5180582198242383 verified: false - task: type: text-generation name: text generation dataset: name: gsarti/flores_101_zul type: gsarti/flores_101_zul metrics: - name: byte_perplexity type: byte_perplexity value: 8.53353320693145 verified: false - task: type: text-generation name: text generation dataset: name: headqa type: headqa metrics: - name: acc type: acc value: 0.26440554339897887 verified: false - task: type: text-generation name: text generation dataset: name: hellaswag type: hellaswag metrics: - name: acc type: acc value: 0.41236805417247563 verified: false - task: type: text-generation name: text generation dataset: name: logiqa type: logiqa metrics: - name: acc type: acc value: 0.2073732718894009 verified: false - task: type: text-generation name: text generation dataset: name: mathqa type: mathqa metrics: - name: acc type: acc value: 0.24958123953098826 verified: false - task: type: text-generation name: text generation dataset: name: mc_taco type: mc_taco metrics: - name: em type: em value: 0.11936936936936937 verified: false - task: type: text-generation name: text generation dataset: name: mnli type: mnli metrics: - name: acc type: acc value: 0.35496688741721855 verified: false - task: type: text-generation name: text generation dataset: name: mnli_mismatched type: mnli_mismatched metrics: - name: acc type: acc value: 0.35211554109031734 verified: false - task: type: text-generation name: text generation dataset: name: mrpc type: mrpc metrics: - name: acc type: acc value: 0.5857843137254902 verified: false - task: type: text-generation name: text generation dataset: name: multirc type: multirc metrics: - name: acc type: acc value: 0.5375412541254125 verified: false - task: type: text-generation name: text generation dataset: name: openbookqa type: openbookqa metrics: - name: acc type: acc value: 0.216 verified: false - task: type: text-generation name: text generation dataset: name: piqa type: piqa metrics: - name: acc type: acc value: 0.7078346028291621 verified: false - task: type: text-generation name: text generation dataset: name: prost type: prost metrics: - name: acc type: acc value: 0.22683603757472245 verified: false - task: type: text-generation name: text generation dataset: name: pubmedqa type: pubmedqa metrics: - name: acc type: acc value: 0.616 verified: false - task: type: text-generation name: text generation dataset: name: qnli type: qnli metrics: - name: acc type: acc value: 0.5072304594545122 verified: false - task: type: text-generation name: text generation dataset: name: qqp type: qqp metrics: - name: acc type: acc value: 0.3842443729903537 verified: false - task: type: text-generation name: text generation dataset: name: race type: race metrics: - name: acc type: acc value: 0.3521531100478469 verified: false - task: type: text-generation name: text generation dataset: name: rte type: rte metrics: - name: acc type: acc value: 0.47653429602888087 verified: false - task: type: text-generation name: text generation dataset: name: sciq type: sciq metrics: - name: acc type: acc value: 0.892 verified: false - task: type: text-generation name: text generation dataset: name: sst type: sst metrics: - name: acc type: acc value: 0.5177752293577982 verified: false - task: type: text-generation name: text generation dataset: name: triviaqa type: triviaqa metrics: - name: acc type: acc value: 0.041633518960487934 verified: false - task: type: text-generation name: text generation dataset: name: tydiqa_primary type: tydiqa_primary metrics: - name: acc type: acc value: 0.3011337608795236 verified: false - task: type: text-generation name: text generation dataset: name: webqs type: webqs metrics: - name: acc type: acc value: 0.01673228346456693 verified: false - task: type: text-generation name: text generation dataset: name: wic type: wic metrics: - name: acc type: acc value: 0.5015673981191222 verified: false - task: type: text-generation name: text generation dataset: name: winogrande type: winogrande metrics: - name: acc type: acc value: 0.5864246250986582 verified: false - task: type: text-generation name: text generation dataset: name: wnli type: wnli metrics: - name: acc type: acc value: 0.471830985915493 verified: false - task: type: text-generation name: text generation dataset: name: wsc type: wsc metrics: - name: acc type: acc value: 0.4423076923076923 verified: false - task: type: text-generation name: text generation dataset: name: humaneval type: humaneval metrics: - name: pass@1 type: pass@1 value: 0.15524390243902436 verified: false - name: pass@10 type: pass@10 value: 0.3220367632383857 verified: false - name: pass@100 type: pass@100 value: 0.5545431515723145 verified: false --- <h1 style='text-align: center '>BLOOM LM</h1> <h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2> <h3 style='text-align: center '>Model Card</h3> <img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Version 1.0 / 26.May.2022 ## Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Data](#training-data) 4. [Risks and Limitations](#risks-and-limitations) 5. [Evaluation](#evaluation) 6. [Recommendations](#recommendations) 7. [Glossary and Calculations](#glossary-and-calculations) 8. [More Information](#more-information) 9. [Model Card Authors](#model-card-authors) ## Model Details ### Basics *This section provides information for anyone who wants to know about the model.* <details> <summary>Click to expand</summary> <br/> **Developed by:** BigScience ([website](https://bigscience.huggingface.co)) * All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)* **Model Type:** Transformer-based Language Model **Version:** 1.0.0 **Languages:** Multiple; see [training data](#training-data) **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license)) **Release Date Estimate:** Monday, 11.July.2022 **Send Questions to:** [email protected] **Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022 **Funded by:** * The French government. * Hugging Face ([website](https://huggingface.co)). * Organizations of contributors. *(Further breakdown of organizations forthcoming.)* </details> ### Technical Specifications *This section provides information for people who work on model development.* <details> <summary>Click to expand</summary><br/> Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training. **Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)): * Decoder-only architecture * Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf)) * ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions * 3,002,557,440 parameters: * 642,252,800 embedding parameters * 30 layers, 32 attention heads * Hidden layers are 2560-dimensional * Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization)) **Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)). **Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)). * Hardware: 384 A100 80GB GPUs (48 nodes): * Additional 32 A100 80GB GPUs (4 nodes) in reserve * 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links * CPU: AMD * CPU memory: 512GB per node * GPU memory: 640GB per node * Inter-node connect: Omni-Path Architecture (OPA) * NCCL-communications network: a fully dedicated subnet * Disc IO network: shared network with other types of nodes * Software: * Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed)) * DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed)) * PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch)) * apex ([Github link](https://github.com/NVIDIA/apex)) #### **Training** Training logs: [Tensorboard link](https://huggingface.co/tensorboard/bigscience/tr11c-2B5-logs) - Number of epochs: 1 (*current target*) - Dates: - Started 11th March, 2022 11:42am PST - Ended 5th July, 2022 - Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments) - Server training location: Île-de-France, France #### **Tokenization** The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using: - A byte-level Byte Pair Encoding (BPE) algorithm - A simple pre-tokenization rule, no normalization - A vocabulary size of 250,680 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language. </details> ### Environmental Impact <details> <summary>Click to expand</summary><br/> The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing. **Estimated carbon emissions:** *(Forthcoming upon completion of training.)* **Estimated electricity usage:** *(Forthcoming upon completion of training.)* </details> <p>&nbsp;</p> ## Uses *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. It provides information for anyone considering using the model or who is affected by the model.* <details> <summary>Click to expand</summary><br/> ### Intended Use This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive. #### **Direct Use** - Text generation - Exploring characteristics of language generated by a language model - Examples: Cloze tests, counterfactuals, generations with reframings #### **Downstream Use** - Tasks that leverage language models include: Information Extraction, Question Answering, Summarization ### Misuse and Out-of-scope Use *This section addresses what users ought not do with the model.* See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases. #### **Out-of-scope Uses** Using the model in [high-stakes](#high-stakes) settings is out of scope for this model.  The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct. ##### Out-of-scope Uses Include: - Usage in biomedical domains, political and legal domains, or finance domains - Usage for evaluating or scoring individuals, such as for employment, education, or credit - Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct #### **Misuse** Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes: - Spam generation - Disinformation and influence operations - Disparagement and defamation - Harassment and abuse - [Deception](#deception) - Unconsented impersonation and imitation - Unconsented surveillance - Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license) ### Intended Users #### **Direct Users** - General Public - Researchers - Students - Educators - Engineers/developers - Non-commercial entities - Community advocates, including human and civil rights groups #### Indirect Users - Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use) - Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license) #### Others Affected (Parties Prenantes) - People and groups referred to by the LLM - People and groups exposed to outputs of, or decisions based on, the LLM - People and groups whose original work is included in the LLM </details> <p>&nbsp;</p> ## Training Data *This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.* <details> <summary>Click to expand</summary><br/> Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus). Training data includes: - 45 natural languages - 12 programming languages - In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.) #### **Languages** The pie chart shows the distribution of languages in training data. ![pie chart showing the distribution of languages in training data](https://github.com/bigscience-workshop/model_card/blob/main/assets/data/pie_chart.svg?raw=true) The following table shows the further distribution of Niger-Congo and Indic languages in the training data. <details> <summary>Click to expand</summary><br/> | Niger Congo | Percentage | | Indic | Percentage | |----------------|------------ |------ |-----------|------------| | Chi Tumbuka | 0.00002 | | Assamese | 0.01 | | Kikuyu | 0.00004 | | Odia | 0.04 | | Bambara | 0.00004 | | Gujarati | 0.04 | | Akan | 0.00007 | | Marathi | 0.05 | | Xitsonga | 0.00007 | | Punjabi | 0.05 | | Sesotho | 0.00007 | | Kannada | 0.06 | | Chi Chewa | 0.0001 | | Nepali | 0.07 | | Setswana | 0.0002 | | Telugu | 0.09 | | Northern Sotho | 0.0002 | | Malayalam | 0.10 | | Fon | 0.0002 | | Urdu | 0.10 | | Kirundi | 0.0003 | | Tamil | 0.20 | | Wolof | 0.0004 | | Bengali | 0.50 | | Kuganda | 0.0004 | | Hindi | 0.70 | | Chi Shona | 0.001 | | Isi Zulu | 0.001 | | Igbo | 0.001 | | Xhosa | 0.001 | | Kinyarwanda | 0.003 | | Yoruba | 0.006 | | Swahili | 0.02 | </details> The following table shows the distribution of programming languages. <details> <summary>Click to expand</summary><br/> | Extension | Language | Number of files | |----------------|------------|-----------------| | java | Java | 5,407,724 | | php | PHP | 4,942,186 | | cpp | C++ | 2,503,930 | | py | Python | 2,435,072 | | js | JavaScript | 1,905,518 | | cs | C# | 1,577,347 | | rb | Ruby | 6,78,413 | | cc | C++ | 443,054 | | hpp | C++ | 391,048 | | lua | Lua | 352,317 | | go | GO | 227,763 | | ts | TypeScript | 195,254 | | C | C | 134,537 | | scala | Scala | 92,052 | | hh | C++ | 67,161 | | H | C++ | 55,899 | | tsx | TypeScript | 33,107 | | rs | Rust | 29,693 | | phpt | PHP | 9,702 | | c++ | C++ | 1,342 | | h++ | C++ | 791 | | php3 | PHP | 540 | | phps | PHP | 270 | | php5 | PHP | 166 | | php4 | PHP | 29 | </details> </details> <p>&nbsp;</p> ## Risks and Limitations *This section identifies foreseeable harms and misunderstandings.* <details> <summary>Click to expand</summary><br/> Model may: - Overrepresent some viewpoints and underrepresent others - Contain stereotypes - Contain [personal information](#personal-data-and-information) - Generate: - Hateful, abusive, or violent language - Discriminatory or prejudicial language - Content that may not be appropriate for all settings, including sexual content - Make errors, including producing incorrect information as if it were factual - Generate irrelevant or repetitive outputs </details> <p>&nbsp;</p> ## Evaluation *This section describes the evaluation protocols and provides the results.* <details> <summary>Click to expand</summary><br/> ### Metrics *This section describes the different ways performance is calculated and why.* Includes: | Metric | Why chosen | |--------------------|--------------------------------------------------------------------| | [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training | | Cross Entropy [Loss](#loss) | Standard objective for language models. | And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_ ### Factors *This section lists some different aspects of BLOOM models. Its focus is on aspects that are likely to give rise to high variance in model behavior.* - Language, such as English or Yoruba - Domain, such as newswire or stories - Demographic characteristics, such as gender or nationality ### Results *Results are based on the [Factors](#factors) and [Metrics](#metrics).* **Zero-shot evaluations:** See this repository for JSON files: https://github.com/bigscience-workshop/evaluation-results | Task | Language | Metric | BLOOM-2B5 | |:----|:----|:----|:----:| | arc_challenge | eng | acc ↑ | 0.28 | | arc_easy | eng | acc ↑ | 0.595 | | axb (Median of 10 prompts) | eng | acc ↑ | 0.443 | | axg (Median of 10 prompts) | eng | acc ↑ | 0.5 | | boolq (Median of 11 prompts) | eng | acc ↑ | 0.617 | | cb (Median of 15 prompts) | eng | acc ↑ | 0.304 | | cola (Median of 5 prompts) | eng | acc ↑ | 0.611 | | copa (Median of 9 prompts) | eng | acc ↑ | 0.63 | | crows_pairs_english (Median of 6 prompts) | eng | acc ↑ | 0.497 | | crows_pairs_french (Median of 7 prompts) | fra | acc ↑ | 0.503 | | diabla (Median of 2 prompts) | eng | acc ↑ | 0.289 | | gsarti/flores_101_afr | afr | byte_perplexity ↓ | 6.501 | | gsarti/flores_101_amh | amh | byte_perplexity ↓ | 3.973 | | gsarti/flores_101_ara | ara | byte_perplexity ↓ | 1.808 | | gsarti/flores_101_asm | asm | byte_perplexity ↓ | 5.699 | | gsarti/flores_101_ast | ast | byte_perplexity ↓ | 3.925 | | gsarti/flores_101_azj | azj | byte_perplexity ↓ | 6.943 | | gsarti/flores_101_bel | bel | byte_perplexity ↓ | 3.614 | | gsarti/flores_101_ben | ben | byte_perplexity ↓ | 5.121 | | gsarti/flores_101_bos | bos | byte_perplexity ↓ | 5.653 | | gsarti/flores_101_bul | bul | byte_perplexity ↓ | 2.701 | | gsarti/flores_101_cat | cat | byte_perplexity ↓ | 2.305 | | gsarti/flores_101_ceb | ceb | byte_perplexity ↓ | 6.291 | | gsarti/flores_101_ces | ces | byte_perplexity ↓ | 5.447 | | gsarti/flores_101_ckb | ckb | byte_perplexity ↓ | 3.726 | | gsarti/flores_101_cym | cym | byte_perplexity ↓ | 12.539 | | gsarti/flores_101_dan | dan | byte_perplexity ↓ | 5.183 | | gsarti/flores_101_deu | deu | byte_perplexity ↓ | 3.118 | | gsarti/flores_101_ell | ell | byte_perplexity ↓ | 2.468 | | gsarti/flores_101_eng | eng | byte_perplexity ↓ | 2.019 | | gsarti/flores_101_est | est | byte_perplexity ↓ | 9.117 | | gsarti/flores_101_fas | fas | byte_perplexity ↓ | 3.058 | | gsarti/flores_101_fin | fin | byte_perplexity ↓ | 6.847 | | gsarti/flores_101_fra | fra | byte_perplexity ↓ | 1.998 | | gsarti/flores_101_ful | ful | byte_perplexity ↓ | 11.466 | | gsarti/flores_101_gle | gle | byte_perplexity ↓ | 8.681 | | gsarti/flores_101_glg | glg | byte_perplexity ↓ | 3.03 | | gsarti/flores_101_guj | guj | byte_perplexity ↓ | 4.955 | | gsarti/flores_101_hau | hau | byte_perplexity ↓ | 10.758 | | gsarti/flores_101_heb | heb | byte_perplexity ↓ | 3.6 | | gsarti/flores_101_hin | hin | byte_perplexity ↓ | 4.713 | | gsarti/flores_101_hrv | hrv | byte_perplexity ↓ | 5.822 | | gsarti/flores_101_hun | hun | byte_perplexity ↓ | 6.44 | | gsarti/flores_101_hye | hye | byte_perplexity ↓ | 3.658 | | gsarti/flores_101_ibo | ibo | byte_perplexity ↓ | 5.565 | | gsarti/flores_101_ind | ind | byte_perplexity ↓ | 2.16 | | gsarti/flores_101_isl | isl | byte_perplexity ↓ | 8.082 | | gsarti/flores_101_ita | ita | byte_perplexity ↓ | 2.969 | | gsarti/flores_101_jav | jav | byte_perplexity ↓ | 7.057 | | gsarti/flores_101_jpn | jpn | byte_perplexity ↓ | 2.776 | | gsarti/flores_101_kam | kam | byte_perplexity ↓ | 11.073 | | gsarti/flores_101_kan | kan | byte_perplexity ↓ | 5.552 | | gsarti/flores_101_kat | kat | byte_perplexity ↓ | 2.523 | | gsarti/flores_101_kaz | kaz | byte_perplexity ↓ | 3.39 | | gsarti/flores_101_kea | kea | byte_perplexity ↓ | 8.919 | | gsarti/flores_101_kir | kir | byte_perplexity ↓ | 3.729 | | gsarti/flores_101_kor | kor | byte_perplexity ↓ | 3.933 | | gsarti/flores_101_lao | lao | byte_perplexity ↓ | 2.908 | | gsarti/flores_101_lav | lav | byte_perplexity ↓ | 7.777 | | gsarti/flores_101_lin | lin | byte_perplexity ↓ | 7.525 | | gsarti/flores_101_lit | lit | byte_perplexity ↓ | 7.369 | | gsarti/flores_101_ltz | ltz | byte_perplexity ↓ | 8.801 | | gsarti/flores_101_lug | lug | byte_perplexity ↓ | 8.483 | | gsarti/flores_101_luo | luo | byte_perplexity ↓ | 11.976 | | gsarti/flores_101_mal | mal | byte_perplexity ↓ | 4.616 | | gsarti/flores_101_mar | mar | byte_perplexity ↓ | 5.483 | | gsarti/flores_101_mkd | mkd | byte_perplexity ↓ | 2.966 | | gsarti/flores_101_mlt | mlt | byte_perplexity ↓ | 15.005 | | gsarti/flores_101_mon | mon | byte_perplexity ↓ | 3.411 | | gsarti/flores_101_mri | mri | byte_perplexity ↓ | 7.474 | | gsarti/flores_101_msa | msa | byte_perplexity ↓ | 2.571 | | gsarti/flores_101_mya | mya | byte_perplexity ↓ | 2.414 | | gsarti/flores_101_nld | nld | byte_perplexity ↓ | 4.128 | | gsarti/flores_101_nob | nob | byte_perplexity ↓ | 5.403 | | gsarti/flores_101_npi | npi | byte_perplexity ↓ | 5.199 | | gsarti/flores_101_nso | nso | byte_perplexity ↓ | 8.155 | | gsarti/flores_101_nya | nya | byte_perplexity ↓ | 8.18 | | gsarti/flores_101_oci | oci | byte_perplexity ↓ | 4.862 | | gsarti/flores_101_orm | orm | byte_perplexity ↓ | 12.912 | | gsarti/flores_101_ory | ory | byte_perplexity ↓ | 5.189 | | gsarti/flores_101_pan | pan | byte_perplexity ↓ | 4.698 | | gsarti/flores_101_pol | pol | byte_perplexity ↓ | 4.626 | | gsarti/flores_101_por | por | byte_perplexity ↓ | 1.975 | | gsarti/flores_101_pus | pus | byte_perplexity ↓ | 4.496 | | gsarti/flores_101_ron | ron | byte_perplexity ↓ | 4.965 | | gsarti/flores_101_rus | rus | byte_perplexity ↓ | 2.05 | | gsarti/flores_101_slk | slk | byte_perplexity ↓ | 6.451 | | gsarti/flores_101_slv | slv | byte_perplexity ↓ | 6.62 | | gsarti/flores_101_sna | sna | byte_perplexity ↓ | 8.462 | | gsarti/flores_101_snd | snd | byte_perplexity ↓ | 5.466 | | gsarti/flores_101_som | som | byte_perplexity ↓ | 11.959 | | gsarti/flores_101_spa | spa | byte_perplexity ↓ | 1.897 | | gsarti/flores_101_srp | srp | byte_perplexity ↓ | 2.871 | | gsarti/flores_101_swe | swe | byte_perplexity ↓ | 5.055 | | gsarti/flores_101_swh | swh | byte_perplexity ↓ | 3.697 | | gsarti/flores_101_tam | tam | byte_perplexity ↓ | 4.539 | | gsarti/flores_101_tel | tel | byte_perplexity ↓ | 5.807 | | gsarti/flores_101_tgk | tgk | byte_perplexity ↓ | 3.599 | | gsarti/flores_101_tgl | tgl | byte_perplexity ↓ | 5.667 | | gsarti/flores_101_tha | tha | byte_perplexity ↓ | 2.366 | | gsarti/flores_101_tur | tur | byte_perplexity ↓ | 4.885 | | gsarti/flores_101_ukr | ukr | byte_perplexity ↓ | 2.724 | | gsarti/flores_101_umb | umb | byte_perplexity ↓ | 12.767 | | gsarti/flores_101_urd | urd | byte_perplexity ↓ | 1.98 | | gsarti/flores_101_uzb | uzb | byte_perplexity ↓ | 12.002 | | gsarti/flores_101_vie | vie | byte_perplexity ↓ | 1.766 | | gsarti/flores_101_wol | wol | byte_perplexity ↓ | 9.144 | | gsarti/flores_101_xho | xho | byte_perplexity ↓ | 7.403 | | gsarti/flores_101_yor | yor | byte_perplexity ↓ | 5.913 | | gsarti/flores_101_zho_simpl | zho_simpl | byte_perplexity ↓ | 2.277 | | gsarti/flores_101_zho_trad | zho_trad | byte_perplexity ↓ | 2.518 | | gsarti/flores_101_zul | zul | byte_perplexity ↓ | 8.534 | | headqa | esp | acc ↑ | 0.264 | | hellaswag | eng | acc ↑ | 0.412 | | logiqa | eng | acc ↑ | 0.207 | | mathqa | eng | acc ↑ | 0.25 | | mc_taco | eng | em ↑ | 0.119 | | mnli (Median of 15 prompts) | eng | acc ↑ | 0.355 | | mnli_mismatched (Median of 15 prompts) | eng | acc ↑ | 0.352 | | mrpc | eng | acc ↑ | 0.586 | | multirc (Median of 11 prompts) | eng | acc ↑ | 0.538 | | openbookqa | eng | acc ↑ | 0.216 | | piqa | eng | acc ↑ | 0.708 | | prost | eng | acc ↑ | 0.227 | | pubmedqa | eng | acc ↑ | 0.616 | | qnli | eng | acc ↑ | 0.507 | | qqp (Median of 7 prompts) | eng | acc ↑ | 0.384 | | race | eng | acc ↑ | 0.352 | | rte (Median of 6 prompts) | eng | acc ↑ | 0.477 | | sciq | eng | acc ↑ | 0.892 | | sst (Median of 6 prompts) | eng | acc ↑ | 0.518 | | triviaqa | eng | acc ↑ | 0.042 | | tydiqa_primary (Median of 24 prompts) | eng | acc ↑ | 0.301 | | webqs | eng | acc ↑ | 0.017 | | wic (Median of 11 prompts) | eng | acc ↑ | 0.502 | | winogrande | eng | acc ↑ | 0.586 | | wnli (Median of 6 prompts) | eng | acc ↑ | 0.472 | | wsc (Median of 11 prompts) | eng | acc ↑ | 0.442 | | humaneval | python | pass@1 ↑ | 0.155 | | humaneval | python | pass@10 ↑ | 0.322 | | humaneval | python | pass@100 ↑ | 0.555 | **Train-time Evaluation:** As of 25.May.2022, 15:00 PST: - Training Loss: 2.0 - Validation Loss: 2.2 - Perplexity: 8.9 </details> <p>&nbsp;</p> ## Recommendations *This section provides information on warnings and potential mitigations.* <details> <summary>Click to expand</summary><br/> - Indirect users should be made aware when the content they're working with is created by the LLM. - Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary. - Models pretrained with the LLM should include an updated Model Card. - Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. </details> <p>&nbsp;</p> ## Glossary and Calculations *This section defines common terms and how metrics are calculated.* <details> <summary>Click to expand</summary><br/> - <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss. - <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy. - <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/). - <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf). - <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf). - <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm). - <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf)) - <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated. </details> <p>&nbsp;</p> ## More Information <details> <summary>Click to expand</summary><br/> ### Dataset Creation Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling ### Technical Specifications Blog post summarizing how the architecture, size, shape, and pre-training duration where selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md ### Initial Results Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book </details> <p>&nbsp;</p> ## Model Card Authors *Ordered roughly chronologically and by amount of time spent.* Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
[ "PUBMEDQA", "SCIQ" ]
Non_BioNLP
udrearobert999/multi-qa-mpnet-base-cos-v1-ocontrastive-3e-300samples-20iter
udrearobert999
text-classification
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/multi-qa-mpnet-base-cos-v1", "base_model:finetune:sentence-transformers/multi-qa-mpnet-base-cos-v1", "model-index", "region:us" ]
1,715
1,715
4
0
--- base_model: sentence-transformers/multi-qa-mpnet-base-cos-v1 library_name: setfit metrics: - f1 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: in durankulak near varna is another important example other signs of early metals are found from the third millennium bc in palmela portugal los millares spain and stonehenge united kingdom the precise beginnings however have not be clearly ascertained and new discoveries are both continuous and ongoing in tamilnadu in approximately 1900 bc ancient iron smelting sites were functioning in tamil nadu in the near east about 3500 bc it was discovered that by combining copper and tin a superior metal could be made an alloy called bronze this represented a major technological shift known as the bronze age the extraction of iron from its ore into a workable metal is much more difficult than for copper or tin the process appears to have been invented by the hittites in about 1200 bc beginning the iron age the secret of extracting and working iron was a key factor in the success of the philistineshistorical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations this includes the ancient and medieval kingdoms and empires of the middle east and near east ancient iran ancient egypt ancient nubia and anatolia in presentday turkey ancient nok carthage the greeks and romans of ancient europe medieval europe ancient and medieval china ancient and medieval india ancient and medieval japan amongst others many applications practices and devices associated or involved in metallurgy were established in ancient china such as the innovation of the blast furnace cast iron hydraulicpowered trip hammers and double acting piston bellowsa 16th century book by georg agricola de re metallica describes the highly developed and complex processes of mining metal ores metal extraction and metallurgy of the time agricola has been described as the father of metallurgy extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form in order to convert a metal oxide or sulphide to a purer metal the ore must be reduced physically chemically or electrolytically extractive metallurgists are interested in three primary streams feed concentrate metal oxidesulphide and tailings waste after mining large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough where each particle is either mostly valuable or mostly waste concentrating the particles of value in a form supporting separation enables the desired metal to be removed from waste products mining may not be necessary if the ore body and physical environment are conducive to leaching leaching dissolves minerals in an ore body and results in an enriched solution the solution is collected and processed to extract valuable metals ore - text: '##rch procedure that evaluates the objective function p x displaystyle pmathbf x on a grid of candidate source locations g displaystyle mathcal g to estimate the spatial location of the sound source x s displaystyle textbf xs as the point of the grid that provides the maximum srp modifications of the classical srpphat algorithm have been proposed to reduce the computational cost of the gridsearch step of the algorithm and to increase the robustness of the method in the classical srpphat for each microphone pair and for each point of the grid a unique integer tdoa value is selected to be the acoustic delay corresponding to that grid point this procedure does not guarantee that all tdoas are associated to points on the grid nor that the spatial grid is consistent since some of the points may not correspond to an intersection of hyperboloids this issue becomes more problematic with coarse grids since when the number of points is reduced part of the tdoa information gets lost because most delays are not anymore associated to any point in the grid the modified srpphat collects and uses the tdoa information related to the volume surrounding each spatial point of the search grid by considering a modified objective function where l m 1 m 2 l x displaystyle lm1m2lmathbf x and l m 1 m 2 u x displaystyle lm1m2umathbf x are the lower and upper accumulation limits of gcc delays which depend on the spatial location x displaystyle mathbf x the accumulation limits can be calculated beforehand in an exact way by exploring the boundaries separating the regions corresponding to the points of the grid alternatively they can be selected by considering the spatial gradient of the tdoa ∇ τ m 1 m 2 x ∇ x τ m 1 m 2 x ∇ y τ m 1 m 2 x ∇ z τ m 1 m 2 x t displaystyle nabla tau m1m2mathbf x nabla xtau m1m2mathbf x nabla ytau m1m2mathbf x nabla ztau m1m2mathbf x t where each component γ ∈ x y z displaystyle gamma in leftxyzright of the gradient is for a rectangular grid where neighboring points are separated a distance r displaystyle r the lower and upper accumulation limits are given by where d r 2 min 1 sin θ cos [UNK] 1 sin θ sin [UNK] 1 cos θ displaystyle dr2min leftfrac 1vert sintheta cosphi vert frac 1vert sintheta sinphi vert frac 1vert' - text: authority to select projects and mandated new metropolitan planning initiatives for the first time state transportation officials were required to consult seriously with local representatives on mpo governing boards regarding matters of project prioritization and decisionmaking these changes had their roots in the need to address increasingly difficult transportation problems — in particular the more complicated patterns of traffic congestion that arose with the suburban development boom in the previous decades many recognized that the problems could only be addressed effectively through a stronger federal commitment to regional planning the legislation that emerged the intermodal surface transportation efficiency act istea was signed into federal law by president george h w bush in december 1991 it focused on improving transportation not as an end in itself but as the means to achieve important national goals including economic progress cleaner air energy conservation and social equity istea promoted a transportation system in which different modes and facilities — highway transit pedestrian bicycle aviation and marine — were integrated to allow a seamless movement of both goods and people new funding programs provided greater flexibility in the use of funds particularly regarding using previously restricted highway funds for transit development improved intermodal connections and emphasized upgrades to existing facilities over building new capacity — particularly roadway capacity to accomplish more serious metropolitan planning istea doubled federal funding for mpo operations and required the agencies to evaluate a variety of multimodal solutions to roadway congestion and other transportation problems mpos also were required to broaden public participation in the planning process and to see that investment decisions contributed to meeting the air quality standards of the clean air act amendments in addition istea placed a new requirement on mpos to conduct fiscally constrained planning and ensure that longrange transportation plans and shortterm transportation improvement programs were fiscally constrained in other words adopted plans and programs can not include more projects than reasonably can be expected to be funded through existing or projected sources of revenues this new requirement represented a major conceptual shift for many mpos and others in the planning community since the imposition of fiscal discipline on plans now required not only understanding how much money might be available but how to prioritize investment needs and make difficult choices among competing needs adding to this complexity is the need to plan across transportation modes and develop approaches for multimodal investment prioritization and decision making it is in this context of greater prominence funding and requirements that mpos function today an annual element is composed of transportation improvement projects contained in an areas transportation improvement program tip which is proposed for implementation during the current year the annual element is submitted to the us department of transportation as part of the required planning process the passage of safe accountable flexible efficient transportation equity act a legacy for users safetealu - text: '##pignygiroux served as an assistant professor from 1997 2003 associate professor from 2003 2014 chair of the department of geography from 2015 2018 and professor beginning in 2014 with secondary appointments in department of geology the college of education social services and rubenstein school of environment natural resources she teaches courses in meteorology climatology physical geography remote sensing and landsurface processes in her work as state climatologist for vermont dupignygiroux uses her expertise hydrology and extreme weather such as floods droughts and storms to keep the residents of vermont informed on how climate change will affect their homes health and livelihoods she assists other state agencies in preparing for and adapting to current and future impacts of climate change on vermonts transportation system emergency management planning and agriculture and forestry industries for example she has published analyses of the impacts of climate change on the health of vermonts sugar maples a hardwood species of key economic and cultural importance to the state as cochair of vermonts state ’ s drought task force she played a key role in developing the 2018 vermont state hazard mitigation plandupignygiroux served as secretary for the american association of state climatologists from 20102011 and president elect from 20192020 in june 2020 she was elected as president of the american association of state climatologists which is a twoyear term in addition to her research on climate change dupignygiroux is known for her efforts to research and promote climate literacy climate literacy is an understanding of the influences of and influences on the climate system including how people change the climate how climate metrics are observed and modelled and how climate change affects society “ being climate literate is more critical than ever before ” lesleyann dupignygiroux stated for a 2020 article on climate literacy “ if we do not understand weather climate and climate change as intricate and interconnected systems then our appreciation of the big picture is lost ” dupignygiroux is known for her climate literacy work with elementary and high school teachers and students she cofounded the satellites weather and climate swac project in 2008 which is a professional development program for k12 teachers designed to promote climate literacy and interest in the stem science technology engineering and mathematics careers dupignygiroux is also a founding member of the climate literacy and energy awareness network clean formerly climate literacy network a communitybased effort to support climate literacy and communication in a 2016 interview dupignygiroux stated “ sharing knowledge and giving back to my community are my two axioms in life watching students mature and flourish in' - text: no solutions to x n y n z n displaystyle xnynzn for all n ≥ 3 displaystyle ngeq 3 this claim appears in his annotations in the margins of his copy of diophantus euler the interest of leonhard euler 1707 – 1783 in number theory was first spurred in 1729 when a friend of his the amateur goldbach pointed him towards some of fermats work on the subject this has been called the rebirth of modern number theory after fermats relative lack of success in getting his contemporaries attention for the subject eulers work on number theory includes the following proofs for fermats statements this includes fermats little theorem generalised by euler to nonprime moduli the fact that p x 2 y 2 displaystyle px2y2 if and only if p ≡ 1 mod 4 displaystyle pequiv 1bmod 4 initial work towards a proof that every integer is the sum of four squares the first complete proof is by josephlouis lagrange 1770 soon improved by euler himself the lack of nonzero integer solutions to x 4 y 4 z 2 displaystyle x4y4z2 implying the case n4 of fermats last theorem the case n3 of which euler also proved by a related method pells equation first misnamed by euler he wrote on the link between continued fractions and pells equation first steps towards analytic number theory in his work of sums of four squares partitions pentagonal numbers and the distribution of prime numbers euler pioneered the use of what can be seen as analysis in particular infinite series in number theory since he lived before the development of complex analysis most of his work is restricted to the formal manipulation of power series he did however do some very notable though not fully rigorous early work on what would later be called the riemann zeta function quadratic forms following fermats lead euler did further research on the question of which primes can be expressed in the form x 2 n y 2 displaystyle x2ny2 some of it prefiguring quadratic reciprocity diophantine equations euler worked on some diophantine equations of genus 0 and 1 in particular he studied diophantuss work he tried to systematise it but the time was not yet ripe for such an endeavour — algebraic geometry was still in its infancy he did notice there was a connection between diophantine problems and elliptic integrals whose study he had himself initiated lagrange legendre and gauss josephlouis inference: true model-index: - name: SetFit with sentence-transformers/multi-qa-mpnet-base-cos-v1 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: f1 value: 0.7425854169247909 name: F1 --- # SetFit with sentence-transformers/multi-qa-mpnet-base-cos-v1 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/multi-qa-mpnet-base-cos-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-cos-v1) as the Sentence Transformer embedding model. A [SetFitHead](huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/multi-qa-mpnet-base-cos-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-cos-v1) - **Classification head:** a [SetFitHead](huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 43 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 9 | <ul><li>'##hthalmic formulation of latilactobacillus sakei while the oral probiotic demonstrated no discernible benefits there is limited evidence indicating probiotics are of benefit in the management of infection or inflammation of the urinary tract one literature review found lactobacillus probiotic supplements appeared to increase vaginal lactobacilli levels thus reducing the incidence of vaginal infections in otherwise healthy adult women supplements such as tablets capsules powders and sachets containing bacteria have been studied however probiotics taken orally can be destroyed by the acidic conditions of the stomach as of 2010 a number of microencapsulation techniques were being developed to address this problem preliminary research is evaluating the potential physiological effects of multiple probiotic strains as opposed to a single strain as the human gut may contain tens of thousands of microbial species one theory indicates that this diverse environment may benefit from consuming multiple probiotic strains an effect that remains scientifically unconfirmed only preliminary evidence exists for most probiotic health claims even for the most studied probiotic strains few have been sufficiently developed in basic and clinical research to warrant approval for health claim status by a regulatory agency such as the fda or efsa and as of 2010 no claims had been approved by those two agencies some experts are skeptical about the efficacy of different probiotic strains and believe that not all subjects benefit from probiotics first probiotics must be alive when administered one of the concerns throughout the scientific literature resides in the viability and reproducibility on a large scale of observed results for specific studies as well as the viability and stability during use and storage and finally the ability to survive in stomach acids and then in the intestinal ecosystemsecond probiotics must have undergone controlled evaluation to document health benefits in the target host only products that contain live organisms shown in reproducible human studies to confer a health benefit may claim to be probiotic the correct definition of health benefit backed with solid scientific evidence is a strong element for the proper identification and assessment of the effect of a probiotic this aspect is a challenge for scientific and industrial investigations because several difficulties arise such as variability in the site for probiotic use oral vaginal intestinal and mode of applicationthird the probiotic candidate must be a taxonomically defined microbe or combination of microbes genus species and strain level it is commonly admitted that most effects of probiotics are strainspecific and cannot be extended to other probiotics of the same genus or species this calls for precise identification of the strain ie gen'</li><li>'thiosulfate – citrate – bile salts – sucrose agar or tcbs agar is a type of selective agar culture plate that is used in microbiology laboratories to isolate vibrio species tcbs agar is highly selective for the isolation of v cholerae and v parahaemolyticus as well as other vibrio species apart from tcbs agar other rapid testing dipsticks like immunochromatographic dipstick is also used in endemic areas such as asia africa and latin america though tcbs agar study is required for confirmation this becomes immensely important in cases of gastroenteritis caused by campylobacter species whose symptoms mimic that of cholera since no yellow bacterial growth is observed in case of campylobacter species on tcbs agar chances of incorrect diagnosis can be rectified tcbs agar contains high concentrations of sodium thiosulfate and sodium citrate to inhibit the growth of enterobacteriaceae inhibition of grampositive bacteria is achieved by the incorporation of ox gall which is a naturally occurring substance containing a mixture of bile salts and sodium cholate a pure bile salt sodium thiosulfate also serves as a sulfur source and its presence in combination with ferric citrate allows for the easy detection of hydrogen sulfide production saccharose sucrose is included as a fermentable carbohydrate for metabolism by vibrio species the alkaline ph of the medium enhances the recovery of v cholerae and inhibits the growth of others thymol blue and bromothymol blue are included as indicators of ph changes approximate amounts per literyeast extract 50 g proteose peptone 100 g sodium thiosulfate 100 g sodium citrate 100 g ox gall 50 g sodium cholate 30 g saccharose 200 g sodium chloride 100 g ferric citrate 10 g bromothymol blue 004 g thymol blue 004 g agar 150 gph 86 ± 02 25 °c typical colony morphologyv cholerae large yellow colonies v parahaemolyticus colonies with blue to green centers v alginolyticus large yellow mucoidal colonies v harveyiv fischeri greyishgreen to bluishgreen colonies which show luminescence in dark older colonies fail to show bioluminescence proteusenterococci partial inhibition if growth colonies are small and yellow to translucent pseudomonasaeromonas partial inhibition if growth colonies are blueba'</li><li>'penicillin binding protein 3 pbp3 the ftsl gene is a group of filamentation temperaturesensitive genes used in cell division their product pbp3 as mentioned above is a membrane transpeptidase required for peptidoglycan synthesis at the septum inactivation of the ftsl gene product requires the sospromoting reca and lexa genes as well as dpia and transiently inhibits bacterial cell division the dpia is the effector for the dpib twocomponent system interaction of dpia with replication origins competes with the binding of the replication proteins dnaa and dnab when overexpressed dpia can interrupt dna replication and induce the sos response resulting in inhibition of cell division nutritional stress can change bacterial morphology a common shape alteration is filamentation which can be triggered by a limited availability of one or more substrates nutrients or electron acceptors since the filament can increase a cells uptake – surface area without significantly changing its volume appreciably moreover the filamentation benefits bacterial cells attaching to a surface because it increases specific surface area in direct contact with the solid medium in addition the filamentation may allows bacterial cells to access nutrients by enhancing the possibility that part of the filament will contact a nutrientrich zone and pass compounds to the rest of the cells biomass for example actinomyces israelii grows as filamentous rods or branched in the absence of phosphate cysteine or glutathione however it returns to a regular rodlike morphology when adding back these nutrients filamentation protoplasts spheroplasts'</li></ul> | | 26 | <ul><li>'into pellets these cost on average 70 more than raw ore finally gas requirements can significantly increase investment costs gas produced by a corex is remarkably wellsuited to feeding a midrex unit but the attraction of the low investment then fades although gas handling and processing are far more economical than converting coal into coke not to mention the associated constraints such as bulk handling high sensitivity of coking plants to production fluctuations environmental impact etc replacing coke with natural gas only makes direct reduction attractive to steelmakers with cheap gas resources this point is essential as european steelmakers pointed out in 1998theres no secret to be competitive direct reduction requires natural gas at 2 per gigajoule half the european price lusine nouvelle september 1998 la reduction directe passe au charbonthis explains the development of certain reductionmelting processes which because of the high temperatures involved have a surplus of reducing gas reductionmelting processes such as the corex capable of feeding an ancillary midrex direct reduction unit or the tecnored are justified by their ability to produce corich gas despite their higher investment cost in addition coke oven gas is an essential coproduct in the energy strategy of a steel complex the absence of a coke oven must therefore be compensated for by higher natural gas consumption for downstream tools notably hot rolling and annealing furnaces the worldwide distribution of direct reduction plants is therefore directly correlated with the availability of natural gas and ore in 2007 the breakdown was as follows natural gas processes are concentrated in latin america where many have already been developed and the middle east coalfired processes are remarkably successful in india maintaining the proportion of steel produced by direct reduction despite the strong development of the chinese steel industrychina a country with gigantic needs and a deficit of scrap metal and europe lacking competitive ore and fuels have never invested massively in these processes remaining faithful to the blast furnace route the united states meanwhile has always had a few units but since 2012 the exploitation of shale gas has given a new impetus to natural gas processeshowever because direct reduction uses much more hydrogen as a reducing agent than blast furnaces which is very clear for natural gas processes it produces much less co2 a greenhouse gas this advantage has motivated the development of ulcos processes in developed countries such as hisarna ulcored and others the emergence of mature gas treatment technologies such as pressure swing adsorption or amine gas treating has also rekindled the interest of researchers in addition to reducing co2 emissions pure hydrogen processes such as hybrit are being actively studied with a view to decarbonizing the'</li><li>'have shapes that depend on crystalline orientation often needle or plateshaped these particles align themselves as water leaves the slurry or as clay is formed casting or other fluidtosolid transitions ie thinfilm deposition produce textured solids when there is enough time and activation energy for atoms to find places in existing crystals rather than condensing as an amorphous solid or starting new crystals of random orientation some facets of a crystal often the closepacked planes grow more rapidly than others and the crystallites for which one of these planes faces in the direction of growth will usually outcompete crystals in other orientations in the extreme only one crystal will survive after a certain length this is exploited in the czochralski process unless a seed crystal is used and in the casting of turbine blades and other creepsensitive parts material properties such as strength chemical reactivity stress corrosion cracking resistance weldability deformation behavior resistance to radiation damage and magnetic susceptibility can be highly dependent on the material ’ s texture and related changes in microstructure in many materials properties are texturespecific and development of unfavorable textures when the material is fabricated or in use can create weaknesses that can initiate or exacerbate failures parts can fail to perform due to unfavorable textures in their component materials failures can correlate with the crystalline textures formed during fabrication or use of that component consequently consideration of textures that are present in and that could form in engineered components while in use can be a critical when making decisions about the selection of some materials and methods employed to manufacture parts with those materials when parts fail during use or abuse understanding the textures that occur within those parts can be crucial to meaningful interpretation of failure analysis data as the result of substrate effects producing preferred crystallite orientations pronounced textures tend to occur in thin films modern technological devices to a large extent rely on polycrystalline thin films with thicknesses in the nanometer and micrometer ranges this holds for instance for all microelectronic and most optoelectronic systems or sensoric and superconducting layers most thin film textures may be categorized as one of two different types 1 for socalled fiber textures the orientation of a certain lattice plane is preferentially parallel to the substrate plane 2 in biaxial textures the inplane orientation of crystallites also tend to align with respect to the sample the latter phenomenon is accordingly observed in nearly epitaxial growth processes where certain crystallographic axes of crystals in the layer tend to align along'</li><li>'origin of friction contact between surfaces is made up of a large number of microscopic regions in the literature called asperities or junctions of contact where atomtoatom contact takes place the phenomenon of friction and therefore of the dissipation of energy is due precisely to the deformations that such regions undergo due to the load and relative movement plastic elastic or rupture deformations can be observed plastic deformations – permanent deformations of the shape of the bumps elastic deformations – deformations in which the energy expended in the compression phase is almost entirely recovered in the decompression phase elastic hysteresis break deformations – deformations that lead to the breaking of bumps and the creation of new contact areasthe energy that is dissipated during the phenomenon is transformed into heat thus increasing the temperature of the surfaces in contact the increase in temperature also depends on the relative speed and the roughness of the material it can be so high as to even lead to the fusion of the materials involved in friction phenomena temperature is fundamental in many areas of application for example a rise in temperature may result in a sharp reduction of the friction coefficient and consequently the effectiveness of the brakes the cohesion theory the adhesion theory states that in the case of spherical asperities in contact with each other subjected to a w → displaystyle vec w load a deformation is observed which as the load increases passes from an elastic to a plastic deformation this phenomenon involves an enlargement of the real contact area a r displaystyle ar which for this reason can be expressed as where d is the hardness of the material definable as the applied load divided by the area of the contact surface if at this point the two surfaces are sliding between them a resistance to shear stress t is observed given by the presence of adhesive bonds which were created precisely because of the plastic deformations and therefore the frictional force will be given by at this point since the coefficient of friction is the ratio between the intensity of the frictional force and that of the applied load it is possible to state that thus relating to the two material properties shear strength t and hardness to obtain low value friction coefficients μ displaystyle mu it is possible to resort to materials which require less shear stress but which are also very hard in the case of lubricants in fact we use a substrate of material with low cutting stress t placed on a very hard material the force acting between two solids in contact will not only have normal components as implied so far but will also have tangential components this further complicates the description of the interactions'</li></ul> | | 4 | <ul><li>'dover publications isbn 0486669580 t w korner 2012 vectors pure and applied a general introduction to linear algebra cambridge university press p 216 isbn 9781107033566 r torretti 1996 relativity and geometry courier dover publications p 103 isbn 0486690466 j j l synge a schild 1978 tensor calculus courier dover publications p 128 isbn 048614139x c a balafoutis r v patel 1991 dynamic analysis of robot manipulators a cartesian tensor approach the kluwer international series in engineering and computer science robotics vision manipulation and sensors vol 131 springer isbn 0792391454 s g tzafestas 1992 robotic systems advanced techniques and applications springer isbn 0792317491 t dass s k sharma 1998 mathematical methods in classical and quantum physics universities press p 144 isbn 8173710899 g f j temple 2004 cartesian tensors an introduction dover books on mathematics series dover isbn 0486439089 h jeffreys 1961 cartesian tensors cambridge university press isbn 9780521054232'</li><li>'vibration from latin vibro to shake is a mechanical phenomenon whereby oscillations occur about an equilibrium point vibration may be deterministic if the oscillations can be characterised precisely eg the periodic motion of a pendulum or random if the oscillations can only be analysed statistically eg the movement of a tire on a gravel road vibration can be desirable for example the motion of a tuning fork the reed in a woodwind instrument or harmonica a mobile phone or the cone of a loudspeaker in many cases however vibration is undesirable wasting energy and creating unwanted sound for example the vibrational motions of engines electric motors or any mechanical device in operation are typically unwanted such vibrations could be caused by imbalances in the rotating parts uneven friction or the meshing of gear teeth careful designs usually minimize unwanted vibrations the studies of sound and vibration are closely related both fall under acoustics sound or pressure waves are generated by vibrating structures eg vocal cords these pressure waves can also induce the vibration of structures eg ear drum hence attempts to reduce noise are often related to issues of vibration machining vibrations are common in the process of subtractive manufacturing free vibration or natural vibration occurs when a mechanical system is set in motion with an initial input and allowed to vibrate freely examples of this type of vibration are pulling a child back on a swing and letting it go or hitting a tuning fork and letting it ring the mechanical system vibrates at one or more of its natural frequencies and damps down to motionlessness forced vibration is when a timevarying disturbance load displacement velocity or acceleration is applied to a mechanical system the disturbance can be a periodic and steadystate input a transient input or a random input the periodic input can be a harmonic or a nonharmonic disturbance examples of these types of vibration include a washing machine shaking due to an imbalance transportation vibration caused by an engine or uneven road or the vibration of a building during an earthquake for linear systems the frequency of the steadystate vibration response resulting from the application of a periodic harmonic input is equal to the frequency of the applied force or motion with the response magnitude being dependent on the actual mechanical system damped vibration when the energy of a vibrating system is gradually dissipated by friction and other resistances the vibrations are said to be damped the vibrations gradually reduce or change in frequency or intensity or cease and the system rests in its equilibrium position an example of this type of vibration is the vehicular suspension dampened by the shock absorber vibration testing is accomplished by introducing a forcing function into a'</li><li>'the most cited empirical generalizations in marketing as of august 2023 the paper a new product growth for model consumer durables published in management science had approximately 11352 citations in google scholarthis model has been widely influential in marketing and management science in 2004 it was selected as one of the ten most frequently cited papers in the 50year history of management science it was ranked number five and the only marketing paper in the list it was subsequently reprinted in the december 2004 issue of management sciencethe bass model was developed for consumer durables however it has been used also to forecast market acceptance of numerous consumer and industrial products and services including tangible nontangible medical and financial products sultan et al 1990 applied the bass model to 213 product categories mostly consumer durables in a wide range of prices but also to services such as motels and industrialfarming products like hybrid corn seeds diffusion of innovation forecasting lazy user model shifted gompertz distribution'</li></ul> | | 38 | <ul><li>'##ityas switching between languages is exceedingly common and takes many forms we can recognize codeswitching more often as sentence alternation a sentence may begin in one language and finish in another or phrases from both languages may succeed each other in apparently random order such behavior can be explained only by postulating a range of linguistic or social factors such as the following speakers cannot express themselves adequately in one language so they switch to another to work around the deficiency this may trigger a speaker to continue in the other language for a while switching to a minority language is very common as a means of expressing solidarity with a social group the language change signals to the listener that the speaker is from a certain background if the listener responds with a similar switch a degree of rapport is established the switch between languages can signal the speakers attitude towards the listener friendly irritated distant ironic jocular and so on monolinguals can communicate these effects to some extent by varying the level of formality of their speech bilinguals can do it by language switchingcodeswitching involves the capacity of bilingual individuals to switch between different languages within a single conversation john guiteriz notes that it is important to note that codeswitching is most commonly observed among bilingual individuals who are highly skilled in both languages and is actually prevalent in numerous bilingual communities contrary to common beliefs the patterns of language switching exhibited by the speaker can be influenced by the listeners level of proficiency in the languages or their personal language preferences codeswitching is distinct from other language contact phenomena such as borrowing pidgins and creoles and loan translation calques borrowing affects the lexicon the words that make up a language while codeswitching takes place in individual utterances speakers form and establish a pidgin language when two or more speakers who do not speak a common language form an intermediate third language speakers also practice codeswitching when they are each fluent in both languages code mixing is a thematically related term but the usage of the terms codeswitching and codemixing varies some scholars use either term to denote the same practice while others apply codemixing to denote the formal linguistic properties of languagecontact phenomena and codeswitching to denote the actual spoken usages by multilingual persons there is much debate in the field of linguistics regarding the distinction between codeswitching and language transfer according to jeanine treffersdaller considering cs codeswitching and language transfer as similar phenomena is helpful if one wants to create a theory that is as parsimonious as possible and therefore it is worth attempting to aim for'</li><li>'of new rhetoric has also been expanded in various academic disciplines for example in 2015 philosophers rutten soetaert used the new rhetoric concept to study changing attitudes in regards to education as a way to better understand if burkes ideas can be applied to this arenaburkes new rhetoric has also been used to understand the womens equality movement specifically in regards to the education of women and sharing of knowledge through print media academic amlong deconstructed print medias of the 1800s addressing human rights as an aspect of educating women about the womens rights movement generated when two peoples substances overlap burke asserts that all things have substance which he defines as the general nature of something identification is a recognized common ground between two peoples substances regarding physical characteristics talents occupation experiences personality beliefs and attitudes the more substance two people share the greater the identification it is used to overcome human division can be falsified to result in homophily sometimes the speaker tries to falsely identify with the audience which results in homophily for the audience homophily is the perceived similarity between speaker and listener the socalled i is merely a unique combination of potentially conflicting corporate wes for example the use of the people rather than the worker would more clearly tap into the lower middleclass values of the audience the movement was trying to reach reflects ambiguities of substance burke recognizes that identification rests on both unity and division since no ones substance can completely overlap with others individuals are both joined and separated humans can unite on certain aspects of substance but at the same time remain unique which is labeled as ambiguities identification can be increased by the process of consubstantiation which refers to bridging divisions between two people rhetoric is needed in this process to build unity according to burke guilt redemption is considered the plot of all human drama or the root of all rhetoric he defined the guilt as the theological doctrine of original sin as cited in littlejohn burke sees guilt as allpurpose word for any feeling of tension within a person — anxiety embarrassment selfhatred disgust and the likein this perspective burke concluded that the ultimate motivation of man is to purge oneself of ones sense of guilt through public speaking the term guilt covers tension anxiety shame disgust embarrassment and other similar feelings guilt serves as a motivating factor that drives the human drama burkes cycle refers to the process of feeling guilt and attempting to reduce it which follows a predictable pattern order or hierarchy the negative victimage scapegoat or mortification and redemption order or hierarchy society is a dramatic process in which hierarchy forms structure through power relationships the structure of social hierarchy considered in'</li><li>'between these speech traits and sexual orientation but also clarified the studys narrow scope on only certain phonetic features language and gender scholar robin lakoff not only compares gay male with female speech but also claims that gay men deliberately imitate the latter claiming this to include an increased use of superlatives inflected intonation and lisping later linguists have reevaluated lakoffs claims and concluded that these characterizations are not consistent for women instead reflecting stereotypes that may have social meaning and importance but that do not fully capture actual gendered language uselinguist david crystal correlated the use among men of an effeminate or simpering voice with a widened range of pitch glissando effects between stressed syllables greater use of fallrise and risefall tones vocal breathiness and huskiness and occasionally more switching to the falsetto register still research has not confirmed any unique intonation or pitch qualities of gay speech some such characteristics have been portrayed as mimicking womens speech and judged as derogatory toward or trivializing of women a study of over 300 flemish dutchspeaking belgian participants men and women found a significantly higher prevalence of a lisplike feature in gay men than in other demographics several studies have also examined and confirmed gay speech characteristics in puerto rican spanish and other dialects of caribbean spanish despite some similarities in gaysounding speech found crosslinguistically it is important to note that phonetic features that cue listener perception of gayness are likely to be languagedependent and languagespecific and a feature that is attributed to gayness in one linguistic variety or language may not have the same indexical meaning in a different linguistic variety or language for example a study from 2015 comparing gaysounding speech in german and italian finds slightly different acoustic cues for the languages as well as different extents of the correlation of gaysounding speech to genderatypicalsounding speech crocker l munson b 2006 speech characteristics of gendernonconforming boys oral presentation given at the conference on new ways of analyzing variation in language columbus oh mack s munson b 2008 implicit processing social stereotypes and the gay lisp oral presentation given at the annual meeting of the linguistic society of america chicago il mack sara munson benjamin 2012 the influence of s quality on ratings of mens sexual orientation explicit and implicit measures of the gay lisp stereotype journal of phonetics 40 1 198 – 212 doi101016jwocn201110002 munson b zimmerman lj 2006a the perception of sexual orientation masculini'</li></ul> | | 36 | <ul><li>'kill masses of people the role of the authentic patriots in contrast to the sheeplike followers of the elite and more recently the theory that the federal government was trying to starve the public by forcing fertilizer limitations on farmersin a august 29th ctv news interview in response to the attack on minister freeland public safety minister marco mendicino described the attack as unacceptable and said that it was not only a threat to freeland but also a threat to democracy he said they were in consultation with the rcmp and police services to investigate increasing security details for ministers and all politicians rage farming and rage baiting are most recent iterations of clickbait and other forms of internet manipulation that use conspiracy theories and misinformation to fuel anger and engage users facebook has been blamed for fanning sectarian hatred steering users toward extremism and conspiracy theories and incentivizing politicians to take more divisive stands according to a 2021 washington post report in spite of previous reports on changes to its news feed algorithms to reduce clickbait revelations by facebook whistleblower frances haugen and content from the 2021 facebook leak informally referred to as the facebook papers provide evidence of the role the companys news feed algorithm had playedmedia and governmental investigations in the wake of revelations from facebook whistleblower frances haugen and the 2021 facebook leak provide insight into the role various algorithms play in farming outrage for profit by spreading divisiveness conspiracy theories and sectarian hatred that can allegedly contribute to realworld violence a highly criticized example was when facebook with over 25 million accounts in myanmar neglected to police rageinducing hate speech posts targeting the rohingya muslim minority in myanmar that allegedly facilitated the rohingya genocide in 2021 a us173 billion class action lawsuit filed against meta platforms inc the new name of facebook on behalf of rohingya refugees claimed that facebooks algorithms amplified hate speechin response to complaints about clickbait on facebooks news feed and news feed ranking algorithm in 2014 and again in 2016 the company introduced an anticlickbait algorithm to remove sites from their news feed that frequently use headlines that withhold exaggerate or distort informationa february 2019 article that was promoted in facebook described how outrage bait made people angry on purpose digital media companies and social media actors incite outrage to increase engagement clicks comments likes and shares which generate more advertising revenue if content does not increase engagement timeline algorithm limits the number of users that this uninteresting content can reach according to this article when geared up on its war against clickbait algorithm changed which made it'</li><li>'aposiopesis classical greek αποσιωπησις becoming silent is a figure of speech wherein a sentence is deliberately broken off and left unfinished the ending to be supplied by the imagination giving an impression of unwillingness or inability to continue an example would be the threat get out or else — this device often portrays its users as overcome with passion fear anger excitement or modesty to mark the occurrence of aposiopesis with punctuation an emrule — or an ellipsis … may be used one classical example of aposiopesis in virgil occurs in aeneid 1135 neptune the roman god of the sea is angry with the winds whom juno released to start a storm and harass the trojan hero and protagonist aeneas neptune berates the winds for causing a storm without his approval but breaks himself off midthreatanother example in virgil occurs in the aeneid 2100 sinon the greek who is posing as a defector to deceive the trojans into accepting the trojan horse within their city wall tells about how ulixes lied to spur on the warfor an example from classical latin theater this occurs multiple times in one speech in terences adelphoe lines 159140 in the play demea has two sons he has given one to his brother micio to raise in the following scene demea has worked himself up in anger over his brothers laxer parenting style the following speech provides multiple examples of aposiopesisa biblical example is found in psalm 27 verse 13 it says unless i had believed i would see the goodness of the lord in the land of the living … the implication is that the author does not know what he would have done king lear overcome by anger at his daughters says aposiopesis also occurs at the agitated climax of mercutios queen mab speech resulting in a calming intervention by romeo dante alighieri used an aposiopesis in his divine comedy hell ix 79 citation from the translation by henry wadsworth longfellow virgil speaks to himself in syntax an aposiopesis arises when the if clause protasis of a condition is stated without an ensuing then clause or apodosis because an aposiopesis implies the trailing off of thought it is never directly followed by a period which would effectively result in four consecutive dots anacoluthon anapodoton prosiopesis quos ego figure of speech non sequitur literary device'</li><li>'and later entered his own monastery in fact aphrodito and its vicinity “ boasted over thirty churches and nearly forty monasteries ” there is no surviving record for the early years of dioscorus his father apollos was an entrepreneur and local official the commonly accepted date for the birth of dioscorus is around ad 520 although there is no evidence it is likely that dioscorus went to school in alexandria where one of his teachers might have been the neoplatonic philosopher john philoponus although alexandria was not the most prominent place for a legal education – that was the famed law school of beirut – young men did travel there for rhetorical training preliminary to the study of law these included the celebrated poet agathias a contemporary of dioscorus who at an early age published a successful collection of poems called the cycle and later became the center of a circle of prominent poets in the byzantine capital constantinopleback in aphrodito dioscorus married had children and pursued a career similar to his fathers acquiring leasing out and managing property and helping in the administration of the village his first dated appearance in the papyrus is 543 dioscorus had the assistant of the defensor civitatis of antaeopolis examine the damage done by a shepherd and his flock to a field of crops which was owned by the monastery of apa sourous but managed by dioscorus dioscorus also became engaged in legal work in 5467 after his father apollos died dioscorus wrote a formal petition to emperor justinian and a formal explanation to empress theodora about tax conflicts affecting aphrodito the village was under the special patronage of the empress and had been granted the status of autopragia this meant that the village could collect its own public taxes and deliver them directly to the imperial treasury aphrodito was not under the jurisdiction of the pagarch stationed in antaeopolis who handled the public taxes for the rest of the nome dioscoruss petition and explanation to the imperial palace described the pagarchs violations of their special tax status including theft of the collected tax money the communications to constantinople seem to have had little effect and in 551 three years after the death of theodora dioscorus travelled with a contingency of aphroditans to constantinople to present the problem to the emperor directly dioscorus may have spent three years in the capital of the byzantine empire in poetry the city was very active not only was agathias now writing there but also'</li></ul> | | 1 | <ul><li>'a vortex ring also called a toroidal vortex is a torusshaped vortex in a fluid that is a region where the fluid mostly spins around an imaginary axis line that forms a closed loop the dominant flow in a vortex ring is said to be toroidal more precisely poloidalvortex rings are plentiful in turbulent flows of liquids and gases but are rarely noticed unless the motion of the fluid is revealed by suspended particles — as in the smoke rings which are often produced intentionally or accidentally by smokers fiery vortex rings are also a commonly produced trick by fire eaters visible vortex rings can also be formed by the firing of certain artillery in mushroom clouds and in microburstsa vortex ring usually tends to move in a direction that is perpendicular to the plane of the ring and such that the inner edge of the ring moves faster forward than the outer edge within a stationary body of fluid a vortex ring can travel for relatively long distance carrying the spinning fluid with it in a typical vortex ring the fluid particles move in roughly circular paths around an imaginary circle the core that is perpendicular to those paths as in any vortex the velocity of the fluid is roughly constant except near the core so that the angular velocity increases towards the core and most of the vorticity and hence most of the energy dissipation is concentrated near itunlike a sea wave whose motion is only apparent a moving vortex ring actually carries the spinning fluid along just as a rotating wheel lessens friction between a car and the ground the poloidal flow of the vortex lessens the friction between the core and the surrounding stationary fluid allowing it to travel a long distance with relatively little loss of mass and kinetic energy and little change in size or shape thus a vortex ring can carry mass much further and with less dispersion than a jet of fluid that explains for instance why a smoke ring keeps traveling long after any extra smoke blown out with it has stopped and dispersed these properties of vortex rings are exploited in the vortex ring gun for riot control and vortex ring toys such as the air vortex cannons the formation of vortex rings has fascinated the scientific community for more than a century starting with william barton rogers who made sounding observations of the formation process of air vortex rings in air air rings in liquids and liquid rings in liquids in particular william barton rogers made use of the simple experimental method of letting a drop of liquid fall on a free liquid surface a falling colored drop of liquid such as milk or dyed water will inevitably form a vortex ring at the interface due to the surface tension a method proposed by g i taylor to generate a vortex ring is'</li><li>'additional parameters can be input the wgplnf htplnf and vtplnf namelists define the wing horizontal tail and vertical tail respectively the basic parameters such as root chord tip chord halfspan twist dihedral and sweep are input digital datcom also accepts wing planforms which change geometry along the span such as the f4 phantom ii which had 15 degrees of outboard dihedral canards can also be analyzed in digital datcom the canard must be specified as the forward lifting surface ie wing and the wing as the aft lift surface for airfoil designations most traditional naca 4 5 and 6 airfoils can be specified in digital datcom additionally custom airfoils can be input using the appropriate namelists also twin vertical tails can be designated in digital datcom but not twin booms using the symflp and asyflp namelists flaps elevators and ailerons can be defined digital datcom allows a multitude of flap types including plain singleslotted and fowler flaps up to 9 flap deflections can be analyzed at each machaltitude combination unfortunately the rudder is not implemented in digital datcom digital datcom also offers an automated aircraft trim function which calculates elevator deflections needed to trim the aircraft other digital datcom inputs include power effects propeller and jet ground effects trim tabs and experimental data the exprxx namelist allows a user to use experimental data such as coefficient of lift coefficient of drag etc in lieu of the data digital datcom produces in the intermediate steps of its component buildup all dimensions are taken in feet and degrees unless specified otherwise digital datcom provides commands for outputting the dynamic derivatives damp as well as the stability coefficients of each components build digital datcom produces a copious amount of data for the relatively small amount of inputs it requires by default only the data for the aircraft is output but additional configurations can be output body alone wing alone horizontal tail alone vertical tail alone wingbody configuration bodyhorizontal tail configuration bodyvertical tail configuration wingbodyhorizontal tail configuration wingbodyvertical tail configuration wingbodyhorizontal tailvertical tail configurationfor each configuration stability coefficients and derivatives are output at each angle of attack specified the details of this output are defined in section 6 of the usaf digital datcom manual volume i the basic output includes cl lift coefficient cd drag coefficient cm pitching moment coefficient cn normal force coefficient ca axial force coefficient clα lift curve slope derivative of lift coefficient with respect to angle of attack cmα pitching moment curve slope derivative of pitching moment coefficient with respect to'</li><li>'textfluctuationquad textandquad vyoverline vyvy and similarly for temperature t t t ′ and pressure p p p ′ where the primed quantities denote fluctuations superposed to the mean this decomposition of a flow variable into a mean value and a turbulent fluctuation was originally proposed by osborne reynolds in 1895 and is considered to be the beginning of the systematic mathematical analysis of turbulent flow as a subfield of fluid dynamics while the mean values are taken as predictable variables determined by dynamics laws the turbulent fluctuations are regarded as stochastic variables the heat flux and momentum transfer represented by the shear stress τ in the direction normal to the flow for a given time are q v y ′ ρ c p t ′ [UNK] experimental value − k turb ∂ t [UNK] ∂ y τ − ρ v y ′ v x ′ [UNK] [UNK] experimental value μ turb ∂ v [UNK] x ∂ y displaystyle beginalignedqunderbrace vyrho cpt textexperimental valuektextturbfrac partial overline tpartial ytau underbrace rho overline vyvx textexperimental valuemu textturbfrac partial overline vxpartial yendaligned where cp is the heat capacity at constant pressure ρ is the density of the fluid μturb is the coefficient of turbulent viscosity and kturb is the turbulent thermal conductivity richardsons notion of turbulence was that a turbulent flow is composed by eddies of different sizes the sizes define a characteristic length scale for the eddies which are also characterized by flow velocity scales and time scales turnover time dependent on the length scale the large eddies are unstable and eventually break up originating smaller eddies and the kinetic energy of the initial large eddy is divided into the smaller eddies that stemmed from it these smaller eddies undergo the same process giving rise to even smaller eddies which inherit the energy of their predecessor eddy and so on in this way the energy is passed down from the large scales of the motion to smaller scales until reaching a sufficiently small length scale such that the viscosity of the fluid can effectively dissipate the kinetic energy into internal energy in his original theory of 1941 kolmogorov postulated that for very high reynolds numbers the smallscale turbulent motions are statistically isotropic ie no preferential spatial direction could be discerned in general the large scales of a flow are not isotropic since they are determined by the particular geometrical features of the boundaries the size characterizing the large scales will'</li></ul> | | 34 | <ul><li>'or include the use of modern digital technologies many incorporate key components of active learning blended learning is a learning program in which a student learns at least in part through delivery of content and instruction via digital and online media with greater student control over time place path or pace than with traditional learning personalized learning is an educational strategy that offers pedagogy curriculum and learning environments to meet the individual students needs learning preferences and specific interests it also encompasses differentiated instruction that supports student progress based on mastery of specific subjects or skills21st century skills are a series of higherorder skills abilities and learning dispositions that have been identified as being required content and outcomes for success in 21st century society and workplaces by educators business leaders academics and governmental agencies these skills include core subjects the three rs 21st century content collaboration communication creativity critical thinking information and communication technologies ict literacy life skills and 21st century assessments digital literacy is becoming critical to successful learning for mobile and personal technology is transforming learning environments and workplaces alike it allows learning — including research collaboration creating writing production and presentation — to occur almost anywhere its robust tools support creativity of thought — through collaboration generation and production that does not require manual dexterity it fosters personalization of learning spaces by teachers and students which both supports the learning activity directly as well as indirectly through providing a greater feeling of ownership and relevancy a conducive classroom climate is one that is optimal for teaching and learning and where students feel safe and nurtured such classroom climate creations include modelling fairness and justicethe tone set by the teacher plays an important role in establishing expectations about respectful behaviour in the classroom a teacher who is calm fair and transparent about expectations and conduct serves as a model for students this includes establishing clear and appropriate consequences for breaking classroom and school rules ensuring that they are just proportional and paired with positive reinforcement positive engagement opportunities for adolescentsadolescents bring creativity enthusiasm and a strong sense of natural justice to their learning and play where learners are given meaningful opportunities to provide creative and constructive input into lesson planning and school governance processes expected benefits include increased engagement the development of skills in planning problemsolving group work and communication and a sense of pride in school activities and their own learning experience in addition finding the right choice structure for student engagement ensures these benefits overly complex choices can result in negative or no outcome in learning thoughtful classroom setupphysical classroom should be arranged so that students can work independently and easily arrange their desks for group work for example having an open space area conducive to teamwork teachers can also identify open areas outside of the classroom that could work for activities and group'</li><li>'make offering higherlevel courses such as ap classes less feasible or if there is not enough student interest to warrant offering the subject fully online courses involve a digital teacher who has many digital students with no inclass or facetoface time these courses can be facilitated either within a school or made accessible to homeschool or abroad students many virtual school options receive at least partial funding from state education initiatives and are monitored by state educational committees florida virtual school is funded through the florida education finance program fefp and is free to florida residents flvs is governed by a board of trustees appointed by the governor and its performance is monitored by the commissioner of education and reported to the state board of education and legislaturethere is much debate over the efficacy of virtual school options the consensus on blended education where students receive facetoface instruction from teachers and the online portions are only conducted in partial time is largely positive blended learning is credited with allowing students to take some agency with the pace of learning something that would not otherwise be available to them in a traditional classroom it allows students to make meaningful decisions about their learning and sets a basis for lifelong selfmotivation and learning the use of new technologies in classrooms also allows students to keep pace with innovations in learning technologies to expand the pedagogical toolset available to them such as messageboards and videos and to have instantaneous feedback and evaluation however in fully online courses the benefits of online learning are less clear as reported in one study about online mathematics for grade 8 students while more advanced students may excel in online courses the students who need the most help may suffer disproportionately to their peers when compared to traditional facetoface courses it would appear that onlineonly courses exacerbate difficulties for students with difficulties while allowing more advanced students the agency desired to excel in individual learning digital technology platforms dtp are now being implemented in numerous classrooms in order to facilitate digital learning higher education digital pedagogy is also used at the undergraduate level in varying ways including the use of digital tools for assignments hybrid or fully online courses and opencollaborative online learning digital mapping one increasingly common tool in the undergraduate classroom is digital mapping in digital mapping students use visual maps made with software like esri and arcgis to aid their work courses are typically interactive project focused and designed to for students with varied levels of skills cartographic fundamentals are taught to students through a scaffolded curriculum that combines both theory and technical skills courses also familiarize students with the practical applications of new technologies such as gps and kml scripting online courses digital peda'</li><li>'the united states hart didnt think millers introduction would help the book and approached margaret mead who refused on the grounds of neills connection with reich several months later psychoanalyst and sociologist erich fromm agreed to the project and found consensus with neill and the publisher fromms introduction placed summerhill in a history of backlash against progressive education and claimed that the perverted implementation of child freedom was more at fault than the idea of child freedom itself he wrote that summerhill was one of few schools that provided education without fear or hidden coercion and that it carried the goals of the western humanistic tradition reason love integrity and courage fromm also highlighted adult confusion about nonauthoritarianism and how they mistook coercion for genuine freedoma revised edition was edited by albert lamb and released by st martins press as summerhill school a new view of childhood in 1993 summerhill is a s neills aphoristic and anecdotal account of his famous early progressive school experiment in england founded in the 1920s summerhill school the books intent is to demonstrate the origins and effects of unhappiness and then show how to raise children to avoid this unhappiness it is an affirmation of the goodness of the child summerhill is the story of summerhill schools origins its programs and pupils how they live and are affected by the program and neills own educational philosophy it is split into seven chapters that introduce the school and discuss parenting sex morality and religion childrens problems parents problems and questions and answersthe school is run as a democracy with students deciding affairs that range from the curriculum to the behavior code lessons are noncompulsory neill emphasizes selfregulation personal responsibility freedom from fear freedom in sex play and loving understanding over moral instruction or force in his philosophy all attempts to mold children are coercive in nature and therefore harmful caretakers are advised to trust in the natural process and let children selfregulate such that they live by their own rules and consequently treat with the highest respect the rights of others to live by their own rules neills selfregulation constitutes a childs right to live freely without outside authority in things psychic and somatic — that children eat and come of age when they want are never hit and are always loved and protected children can do as they please until their actions affect others in an example a student can skip french class to play music but cannot disruptively play music during the french class against the popular image of go as you please schools summerhill has many rules however they are decided at a schoolwide meeting where students and'</li></ul> | | 40 | <ul><li>'##ear and ε a m e r displaystyle boldsymbol varepsilon amer are shown to be full supercategories of various wellknown categories including the category s t o p displaystyle boldsymbol stop of symmetric topological spaces and continuous maps and the category m e t ∞ displaystyle boldsymbol metinfty of extended metric spaces and nonexpansive maps the notation a [UNK] b displaystyle boldsymbol ahookrightarrow boldsymbol b reads category a displaystyle boldsymbol a is embedded in category b displaystyle boldsymbol b the categories ε a m e r displaystyle boldsymbol varepsilon amer and ε a n e a r displaystyle boldsymbol varepsilon anear are supercategories for a variety of familiar categories shown in fig 3 let ε a n e a r displaystyle boldsymbol varepsilon anear denote the category of all ε displaystyle varepsilon approach nearness spaces and contractions and let ε a m e r displaystyle boldsymbol varepsilon amer denote the category of all ε displaystyle varepsilon approach merotopic spaces and contractions among these familiar categories is s t o p displaystyle boldsymbol stop the symmetric form of t o p displaystyle boldsymbol top see category of topological spaces the category with objects that are topological spaces and morphisms that are continuous maps between them m e t ∞ displaystyle boldsymbol metinfty with objects that are extended metric spaces is a subcategory of ε a p displaystyle boldsymbol varepsilon ap having objects ε displaystyle varepsilon approach spaces and contractions see also let ρ x ρ y displaystyle rho xrho y be extended pseudometrics on nonempty sets x y displaystyle xy respectively the map f x ρ x [UNK] y ρ y displaystyle fxrho xlongrightarrow yrho y is a contraction if and only if f x ν d ρ x [UNK] y ν d ρ y displaystyle fxnu drho xlongrightarrow ynu drho y is a contraction for nonempty subsets a b ∈ 2 x displaystyle abin 2x the distance function d ρ 2 x × 2 x [UNK] 0 ∞ displaystyle drho 2xtimes 2xlongrightarrow 0infty is defined by d ρ'</li><li>'in mathematics equivariant topology is the study of topological spaces that possess certain symmetries in studying topological spaces one often considers continuous maps f x → y displaystyle fxto y and while equivariant topology also considers such maps there is the additional constraint that each map respects symmetry in both its domain and target space the notion of symmetry is usually captured by considering a group action of a group g displaystyle g on x displaystyle x and y displaystyle y and requiring that f displaystyle f is equivariant under this action so that f g ⋅ x g ⋅ f x displaystyle fgcdot xgcdot fx for all x ∈ x displaystyle xin x a property usually denoted by f x → g y displaystyle fxto gy heuristically speaking standard topology views two spaces as equivalent up to deformation while equivariant topology considers spaces equivalent up to deformation so long as it pays attention to any symmetry possessed by both spaces a famous theorem of equivariant topology is the borsuk – ulam theorem which asserts that every z 2 displaystyle mathbf z 2 equivariant map f s n → r n displaystyle fsnto mathbb r n necessarily vanishes an important construction used in equivariant cohomology and other applications includes a naturally occurring group bundle see principal bundle for details let us first consider the case where g displaystyle g acts freely on x displaystyle x then given a g displaystyle g equivariant map f x → g y displaystyle fxto gy we obtain sections s f x g → x × y g displaystyle sfxgto xtimes yg given by x ↦ x f x displaystyle xmapsto xfx where x × y displaystyle xtimes y gets the diagonal action g x y g x g y displaystyle gxygxgy and the bundle is p x × y g → x g displaystyle pxtimes ygto xg with fiber y displaystyle y and projection given by p x y x displaystyle pxyx often the total space is written x × g y displaystyle xtimes gy more generally the assignment s f displaystyle sf actually does not map to x × y g displaystyle xtimes yg generally since f displaystyle f is equivariant if g ∈ g x displaystyle gin gx the isotropy subgroup then by equivariance we have that g ⋅ f'</li><li>'in formal ontology a branch of metaphysics and in ontological computer science mereotopology is a firstorder theory embodying mereological and topological concepts of the relations among wholes parts parts of parts and the boundaries between parts mereotopology begins in philosophy with theories articulated by a n whitehead in several books and articles he published between 1916 and 1929 drawing in part on the mereogeometry of de laguna 1922 the first to have proposed the idea of a pointfree definition of the concept of topological space in mathematics was karl menger in his book dimensionstheorie 1928 see also his 1940 the early historical background of mereotopology is documented in belanger and marquis 2013 and whiteheads early work is discussed in kneebone 1963 ch 135 and simons 1987 291 the theory of whiteheads 1929 process and reality augmented the partwhole relation with topological notions such as contiguity and connection despite whiteheads acumen as a mathematician his theories were insufficiently formal even flawed by showing how whiteheads theories could be fully formalized and repaired clarke 1981 1985 founded contemporary mereotopology the theories of clarke and whitehead are discussed in simons 1987 2102 and lucas 2000 ch 10 the entry whiteheads pointfree geometry includes two contemporary treatments of whiteheads theories due to giangiacomo gerla each different from the theory set out in the next section although mereotopology is a mathematical theory we owe its subsequent development to logicians and theoretical computer scientists lucas 2000 ch 10 and casati and varzi 1999 ch 45 are introductions to mereotopology that can be read by anyone having done a course in firstorder logic more advanced treatments of mereotopology include cohn and varzi 2003 and for the mathematically sophisticated roeper 1997 for a mathematical treatment of pointfree geometry see gerla 1995 latticetheoretic algebraic treatments of mereotopology as contact algebras have been applied to separate the topological from the mereological structure see stell 2000 duntsch and winter 2004 barry smith anthony cohn achille varzi and their coauthors have shown that mereotopology can be useful in formal ontology and computer science by allowing the formalization of relations such as contact connection boundaries interiors holes and so on mereotopology has been applied also as a tool for qualitative spatialtemporal reasoning with constraint calculi such as the region connection calculus rcc it provides the starting point for the theory of fiat boundaries developed by smith and varzi which grew out of the attempt to distinguish'</li></ul> | | 14 | <ul><li>'blastocyst cavity and fill it with loosely packed cells when the extraembryonic mesoderm is separated into two portions a new gap arises called the gestational sac this new cavity is responsible for detaching the embryo and its amnion and yolk sac from the far wall of the blastocyst which is now named the chorion when the extraembryonic mesoderm splits into two layers the amnion yolk sac and chorion also become doublelayered the amnion and chorion are composed of extraembryonic ectoderm and mesoderm whereas the yolk sac is composed of extraembryonic endoderm and mesoderm by day 13 the connecting stalk a dense portion of extraembryonic mesoderm restrains the embryonic disc in the gestational sac like the amnion the yolk sac is a fetal membrane that surrounds a cavity formation of the definitive yolk sac occurs after the extraembryonic mesoderm splits and it becomes a double layered structure with hypoblastderived endoderm on the inside and mesoderm surrounding the outside the definitive yolk sac contributes greatly to the embryo during the fourth week of development and executes critical functions for the embryo one of which being the formation of blood or hematopoiesis also primordial germ cells are first found in the wall of the yolk sac before primordial germ cell migration after the fourth week of development the growing embryonic disc becomes much larger than the yolk sac and eventually involutes before birth uncommonly the yolk sac may persist as the vitelline duct and cause a congenital out pouching of the digestive tract called meckels diverticulum in the third week gastrulation begins with the formation of the primitive streak gastrulation occurs when pluripotent stem cells differentiate into the three germ cell layers ectoderm mesoderm and endoderm during gastrulation cells of the epiblast migrate towards the primitive streak enter it and then move apart from it through a process called ingression on day 16 epiblast cells that are next to the primitive streak experience epithelialtomesenchymal transformation as they ingress through the primitive streak the first wave of epiblast cells takes over the hypoblast which slowly becomes replaced by new cells that eventually constitute the definitive endoderm the definitive endoderm is'</li><li>'primates are precocial at birth with the exception of humansthe duration of gestation in placental mammals varies from 18 days in jumping mice to 23 months in elephants generally speaking fetuses of larger land mammals require longer gestation periods the benefits of a fetal stage means that young are more developed when they are born therefore they may need less parental care and may be better able to fend for themselves however carrying fetuses exerts costs on the mother who must take on extra food to fuel the growth of her offspring and whose mobility and comfort may be affected especially toward the end of the fetal stage in some instances the presence of a fetal stage may allow organisms to time the birth of their offspring to a favorable season'</li><li>'results in cyclopic embryos characterized by a lack of medial floor plate and ventral forebrain not all nodals result in the formation of mesoectoderm xenopus nodal related 3 xnr3 a divergent member of the tgfβ superfamily induces the expression of the protein xbra the xbra expression pattern in correlation the expression pattern another neuroinducer xlim1 result in the patterning of the organizer in xenopus this signaling in conjuncture with other nodals noggin chordin follistatin and others results in the final patterning of vertebrate central nervous system'</li></ul> | | 6 | <ul><li>'##arrow infty z ± e i σ r ∗ as r ∗ → − ∞ displaystyle zpm eisigma rquad textasquad rrightarrow infty indicating that we have purely outgoing waves with amplitude a ± displaystyle apm and purely ingoing waves at the horizon the problem becomes an eigenvalue problem again because of the relation mentioned between the two problem the spectrum of z displaystyle z and z − displaystyle z are identical and thus it enough to consider the spectrum of z − displaystyle z the problem is simplified by introducing z − exp i [UNK] r ∗ [UNK] d r ∗ displaystyle zexp leftiint rphi drright the nonlinear eigenvalue problem is given by i d [UNK] d r ∗ σ 2 − [UNK] 2 − v − 0 [UNK] − ∞ σ [UNK] ∞ − σ displaystyle ifrac dphi drsigma 2phi 2v0quad phi infty sigma quad phi infty sigma the solution is found to exist only for a discrete set of values of σ displaystyle sigma'</li><li>'site software for analyzing the data is also available nasas alan stern associate administrator for science at nasa headquarters launched a public competition 7 february 2008 closing 31 march 2008 to rename glast in a way that would capture the excitement of glasts mission and call attention to gammaray and highenergy astronomy something memorable to commemorate this spectacular new astronomy mission a name that is catchy easy to say and will help make the satellite and its mission a topic of dinner table and classroom discussionfermi gained its new name in 2008 on 26 august 2008 glast was renamed the fermi gammaray space telescope in honor of enrico fermi a pioneer in highenergy physics nasa designed the mission with a fiveyear lifetime with a goal of ten years of operationsthe key scientific objectives of the fermi mission have been described as to understand the mechanisms of particle acceleration in active galactic nuclei agn pulsars and supernova remnants snr resolve the gammaray sky unidentified sources and diffuse emission determine the highenergy behavior of gammaray bursts and transients probe dark matter eg by looking for an excess of gamma rays from the center of the milky way and early universe search for evaporating primordial micro black holes mbh from their presumed gamma burst signatures hawking radiation componentthe national academies of sciences ranked this mission as a top priority many new possibilities and discoveries are anticipated to emerge from this single mission and greatly expand our view of the universe blazars and active galaxiesstudy energy spectra and variability of wavelengths of light coming from blazars so as to determine the composition of the black hole jets aimed directly at earth whether they are a a combination of electrons and positrons or b only protonsgammaray burstsstudy gammaray bursts with an energy range several times more intense than ever before so that scientists may be able to understand them betterneutron starsstudy younger more energetic pulsars in the milky way than ever before so as to broaden our understanding of stars study the pulsed emissions of magnetospheres so as to possibly solve how they are produced study how pulsars generate winds of interstellar particlesmilky way galaxyprovide new data to help improve upon existing theoretical models of our own galaxygammaray background radiationstudy better than ever before whether ordinary galaxies are responsible for gammaray background radiation the potential for a tremendous discovery awaits if ordinary sources are determined to be irresponsible in which case the cause may be anything from selfannihilating dark matter to entirely new chain reactions among inter'</li><li>'heavy nuclei in models with neutron stars specifically young pulsars or magnetars as the source of extragalactic cosmic rays heavy elements mainly iron are stripped from the surface of the object by the electric field created by the magnetized neutron stars rapid rotation this same electric field can accelerate iron nucleii up to 1020 ev the photodisintegration of the heavy nucleii would produce lighter elements with lower energies matching the observations of the pierre auger observatory in this scenario the cosmic rays accelerated by neutron stars within the milky way could fill in the transition region between galactic cosmic rays produced in supernova remnants and extragalactic cosmic rays ultrahighenergy cosmic ray'</li></ul> | | 17 | <ul><li>'significant meltwater rerouting events occurred in eastern north america though there is still much debate among geologists as to where these events occurred they likely took place when the ice sheet receded from the adirondack mountains and the st lawrence lowlands first glacial lake iroquois drained to the atlantic in catastrophic hudson valley releases as the receding ice sheet dam failed and reestablished itself in three jokulhlaups evidence of the scale of the meltwater discharge down the hudson valley includes deeply incised sediments in the valley large sediment deposit lobes on the continental shelf and glacial erratic boulders greater than 2 metres in diameter on the outer shelf later when the st lawrence valley was deglaciated glacial lake candona drained to the north atlantic with subsequent drainage events routed through the champlain sea and st lawrence valley this surge of meltwater to the north atlantic by jokulhlaup about 13350 years ago is believed to have triggered the reduction in thermohaline circulation and the shortlived northern hemisphere intraallerød cold period finally lake agassiz was an immense glacial lake located in the center of north america fed by glacial runoff at the end of the last glacial period its area was larger than all of the modern great lakes combined and it held more water than contained by all lakes in the world today it drained in a series of events between 13000 bp and 8400 bp also into the pacific ocean large drainage events took place through the columbia river gorge dubbed the missoula floods since 2011 periodic glacial floods have occurred from the suicide basin through the mendenhall glacier in juneau alaska on 7 february 2021 part of nanda devi glacier broke away in the 2021 uttarakhand glacier burst triggering outburst flood sweeping away a power plant more than 150 people were feared dead around 9 500 bc the baltic ice lake was tapped on water as the ice front retreated north of mount billingen helgi bjornsson subglacial lakes and jokulhlaups in iceland global and planetary change 35 2002 255 – 271'</li><li>'a moraine is any accumulation of unconsolidated debris regolith and rock sometimes referred to as glacial till that occurs in both currently and formerly glaciated regions and that has been previously carried along by a glacier or ice sheet it may consist of partly rounded particles ranging in size from boulders in which case it is often referred to as boulder clay down to gravel and sand in a groundmass of finelydivided clayey material sometimes called glacial flour lateral moraines are those formed at the side of the ice flow and terminal moraines were formed at the foot marking the maximum advance of the glacier other types of moraine include ground moraines tillcovered areas forming sheets on flat or irregular topography and medial moraines moraines formed where two glaciers meet the word moraine is borrowed from french moraine mɔʁɛn which in turn is derived from the savoyard italian morena mound of earth morena in this case was derived from provencal morre snout itself from vulgar latin murrum rounded object the term was introduced into geology by horace benedict de saussure in 1779 moraines are landforms composed of glacial till deposited primarily by glacial ice glacial till in turn is unstratified and unsorted debris ranging in size from siltsized glacial flour to large boulders the individual rock fragments are typically subangular to rounded in shape moraines may be found on the glaciers surface or deposited as piles or sheets of debris where the glacier has melted moraines may form through a number of processes depending on the characteristics of sediment the dynamics on the ice and the location on the glacier in which the moraine is formed moraine forming processes may be loosely divided into passive and activepassive processes involve the placing of chaotic supraglacial sediments onto the landscape with limited reworking typically forming hummocky moraines these moraines are composed of supraglacial sediments from the ice surfaceactive processes form or rework moraine sediment directly by the movement of ice known as glaciotectonism these form push moraines and thrustblock moraines which are often composed of till and reworked proglacial sedimentmoraine may also form by the accumulation of sand and gravel deposits from glacial streams emanating from the ice margin these fan deposits may coalesce to form a long moraine bank marking the ice margin several processes may combine to form and rework a single moraine and most moraines record a continuum of processes reworking of moraines may lead to the formation of placer deposits of gold as is'</li><li>'marine isotope stages mis marine oxygenisotope stages or oxygen isotope stages ois are alternating warm and cool periods in the earths paleoclimate deduced from oxygen isotope data derived from deep sea core samples working backwards from the present which is mis 1 in the scale stages with even numbers have high levels of oxygen18 and represent cold glacial periods while the oddnumbered stages are lows in the oxygen18 figures representing warm interglacial intervals the data are derived from pollen and foraminifera plankton remains in drilled marine sediment cores sapropels and other data that reflect historic climate these are called proxies the mis timescale was developed from the pioneering work of cesare emiliani in the 1950s and is now widely used in archaeology and other fields to express dating in the quaternary period the last 26 million years as well as providing the fullest and best data for that period for paleoclimatology or the study of the early climate of the earth representing the standard to which we correlate other quaternary climate records emilianis work in turn depended on harold ureys prediction in a paper of 1947 that the ratio between oxygen18 and oxygen16 isotopes in calcite the main chemical component of the shells and other hard parts of a wide range of marine organisms should vary depending on the prevailing water temperature in which the calcite was formedover 100 stages have been identified currently going back some 6 million years and the scale may in future reach back up to 15 mya some stages in particular mis 5 are divided into substages such as mis 5a with 5 a c and e being warm and b and d cold a numeric system for referring to horizons events rather than periods may also be used with for example mis 55 representing the peak point of mis 5e and 551 552 etc representing the peaks and troughs of the record at a still more detailed level for more recent periods increasingly precise resolution of timing continues to be developed in 1957 emiliani moved to the university of miami to have access to coredrilling ships and equipment and began to drill in the caribbean and collect core data a further important advance came in 1967 when nicholas shackleton suggested that the fluctuations over time in the marine isotope ratios that had become evident by then were caused not so much by changes in water temperature as emiliani thought but mainly by changes in the volume of icesheets which when they expanded took up the lighter oxygen16 isotope in preference to the heavier oxygen18 the cycles in the isotope ratio were found to correspond to terrestrial evidence of'</li></ul> | | 20 | <ul><li>'medieval studies is the academic interdisciplinary study of the middle ages a historian who studies medieval studies is called a medievalist the term medieval studies began to be adopted by academics in the opening decades of the twentieth century initially in the titles of books like g g coultons ten medieval studies 1906 to emphasize a greater interdisciplinary approach to a historical subject in american and european universities the term provided a coherent identity to centres composed of academics from a variety of disciplines including archaeology art history architecture history literature and linguistics the institute of mediaeval studies at st michaels college of the university of toronto became the first centre of this type in 1929 it is now the pontifical institute of mediaeval studies pims and is part of the university of toronto it was soon followed by the medieval institute at the university of notre dame in indiana which was founded in 1946 but whose roots go back to the establishment of a program of medieval studies in 1933 as with many of the early programs at roman catholic institutions it drew its strengths from the revival of medieval scholastic philosophy by such scholars as etienne gilson and jacques maritain both of whom made regular visits to the university in the 1930s and 1940s these institutions were preceded in the united kingdom in 1927 by the establishment of the idiosyncratic department of anglosaxon norse and celtic at the university of cambridge although anglosaxon norse and celtic was limited geographically to the british isles and scandinavia and chronologically mostly the early middle ages it promoted the interdisciplinarity characteristic of medieval studies and many of its graduates were involved in the later development of medieval studies programmes elsewhere in the ukwith university expansion in the late 1960s and early 1970s encouraging interdisciplinary cooperation similar centres were established in england at university of reading 1965 at university of leeds 1967 and the university of york 1968 and in the united states at fordham university 1971 a more recent wave of foundations perhaps helped by the rise of interest in things medieval associated with neomedievalism include centres at kings college london 1988 the university of bristol 1994 the university of sydney 1997 and bangor university 2005medieval studies is buoyed by a number of annual international conferences which bring together thousands of professional medievalists including the international congress on medieval studies at kalamazoo mi us and the international medieval congress at the university of leeds there are a number of journals devoted to medieval studies including mediaevalia comitatus viator traditio medieval worlds journal of medieval history journal of medieval military history and speculum an organ of the medieval academy of america founded in 1925 and based in cambridge massachusetts another part of the infrastructure of the field is the international'</li><li>'navidad on the island of hispaniola leaving behind some spanish colonists and traders columbus reports he also left behind a caravel — evidently covering up the loss of his flagship the santa maria he reports that la navidad is located near reported gold mines and is a wellplaced entrepot for the commerce that will doubtlessly soon be opened with the great khan gran can on the mainland he speaks of a local king near navidad whom he befriended and treated him as a brother y grand amistad con el rey de aquella tierra en tanto grado que se preciava de me lhamar e tener por hermano — almost certainly a reference to guacanagarix cacique of marienin the copiador version but not the printed editions columbus alludes to the treachery of one from palos uno de palos who made off with one of the ships evidently a complaint about martin alonso pinzon the captain of the pinta although this portion of the copiador manuscript is damaged and hard to read the copiador version also mentions other points of personal friction not contained in the printed editions eg references to the ridicule columbus suffered in the spanish court prior to his departure his bowing to pressure to use large ships for ocean navigation rather than the small caravels he preferred which would have been more convenient for exploring at the end of his printed letter columbus promises that if the catholic monarchs back his bid to return with a larger fleet he will bring back a lot of gold spices cotton repeatedly referenced in the letter mastic gum aloe slaves and possibly rhubarb and cinnamon of which i heard about here columbus ends the letter urging their majesties the church and the people of spain to give thanks to god for allowing him to find so many souls hitherto lost ready for conversion to christianity and eternal salvation he also urges them to give thanks in advance for all the temporal goods found in abundance in the indies that shall soon be made available to castile and the rest of christendom the copiador version but not the printed spanish or latin editions also contains a somewhat bizarre detour into messianic fantasy where columbus suggests the monarchs should use the wealth of the indies to finance a new crusade to conquer jerusalem columbus himself offering to underwrite a large army of ten thousand cavalry and hundred thousand infantry to that end the sign off varies between editions the printed spanish letter is dated aboard the caravel on the canary islands on february 15 1493 fecha en la caravela sobra las yslas'</li><li>'##cracies the 1980s saw a general retreat for the communist bloc the soviet – afghan war 1979 – 1989 is often called the soviet unions vietnam war in comparison to the american defeat being an expensive and ultimately unsuccessful war and occupation more importantly the intervening decades had seen that eastern europe was unable to compete economically with western europe which undermined the promise of communist abundance compared to capitalist poverty the western capitalist economies had proven wealthier and stronger which made matching the soviet defense budget to the american one strain limited resources the paneuropean picnic in 1989 then set in motion a peaceful chain reaction with the subsequent fall of the berlin wall the revolutions of 1989 saw many countries of eastern europe throw off their communist governments and the ussr declined to invade to reestablish them east and west germany were reunified client state status for many states ended as there was no conflict left to fund the malta summit on 3 december 1989 the failure of the august coup by soviet hardliners and the formal dissolution of the soviet union on 26 december 1991 sealed the end of the cold war the end of the cold war left the united states the worlds sole superpower communism seemed discredited while china remained an officially communist state deng xiaopings economic reforms and socialism with chinese characteristics allowed for the growth of a capitalist private sector in china in russia president boris yeltsin pursued a policy of privatization spinning off former government agencies into private corporations attempting to handle budget problems inherited from the ussr the end of soviet foreign aid caused a variety of changes in countries previously part of the eastern bloc many officially became democratic republics though some were more accurately described as authoritarian or oligarchic republics and oneparty states many western commentators treated the development optimistically it was thought the world was steadily progressing toward free liberal democracies south africa no longer able to attract western support by claiming to be anticommunist ended apartheid in the early 1990s and many eastern european countries switched to stable democracies while some americans had anticipated a peace dividend from budget cuts to the defense department these cuts were not as large as some had hoped the european economic community evolved into the european union with the signing of the maastricht treaty in 1993 which integrated europe across borders to a new degree international coalitions continued to have a role the gulf war saw a large international coalition undo baathist iraqs annexation of kuwait but other police style actions were less successful somalia and afghanistan descended into long bloody civil wars for almost the entirety of the decade somali civil war afghan civil war 1992 – 1996 afghan civil war 1996 – 2001 russia fought a brutal war in che'</li></ul> | | 29 | <ul><li>'fossil biomarkers of green sulfur bacteria indicates that this process could have played a role in that mass extinction event and possibly other extinction events the trigger for these mass extinctions appears to be a warming of the ocean caused by a rise of carbon dioxide levels to about 1000 parts per million reduced oxygen levels are expected to lead to increased seawater concentrations of redoxsensitive metals the reductive dissolution of iron – manganese oxyhydroxides in seafloor sediments under lowoxygen conditions would release those metals and associated trace metals sulfate reduction in such sediments could release other metals such as barium when heavymetalrich anoxic deep water entered continental shelves and encountered increased o2 levels precipitation of some of the metals as well as poisoning of the local biota would have occurred in the late silurian midpridoli event increases are seen in the fe cu as al pb ba mo and mn levels in shallowwater sediment and microplankton this is associated with a marked increase in the malformation rate in chitinozoans and other microplankton types likely due to metal toxicity similar metal enrichment has been reported in sediments from the midsilurian ireviken event sulfidic or euxinic conditions which exist today in many water bodies from ponds to various landsurrounded mediterranean seas such as the black sea were particularly prevalent in the cretaceous atlantic but also characterised other parts of the world ocean in an icefree sea of these supposed supergreenhouse worlds oceanic waters were as much as 200 metres 660 ft higher in some eras during the timespans in question the continental plates are believed to have been well separated and the mountains as they are known today were mostly future tectonic events — meaning the overall landscapes were generally much lower — and even the half supergreenhouse climates would have been eras of highly expedited water erosion carrying massive amounts of nutrients into the world oceans fuelling an overall explosive population of microorganisms and their predator species in the oxygenated upper layers detailed stratigraphic studies of cretaceous black shales from many parts of the world have indicated that two oceanic anoxic events oaes were particularly significant in terms of their impact on the chemistry of the oceans one in the early aptian 120 ma sometimes called the selli event or oae 1a after the italian geologist raimondo selli 1916 – 1983 and another at the cenomanian – turonian boundary 93 ma also called the bonarelli event or oae2 after the'</li><li>'rock or facies mainland greece thus consists geologically of strips or isopic zones “ same facies ” or “ tectonostratigraphic units ” of distinct rock trending from nw to sethe regime through the oligocene evidenced in the zone structure of greece was compressional the subduction was in the trench and its forearc was the edge of the overriding plate the classical model subsequently a superimposed extensional regime moved the subduction and the trench back but not necessarily at the same rate nor did they always necessarily coincide the former reverse faults were converted to normal and many new extensional lineaments tectonic features such as pullapart basins appeared the extensional regime the start line of the extension was a transform fault that has been called the eastern mediterranean north transform emnt it trended from the sw corner of anatolia in a nw direction through the future center of the forearc across central greece well north of the future gulf of corinth at some point the new forces began to pull apart the former strikeslip fault north of anatolia merging it with the subduction and pulling out a separate forearc from the previously docked coastal ridge consisting of strips of the outer hellenides in the ionian and some other zones cw rotation of the subduction zone slab rollback moved the subduction zone away from but not parallel to the continental coastline a bathymetric view of the current configuration suggests that an angle was generated on the west by rotating the subduction zone away from the original strike of the emnt as a baseline in the cw direction about a vertex or pole on the coast of apulia italy a triangle was formed of the base line the subduction line and a chord across the arc of the subtended angle currently the vertex opposite the base line does not extend as far as the chord the east leg curves shortening the west leg the curvature demonstrates that the east leg is not as rigid as the west plate consumption varies slightly over the west leg but falls off sharply over the east it is hypothesized that the consumption on the east is expressed by short segments cutting across the scarps which nevertheless have slip vectors aligned with the western vectors over the entire arc in a wheelspoke pattern that is the azimuths of the vectors decrease regularly from west to eastthough often shown crossing the adriatic on maps the subduction does not actually do so the stress of the rotation was too great for the rock the subducting plate broke along the ktf and also along the plato – strabo trench area'</li><li>'##00 square kilometres 320000 sq mi surpassed the chagos marine protected area as the worlds largest contiguous marine reserve until the august 2016 expansion of the papahanaumokuakea marine national monument in the united states to 1510000 square kilometres 580000 sq mi in january 2016 the uk government announced the intention to create a marine protected area around ascension island the protected area will be 234291 square kilometres 90460 sq mi half of which will be closed to fishingon 13 november 2020 it was announced that the 687247 square kilometres 265348 sq mi of the waters surrounding the tristan da cunha and neighboring islands will become a marine protection zone the move will make the zone the largest notake zone in the atlantic and the fourth largest on the planet the great barrier reef marine park in queensland australia the ligurian sea cetacean sanctuary in the seas of italy monaco and france the dry tortugas national park in the florida keys usa the papahanaumokuakea marine national monument in hawaii the phoenix islands protected area kiribati the channel islands national marine sanctuary in california usa the chagos marine protected area in the indian ocean the wadden sea bordering the north sea in the netherlands germany and denmark the ascension island marine protected area which encompasses 100 the islands exclusive economic zone the following shows a list of countries and their marine protected areas as percentage of their territorial waters click show to expand managers and scientists use geographic information systems and remote sensing to map and analyze mpas noaa coastal services center compiled an inventory of gisbased decisionsupport tools for mpas the report focuses on gis tools with the highest utility for mpa processes remote sensing uses advances in aerial photography image capture popup archival satellite tags satellite imag'</li></ul> | | 25 | <ul><li>'the infinite product also if [UNK] n 1 ∞ p n 2 textstyle sum n1infty pn2 is convergent then the sum [UNK] n 1 ∞ p n textstyle sum n1infty pn and the product [UNK] n 1 ∞ 1 p n textstyle prod n1infty 1pn are either both convergent or both divergent one important result concerning infinite products is that every entire function fz that is every function that is holomorphic over the entire complex plane can be factored into an infinite product of entire functions each with at most a single root in general if f has a root of order m at the origin and has other complex roots at u1 u2 u3 listed with multiplicities equal to their orders then f z z m e [UNK] z [UNK] n 1 ∞ 1 − z u n exp z u n 1 2 z u n 2 [UNK] 1 λ n z u n λ n displaystyle fzzmephi zprod n1infty left1frac zunrightexp leftlbrace frac zunfrac 12leftfrac zunright2cdots frac 1lambda nleftfrac zunrightlambda nrightrbrace where λn are nonnegative integers that can be chosen to make the product converge and [UNK] z displaystyle phi z is some entire function which means the term before the product will have no roots in the complex plane the above factorization is not unique since it depends on the choice of values for λn however for most functions there will be some minimum nonnegative integer p such that λn p gives a convergent product called the canonical product representation this p is called the rank of the canonical product in the event that p 0 this takes the form f z z m e [UNK] z [UNK] n 1 ∞ 1 − z u n displaystyle fzzmephi zprod n1infty left1frac zunright this can be regarded as a generalization of the fundamental theorem of algebra since for polynomials the product becomes finite and [UNK] z displaystyle phi z is constant in addition to these examples the following representations are of special note the last of these is not a product representation of the same sort discussed above as ζ is not entire rather the above product representation of ζz converges precisely for rez 1 where it is an analytic function by techniques of analytic continuation this function can be extended uniquely to an analytic function still denoted ζz on the whole complex plane except at'</li><li>'in mathematics a continued fraction is an expression obtained through an iterative process of representing a number as the sum of its integer part and the reciprocal of another number then writing this other number as the sum of its integer part and another reciprocal and so on in a finite continued fraction or terminated continued fraction the iterationrecursion is terminated after finitely many steps by using an integer in lieu of another continued fraction in contrast an infinite continued fraction is an infinite expression in either case all integers in the sequence other than the first must be positive the integers a i displaystyle ai are called the coefficients or terms of the continued fractionit is generally assumed that the numerator of all of the fractions is 1 if arbitrary values andor functions are used in place of one or more of the numerators or the integers in the denominators the resulting expression is a generalized continued fraction when it is necessary to distinguish the first form from generalized continued fractions the former may be called a simple or regular continued fraction or said to be in canonical form continued fractions have a number of remarkable properties related to the euclidean algorithm for integers or real numbers every rational number p displaystyle p q displaystyle q has two closely related expressions as a finite continued fraction whose coefficients ai can be determined by applying the euclidean algorithm to p q displaystyle pq the numerical value of an infinite continued fraction is irrational it is defined from its infinite sequence of integers as the limit of a sequence of values for finite continued fractions each finite continued fraction of the sequence is obtained by using a finite prefix of the infinite continued fractions defining sequence of integers moreover every irrational number α displaystyle alpha is the value of a unique infinite regular continued fraction whose coefficients can be found using the nonterminating version of the euclidean algorithm applied to the incommensurable values α displaystyle alpha and 1 this way of expressing real numbers rational and irrational is called their continued fraction representation the term continued fraction may also refer to representations of rational functions arising in their analytic theory for this use of the term see pade approximation and chebyshev rational functions consider for example the rational number 41593 which is around 44624 as a first approximation start with 4 which is the integer part 41593 4 4393 the fractional part is the reciprocal of 9343 which is about 21628 use the integer part 2 as an approximation for the reciprocal to obtain a second approximation of 4 12 45 now 9343 2 743 the remaining fractional part 743 is the reciprocal of 437 and 437 is around'</li><li>'participants as they arrive and contain short papers on each conference talk the iwota proceedings follow mathematics conference tradition and contain a modest number of papers and are published several years after the conference iwota has received support from many sources including the national science foundation the london mathematical society the engineering and physical sciences research council deutsche forschungsgemeinschaft secretaria de estado de investigacion desarrollo e innovacion spain australian mathematical sciences institute national board for higher mathematics international centre for theoretical physics indian statistical institute korea research foundation united statesindia science technology endowment fund nederlandse organisatie voor wetenschappelijk onderzoek the commission for developing countries of the international mathematical union stichting advancement of mathematics netherlands the national research foundation of south africa and birkhauser publishing ltd iwota is directed by a steering committee which chooses the site for the next meeting elects the chief local organizers and insures the appearance of the enduring themes of iwota the subthemes of an iwota workshop and the lecturers are chosen by the local organizing committee after hearing the steering committees board the board consists of its vice presidents joseph a ball j william helton chair sanne ter horst m a kaashoek igor klep christiane tretter irene sabadini victor vinnikov and hugo j woerdeman in addition past chief organizers who remain active in iwota are members of the steering committee the board governs iwota with consultation and the consent of the full steering committee honorary members of the steering committee elected in 2016 are israel gohberg deceased leiba rodman deceased tsuyoshi ando harry dym ciprian foias deceased heinz langer nikolai nikolski iwota 2024 will be held at university of kent in canterbury united kingdom main organizer is ian wood dates are august 1216 2024 iwota 2025 will be held at university of twente in enschede the netherlands main organizer is felix schwenninger dates are july 1418 2025 the israel gohberg ilasiwota lecture was introduced in august 2016 and honors the legacy of israel gohberg whose research crossed borders between operator theory linear algebra and related fields this lecture is in collaboration with the international linear algebra society ilas this series of lectures will be delivered at iwota and ilas conferences in different years in the approximate ratio twothirds at iwota and onethird at ilas the first three lectures will take place at iwota lancaster uk'</li></ul> | | 30 | <ul><li>'providing a more detailed molecular and genetic understanding of cancer biology than was previously possible and offering hope for the development of new therapeutic strategies gleaned from these insights the cancer genome atlas the cancer genome atlas tcga a collaborative effort between the national cancer institute and the national human genome research institute is an example of a basic research project that is employing some of these new molecular approaches one tcga publication notes the following here we report the interim integrative analysis of dna copy number gene expression and dna methylation aberrations in 206 glioblastomastogether these findings establish the feasibility and power of tcga demonstrating that it can rapidly expand knowledge of the molecular basis of cancer in a cancer research funding announcement made by president obama in september 2009 tcga project is slated to receive 175 million in funding to collect comprehensive gene sequence data on 20000 tissue samples from people with more than 20 different types of cancer in order to help researchers understand the genetic changes underlying cancer new targeted therapeutic approaches are expected to arise from the insights resulting from such studies cancer genome project the cancer genome project at the wellcome trust sanger institute aims to identify sequence variantsmutations critical in the development of human cancers the cancer genome project combines knowledge of the human genome sequence with high throughput mutation detection techniques advances in information technology supporting cancer research such as the ncis cabig project promise to improve data sharing among cancer researchers and accelerate the discovery of new approaches for the detection diagnosis treatment and prevention of cancer ultimately improving patient outcomes researchers are considering ways to improve the efficiency costeffectiveness and overall success rate of cancer clinical trialsincreased participation in rigorously designed clinical trials would increase the pace of research currently about 3 of people with cancer participate in clinical trials more than half of them are patients for whom no other options are left patients who are participating in exploratory trials designed to burnish the researchers resumes or promote a drug rather than to produce meaningful information or in trials that will not enroll enough patients to produce a statistically significant result a major challenge in cancer treatment is to find better ways to specifically target tumors with drugs and chemotherapeutic agents in order to provide a more effective localized dose and to minimize exposure of healthy tissue in other parts of the body to the potentially adverse effects of the treatments the accessibility of different tissues and organs to antitumor drugs contributes to this challenge for example the blood – brain barrier blocks many drugs that may otherwise be effective against brain tumors in november 2009 a new experimental therapeutic approach for treating glioblastoma was published in which'</li><li>'national medical research radiological center of the ministry of health of the russian federation russian фгбу « национальныи медицинскии исследовательскии центр радиологии » нмиц радиологии министерства здравоохранения россиискои федерации is one of the largest oncological and radiological clusters in russia the main institution for radiology a reference center in the field of pathomorphological research radiation diagnostics and therapy in 2014 russias first scientific medical cluster in the field of oncology radiology and urology was established as the national medical research radiological centre of the ministry of health of the russian federation the status of the national centre allowed us to apply all aspects of modern hightech medical care available in the world the purpose of the centre is to unite the efforts of scientists and practitioners in the fight against cancer to create conditions for the introduction of the latest technologies in the field of cancer treatment to ensure the breakthrough of russian science and practice in the creation of nuclear medicinesince 2020 the center becomes as the basic organization for the cis member states in the field of oncology on the basis of the center there are two national registers the cancer register and the national radiation and epidemiological register the centre has a full range of modern diagnostic and complex and combined treatment methods for oncological diseases the сenter obtain various kinds of modern radiation installations including gamma and cyber knives the russian proton therapy complex prometheus introduces advanced technologies such as pipac x – ray surgical methods of treatment brachytherapy treatment of radiation injuriesthe centre is one of the leaders in the field of nuclear medicine development of russian radiopharmaceuticals and introduction of technologies for their use in clinical practice the use of nuclear medicine technologies in combined and complex treatment provides significant advantages in the treatment of both oncological and nononcological diseases nmrrc was established in may 2014 as a joint medical centre bringing together three of russias oldest medical research institutions in moscow and the kaluga region which formed its branches p hertsen moscow oncology research center a tsyb medical radiological research center n lopatkin research institute of urology and interventional radio'</li><li>'biomarkers include mutations on genes kras p53 egfr erbb2 for colorectal esophageal liver and pancreatic cancer mutations of genes brca1 and brca2 for breast and ovarian cancer abnormal methylation of tumor suppressor genes p16 cdkn2b and p14arf for brain cancer hypermethylation of myod1 cdh1 and cdh13 for cervical cancer and hypermethylation of p16 p14 and rb1 for oral cancer diagnosis cancer biomarkers can also be useful in establishing a specific diagnosis this is particularly the case when there is a need to determine whether tumors are of primary or metastatic origin to make this distinction researchers can screen the chromosomal alterations found on cells located in the primary tumor site against those found in the secondary site if the alterations match the secondary tumor can be identified as metastatic whereas if the alterations differ the secondary tumor can be identified as a distinct primary tumor for example people with tumors have high levels of circulating tumor dna ctdna due to tumor cells that have gone through apoptosis this tumor marker can be detected in the blood saliva or urine the possibility of identifying an effective biomarker for early cancer diagnosis has recently been questioned in light of the high molecular heterogeneity of tumors observed by nextgeneration sequencing studies prognosis and treatment predictions another use of biomarkers in cancer medicine is for disease prognosis which take place after an individual has been diagnosed with cancer here biomarkers can be useful in determining the aggressiveness of an identified cancer as well as its likelihood of responding to a given treatment in part this is because tumors exhibiting particular biomarkers may be responsive to treatments tied to that biomarkers expression or presence examples of such prognostic biomarkers include elevated levels of metallopeptidase inhibitor 1 timp1 a marker associated with more aggressive forms of multiple myeloma elevated estrogen receptor er andor progesterone receptor pr expression markers associated with better overall survival in patients with breast cancer her2neu gene amplification a marker indicating a breast cancer will likely respond to trastuzumab treatment a mutation in exon 11 of the protooncogene ckit a marker indicating a gastrointestinal stromal tumor gist will likely respond to imatinib treatment and mutations in the tyrosine kinase domain of egfr1 a marker indicating a patients nonsmallcell lung carcinoma nsclc will likely respond'</li></ul> | | 21 | <ul><li>'##corrhizal fungi that assist in breaking up the porous lava and by these means organic matter and a finer mineral soil accumulate with time such initial stages of soil development have been described on volcanoes inselbergs and glacial moraineshow soil formation proceeds is influenced by at least five classic factors that are intertwined in the evolution of a soil parent material climate topography relief organisms and time when reordered to climate relief organisms parent material and time they form the acronym cropt the physical properties of soils in order of decreasing importance for ecosystem services such as crop production are texture structure bulk density porosity consistency temperature colour and resistivity soil texture is determined by the relative proportion of the three kinds of soil mineral particles called soil separates sand silt and clay at the next larger scale soil structures called peds or more commonly soil aggregates are created from the soil separates when iron oxides carbonates clay silica and humus coat particles and cause them to adhere into larger relatively stable secondary structures soil bulk density when determined at standardized moisture conditions is an estimate of soil compaction soil porosity consists of the void part of the soil volume and is occupied by gases or water soil consistency is the ability of soil materials to stick together soil temperature and colour are selfdefining resistivity refers to the resistance to conduction of electric currents and affects the rate of corrosion of metal and concrete structures which are buried in soil these properties vary through the depth of a soil profile ie through soil horizons most of these properties determine the aeration of the soil and the ability of water to infiltrate and to be held within the soil soil water content can be measured as volume or weight soil moisture levels in order of decreasing water content are saturation field capacity wilting point air dry and oven dry field capacity describes a drained wet soil at the point water content reaches equilibrium with gravity irrigating soil above field capacity risks percolation losses wilting point describes the dry limit for growing plants during growing season soil moisture is unaffected by functional groups or specie richnessavailable water capacity is the amount of water held in a soil profile available to plants as water content drops plants have to work against increasing forces of adhesion and sorptivity to withdraw water irrigation scheduling avoids moisture stress by replenishing depleted water before stress is inducedcapillary action is responsible for moving groundwater from wet regions of the soil to dry areas subirrigation designs eg wicking beds subirrigated planters rely on capillarity to supply water to plant roots capillary action can result in an eva'</li><li>'an ant garden is a mutualistic interaction between certain species of arboreal ants and various epiphytic plants it is a structure made in the tree canopy by the ants that is filled with debris and other organic matter in which epiphytes grow the ants benefit from this arrangement by having a stable framework on which to build their nest while the plants benefit by obtaining nutrients from the soil and from the moisture retained there epiphytes are common in tropical rain forest and in cloud forest an epiphyte normally derives its moisture and nutrients from the air rain mist and dew nitrogenous matter is in short supply and the epiphytes benefit significantly from the nutrients in the ant garden the ant garden is made from carton a mixture of vegetable fibres leaf debris refuse glandular secretions and ant faeces the ants use this material to build their nests among the branches of the trees to shelter the hemipteran insects that they tend in order to feed on their honeydew and to make the pockets of material in which the epiphytes growthe ants harvest seeds from the epiphytic plants and deposit them in the carton material the plants have evolved various traits to encourage ants to disperse their seeds by producing chemical attractants eleven unrelated epiphytes that grow in ant gardens have been found to contain methyl salicylate oil of wintergreen and it seems likely that this compound is an ant attractant species of ant that make gardens include crematogaster carinata camponotus femoratus and solenopsis parabioticus all of which are parabiotic species which routinely share their nests with unrelated species of ant epiphytic plants that they grow include various members of the araceae bromeliaceae cactaceae gesneriaceae moraceae piperaceae and solanaceae epiphytic plants in the genus codonanthopsis including those formerly placed in codonanthe grow almost exclusively in ant gardens often associated with ants of the genus azteca the ant camponotus irritabilis not only plants the seeds of hoya elliptica in planned locations on its carton nest but also prunes the roots to accommodate its nest chambers and fertilises the areas where it wants extra plant growth to occur'</li><li>'in these minerals produce mineral deficiency in experimental animals gontzea and sutzescu 1968 as cited in chavan and kadam 1989 the latter authors state that the sprouting of cereals has been reported to decrease levels of phytic acid similarly shipard 2005 states that enzymes of germination and sprouting can help decrease the detrimental substances such as phytic acid however the amount of phytic acid reduction from soaking is only marginal and not enough to fully counteract its antinutrient effects alfalfa seeds and sprouts contain lcanavanine which can cause lupuslike disease in primates in order to prevent incidents like the 2011 ehec epidemic on 11 march 2013 the european commission issued three new tighter regulations regulation eu no 2082013 requires that the origins of seeds must always be traceable at all stages of processing production and distribution therefore a full description of the seeds or sprouts needs to be kept on record see also article 18 of regulation ec no 1782002 regulation eu no 2092013 amends regulation ec no 20732005 in respect of microbiological criteria for sprouts and the sampling rules for poultry carcasses and fresh poultry meat regulation eu no 2112013 requires that imported sprouts and seeds intended for the production of sprouts have a certificate drawn up in accordance with the model certificate in the annex of the regulation that serves as proof that the production process complies with the general hygiene provisions in part a of annex i to regulation ec no 8522004 and the traceability requirements of implementing regulation eu no 2082013 safron jeremy a 2003 the raw truth the art of preparing living foods berkeley celestial arts isbn 9781587611728 moran leslie 2007 the complete guide to successful sprouting for parrots and everyone else in the family silver springs nv critter connection isbn 9781419684791 cuddeford d 1 september 1989 hydroponic grass in practice 11 5 211 – 214 doi101136inpract115211 s2cid 219216512 nutritional improvement of cereals by fermentation source critical reviews in food science and nutrition chavan jk kadam ss 1989 shipard isabell 2005 how can i grow and use sprouts as living food nambour qld david stewart isbn 9780975825204 kavas a els n 1992 changes in nutritive value of lentils and mung beans during germination chemmikrobiol technol le'</li></ul> | | 16 | <ul><li>'##hyolites with much higher eruption temperatures 850 °c to 1000 °c than normal rhyolites since 1992 the definition of lip has been expanded and refined and remains a work in progress some new definitions of the term lip include large granitic provinces such as those found in the andes mountains of south america and in western north america comprehensive taxonomies have been developed to focus technical discussions in 2008 bryan and ernst refined the definition to narrow it somewhat large igneous provinces are magmatic provinces with areal extents 1×105 km2 igneous volumes 1×105 km3 and maximum lifespans of 50 myr that have intraplate tectonic settings or geochemical affinities and are characterised by igneous pulses of short duration 1 – 5 myr during which a large proportion 75 of the total igneous volume has been emplaced they are dominantly mafic but also can have significant ultramafic and silicic components and some are dominated by silicic magmatism this definition places emphasis on the high magma emplacement rate characteristics of the lip event and excludes seamounts seamount groups submarine ridges and anomalous seafloor crustlip is now frequently used to also describe voluminous areas of not just mafic but all types of igneous rocks subcategorization of lips into large volcanic provinces lvp and large plutonic provinces lpp and including rocks produced by normal plate tectonic processes has been proposed further the minimum threshold to be included as a lip has been lowered to 50000 km2 the working taxonomy focused heavily on geochemistry which will be used to structure examples below is large igneous provinces lip large volcanic provinces lvp large rhyolitic provinces lrps large andesitic provinces laps large basaltic provinces lbps oceanic or continental flood basalts large basaltic – rhyolitic provinces lbrps large plutonic provinces lpp large granitic provinces lgp large mafic plutonic provincesaerally extensive dike swarms sill provinces and large layered ultramafic intrusions are indicators of lips even when other evidence is not now observable the upper basalt layers of older lips may have been removed by erosion or deformed by tectonic plate collisions occurring after the layer is formed this is especially likely for earlier periods such as the paleozoic and proterozoicgiant dyke swarms having lengths over 300 km are a common record of severely eroded lips both radial and linear dyke swarm configurations exist radial swarms with an areal'</li><li>'isostasy greek isos equal stasis standstill or isostatic equilibrium is the state of gravitational equilibrium between earths crust or lithosphere and mantle such that the crust floats at an elevation that depends on its thickness and density this concept is invoked to explain how different topographic heights can exist at earths surface although originally defined in terms of continental crust and mantle it has subsequently been interpreted in terms of lithosphere and asthenosphere particularly with respect to oceanic island volcanoes such as the hawaiian islands although earth is a dynamic system that responds to loads in many different ways isostasy describes the important limiting case in which crust and mantle are in static equilibrium certain areas such as the himalayas and other convergent margins are not in isostatic equilibrium and are not well described by isostatic models the general term isostasy was coined in 1882 by the american geologist clarence dutton in the 17th and 18th centuries french geodesists for example jean picard attempted to determine the shape of the earth the geoid by measuring the length of a degree of latitude at different latitudes arc measurement a party working in ecuador was aware that its plumb lines used to determine the vertical direction would be deflected by the gravitational attraction of the nearby andes mountains however the deflection was less than expected which was attributed to the mountains having lowdensity roots that compensated for the mass of the mountains in other words the lowdensity mountain roots provided the buoyancy to support the weight of the mountains above the surrounding terrain similar observations in the 19th century by british surveyors in india showed that this was a widespread phenomenon in mountainous areas it was later found that the difference between the measured local gravitational field and what was expected for the altitude and local terrain the bouguer anomaly is positive over ocean basins and negative over high continental areas this shows that the low elevation of ocean basins and high elevation of continents is also compensated at depththe american geologist clarence dutton use the word isostasy in 1889 to describe this general phenomenon however two hypotheses to explain the phenomenon had by then already been proposed in 1855 one by george airy and the other by john henry pratt the airy hypothesis was later refined by the finnish geodesist veikko aleksanteri heiskanen and the pratt hypothesis by the american geodesist john fillmore hayfordboth the airyheiskanen and pratthayford hypotheses assume that isostacy reflects a local hydrostatic balance a third hypothesis lithospheric flexure takes into account the rigidity'</li><li>'hypsometry from ancient greek υψος hupsos height and μετρον metron measure is the measurement of the elevation and depth of features of earths surface relative to mean sea levelon earth the elevations can take on either positive or negative below sea level values the distribution is theorised to be bimodal due to the difference in density between the lighter continental crust and denser oceanic crust on other planets within this solar system elevations are typically unimodal owing to the lack of plate tectonics on those bodies a hypsometric curve is a histogram or cumulative distribution function of elevations in a geographical area differences in hypsometric curves between landscapes arise because the geomorphic processes that shape the landscape may be different when drawn as a 2dimensional histogram a hypsometric curve displays the elevation y on the vertical yaxis and area above the corresponding elevation x on the horizontal or xaxis the curve can also be shown in nondimensional or standardized form by scaling elevation and area by the maximum values the nondimensional hypsometric curve provides a hydrologist or a geomorphologist with a way to assess the similarity of watersheds — and is one of several characteristics used for doing so the hypsometric integral is a summary measure of the shape of the hypsometric curve in the original paper on this topic arthur strahler proposed a curve containing three parameters to fit different hypsometric relations y d − x x ⋅ a d − a z displaystyle yleftfrac dxxcdot frac adarightz where a d and z are fitting parameters subsequent research using twodimensional landscape evolution models has called the general applicability of this fit into question as well as the capability of the hypsometric curve to deal with scaledependent effects a modified curve with one additional parameter has been proposed to improve the fithypsometric curves are commonly used in limnology to represent the relationship between lake surface area and depth and calculate total lake volume these graphs can be used to predict various characteristics of lakes such as productivity dilution of incoming chemicals and potential for water mixing bathymetry hypsometric equation hypsometer an instrument used in hypsometry which estimates the elevation by boiling water – water boils at different temperatures depending on the air pressure and thus altitude levelling topography orography hypsometric curve'</li></ul> | | 28 | <ul><li>'y p displaystyle xypleq maxxpypleq xpyp moreover if x p = y p displaystyle xpneq yp one has x y p max x p y p displaystyle xypmaxxpyp this makes the padic numbers a metric space and even an ultrametric space with the padic distance defined by d p x y x − y p displaystyle dpxyxyp as a metric space the padic numbers form the completion of the rational numbers equipped with the padic absolute value this provides another way for defining the padic numbers however the general construction of a completion can be simplified in this case because the metric is defined by a discrete valuation in short one can extract from every cauchy sequence a subsequence such that the differences between two consecutive terms have strictly decreasing absolute values such a subsequence is the sequence of the partial sums of a padic series and thus a unique normalized padic series can be associated to every equivalence class of cauchy sequences so for building the completion it suffices to consider normalized padic series instead of equivalence classes of cauchy sequences as the metric is defined from a discrete valuation every open ball is also closed more precisely the open ball b r x y [UNK] d p x y r displaystyle brxymid dpxyr equals the closed ball b p − v x y [UNK] d p x y ≤ p − v displaystyle bpvxymid dpxyleq pv where v is the least integer such that p − v r displaystyle pvr similarly b r x b p − w x displaystyle brxbpwx where w is the greatest integer such that p − w r displaystyle pwr this implies that the padic numbers form a locally compact space and the padic integers — that is the ball b 1 0 b p 0 displaystyle b10bp0 — form a compact space the decimal expansion of a positive rational number r displaystyle r is its representation as a series r [UNK] i k ∞ a i 10 − i displaystyle rsum ikinfty ai10i where k displaystyle k is an integer and each a i displaystyle ai is also an integer such that 0 ≤ a i 10 displaystyle 0leq ai10 this expansion can be computed by long division of the numerator by the denominator which is itself based on the following theorem if r n d displaystyle rtfrac nd is a rational number such that 10 k ≤ r 10 k'</li><li>'that for every sequence xn of positive integers the sum of the series [UNK] n 1 ∞ 1 a n x n displaystyle sum n1infty frac 1anxn exists and is a transcendental number'</li><li>'the proof [UNK] 0 z log γ x d x z 1 − z 2 z 2 log 2 π z log γ z − log g 1 z displaystyle int 0zlog gamma xdxfrac z1z2frac z2log 2pi zlog gamma zlog g1z and since g 1 z γ z g z displaystyle g1zgamma zgz then [UNK] 0 z log γ x d x z 1 − z 2 z 2 log 2 π − 1 − z log γ z − log g z displaystyle int 0zlog gamma xdxfrac z1z2frac z2log 2pi 1zlog gamma zlog gz'</li></ul> | | 11 | <ul><li>'remain in the systemic circulation for a certain period of time during that time ultrasound waves are directed on the area of interest when microbubbles in the blood flow past the imaging window the microbubbles compressible gas cores oscillate in response to the high frequency sonic energy field as described in the ultrasound article the microbubbles reflect a unique echo that stands in stark contrast to the surrounding tissue due to the orders of magnitude mismatch between microbubble and tissue echogenicity the ultrasound system converts the strong echogenicity into a contrastenhanced image of the area of interest in this way the bloodstreams echo is enhanced thus allowing the clinician to distinguish blood from surrounding tissues targeted contrastenhanced ultrasound works in a similar fashion with a few alterations microbubbles targeted with ligands that bind certain molecular markers that are expressed by the area of imaging interest are still injected systemically in a small bolus microbubbles theoretically travel through the circulatory system eventually finding their respective targets and binding specifically ultrasound waves can then be directed on the area of interest if a sufficient number of microbubbles have bound in the area their compressible gas cores oscillate in response to the high frequency sonic energy field as described in the ultrasound article the targeted microbubbles also reflect a unique echo that stands in stark contrast to the surrounding tissue due to the orders of magnitude mismatch between microbubble and tissue echogenicity the ultrasound system converts the strong echogenicity into a contrastenhanced image of the area of interest revealing the location of the bound microbubbles detection of bound microbubbles may then show that the area of interest is expressing that particular molecular marker which can be indicative of a certain disease state or identify particular cells in the area of interest untargeted contrastenhanced ultrasound is currently applied in echocardiography and radiology targeted contrastenhanced ultrasound is being developed for a variety of medical applications untargeted microbubbles like optison and levovist are currently used in echocardiography in addition sonovue ultrasound contrast agent is used in radiology for lesion characterization organ edge delineation microbubbles can enhance the contrast at the interface between the tissue and blood a clearer picture of this interface gives the clinician a better picture of the structure of an organ tissue structure is crucial in echocardiograms where a thinning thickening or irregularity in the heart wall indicates a serious heart condition that requires'</li><li>'right side of the heart to the lungs to the descending aorta in about 25 of adults the foramen ovale does not close completely but remains as a small patent foramen ovale pfo in most of these individuals the pfo causes no problems and remains undetected throughout life pfo has long been studied because of its role in paradoxical embolism an embolism that travels from the venous side to the arterial side this may lead to a stroke or transient ischemic attack transesophageal echocardiography is considered the most accurate investigation to demonstrate a patent foramen ovale a patent foramen ovale may also be an incidental finding coronary arteries'</li><li>'p t d t displaystyle 1r1 over r2itr1cl over r2dit over dtlcd2it over dt2pt over r2cdpt over dt these models relate blood flow to blood pressure through parameters of r c and in the case of the fourelement model l these equations can be easily solved eg by employing matlab and its supplement simulink to either find the values of pressure given flow and r c l parameters or find values of r c l given flow and pressure an example for the twoelement model is shown below where it is depicted as an input signal during systole and diastole systole is represented by the sin function while flow during diastole is zero s represents the duration of the cardiac cycle while ts represents the duration of systole and td represents the duration of diastole eg in seconds i t i o sin π ∗ t s t s for t s ≤ t s displaystyle itiosinpi t over s over tstext for t over sleq ts i t 0 for t s t d t s displaystyle it0text for tstdts the windkessel effect becomes diminished with age as the elastic arteries become less compliant termed hardening of the arteries or arteriosclerosis probably secondary to fragmentation and loss of elastin the reduction in the windkessel effect results in increased pulse pressure for a given stroke volume the increased pulse pressure results in elevated systolic pressure hypertension which increases the risk of myocardial infarction stroke heart failure and a variety of other cardiovascular diseases although the windkessel is a simple and convenient concept it has been largely superseded by more modern approaches that interpret arterial pressure and flow waveforms in terms of wave propagation and reflection recent attempts to integrate wave propagation and windkessel approaches through a reservoir concept have been criticized and a recent consensus document highlighted the wavelike nature of the reservoir hydraulic accumulator – reservoir to store and stabilise fluid pressure'</li></ul> | | 33 | <ul><li>'paranormal events are purported phenomena described in popular culture folk and other nonscientific bodies of knowledge whose existence within these contexts is described as being beyond the scope of normal scientific understanding notable paranormal beliefs include those that pertain to extrasensory perception for example telepathy spiritualism and the pseudosciences of ghost hunting cryptozoology and ufologyproposals regarding the paranormal are different from scientific hypotheses or speculations extrapolated from scientific evidence because scientific ideas are grounded in empirical observations and experimental data gained through the scientific method in contrast those who argue for the existence of the paranormal explicitly do not base their arguments on empirical evidence but rather on anecdote testimony and suspicion the standard scientific models give the explanation that what appears to be paranormal phenomena is usually a misinterpretation misunderstanding or anomalous variation of natural phenomena the term paranormal has existed in the english language since at least 1920 the word consists of two parts para and normal the definition implies that the scientific explanation of the world around us is normal and anything that is above beyond or contrary to that is para on the classification of paranormal subjects psychologist terence hines said in his book pseudoscience and the paranormal 2003 the paranormal can best be thought of as a subset of pseudoscience what sets the paranormal apart from other pseudosciences is a reliance on explanations for alleged phenomena that are well outside the bounds of established science thus paranormal phenomena include extrasensory perception esp telekinesis ghosts poltergeists life after death reincarnation faith healing human auras and so forth the explanations for these allied phenomena are phrased in vague terms of psychic forces human energy fields and so on this is in contrast to many pseudoscientific explanations for other nonparanormal phenomena which although very bad science are still couched in acceptable scientific terms ghost hunting is the investigation of locations that are reportedly haunted by ghosts typically a ghosthunting team will attempt to collect evidence supporting the existence of paranormal activity in traditional ghostlore and fiction featuring ghosts a ghost is a manifestation of the spirit or soul of a person alternative theories expand on that idea and include belief in the ghosts of deceased animals sometimes the term ghost is used synonymously with any spirit or demon however in popular usage the term typically refers to the spirit of a deceased person the belief in ghosts as souls of the departed is closely tied to the concept of animism an ancient belief that attributed souls to everything in nature as the 19thcentury anthropologist george frazer explained in his classic work the golden bough 1890 souls were seen as'</li><li>'alleged telekinetic mediums exposed as frauds include anna rasmussen and maria silbertpolish medium stanisława tomczyk active in the early 20th century claimed to be able to perform acts of telekinetic levitation by way of an entity she called little stasia a 1909 photograph of her showing a pair of scissors floating between her hands is often found in books and other publications as an example of telekinesis scientists suspected tomczyk performed her feats by the use of a fine thread or hair between her hands this was confirmed when psychical researchers who tested tomczyk occasionally observed the threadmany of indias godmen have claimed macrotelekinetic abilities and demonstrated apparently miraculous phenomena in public although as more controls are put in place to prevent trickery fewer phenomena are produced annemarie schaberl a 19yearold secretary was said to have telekinetic powers by parapsychologist hans bender in the rosenheim poltergeist case in the 1960s magicians and scientists who investigated the case suspected the phenomena were produced by trickery 107 – 108 swami rama a yogi skilled in controlling his heart functions was studied at the menninger foundation in the spring and fall of 1970 and was alleged by some observers at the foundation to have telekinetically moved a knitting needle twice from a distance of five feet although he wore a facemask and gown to prevent allegations that he moved the needle with his breath or body movements and air vents in the room were covered at least one physician observer who was present was not convinced and expressed the opinion that air movement was somehow the cause russian psychic nina kulagina came to wide public attention following the publication of sheila ostrander and lynn schroeders bestseller psychic discoveries behind the iron curtain the alleged soviet psychic of the late 1960s and early 1970s was shown apparently performing telekinesis while seated in numerous blackandwhite short films and was also mentioned in the us defense intelligence agency report from 1978 magicians and skeptics have argued that kulaginas feats could easily be performed by one practiced in sleight of hand or through means such as cleverly concealed or disguised threads small pieces of magnetic metal or mirrorsjames hydrick an american martial arts expert and psychic was famous for his alleged telekinetic ability to turn the pages of books and make pencils spin while placed on the edge of a desk it was later revealed by magicians that he achieved his feats by air currents psychologist richard wiseman wrote that hydrick'</li><li>'##ksha tells us that in order to be freed from the cycle of rebirth and death one must separate karma from the soul in order to find out what karma is attached to your soul you can participate in “ jatismaran ” jatismaran is remembering past lives the nineteenth century saw the rise of spiritualism involving seances and other techniques for contacting departed spirits allan kardec 1804 – 1869 sought to codify the lessons thus obtained in a set of five books the spiritist codification thespiritist pentateuch 1857 – 1868 including the spirits book 1857 and heaven and hell 1865 these books introduce concepts of how spirits evolve through a series of incarnations madame blavatsky 1831 – 1891 cofounder of the theosophical society introduced the sanskrit term akasha beginning in isis unveiled 1877 as a vague life force that was continuously redefined always vaguely in subsequent publications separately but also in isis unveiled she referred to indestructible tablets of the astral light recording both the past and future of human thought and action these concepts were combined into a single idea the akashic records espoused by alfred percy sinnett in his book esoteric buddhism 1883 the idea that the akashic records held past life data set the stage whereby western practitioners of the paranormal could sidestep the notion of forgetfulness that in traditional teachings about reincarnation had prevented memories of former lives from being accessed an early report for a human accessing past life information during a trance state comes from 1923 when edgar cayce while answering questions posed by arthur lammers publisher in a trance state spoke of lammers past lives and of reincarnation the use of hypnosis for past life regressions is said to have been developed by a r asa roy martin of sharon pennsylvania who published researches in reincarnation and beyond in 1942in 1952 the bridey murphy case in which housewife virginia tighe of pueblo colorado under hypnosis was reported by the hypnotist to have recounted memories of a 19thcentury irish woman bridey murphypast life regression is widely rejected as a psychiatric treatment by clinical psychiatrists and psychologists a 2006 survey found that a majority of a sample of doctoral level mental health professionals rated past lives therapy as certainly discredited as a treatment for mental or behavioral disorders in the west pastlife regression practitioners use hypnosis and suggestion to promote recall in their patients using a series of questions designed to elicit statements and memories about the past lifes history and identity some practitioners also use bridging techniques from a clients currentlife problem to'</li></ul> | | 18 | <ul><li>'to interpret it successfully this interpretative capacity is one aspect of graphicacy computer graphics are often used in the majority of new feature films especially those with a large budget films that heavily use computer graphics include the lord of the rings film trilogy the harry potter films spiderman and war of the worlds the majority of schools college s and universities around the world educate students on the subject of graphic design and art the subject is taught in a broad variety of ways each course teaching its own distinctive balance of craft skills and intellectual response to the clients needs some graphics courses prioritize traditional craft skills — drawing printmaking and typography — over modern craft skills other courses may place an emphasis on teaching digital craft skills still other courses may downplay the crafts entirely concentrating on training students to generate novel intellectual responses that engage with the brief despite these apparent differences in training and curriculum the staff and students on any of these courses will generally consider themselves to be graphic designers the typical pedagogy of a graphic design or graphic communication visual communication graphic arts or any number of synonymous course titles will be broadly based on the teaching models developed in the bauhaus school in germany or vkhutemas in russia the teaching model will tend to expose students to a variety of craft skills currently everything from drawing to motion capture combined with an effort to engage the student with the world of visual culture aldus manutius designed the first italic type style which is often used in desktop publishing and graphic design april greiman is known for her influential poster design paul rand is well known as a design pioneer for designing many popular corporate logos including the logo for ibm next and ups william caslon during the mid18th century designed many typefaces including itc founders caslon itc founders caslon ornaments caslon graphique itc caslon no 224 caslon old face and big caslon editorial cartoon visualization graphics semiotics'</li><li>'in automotive design a class a surface is any of a set of freeform surfaces of high efficiency and quality although strictly it is nothing more than saying the surfaces have curvature and tangency alignment – to ideal aesthetical reflection quality many people interpret class a surfaces to have g2 or even g3 curvature continuity to one another see free form surface modelling class a surfacing is done using computeraided industrial design applications class a surface modellers are also called digital sculptors in the industry industrial designers develop their design styling through the asurface the physical surface the end user can feel touch see etc a common method of working is to start with a prototype model and produce smooth mathematical class a surfaces to describe the products outer body from this the production of tools and inspection of finished parts can be carried out class a surfacing complements the prototype modelling stage by reducing time and increasing control over design iterations class a surfaces can be defined as any surface that has styling intent that is either seen touched or both and mathematically meets the definition for bezier in automotive design application class a surfaces are created on all visible exterior surfaces ex body panels bumper grill lights etc and all visible surfaces of seetouch feel parts in interior ex dashboard seats door pads etc this can also include beauty covers in the engine compartment mud flaps trunk panels and carpeting in the product design realm class a surfacing can be applied to such things like housing for industrial appliances that are injection moulded home appliances highly aesthetic plastic packaging defined by highly organic surfaces toys or furniture among the most famous users of autodesk alias software in product design is apple aerospace has styling and product design considerations in interiors like bezels for air vents and lights interior roof storage racks seats and cockpit area etc in recent years airbus used icem surf for generating the exterior surface geometry for aesthetics and aerodynamic optimisation before delivering the surface to downstream cad software like catia class a surfacing digital sculpting is similar to clay modelling with the added advantage of computing power to change or incorporate design changes in existingnew design moreover the revisions of clay modelling and refinement iteration are carried out in digital version the scanned data of a selected clay model will be taken as a point cloud data input and class a designers work on this point cloud data to generate preliminary surfaces and further refine them to class a surfaces class a surfaces are currently not standardized a team of french engineers propose a new idea for standardization – class s g0g1g2g3 – high aesthetic quality of reflections – class a g0g1g2 – good aesthetic'</li><li>'two to three years from 1892 to 1913 accompanied by his wife minnie in the early years building a mutually beneficial relationship between british and us operations which endured for eighty years long after winterbottom himself had diedwinterbottom continued to grow and consolidate the business in rhode island fending off competition in 1904 with record sales over the next ten years earning him large sums of money twentytwo years after having taken over operations in america winterbottom booked passage with a group of friends to new york aboard titanic but was delayed by business at home forcing him to postpone his passage by a week winterbottom travelled to new york aboard adriatic on april 18 1912 three days after titanic had gone down with the loss of 1500 lives adriatic returned to liverpool on the 2nd of may with some of the surviving crew and management of titanic bringing operations from the us and germany into the wbcc corporate group resulted in a near global monopoly which stabilised prices but risked the disaffection of book manufacturers who had previously been able to shop around to get the best price for their businesses winterbottom took a conciliatory approach to dissent visiting customers to negotiate deals and easing them into compliance lawyers were also kept busy ensuring that partners remained aligned making minor changes to the original agreement or by threatening his larger partners with his own resignationwinterbottom would tolerate no compromise on quality control with all production standards set by victoria mills which were subsequently applied to the ten other factories in the group significant investment in new machinery and changes in production methods were required at interlaken mills and the bamberg works keeping up with emerging technologies and markets whilst maintaining strict quality control winterbottoms uncompromising attention to detail and rejection of new stock that didnt measure up ensured consistency within all the groups operations this was not always easy to apply particularly in germany where he was forced to make changes to staffing to ensure strict compliance with his restrictive confidentiality controls which preserved corporate intellectual property rights and enforce strict competitive intelligence protocolsexports made a vital contribution to winterbottoms net income by the turn of the century a quarter of the wbcc ’ s customers were from overseas with bookcloth and tracing cloth exports from salford going to at least 50 countries the us government commissioned a study on the industry in 1899 and found that world trade was divided largely between winterbottom and two or three german firms who also sourced their best grades from manchester following fifteen years securing world markets through forging new alliances and mergers in which the merger had restored profitability to the industry whilst returning huge net profits yearonyear winterbottom'</li></ul> | | 22 | <ul><li>'the arctic intermediate water aiw is a water mass found between the top cold relatively fresh polar water and the bottom deep water in the arctic domain bounded by the polar and arctic fronts aiw is formed in small quantities compared to other water masses and has limited influence outside of the arctic domain two types of aiw are found which are lower aiw and upper aiw separately lower aiw is the water mass with temperature and salinity maximum found at 250400m deep right above the deep water with temperature for lower aiw ranges from 0 to 3 °c and salinity greater than 349upper aiw is defined to be a denser layer on top of the lower aiw between surface cold water and the lower aiw including water masses with temperature maximum to minimum it is characterized by temperatures less than 2 °c in the salinity ranges from 347 to 349 the upper aiw is usually found at 75150m overlain by arctic surface water asw however it could be found at the sea surface in winterthere are overlaps in density for upper and lower aiw according to their definitions it is possible that water mass falling within the definition of upper aiw is below the defined lower aiw for example in norwegian sea one intermediate layer of salinity slightly less than 349 was found below the water mass with temperature and salinity maximum it is generally accepted that aiw is formed and modified in the north part of arctic domain as aiw moves from north to south along the greenland continental slope its temperature and salinity on the whole decrease southwards due to mixing with surface cold water the lower aiw is produced by the cooling and sinking of atlantic water aw which is traditionally defined with salinity greater than 35 and by the polar intermediate water piw that is colder than 0 °c with salinity in the range 344347 amount of aiw varies with different seasons for example the upper aiw in iceland sea increased from about 10 of the total volume in fall to over 21 in winter in the same time both asw and lower aiw show significant summertowinter decreases which might contribute to the new upper aiw similar process can also be found in greenland sea but with a smaller amount of formed upper aiw'</li><li>'sociohydrology socio from the latin word socius meaning ‘ companion and hydrology from the greek υδωρ hydor meaning water and λογος logos meaning study is an interdisciplinary field studying the dynamic interactions and feedbacks between water and people areas of research in sociohydrology include the historical study of the interplay between hydrological and social processes comparative analysis of the coevolution and selforganization of human and water systems in different cultures and processbased modelling of coupled humanwater systems the first approach to sociohydrology was the term hydrosociology which arises from a concern about the scale of impact of human activities on the hydrological cycle sociohydrology is defined as the humanswater interaction and later as “ the science of people and water ” which introduces bidirectional feedbacks between human – water systems differentiating it from other related disciplines that deal with water furthermore sociohydrology has been presented as one of the most relevant challenges for the anthropocene in relationship with its aims at unraveling dynamic crossscale interactions and feedbacks between natural and human processes that give rise to many water sustainability challenges socio ‐ hydrology is also predicted to be an important license for modellers in traditional hydrology human activities are typically described as boundary conditions or external forcings to the water systems scenariobased approach this traditional approach tends to make long term predictions unrealistic as interactions and bidirectional feedbacks between human and water systems cannot be capturedfollowing the increased hydrological challenges due to humaninduced changes hydrologists started to overcome the limitation of traditional hydrology by accounting for the mutual interactions between water and society and by advocating for greater connection between social science and hydrologysociohydrologists argue that water and human systems change interdependently as well as in connection with each other and that their mutual reshaping continues and evolves over time on the one hand society importantly alters the hydrological regime it modifies the frequency and severity of floods and droughts through continuous water abstraction dams and reservoirs construction flood protection measures urbanization etc in turn modified water regimes and hydrological extremes shape societies which respond and adapt spontaneously or through collective strategiesin general to explain the coevolution of human and water systems sociohydrology should draw on different disciplines and include historical studies comparative analysis and process based modeling most of the sociohydrological efforts to date have focused on investigating recurring social behavior and societal development resulting from their coevolution with hydrological systems the'</li><li>'as a mass throughput but rather as a pressure throughput and having units of pressure times volume per second p 1 displaystyle p1 and p 2 displaystyle p2 are the upstream and downstream pressures c displaystyle c is the conductance having units of volumetime which are the same units as pumping speed for a vacuum pumpthis definition proves useful in vacuum systems because under conditions of rarefied gas flow the conductance of various structures is usually constant and the overall conductance of a complex network of pipes orifices and other conveyances can be found in direct analogy to a resistive electrical circuit for example the conductance of a simple orifice is c 15 d 2 displaystyle c15d2 literssec where d displaystyle d is measured in centimeters'</li></ul> | | 15 | <ul><li>'phenotypic plasticity refers to some of the changes in an organisms behavior morphology and physiology in response to a unique environment fundamental to the way in which organisms cope with environmental variation phenotypic plasticity encompasses all types of environmentally induced changes eg morphological physiological behavioural phenological that may or may not be permanent throughout an individuals lifespanthe term was originally used to describe developmental effects on morphological characters but is now more broadly used to describe all phenotypic responses to environmental change such as acclimation acclimatization as well as learning the special case when differences in environment induce discrete phenotypes is termed polyphenism generally phenotypic plasticity is more important for immobile organisms eg plants than mobile organisms eg most animals as mobile organisms can often move away from unfavourable environments nevertheless mobile organisms also have at least some degree of plasticity in at least some aspects of the phenotype one mobile organism with substantial phenotypic plasticity is acyrthosiphon pisum of the aphid family which exhibits the ability to interchange between asexual and sexual reproduction as well as growing wings between generations when plants become too populated water fleas daphnia magna have shown both phenotypic plasticity and the ability to genetically evolve to deal with the heat stress of warmer urban pond waters phenotypic plasticity in plants includes the timing of transition from vegetative to reproductive growth stage the allocation of more resources to the roots in soils that contain low concentrations of nutrients the size of the seeds an individual produces depending on the environment and the alteration of leaf shape size and thickness leaves are particularly plastic and their growth may be altered by light levels leaves grown in the light tend to be thicker which maximizes photosynthesis in direct light and have a smaller area which cools the leaf more rapidly due to a thinner boundary layer conversely leaves grown in the shade tend to be thinner with a greater surface area to capture more of the limited light dandelion are well known for exhibiting considerable plasticity in form when growing in sunny versus shaded environments the transport proteins present in roots also change depending on the concentration of the nutrient and the salinity of the soil some plants mesembryanthemum crystallinum for example are able to alter their photosynthetic pathways to use less water when they become water or saltstressedbecause of phenotypic plasticity it is hard to explain and predict the traits when plants are grown in natural conditions unless an explicit environment index can be obtained to quantify environments identification of such'</li><li>'result in what are known as designer babies the concept of a designer baby is that its entire genetic composition could be selected for in an extreme case people would be able to effectively create the offspring that they want with a genotype of their choosing not only does human germline engineering allow for the selection of specific traits but it also allows for enhancement of these traits using human germline editing for selection and enhancement is currently very heavily scrutinized and the main driving force behind the movement of trying to ban human germline engineeringin a 2019 animal study with liang guang small spotted pigs increased muscle mass was achieved with precise editing of the myostatin signal peptide myostatin is a negative regulator of muscle growth so through mutating the signal peptide regions of the gene muscle growth could be promoted in the experimental pigs the myostatin genes in 955 pig embryos were mutated at several locations with crispr and implanted into five surrogates resulting in 16 piglets it was found that only specific mutations to the myostatin signal peptide resulted in increased muscle mass in the piglets mainly due to an increase in muscle fibers a similar animal study created a knockout in the myostatin gene in mice which also increased their muscle mass this showed that muscle mass could be increased with germline editing which is likely applicable to humans because humans also have the myostatin gene to regulate muscle growth human germline engineering may then result in intentionally increased muscle mass with applications such as gene doping human germline engineering is a widely debated topic and in more than 40 countries it is formally outlawed while there is no current legislation explicitly prohibiting germline engineering in the united states the consolidated appropriation act of 2016 bans the use of us fda funds to engage in research regarding human germline modification in april 2015 a research team published an experiment in which they used crispr to edit a gene that is associated with blood disease in nonliving human embryos this experiment was unsuccessful but gene editing tools are used in labs scientists using the crisprcas9 system to modify genetic materials have run into issues when it comes to mammalian alterations due to the complex diploid cells studies have been done in microorganisms regarding loss of function genetic screening and some studies have been done using mice as a subject because rna processes differ between bacteria and mammalian cells scientists have had difficulties coding for mrnas translated data without the interference of rna studies have been done using the cas9 nuclease that uses a single guide rna to allow for larger knockout regions in mice and this was'</li><li>'the individual would experience stress and anxiety there has been reported success in confirming a mouse model of autism by changing the mouses environmentin any of these experiments the ‘ autistic ’ mice have a ‘ normal ’ socializing partner and the scientists observing the mice are unaware blind to the genotypes of the mice the gene expression profile of the central nervous system cns is unique eighty percent of all human genes are expressed in the brain 5000 of these genes are solely expressed in the cns the human brain has the highest amount of gene expression of all studied mammalian brains in comparison tissues outside of the brain will have more similar expression levels in comparison to their mammalian counterparts one source of the increased expression levels in the human brain is from the nonprotein coding region of the genome numerous studies have indicated that the human brain have a higher level of expression in regulatory regions in comparison to other mammalian brains there is also notable enrichment for more alternative splicing events in the human brain gene expression profiles also vary within specific regions of the brain a microarray study showed that the transcriptome profile of the cns clusters together based on region a different study characterized the regulation of gene expression across 10 different regions based on their eqtl signals the cause of the varying expression profiles relates to function neuron migration and cellular heterogeneity of the region even the three layers of the cerebral cortex have distinct expression profilesa study completed at harvard medical school in 2014 was able to identify developmental lineages stemming from single base neuronal mutations the researchers sequenced 36 neurons from the cerebral cortex of three normal individuals and found that highly expressed genes and neural associated genes were significantly enriched for singleneuron snvs these snvs in turn were found to be correlated with chromatin markers of transcription from fetal brain gene expression of the brain changes throughout the different phases of life the most significant levels of expression are found during early development with the rate of gene expression being highest during fetal development this results from the rapid growth of neurons in the embryo neurons at this stage are undergoing neuronal differentiation cell proliferation migration events and dendritic and synaptic development gene expression patterns shift closer towards specialized functional profiles during embryonic development however certain developmental steps are still ongoing at parturition consequently gene expression profiles of the two brain hemispheres appear asymmetrical at birth at birth gene expression profiles appear asymmetrical between brain hemispheres as development continues the gene expression profiles become similar between the hemispheres given a healthy adult expression profiles stay relatively consistent from the late twenties into the late forties from'</li></ul> | | 23 | <ul><li>'interleukin29 il29 is a cytokine and it belongs to type iii interferons group also termed interferons λ ifnλ il29 alternative name ifnλ1 plays an important role in the immune response against pathogenes and especially against viruses by mechanisms similar to type i interferons but targeting primarily cells of epithelial origin and hepatocytesil29 is encoded by the ifnl1 gene located on chromosome 19 in humans it is a pseudogene in mice meaning the il29 protein is not produced in them il29 is with the rest of ifnλ structurally related to the il10 family but its primary amino acid sequence and also function is more similar to type i interferons it binds to a heterodimeric receptor composed of one subunit ifnl1r specific for ifnλ and a second subunit il10rb shared among the il10 family cytokines il29 exhibits antiviral effects by inducing similar signaling pathways as type i interferons il29 receptor signals through jakstat pathways leading to activated expression of interferonstimulated genes and production of antiviral proteins further consequences of il29 signalization comprise the upregulated expression of mhc class i molecules or enhanced expression of the costimulatory molecules and chemokine receptors on pdc which are the main producers of ifnαil29 expression is dominant in virusinfected epithelial cells of the respiratory gastrointestinal and urogenital tracts also in other mucosal tissues and skin hepatocytes infected by hcv or hbv viruses stimulate the immune response by producing il29 ifnλ in general rather than type i interferons it is also produced by maturing macrophages dendritic cells or mastocytesit plays a role in defense against pathogens apart from viruses it affects the function of both innate and adaptive immune system besides described antiviral effects il29 modulates cytokine production of other cells for example it increases secretion of il6 il8 and il10 by monocytes and macrophages enhances the responsiveness of macrophages to ifnγ by increased expression of ifngr1 stimulates t cell polarization towards th1 phenotype and also b cell response to il29 was reported the impact of il29 on cancer cells is complicated depending on cancer cell type it shows protective tumor inhibiting effects in many cases such as skin lung colorectal or hepatocellular cancer but shows tumor promoting effects on multiple'</li><li>'the ability to induce gvl but not gvh after hsct would be very beneficial for those patients there are some strategies to suppress the gvhd after transplantation or to enhance gvl but none of them provide an ideal solution to this problem for some forms of hematopoietic malignancies for example acute myeloid leukemia aml the essential cells during hsct are beside the donors t cells the nk cells which interact with kir receptors nk cells are within the first cells to repopulate hosts bone marrow which means they play important role in the transplant engraftment for their role in the gvl effect their alloreactivity is required because kir and hla genes are inherited independently the ideal donor can have compatible hla genes and kir receptors that induce the alloreaction of nk cells at the same time this will occur with most of the nonrelated donor when transplanting hsc during aml tcells are usually selectively depleted to prevent gvhd while nk cells help with the gvl effect which prevent leukemia relapse when using nondepleted tcell transplant cyclophosphamide is used after transplantation to prevent gvhd or transplant rejection other strategies currently clinically used for suppressing gvhd and enhancing gvl are for example optimization of transplant condition or donor lymphocyte infusion dli after transplantation however none of those provide satisfactory universal results thus other options are still being inspected one of the possibilities is the use of cytokines granulocyte colonystimulating factor gcsf is used to mobilize hsc and mediate t cell tolerance during transplantation gcsf can help to enhance gvl effect and suppress gvhd by reducing levels of lps and tnfα using gcsf also increases levels of treg which can also help with prevention of gvhd other cytokines can also be used to prevent or reduce gvhd without eliminating gvl for example kgf il11 il18 and il35 graftversushost disease hematopoietic stem cell transplantation'</li><li>'which is thought to be critical for kinase activity it is thought that irak2 and irakm are catalytically inactive because they lack this aspartate residue in the kd the cterminal domain does not seem to show much similarity between irak family members the cterminal domain is important for the interaction with the signaling molecule traf6 irak1 contains three traf6 interaction motifs irak2 contains two and irakm contains oneirak1 contains a region that is rich in serine proline and threonine prost it is thought that irak1 undergoes hyperphosphorylation in this region the prost region also contains two proline p glutamic acid e serine s and threonine trich pest sequences that are thought to promote the degradation of irak1 interleukin1 receptors il1rs are cytokine receptors that transduce an intracellular signaling cascade in response to the binding of the inflammatory cytokine interleukin1 il1 this signaling cascade results in the initiation of transcription of certain genes involved in inflammation because il1rs do not possess intrinsic kinase activity they rely on the recruitment of adaptor molecules such as iraks to transduce their signals il1 binding to il1r complex triggers the recruitment of the adaptor molecule myd88 through interactions with the tir domain myd88 brings irak4 to the receptor complex preformed complexes of the adaptor molecule tollip and irak1 are also recruited to the receptor complex allowing irak1 to bind myd88 irak1 binding to myd88 brings it into close proximity with irak4 so that irak4 can phosphorylate and activate irak1 once phosphorylated irak1 recruits the adaptor protein tnf receptor associated factor 6 traf6 and the irak1traf6 complex dissociates from the il1r complex the irak1traf6 complex interacts with a preexisting complex at the plasma membrane consisting of tgfβ activated kinase 1 tak1 and two tak binding proteins tab1 and tab2 tak1 is a mitogenactivated protein kinase kinase kinase mapkkk this interaction leads to the phosphorylation of tab2 and tak1 which then translocate to the cytosol with traf6 and tab1 irak1 remains at the membrane and is targeted for degradation by ubiquitination once the tak1traf'</li></ul> | | 13 | <ul><li>'##ch art telematic art bio art genetic art interactive art computer animation and graphics and hacktivism and tactical media these latter two ‘ genres ’ in particular have a strong focus on the interplay of art and political activism since the end of the 1990s the first online databases came into being as exemplified by the universitybased archive of digital art rhizome platform located in new york netzspannung until 2005 the database project compart in which early phase of digital art is addressed and the collaborative online platform monoskop in terms of institutional resources media art histories spans diverse organisations archives research centres as well as private initiatives already at this early stage in the development of the field the actors of media art histories were connected by way of digital communication especially by socalled mailing lists such as nettime or rohrpost both channels of communication that remain prime resources for the new media art community in the last few years there was a significant increase of festivals and conferences dedicated to new media art though the dominant festivals in the field continue to be the ars electronica the transmediale the isea intersociety for the electronic arts and siggraph special interest group on graphics and interactive techniques to this day museums and research facilities specializing in new media art are the exception nevertheless zkm zentrum fur kunst und medientechnologie or specific focuses in collections including the whitney museum the new york museum of modern art or the walker art center serve as important spaces for exchange beyond museums that reach a wider audience there are more and more smaller museums and galleries that focus on new media art such as the berlinbased dam – digital art museum additionally archives in which are exhibited artifacts situated at the intersection of the histories of media art and technology are important resources including collections such as that of werner nekes or those cabinets of wonder and curiosity incorporated in art history museums even given this increase in festivals however a variety of significant research initiatives have been discontinued these include the ludwig boltzmann institute for mediaartresearch the daniel langlois foundation for art science and technology and media art net this difficulty in establishing sustainable funding structures as well as support for access to shared data for the scientific research of new media art was made public and addressed by the liverpool declaration scholars and artists based at institutions all over the globe signed the declaration in a call to develop systematic strategies to fulfill the task that digital culture and its research demands in the 21st century already in the late 1990s it became clear that media art research is spread over many disciplines and the need became urgent to give it common ground therefore'</li><li>'lithuanian plaque located on the lithuanian academy of sciences honoring nazi war criminal jonas noreika in 2020 cryptokitties developer dapper labs released the nba topshot project which allowed the purchase of nfts linked to basketball highlights the project was built on top of the flow blockchain in march 2021 an nft of twitter founder jack dorseys firstever tweet sold for 29 million the same nft was listed for sale in 2022 at 48 million but only achieved a top bid of 280 on december 15 2022 donald trump former president of the united states announced a line of nfts featuring images of himself for 99 each it was reported that he made between 100001 and 1 million from the scheme nfts have been proposed for purposes related to scientific and medical purposes suggestions include turning patient data into nfts tracking supply chains and minting patents as nftsthe monetary aspect of the sale of nfts has been used by academic institutions to finance research projects the university of california berkeley announced in may 2021 its intention to auction nfts of two patents of inventions for which the creators had received a nobel prize the patents for crispr gene editing and cancer immunotherapy the university would however retain ownership of the patents 85 of funds gathered through the sale of the collection were to be used to finance research the collection included handwritten notices and faxes by james allison and was named the fourth pillar it sold in june 2022 for 22 ether about us54000 at the time george church a us geneticist announced his intention to sell his dna via nfts and use the profits to finance research conducted by nebula genomics in june 2022 20 nfts with his likeness were published instead of the originally planned nfts of his dna due to the market conditions at the time despite mixed reactions the project is considered to be part of an effort to use the genetic data of 15000 individuals to support genetic research by using nfts the project wants to ensure that the users submitting their genetic data are able to receive direct payment for their contributions several other companies have been involved in similar and often criticized efforts to use blockchainbased genetic data in order to guarantee users more control over their data and enable them to receive direct financial compensation whenever their data is being sold molecule protocol a project based in switzerland is trying to use nfts to digitize the intellectual copyright of individual scientists and research teams to finance research the projects whitepaper explains the aim is to represent the copyright of scientific papers as nfts and enable their trade'</li><li>'a clipping path or deep etch is a closed vector path or shape used to cut out a 2d image in image editing software anything inside the path will be included after the clipping path is applied anything outside the path will be omitted from the output applying the clipping path results in a hard aliased or soft antialiased edge depending on the image editors capabilitiesby convention the inside of the path is defined by its direction reversing the direction of a path reverses what is considered inside or outside an inclusive path is one where what is visually inside the path corresponds to what will be preserved an exclusive path of opposite direction contains what is visually outside the path by convention a clockwise path that is nonselfintersecting is considered inclusive a compound path results from the combination of multiple paths inclusive and exclusive and the boolean operations that ultimately determine what the combined path contains for instance an inclusive path which contains a smaller exclusive path results in a shape with a hole defined by the exclusive path one common use of a clipping path is to cull objects that do not need to be rendered because they are outside the users viewport or obscured by display elements such as a hud clipping planes are used in 3d computer graphics in order to prevent the renderer from calculating surfaces at an extreme distance from the viewer the plane is perpendicular to the camera a set distance away the threshold and occupies the entire viewport used in realtime rendering clipping planes can help preserve processing for objects within clear sight the use of clipping planes can result in a detraction from the realism of a scene as the viewer may notice that everything at the threshold is not rendered correctly or seems to disappear spontaneously the addition of fog — a variably transparent region of color or texture just before the clipping plane — can help soften the transition between what should be in plain sight and opaque and what should be beyond notice and fully transparent and therefore does not need to be rendered clipping path services are professional offerings provided by companies for extracting objects or people from still imagery and typically includes other photo editing and manipulation services addressees of such services are primarily photography and graphic design studios advertising agencies web designers as well as lithographers and printing companies clipping path service companies commonly reside in developing countries such as bangladesh philippine india pakistan and nepal which can provide their services at comparatively low cost to developed countries fostering outsourcing of such activities silhouette'</li></ul> | | 42 | <ul><li>'the tree this is why rapidly growing populations yield trees with long tip branches if the rate of exponential growth is estimated from a gene genealogy it may be combined with knowledge of the duration of infection or the serial interval d displaystyle d for a particular pathogen to estimate the basic reproduction number r 0 displaystyle r0 the two may be linked by the following equation r r 0 − 1 d displaystyle rfrac r01d for example one of the first estimates of r 0 displaystyle r0 was for pandemic h1n1 influenza in 2009 by using a coalescentbased analysis of 11 hemagglutinin sequences in combination with prior data about the infectious period for influenza compartmental models infectious disease epidemics are often characterized by highly nonlinear and rapid changes in the number of infected individuals and the effective population size of the virus in such cases birth rates are highly variable which can diminish the correspondence between effective population size and the prevalence of infection many mathematical models have been developed in the field of mathematical epidemiology to describe the nonlinear time series of prevalence of infection and the number of susceptible hosts a well studied example is the susceptibleinfectedrecovered sir system of differential equations which describes the fractions of the population s t displaystyle st susceptible i t displaystyle it infected and r t displaystyle rt recovered as a function of time d s d t − β s i displaystyle frac dsdtbeta si d i d t β s i − γ i displaystyle frac didtbeta sigamma i and d r d t γ i displaystyle frac drdtgamma i here β displaystyle beta is the per capita rate of transmission to susceptible hosts and γ displaystyle gamma is the rate at which infected individuals recover whereupon they are no longer infectious in this case the incidence of new infections per unit time is f t β s i displaystyle ftbeta si which is analogous to the birth rate in classical population genetics models the general formula for the rate of coalescence is λ n t n 2 2 f t i t 2 displaystyle lambda ntn choose 2frac 2ftit2 the ratio 2 n 2 i t 2 displaystyle 2n choose 2it2 can be understood as arising from the probability that two lineages selected uniformly at random are both ancestral to the sample this probability is the ratio of the number of ways to pick two lineages without replacement from the set of lineages and from the set of all infections n 2 i t 2 ≈ 2 n 2 i t 2 displaystyle'</li><li>'dna sense strand looks like the messenger rna mrna transcript and can therefore be used to read the expected codon sequence that will ultimately be used during translation protein synthesis to build an amino acid sequence and then a protein for example the sequence atg within a dna sense strand corresponds to an aug codon in the mrna which codes for the amino acid methionine however the dna sense strand itself is not used as the template for the mrna it is the dna antisense strand that serves as the source for the protein code because with bases complementary to the dna sense strand it is used as a template for the mrna since transcription results in an rna product complementary to the dna template strand the mrna is complementary to the dna antisense strandhence a base triplet 3 ′ tac5 ′ in the dna antisense strand complementary to the 5 ′ atg3 ′ of the dna sense strand is used as the template which results in a 5 ′ aug3 ′ base triplet in the mrna the dna sense strand will have the triplet atg which looks similar to the mrna triplet aug but will not be used to make methionine because it will not be directly used to make mrna the dna sense strand is called a sense strand not because it will be used to make protein it wont be but because it has a sequence that corresponds directly to the rna codon sequence by this logic the rna transcript itself is sometimes described as sense dna strand 1 antisense strand transcribed to → rna strand sensedna strand 2 sense strandsome regions within a doublestranded dna molecule code for genes which are usually instructions specifying the order in which amino acids are assembled to make proteins as well as regulatory sequences splicing sites noncoding introns and other gene products for a cell to use this information one strand of the dna serves as a template for the synthesis of a complementary strand of rna the transcribed dna strand is called the template strand with antisense sequence and the mrna transcript produced from it is said to be sense sequence the complement of antisense the untranscribed dna strand complementary to the transcribed strand is also said to have sense sequence it has the same sense sequence as the mrna transcript though t bases in dna are substituted with u bases in rna the names assigned to each strand actually depend on which direction you are writing the sequence that contains the information for proteins the sense information not on which strand is depicted as on the top or on the bottom which is arbitrary the only biological information that is important for labeling strands is the relative locations of the'</li><li>'in molecular biology and genetics the sense of a nucleic acid molecule particularly of a strand of dna or rna refers to the nature of the roles of the strand and its complement in specifying a sequence of amino acids depending on the context sense may have slightly different meanings for example the negativesense strand of dna is equivalent to the template strand whereas the positivesense strand is the nontemplate strand whose nucleotide sequence is equivalent to the sequence of the mrna transcript because of the complementary nature of basepairing between nucleic acid polymers a doublestranded dna molecule will be composed of two strands with sequences that are reverse complements of each other to help molecular biologists specifically identify each strand individually the two strands are usually differentiated as the sense strand and the antisense strand an individual strand of dna is referred to as positivesense also positive or simply sense if its nucleotide sequence corresponds directly to the sequence of an rna transcript which is translated or translatable into a sequence of amino acids provided that any thymine bases in the dna sequence are replaced with uracil bases in the rna sequence the other strand of the doublestranded dna molecule is referred to as negativesense also negative − or antisense and is reverse complementary to both the positivesense strand and the rna transcript it is actually the antisense strand that is used as the template from which rna polymerases construct the rna transcript but the complementary basepairing by which nucleic acid polymerization occurs means that the sequence of the rna transcript will look identical to the positivesense strand apart from the rna transcripts use of uracil instead of thymine sometimes the phrases coding strand and template strand are encountered in place of sense and antisense respectively and in the context of a doublestranded dna molecule the usage of these terms is essentially equivalent however the codingsense strand need not always contain a code that is used to make a protein both proteincoding and noncoding rnas may be transcribed the terms sense and antisense are relative only to the particular rna transcript in question and not to the dna strand as a whole in other words either dna strand can serve as the sense or antisense strand most organisms with sufficiently large genomes make use of both strands with each strand functioning as the template strand for different rna transcripts in different places along the same dna molecule in some cases rna transcripts can be transcribed in both directions ie on either strand from a common promoter region or be transcribed from within introns on either strand see ambisense below the'</li></ul> | | 27 | <ul><li>'2nev1 where u is the ion velocity solving for u the following relation is found u 2 n e v 1 m displaystyle usqrt frac 2nev1m lets say that for at a certain ionization voltage a singly charged hydrogen ion acquires a resulting velocity of 14x106 ms−1 at 10kv a singly charged deuterium ion under the sample conditions would have acquired roughly 14x106141 ms−1 if a detector was placed at a distance of 1 m the ion flight times would be 114x106 and 14114x106 s thus the time of the ion arrival can be used to infer the ion type itself if the evaporation time is known from the above equation it can be rearranged to show that m n − 2 e v 1 u 2 displaystyle frac mnfrac 2ev1u2 given a known flight distance f for the ion and a known flight time t u f t displaystyle ufrac ft and thus one can substitute these values to obtain the masstocharge for the ion m n − 2 e v 1 t f 2 displaystyle frac mn2ev1leftfrac tfright2 thus for an ion which traverses a 1 m flight path across a time of 2000 ns given an initial accelerating voltage of 5000 v v in si units is kgm2s3a1 and noting that one amu is 1×10−27 kg the masstocharge ratio more correctly the masstoionisation value ratio becomes 386 amucharge the number of electrons removed and thus net positive charge on the ion is not known directly but can be inferred from the histogram spectrum of observed ions the magnification in an atom is due to the projection of ions radially away from the small sharp tip subsequently in the farfield the ions will be greatly magnified this magnification is sufficient to observe field variations due to individual atoms thus allowing in field ion and field evaporation modes for the imaging of single atoms the standard projection model for the atom probe is an emitter geometry that is based upon a revolution of a conic section such as a sphere hyperboloid or paraboloid for these tip models solutions to the field may be approximated or obtained analytically the magnification for a spherical emitter is inversely proportional to the radius of the tip given a projection directly onto a spherical screen the following equation can be obtained geometrically m r s c r e e n r t'</li><li>'transport of cancer proteins and in delivering microrna to the surrounding healthy tissue it leads to a change of healthy cell phenotype and creates a tumorfriendly environment microvesicles play an important role in tumor angiogenesis and in the degradation of matrix due to the presence of metalloproteases which facilitate metastasis they are also involved in intensification of the function of regulatory tlymphocytes and in the induction of apoptosis of cytotoxic tlymphocytes because microvesicles released from a tumor cell contain fas ligand and trail they prevent differentiation of monocytes to dendritic cells tumor microvesicles also carry tumor antigen so they can be an instrument for developing tumor vaccines circulating mirna and segments of dna in all body fluids can be potential markers for tumor diagnostics rheumatoid arthritis is a chronic systemic autoimmune disease characterized by inflammation of joints in the early stage there are abundant th17 cells producing proinflammatory cytokines il17a il17f tnf il21 and il22 in the synovial fluid regulatory tlymphocytes have a limited capability to control these cells in the late stage the extent of inflammation correlates with numbers of activated macrophages that contribute to joint inflammation and bone and cartilage destruction because they have the ability to transform themselves into osteoclasts that destroy bone tissue synthesis of reactive oxygen species proteases and prostaglandins by neutrophils is increased activation of platelets via collagen receptor gpvi stimulates the release of microvesicles from platelet cytoplasmic membranes these microparticles are detectable at a high level in synovial fluid and they promote joint inflammation by transporting proinflammatory cytokine il1 in addition to detecting cancer it is possible to use microvesicles as biological markers to give prognoses for various diseases many types of neurological diseases are associated with increased level of specific types of circulating microvesicles for example elevated levels of phosphorylated tau proteins can be used to diagnose patients in early stages of alzheimers additionally it is possible to detect increased levels of cd133 in microvesicles of patients with epilepsy circulating microvesicles may be useful for the delivery of drugs to very specific targets using electroporation or centrifugation to insert drugs into microvesicles targeting specific cells it is possible to target the drug very efficiently this targeting can help by reducing necessary'</li><li>'as a field in the 1980s occurred through convergence of drexlers theoretical and public work which developed and popularized a conceptual framework for nanotechnology and highvisibility experimental advances that drew additional widescale attention to the prospects of atomic control of matter in the 1980s two major breakthroughs sparked the growth of nanotechnology in the modern era first the invention of the scanning tunneling microscope in 1981 which enabled visualization of individual atoms and bonds and was successfully used to manipulate individual atoms in 1989 the microscopes developers gerd binnig and heinrich rohrer at ibm zurich research laboratory received a nobel prize in physics in 1986 binnig quate and gerber also invented the analogous atomic force microscope that year second fullerenes were discovered in 1985 by harry kroto richard smalley and robert curl who together won the 1996 nobel prize in chemistry c60 was not initially described as nanotechnology the term was used regarding subsequent work with related carbon nanotubes sometimes called graphene tubes or bucky tubes which suggested potential applications for nanoscale electronics and devices the discovery of carbon nanotubes is largely attributed to sumio iijima of nec in 1991 for which iijima won the inaugural 2008 kavli prize in nanoscience in the early 2000s the field garnered increased scientific political and commercial attention that led to both controversy and progress controversies emerged regarding the definitions and potential implications of nanotechnologies exemplified by the royal societys report on nanotechnology challenges were raised regarding the feasibility of applications envisioned by advocates of molecular nanotechnology which culminated in a public debate between drexler and smalley in 2001 and 2003meanwhile commercialization of products based on advancements in nanoscale technologies began emerging these products are limited to bulk applications of nanomaterials and do not involve atomic control of matter some examples include the silver nano platform for using silver nanoparticles as an antibacterial agent nanoparticlebased transparent sunscreens carbon fiber strengthening using silica nanoparticles and carbon nanotubes for stainresistant textilesgovernments moved to promote and fund research into nanotechnology such as in the us with the national nanotechnology initiative which formalized a sizebased definition of nanotechnology and established funding for research on the nanoscale and in europe via the european framework programmes for research and technological development by the mid2000s new and serious scientific attention began to flourish projects emerged to produce nanotechnology roadmaps which center on atomically precise manipulation of matter and discuss existing and projected capabilities goals and applications nano'</li></ul> | | 12 | <ul><li>'in mathematics a composition of an integer n is a way of writing n as the sum of a sequence of strictly positive integers two sequences that differ in the order of their terms define different compositions of their sum while they are considered to define the same partition of that number every integer has finitely many distinct compositions negative numbers do not have any compositions but 0 has one composition the empty sequence each positive integer n has 2n−1 distinct compositions a weak composition of an integer n is similar to a composition of n but allowing terms of the sequence to be zero it is a way of writing n as the sum of a sequence of nonnegative integers as a consequence every positive integer admits infinitely many weak compositions if their length is not bounded adding a number of terms 0 to the end of a weak composition is usually not considered to define a different weak composition in other words weak compositions are assumed to be implicitly extended indefinitely by terms 0 to further generalize an arestricted composition of an integer n for a subset a of the nonnegative or positive integers is an ordered collection of one or more elements in a whose sum is n the sixteen compositions of 5 are 5 4 1 3 2 3 1 1 2 3 2 2 1 2 1 2 2 1 1 1 1 4 1 3 1 1 2 2 1 2 1 1 1 1 3 1 1 2 1 1 1 1 2 1 1 1 1 1compare this with the seven partitions of 5 5 4 1 3 2 3 1 1 2 2 1 2 1 1 1 1 1 1 1 1it is possible to put constraints on the parts of the compositions for example the five compositions of 5 into distinct terms are 5 4 1 3 2 2 3 1 4compare this with the three partitions of 5 into distinct terms 5 4 1 3 2 conventionally the empty composition is counted as the sole composition of 0 and there are no compositions of negative integers there are 2n−1 compositions of n ≥ 1 here is a proof placing either a plus sign or a comma in each of the n − 1 boxes of the array 1 [UNK] 1 [UNK] … [UNK] 1 [UNK] 1 [UNK] n displaystyle big overbrace 1square 1square ldots square 1square 1 nbig produces a unique composition of n conversely every composition of n determines an assignment of pluses and commas since there are n − 1 binary choices the result follows the same argument shows that the number of compositions of n into exactly k parts a kcomposition is given by the binomial coefficient n − 1 k −'</li><li>'displaystyle ykbf tesum i1infty tialpha kigamma kesum i1infty tibeta kiquad k1dots n we arrive at the wronskian determinant formula τ α → β → γ → n t y 1 t y 2 t [UNK] y n t y 1 ′ t y 2 ′ t [UNK] y n ′ t [UNK] [UNK] [UNK] [UNK] y 1 n − 1 t y 2 n − 1 t [UNK] y n n − 1 t displaystyle tau vec alpha vec beta vec gamma nbf tbeginvmatrixy1bf ty2bf tcdots ynbf ty1bf ty2bf tcdots ynbf tvdots vdots ddots vdots y1n1bf ty2n1bf tcdots ynn1bf tendvmatrix which gives the general n displaystyle n soliton τ displaystyle tau function let x displaystyle x be a compact riemann surface of genus g displaystyle g and fix a canonical homology basis a 1 … a g b 1 … b g displaystyle a1dots agb1dots bg of h 1 x z displaystyle h1xmathbf z with intersection numbers a i ∘ a j b i ∘ b j 0 a i ∘ b j δ i j 1 ≤ i j ≤ g displaystyle aicirc ajbicirc bj0quad aicirc bjdelta ijquad 1leq ijleq g let ω i i 1 … g displaystyle omega ii1dots g be a basis for the space h 1 x displaystyle h1x of holomorphic differentials satisfying the standard normalization conditions [UNK] a i ω j δ i j [UNK] b j ω j b i j displaystyle oint aiomega jdelta ijquad oint bjomega jbij where b displaystyle b is the riemann matrix of periods the matrix b displaystyle b belongs to the siegel upper half space s g b ∈ m a t g × g c b t b im b is positive definite displaystyle mathbf s gleftbin mathrm mat gtimes gmathbf c colon btb textimbtext is positive definiteright the riemann θ displaystyle theta function on c g displaystyle mathbf c g corresponding to the period matrix b displaystyle b is defined to be θ z b [UNK] n ∈ z g e i π n'</li><li>'combinatorial chemistry comprises chemical synthetic methods that make it possible to prepare a large number tens to thousands or even millions of compounds in a single process these compound libraries can be made as mixtures sets of individual compounds or chemical structures generated by computer software combinatorial chemistry can be used for the synthesis of small molecules and for peptides strategies that allow identification of useful components of the libraries are also part of combinatorial chemistry the methods used in combinatorial chemistry are applied outside chemistry too combinatorial chemistry had been invented by furka a eotvos lorand university budapest hungary who described the principle of it the combinatorial synthesis and a deconvolution procedure in a document that was notarized in 1982 the principle of the combinatorial method is synthesize a multicomponent compound mixture combinatorial library in a single stepwise procedure and screen it to find drug candidates or other kinds of useful compounds also in a single process the most important innovation of the combinatorial method is to use mixtures in the synthesis and screening that ensures the high productivity of the process motivations that led to the invention had been published in 2002 synthesis of molecules in a combinatorial fashion can quickly lead to large numbers of molecules for example a molecule with three points of diversity r1 r2 and r3 can generate n r 1 × n r 2 × n r 3 displaystyle nr1times nr2times nr3 possible structures where n r 1 displaystyle nr1 n r 2 displaystyle nr2 and n r 3 displaystyle nr3 are the numbers of different substituents utilizedthe basic principle of combinatorial chemistry is to prepare libraries of a very large number of compounds then identify the useful components of the libraries although combinatorial chemistry has only really been taken up by industry since the 1990s its roots can be seen as far back as the 1960s when a researcher at rockefeller university bruce merrifield started investigating the solidphase synthesis of peptides in its modern form combinatorial chemistry has probably had its biggest impact in the pharmaceutical industry researchers attempting to optimize the activity profile of a compound create a library of many different but related compounds advances in robotics have led to an industrial approach to combinatorial synthesis enabling companies to routinely produce over 100000 new and unique compounds per yearin order to handle the vast number of structural possibilities researchers often create a virtual library a computational enumeration of all possible structures of a given pharmacophore with all available reactants such a library can consist of thousands to'</li></ul> | | 37 | <ul><li>'fits the form conjunction introduction bob likes apples bob likes oranges therefore bob likes apples and bob likes orangesconjunction elimination is another classically valid simple argument form intuitively it permits the inference from any conjunction of either element of that conjunction a displaystyle a and b displaystyle b therefore a displaystyle a or alternatively a displaystyle a and b displaystyle b therefore b displaystyle b in logical operator notation a ∧ b displaystyle aland b [UNK] a displaystyle vdash a or alternatively a ∧ b displaystyle aland b [UNK] b displaystyle vdash b a conjunction a ∧ b displaystyle aland b is proven false by establishing either ¬ a displaystyle neg a or ¬ b displaystyle neg b in terms of the object language this reads ¬ a → ¬ a ∧ b displaystyle neg ato neg aland b this formula can be seen as a special case of a → c → a ∧ b → c displaystyle ato cto aland bto c when c displaystyle c is a false proposition if a displaystyle a implies ¬ b displaystyle neg b then both ¬ a displaystyle neg a as well as a displaystyle a prove the conjunction false a → ¬ b → ¬ a ∧ b displaystyle ato neg bto neg aland b in other words a conjunction can actually be proven false just by knowing about the relation of its conjuncts and not necessary about their truth values this formula can be seen as a special case of a → b → c → a ∧ b → c displaystyle ato bto cto aland bto c when c displaystyle c is a false proposition either of the above are constructively valid proofs by contradiction commutativity yes associativity yes distributivity with various operations especially with or idempotency yes monotonicity yes truthpreserving yeswhen all inputs are true the output is true falsehoodpreserving yeswhen all inputs are false the output is false walsh spectrum 1111 nonlinearity 1 the function is bent if using binary values for true 1 and false 0 then logical conjunction works exactly like normal arithmetic multiplication in highlevel computer programming and digital electronics logical conjunction is commonly represented by an infix operator usually as a keyword such as and an algebraic multiplication or the ampersand symbol sometimes doubled as in many languages also provide shortcircuit control structures corresponding to logical conjunction logical conjunction is often used for bitwise operations where 0 corresponds to false and 1'</li><li>'into the truth value of them on the other hand some signs can be declarative assertions of propositions without forming a sentence nor even being linguistic eg traffic signs convey definite meaning which is either true or false propositions are also spoken of as the content of beliefs and similar intentional attitudes such as desires preferences and hopes for example i desire that i have a new car or i wonder whether it will snow or whether it is the case that it will snow desire belief doubt and so on are thus called propositional attitudes when they take this sort of content bertrand russell held that propositions were structured entities with objects and properties as constituents one important difference between ludwig wittgensteins view according to which a proposition is the set of possible worldsstates of affairs in which it is true is that on the russellian account two propositions that are true in all the same states of affairs can still be differentiated for instance the proposition two plus two equals four is distinct on a russellian account from the proposition three plus three equals six if propositions are sets of possible worlds however then all mathematical truths and all other necessary truths are the same set the set of all possible worlds in relation to the mind propositions are discussed primarily as they fit into propositional attitudes propositional attitudes are simply attitudes characteristic of folk psychology belief desire etc that one can take toward a proposition eg it is raining snow is white etc in english propositions usually follow folk psychological attitudes by a that clause eg jane believes that it is raining in philosophy of mind and psychology mental states are often taken to primarily consist in propositional attitudes the propositions are usually said to be the mental content of the attitude for example if jane has a mental state of believing that it is raining her mental content is the proposition it is raining furthermore since such mental states are about something namely propositions they are said to be intentional mental states explaining the relation of propositions to the mind is especially difficult for nonmentalist views of propositions such as those of the logical positivists and russell described above and gottlob freges view that propositions are platonist entities that is existing in an abstract nonphysical realm so some recent views of propositions have taken them to be mental although propositions cannot be particular thoughts since those are not shareable they could be types of cognitive events or properties of thoughts which could be the same across different thinkersphilosophical debates surrounding propositions as they relate to propositional attitudes have also recently centered on whether they are internal or external to the agent or whether they are mindde'</li><li>'relations emphasize the role inflectional morphology in english the subject can or must agree with the finite verb in person and number and in languages that have morphological case the subject and object and other verb arguments are identified in terms of the case markers that they bear eg nominative accusative dative genitive ergative absolutive etc inflectional morphology may be a more reliable means for defining the grammatical relations than the configuration but its utility can be very limited in many cases for instance inflectional morphology is not going to help in languages that lack inflectional morphology almost entirely such as mandarin and even with english inflectional morphology does not help much since english largely lacks morphological case the difficulties facing attempts to define the grammatical relations in terms of thematic or configurational or morphological criteria can be overcome by an approach that posits prototypical traits the prototypical subject has a cluster of thematic configurational andor morphological traits and the same is true of the prototypical object and other verb arguments across languages and across constructions within a language there can be many cases where a given subject argument may not be a prototypical subject but it has enough subjectlike traits to be granted subject status similarly a given object argument may not be prototypical in one way or another but if it has enough objectlike traits then it can nevertheless receive the status of object this third strategy is tacitly preferred by most work in theoretical syntax all those theories of syntax that avoid providing concrete definitions of the grammatical relations but yet reference them often are perhaps unknowingly pursuing an approach in terms of prototypical traits in dependency grammar dg theories of syntax every headdependent dependency bears a syntactic function the result is that an inventory consisting of dozens of distinct syntactic functions is needed for each language for example a determinernoun dependency might be assumed to bear the det determiner function and an adjectivenoun dependency is assumed to bear the attr attribute function these functions are often produced as labels on the dependencies themselves in the syntactic tree eg the tree contains the following syntactic functions attr attribute ccomp clause complement det determiner mod modifier obj object subj subject and vcomp verb complement the actual inventories of syntactic functions will differ from the one suggested here in the number and types of functions that are assumed in this regard this tree is merely intended to be illustrative of the importance that the syntactic functions can take on in some theories of syntax and grammar dependency grammar headdirectionality parameter'</li></ul> | | 35 | <ul><li>'##capes structure robin thwaites brian slater 2004 the concept of pedodiversity and its application in diverse geoecological systems 1 zinck j a 1988 physiography and soils lecturenotes for soil students soil science division soil survey courses subject matter k6 itc enschede the netherlands'</li><li>'decaying carcasses of salmon that have completed spawning and died numerical modeling suggests that residence time of mdn within a salmon spawning reach is inversely proportional to the amount of redd construction within the river measurements of respiration within a salmonbearing river in alaska further suggest that salmon bioturbation of the river bed plays a significant role in mobilizing mdn and limiting primary productivity while salmon spawning is active the river ecosystem was found to switch from a net autotrophic to heterotrophic system in response to decreased primary production and increased respiration the decreased primary production in this study was attributed to the loss of benthic primary producers who were dislodged due to bioturbation while increased respiration was thought to be due to increased respiration of organic carbon also attributed to sediment mobilization from salmon redd construction while marine derived nutrients are generally thought to increase productivity in riparian and freshwater ecosystems several studies have suggested that temporal effects of bioturbation should be considered when characterizing salmon influences on nutrient cycles major marine bioturbators range from small infaunal invertebrates to fish and marine mammals in most marine sediments however they are dominated by small invertebrates including polychaetes bivalves burrowing shrimp and amphipods shallow and coastal coastal ecosystems such as estuaries are generally highly productive which results in the accumulation of large quantities of detritus organic waste these large quantities in addition to typically small sediment grain size and dense populations make bioturbators important in estuarine respiration bioturbators enhance the transport of oxygen into sediments through irrigation and increase the surface area of oxygenated sediments through burrow construction bioturbators also transport organic matter deeper into sediments through general reworking activities and production of fecal matter this ability to replenish oxygen and other solutes at sediment depth allows for enhanced respiration by both bioturbators as well as the microbial community thus altering estuarine elemental cyclingthe effects of bioturbation on the nitrogen cycle are welldocumented coupled denitrification and nitrification are enhanced due to increased oxygen and nitrate delivery to deep sediments and increased surface area across which oxygen and nitrate can be exchanged the enhanced nitrificationdenitrification coupling contributes to greater removal of biologically available nitrogen in shallow and coastal environments which can be further enhanced by the excretion of ammonium by bioturbators and other organisms residing in bioturbator burrows while both nitrification and denitrification are enhanced by bioturbation the effects of bioturbat'</li><li>'resistance was reported due to calcium carbonate precipitation resulting from microbial activity the increase of soil strength from micp is a result of the bonding of the grains and the increased density of the soil research has shown a linear relationship between the amount of carbonate precipitation and the increase in strength and porosity a 90 decrease in porosity has also been observed in micp treated soil light microscopic imaging suggested that the mechanical strength enhancement of cemented sandy material is caused mostly due to pointtopoint contacts of calcium carbonate crystals and adjacent sand grainsonedimensional column experiments allowed the monitoring of treatment progration by the means of change in pore fluid chemistry triaxial compression tests on untreated and biocemented ottawa sand have shown an increase in shear strength by a factor of 18 changes in ph and concentrations of urea ammonium calcium and calcium carbonate in pore fluid with the distance from the injection point in 5meter column experiments have shown that bacterial activity resulted in successful hydrolysis of urea increase in ph and precipitation of calcite however such activity decreased as the distance from the injection point increased shear wave velocity measurements demonstrated that positive correlation exists between shear wave velocity and the amount of precipitated calciteone of the first patents on ground improvement by micp was the patent “ microbial biocementation ” by murdoch university australia a large scale 100 m3 have shown a significant increase in shear wave velocity was observed during the treatment originally micp was tested and designed for underground applications in water saturated ground requiring injection and production pumps recent work has demonstrated that surface percolation or irrigation is also feasible and in fact provides more strength per amount of calcite provided because crystals form more readily at the bridging points between sand particles over which the water percolatesbenefits of micp for liquefaction prevention micp has the potential to be a costeffective and green alternative to traditional methods of stabilizing soils such as chemical grouting which typically involve the injection of synthetic materials into the soil these synthetic additives are typically costly and can create environmental hazards by modifying the ph and contaminating soils and groundwater excluding sodium silicate all traditional chemical additives are toxic soils engineered with micp meet green construction requirements because the process exerts minimal disturbance to the soil and the environment possible limitations of micp as a cementation technique micp treatment may be limited to deep soil due to limitations of bacterial growth and movement in subsoil micp may be limited to the soils containing limited amounts of fines due to the reduction in pore'</li></ul> | | 5 | <ul><li>'hemolithin sometimes confused with the similar space polymer hemoglycin is a proposed protein containing iron and lithium of extraterrestrial origin according to an unpublished preprint the result has not been published in any peerreviewed scientific journal the protein was purportedly found inside two cv3 meteorites allende and acfer086 by a team of scientists led by harvard university biochemist julie mcgeoch the report of the discovery was met with some skepticism and suggestions that the researchers had extrapolated too far from incomplete data the detected hemolithin protein was reported to have been found inside two cv3 meteorites allende and acfer 086 acfer086 where the complete molecule was detected rather than fragments allende was discovered in agemour algeria in 1990 according to the researchers mass spectrometry hemolithin is largely composed of glycine and hydroxyglycine amino acids the researchers noted that the protein was related to “ very high extraterrestrial ratios of deuteriumhydrogen dh such high dh ratios are not found anywhere on earth but are consistent with longperiod comets and suggest as reported that the protein was formed in the protosolar disc or perhaps even earlier in interstellar molecular clouds that existed long before the sun ’ s birtha natural development of hemolithin may have started with glycine forming first and then later linking with other glycine molecules into polymer chains and later still combining with iron and oxygen atoms the iron and oxygen atoms reside at the end of the newly found molecule the researchers speculate that the iron oxide grouping formed at the end of the molecule may be able to absorb photons thereby enabling the molecule to split water h2o into hydrogen and oxygen and as a result produce a source of energy that might be useful to the development of lifeexobiologist and chemist jeffrey bada expressed concerns about the possible protein discovery commenting the main problem is the occurrence of hydroxyglycine which to my knowledge has never before been reported in meteorites or in prebiotic experiments nor is it found in any proteins thus this amino acid is a strange one to find in a meteorite and i am highly suspicious of the results likewise lee cronin of the university of glasgow stated the structure makes no sense hemolithin is the name given to a protein molecule isolated from two cv3 meteorites allende and acfer086 its deuterium to hydrogen ratio is 26 times terrestrial which is consistent with it having formed in an interstellar molecular cloud or later in'</li><li>'mars surface via telepresence from mars orbit permitting rapid exploration and use of human cognition to take advantage of chance discoveries and feedback from the results obtained so farthey found that telepresence exploration of mars has many advantages the astronauts have near realtime control of the robots and can respond immediately to discoveries it also prevents contamination both ways and has mobility benefits as wellreturn of the sample to orbit has the advantage that it permits analysis of the sample without delay to detect volatiles that may be lost during a voyage home this was the conclusion of a meeting of researchers at the nasa goddard space flight center in 2012 similar methods could be used to directly explore other biologically sensitive moons such as europa titan or enceladus once the human presence in the vicinity becomes possible in august 2019 scientists reported that a capsule containing tardigrades a resilient microbial animal in a cryptobiotic state may have survived for a while on the moon after the april 2019 crash landing of beresheet a failed israeli lunar lander'</li><li>'soil neutron absorption elements cl fe ti s etc monitoring of the neutron component of the natural radiation background and estimation of neutron radiation dose at the martian surface from galactic cosmic rays and solar particle events the potential to monitor seasonal changes of the neutron environment due to variations of atmospheric and subsurface properties astrobiology life on mars water on mars'</li></ul> | | 8 | <ul><li>'airbus a380 boeing 787 airbus a400m airbus a350 sukhoi superjet 100 atr 42 atr 72 600 agustawestland aw101 agustawestland aw189 agustawestland aw169 irkut mc21 bombardier global express bombardier cseries learjet 85 comac arj21 comac c919 and agustawestland aw149'</li><li>'an air data inertial reference unit adiru is a key component of the integrated air data inertial reference system adirs which supplies air data airspeed angle of attack and altitude and inertial reference position and attitude information to the pilots electronic flight instrument system displays as well as other systems on the aircraft such as the engines autopilot aircraft flight control system and landing gear systems an adiru acts as a single fault tolerant source of navigational data for both pilots of an aircraft it may be complemented by a secondary attitude air data reference unit saaru as in the boeing 777 designthis device is used on various military aircraft as well as civilian airliners starting with the airbus a320 and boeing 777 an adirs consists of up to three fault tolerant adirus located in the aircraft electronic rack an associated control and display unit cdu in the cockpit and remotely mounted air data modules adms the no 3 adiru is a redundant unit that may be selected to supply data to either the commanders or the copilots displays in the event of a partial or complete failure of either the no 1 or no 2 adiru there is no crosschannel redundancy between the nos 1 and 2 adirus as no 3 adiru is the only alternate source of air and inertial reference data an inertial reference ir fault in adiru no 1 or 2 will cause a loss of attitude and navigation information on their associated primary flight display pfd and navigation display nd screens an air data reference adr fault will cause the loss of airspeed and altitude information on the affected display in either case the information can only be restored by selecting the no 3 adirueach adiru comprises an adr and an inertial reference ir component the air data reference adr component of an adiru provides airspeed mach number angle of attack temperature and barometric altitude data ram air pressure and static pressures used in calculating airspeed are measured by small adms located as close as possible to the respective pitot and static pressure sensors adms transmit their pressures to the adirus through arinc 429 data buses the ir component of an adiru gives attitude flight path vector ground speed and positional data the ring laser gyroscope is a core enabling technology in the system and is used together with accelerometers gps and other sensors to provide raw data the primary benefits of a ring laser over older mechanical gyroscopes are that there are no moving parts it is rugged and lightweight frictionless and does not resist a change in pre'</li><li>'##level requirements but usually not both in general cast position papers were issued to harmonize review of software projects conducted under do178b or do254 but they were also intended to inform the development and eventual release of do178c and supporting publications as much of the discussion and rationale recorded in the casts is not included in the newer publications the casts remain a source of insight into the updated standards this cast15 position paper is no longer provided on the faas publications site as the teams concerns were addressed by faq 81 in do248c supporting information for do178c and do278a and by changes and clarification in the release of do178 revision c the faq was originated by european certification authorities who were concerned with the risk of applicants developing untraceable and unverifiable gaps in their requirements and it does not recommend merging high and low levels of requirements into a single level the note the applicant may be required to justify software development processes that produce a single level of requirements was added to do178c section 50 page 31however neither publication completely incorporates the full discussion of this topic that is recorded cast15 much of the same content of the original cast15 position paper is published in the 2012 easa certification memo easa cmswceh002 section 23 merging highlevel and lowlevel requirements do178cdo178b provides guidance for merging highlevel and lowlevel software requirements nominally in the do178cdo178b context the highlevel requirements for a certified software product are distinct from the lowlevel software requirements the former being outputs of the software requirements process and the latter being outputs of the software design process highlevel requirements are essentially those system requirements that are allocated to the software product an outside view of what the full software product shall be and do lowlevel requirements are the results of decomposition and elaboration of requirements such that the source code may be produced reviewed and tested directly from the lowlevel requirements an inside view of how the software product shall be implemented to do itin some applications the systemhighlevel requirements are of sufficient simplicity and detail that the source code can be produced and verified directly in this situation the systemhighlevel requirements are also considered to be lowlevel requirements which means that in addition to accomplishing the objectives for highlevel requirements the same requirements must also accomplish the objectives for lowlevel requirementsthe concern that prompted cast15 is that some applicants for software certification interpreted the above guidance as permitting'</li></ul> | | 10 | <ul><li>'alternative upstream 3 splice sites by recruiting u2af35 and u2af65 to specific ese pyrimidine sequences in the exon of the premrna transcriptsr proteins can also alternatively select different downstream 5 splice sites by binding to ese upstream of the splice site the suspected mechanism is that alternative 5 splice sites are chosen when sr proteins bind to upstream ese and interacts with u170k and together recruit u1 to the 5 splice sitein constitutive splicing sr proteins bind to u2af and u170k to bridge the gap between the two components of the spliceosome to mark the 3 and 5 splice sites constitutively spliced exons have many different sr protein binding sequences that act as constitutive splicing enhancers the difference between alternative and constitutive splicing is that during alternative splicing the splice site choice is regulated exon independent roles exon independent roles of sr proteins are called exon independent because it is not known if sr proteins must bind to exons in order for them to perform exon independent activities sr proteins can bind to u1 and u2af while they are bound to the 3 and 5 splice sites at the same time without binding to the premrna transcript the sr protein thus creates a bridge across the intron in what is called a crossintron interaction sr proteins also recruit the trisnrnp molecule u4u6 · u5 to the maturing spliceosome complex by interacting with rs domains in the trisnrnp sr proteins might be able to bind directly to the 5 splice site and recruit the u1 complex of the spliceosome sr proteins can be either shuttling sr proteins or nonshuttling sr proteins some sr proteins associate with rna export factor tap a nuclear export factor to shuttle rna out of the nucleus the shuttling property of the sr protein is determined by the phosphorylation status of the rs domain when hyperphosphorylated sr proteins bind to premrna transcripts but sr proteins become partially dephosphorylated during transcription allowing them to interact with nxf1 thus the phosphorylation of the rs domain determines if the sr proteins stays with the rna transcript after cotranscription splicing and while the mrnp matures if the rs domain remains phosphorylated then the sr protein will not shuttle from the nucleus to the cytosol the phosphorylated sr protein will be'</li><li>'##ps also adenylates rhoa and cell division cycle 42 cdc42 leading to a disaggregation of the actin filament network as a result the host cells actin cytoskeleton control is disabled leading to cell roundingibpa is secreted into eukaryotic cells from h somni a gramnegative bacterium in cattle that causes respiratory epithelium infection this effector contains two fic domains at the cterminal region ampylation of the ibpa fic domain of rho family gtpases is responsible for its cytotoxicity both fic domains have similar effects on host cells ’ cytoskeleton as vops the ampylation on a tyrosine residue of the switch 1 region blocks the interaction of the gtpases with downstream substrates such as pak drra is the doticm type iv translocation system substrate drra from legionella pneumophila it is the effector secreted by l pneumophila to modify gtpases of the host cells this modification increases the survival of bacteria in host cells drra is composed of rab1b specific guanine nucleotide exchange factor gef domain a cterminal lipid binding domain and an nterminal domain with unclear cytotoxic properties research works show that nterminal and fulllength drra shows ampylators activity toward hosts rab1b protein ras related protein which is also the substrate of rab1b gef domain rab1b protein is the gtpase rab to regulate vesicle transportation and membrane fusion the adenylation by bacteria ampylators prolong gtpbound state of rab1b thus the role of effector drra is connected toward the benefits of bacterias vacuoles for their replication during the infection plants and yeasts have no known endogenous ampylating enzymes but animal genomes are endowed with a single copy of a gene encoding a ficdomain ampylase that was likely acquired by an early ancestor of animals via horizontal gene transfer from a prokaryote the human protein referred to commonly as ficd had been previously identified as huntingtin associated protein e hype an assignment arising from a yeast twohybrid screen but of questionable relevance as huntingtin and hypeficd are localised to different cellular compartments cg9523 homologues in drosophila melanogaster cg9523 and c'</li><li>'in cellular biology inclusions are diverse intracellular nonliving substances ergastic substances that are not bound by membranes inclusions are stored nutrientsdeutoplasmic substances secretory products and pigment granules examples of inclusions are glycogen granules in the liver and muscle cells lipid droplets in fat cells pigment granules in certain cells of skin and hair and crystals of various types cytoplasmic inclusions are an example of a biomolecular condensate arising by liquidsolid liquidgel or liquidliquid phase separation these structures were first observed by o f muller in 1786 glycogen glycogen is the most common form of glucose in animals and is especially abundant in cells of muscles and liver it appears in electron micrograph as clusters or a rosette of beta particles that resemble ribosomes located near the smooth endoplasmic reticulum glycogen is an important energy source of the cell therefore it will be available on demand the enzymes responsible for glycogenolysis degrade glycogen into individual molecules of glucose and can be utilized by multiple organs of the bodylipids lipids are triglycerides in storage form is the common form of inclusions not only are stored in specialized cells adipocytes but also are located as individuals droplets in various cell type especially hepatocytes these are fluid at body temperature and appear in living cells as refractile spherical droplets lipid yields more than twice as many calories per gram as does carbohydrate on demand they serve as a local store of energy and a potential source of short carbon chains that are used by the cell in its synthesis of membranes and other lipid containing structural components or secretory productscrystals crystalline inclusions have long been recognized as normal constituents of certain cell types such as sertoli cells and leydig cells of the human testis and occasionally in macrophages it is believed that these structures are crystalline forms of certain proteins which is located everywhere in the cell such as in nucleus mitochondria endoplasmic reticulum golgi body and free in cytoplasmic matrixpigments the most common pigment in the body besides hemoglobin of red blood cells is melanin manufactured by melanocytes of the skin and hair pigments cells of the retina and specialized nerve cells in the substantia nigra of the brain these pigments have protective functions in skin and aid in the sense of sight in the retina but their functions'</li></ul> | | 41 | <ul><li>'distinctive the unique and the special in any place partners initially focused on design and culture as resources for livability in the early 1980s partners launched a program to document the economic value of design and cultural amenities the economics of amenity program explored how cultural amenities and the quality of life in a community are linked to economic development and job creation this work was the catalyst for a significant array of economic impact studies of the arts across the globecore concepts used by partners were cultural planning and cultural resources which they saw as the planning of urban resources including quality design architecture parks the natural environment animation and especially arts activity and tourism from the late 1970s onwards unesco and the council of europe began to investigate the cultural industries from the perspective of cities it was nick garnham who when seconded to the greater london council in 19834 set up a cultural industries unit to put the cultural industries on the agenda drawing on rereading and adapting the original work by theodor adorno and walter benjamin in the 1930s which had seen the culture industry as a kind of monster and influenced also by hans magnus enzensberger he saw the cultural industries as a potentially liberating force this investigation into the cultural industries of the time found that a city and nation that emphasized its development of cultural industries added value exports and new jobs while supporting competitiveness continues to expand a citys and nations growth in the global economythe first mention of the creative city as a concept was in a seminar organized by the australia council the city of melbourne the ministry of planning and environment victoria and the ministry for the arts victoria in september 1988 its focus was to explore how arts and cultural concerns could be better integrated into the planning process for city development a keynote speech by david yencken former secretary for planning and environment for victoria spelled out a broader agenda stating that whilst efficiency of cities is important there is much more needed the city should be emotionally satisfying and stimulate creativity amongst its citizensanother important early player was comedia founded in 1978 by charles landry its 1991 study glasgow the creative city and its cultural economy was followed in 1994 by a study on urban creativity called the creative city in britain and germany as well as being the centre of a creative economy and being home to a sizeable creative class creative cities have also been theorized to embody a particular structure this structure comprises three categories of people spaces organizations and institutions the upperground the underground and the middlegroundthe upper ground consists of firms and businesses engaged in creative industries these are the organizations that create the economic growth one hopes to find in a creative city by taking the creative product of the citys residents'</li><li>'economically active males in rural areas are employed in nonagricultural work compared to 50 percent in france suggesting that there are no economic opportunities in rural areas in egypt outside of farming egypt had similar levels of urbanization in the late 1940s to sweden switzerland and france but significantly lower levels of industrialization based on the normal relationship davis and golden found between urbanization and industrialization egypt had higher levels of urbanization than expected dyckman gives an example of a consequence of urbanization in cairo when he explains that urban dwellers actually have lower literacy rates than those in surrounding villages due to a lack of development both the unesco report and davis and golden identify south korea as an example of an overurbanized country davis and golden discussed how following the removal of the japanese after world war ii urbanization continued but economic growth stagnated population growth and urbanization were driven by migration from overpopulated rural areas even though the majority of jobs available were still in the agricultural sector the 172 percent of koreas population that were urban dwellers in 1949 were attributed largely to the presence of rural migrants developed country developing country migration industrialization rural flight urban primacy urbanization'</li><li>'dampen street noise and improve air quality current leading examples as of 2018 which need to be described and explained here in greater detail include the hammarby sjostad district in stockholm sweden freiburg germany bedzed in hackbridge sutton england a suburb of london and serenbe near atlanta georgia in the us a suburb in western sydney australia newington was the home to the athletes of the 2000 summer olympics and 2000 summer paralympics it was built on a brownfield site and it was developed by mirvac lend lease village consortium from 1997 redevelopment of the village was completed in 1999 but further development is still occurring after the games newington stimulated the australian market for green products and it became a solar village housing approximately 5000 people unfortunately the development failed to build neighborhood centers with walkto services which perpetuates automobile dependence furthermore newington does not provide any affordable housing key sustainable urbanism thresholds high performance buildings solar panels are installed in every home in newington “ at the time of its construction it was the largest solar village in the world … the collective energy generated by these photovoltaic panels will prevent 1309 tons of co2 from entering the atmosphere per year the equivalent of 262 cars being taken off the road ” by using window awnings wool insulation slab construction and efficient water fixtures over 90 percent of the homes are designed to consume 50 percent less energy and water than conventional homes sustainable corridors and biophilia at newington 90 percent of the plantings are native species 21 acres of the development site is incorporated into the millennium parklands 40 percent of stormwater runoff infiltrates the groundwater supply and the rest is cleansed onsite and channeled to the ponds in the parklands providing important habitats in addition the haslams creek was rehabilitated from a concrete channel to a natural watercourse dongtan is a development in eastern chongming island which is roughly a onehour trip from downtown shanghai it was once planned as “ the world ’ s first ecocity ” attempting to become an energy selfsufficient carbonneutral and mostly carfree ecocity housing 500000 residents the first phase of the development is supposed to complete by 2010 and entire development by 2050 but the dongtan project has been delayed indefinitely due to financial issues among other thingskey sustainable urbanism thresholds compactness dongtan is planned to achieve densities of 84112 people per acre which will support efficient mass transit social infrastructure and a range of businesses most homes will midrise apartment buildings clustered toward the city center parks lakes and other public open space will be scattered around the densely'</li></ul> | | 3 | <ul><li>'of wages gary beckers household production functions and similar topics note that people often purchase goods and then combine them with time to produce something that has meaning or practicality to them which produce utility conformity reciprocity cultural anthropology westernization'</li><li>'##ltering food productioncarole l crumleys burgundian landscape project 1974 – present is carried out by a multidisciplinary research team aimed at identifying the multiple factors which have contributed to the longterm durability of the agricultural economy of burgundy francethomas h mcgoverns inuitnorse project 1976 – present uses archaeology environmental reconstruction and textual analysis to examine the changing ecology of nordic colonizers and indigenous peoples in greenland iceland faeroes and shetlandin recent years the approaches to historical ecology have been expanded to include coastal and marine environments stellwagen bank national marine sanctuary project 1984 – present examines massachusetts usa cod fishing in the 17th through 19th centuries through historical recordsflorida keys coral reef ecoregion project 1990 – present researchers at the scripps institute of oceanography are examining archival records including natural history descriptions maps and charts family and personal papers and state and colonial records in order to understand the impact of overfishing and habitat loss in the florida keys usa which contains the third largest coral reef in the world monterey bay national marine sanctuary historical ecology 2008 – present seeks to collect relevant historical data on fishing whaling and trade of the furs of aquatic animals in order form a baseline for environmental restorations of the california usa coast historical ecology is interdisciplinary in principle at the same time it borrows heavily from the rich intellectual history of environmental anthropology western scholars have known since the time of plato that the history of environmental changes cannot be separated from human history several ideas have been used to describe human interaction with the environment the first of which is the concept of the great chain of being or inherent design in nature in this all forms of life are ordered with humanity as the highest being due to its knowledge and ability to modify nature this lends to the concept of another nature a manmade nature which involves design or modification by humans as opposed to design inherent in natureinterest in environmental transformation continued to increase in the 18th 19th and 20th centuries resulting in a series of new intellectual approaches one of these approaches was environmental determinism developed by geographer friedrich ratzel this view held that it is not social conditions but environmental conditions which determine the culture of a population ratzsel also viewed humans as restricted by nature for their behaviors are limited to and defined by their environment a later approach was the historical viewpoint of franz boas which refuted environmental determinism claiming that it is not nature but specifics of history that shape human cultures this approach recognized that although the environment may place limitations on societies every environment will impact each culture differently julian stewards'</li><li>'in respect of a certain social action performed towards neighbours indiscriminately an individual is only just breaking even in terms of inclusive fitness if he could learn to recognise those of his neighbours who really were close relatives and could devote his beneficial actions to them alone an advantage to inclusive fitness would at once appear thus a mutation causing such discriminatory behaviour itself benefits inclusive fitness and would be selected in fact the individual may not need to perform any discrimination so sophisticated as we suggest here a difference in the generosity of his behaviour according to whether the situations evoking it were encountered near to or far from his own home might occasion an advantage of a similar kind traditional sociobiology did not consider the divergent consequences between these basic possibilities for the expression of social behavior and instead assumed that the expression operates in the recognition manner whereby individuals are behaviorally primed to discriminate which others are their true genetic relatives and engage in cooperative behavior with them but when expression has evolved to be primarily locationbased or contextbased — depending on a societys particular demographics and history — social ties and cooperation may or may not coincide with blood ties reviews of the mammal primate and human evidence demonstrate that expression of social behaviors in these species are primarily locationbased and contextbased see nurture kinship and examples of what used to be labeled as fictive kinship are readily understood in this perspective social cooperation however does not mean people see each other as family or familylike nor that people will value those known not to be related with them more than the ones who are or simply neglect relatedness'</li></ul> | | 39 | <ul><li>'time begins there is an incubation period where no strength develops once enough time has passed for the molten material to begin solidifying the joint strength begins to develop before plateauing at the maximum strength if power is applied after full joint strength is achieved the strength will start to decline slowly the joint gap is the distance between the electrofusion fitting and the pipe material when no joint gap is present the resulting joint strength is high but not maximum as joint gap increases the joint strength increases to a point then begins to decline fairly sharply at larger gaps sufficient pressure cannot build during the fusion time and the joint strength is low the effect of joint gap on strength is why the scraping of the pipes before welding is a critical step uneven or inconsistent scraping can result in areas where the joint gap is large leading to low joint strength pipe materials with higher molecular weights mw or densities will have slower material flow rates when in the molten state during fusion despite the differences in flow rates the final joint strength is generally consistent over a fairly wide range of pipe molecular weights'</li><li>'transport between two contacting bodies such as particles in a granular medium the contact pressure is the factor of most influence on overall contact conductance as contact pressure grows true contact area increases and contact conductance grows contact resistance becomes smallersince the contact pressure is the most important factor most studies correlations and mathematical models for measurement of contact conductance are done as a function of this factor the thermal contact resistance of certain sandwich kinds of materials that are manufactured by rolling under high temperatures may sometimes be ignored because the decrease in thermal conductivity between them is negligible no truly smooth surfaces really exist and surface imperfections are visible under a microscope as a result when two bodies are pressed together contact is only performed in a finite number of points separated by relatively large gaps as can be shown in fig 2 since the actual contact area is reduced another resistance for heat flow exists the gasesfluids filling these gaps may largely influence the total heat flow across the interface the thermal conductivity of the interstitial material and its pressure examined through reference to the knudsen number are the two properties governing its influence on contact conductance and thermal transport in heterogeneous materials in generalin the absence of interstitial materials as in a vacuum the contact resistance will be much larger since flow through the intimate contact points is dominant one can characterise a surface that has undergone certain finishing operations by three main properties of roughness waviness and fractal dimension among these roughness and fractality are of most importance with roughness often indicated in terms of a rms value σ displaystyle sigma and surface fractality denoted generally by df the effect of surface structures on thermal conductivity at interfaces is analogous to the concept of electrical contact resistance also known as ecr involving contact patch restricted transport of phonons rather than electrons when the two bodies come in contact surface deformation may occur on both bodies this deformation may either be plastic or elastic depending on the material properties and the contact pressure when a surface undergoes plastic deformation contact resistance is lowered since the deformation causes the actual contact area to increase the presence of dust particles acids etc can also influence the contact conductance going back to formula 2 calculation of the thermal contact conductance may prove difficult even impossible due to the difficulty in measuring the contact area a displaystyle a a product of surface characteristics as explained earlier because of this contact conductanceresistance is usually found experimentally by using a standard apparatusthe results of such experiments are usually published in engineering literature on journals such as journal of heat transfer international journal of heat and mass transfer etc unfortunately a'</li><li>'##ta are bosons eg photons and gluons all these fields have zeropoint energy recent experiments advocate the idea that particles themselves can be thought of as excited states of the underlying quantum vacuum and that all properties of matter are merely vacuum fluctuations arising from interactions of the zeropoint fieldthe idea that empty space can have an intrinsic energy associated with it and that there is no such thing as a true vacuum is seemingly unintuitive it is often argued that the entire universe is completely bathed in the zeropoint radiation and as such it can add only some constant amount to calculations physical measurements will therefore reveal only deviations from this value for many practical calculations zeropoint energy is dismissed by fiat in the mathematical model as a term that has no physical effect such treatment causes problems however as in einsteins theory of general relativity the absolute energy value of space is not an arbitrary constant and gives rise to the cosmological constant for decades most physicists assumed that there was some undiscovered fundamental principle that will remove the infinite zeropoint energy and make it completely vanish if the vacuum has no intrinsic absolute value of energy it will not gravitate it was believed that as the universe expands from the aftermath of the big bang the energy contained in any unit of empty space will decrease as the total energy spreads out to fill the volume of the universe galaxies and all matter in the universe should begin to decelerate this possibility was ruled out in 1998 by the discovery that the expansion of the universe is not slowing down but is in fact accelerating meaning empty space does indeed have some intrinsic energy the discovery of dark energy is best explained by zeropoint energy though it still remains a mystery as to why the value appears to be so small compared to the huge value obtained through theory – the cosmological constant problemmany physical effects attributed to zeropoint energy have been experimentally verified such as spontaneous emission casimir force lamb shift magnetic moment of the electron and delbruck scattering these effects are usually called radiative corrections in more complex nonlinear theories eg qcd zeropoint energy can give rise to a variety of complex phenomena such as multiple stable states symmetry breaking chaos and emergence many physicists believe that the vacuum holds the key to a full understanding of nature and that studying it is critical in the search for the theory of everything active areas of research include the effects of virtual particles quantum entanglement the difference if any between inertial and gravitational mass variation in the speed of light a reason for the observed value of the cosmological constant and the nature of dark energy zeropoint energy evolved from historical'</li></ul> | | 2 | <ul><li>'in this case the domain is the set of all possible maps which are generally implemented as raster grids a raster grid is a twodimensional array of cells tomlin called them locations or points each cell occupying a square area of geographic space and being coded with a value representing the measured property of a given geographic phenomenon usually a field at that location each operation 1 takes one or more raster grids as inputs 2 creates an output grid with matching cell geometry 3 scans through each cell of the input grid or spatially matching cells of multiple inputs 4 performs the operation on the cell values and writes the result to the corresponding cell in the output grid originally the inputs and the output grids were required to have the identical cell geometry ie covering the same spatial extent with the same cell arrangement so that each cell corresponds between inputs and outputs but many modern gis implementations do not require this performing interpolation as needed to derive values at corresponding locations tomlin classified the many possible map algebra operations into three types to which some systems add a fourth local operators operations that operate on one cell location at a time during the scan phase a simple example would be an arithmetic operator such as addition to compute map3 map1 map2 the software scans through each matching cell of the input grids adds the numeric values in each using normal arithmetic and puts the result in the matching cell of the output grid due to this decomposition of operations on maps into operations on individual cell values any operation that can be performed on numbers eg arithmetic statistics trigonometry logic can be performed in map algebra for example a localmean operator would take in two or more grids and compute the arithmetic mean of each set of spatially corresponding cells in addition a range of gisspecific operations has been defined such as reclassifying a large range of values to a smaller range of values eg 45 land cover categories to 3 levels of habitat suitability which dates to the original imgrid implementation of 1975 a common use of local functions is for implementing mathematical models such as an index that are designed to compute a resultant value at a location from a set of input variables focal operators functions that operate on a geometric neighborhood around each cell a common example is calculating slope from a grid of elevation values looking at a single cell with a single elevation it is impossible to judge a trend such as slope thus the slope of each cell is computed from the value of the corresponding cell in the input elevation grid and the values of its immediate neighbors other functions allow for the size and shape of the neighborhood eg a'</li><li>'in mathematics specifically the field of algebra sklyanin algebras are a class of noncommutative algebra named after evgeny sklyanin this class of algebras was first studied in the classification of artinschelter regular algebras of global dimension 3 in the 1980s sklyanin algebras can be grouped into two different types the nondegenerate sklyanin algebras and the degenerate sklyanin algebras which have very different properties a need to understand the nondegenerate sklyanin algebras better has led to the development of the study of point modules in noncommutative geometry let k displaystyle k be a field with a primitive cube root of unity let d displaystyle mathfrak d be the following subset of the projective plane p k 2 displaystyle textbf pk2 d 1 0 0 0 1 0 0 0 1 [UNK] a b c a 3 b 3 c 3 displaystyle mathfrak d100010001sqcup abcbig a3b3c3 each point a b c ∈ p k 2 displaystyle abcin textbf pk2 gives rise to a quadratic 3dimensional sklyanin algebra s a b c k ⟨ x y z ⟩ f 1 f 2 f 3 displaystyle sabcklangle xyzrangle f1f2f3 where f 1 a y z b z y c x 2 f 2 a z x b x z c y 2 f 3 a x y b y x c z 2 displaystyle f1ayzbzycx2quad f2azxbxzcy2quad f3axybyxcz2 whenever a b c ∈ d displaystyle abcin mathfrak d we call s a b c displaystyle sabc a degenerate sklyanin algebra and whenever a b c ∈ p 2 [UNK] d displaystyle abcin textbf p2setminus mathfrak d we say the algebra is nondegenerate the nondegenerate case shares many properties with the commutative polynomial ring k x y z displaystyle kxyz whereas the degenerate case enjoys almost none of these properties generally the nondegenerate sklyanin algebras are more challenging to understand than their degenerate counterparts let s deg displaystyle stextdeg be a degenerate sklyanin algebra s deg displaystyle stextdeg contains nonzero zero divisors the hilbert series of s de'</li><li>'translating equations of the second degree into churchs rra illustrating his method using the formulae e1 e2 and e4 in chapter 11 of lof this translation into rra sheds light on the names spencerbrown gave to e1 and e4 namely memory and counter rra thus formalizes and clarifies lofs notion of an imaginary truth value gottfried leibniz in memoranda not published before the late 19th and early 20th centuries invented boolean logic his notation was isomorphic to that of lof concatenation read as conjunction and nonx read as the complement of x recognition of leibnizs pioneering role in algebraic logic was foreshadowed by lewis 1918 and rescher 1954 but a full appreciation of leibnizs accomplishments had to await the work of wolfgang lenzen published in the 1980s and reviewed in lenzen 2004 charles sanders peirce 1839 – 1914 anticipated the primary algebra in three veins of work two papers he wrote in 1886 proposed a logical algebra employing but one symbol the streamer nearly identical to the cross of lof the semantics of the streamer are identical to those of the cross except that peirce never wrote a streamer with nothing under it an excerpt from one of these papers was published in 1976 but they were not published in full until 1993 in a 1902 encyclopedia article peirce notated boolean algebra and sentential logic in the manner of this entry except that he employed two styles of brackets toggling between and with each increment in formula depth the syntax of his alpha existential graphs is merely concatenation read as conjunction and enclosure by ovals read as negation if primary algebra concatenation is read as conjunction then these graphs are isomorphic to the primary algebra kauffman 2001ironically lof cites vol 4 of peirces collected papers the source for the formalisms in 2 and 3 above 13 were virtually unknown at the time when 1960s and in the place where uk lof was written peirces semiotics about which lof is silent may yet shed light on the philosophical aspects of lof kauffman 2001 discusses another notation similar to that of lof that of a 1917 article by jean nicod who was a disciple of bertrand russells the above formalisms are like the primary algebra all instances of boundary mathematics ie mathematics whose syntax is limited to letters and brackets enclosing devices a minimalist syntax of this nature is a boundary notation boundary notation is free of infix operators prefix or postfix operator symbols the very well known curly braces of'</li></ul> | | 19 | <ul><li>'examination detects central arterial vessels and cfm exploration reveals their radial position ceus examination shows central tumor filling of the circulatory bed during arterial phase and completely enhancement during portal venous phase during this phase the center of the lesion becomes hypoechoic enhancing the tumor scar during the late phase the tumor remains isoechoic to the liver which strengthens the diagnosis of benign lesion it is a benign tumor made up of normal or atypical hepatocytes it has an incidence of 003 its development is induced by intake of anabolic hormones and oral contraceptives the tumor is asymptomatic but may be associated with right upper quadrant pain in case of internal bleeding 2d ultrasound shows a welldefined unencapsulated solid mass it may have a heterogeneous structure in case of intratumoral hemorrhage doppler examination shows no circulatory signal ceus exploration is quite ambiguous and cannot always establish a differential diagnosis with hepatocellular carcinoma thus during the arterial phase there is a centripetal and inhomogeneous enhancement during the portal venous phase there is a moderate wash out during late phase the appearance is isoechoic or hypoechoic due to lack of kupffer cells malignant liver tumors develop on cirrhotic liver hepatocellular carcinoma hcc or normal liver metastases they are single or multiple especially metastases have a variable generally imprecise delineation may have a very pronounced circulatory signal hepatocellular carcinoma and some types of metastases have a heterogeneous structure the result of intratumoral circulatory disorders consequence of hemorrhage or necrosis and are firm to touch even rigid the patients general status correlates with the underlying disease vascular and parenchymal decompensation for liver cirrhosis weight loss lack of appetite and anemia with cancer it is the most common liver malignancy it develops secondary to cirrhosis therefore ultrasound examination every 6 months combined with alpha fetoprotein afp determination is an effective method for early detection and treatment monitoring for this type of tumor clinically hcc overlaps with advanced liver cirrhosis long evolution repeated vascular and parenchymal decompensation sometimes bleeding due to variceal leakage in addition to accelerated weight loss in the recent past and lack of appetite hcc appearance on 2d ultrasound is that of a solid tumor with imprecise del'</li><li>'barriers to control access to their internal environment polar compounds cannot diffuse across these cell membranes and the uptake of useful molecules is mediated through transport proteins that specifically select substrates from the extracellular mixture this selective uptake means that most hydrophilic molecules cannot enter cells since they are not recognised by any specific transporters in contrast the diffusion of hydrophobic compounds across these barriers cannot be controlled and organisms therefore cannot exclude lipidsoluble xenobiotics using membrane barriers however the existence of a permeability barrier means that organisms were able to evolve detoxification systems that exploit the hydrophobicity common to membranepermeable xenobiotics these systems therefore solve the specificity problem by possessing such broad substrate specificities that they metabolise almost any nonpolar compound useful metabolites are excluded since they are polar and in general contain one or more charged groups the detoxification of the reactive byproducts of normal metabolism cannot be achieved by the systems outlined above because these species are derived from normal cellular constituents and usually share their polar characteristics however since these compounds are few in number specific enzymes can recognize and remove them examples of these specific detoxification systems are the glyoxalase system which removes the reactive aldehyde methylglyoxal and the various antioxidant systems that eliminate reactive oxygen species the metabolism of xenobiotics is often divided into three phases modification conjugation and excretion these reactions act in concert to detoxify xenobiotics and remove them from cells in phase i a variety of enzymes act to introduce reactive and polar groups into their substrates one of the most common modifications is hydroxylation catalysed by the cytochrome p450dependent mixedfunction oxidase system these enzyme complexes act to incorporate an atom of oxygen into nonactivated hydrocarbons which can result in either the introduction of hydroxyl groups or n o and sdealkylation of substrates the reaction mechanism of the p450 oxidases proceeds through the reduction of cytochromebound oxygen and the generation of a highlyreactive oxyferryl species according to the following scheme o2 nadph h rh → nadp h2o rohphase i reactions also termed nonsynthetic reactions may occur by oxidation reduction hydrolysis cyclization decyclization and addition of oxygen or removal of hydrogen carried out by mixed function oxidases often in the liver these oxidative reactions typically involve a cytochrome p450 monooxygenase often abbreviated cyp'</li><li>'an indeterminate lesion and further evaluation may be performed by obtaining a physical sample of the lesion ultrasound ct scan and mri may be used to evaluate the liver for hcc on ct and mri hcc can have three distinct patterns of growth a single large tumor multiple tumors poorly defined tumor with an infiltrative growth patterna systematic review of ct diagnosis found that the sensitivity was 68 95 ci 55 – 80 and specificity was 93 95 ci 89 – 96 compared with pathologic examination of an explanted or resected liver as the reference standard with triplephase helical ct the sensitivity was 90 or higher but these data have not been confirmed with autopsy studieshowever mri has the advantage of delivering highresolution images of the liver without ionizing radiation hcc appears as a highintensity pattern on t2weighted images and a lowintensity pattern on t1weighted images the advantage of mri is that it has improved sensitivity and specificity when compared to ultrasound and ct in cirrhotic patients with whom it can be difficult to differentiate hcc from regenerative nodules a systematic review found that the sensitivity was 81 95 ci 70 – 91 and specificity was 85 95 ci 77 – 93 compared with pathologic examination of an explanted or resected liver as the reference standard the sensitivity is further increased if gadolinium contrastenhanced and diffusionweighted imaging are combined mri is more sensitive and specific than ctliver image reporting and data system lirads is a classification system for the reporting of liver lesions detected on ct and mri radiologists use this standardized system to report on suspicious lesions and to provide an estimated likelihood of malignancy categories range from lirads lr 1 to 5 in order of concern for cancer a biopsy is not needed to confirm the diagnosis of hcc if certain imaging criteria are met macroscopically liver cancer appears as a nodular or infiltrative tumor the nodular type may be solitary large mass or multiple when developed as a complication of cirrhosis tumor nodules are round to oval gray or green if the tumor produces bile well circumscribed but not encapsulated the diffuse type is poorly circumscribed and infiltrates the portal veins or the hepatic veins rarelymicroscopically the four architectural and cytological types patterns of hepatocellular carcinoma are fibrolamellar pseudoglandular adenoid pleomorphic giant cell and'</li></ul> | | 32 | <ul><li>'##nification the moon appears to subtend an angle of about 52° by convention for magnifying glasses and optical microscopes where the size of the object is a linear dimension and the apparent size is an angle the magnification is the ratio between the apparent angular size as seen in the eyepiece and the angular size of the object when placed at the conventional closest distance of distinct vision 25 cm from the eye the linear magnification of a thin lens is where f textstyle f is the focal length and d o textstyle dmathrm o is the distance from the lens to the object for real images m textstyle m is negative and the image is inverted for virtual images m textstyle m is positive and the image is upright with d i textstyle dmathrm i being the distance from the lens to the image h i textstyle hmathrm i the height of the image and h o textstyle hmathrm o the height of the object the magnification can also be written as note again that a negative magnification implies an inverted image the image recorded by a photographic film or image sensor is always a real image and is usually inverted when measuring the height of an inverted image using the cartesian sign convention where the xaxis is the optical axis the value for hi will be negative and as a result m will also be negative however the traditional sign convention used in photography is real is positive virtual is negative therefore in photography object height and distance are always real and positive when the focal length is positive the images height distance and magnification are real and positive only if the focal length is negative the images height distance and magnification are virtual and negative therefore the photographic magnification formulae are traditionally presented as the maximum angular magnification compared to the naked eye of a magnifying glass depends on how the glass and the object are held relative to the eye if the lens is held at a distance from the object such that its front focal point is on the object being viewed the relaxed eye focused to infinity can view the image with angular magnification here f textstyle f is the focal length of the lens in centimeters the constant 25 cm is an estimate of the near point distance of the eye — the closest distance at which the healthy naked eye can focus in this case the angular magnification is independent from the distance kept between the eye and the magnifying glass if instead the lens is held very close to the eye and the object is placed closer to the lens than its focal point so that the observer focuses'</li><li>'in optical testing a ronchi test is a method of determining the surface shape figure of a mirror used in telescopes and other optical devices in 1923 italian physicist vasco ronchi published a description of the eponymous ronchi test which is a variation of the foucault knifeedge test and which uses simple equipment to test the quality of optics especially concave mirrors 1 a ronchi tester consists of a light source a diffuser a ronchi gratinga ronchi grating consists of alternate dark and clear stripes one design is a small frame with several evenly spaced fine wires attached light is emitted through the ronchi grating or a single slit reflected by the mirror being tested then passes through the ronchi grating again and is observed by the person doing the test the observers eye is placed close to the centre of curvature of the mirror under test looking at the mirror through the grating the ronchi grating is a short distance less than 2 cm closer to the mirrorthe observer sees the mirror covered in a pattern of stripes that reveal the shape of the mirror the pattern is compared to a mathematically generated diagram usually done on a computer today of what it should look like for a given figure inputs to the program are line frequency of the ronchi grating focal length and diameter of the mirror and the figure required if the mirror is spherical the pattern consists of straight lines the ronchi test is used in the testing of mirrors for reflecting telescopes especially in the field of amateur telescope making it is much faster to set up than the standard foucault knifeedge test the ronchi test differs from the knifeedge test requiring a specialized target the ronchi grating which amounts to a periodic series of knife edges and being more difficult to interpret this procedure offers a quick evaluation of the mirrors shape and condition it readily identifies a turned edge rolled down outer diameter of the mirror a common fault that can develop in objective mirror making the figure quality of a convex lens may be visually tested using a similar principle the grating is moved around the focal point of the lens while viewing the virtual image through the opposite side distortions in the lens surface figure then appear as asymmetries in the periodic grating image'</li><li>'angles instead of one stereoscopic image from the right angle and distance leon gaumont introduced ives pictures in france and encouraged eugene estanave to work on the technique estanave patented a barrier grid technique for animated autostereograms animated portrait photographs with line sheets were marketed for a while mostly in the 1910s and 1920s in the us magic moving picture postcards with simple 3 phase animation or changing pictures were marketed after 1906 maurice bonnett improved barrier grid autostereography in the 1930s with his reliephographie technique and scanning cameras on 11 april 1898 john jacobson filed an application for us patent no 624043 granted 2 may 1899 for a stereograph of an interlaced stereoscopic picture and a transparent mount for said picture having a corrugated or channeled surface the corrugated lines or channels were not yet really lenticular but this is the first known autostereogram that used a corrugated transparent surface rather than the opaque lines of most barrier grid stereograms french nobel prize winning physicist gabriel lippmann represented eugene estanave at several presentations of estanaves works at the french academy of sciences on 2 march 1908 lippmann presented his own ideas for photographie integrale based on insect eyes he suggested to use a screen of tiny lenses spherical segments should be pressed into a sort of film with photographic emulsion on the other side the screen would be placed inside a lightproof holder and on a tripod for stability when exposed each tiny lens would function as a camera and record the surroundings from a slightly different angle than neighboring lenses when developed and lit from behind the lenses should project the lifesize image of the recorded subject in space he could not yet present concrete results in march 1908 but by the end of 1908 he claimed to have exposed some integral photography plates and to have seen the resulting single fullsized image however the technique remained experimental since no material or technique seemed to deliver the optical quality desired at the time of his death in 1921 lippmann reportedly had a system with only twelve lenses on 11 april 1898 john jacobson filed an application for us patent no 624043 granted 2 may 1899 for a stereograph of an interlaced stereoscopic picture and a transparent mount for said picture having a corrugated or channeled surfacein 1912 louis cheron described in his french patent 443216 a screen with long vertical lenses that would be sufficient for recording stereoscopic depth and the shifting of the relations of objects to each other as the viewer moved while he suggested pinholes for integral photographyin june 1912 swiss nobel prize winning physiologist'</li></ul> | | 31 | <ul><li>'axiom of regularity is assumed the literature contains occasional philosophical and commonsense objections to the transitivity of parthood m4 and m5 are two ways of asserting supplementation the mereological analog of set complementation with m5 being stronger because m4 is derivable from m5 m and m4 yield minimal mereology mm reformulated in terms of proper part mm is simonss 1987 preferred minimal system in any system in which m5 or m5 are assumed or can be derived then it can be proved that two objects having the same proper parts are identical this property is known as extensionality a term borrowed from set theory for which extensionality is the defining axiom mereological systems in which extensionality holds are termed extensional a fact denoted by including the letter e in their symbolic names m6 asserts that any two underlapping objects have a unique sum m7 asserts that any two overlapping objects have a unique product if the universe is finite or if top is assumed then the universe is closed under sum universal closure of product and of supplementation relative to w requires bottom w and n are evidently the mereological analog of the universal and empty sets and sum and product are likewise the analogs of settheoretical union and intersection if m6 and m7 are either assumed or derivable the result is a mereology with closure because sum and product are binary operations m6 and m7 admit the sum and product of only a finite number of objects the unrestricted fusion axiom m8 enables taking the sum of infinitely many objects the same holds for product when defined at this point mereology often invokes set theory but any recourse to set theory is eliminable by replacing a formula with a quantified variable ranging over a universe of sets by a schematic formula with one free variable the formula comes out true is satisfied whenever the name of an object that would be a member of the set if it existed replaces the free variable hence any axiom with sets can be replaced by an axiom schema with monadic atomic subformulae m8 and m8 are schemas of just this sort the syntax of a firstorder theory can describe only a denumerable number of sets hence only denumerably many sets may be eliminated in this fashion but this limitation is not binding for the sort of mathematics contemplated here if m8 holds then w exists for infinite universes hence top need be assumed only if the universe is infinite and m8 does not hold top postulating w is not controversial but bottom postulating'</li><li>'by john smith it is a declaration about a different speaker and it is false the term “ i ” means different things so “ i am spartacus ” means different things a related problem is when identical sentences have the same truthvalue yet express different propositions the sentence “ i am a philosopher ” could have been spoken by both socrates and plato in both instances the statement is true but means something different these problems are addressed in predicate logic by using a variable for the problematic term so that “ x is a philosopher ” can have socrates or plato substituted for x illustrating that “ socrates is a philosopher ” and “ plato is a philosopher ” are different propositions similarly “ i am spartacus ” becomes “ x is spartacus ” where x is replaced with terms representing the individuals spartacus and john smith in other words the example problems can be averted if sentences are formulated with precision such that their terms have unambiguous meanings a number of philosophers and linguists claim that all definitions of a proposition are too vague to be useful for them it is just a misleading concept that should be removed from philosophy and semantics w v quine who granted the existence of sets in mathematics maintained that the indeterminacy of translation prevented any meaningful discussion of propositions and that they should be discarded in favor of sentences p f strawson on the other hand advocated for the use of the term statement categorical proposition probabilistic proposition'</li><li>'bundle theory originated by the 18th century scottish philosopher david hume is the ontological theory about objecthood in which an object consists only of a collection bundle of properties relations or tropes according to bundle theory an object consists of its properties and nothing more thus there cannot be an object without properties and one cannot conceive of such an object for example when we think of an apple we think of its properties redness roundness being a type of fruit etc there is nothing above and beyond these properties the apple is nothing more than the collection of its properties in particular there is no substance in which the properties are inherent the difficulty in conceiving and or describing an object without also conceiving and or describing its properties is a common justification for bundle theory especially among current philosophers in the angloamerican tradition the inability to comprehend any aspect of the thing other than its properties implies this argument maintains that one cannot conceive of a bare particular a substance without properties an implication that directly opposes substance theory the conceptual difficulty of bare particulars was illustrated by john locke when he described a substance by itself apart from its properties as something i know not what the idea then we have to which we give the general name substance being nothing but the supposed but unknown support of those qualities we find existing which we imagine cannot subsist sine re substante without something to support them we call that support substantia which according to the true import of the word is in plain english standing under or upholdingwhether a relation of an object is one of its properties may complicate such an argument however the argument concludes that the conceptual challenge of bare particulars leaves a bundle of properties and nothing more as the only possible conception of an object thus justifying bundle theory bundle theory maintains that properties are bundled together in a collection without describing how they are tied together for example bundle theory regards an apple as red four inches 100 mm wide and juicy but lacking an underlying substance the apple is said to be a bundle of properties including redness being four inches 100 mm wide and juiciness hume used the term bundle in this sense also referring to the personal identity in his main work i may venture to affirm of the rest of mankind that they are nothing but a bundle or collection of different perceptions which succeed each other with inconceivable rapidity and are in a perpetual flux and movementcritics question how bundle theory accounts for the properties compresence the togetherness relation between those properties without an underlying substance critics also question how any two given properties are determined to be properties of'</li></ul> | | 24 | <ul><li>'##cific art to move the work is to destroy the work outdoor sitespecific artworks often include landscaping combined with permanently sited sculptural elements it is sometimes linked with environmental art outdoor sitespecific artworks can also include dance performances created especially for the site more broadly the term is sometimes used for any work that is more or less permanently attached to a particular location in this sense a building with interesting architecture could also be considered a piece of sitespecific art in geneva switzerland the contemporary art funds are looking for original ways to integrate art into architecture and the public space since 1980 the neon parallax project initiated in 2004 is conceived specifically for the plaine de plainpalais a public square of 95000 square meters in the heart of the city the concept consists of commissioning luminous artistic works for the rooftops of the buildings bordering the plaza in the same way advertisements are installed on the citys glamorous lakefront the 14 artists invited had to respect the same legal sizes of luminous advertisements in geneva the project thus creates a parallax both between locations and messages but also by the way one interprets neon signs in the public realmsitespecific performance art sitespecific visual art and interventions are commissioned for the annual infecting the city festival in cape town south africa the sitespecific nature of the work allows artists to interrogate the contemporary and historic reality of the central business district and create work that allows the citys users to engage and interact with public spaces in new and memorable ways'</li><li>'regions of the united states receive the greatest environmental benefits provided by scv roofs which are reduced rainwater input into storm water retention systems during rainfall and increased energy performance ratings in buildings scv and green roofs increase energy efficiencies of buildings by stabilizing roof surface temperatures in other regions of the united states the greatest environmental benefits of green roof design may be different based upon the type of climate the area possesses recent advancements in soil engineering and plastic technologies allow vegetated roofs the ability to adapt to different locations within the humid subtropical region of the united states soil media moisture content and capacity levels can be regulated by using soil elements that adapt to the climate of each specific geographic location and client needs the amount of moisture retained depends on the maximum moisture retention capacity the permeability and the depth of the soil media high density plastics permit scv roof systems to withstand the weather elements and adjust to varying building types of the region as defined by green roof industry standards extensive green roofs have a soil media of less than 6 inches in depth and intensive green roofs have a soil media of more than 6 inches in depth most scv roofs that are greater than 6 inches in depth are expensive and found on residential high rise structures often containing pools and other amenities an scv roofs requires a unique soil media mixture to adapt to the harsh weather conditions and physical characteristics of the southern united states expanded shall and clay are typically used to form a base and comprise up to 90 of some soil media mixtures used throughout the united states perlite vermiculite ash tire crumbs sand peat moss and recycled vegetation are some of the other elements utilized in soil media engineering albedo and heat transfer rates are key variables to consider when designing an scv roof and do not have a significant effect on green roofs in the northern continental united states there are three basic scv and green roof systems available in todays market builtup modular and mat these systems vary from manufacture to manufacture and are composed of different materials such as foam high density plastic and fabrics many of the systems have geographic limitations and do not perform well in humid subtropical regions based upon the intent of the system and the materials being used multilayered systems containing the following functional layers root barrier protection layer drainage layer filter layer growing medium and plant level selfcontained units typically square in shape that require only the soil medium and vegetative layer for a functioning green roof these systems are easy to install and remove some modular systems are pregrown at nurseries to client specifications forming an instant vegetative layer singledlayered systems of this type are drained'</li><li>'of urban desire sarah bergmanns pollinator pathway combines art ecology and urban planning just dont call it a bee thing seattle metropolitan deena prichep july 9 2012 part science part art pollinator pathway connects seattle green spaces the salt blog npr claire thompson september 19 2012 bee boulevard an urban corridor becomes a haven for native pollinators grist tracey byrne february 14 2015 pollinator pathway® what is it really about beepeeking online journal promoting environmental stewardship and the enhancement of urban ecosystems'</li></ul> | | 7 | <ul><li>'the spiral cochlear ganglion is a group of neuron cell bodies in the modiolus the conical central axis of the cochlea these bipolar neurons innervate the hair cells of the organ of corti they project their axons to the ventral and dorsal cochlear nuclei as the cochlear nerve a branch of the vestibulocochlear nerve cn viii neurons whose cell bodies lie in the spiral ganglion are strung along the bony core of the cochlea and send fibers axons into the central nervous system cns these bipolar neurons are the first neurons in the auditory system to fire an action potential and supply all of the brains auditory input their dendrites make synaptic contact with the base of hair cells and their axons are bundled together to form the auditory portion of eighth cranial nerve the number of neurons in the spiral ganglion is estimated to be about 35000 – 50000two apparent subtypes of spiral ganglion cells exist type i spiral ganglion cells comprise the vast majority of spiral ganglion cells 9095 in cats and 88 in humans and exclusively innervate the inner hair cells they are myelinated bipolar neurons type ii spiral ganglion cells make up the remainder in contrast to type i cells they are unipolar and unmyelinated in most mammals they innervate the outer hair cells with each type ii neuron sampling many 1520 outer hair cells in addition outer hair cells form reciprocal synapses onto type ii spiral ganglion cells suggesting that the type ii cells have both afferent and efferent roles the rudiment of the cochlear nerve appears about the end of the third week as a group of ganglion cells closely applied to the cephalic edge of the auditory vesicle the ganglion gradually splits into two parts the vestibular ganglion and the spiral ganglion the axons of neurons in the spiral ganglion travel to the brainstem forming the cochlear nerve'</li><li>'at workplaces such as domtar in kinsgsport mill tn 3m in hutchinson mn and northrop grumman in linthicum md there are currently no standards or regulations for workers that already have a hearing loss osha provides recommendations only for addressing the needs of these employees who are exposed to high noise levels communication and the use of hearing protection devices with hearing aids are some of the issues that these workers face hearing protection is required to protect the residual hearing of workers even if there is a diagnosis of severe to profound deafness specialized hearing protectors are available passive hearing protectors that supply no amplification to the users active hearing protectors that contain a power supply communication headsetsappropriate hearing protection should be determined by the worker with the hearingimpairment as well as the professional running the conservation program hearing aids that are turned off are not acceptable forms of hearing protection not only do hearing aids amplify helpful sounds but they also amplify the background noise of the environment the worker is in these employees may want to continue to wear their amplification because of communication needs or localization but amplifying the noise may exceed the osha 8hour permissible exposure limit pel of 90 dba professionals in charge of the hearing conservation program may allow workers to wear hearing aids under earmuffs on a casebycase basis however when in hazardous noise hearing aids should not be worn hearing aids must be removed and audiometric testing requirements must be followed see above employers should consider using manual techniques to obtain thresholds instead of a microprocessor audiometer this is dependent on the severity of the hearing loss hearing aids can be worn during the testing instructions but then should be removed immediately afterwards there are not regulations to protect children from excessive noise exposure but it is estimated that 52 million kids have noiseinduced hearing loss nihl due to increased worry among both parents and experts regarding nihl in children it has been suggested that hearing conservation programs be implemented in schools as part of their studies regarding health and wellness the necessity for these programs is supported by the following reasons 1 children are not sheltered from loud noises in their daily lives and 2 promoting healthy behaviors at a young age is critical to future application the creation of a hearing conservation program for children will strongly differ from those created for the occupational settings discussed above while children may not be exposed to factory of industrial noise on a daily basis they may be exposed to noise sources such as firearms music power tools sports and noisy toys all of these encounters with noise cumulatively increases their risk for developing noiseinduce'</li><li>'noise reduction technology is used to provide noise protection like passive options but also use circuitry to give audibility to sounds that are below a dangerous level about 85 db and try to limit the average output level to about 82 to 85 db to keep the exposure at a safe levelstrategies to help protect your hearing from firearms also include using muzzle brakes and suppressors shooting fewer rounds and avoiding using a firearm with a short barrel it is recommended to shoot outdoors or in a soundtreated environment rather than a reverberant environment an enclosed area with soundreflecting surfaces if there are multiple people shooting make sure there is a large distance between the shooters and that they are not firing at the same time types of ear protection include earmuffs external this ear protection fits snug around the persons external ear earplugs internal these are ear protection that fit inside of the persons ear canal there are many different types of ear plugs the most commonly known are foam musician or custom earplugs that are made from a mold of a persons ear helmet covering various parts of the head including the earsin some occasions multiple types of ear protection can be used together to increase the nrr for example foam earplugs can be worn inconjunction with earmuffs each type of ear protection has what is called a noise reduction rating nrr this gives the consumer an estimate of how much noise is being reduced before reaching the individuals ear it is important for the consumer to know that this is only a single number estimate derived from a laboratory experiment and the nrr will vary per individual wearing the hearing protection niosh and osha have derating values to help give the person an idea of how much sound is being attenuated while wearing the hearing protection osha uses a half derating while niosh uses 70 for preformed earplugs 50 for formable earplugs and 25 for earmuffsbut all such derating are not consistent with each other and do not take into account the individual characteristics of the worker therefore no derating allows the specialist to predict the noise attenuation of a particular model for a particular worker that is the use of laboratory test results nrr snr hml ets does not predict the effectiveness of the protection of a particular worker at all the range of actual values may be for example from 0 to 35 decibels earmuff style hearing protection devices are designed to fit over the outer ear or pinna earmuff hpds typically consist of two ear cups and a head band ear cups are'</li></ul> | | 0 | <ul><li>'an acoustic network is a method of positioning equipment using sound waves it is primarily used in water and can be as small or as large as required by the users specifications the simplest acoustic network consists of one measurement resulting in a single range between sound source and sound receiver bigger networks are only limited by the amount of equipment available and computing power needed to resolve the resulting data the latest acoustic networks used in the marine seismic industry can resolve a network of some 16000 individual ranges in a matter of seconds the principle behind all acoustic networks is the same distance speed x travel time if the travel time and speed of the sound signal are known we can calculate the distance between source and receiver in most networks the speed of the acoustic signal is assumed at a specific value this value is either derived from measuring a signal between two known points or by using specific equipment to calculate it from environmental conditions the diagram below shows the basic operation of measuring a single range at a specified time the processor issues a signal to the source which then sends out the sound wave once the sound wave is received another signal is received at the processor resulting in a time difference between transmission and reception this gives the travel time using the travel time and assumed speed of the signal the processor can calculate the distance between source and receiverif the operator is using acoustic ranges to position items in unknown locations they will need to use more than the single range example shown above as there is only one measurement the receiver could be anywhere on a circle with a radius equal to the calculated range and centered on the transmitter if a second transmitter is added to the system the number of possible positions for the receiver is reduced to two it is only when three or more ranges are introduced into the system is the position of the receiver achieved'</li><li>'its rather low operating frequency of around 1 kilohertz gave it a very broad beam unsuitable for detecting and localising small targets in peacetime the oscillator was used for depth finding where the lack of directionality was not a concern and fessenden designed a commercial fathometer using a carbon microphone as receiver for the submarine signal company submarine signals – marine hazard signaling system underwater acoustics – study of the propagation of sound in water underwater acoustic communication – wireless technique of sending and receiving messages through water hydrophone – underwater microphone list of reginald fessenden patents frost gary lewis 2001 inventing schemes and strategies the making and selling of the fessenden oscillator technology and culture 42 3 462 – 488 doi101353tech20010109 s2cid 110194817 project muse 33762 fay h j w february 1917 submarine signaling fessenden oscillator journal of the american society for naval engineers 29 1 101 – 113 doi101111j155935841917tb01183x rolt kenneth d 1994 the fessenden oscillator history electroacoustic model and performance estimate j acoust soc am 95 5 2832 bibcode1994asaj952832r doi1011211409629'</li><li>'with earthenware vessels inserted in the walls of the choir expressly for acoustic purposes in england a set of eleven jars survives high in the chancel walls of st andrews church at lyddington rutlandat st peter mancroft in norwich two lshaped trenches accommodating a number of acoustic jars were discovered beneath the wooden floor on which the choir stalls had previously stood the trenches had rubble walls and concrete bottoms and the surfaces were rendered over earthenware jars were built into the walls at intervals of about three feet with the mouths facing into the trench the jars were about 9 ½ inches long and 8 inches across at their widest narrowing to 6 inches at the mouth a similar discovery was made at st peter parmentergate in the same city at fountains abbey in yorkshire several earthenware vessels were discovered mortared into the base of the choir screen their necks protruding through the stonework both their use in roman times and usefulness have been debated thomas noble howe wrote in his commentary on vitruvius de architectura these vessels bronze or clay may be another example of vitruvius singling out a highly technical feature of greek architecture that was uncommon but between eight and sixteen potential sites with evidence of echea have been identified it is debatable whether such vessels amplified or deadened sound echea were used with a due regard to the laws and harmony of physics according to roman writer vitruvius there is also the possibility that echea were not used at all as they may have never existed brill states that it is possible that vitruvius following the teachings on harmony by aristoxenus took speculation for realitythe utility of the medieval jars has also been called into question the chronicler of metz in the only medieval source on the purpose of the jars mocks the prior for believing that they might have improved the sound of the choir and the archaeologist ralph merrifield suggested that their use might have owed more to a tradition of votive deposits than to the theories of vitruviusfrom an acoustical perspective there is little consensus on the effect of echea and it is an active area of research for certain archaeoacousticians modern experiments have indicated that their effect would have been to absorb the resonance of certain frequencies acting as a helmholtz resonator rather than to amplify sound however in 2011 at the acoustics of ancient theatres conference p karampatzakis and v zafranas presented evidence that vitruvius account of sound amplification was possible through the construction of a hypothetical model'</li></ul> | ## Evaluation ### Metrics | Label | F1 | |:--------|:-------| | **all** | 0.7426 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("udrearobert999/multi-qa-mpnet-base-cos-v1-ocontrastive-3e-300samples-20iter") # Run inference preds = model("##rch procedure that evaluates the objective function p x displaystyle pmathbf x on a grid of candidate source locations g displaystyle mathcal g to estimate the spatial location of the sound source x s displaystyle textbf xs as the point of the grid that provides the maximum srp modifications of the classical srpphat algorithm have been proposed to reduce the computational cost of the gridsearch step of the algorithm and to increase the robustness of the method in the classical srpphat for each microphone pair and for each point of the grid a unique integer tdoa value is selected to be the acoustic delay corresponding to that grid point this procedure does not guarantee that all tdoas are associated to points on the grid nor that the spatial grid is consistent since some of the points may not correspond to an intersection of hyperboloids this issue becomes more problematic with coarse grids since when the number of points is reduced part of the tdoa information gets lost because most delays are not anymore associated to any point in the grid the modified srpphat collects and uses the tdoa information related to the volume surrounding each spatial point of the search grid by considering a modified objective function where l m 1 m 2 l x displaystyle lm1m2lmathbf x and l m 1 m 2 u x displaystyle lm1m2umathbf x are the lower and upper accumulation limits of gcc delays which depend on the spatial location x displaystyle mathbf x the accumulation limits can be calculated beforehand in an exact way by exploring the boundaries separating the regions corresponding to the points of the grid alternatively they can be selected by considering the spatial gradient of the tdoa ∇ τ m 1 m 2 x ∇ x τ m 1 m 2 x ∇ y τ m 1 m 2 x ∇ z τ m 1 m 2 x t displaystyle nabla tau m1m2mathbf x nabla xtau m1m2mathbf x nabla ytau m1m2mathbf x nabla ztau m1m2mathbf x t where each component γ ∈ x y z displaystyle gamma in leftxyzright of the gradient is for a rectangular grid where neighboring points are separated a distance r displaystyle r the lower and upper accumulation limits are given by where d r 2 min 1 sin θ cos [UNK] 1 sin θ sin [UNK] 1 cos θ displaystyle dr2min leftfrac 1vert sintheta cosphi vert frac 1vert sintheta sinphi vert frac 1vert") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:---------|:----| | Word count | 1 | 369.2581 | 509 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 300 | | 1 | 300 | | 2 | 300 | | 3 | 300 | | 4 | 300 | | 5 | 300 | | 6 | 300 | | 7 | 300 | | 8 | 300 | | 9 | 300 | | 10 | 300 | | 11 | 295 | | 12 | 300 | | 13 | 278 | | 14 | 300 | | 15 | 300 | | 16 | 300 | | 17 | 300 | | 18 | 300 | | 19 | 300 | | 20 | 300 | | 21 | 300 | | 22 | 300 | | 23 | 300 | | 24 | 300 | | 25 | 300 | | 26 | 300 | | 27 | 300 | | 28 | 300 | | 29 | 300 | | 30 | 300 | | 31 | 300 | | 32 | 284 | | 33 | 300 | | 34 | 300 | | 35 | 300 | | 36 | 300 | | 37 | 300 | | 38 | 300 | | 39 | 300 | | 40 | 300 | | 41 | 300 | | 42 | 300 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (3, 8) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 0.01) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - max_length: 512 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: True ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:----------:|:--------:|:-------------:|:---------------:| | 0.0000 | 1 | 0.3121 | - | | 0.0778 | 2500 | 0.0449 | - | | 0.1556 | 5000 | 0.0196 | - | | **0.2333** | **7500** | **0.0425** | **0.089** | | 0.3111 | 10000 | 0.0068 | - | | 0.3889 | 12500 | 0.0034 | - | | 0.4667 | 15000 | 0.0029 | 0.1051 | | 0.5444 | 17500 | 0.0402 | - | | 0.6222 | 20000 | 0.0156 | - | | 0.7000 | 22500 | 0.0009 | 0.1067 | | 0.7778 | 25000 | 0.045 | - | | 0.8556 | 27500 | 0.0014 | - | | 0.9333 | 30000 | 0.0004 | 0.1201 | | 1.0111 | 32500 | 0.0041 | - | | 1.0889 | 35000 | 0.0056 | - | | 1.1667 | 37500 | 0.0005 | 0.1324 | | 1.2444 | 40000 | 0.0021 | - | | 1.3222 | 42500 | 0.0007 | - | | 1.4000 | 45000 | 0.0005 | 0.1424 | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - Transformers: 4.40.1 - PyTorch: 2.2.1+cu121 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION", "TRANSLATION" ]
[ "BEAR", "CRAFT", "MIRNA" ]
Non_BioNLP
pankajrajdeo/UMLS-ED-Bioformer-16L-V-1.25
pankajrajdeo
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:187491593", "loss:CustomTripletLoss", "arxiv:1908.10084", "arxiv:1703.07737", "base_model:pankajrajdeo/UMLS-ED-Bioformer-16L-V-1", "base_model:finetune:pankajrajdeo/UMLS-ED-Bioformer-16L-V-1", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,731
1,736
36
0
--- base_model: - pankajrajdeo/UMLS-ED-Bioformer-16L-V-1 library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:187491593 - loss:CustomTripletLoss widget: - source_sentence: Hylocharis xantusii sentences: - Xantus's hummingbird - C5721346 - C1623532 - Iole viridescens viridescens - source_sentence: HTLV1+2 RNA XXX Ql PCR sentences: - HTLV 1+2 RNA:MevcEşik:Zmlı:XXX:Srl:Prob.amf.hdf - Nota de progreso:Tipo:Punto temporal:{Configuración}:Documento:Pain medicine - C0368469 - C4070921 - source_sentence: Degeneração Nigroestriatal sentences: - C0270733 - hiperinsulinismo debido a deficiencia de 3-hidroxiacil-coenzima A deshidrogenasa de cadena corta - Striatonigral atrophy - C4303473 - source_sentence: Clostridioides difficile As:titer:moment:serum:semikwantitatief sentences: - Dehidroepiandrosteron:MevcEşik:Zmlı:İdrar:Srl - C0485219 - C0364328 - Clostridium difficile Ac:Título:Pt:Soro:Qn - source_sentence: E Vicotrat sentences: - C2742706 - C2350910 - germanium L-cysteine alpha-tocopherol complex - Eosine I Bluish, Dipotassium Salt --- # SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 1024 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("pankajrajdeo/937457_bioformer_16L") # Run inference sentences = [ 'E Vicotrat', 'Eosine I Bluish, Dipotassium Salt', 'C2742706', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 187,491,593 training samples * Columns: <code>anchor</code>, <code>positive</code>, <code>negative_id</code>, <code>positive_id</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative_id | positive_id | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 13.27 tokens</li><li>max: 247 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 12.25 tokens</li><li>max: 157 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 6.27 tokens</li><li>max: 7 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 6.49 tokens</li><li>max: 7 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 13.53 tokens</li><li>max: 118 tokens</li></ul> | * Samples: | anchor | positive | negative_id | positive_id | negative | |:----------------------------------------------|:------------------------------------------------------------------------------------------------|:----------------------|:----------------------|:------------------------------------------------------------------------------------------------| | <code>Zaburzenie metabolizmu minerałów</code> | <code>Distúrbio não especificado do metabolismo de minerais</code> | <code>C2887914</code> | <code>C0154260</code> | <code>Acute alcoholic hepatic failure</code> | | <code>testy funkčnosti placenty</code> | <code>Metoder som brukes til å vurdere morkakefunksjon.</code> | <code>C2350391</code> | <code>C0032049</code> | <code>Hjärtmuskelscintigrafi</code> | | <code>Tsefapiriin:Susc:Pt:Is:OrdQn</code> | <code>cefapirina:susceptibilidad:punto en el tiempo:cepa clínica:ordinal o cuantitativo:</code> | <code>C0942365</code> | <code>C0801894</code> | <code>2 proyecciones:hallazgo:punto en el tiempo:tobillo.izquierdo:Narrativo:radiografía</code> | * Loss: <code>__main__.CustomTripletLoss</code> with these parameters: ```json { "distance_metric": "TripletDistanceMetric.EUCLIDEAN", "triplet_margin": 5 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 50 - `learning_rate`: 2e-05 - `num_train_epochs`: 5 - `warmup_ratio`: 0.1 - `fp16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 50 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | |:------:|:------:|:-------------:| | 0.0003 | 1000 | 1.0069 | | 0.0005 | 2000 | 0.9728 | | 0.0008 | 3000 | 0.9549 | | 0.0011 | 4000 | 0.9217 | | 0.0013 | 5000 | 0.9116 | | 0.0016 | 6000 | 0.8662 | | 0.0019 | 7000 | 0.8412 | | 0.0021 | 8000 | 0.7979 | | 0.0024 | 9000 | 0.7829 | | 0.0027 | 10000 | 0.7578 | | 0.0029 | 11000 | 0.7402 | | 0.0032 | 12000 | 0.7069 | | 0.0035 | 13000 | 0.6906 | | 0.0037 | 14000 | 0.6644 | | 0.0040 | 15000 | 0.6516 | | 0.0043 | 16000 | 0.6344 | | 0.0045 | 17000 | 0.6395 | | 0.0048 | 18000 | 0.6082 | | 0.0051 | 19000 | 0.5944 | | 0.0053 | 20000 | 0.5955 | | 0.0056 | 21000 | 0.576 | | 0.0059 | 22000 | 0.5723 | | 0.0061 | 23000 | 0.5475 | | 0.0064 | 24000 | 0.5452 | | 0.0067 | 25000 | 0.5485 | | 0.0069 | 26000 | 0.5143 | | 0.0072 | 27000 | 0.5062 | | 0.0075 | 28000 | 0.5118 | | 0.0077 | 29000 | 0.4992 | | 0.0080 | 30000 | 0.5031 | | 0.0083 | 31000 | 0.4762 | | 0.0085 | 32000 | 0.4773 | | 0.0088 | 33000 | 0.4742 | | 0.0091 | 34000 | 0.4692 | | 0.0093 | 35000 | 0.464 | | 0.0096 | 36000 | 0.4687 | | 0.0099 | 37000 | 0.4592 | | 0.0101 | 38000 | 0.4468 | | 0.0104 | 39000 | 0.4425 | | 0.0107 | 40000 | 0.4477 | | 0.0109 | 41000 | 0.4336 | | 0.0112 | 42000 | 0.4331 | | 0.0115 | 43000 | 0.4248 | | 0.0117 | 44000 | 0.4189 | | 0.0120 | 45000 | 0.4147 | | 0.0123 | 46000 | 0.4112 | | 0.0125 | 47000 | 0.4051 | | 0.0128 | 48000 | 0.399 | | 0.0131 | 49000 | 0.3921 | | 0.0133 | 50000 | 0.3917 | | 0.0136 | 51000 | 0.4058 | | 0.0139 | 52000 | 0.3843 | | 0.0141 | 53000 | 0.3811 | | 0.0144 | 54000 | 0.3733 | | 0.0147 | 55000 | 0.3787 | | 0.0149 | 56000 | 0.3859 | | 0.0152 | 57000 | 0.3742 | | 0.0155 | 58000 | 0.3682 | | 0.0157 | 59000 | 0.3705 | | 0.0160 | 60000 | 0.3483 | | 0.0163 | 61000 | 0.3469 | | 0.0165 | 62000 | 0.3586 | | 0.0168 | 63000 | 0.3346 | | 0.0171 | 64000 | 0.3474 | | 0.0173 | 65000 | 0.3625 | | 0.0176 | 66000 | 0.3501 | | 0.0179 | 67000 | 0.3456 | | 0.0181 | 68000 | 0.3383 | | 0.0184 | 69000 | 0.3457 | | 0.0187 | 70000 | 0.3437 | | 0.0189 | 71000 | 0.3395 | | 0.0192 | 72000 | 0.3399 | | 0.0195 | 73000 | 0.324 | | 0.0197 | 74000 | 0.338 | | 0.0200 | 75000 | 0.3268 | | 0.0203 | 76000 | 0.3298 | | 0.0205 | 77000 | 0.3282 | | 0.0208 | 78000 | 0.3356 | | 0.0211 | 79000 | 0.3187 | | 0.0213 | 80000 | 0.3155 | | 0.0216 | 81000 | 0.3181 | | 0.0219 | 82000 | 0.3085 | | 0.0221 | 83000 | 0.3168 | | 0.0224 | 84000 | 0.3162 | | 0.0227 | 85000 | 0.3126 | | 0.0229 | 86000 | 0.3026 | | 0.0232 | 87000 | 0.3017 | | 0.0235 | 88000 | 0.2963 | | 0.0237 | 89000 | 0.3002 | | 0.0240 | 90000 | 0.297 | | 0.0243 | 91000 | 0.2993 | | 0.0245 | 92000 | 0.306 | | 0.0248 | 93000 | 0.2964 | | 0.0251 | 94000 | 0.2992 | | 0.0253 | 95000 | 0.2921 | | 0.0256 | 96000 | 0.3103 | | 0.0259 | 97000 | 0.2897 | | 0.0261 | 98000 | 0.2843 | | 0.0264 | 99000 | 0.2914 | | 0.0267 | 100000 | 0.2952 | | 0.0269 | 101000 | 0.2922 | | 0.0272 | 102000 | 0.2807 | | 0.0275 | 103000 | 0.2797 | | 0.0277 | 104000 | 0.2849 | | 0.0280 | 105000 | 0.2959 | | 0.0283 | 106000 | 0.2823 | | 0.0285 | 107000 | 0.2637 | | 0.0288 | 108000 | 0.2804 | | 0.0291 | 109000 | 0.2761 | | 0.0293 | 110000 | 0.2821 | | 0.0296 | 111000 | 0.2876 | | 0.0299 | 112000 | 0.2699 | | 0.0301 | 113000 | 0.2758 | | 0.0304 | 114000 | 0.2802 | | 0.0307 | 115000 | 0.2689 | | 0.0309 | 116000 | 0.2871 | | 0.0312 | 117000 | 0.2603 | | 0.0315 | 118000 | 0.2728 | | 0.0317 | 119000 | 0.2769 | | 0.0320 | 120000 | 0.2527 | | 0.0323 | 121000 | 0.2677 | | 0.0325 | 122000 | 0.2748 | | 0.0328 | 123000 | 0.2648 | | 0.0331 | 124000 | 0.2645 | | 0.0333 | 125000 | 0.2637 | | 0.0336 | 126000 | 0.2613 | | 0.0339 | 127000 | 0.261 | | 0.0341 | 128000 | 0.2568 | | 0.0344 | 129000 | 0.2611 | | 0.0347 | 130000 | 0.2486 | | 0.0349 | 131000 | 0.2535 | | 0.0352 | 132000 | 0.2525 | | 0.0355 | 133000 | 0.2457 | | 0.0357 | 134000 | 0.2545 | | 0.0360 | 135000 | 0.2596 | | 0.0363 | 136000 | 0.2505 | | 0.0365 | 137000 | 0.2454 | | 0.0368 | 138000 | 0.2696 | | 0.0371 | 139000 | 0.2567 | | 0.0373 | 140000 | 0.2517 | | 0.0376 | 141000 | 0.2436 | | 0.0379 | 142000 | 0.2452 | | 0.0381 | 143000 | 0.2427 | | 0.0384 | 144000 | 0.2525 | | 0.0387 | 145000 | 0.243 | | 0.0389 | 146000 | 0.2417 | | 0.0392 | 147000 | 0.2599 | | 0.0395 | 148000 | 0.246 | | 0.0397 | 149000 | 0.2379 | | 0.0400 | 150000 | 0.2449 | | 0.0403 | 151000 | 0.2333 | | 0.0405 | 152000 | 0.2399 | | 0.0408 | 153000 | 0.2409 | | 0.0411 | 154000 | 0.2407 | | 0.0413 | 155000 | 0.2369 | | 0.0416 | 156000 | 0.2361 | | 0.0419 | 157000 | 0.2331 | | 0.0421 | 158000 | 0.232 | | 0.0424 | 159000 | 0.2337 | | 0.0427 | 160000 | 0.2331 | | 0.0429 | 161000 | 0.2328 | | 0.0432 | 162000 | 0.2278 | | 0.0435 | 163000 | 0.2335 | | 0.0437 | 164000 | 0.2301 | | 0.0440 | 165000 | 0.2381 | | 0.0443 | 166000 | 0.2298 | | 0.0445 | 167000 | 0.2355 | | 0.0448 | 168000 | 0.2254 | | 0.0451 | 169000 | 0.2301 | | 0.0453 | 170000 | 0.2319 | | 0.0456 | 171000 | 0.2314 | | 0.0459 | 172000 | 0.236 | | 0.0461 | 173000 | 0.2348 | | 0.0464 | 174000 | 0.231 | | 0.0467 | 175000 | 0.2291 | | 0.0469 | 176000 | 0.2246 | | 0.0472 | 177000 | 0.2259 | | 0.0475 | 178000 | 0.2254 | | 0.0477 | 179000 | 0.2223 | | 0.0480 | 180000 | 0.2285 | | 0.0483 | 181000 | 0.2306 | | 0.0485 | 182000 | 0.2233 | | 0.0488 | 183000 | 0.2117 | | 0.0491 | 184000 | 0.2219 | | 0.0493 | 185000 | 0.2226 | | 0.0496 | 186000 | 0.2161 | | 0.0499 | 187000 | 0.2195 | | 0.0501 | 188000 | 0.2208 | | 0.0504 | 189000 | 0.2198 | | 0.0507 | 190000 | 0.2236 | | 0.0509 | 191000 | 0.2178 | | 0.0512 | 192000 | 0.2087 | | 0.0515 | 193000 | 0.2222 | | 0.0517 | 194000 | 0.211 | | 0.0520 | 195000 | 0.2287 | | 0.0523 | 196000 | 0.2219 | | 0.0525 | 197000 | 0.2096 | | 0.0528 | 198000 | 0.2112 | | 0.0531 | 199000 | 0.2108 | | 0.0533 | 200000 | 0.2098 | | 0.0536 | 201000 | 0.2176 | | 0.0539 | 202000 | 0.2118 | | 0.0541 | 203000 | 0.2248 | | 0.0544 | 204000 | 0.2124 | | 0.0547 | 205000 | 0.2133 | | 0.0549 | 206000 | 0.2101 | | 0.0552 | 207000 | 0.208 | | 0.0555 | 208000 | 0.2129 | | 0.0557 | 209000 | 0.208 | | 0.0560 | 210000 | 0.2093 | | 0.0563 | 211000 | 0.2123 | | 0.0565 | 212000 | 0.205 | | 0.0568 | 213000 | 0.2012 | | 0.0571 | 214000 | 0.2078 | | 0.0573 | 215000 | 0.2107 | | 0.0576 | 216000 | 0.206 | | 0.0579 | 217000 | 0.2055 | | 0.0581 | 218000 | 0.2067 | | 0.0584 | 219000 | 0.2143 | | 0.0587 | 220000 | 0.204 | | 0.0589 | 221000 | 0.2071 | | 0.0592 | 222000 | 0.2026 | | 0.0595 | 223000 | 0.1994 | | 0.0597 | 224000 | 0.2045 | | 0.0600 | 225000 | 0.2155 | | 0.0603 | 226000 | 0.2075 | | 0.0605 | 227000 | 0.195 | | 0.0608 | 228000 | 0.2028 | | 0.0611 | 229000 | 0.1973 | | 0.0613 | 230000 | 0.2034 | | 0.0616 | 231000 | 0.2039 | | 0.0619 | 232000 | 0.1937 | | 0.0621 | 233000 | 0.2 | | 0.0624 | 234000 | 0.1958 | | 0.0627 | 235000 | 0.1986 | | 0.0629 | 236000 | 0.1975 | | 0.0632 | 237000 | 0.2061 | | 0.0635 | 238000 | 0.2021 | | 0.0637 | 239000 | 0.1957 | | 0.0640 | 240000 | 0.1997 | | 0.0643 | 241000 | 0.1968 | | 0.0645 | 242000 | 0.1881 | | 0.0648 | 243000 | 0.2038 | | 0.0651 | 244000 | 0.1991 | | 0.0653 | 245000 | 0.1841 | | 0.0656 | 246000 | 0.1919 | | 0.0659 | 247000 | 0.187 | | 0.0661 | 248000 | 0.1889 | | 0.0664 | 249000 | 0.1987 | | 0.0667 | 250000 | 0.1992 | | 0.0669 | 251000 | 0.1913 | | 0.0672 | 252000 | 0.1995 | | 0.0675 | 253000 | 0.1875 | | 0.0677 | 254000 | 0.1923 | | 0.0680 | 255000 | 0.1773 | | 0.0683 | 256000 | 0.1869 | | 0.0685 | 257000 | 0.1975 | | 0.0688 | 258000 | 0.1865 | | 0.0691 | 259000 | 0.1889 | | 0.0693 | 260000 | 0.1896 | | 0.0696 | 261000 | 0.1829 | | 0.0699 | 262000 | 0.1843 | | 0.0701 | 263000 | 0.195 | | 0.0704 | 264000 | 0.1818 | | 0.0707 | 265000 | 0.1855 | | 0.0709 | 266000 | 0.1841 | | 0.0712 | 267000 | 0.1889 | | 0.0715 | 268000 | 0.1814 | | 0.0717 | 269000 | 0.1917 | | 0.0720 | 270000 | 0.1862 | | 0.0723 | 271000 | 0.1869 | | 0.0725 | 272000 | 0.1859 | | 0.0728 | 273000 | 0.182 | | 0.0731 | 274000 | 0.1896 | | 0.0733 | 275000 | 0.1936 | | 0.0736 | 276000 | 0.1846 | | 0.0739 | 277000 | 0.18 | | 0.0741 | 278000 | 0.1812 | | 0.0744 | 279000 | 0.1859 | | 0.0747 | 280000 | 0.1785 | | 0.0749 | 281000 | 0.1806 | | 0.0752 | 282000 | 0.182 | | 0.0755 | 283000 | 0.1848 | | 0.0757 | 284000 | 0.1798 | | 0.0760 | 285000 | 0.1853 | | 0.0763 | 286000 | 0.1834 | | 0.0765 | 287000 | 0.1815 | | 0.0768 | 288000 | 0.1819 | | 0.0771 | 289000 | 0.1808 | | 0.0773 | 290000 | 0.1851 | | 0.0776 | 291000 | 0.1823 | | 0.0779 | 292000 | 0.179 | | 0.0781 | 293000 | 0.1825 | | 0.0784 | 294000 | 0.1751 | | 0.0787 | 295000 | 0.1778 | | 0.0789 | 296000 | 0.1773 | | 0.0792 | 297000 | 0.1795 | | 0.0795 | 298000 | 0.1854 | | 0.0797 | 299000 | 0.1818 | | 0.0800 | 300000 | 0.1734 | | 0.0803 | 301000 | 0.1787 | | 0.0805 | 302000 | 0.1807 | | 0.0808 | 303000 | 0.1817 | | 0.0811 | 304000 | 0.1722 | | 0.0813 | 305000 | 0.1762 | | 0.0816 | 306000 | 0.1741 | | 0.0819 | 307000 | 0.1754 | | 0.0821 | 308000 | 0.1713 | | 0.0824 | 309000 | 0.1724 | | 0.0827 | 310000 | 0.1745 | | 0.0829 | 311000 | 0.1774 | | 0.0832 | 312000 | 0.1763 | | 0.0835 | 313000 | 0.1768 | | 0.0837 | 314000 | 0.1717 | | 0.0840 | 315000 | 0.1692 | | 0.0843 | 316000 | 0.1721 | | 0.0845 | 317000 | 0.1673 | | 0.0848 | 318000 | 0.1762 | | 0.0851 | 319000 | 0.1784 | | 0.0853 | 320000 | 0.1697 | | 0.0856 | 321000 | 0.172 | | 0.0859 | 322000 | 0.1658 | | 0.0861 | 323000 | 0.1761 | | 0.0864 | 324000 | 0.1729 | | 0.0867 | 325000 | 0.1672 | | 0.0869 | 326000 | 0.1671 | | 0.0872 | 327000 | 0.1685 | | 0.0875 | 328000 | 0.1729 | | 0.0877 | 329000 | 0.166 | | 0.0880 | 330000 | 0.1712 | | 0.0883 | 331000 | 0.1737 | | 0.0885 | 332000 | 0.1723 | | 0.0888 | 333000 | 0.1705 | | 0.0891 | 334000 | 0.1718 | | 0.0893 | 335000 | 0.1689 | | 0.0896 | 336000 | 0.1747 | | 0.0899 | 337000 | 0.1696 | | 0.0901 | 338000 | 0.1712 | | 0.0904 | 339000 | 0.1674 | | 0.0907 | 340000 | 0.1709 | | 0.0909 | 341000 | 0.169 | | 0.0912 | 342000 | 0.1714 | | 0.0915 | 343000 | 0.1544 | | 0.0917 | 344000 | 0.1755 | | 0.0920 | 345000 | 0.1689 | | 0.0923 | 346000 | 0.1561 | | 0.0925 | 347000 | 0.1712 | | 0.0928 | 348000 | 0.1583 | | 0.0931 | 349000 | 0.159 | | 0.0933 | 350000 | 0.1715 | | 0.0936 | 351000 | 0.1608 | | 0.0939 | 352000 | 0.1703 | | 0.0941 | 353000 | 0.1682 | | 0.0944 | 354000 | 0.1622 | | 0.0947 | 355000 | 0.1663 | | 0.0949 | 356000 | 0.1632 | | 0.0952 | 357000 | 0.1663 | | 0.0955 | 358000 | 0.1643 | | 0.0957 | 359000 | 0.1674 | | 0.0960 | 360000 | 0.1634 | | 0.0963 | 361000 | 0.1616 | | 0.0965 | 362000 | 0.1691 | | 0.0968 | 363000 | 0.1594 | | 0.0971 | 364000 | 0.1589 | | 0.0973 | 365000 | 0.1568 | | 0.0976 | 366000 | 0.1586 | | 0.0979 | 367000 | 0.1555 | | 0.0981 | 368000 | 0.161 | | 0.0984 | 369000 | 0.1615 | | 0.0987 | 370000 | 0.1691 | | 0.0989 | 371000 | 0.151 | | 0.0992 | 372000 | 0.1653 | | 0.0995 | 373000 | 0.1545 | | 0.0997 | 374000 | 0.1627 | | 0.1000 | 375000 | 0.1688 | | 0.1003 | 376000 | 0.1594 | | 0.1005 | 377000 | 0.1619 | | 0.1008 | 378000 | 0.1517 | | 0.1011 | 379000 | 0.1605 | | 0.1013 | 380000 | 0.1576 | | 0.1016 | 381000 | 0.1589 | | 0.1019 | 382000 | 0.1643 | | 0.1021 | 383000 | 0.164 | | 0.1024 | 384000 | 0.158 | | 0.1027 | 385000 | 0.1584 | | 0.1029 | 386000 | 0.1565 | | 0.1032 | 387000 | 0.1566 | | 0.1035 | 388000 | 0.1625 | | 0.1037 | 389000 | 0.1569 | | 0.1040 | 390000 | 0.159 | | 0.1043 | 391000 | 0.1541 | | 0.1045 | 392000 | 0.159 | | 0.1048 | 393000 | 0.1536 | | 0.1051 | 394000 | 0.166 | | 0.1053 | 395000 | 0.1639 | | 0.1056 | 396000 | 0.1491 | | 0.1059 | 397000 | 0.1567 | | 0.1061 | 398000 | 0.1566 | | 0.1064 | 399000 | 0.1641 | | 0.1067 | 400000 | 0.1552 | | 0.1069 | 401000 | 0.1476 | | 0.1072 | 402000 | 0.157 | | 0.1075 | 403000 | 0.1538 | | 0.1077 | 404000 | 0.152 | | 0.1080 | 405000 | 0.1525 | | 0.1083 | 406000 | 0.155 | | 0.1085 | 407000 | 0.1538 | | 0.1088 | 408000 | 0.1506 | | 0.1091 | 409000 | 0.1481 | | 0.1093 | 410000 | 0.1603 | | 0.1096 | 411000 | 0.1509 | | 0.1099 | 412000 | 0.1628 | | 0.1101 | 413000 | 0.151 | | 0.1104 | 414000 | 0.1581 | | 0.1107 | 415000 | 0.1511 | | 0.1109 | 416000 | 0.1552 | | 0.1112 | 417000 | 0.1553 | | 0.1115 | 418000 | 0.1508 | | 0.1117 | 419000 | 0.1515 | | 0.1120 | 420000 | 0.1526 | | 0.1123 | 421000 | 0.15 | | 0.1125 | 422000 | 0.1497 | | 0.1128 | 423000 | 0.1526 | | 0.1131 | 424000 | 0.1547 | | 0.1133 | 425000 | 0.151 | | 0.1136 | 426000 | 0.1471 | | 0.1139 | 427000 | 0.1576 | | 0.1141 | 428000 | 0.1522 | | 0.1144 | 429000 | 0.1506 | | 0.1147 | 430000 | 0.1495 | | 0.1149 | 431000 | 0.1518 | | 0.1152 | 432000 | 0.1467 | | 0.1155 | 433000 | 0.1511 | | 0.1157 | 434000 | 0.1516 | | 0.1160 | 435000 | 0.1476 | | 0.1163 | 436000 | 0.1526 | | 0.1165 | 437000 | 0.1474 | | 0.1168 | 438000 | 0.1445 | | 0.1171 | 439000 | 0.1408 | | 0.1173 | 440000 | 0.1412 | | 0.1176 | 441000 | 0.1445 | | 0.1179 | 442000 | 0.145 | | 0.1181 | 443000 | 0.1402 | | 0.1184 | 444000 | 0.154 | | 0.1187 | 445000 | 0.1446 | | 0.1189 | 446000 | 0.1476 | | 0.1192 | 447000 | 0.1565 | | 0.1195 | 448000 | 0.1409 | | 0.1197 | 449000 | 0.1511 | | 0.1200 | 450000 | 0.139 | | 0.1203 | 451000 | 0.1463 | | 0.1205 | 452000 | 0.1453 | | 0.1208 | 453000 | 0.1432 | | 0.1211 | 454000 | 0.1559 | | 0.1213 | 455000 | 0.1354 | | 0.1216 | 456000 | 0.1419 | | 0.1219 | 457000 | 0.1452 | | 0.1221 | 458000 | 0.147 | | 0.1224 | 459000 | 0.1453 | | 0.1227 | 460000 | 0.153 | | 0.1229 | 461000 | 0.1496 | | 0.1232 | 462000 | 0.1464 | | 0.1235 | 463000 | 0.1423 | | 0.1237 | 464000 | 0.1403 | | 0.1240 | 465000 | 0.1458 | | 0.1243 | 466000 | 0.1508 | | 0.1245 | 467000 | 0.1442 | | 0.1248 | 468000 | 0.1521 | | 0.1251 | 469000 | 0.1424 | | 0.1253 | 470000 | 0.1545 | | 0.1256 | 471000 | 0.1389 | | 0.1259 | 472000 | 0.1408 | | 0.1261 | 473000 | 0.1398 | | 0.1264 | 474000 | 0.1333 | | 0.1267 | 475000 | 0.1436 | | 0.1269 | 476000 | 0.1423 | | 0.1272 | 477000 | 0.1393 | | 0.1275 | 478000 | 0.1465 | | 0.1277 | 479000 | 0.1484 | | 0.1280 | 480000 | 0.1412 | | 0.1283 | 481000 | 0.143 | | 0.1285 | 482000 | 0.139 | | 0.1288 | 483000 | 0.1447 | | 0.1291 | 484000 | 0.1388 | | 0.1293 | 485000 | 0.1414 | | 0.1296 | 486000 | 0.1444 | | 0.1299 | 487000 | 0.1365 | | 0.1301 | 488000 | 0.1403 | | 0.1304 | 489000 | 0.1398 | | 0.1307 | 490000 | 0.1302 | | 0.1309 | 491000 | 0.1443 | | 0.1312 | 492000 | 0.1402 | | 0.1315 | 493000 | 0.1451 | | 0.1317 | 494000 | 0.1397 | | 0.1320 | 495000 | 0.137 | | 0.1323 | 496000 | 0.1493 | | 0.1325 | 497000 | 0.1415 | | 0.1328 | 498000 | 0.1365 | | 0.1331 | 499000 | 0.1323 | | 0.1333 | 500000 | 0.1384 | | 0.1336 | 501000 | 0.1307 | | 0.1339 | 502000 | 0.1385 | | 0.1341 | 503000 | 0.1394 | | 0.1344 | 504000 | 0.1393 | | 0.1347 | 505000 | 0.1455 | | 0.1349 | 506000 | 0.1374 | | 0.1352 | 507000 | 0.1381 | | 0.1355 | 508000 | 0.1363 | | 0.1357 | 509000 | 0.1392 | | 0.1360 | 510000 | 0.1399 | | 0.1363 | 511000 | 0.1356 | | 0.1365 | 512000 | 0.1395 | | 0.1368 | 513000 | 0.1402 | | 0.1371 | 514000 | 0.1382 | | 0.1373 | 515000 | 0.1408 | | 0.1376 | 516000 | 0.1398 | | 0.1379 | 517000 | 0.1405 | | 0.1381 | 518000 | 0.1351 | | 0.1384 | 519000 | 0.1371 | | 0.1387 | 520000 | 0.1302 | | 0.1389 | 521000 | 0.14 | | 0.1392 | 522000 | 0.1363 | | 0.1395 | 523000 | 0.1313 | | 0.1397 | 524000 | 0.1299 | | 0.1400 | 525000 | 0.1372 | | 0.1403 | 526000 | 0.1416 | | 0.1405 | 527000 | 0.1295 | | 0.1408 | 528000 | 0.1359 | | 0.1411 | 529000 | 0.1383 | | 0.1413 | 530000 | 0.1378 | | 0.1416 | 531000 | 0.135 | | 0.1419 | 532000 | 0.1405 | | 0.1421 | 533000 | 0.14 | | 0.1424 | 534000 | 0.1321 | | 0.1427 | 535000 | 0.1303 | | 0.1429 | 536000 | 0.1319 | | 0.1432 | 537000 | 0.1312 | | 0.1435 | 538000 | 0.1338 | | 0.1437 | 539000 | 0.1361 | | 0.1440 | 540000 | 0.139 | | 0.1443 | 541000 | 0.1364 | | 0.1445 | 542000 | 0.1316 | | 0.1448 | 543000 | 0.1331 | | 0.1451 | 544000 | 0.1269 | | 0.1453 | 545000 | 0.1294 | | 0.1456 | 546000 | 0.135 | | 0.1459 | 547000 | 0.1328 | | 0.1461 | 548000 | 0.1296 | | 0.1464 | 549000 | 0.1305 | | 0.1467 | 550000 | 0.1334 | | 0.1469 | 551000 | 0.1362 | | 0.1472 | 552000 | 0.1318 | | 0.1475 | 553000 | 0.1312 | | 0.1477 | 554000 | 0.1293 | | 0.1480 | 555000 | 0.1324 | | 0.1483 | 556000 | 0.1256 | | 0.1485 | 557000 | 0.1227 | | 0.1488 | 558000 | 0.1239 | | 0.1491 | 559000 | 0.1287 | | 0.1493 | 560000 | 0.1307 | | 0.1496 | 561000 | 0.1336 | | 0.1499 | 562000 | 0.133 | | 0.1501 | 563000 | 0.1278 | | 0.1504 | 564000 | 0.1339 | | 0.1507 | 565000 | 0.1321 | | 0.1509 | 566000 | 0.1322 | | 0.1512 | 567000 | 0.1262 | | 0.1515 | 568000 | 0.1331 | | 0.1517 | 569000 | 0.1361 | | 0.1520 | 570000 | 0.1307 | | 0.1523 | 571000 | 0.133 | | 0.1525 | 572000 | 0.1293 | | 0.1528 | 573000 | 0.1283 | | 0.1531 | 574000 | 0.1275 | | 0.1533 | 575000 | 0.1329 | | 0.1536 | 576000 | 0.1307 | | 0.1539 | 577000 | 0.1245 | | 0.1541 | 578000 | 0.1313 | | 0.1544 | 579000 | 0.1256 | | 0.1547 | 580000 | 0.1257 | | 0.1549 | 581000 | 0.1194 | | 0.1552 | 582000 | 0.125 | | 0.1555 | 583000 | 0.1345 | | 0.1557 | 584000 | 0.1308 | | 0.1560 | 585000 | 0.1318 | | 0.1563 | 586000 | 0.1348 | | 0.1565 | 587000 | 0.1231 | | 0.1568 | 588000 | 0.1282 | | 0.1571 | 589000 | 0.1281 | | 0.1573 | 590000 | 0.1221 | | 0.1576 | 591000 | 0.1234 | | 0.1579 | 592000 | 0.1334 | | 0.1581 | 593000 | 0.1249 | | 0.1584 | 594000 | 0.1216 | | 0.1587 | 595000 | 0.1295 | | 0.1589 | 596000 | 0.1191 | | 0.1592 | 597000 | 0.1267 | | 0.1595 | 598000 | 0.1273 | | 0.1597 | 599000 | 0.124 | | 0.1600 | 600000 | 0.1271 | | 0.1603 | 601000 | 0.1284 | | 0.1605 | 602000 | 0.1285 | | 0.1608 | 603000 | 0.1288 | | 0.1611 | 604000 | 0.1252 | | 0.1613 | 605000 | 0.1255 | | 0.1616 | 606000 | 0.1289 | | 0.1619 | 607000 | 0.1294 | | 0.1621 | 608000 | 0.1294 | | 0.1624 | 609000 | 0.1288 | | 0.1627 | 610000 | 0.1336 | | 0.1629 | 611000 | 0.125 | | 0.1632 | 612000 | 0.1288 | | 0.1635 | 613000 | 0.122 | | 0.1637 | 614000 | 0.1204 | | 0.1640 | 615000 | 0.1245 | | 0.1643 | 616000 | 0.1303 | | 0.1645 | 617000 | 0.1187 | | 0.1648 | 618000 | 0.1223 | | 0.1651 | 619000 | 0.1311 | | 0.1653 | 620000 | 0.1202 | | 0.1656 | 621000 | 0.1271 | | 0.1659 | 622000 | 0.1218 | | 0.1661 | 623000 | 0.1218 | | 0.1664 | 624000 | 0.1247 | | 0.1667 | 625000 | 0.1289 | | 0.1669 | 626000 | 0.1261 | | 0.1672 | 627000 | 0.1262 | | 0.1675 | 628000 | 0.1251 | | 0.1677 | 629000 | 0.1271 | | 0.1680 | 630000 | 0.1243 | | 0.1683 | 631000 | 0.1266 | | 0.1685 | 632000 | 0.1257 | | 0.1688 | 633000 | 0.1215 | | 0.1691 | 634000 | 0.1236 | | 0.1693 | 635000 | 0.1267 | | 0.1696 | 636000 | 0.1209 | | 0.1699 | 637000 | 0.1188 | | 0.1701 | 638000 | 0.1267 | | 0.1704 | 639000 | 0.1259 | | 0.1707 | 640000 | 0.1225 | | 0.1709 | 641000 | 0.1183 | | 0.1712 | 642000 | 0.1202 | | 0.1715 | 643000 | 0.1279 | | 0.1717 | 644000 | 0.1191 | | 0.1720 | 645000 | 0.1206 | | 0.1723 | 646000 | 0.1178 | | 0.1725 | 647000 | 0.1234 | | 0.1728 | 648000 | 0.1259 | | 0.1731 | 649000 | 0.1227 | | 0.1733 | 650000 | 0.1211 | | 0.1736 | 651000 | 0.1216 | | 0.1739 | 652000 | 0.1182 | | 0.1741 | 653000 | 0.1205 | | 0.1744 | 654000 | 0.1187 | | 0.1747 | 655000 | 0.1144 | | 0.1749 | 656000 | 0.1216 | | 0.1752 | 657000 | 0.1287 | | 0.1755 | 658000 | 0.122 | | 0.1757 | 659000 | 0.1213 | | 0.1760 | 660000 | 0.1217 | | 0.1763 | 661000 | 0.1256 | | 0.1765 | 662000 | 0.1227 | | 0.1768 | 663000 | 0.1219 | | 0.1771 | 664000 | 0.1261 | | 0.1773 | 665000 | 0.1169 | | 0.1776 | 666000 | 0.1192 | | 0.1779 | 667000 | 0.1187 | | 0.1781 | 668000 | 0.1117 | | 0.1784 | 669000 | 0.1189 | | 0.1787 | 670000 | 0.12 | | 0.1789 | 671000 | 0.1204 | | 0.1792 | 672000 | 0.1208 | | 0.1795 | 673000 | 0.119 | | 0.1797 | 674000 | 0.1161 | | 0.1800 | 675000 | 0.1167 | | 0.1803 | 676000 | 0.1235 | | 0.1805 | 677000 | 0.1276 | | 0.1808 | 678000 | 0.1188 | | 0.1811 | 679000 | 0.1135 | | 0.1813 | 680000 | 0.1187 | | 0.1816 | 681000 | 0.1165 | | 0.1819 | 682000 | 0.1224 | | 0.1821 | 683000 | 0.125 | | 0.1824 | 684000 | 0.1146 | | 0.1827 | 685000 | 0.1162 | | 0.1829 | 686000 | 0.1172 | | 0.1832 | 687000 | 0.1197 | | 0.1835 | 688000 | 0.113 | | 0.1837 | 689000 | 0.1216 | | 0.1840 | 690000 | 0.1144 | | 0.1843 | 691000 | 0.1274 | | 0.1845 | 692000 | 0.1136 | | 0.1848 | 693000 | 0.1202 | | 0.1851 | 694000 | 0.1249 | | 0.1853 | 695000 | 0.1195 | | 0.1856 | 696000 | 0.1158 | | 0.1859 | 697000 | 0.1145 | | 0.1861 | 698000 | 0.1187 | | 0.1864 | 699000 | 0.1173 | | 0.1867 | 700000 | 0.1181 | | 0.1869 | 701000 | 0.1236 | | 0.1872 | 702000 | 0.1223 | | 0.1875 | 703000 | 0.1147 | | 0.1877 | 704000 | 0.1197 | | 0.1880 | 705000 | 0.1125 | | 0.1883 | 706000 | 0.1175 | | 0.1885 | 707000 | 0.1239 | | 0.1888 | 708000 | 0.1263 | | 0.1891 | 709000 | 0.1229 | | 0.1893 | 710000 | 0.1202 | | 0.1896 | 711000 | 0.1159 | | 0.1899 | 712000 | 0.1232 | | 0.1901 | 713000 | 0.1197 | | 0.1904 | 714000 | 0.121 | | 0.1907 | 715000 | 0.1189 | | 0.1909 | 716000 | 0.1183 | | 0.1912 | 717000 | 0.1091 | | 0.1915 | 718000 | 0.1186 | | 0.1917 | 719000 | 0.115 | | 0.1920 | 720000 | 0.1146 | | 0.1923 | 721000 | 0.1165 | | 0.1925 | 722000 | 0.1192 | | 0.1928 | 723000 | 0.1163 | | 0.1931 | 724000 | 0.1162 | | 0.1933 | 725000 | 0.1156 | | 0.1936 | 726000 | 0.1218 | | 0.1939 | 727000 | 0.1154 | | 0.1941 | 728000 | 0.1131 | | 0.1944 | 729000 | 0.118 | | 0.1947 | 730000 | 0.1156 | | 0.1949 | 731000 | 0.1193 | | 0.1952 | 732000 | 0.1143 | | 0.1955 | 733000 | 0.1211 | | 0.1957 | 734000 | 0.1187 | | 0.1960 | 735000 | 0.12 | | 0.1963 | 736000 | 0.1164 | | 0.1965 | 737000 | 0.1173 | | 0.1968 | 738000 | 0.1151 | | 0.1971 | 739000 | 0.1143 | | 0.1973 | 740000 | 0.1141 | | 0.1976 | 741000 | 0.1174 | | 0.1979 | 742000 | 0.1185 | | 0.1981 | 743000 | 0.1133 | | 0.1984 | 744000 | 0.1174 | | 0.1987 | 745000 | 0.1154 | | 0.1989 | 746000 | 0.1138 | | 0.1992 | 747000 | 0.1203 | | 0.1995 | 748000 | 0.1119 | | 0.1997 | 749000 | 0.111 | | 0.2000 | 750000 | 0.1174 | | 0.2003 | 751000 | 0.1204 | | 0.2005 | 752000 | 0.1177 | | 0.2008 | 753000 | 0.1139 | | 0.2011 | 754000 | 0.1138 | | 0.2013 | 755000 | 0.1179 | | 0.2016 | 756000 | 0.1094 | | 0.2019 | 757000 | 0.1092 | | 0.2021 | 758000 | 0.1108 | | 0.2024 | 759000 | 0.1125 | | 0.2027 | 760000 | 0.1202 | | 0.2029 | 761000 | 0.1119 | | 0.2032 | 762000 | 0.1151 | | 0.2035 | 763000 | 0.1169 | | 0.2037 | 764000 | 0.1109 | | 0.2040 | 765000 | 0.1112 | | 0.2043 | 766000 | 0.1102 | | 0.2045 | 767000 | 0.119 | | 0.2048 | 768000 | 0.1131 | | 0.2051 | 769000 | 0.1155 | | 0.2053 | 770000 | 0.1133 | | 0.2056 | 771000 | 0.1127 | | 0.2059 | 772000 | 0.1116 | | 0.2061 | 773000 | 0.1122 | | 0.2064 | 774000 | 0.1151 | | 0.2067 | 775000 | 0.1163 | | 0.2069 | 776000 | 0.1162 | | 0.2072 | 777000 | 0.1096 | | 0.2075 | 778000 | 0.1151 | | 0.2077 | 779000 | 0.1156 | | 0.2080 | 780000 | 0.1135 | | 0.2083 | 781000 | 0.1084 | | 0.2085 | 782000 | 0.114 | | 0.2088 | 783000 | 0.1128 | | 0.2091 | 784000 | 0.1142 | | 0.2093 | 785000 | 0.1092 | | 0.2096 | 786000 | 0.1067 | | 0.2099 | 787000 | 0.1156 | | 0.2101 | 788000 | 0.1094 | | 0.2104 | 789000 | 0.1078 | | 0.2107 | 790000 | 0.1133 | | 0.2109 | 791000 | 0.1165 | | 0.2112 | 792000 | 0.1116 | | 0.2115 | 793000 | 0.1111 | | 0.2117 | 794000 | 0.1086 | | 0.2120 | 795000 | 0.1114 | | 0.2123 | 796000 | 0.1069 | | 0.2125 | 797000 | 0.1094 | | 0.2128 | 798000 | 0.1125 | | 0.2131 | 799000 | 0.112 | | 0.2133 | 800000 | 0.1107 | | 0.2136 | 801000 | 0.1085 | | 0.2139 | 802000 | 0.1067 | | 0.2141 | 803000 | 0.1149 | | 0.2144 | 804000 | 0.1068 | | 0.2147 | 805000 | 0.1124 | | 0.2149 | 806000 | 0.1109 | | 0.2152 | 807000 | 0.1094 | | 0.2155 | 808000 | 0.1097 | | 0.2157 | 809000 | 0.1106 | | 0.2160 | 810000 | 0.1152 | | 0.2163 | 811000 | 0.1123 | | 0.2165 | 812000 | 0.1102 | | 0.2168 | 813000 | 0.11 | | 0.2171 | 814000 | 0.1 | | 0.2173 | 815000 | 0.1127 | | 0.2176 | 816000 | 0.1135 | | 0.2179 | 817000 | 0.1127 | | 0.2181 | 818000 | 0.108 | | 0.2184 | 819000 | 0.1119 | | 0.2187 | 820000 | 0.1103 | | 0.2189 | 821000 | 0.1084 | | 0.2192 | 822000 | 0.1076 | | 0.2195 | 823000 | 0.1145 | | 0.2197 | 824000 | 0.109 | | 0.2200 | 825000 | 0.1119 | | 0.2203 | 826000 | 0.1117 | | 0.2205 | 827000 | 0.1117 | | 0.2208 | 828000 | 0.1062 | | 0.2211 | 829000 | 0.1113 | | 0.2213 | 830000 | 0.1101 | | 0.2216 | 831000 | 0.1053 | | 0.2219 | 832000 | 0.1122 | | 0.2221 | 833000 | 0.1091 | | 0.2224 | 834000 | 0.1106 | | 0.2227 | 835000 | 0.1062 | | 0.2229 | 836000 | 0.1091 | | 0.2232 | 837000 | 0.1144 | | 0.2235 | 838000 | 0.1106 | | 0.2237 | 839000 | 0.1058 | | 0.2240 | 840000 | 0.1085 | | 0.2243 | 841000 | 0.1154 | | 0.2245 | 842000 | 0.1096 | | 0.2248 | 843000 | 0.1062 | | 0.2251 | 844000 | 0.1089 | | 0.2253 | 845000 | 0.108 | | 0.2256 | 846000 | 0.1086 | | 0.2259 | 847000 | 0.1084 | | 0.2261 | 848000 | 0.1056 | | 0.2264 | 849000 | 0.1042 | | 0.2267 | 850000 | 0.1204 | | 0.2269 | 851000 | 0.1053 | | 0.2272 | 852000 | 0.1053 | | 0.2275 | 853000 | 0.1065 | | 0.2277 | 854000 | 0.1157 | | 0.2280 | 855000 | 0.1112 | | 0.2283 | 856000 | 0.1058 | | 0.2285 | 857000 | 0.1084 | | 0.2288 | 858000 | 0.1066 | | 0.2291 | 859000 | 0.1116 | | 0.2293 | 860000 | 0.1047 | | 0.2296 | 861000 | 0.1145 | | 0.2299 | 862000 | 0.1094 | | 0.2301 | 863000 | 0.1108 | | 0.2304 | 864000 | 0.1038 | | 0.2307 | 865000 | 0.1044 | | 0.2309 | 866000 | 0.106 | | 0.2312 | 867000 | 0.105 | | 0.2315 | 868000 | 0.108 | | 0.2317 | 869000 | 0.1108 | | 0.2320 | 870000 | 0.113 | | 0.2323 | 871000 | 0.108 | | 0.2325 | 872000 | 0.1069 | | 0.2328 | 873000 | 0.1098 | | 0.2331 | 874000 | 0.1021 | | 0.2333 | 875000 | 0.109 | | 0.2336 | 876000 | 0.1104 | | 0.2339 | 877000 | 0.1043 | | 0.2341 | 878000 | 0.1057 | | 0.2344 | 879000 | 0.105 | | 0.2347 | 880000 | 0.1042 | | 0.2349 | 881000 | 0.1116 | | 0.2352 | 882000 | 0.1151 | | 0.2355 | 883000 | 0.1043 | | 0.2357 | 884000 | 0.1023 | | 0.2360 | 885000 | 0.1084 | | 0.2363 | 886000 | 0.1103 | | 0.2365 | 887000 | 0.1028 | | 0.2368 | 888000 | 0.1055 | | 0.2371 | 889000 | 0.1023 | | 0.2373 | 890000 | 0.1099 | | 0.2376 | 891000 | 0.1037 | | 0.2379 | 892000 | 0.1068 | | 0.2381 | 893000 | 0.1128 | | 0.2384 | 894000 | 0.1023 | | 0.2387 | 895000 | 0.1023 | | 0.2389 | 896000 | 0.106 | | 0.2392 | 897000 | 0.1005 | | 0.2395 | 898000 | 0.1013 | | 0.2397 | 899000 | 0.1131 | | 0.2400 | 900000 | 0.107 | | 0.2403 | 901000 | 0.1096 | | 0.2405 | 902000 | 0.0963 | | 0.2408 | 903000 | 0.1076 | | 0.2411 | 904000 | 0.102 | | 0.2413 | 905000 | 0.1147 | | 0.2416 | 906000 | 0.1111 | | 0.2419 | 907000 | 0.1035 | | 0.2421 | 908000 | 0.1059 | | 0.2424 | 909000 | 0.1037 | | 0.2427 | 910000 | 0.1047 | | 0.2429 | 911000 | 0.1049 | | 0.2432 | 912000 | 0.1097 | | 0.2435 | 913000 | 0.1062 | | 0.2437 | 914000 | 0.1016 | | 0.2440 | 915000 | 0.1061 | | 0.2443 | 916000 | 0.1089 | | 0.2445 | 917000 | 0.1032 | | 0.2448 | 918000 | 0.1053 | | 0.2451 | 919000 | 0.1075 | | 0.2453 | 920000 | 0.1048 | | 0.2456 | 921000 | 0.1007 | | 0.2459 | 922000 | 0.11 | | 0.2461 | 923000 | 0.1034 | | 0.2464 | 924000 | 0.1059 | | 0.2467 | 925000 | 0.1063 | | 0.2469 | 926000 | 0.1051 | | 0.2472 | 927000 | 0.1064 | | 0.2475 | 928000 | 0.0986 | | 0.2477 | 929000 | 0.1037 | | 0.2480 | 930000 | 0.1093 | | 0.2483 | 931000 | 0.102 | | 0.2485 | 932000 | 0.0985 | | 0.2488 | 933000 | 0.1023 | | 0.2491 | 934000 | 0.104 | | 0.2493 | 935000 | 0.1108 | | 0.2496 | 936000 | 0.1061 | | 0.2499 | 937000 | 0.1053 | </details> ### Framework Versions - Python: 3.12.2 - Sentence Transformers: 3.2.1 - Transformers: 4.45.2 - PyTorch: 2.5.0 - Accelerate: 1.0.1 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CustomTripletLoss ```bibtex @misc{hermans2017defense, title={In Defense of the Triplet Loss for Person Re-Identification}, author={Alexander Hermans and Lucas Beyer and Bastian Leibe}, year={2017}, eprint={1703.07737}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION" ]
[ "PCR" ]
BioNLP
Cloyne/sup-SimCSE-VietNamese-phobert-base
Cloyne
sentence-similarity
[ "sentence-transformers", "safetensors", "roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:120210", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:VoVanPhuc/sup-SimCSE-VietNamese-phobert-base", "base_model:finetune:VoVanPhuc/sup-SimCSE-VietNamese-phobert-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,730
1,730
682
0
--- base_model: VoVanPhuc/sup-SimCSE-VietNamese-phobert-base library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:120210 - loss:MultipleNegativesRankingLoss widget: - source_sentence: Chủ tịch Ủy ban nhân dân xã có quyền ra quyết định cưỡng chế tháo dỡ công trình xây dựng trên đất nông nghiệp khi chưa chuyển mục đích sử dụng đất hay không? sentences: - 'Đối tượng, điều kiện kéo dài tuổi phục vụ tại ngũ 1. Đối tượng: a) Quân nhân chuyên nghiệp có trình độ cao đẳng trở lên đang đảm nhiệm các chức danh: Kỹ thuật viên, Nhân viên Kỹ thuật, Huấn luyện viên, Nghệ sĩ, Nhạc sĩ, Diễn viên làm việc đúng chuyên ngành đào tạo ở các cơ sở nghiên cứu, nhà trường, bệnh viện, trung tâm thể dục thể thao, đoàn nghệ thuật, nhà máy, doanh nghiệp quốc phòng; đơn vị đóng quân ở địa bàn vùng sâu, vùng xa, biên giới, hải đảo. b) Quân nhân chuyên nghiệp đang làm việc thuộc các chuyên ngành hẹp được đào tạo công phu hoặc chuyên ngành Quân đội chưa đào tạo được; thợ bậc cao. c) Quân nhân chuyên nghiệp đang đảm nhiệm chức vụ chỉ huy, quản lý ở các nhà máy, doanh nghiệp quốc phòng. d) Quân nhân chuyên nghiệp không thuộc đối tượng quy định tại điểm a, điểm b, điểm c khoản này do Bộ trưởng Bộ Quốc phòng quyết định. 2. Điều kiện: Quân nhân chuyên nghiệp thuộc đối tượng quy định tại khoản 1 Điều này được kéo dài tuổi phục vụ tại ngũ khi có đủ các điều kiện sau: a) Đơn vị có biên chế và nhu cầu sử dụng; b) Hết hạn tuổi phục vụ tại ngũ cao nhất theo cấp bậc quân hàm quy định tại khoản 2 Điều 17 Luật Quân nhân chuyên nghiệp, công nhân và viên chức quốc phòng; chưa có người thay thế; tự nguyện tiếp tục phục vụ tại ngũ; c) Có đủ phẩm chất chính trị, đạo đức, sức khỏe để hoàn thành nhiệm vụ được giao; d) Có trình độ chuyên môn kỹ thuật, nghiệp vụ giỏi; tay nghề cao; chất lượng, hiệu quả công tác tốt.' - 'Thi hành quyết định cưỡng chế 1. Người ra quyết định cưỡng chế có trách nhiệm gửi ngay quyết định cưỡng chế cho các cá nhân, tổ chức liên quan và tổ chức thực hiện việc cưỡng chế thi hành quyết định xử phạt của mình và của cấp dưới. ..."' - 'Trình tự, thủ tục đăng ký tài khoản định danh điện tử đối với công dân Việt Nam 1. Đăng ký tài khoản định danh điện tử mức độ 1 qua ứng dụng VNelD đối với công dân đã có thẻ Căn cước công dân gắn chíp điện tử a) Công dân sử dụng thiết bị di động tải và cài đặt ứng dụng VNelD. b) Công dân sử dụng ứng dụng VNelD để nhập thông tin về số định danh cá nhân và số điện thoại hoặc địa chỉ thư điện tử; cung cấp các thông tin theo hướng dẫn trên ứng dụng VNelD; thu nhận ảnh chân dung bằng thiết bị di động và gửi yêu cầu đề nghị cấp tài khoản định danh điện tử tới cơ quan quản lý định danh và xác thực điện tử qua ứng dụng VNelD. c) Cơ quan quản lý định danh điện tử thông báo kết quả đăng ký tài khoản qua ứng dụng VNelD hoặc tin nhắn SMS hoặc địa chỉ thư điện tử. 2. Đăng ký tài khoản định danh điện tử mức độ 2 a) Đối với công dân đã được cấp thẻ Căn cước công dân gắn chíp điện tử: Công dân đến Công an xã, phường, thị trấn hoặc nơi làm thủ tục cấp thẻ Căn cước công dân để làm thủ tục cấp tài khoản định danh điện tử. Công dân xuất trình thẻ Căn cước công dân gắn chíp điện tử, cung cấp thông tin về số điện thoại hoặc địa chỉ thư điện tử và đề nghị bổ sung thông tin được tích hợp vào tài khoản định danh điện tử. Cán bộ tiếp nhận nhập thông tin công dân cung cấp vào hệ thống định danh và xác thực điện tử; chụp ảnh chân dung, thu nhận vân tay của công dân đến làm thủ tục để xác thực với Cơ sở dữ liệu căn cước công dân và khẳng định sự đồng ý đăng ký tạo lập tài khoản định danh điện tử. Cơ quan quản lý định danh điện tử thông báo kết quả đăng ký tài khoản qua ứng dụng VNelD hoặc tin nhắn SMS hoặc địa chỉ thư điện tử. b) Cơ quan Công an tiến hành cấp tài khoản định danh điện tử mức độ 2 cùng với cấp thẻ Căn cước công dân với trường hợp công dân chưa được cấp Căn cước công dân gắn chíp điện tử.' - source_sentence: Mức hưởng chế độ thai sản đối với lao động nam là người nước ngoài được pháp luật quy định như thế nào? sentences: - '"Điều 21. Thông báo kết quả và xác nhận nhập học 1. Cơ sở đào tạo gửi giấy báo trúng tuyển cho những thí sinh trúng tuyển, trong đó ghi rõ những thủ tục cần thiết đối với thí sinh khi nhập học và phương thức nhập học của thí sinh. 2. Thí sinh xác nhận nhập học bằng hình thức trực tuyến trên hệ thống, trước khi nhập học tại cơ sở đào tạo. 3. Đối với những thí sinh không xác nhận nhập học trong thời hạn quy định: a) Nếu không có lý do chính đáng thì coi như thí sinh từ chối nhập học và cơ sở đào tạo có quyền không tiếp nhận; b) Nếu do ốm đau, tai nạn, có giấy xác nhận của bệnh viện quận, huyện trở lên hoặc do thiên tai có xác nhận của UBND quận, huyện trở lên, cơ sở đào tạo xem xét quyết định tiếp nhận thí sinh vào học hoặc bảo lưu kết quả tuyển sinh để thí sinh vào học sau; c) Nếu do sai sót, nhầm lẫn của cán bộ thực hiện công tác tuyển sinh hoặc cá nhân thí sinh gây ra, cơ sở đào tạo chủ động phối hợp với các cá nhân, tổ chức liên quan xem xét các minh chứng và quyết định việc tiếp nhận thí sinh vào học hoặc bảo lưu kết quả tuyển sinh để thí sinh vào học sau. 4. Thí sinh đã xác nhận nhập học tại một cơ sở đào tạo không được tham gia xét tuyển ở nơi khác hoặc ở các đợt xét tuyển bổ sung, trừ trường hợp được cơ sở đào tạo cho phép."' - 'Tổ chức, nhiệm vụ, quyền hạn của Ban Chỉ huy ... 2. Nhiệm vụ, quyền hạn của Ban Chỉ huy: a) Chỉ đạo xây dựng, ban hành quy định về công tác bảo đảm an toàn PCCC và CNCH tại Trụ sở cơ quan Bộ Tư pháp. b) Hướng dẫn, phối hợp với các đơn vị thuộc Bộ và chỉ đạo Đội PCCC và CNCH cơ sở tổ chức tuyên truyền, bồi dưỡng nghiệp vụ PCCC và CNCH. c) Chỉ đạo Đội PCCC và CNCH cơ sở tại Trụ sở cơ quan Bộ Tư pháp xây dựng, trình cấp có thẩm quyền phê duyệt và tổ chức thực tập phương án PCCC, phương án CNCH. d) Chỉ đạo Đội PCCC và CNCH cơ sở tại Trụ sở cơ quan Bộ Tư pháp quản lý các trang thiết bị PCCC và CNCH. đ) Chỉ đạo chữa cháy, CNCH khi xảy ra cháy, sự cố, tai nạn tại Trụ sở cơ quan Bộ Tư pháp. e) Chỉ đạo việc tổ chức lập và lưu giữ hồ sơ quản lý, theo dõi hoạt động PCCC, CNCH tại Trụ sở cơ quan Bộ Tư pháp. g) Chỉ đạo việc sơ kết, tổng kết các hoạt động về PCCC và CNCH của cơ quan; kiểm tra, đôn đốc việc chấp hành các quy định về PCCC và CNCH. h) Đề xuất việc khen thưởng, kỷ luật các tập thể, cá nhân trong việc thực hiện công tác PCCC, CNCH. i) Chỉ đạo Đội PCCC và CNCH cơ sở dự trù kinh phí cho các hoạt động PCCC và CNCH tại Trụ sở cơ quan Bộ Tư pháp. k) Thực hiện các nhiệm vụ khác do Bộ trưởng giao và theo quy định của pháp luật.' - 'Mức hưởng chế độ thai sản ... b) Mức hưởng một ngày đối với trường hợp quy định tại Điều 32 và khoản 2 Điều 34 của Luật này được tính bằng mức hưởng chế độ thai sản theo tháng chia cho 24 ngày.' - source_sentence: Doanh nghiệp được áp dụng chế độ ưu tiên không cung cấp báo cáo kiểm toán đúng thời hạn bị phạt bao nhiêu tiền? sentences: - 'Thay đổi Thẩm phán, Hội thẩm 1. Thẩm phán, Hội thẩm phải từ chối tham gia xét xử hoặc bị thay đổi khi thuộc một trong các trường hợp: a) Trường hợp quy định tại Điều 49 của Bộ luật này; b) Họ cùng trong một Hội đồng xét xử và là người thân thích với nhau; c) Đã tham gia xét xử sơ thẩm hoặc phúc thẩm hoặc tiến hành tố tụng vụ án đó với tư cách là Điều tra viên, Cán bộ điều tra, Kiểm sát viên, Kiểm tra viên, Thẩm tra viên, Thư ký Tòa án. 2. Việc thay đổi Thẩm phán, Hội thẩm trước khi mở phiên tòa do Chánh án hoặc Phó Chánh án Tòa án được phân công giải quyết vụ án quyết định. Thẩm phán bị thay đổi là Chánh án Tòa án thì do Chánh án Tòa án trên một cấp quyết định. Việc thay đổi Thẩm phán, Hội thẩm tại phiên tòa do Hội đồng xét xử quyết định trước khi bắt đầu xét hỏi bằng cách biểu quyết tại phòng nghị án. Khi xem xét thay đổi thành viên nào thì thành viên đó được trình bày ý kiến của mình, Hội đồng quyết định theo đa số. Trường hợp phải thay đổi Thẩm phán, Hội thẩm tại phiên tòa thì Hội đồng xét xử ra quyết định hoãn phiên tòa.' - '“Điều 21. Chấm dứt hưởng trợ cấp thất nghiệp 1. Các trường hợp người lao động đang hưởng trợ cấp thất nghiệp bị chấm dứt hưởng trợ cấp thất nghiệp được quy định như sau: e) Trong thời gian hưởng trợ cấp thất nghiệp, 03 tháng liên tục không thực hiện thông báo hằng tháng về việc tìm kiếm việc làm với trung tâm dịch vụ việc làm theo quy định Ngày mà người lao động được xác định bị chấm dứt hưởng trợ cấp thất nghiệp là ngày kết thúc của thời hạn thông báo tìm kiếm việc làm của tháng thứ 3 liên tục mà người lao động không thực hiện thông báo hằng tháng về việc tìm kiếm việc làm."' - 'Vi phạm quy định về thời hạn làm thủ tục hải quan, nộp hồ sơ thuế ... 2. Phạt tiền từ 1.000.000 đồng đến 2.000.000 đồng đối với hành vi không thực hiện đúng thời hạn quy định thuộc một trong các trường hợp sau: a) Cung cấp báo cáo kiểm toán, báo cáo tài chính của doanh nghiệp được áp dụng chế độ ưu tiên; b) Thông báo cho cơ quan hải quan quyết định xử lý vi phạm pháp luật về quản lý thuế, kế toán đối với doanh nghiệp được áp dụng chế độ ưu tiên; c) Báo cáo về lượng hàng hóa nhập khẩu phục vụ xây dựng nhà xưởng, hàng hóa gửi kho bên ngoài của doanh nghiệp chế xuất; d) Báo cáo về lượng hàng hóa trung chuyển đưa vào, đưa ra, còn lưu tại cảng; đ) Báo cáo thống kê thông quan hàng bưu chính đưa vào Việt Nam để chuyển tiếp đi quốc tế. ...' - source_sentence: Tài chính của Hội Kiểm toán viên hành nghề Việt Nam được chi cho những khoản nào? sentences: - 'Giải thể và xử lý tài chính khi giải thể 1. Khi xét thấy hoạt động của Hội không có hiệu quả, không mang lại lợi ích cho Hội viên hoặc gây phiền hà, cản trở cho Hội viên thì BCH Hội quyết định triệu tập Đại hội để bàn biện pháp củng cố tổ chức hoặc giải thể Hội. Nếu giải thể Hội thì do Đại hội đại biểu hoặc Đại hội toàn quốc của Hội thông qua và đề nghị cơ quan Nhà nước có thẩm quyền xem xét, quyết định. 2. Khi Hội bị giải thể, Ban Thường trực và Ban Kiểm tra của Hội phải tiến hành kiểm kê tài sản, kiểm quỹ và báo cáo BCH Hội quyết định việc xử lý tài sản, tiền tồn quỹ và tiến hành thủ tục giải thể theo quy định của pháp luật.' - '"Điều 14. Miễn trừ đối với thỏa thuận hạn chế cạnh tranh bị cấm 1. Thỏa thuận hạn chế cạnh tranh quy định tại các khoản 1, 2, 3, 7, 8, 9, 10 và 11 Điều 11 bị cấm theo quy định tại Điều 12 của Luật này được miễn trừ có thời hạn nếu có lợi cho người tiêu dùng và đáp ứng một trong các điều kiện sau đây: a) Tác động thúc đẩy tiến bộ kỹ thuật, công nghệ, nâng cao chất lượng hàng hóa, dịch vụ; b) Tăng cường sức cạnh tranh của doanh nghiệp Việt Nam trên thị trường quốc tế; c) Thúc đẩy việc áp dụng thống nhất tiêu chuẩn chất lượng, định mức kỹ thuật của chủng loại sản phẩm; d) Thống nhất các điều kiện thực hiện hợp đồng, giao hàng, thanh toán nhưng không liên quan đến giá và các yếu tố của giá. 2. Thỏa thuận lao động, thỏa thuận hợp tác trong các ngành, lĩnh vực đặc thù được thực hiện theo quy định của luật khác thì thực hiện theo quy định của luật đó".' - '"Điều 2. Sửa đổi, bổ sung một số điều của Nghị định số 15/2019/NĐ-CP ngày 01 tháng 02 năm 2019 của Chính phủ quy định chi tiết một số điều và biện pháp thi hành Luật Giáo dục nghề nghiệp ... 12. Sửa đổi, bổ sung Điều 24 như sau: Điều 24. Thẩm quyền cấp giấy chứng nhận đăng ký hoạt động liên kết đào tạo với nước ngoài 1. Tổng cục Giáo dục nghề nghiệp cấp giấy chứng nhận đăng ký hoạt động liên kết đào tạo với nước ngoài đối với trường cao đẳng. 2. Sở Lao động - Thương binh và Xã hội nơi trường trung cấp, trung tâm giáo dục nghề nghiệp, trung tâm giáo dục nghề nghiệp - giáo dục thường xuyên và doanh nghiệp tổ chức hoạt động liên kết đào tạo với nước ngoài cấp giấy chứng nhận đăng ký hoạt động liên kết đào tạo với nước ngoài đối với trường trung cấp, trung tâm giáo dục nghề nghiệp, trung tâm giáo dục nghề nghiệp - giáo dục thường xuyên và doanh nghiệp."' - source_sentence: NLĐ ký nhiều hợp đồng lao động thì đóng BHYT như thế nào? sentences: - 'Hồ sơ, thủ tục xác định trường hợp được bồi thường [...] 3. Trong thời hạn 05 ngày làm việc, kể từ ngày nhận được đơn và các giấy tờ hợp lệ, nếu xác định yêu cầu thuộc trách nhiệm giải quyết của mình thì Sở Y tế phải thụ lý và thông báo bằng văn bản về việc thụ lý đơn cho người bị thiệt hại hoặc thân nhân của người bị thiệt hại (sau đây gọi tắt là người bị thiệt hại). Trường hợp hồ sơ không đầy đủ thì Sở Y tế có văn bản hướng dẫn người bị thiệt hại bổ sung. 4. Trong thời hạn 15 ngày, kể từ ngày nhận được đơn yêu cầu của người bị thiệt hại, Sở Y tế phải hoàn thành việc xác định nguyên nhân gây tai biến, mức độ tổn thương và thông báo bằng văn bản cho người yêu cầu đồng thời báo cáo Bộ Y tế.' - 'Chuyển nhượng quyền thăm dò khoáng sản 1. Tổ chức, cá nhân nhận chuyển nhượng quyền thăm dò khoáng sản phải có đủ điều kiện để được cấp Giấy phép thăm dò khoáng sản theo quy định của Luật này. 2. Việc chuyển nhượng quyền thăm dò khoáng sản phải được cơ quan quản lý nhà nước có thẩm quyền cấp Giấy phép thăm dò khoáng sản chấp thuận; trường hợp được chấp thuận, tổ chức, cá nhân nhận chuyển nhượng quyền thăm dò khoáng sản được cấp Giấy phép thăm dò khoáng sản mới. 3. Tổ chức, cá nhân chuyển nhượng quyền thăm dò khoáng sản đã thực hiện được ít nhất 50% dự toán của đề án thăm dò khoáng sản. 4. Chính phủ quy định chi tiết việc chuyển nhượng quyền thăm dò khoáng sản.' - '"Sửa đổi, bổ sung một số điều của Luật bảo hiểm y tế: ... 6. Sửa đổi, bổ sung Điều 12 như sau: “Điều 12. Đối tượng tham gia bảo hiểm y tế 1. Nhóm do người lao động và người sử dụng lao động đóng, bao gồm: a) Người lao động làm việc theo hợp đồng lao động không xác định thời hạn, hợp đồng lao động có thời hạn từ đủ 3 tháng trở lên; người lao động là người quản lý doanh nghiệp hưởng tiền lương; cán bộ, công chức, viên chức (sau đây gọi chung là người lao động); b) Người hoạt động không chuyên trách ở xã, phường, thị trấn theo quy định của pháp luật.= ... 4. Nhóm được ngân sách nhà nước hỗ trợ mức đóng, bao gồm: a) Người thuộc hộ gia đình cận nghèo; b) Học sinh, sinh viên. 5. Nhóm tham gia bảo hiểm y tế theo hộ gia đình gồm những người thuộc hộ gia đình, trừ đối tượng quy định tại các khoản 1, 2, 3 và 4 Điều này. 6. Chính phủ quy định các đối tượng khác ngoài các đối tượng quy định tại các khoản 3, 4 và 5 Điều này; quy định việc cấp thẻ bảo hiểm y tế đối với đối tượng do Bộ Quốc phòng, Bộ Công an quản lý và đối tượng quy định tại điểm 1 khoản 3 Điều này; quy định lộ trình thực hiện bảo hiểm y tế, phạm vi quyền lợi, mức hưởng bảo hiểm y tế, khám bệnh, chữa bệnh bảo hiểm y tế, quản lý, sử dụng phần kinh phí dành cho khám bệnh, chữa bệnh bảo hiểm y tế, giám định bảo hiểm y tế, thanh toán, quyết toán bảo hiểm y tế đối với các đối tượng quy định tại điểm a khoản 3 Điều này.”' --- # SentenceTransformer based on VoVanPhuc/sup-SimCSE-VietNamese-phobert-base This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [VoVanPhuc/sup-SimCSE-VietNamese-phobert-base](https://huggingface.co/VoVanPhuc/sup-SimCSE-VietNamese-phobert-base) on the csv dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [VoVanPhuc/sup-SimCSE-VietNamese-phobert-base](https://huggingface.co/VoVanPhuc/sup-SimCSE-VietNamese-phobert-base) <!-- at revision 608779b86741a8acd8c8d38132974ff04086b138 --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity - **Training Dataset:** - csv <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Cloyne/SimCSE-finetuned-vietnamese-legal-documents") # Run inference sentences = [ 'NLĐ ký nhiều hợp đồng lao động thì đóng BHYT như thế nào?', '"Sửa đổi, bổ sung một số điều của Luật bảo hiểm y tế:\n...\n6. Sửa đổi, bổ sung Điều 12 như sau:\n“Điều 12. Đối tượng tham gia bảo hiểm y tế\n1. Nhóm do người lao động và người sử dụng lao động đóng, bao gồm:\na) Người lao động làm việc theo hợp đồng lao động không xác định thời hạn, hợp đồng lao động có thời hạn từ đủ 3 tháng trở lên; người lao động là người quản lý doanh nghiệp hưởng tiền lương; cán bộ, công chức, viên chức (sau đây gọi chung là người lao động);\nb) Người hoạt động không chuyên trách ở xã, phường, thị trấn theo quy định của pháp luật.=\n...\n4. Nhóm được ngân sách nhà nước hỗ trợ mức đóng, bao gồm:\na) Người thuộc hộ gia đình cận nghèo;\nb) Học sinh, sinh viên.\n5. Nhóm tham gia bảo hiểm y tế theo hộ gia đình gồm những người thuộc hộ gia đình, trừ đối tượng quy định tại các khoản 1, 2, 3 và 4 Điều này.\n6. Chính phủ quy định các đối tượng khác ngoài các đối tượng quy định tại các khoản 3, 4 và 5 Điều này; quy định việc cấp thẻ bảo hiểm y tế đối với đối tượng do Bộ Quốc phòng, Bộ Công an quản lý và đối tượng quy định tại điểm 1 khoản 3 Điều này; quy định lộ trình thực hiện bảo hiểm y tế, phạm vi quyền lợi, mức hưởng bảo hiểm y tế, khám bệnh, chữa bệnh bảo hiểm y tế, quản lý, sử dụng phần kinh phí dành cho khám bệnh, chữa bệnh bảo hiểm y tế, giám định bảo hiểm y tế, thanh toán, quyết toán bảo hiểm y tế đối với các đối tượng quy định tại điểm a khoản 3 Điều này.”', 'Hồ sơ, thủ tục xác định trường hợp được bồi thường\n[...]\n3. Trong thời hạn 05 ngày làm việc, kể từ ngày nhận được đơn và các giấy tờ hợp lệ, nếu xác định yêu cầu thuộc trách nhiệm giải quyết của mình thì Sở Y tế phải thụ lý và thông báo bằng văn bản về việc thụ lý đơn cho người bị thiệt hại hoặc thân nhân của người bị thiệt hại (sau đây gọi tắt là người bị thiệt hại). Trường hợp hồ sơ không đầy đủ thì Sở Y tế có văn bản hướng dẫn người bị thiệt hại bổ sung.\n4. Trong thời hạn 15 ngày, kể từ ngày nhận được đơn yêu cầu của người bị thiệt hại, Sở Y tế phải hoàn thành việc xác định nguyên nhân gây tai biến, mức độ tổn thương và thông báo bằng văn bản cho người yêu cầu đồng thời báo cáo Bộ Y tế.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### csv * Dataset: csv * Size: 120,210 training samples * Columns: <code>anchor</code> and <code>positive</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 25.08 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 206.98 tokens</li><li>max: 256 tokens</li></ul> | * Samples: | anchor | positive | |:--------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật được quy định thế nào?</code> | <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật<br>Trong phạm vi điều chỉnh của văn bản quy phạm pháp luật:<br>1. Xác định nội dung liên quan đến vấn đề bình đẳng giới hoặc vấn đề bất bình đẳng giới, phân biệt đối xử về giới.<br>2. Quy định các biện pháp cần thiết để thực hiện bình đẳng giới hoặc để giải quyết vấn đề bất bình đẳng giới, phân biệt đối xử về giới; dự báo tác động của các quy định đó đối với nam và nữ sau khi được ban hành.<br>3. Xác định nguồn nhân lực, tài chính cần thiết để triển khai các biện pháp thực hiện bình đẳng giới hoặc để giải quyết vấn đề bất bình đẳng giới, phân biệt đối xử về giới.</code> | | <code>Điều kiện để giáo viên trong cơ sở giáo dục mầm non, tiểu học ngoài công lập bị ảnh hưởng bởi Covid-19 được hưởng chính sách hỗ trợ là gì?</code> | <code>Điều kiện được hưởng<br>Cán bộ quản lý, giáo viên, nhân viên được hưởng chính sách khi bảo đảm các điều kiện sau:<br>1. Là người đang làm việc tại cơ sở giáo dục ngoài công lập trước khi cơ sở phải tạm dừng hoạt động theo yêu cầu của cơ quan nhà nước có thẩm quyền để phòng, chống dịch COVID-19 tính từ ngày 01 tháng 5 năm 2021 đến hết ngày 31 tháng 12 năm 2021.<br>2. Nghỉ việc không hưởng lương từ 01 tháng trở lên tính từ ngày 01 tháng 5 năm 2021 đến hết ngày 31 tháng 12 năm 2021.<br>3. Chưa được hưởng chính sách hỗ trợ đối với người lao động tạm hoãn hợp đồng lao động, nghỉ việc không hưởng lương theo quy định tại khoản 4, khoản 5, khoản 6 Mục II Nghị quyết số 68/NQ-CP ngày 01 tháng 7 năm 2021 của Chính phủ về một số chính sách hỗ trợ người lao động và người sử dụng lao động gặp khó khăn do đại dịch COVID-19, Nghị quyết số 126/NQ-CP ngày 08 tháng 10 năm 2021 của Chính phủ sửa đổi, bổ sung Nghị quyết số 68/NQ-CP ngày 01 tháng 7 năm 2021 của Chính phủ về một số chính sách hỗ trợ người lao động và người sử dụng lao động gặp khó khăn do đại dịch COVID-19 (sau đây gọi tắt là Nghị quyết số 68/NQ-CP) do không tham gia Bảo hiểm xã hội bắt buộc.<br>4. Có xác nhận làm việc tại cơ sở giáo dục ngoài công lập ít nhất hết năm học 2021 - 2022 theo kế hoạch năm học của địa phương, bao gồm cơ sở giáo dục ngoài công lập đã làm việc trước đây hoặc cơ sở giáo dục ngoài công lập khác trong trường hợp cơ sở giáo dục ngoài công lập trước đây làm việc không hoạt động trở lại.</code> | | <code>Nguyên tắc áp dụng phụ cấp ưu đãi nghề y tế thế nào?</code> | <code>Nguyên tắc áp dụng<br>1. Trường hợp công chức, viên chức chuyên môn y tế thuộc đối tượng được hưởng các mức phụ cấp ưu đãi theo nghề khác nhau thì được hưởng một mức phụ cấp ưu đãi theo nghề cao nhất.<br>2. Công chức, viên chức đã hưởng phụ cấp ưu đãi theo nghề quy định tại Thông tư liên tịch số 06/2010/TTLT-BYT-BNV-BTC ngày 22/3/2010 của Bộ Y tế, Bộ Nội vụ, Bộ Tài chính hướng dẫn thực hiện Nghị định số 64/2009/NĐ-CP ngày 30/7/2009 của Chính phủ về chính sách đối với cán bộ, viên chức y tế công tác ở vùng có điều kiện kinh tế - xã hội đặc biệt khó khăn thì không hưởng phụ cấp ưu đãi theo nghề quy định tại Thông tư liên tịch này.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Evaluation Dataset #### train * Dataset: train * Size: 13,357 evaluation samples * Columns: <code>anchor</code> and <code>positive</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 24.61 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 202.71 tokens</li><li>max: 256 tokens</li></ul> | * Samples: | anchor | positive | |:-------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Toà án cấp nào có thẩm quyền giải quyết việc đòi tài sản đã cho người khác vay theo hợp đồng cho vay?</code> | <code>"Điều 35. Thẩm quyền của Tòa án nhân dân cấp huyện<br>1. Tòa án nhân dân cấp huyện có thẩm quyền giải quyết theo thủ tục sơ thẩm những tranh chấp sau đây:<br>a) Tranh chấp về dân sự, hôn nhân và gia đình quy định tại Điều 26 và Điều 28 của Bộ luật này, trừ tranh chấp quy định tại khoản 7 Điều 26 của Bộ luật này;<br>b) Tranh chấp về kinh doanh, thương mại quy định tại khoản 1 Điều 30 của Bộ luật này;<br>c) Tranh chấp về lao động quy định tại Điều 32 của Bộ luật này.<br>2. Tòa án nhân dân cấp huyện có thẩm quyền giải quyết những yêu cầu sau đây:<br>a) Yêu cầu về dân sự quy định tại các khoản 1, 2, 3, 4, 6, 7, 8, 9 và 10 Điều 27 của Bộ luật này;<br>b) Yêu cầu về hôn nhân và gia đình quy định tại các khoản 1, 2, 3, 4, 5, 6, 7, 8, 10 và 11 Điều 29 của Bộ luật này;<br>c) Yêu cầu về kinh doanh, thương mại quy định tại khoản 1 và khoản 6 Điều 31 của Bộ luật này;<br>d) Yêu cầu về lao động quy định tại khoản 1 và khoản 5 Điều 33 của Bộ luật này.<br>3. Những tranh chấp, yêu cầu quy định tại khoản 1 và khoản 2 Điều này mà có đương sự hoặc tài sản ở nước ngoài hoặc cần phải ủy thác tư pháp cho cơ quan đại diện nước Cộng hòa xã hội chủ nghĩa Việt Nam ở nước ngoài, cho Tòa án, cơ quan có thẩm quyền của nước ngoài không thuộc thẩm quyền giải quyết của Tòa án nhân dân cấp huyện, trừ trường hợp quy định tại khoản 4 Điều này.<br>4. Tòa án nhân dân cấp huyện nơi cư trú của công dân Việt Nam hủy việc kết hôn trái pháp luật, giải quyết việc ly hôn, các tranh chấp về quyền và nghĩa vụ của vợ chồng, cha mẹ và con, về nhận cha, mẹ, con, nuôi con nuôi và giám hộ giữa công dân Việt Nam cư trú ở khu vực biên giới với công dân của nước láng giềng cùng cư trú ở khu vực biên giới với Việt Nam theo quy định của Bộ luật này và các quy định khác của pháp luật Việt Nam."</code> | | <code>Những phiếu bầu nào được xem là không hợp lệ?</code> | <code>Phiếu bầu không hợp lệ<br>1. Những phiếu bầu sau đây là phiếu bầu không hợp lệ:<br>a) Phiếu không theo mẫu quy định do Tổ bầu cử phát ra;<br>b) Phiếu không có dấu của Tổ bầu cử;<br>c) Phiếu để số người được bầu nhiều hơn số lượng đại biểu được bầu đã ấn định cho đơn vị bầu cử;<br>d) Phiếu gạch xóa hết tên những người ứng cử;<br>đ) Phiếu ghi thêm tên người ngoài danh sách những người ứng cử hoặc phiếu có ghi thêm nội dung khác.<br>2. Trường hợp có phiếu bầu được cho là không hợp lệ thì Tổ trường Tổ bầu cử đưa ra để toàn Tổ xem xét, quyết định. Tổ bầu cử không được gạch xóa hoặc sửa các tên ghi trên phiếu bầu.</code> | | <code>Đề nghị tạm đình chỉ chấp hành quyết định áp dụng biện pháp đưa vào trường giáo dưỡng cho học sinh cần đảm bảo nguyên tắc gì?</code> | <code>Nguyên tắc xét duyệt, đề nghị giảm thời hạn, tạm đình chỉ chấp hành quyết định, miễn chấp hành phần thời gian còn lại cho học sinh trường giáo dưỡng, trại viên cơ sở giáo dục bắt buộc<br>1. Tuân thủ quy định của pháp luật về thi hành biện pháp xử lý hành chính đưa vào trường giáo dưỡng, cơ sở giáo dục bắt buộc, quy định tại Thông tư này và quy định của pháp luật có liên quan.<br>2. Bảo đảm khách quan, công khai, minh bạch, đúng trình tự, thủ tục, thẩm quyền; tôn trọng và bảo vệ quyền, lợi ích hợp pháp của học sinh trường giáo dưỡng, trại viên cơ sở giáo dục bắt buộc.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 32 - `num_train_epochs`: 4 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 4 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | train loss | |:------:|:-----:|:-------------:|:----------:| | 0.0665 | 500 | 0.2809 | 0.2215 | | 0.1331 | 1000 | 0.1307 | 0.1547 | | 0.1996 | 1500 | 0.0978 | 0.1366 | | 0.2662 | 2000 | 0.1054 | 0.1221 | | 0.3327 | 2500 | 0.0824 | 0.1215 | | 0.3993 | 3000 | 0.0776 | 0.1223 | | 0.4658 | 3500 | 0.0797 | 0.1161 | | 0.5323 | 4000 | 0.0774 | 0.1070 | | 0.5989 | 4500 | 0.0661 | 0.1007 | | 0.6654 | 5000 | 0.059 | 0.0945 | | 0.7320 | 5500 | 0.0674 | 0.0889 | | 0.7985 | 6000 | 0.0495 | 0.0783 | | 0.8651 | 6500 | 0.0587 | 0.0822 | | 0.9316 | 7000 | 0.0585 | 0.0868 | | 0.9981 | 7500 | 0.0482 | 0.0733 | | 1.0647 | 8000 | 0.0459 | 0.0786 | | 1.1312 | 8500 | 0.0487 | 0.0691 | | 1.1978 | 9000 | 0.0335 | 0.0719 | | 1.2643 | 9500 | 0.0365 | 0.0711 | | 1.3308 | 10000 | 0.0279 | 0.0668 | | 1.3974 | 10500 | 0.0235 | 0.0675 | | 1.4639 | 11000 | 0.0206 | 0.0599 | | 1.5305 | 11500 | 0.0175 | 0.0653 | | 1.5970 | 12000 | 0.0144 | 0.0664 | | 1.6636 | 12500 | 0.0167 | 0.0598 | | 1.7301 | 13000 | 0.0173 | 0.0583 | | 1.7966 | 13500 | 0.0127 | 0.0540 | | 1.8632 | 14000 | 0.0164 | 0.0595 | | 1.9297 | 14500 | 0.014 | 0.0552 | | 1.9963 | 15000 | 0.0114 | 0.0535 | | 2.0628 | 15500 | 0.0097 | 0.0552 | | 2.1294 | 16000 | 0.0111 | 0.0549 | | 2.1959 | 16500 | 0.0076 | 0.0544 | | 2.2624 | 17000 | 0.009 | 0.0589 | | 2.3290 | 17500 | 0.0084 | 0.0543 | | 2.3955 | 18000 | 0.0049 | 0.0520 | | 2.4621 | 18500 | 0.0068 | 0.0505 | | 2.5286 | 19000 | 0.0037 | 0.0489 | | 2.5952 | 19500 | 0.0031 | 0.0461 | | 2.6617 | 20000 | 0.0041 | 0.0496 | | 2.7282 | 20500 | 0.0051 | 0.0464 | | 2.7948 | 21000 | 0.0029 | 0.0475 | | 2.8613 | 21500 | 0.0032 | 0.0458 | | 2.9279 | 22000 | 0.003 | 0.0449 | | 2.9944 | 22500 | 0.0035 | 0.0458 | | 3.0610 | 23000 | 0.0033 | 0.0443 | | 3.1275 | 23500 | 0.0032 | 0.0416 | | 3.1940 | 24000 | 0.002 | 0.0449 | | 3.2606 | 24500 | 0.0022 | 0.0447 | | 3.3271 | 25000 | 0.0017 | 0.0430 | | 3.3937 | 25500 | 0.002 | 0.0418 | | 3.4602 | 26000 | 0.0019 | 0.0415 | | 3.5268 | 26500 | 0.0008 | 0.0406 | | 3.5933 | 27000 | 0.0007 | 0.0414 | | 3.6598 | 27500 | 0.0008 | 0.0416 | | 3.7264 | 28000 | 0.0011 | 0.0418 | | 3.7929 | 28500 | 0.0006 | 0.0416 | | 3.8595 | 29000 | 0.0005 | 0.0417 | | 3.9260 | 29500 | 0.0007 | 0.0413 | | 3.9925 | 30000 | 0.0008 | 0.0412 | ### Framework Versions - Python: 3.10.14 - Sentence Transformers: 3.2.1 - Transformers: 4.45.1 - PyTorch: 2.4.0 - Accelerate: 0.34.2 - Datasets: 3.0.1 - Tokenizers: 0.20.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION" ]
[ "CHIA" ]
Non_BioNLP
twadada/nmc-cls-nopca
twadada
null
[ "mteb", "model-index", "region:us" ]
1,725
1,725
0
0
--- tags: - mteb model-index: - name: nomic_classification_noPCA results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: None config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 69.5223880597015 - type: ap value: 32.188029961997856 - type: f1 value: 63.507035721662255 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: None config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 69.82842500000001 - type: ap value: 65.05751252754372 - type: f1 value: 69.20990625365141 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: None config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 36.588 - type: f1 value: 35.418251249983605 - task: type: Retrieval dataset: name: MTEB ArguAna type: None config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 14.793999999999999 - type: map_at_10 value: 25.576999999999998 - type: map_at_100 value: 26.693 - type: map_at_1000 value: 26.756 - type: map_at_3 value: 22.451 - type: map_at_5 value: 24.093999999999998 - type: mrr_at_1 value: 15.22 - type: mrr_at_10 value: 25.743 - type: mrr_at_100 value: 26.86 - type: mrr_at_1000 value: 26.922 - type: mrr_at_3 value: 22.582 - type: mrr_at_5 value: 24.275 - type: ndcg_at_1 value: 14.793999999999999 - type: ndcg_at_10 value: 31.554 - type: ndcg_at_100 value: 37.367 - type: ndcg_at_1000 value: 39.156 - type: ndcg_at_3 value: 25.044 - type: ndcg_at_5 value: 28.019 - type: precision_at_1 value: 14.793999999999999 - type: precision_at_10 value: 5.064 - type: precision_at_100 value: 0.787 - type: precision_at_1000 value: 0.093 - type: precision_at_3 value: 10.857999999999999 - type: precision_at_5 value: 7.965999999999999 - type: recall_at_1 value: 14.793999999999999 - type: recall_at_10 value: 50.63999999999999 - type: recall_at_100 value: 78.73400000000001 - type: recall_at_1000 value: 93.172 - type: recall_at_3 value: 32.574999999999996 - type: recall_at_5 value: 39.829 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: None config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 30.971489779907653 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: None config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 21.568018798597137 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: None config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 49.98416553839047 - type: mrr value: 64.6418678274634 - task: type: STS dataset: name: MTEB BIOSSES type: None config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 49.466850107369204 - type: cos_sim_spearman value: 57.07419204044854 - type: euclidean_pearson value: 52.9281249064291 - type: euclidean_spearman value: 57.07419204044854 - type: manhattan_pearson value: 53.02817511738712 - type: manhattan_spearman value: 57.26395540047999 - task: type: Classification dataset: name: MTEB Banking77Classification type: None config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 62.948051948051955 - type: f1 value: 61.58649884446328 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: None config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 29.961219232394214 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: None config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 20.82397852889056 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: None config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: map_at_1 value: 19.189999999999998 - type: map_at_10 value: 25.446999999999996 - type: map_at_100 value: 26.392 - type: map_at_1000 value: 26.548 - type: map_at_3 value: 23.368 - type: map_at_5 value: 24.65 - type: mrr_at_1 value: 24.464 - type: mrr_at_10 value: 30.43 - type: mrr_at_100 value: 31.217 - type: mrr_at_1000 value: 31.313999999999997 - type: mrr_at_3 value: 28.732000000000003 - type: mrr_at_5 value: 29.797 - type: ndcg_at_1 value: 24.464 - type: ndcg_at_10 value: 29.677 - type: ndcg_at_100 value: 34.116 - type: ndcg_at_1000 value: 37.489 - type: ndcg_at_3 value: 26.755000000000003 - type: ndcg_at_5 value: 28.249999999999996 - type: precision_at_1 value: 24.464 - type: precision_at_10 value: 5.5649999999999995 - type: precision_at_100 value: 0.979 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 12.876000000000001 - type: precision_at_5 value: 9.442 - type: recall_at_1 value: 19.189999999999998 - type: recall_at_10 value: 37.124 - type: recall_at_100 value: 56.796 - type: recall_at_1000 value: 79.963 - type: recall_at_3 value: 27.982000000000003 - type: recall_at_5 value: 32.289 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval type: None config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: map_at_1 value: 14.969 - type: map_at_10 value: 20.576 - type: map_at_100 value: 21.414 - type: map_at_1000 value: 21.523999999999997 - type: map_at_3 value: 18.709 - type: map_at_5 value: 19.723 - type: mrr_at_1 value: 19.363 - type: mrr_at_10 value: 24.97 - type: mrr_at_100 value: 25.688 - type: mrr_at_1000 value: 25.759 - type: mrr_at_3 value: 23.067999999999998 - type: mrr_at_5 value: 24.105999999999998 - type: ndcg_at_1 value: 19.363 - type: ndcg_at_10 value: 24.465999999999998 - type: ndcg_at_100 value: 28.308 - type: ndcg_at_1000 value: 30.989 - type: ndcg_at_3 value: 21.285999999999998 - type: ndcg_at_5 value: 22.668 - type: precision_at_1 value: 19.363 - type: precision_at_10 value: 4.713 - type: precision_at_100 value: 0.8170000000000001 - type: precision_at_1000 value: 0.128 - type: precision_at_3 value: 10.34 - type: precision_at_5 value: 7.478 - type: recall_at_1 value: 14.969 - type: recall_at_10 value: 31.616 - type: recall_at_100 value: 48.775 - type: recall_at_1000 value: 67.186 - type: recall_at_3 value: 22.386 - type: recall_at_5 value: 26.141 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval type: None config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 23.652 - type: map_at_10 value: 31.282 - type: map_at_100 value: 32.265 - type: map_at_1000 value: 32.362 - type: map_at_3 value: 28.918 - type: map_at_5 value: 30.227999999999998 - type: mrr_at_1 value: 27.461000000000002 - type: mrr_at_10 value: 34.54 - type: mrr_at_100 value: 35.394999999999996 - type: mrr_at_1000 value: 35.458 - type: mrr_at_3 value: 32.340999999999994 - type: mrr_at_5 value: 33.642 - type: ndcg_at_1 value: 27.461000000000002 - type: ndcg_at_10 value: 35.715 - type: ndcg_at_100 value: 40.328 - type: ndcg_at_1000 value: 42.724000000000004 - type: ndcg_at_3 value: 31.329 - type: ndcg_at_5 value: 33.451 - type: precision_at_1 value: 27.461000000000002 - type: precision_at_10 value: 5.806 - type: precision_at_100 value: 0.8789999999999999 - type: precision_at_1000 value: 0.116 - type: precision_at_3 value: 13.980999999999998 - type: precision_at_5 value: 9.793000000000001 - type: recall_at_1 value: 23.652 - type: recall_at_10 value: 46.078 - type: recall_at_100 value: 66.542 - type: recall_at_1000 value: 84.24199999999999 - type: recall_at_3 value: 34.237 - type: recall_at_5 value: 39.469 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval type: None config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: map_at_1 value: 8.362 - type: map_at_10 value: 11.64 - type: map_at_100 value: 12.299 - type: map_at_1000 value: 12.388 - type: map_at_3 value: 10.452 - type: map_at_5 value: 11.04 - type: mrr_at_1 value: 9.04 - type: mrr_at_10 value: 12.45 - type: mrr_at_100 value: 13.129 - type: mrr_at_1000 value: 13.211999999999998 - type: mrr_at_3 value: 11.243 - type: mrr_at_5 value: 11.825 - type: ndcg_at_1 value: 9.04 - type: ndcg_at_10 value: 13.821 - type: ndcg_at_100 value: 17.593 - type: ndcg_at_1000 value: 20.468 - type: ndcg_at_3 value: 11.399 - type: ndcg_at_5 value: 12.392 - type: precision_at_1 value: 9.04 - type: precision_at_10 value: 2.282 - type: precision_at_100 value: 0.445 - type: precision_at_1000 value: 0.073 - type: precision_at_3 value: 4.896 - type: precision_at_5 value: 3.5709999999999997 - type: recall_at_1 value: 8.362 - type: recall_at_10 value: 19.843 - type: recall_at_100 value: 38.153 - type: recall_at_1000 value: 61.06700000000001 - type: recall_at_3 value: 13.296 - type: recall_at_5 value: 15.565000000000001 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval type: None config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: map_at_1 value: 5.335 - type: map_at_10 value: 8.286 - type: map_at_100 value: 8.969000000000001 - type: map_at_1000 value: 9.065 - type: map_at_3 value: 7.319000000000001 - type: map_at_5 value: 7.8020000000000005 - type: mrr_at_1 value: 6.965000000000001 - type: mrr_at_10 value: 10.523 - type: mrr_at_100 value: 11.244 - type: mrr_at_1000 value: 11.326 - type: mrr_at_3 value: 9.370000000000001 - type: mrr_at_5 value: 9.954 - type: ndcg_at_1 value: 6.965000000000001 - type: ndcg_at_10 value: 10.491 - type: ndcg_at_100 value: 14.155000000000001 - type: ndcg_at_1000 value: 16.853 - type: ndcg_at_3 value: 8.476 - type: ndcg_at_5 value: 9.31 - type: precision_at_1 value: 6.965000000000001 - type: precision_at_10 value: 2.027 - type: precision_at_100 value: 0.44999999999999996 - type: precision_at_1000 value: 0.077 - type: precision_at_3 value: 4.146 - type: precision_at_5 value: 3.06 - type: recall_at_1 value: 5.335 - type: recall_at_10 value: 15.195 - type: recall_at_100 value: 31.796999999999997 - type: recall_at_1000 value: 51.55800000000001 - type: recall_at_3 value: 9.623 - type: recall_at_5 value: 11.667 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval type: None config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: map_at_1 value: 13.664000000000001 - type: map_at_10 value: 18.562 - type: map_at_100 value: 19.586000000000002 - type: map_at_1000 value: 19.716 - type: map_at_3 value: 16.705000000000002 - type: map_at_5 value: 17.733999999999998 - type: mrr_at_1 value: 17.421 - type: mrr_at_10 value: 22.729 - type: mrr_at_100 value: 23.626 - type: mrr_at_1000 value: 23.708000000000002 - type: mrr_at_3 value: 20.645 - type: mrr_at_5 value: 21.770999999999997 - type: ndcg_at_1 value: 17.421 - type: ndcg_at_10 value: 22.342000000000002 - type: ndcg_at_100 value: 27.556000000000004 - type: ndcg_at_1000 value: 30.805 - type: ndcg_at_3 value: 18.962 - type: ndcg_at_5 value: 20.469 - type: precision_at_1 value: 17.421 - type: precision_at_10 value: 4.2540000000000004 - type: precision_at_100 value: 0.831 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 9.014999999999999 - type: precision_at_5 value: 6.564 - type: recall_at_1 value: 13.664000000000001 - type: recall_at_10 value: 29.945 - type: recall_at_100 value: 53.376999999999995 - type: recall_at_1000 value: 76.566 - type: recall_at_3 value: 20.183999999999997 - type: recall_at_5 value: 24.331 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval type: None config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: map_at_1 value: 10.358 - type: map_at_10 value: 14.518 - type: map_at_100 value: 15.289 - type: map_at_1000 value: 15.418999999999999 - type: map_at_3 value: 12.818 - type: map_at_5 value: 13.750000000000002 - type: mrr_at_1 value: 12.9 - type: mrr_at_10 value: 17.366 - type: mrr_at_100 value: 18.116 - type: mrr_at_1000 value: 18.211 - type: mrr_at_3 value: 15.772 - type: mrr_at_5 value: 16.611 - type: ndcg_at_1 value: 12.9 - type: ndcg_at_10 value: 17.554 - type: ndcg_at_100 value: 21.823 - type: ndcg_at_1000 value: 25.258000000000003 - type: ndcg_at_3 value: 14.533 - type: ndcg_at_5 value: 15.885 - type: precision_at_1 value: 12.9 - type: precision_at_10 value: 3.39 - type: precision_at_100 value: 0.659 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 6.8870000000000005 - type: precision_at_5 value: 5.137 - type: recall_at_1 value: 10.358 - type: recall_at_10 value: 23.904 - type: recall_at_100 value: 43.437 - type: recall_at_1000 value: 68.142 - type: recall_at_3 value: 15.834999999999999 - type: recall_at_5 value: 19.201999999999998 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: mteb/cqadupstack config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 11.687750000000001 - type: map_at_10 value: 16.009916666666665 - type: map_at_100 value: 16.77191666666667 - type: map_at_1000 value: 16.886333333333333 - type: map_at_3 value: 14.501916666666665 - type: map_at_5 value: 15.308416666666663 - type: mrr_at_1 value: 14.372416666666668 - type: mrr_at_10 value: 18.827499999999997 - type: mrr_at_100 value: 19.551416666666668 - type: mrr_at_1000 value: 19.637999999999998 - type: mrr_at_3 value: 17.313166666666667 - type: mrr_at_5 value: 18.12 - type: ndcg_at_1 value: 14.372416666666668 - type: ndcg_at_10 value: 19.01425 - type: ndcg_at_100 value: 22.99116666666667 - type: ndcg_at_1000 value: 26.01925 - type: ndcg_at_3 value: 16.268333333333334 - type: ndcg_at_5 value: 17.459666666666664 - type: precision_at_1 value: 14.372416666666668 - type: precision_at_10 value: 3.445166666666666 - type: precision_at_100 value: 0.6432499999999998 - type: precision_at_1000 value: 0.10583333333333335 - type: precision_at_3 value: 7.581499999999999 - type: precision_at_5 value: 5.4688333333333325 - type: recall_at_1 value: 11.687750000000001 - type: recall_at_10 value: 25.38433333333333 - type: recall_at_100 value: 43.76141666666666 - type: recall_at_1000 value: 65.92333333333332 - type: recall_at_3 value: 17.578916666666665 - type: recall_at_5 value: 20.69425 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval type: None config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: map_at_1 value: 7.733 - type: map_at_10 value: 11.264000000000001 - type: map_at_100 value: 11.866 - type: map_at_1000 value: 11.953999999999999 - type: map_at_3 value: 9.707 - type: map_at_5 value: 10.632 - type: mrr_at_1 value: 9.663 - type: mrr_at_10 value: 13.394 - type: mrr_at_100 value: 13.991000000000001 - type: mrr_at_1000 value: 14.069999999999999 - type: mrr_at_3 value: 11.834999999999999 - type: mrr_at_5 value: 12.709999999999999 - type: ndcg_at_1 value: 9.663 - type: ndcg_at_10 value: 13.956 - type: ndcg_at_100 value: 17.143 - type: ndcg_at_1000 value: 19.741 - type: ndcg_at_3 value: 10.989 - type: ndcg_at_5 value: 12.437 - type: precision_at_1 value: 9.663 - type: precision_at_10 value: 2.561 - type: precision_at_100 value: 0.45399999999999996 - type: precision_at_1000 value: 0.074 - type: precision_at_3 value: 5.164 - type: precision_at_5 value: 3.8960000000000004 - type: recall_at_1 value: 7.733 - type: recall_at_10 value: 20.479 - type: recall_at_100 value: 35.349000000000004 - type: recall_at_1000 value: 55.38999999999999 - type: recall_at_3 value: 12.044 - type: recall_at_5 value: 15.831999999999999 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval type: None config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 5.382 - type: map_at_10 value: 8.177 - type: map_at_100 value: 8.715 - type: map_at_1000 value: 8.827 - type: map_at_3 value: 7.196 - type: map_at_5 value: 7.727 - type: mrr_at_1 value: 6.917 - type: mrr_at_10 value: 10.183 - type: mrr_at_100 value: 10.744 - type: mrr_at_1000 value: 10.841000000000001 - type: mrr_at_3 value: 9.119 - type: mrr_at_5 value: 9.654 - type: ndcg_at_1 value: 6.917 - type: ndcg_at_10 value: 10.203 - type: ndcg_at_100 value: 13.214 - type: ndcg_at_1000 value: 16.413 - type: ndcg_at_3 value: 8.338 - type: ndcg_at_5 value: 9.156 - type: precision_at_1 value: 6.917 - type: precision_at_10 value: 2.003 - type: precision_at_100 value: 0.418 - type: precision_at_1000 value: 0.083 - type: precision_at_3 value: 4.118 - type: precision_at_5 value: 3.09 - type: recall_at_1 value: 5.382 - type: recall_at_10 value: 14.432 - type: recall_at_100 value: 28.605000000000004 - type: recall_at_1000 value: 52.38699999999999 - type: recall_at_3 value: 9.251 - type: recall_at_5 value: 11.339 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval type: None config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: map_at_1 value: 11.106 - type: map_at_10 value: 14.943999999999999 - type: map_at_100 value: 15.586 - type: map_at_1000 value: 15.690999999999999 - type: map_at_3 value: 13.722999999999999 - type: map_at_5 value: 14.45 - type: mrr_at_1 value: 13.433 - type: mrr_at_10 value: 17.608999999999998 - type: mrr_at_100 value: 18.297 - type: mrr_at_1000 value: 18.393 - type: mrr_at_3 value: 16.247 - type: mrr_at_5 value: 17.063 - type: ndcg_at_1 value: 13.433 - type: ndcg_at_10 value: 17.563000000000002 - type: ndcg_at_100 value: 21.25 - type: ndcg_at_1000 value: 24.296 - type: ndcg_at_3 value: 15.214 - type: ndcg_at_5 value: 16.387999999999998 - type: precision_at_1 value: 13.433 - type: precision_at_10 value: 2.9010000000000002 - type: precision_at_100 value: 0.532 - type: precision_at_1000 value: 0.089 - type: precision_at_3 value: 6.872 - type: precision_at_5 value: 4.907 - type: recall_at_1 value: 11.106 - type: recall_at_10 value: 23.131 - type: recall_at_100 value: 40.505 - type: recall_at_1000 value: 63.135 - type: recall_at_3 value: 16.511 - type: recall_at_5 value: 19.55 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval type: None config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: map_at_1 value: 12.153 - type: map_at_10 value: 16.474 - type: map_at_100 value: 17.294 - type: map_at_1000 value: 17.443 - type: map_at_3 value: 15.345 - type: map_at_5 value: 15.787999999999998 - type: mrr_at_1 value: 15.415000000000001 - type: mrr_at_10 value: 19.62 - type: mrr_at_100 value: 20.392 - type: mrr_at_1000 value: 20.485999999999997 - type: mrr_at_3 value: 18.511 - type: mrr_at_5 value: 19.035 - type: ndcg_at_1 value: 15.415000000000001 - type: ndcg_at_10 value: 19.377 - type: ndcg_at_100 value: 23.538999999999998 - type: ndcg_at_1000 value: 27.106 - type: ndcg_at_3 value: 17.505000000000003 - type: ndcg_at_5 value: 17.979 - type: precision_at_1 value: 15.415000000000001 - type: precision_at_10 value: 3.696 - type: precision_at_100 value: 0.8 - type: precision_at_1000 value: 0.154 - type: precision_at_3 value: 8.432 - type: precision_at_5 value: 5.731 - type: recall_at_1 value: 12.153 - type: recall_at_10 value: 24.393 - type: recall_at_100 value: 44.316 - type: recall_at_1000 value: 69.236 - type: recall_at_3 value: 18.285999999999998 - type: recall_at_5 value: 20.032 - task: type: Retrieval dataset: name: MTEB CQADupstackWordpressRetrieval type: None config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 8.349 - type: map_at_10 value: 10.949 - type: map_at_100 value: 11.588 - type: map_at_1000 value: 11.699 - type: map_at_3 value: 9.763 - type: map_at_5 value: 10.177 - type: mrr_at_1 value: 9.427000000000001 - type: mrr_at_10 value: 12.116 - type: mrr_at_100 value: 12.778 - type: mrr_at_1000 value: 12.878 - type: mrr_at_3 value: 10.875 - type: mrr_at_5 value: 11.272 - type: ndcg_at_1 value: 9.427000000000001 - type: ndcg_at_10 value: 13.006 - type: ndcg_at_100 value: 16.869 - type: ndcg_at_1000 value: 20.089000000000002 - type: ndcg_at_3 value: 10.434000000000001 - type: ndcg_at_5 value: 11.131 - type: precision_at_1 value: 9.427000000000001 - type: precision_at_10 value: 2.144 - type: precision_at_100 value: 0.455 - type: precision_at_1000 value: 0.08099999999999999 - type: precision_at_3 value: 4.251 - type: precision_at_5 value: 2.957 - type: recall_at_1 value: 8.349 - type: recall_at_10 value: 18.472 - type: recall_at_100 value: 37.485 - type: recall_at_1000 value: 62.208 - type: recall_at_3 value: 11.312 - type: recall_at_5 value: 12.914 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: None config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 4.817 - type: map_at_10 value: 8.848 - type: map_at_100 value: 10.052999999999999 - type: map_at_1000 value: 10.212 - type: map_at_3 value: 7.140000000000001 - type: map_at_5 value: 8.04 - type: mrr_at_1 value: 11.01 - type: mrr_at_10 value: 18.343 - type: mrr_at_100 value: 19.442999999999998 - type: mrr_at_1000 value: 19.517 - type: mrr_at_3 value: 15.733 - type: mrr_at_5 value: 17.065 - type: ndcg_at_1 value: 11.01 - type: ndcg_at_10 value: 13.533000000000001 - type: ndcg_at_100 value: 19.312 - type: ndcg_at_1000 value: 22.830000000000002 - type: ndcg_at_3 value: 10.218 - type: ndcg_at_5 value: 11.436 - type: precision_at_1 value: 11.01 - type: precision_at_10 value: 4.58 - type: precision_at_100 value: 1.0699999999999998 - type: precision_at_1000 value: 0.17099999999999999 - type: precision_at_3 value: 7.883 - type: precision_at_5 value: 6.41 - type: recall_at_1 value: 4.817 - type: recall_at_10 value: 17.477 - type: recall_at_100 value: 38.059 - type: recall_at_1000 value: 58.464000000000006 - type: recall_at_3 value: 9.588000000000001 - type: recall_at_5 value: 12.740000000000002 - task: type: Retrieval dataset: name: MTEB DBPedia type: None config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 2.213 - type: map_at_10 value: 5.106 - type: map_at_100 value: 7.088 - type: map_at_1000 value: 7.704 - type: map_at_3 value: 3.714 - type: map_at_5 value: 4.265 - type: mrr_at_1 value: 26.0 - type: mrr_at_10 value: 36.775999999999996 - type: mrr_at_100 value: 37.643 - type: mrr_at_1000 value: 37.695 - type: mrr_at_3 value: 34.292 - type: mrr_at_5 value: 35.754000000000005 - type: ndcg_at_1 value: 18.0 - type: ndcg_at_10 value: 14.601 - type: ndcg_at_100 value: 17.272000000000002 - type: ndcg_at_1000 value: 23.013 - type: ndcg_at_3 value: 16.767000000000003 - type: ndcg_at_5 value: 15.622 - type: precision_at_1 value: 26.0 - type: precision_at_10 value: 13.15 - type: precision_at_100 value: 4.495 - type: precision_at_1000 value: 1.0370000000000001 - type: precision_at_3 value: 21.333 - type: precision_at_5 value: 17.599999999999998 - type: recall_at_1 value: 2.213 - type: recall_at_10 value: 8.852 - type: recall_at_100 value: 22.954 - type: recall_at_1000 value: 42.836 - type: recall_at_3 value: 4.753 - type: recall_at_5 value: 5.957 - task: type: Classification dataset: name: MTEB EmotionClassification type: None config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 38.765 - type: f1 value: 35.41826314285653 - task: type: Retrieval dataset: name: MTEB FEVER type: None config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 12.147 - type: map_at_10 value: 17.898 - type: map_at_100 value: 18.756 - type: map_at_1000 value: 18.843 - type: map_at_3 value: 15.85 - type: map_at_5 value: 17.008000000000003 - type: mrr_at_1 value: 12.870999999999999 - type: mrr_at_10 value: 18.928 - type: mrr_at_100 value: 19.802 - type: mrr_at_1000 value: 19.884 - type: mrr_at_3 value: 16.804 - type: mrr_at_5 value: 18.0 - type: ndcg_at_1 value: 12.870999999999999 - type: ndcg_at_10 value: 21.579 - type: ndcg_at_100 value: 26.144000000000002 - type: ndcg_at_1000 value: 28.698 - type: ndcg_at_3 value: 17.339 - type: ndcg_at_5 value: 19.419 - type: precision_at_1 value: 12.870999999999999 - type: precision_at_10 value: 3.4819999999999998 - type: precision_at_100 value: 0.596 - type: precision_at_1000 value: 0.084 - type: precision_at_3 value: 7.396 - type: precision_at_5 value: 5.506 - type: recall_at_1 value: 12.147 - type: recall_at_10 value: 32.232 - type: recall_at_100 value: 53.911 - type: recall_at_1000 value: 73.883 - type: recall_at_3 value: 20.694000000000003 - type: recall_at_5 value: 25.689 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: None config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 5.649 - type: map_at_10 value: 8.784 - type: map_at_100 value: 9.571 - type: map_at_1000 value: 9.75 - type: map_at_3 value: 7.442 - type: map_at_5 value: 8.144 - type: mrr_at_1 value: 11.111 - type: mrr_at_10 value: 15.584999999999999 - type: mrr_at_100 value: 16.356 - type: mrr_at_1000 value: 16.471 - type: mrr_at_3 value: 13.889000000000001 - type: mrr_at_5 value: 14.823 - type: ndcg_at_1 value: 11.111 - type: ndcg_at_10 value: 12.137 - type: ndcg_at_100 value: 16.381 - type: ndcg_at_1000 value: 20.915 - type: ndcg_at_3 value: 10.045 - type: ndcg_at_5 value: 10.850999999999999 - type: precision_at_1 value: 11.111 - type: precision_at_10 value: 3.4410000000000003 - type: precision_at_100 value: 0.7779999999999999 - type: precision_at_1000 value: 0.154 - type: precision_at_3 value: 6.481000000000001 - type: precision_at_5 value: 5.122999999999999 - type: recall_at_1 value: 5.649 - type: recall_at_10 value: 15.611 - type: recall_at_100 value: 32.497 - type: recall_at_1000 value: 61.314 - type: recall_at_3 value: 9.363000000000001 - type: recall_at_5 value: 11.781 - task: type: Retrieval dataset: name: MTEB HotpotQA type: None config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 12.950999999999999 - type: map_at_10 value: 17.712 - type: map_at_100 value: 18.362000000000002 - type: map_at_1000 value: 18.45 - type: map_at_3 value: 16.325 - type: map_at_5 value: 17.071 - type: mrr_at_1 value: 25.901000000000003 - type: mrr_at_10 value: 31.275 - type: mrr_at_100 value: 31.924999999999997 - type: mrr_at_1000 value: 31.988 - type: mrr_at_3 value: 29.709999999999997 - type: mrr_at_5 value: 30.553 - type: ndcg_at_1 value: 25.901000000000003 - type: ndcg_at_10 value: 22.958000000000002 - type: ndcg_at_100 value: 26.253 - type: ndcg_at_1000 value: 28.573999999999998 - type: ndcg_at_3 value: 20.175 - type: ndcg_at_5 value: 21.468 - type: precision_at_1 value: 25.901000000000003 - type: precision_at_10 value: 5.055 - type: precision_at_100 value: 0.772 - type: precision_at_1000 value: 0.108 - type: precision_at_3 value: 12.631 - type: precision_at_5 value: 8.605 - type: recall_at_1 value: 12.950999999999999 - type: recall_at_10 value: 25.273 - type: recall_at_100 value: 38.589 - type: recall_at_1000 value: 54.159 - type: recall_at_3 value: 18.947 - type: recall_at_5 value: 21.512 - task: type: Classification dataset: name: MTEB ImdbClassification type: None config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 66.6856 - type: ap value: 61.623516914776424 - type: f1 value: 66.36169217459741 - task: type: Retrieval dataset: name: MTEB MSMARCO type: None config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 5.1339999999999995 - type: map_at_10 value: 8.908000000000001 - type: map_at_100 value: 9.722999999999999 - type: map_at_1000 value: 9.817 - type: map_at_3 value: 7.509 - type: map_at_5 value: 8.198 - type: mrr_at_1 value: 5.3580000000000005 - type: mrr_at_10 value: 9.188 - type: mrr_at_100 value: 10.012 - type: mrr_at_1000 value: 10.104000000000001 - type: mrr_at_3 value: 7.758 - type: mrr_at_5 value: 8.466 - type: ndcg_at_1 value: 5.315 - type: ndcg_at_10 value: 11.344 - type: ndcg_at_100 value: 15.823 - type: ndcg_at_1000 value: 18.701 - type: ndcg_at_3 value: 8.372 - type: ndcg_at_5 value: 9.625 - type: precision_at_1 value: 5.315 - type: precision_at_10 value: 1.966 - type: precision_at_100 value: 0.42900000000000005 - type: precision_at_1000 value: 0.068 - type: precision_at_3 value: 3.682 - type: precision_at_5 value: 2.834 - type: recall_at_1 value: 5.1339999999999995 - type: recall_at_10 value: 18.948 - type: recall_at_100 value: 40.849999999999994 - type: recall_at_1000 value: 64.048 - type: recall_at_3 value: 10.659 - type: recall_at_5 value: 13.691 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: None config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 85.54947560419517 - type: f1 value: 84.8242629182151 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: None config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 50.348837209302324 - type: f1 value: 33.552311863643 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: None config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.76529926025555 - type: f1 value: 53.50819461312456 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: None config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.64425016812373 - type: f1 value: 64.67534424488329 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: None config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 26.33530655295464 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: None config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 23.27607219103413 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: None config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 27.082984442235798 - type: mrr value: 27.558608635217794 - task: type: Retrieval dataset: name: MTEB NFCorpus type: None config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 2.622 - type: map_at_10 value: 5.769 - type: map_at_100 value: 7.276000000000001 - type: map_at_1000 value: 8.394 - type: map_at_3 value: 4.566 - type: map_at_5 value: 5.1979999999999995 - type: mrr_at_1 value: 26.935 - type: mrr_at_10 value: 37.354 - type: mrr_at_100 value: 38.151 - type: mrr_at_1000 value: 38.224000000000004 - type: mrr_at_3 value: 34.778 - type: mrr_at_5 value: 36.342 - type: ndcg_at_1 value: 25.386999999999997 - type: ndcg_at_10 value: 19.151 - type: ndcg_at_100 value: 18.927 - type: ndcg_at_1000 value: 28.666999999999998 - type: ndcg_at_3 value: 22.261 - type: ndcg_at_5 value: 20.921 - type: precision_at_1 value: 26.935 - type: precision_at_10 value: 13.715 - type: precision_at_100 value: 5.276 - type: precision_at_1000 value: 1.8270000000000002 - type: precision_at_3 value: 20.64 - type: precision_at_5 value: 17.337 - type: recall_at_1 value: 2.622 - type: recall_at_10 value: 9.516 - type: recall_at_100 value: 22.083 - type: recall_at_1000 value: 55.879999999999995 - type: recall_at_3 value: 6.051 - type: recall_at_5 value: 7.738 - task: type: Retrieval dataset: name: MTEB NQ type: None config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 9.020999999999999 - type: map_at_10 value: 14.432 - type: map_at_100 value: 15.384 - type: map_at_1000 value: 15.473 - type: map_at_3 value: 12.362 - type: map_at_5 value: 13.397 - type: mrr_at_1 value: 10.197000000000001 - type: mrr_at_10 value: 15.914 - type: mrr_at_100 value: 16.814 - type: mrr_at_1000 value: 16.892 - type: mrr_at_3 value: 13.764999999999999 - type: mrr_at_5 value: 14.838000000000001 - type: ndcg_at_1 value: 10.197000000000001 - type: ndcg_at_10 value: 18.109 - type: ndcg_at_100 value: 23.055999999999997 - type: ndcg_at_1000 value: 25.569999999999997 - type: ndcg_at_3 value: 13.771 - type: ndcg_at_5 value: 15.618000000000002 - type: precision_at_1 value: 10.197000000000001 - type: precision_at_10 value: 3.305 - type: precision_at_100 value: 0.615 - type: precision_at_1000 value: 0.08499999999999999 - type: precision_at_3 value: 6.354 - type: precision_at_5 value: 4.855 - type: recall_at_1 value: 9.020999999999999 - type: recall_at_10 value: 28.153 - type: recall_at_100 value: 51.278999999999996 - type: recall_at_1000 value: 70.742 - type: recall_at_3 value: 16.478 - type: recall_at_5 value: 20.766000000000002 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: None config: default split: test revision: None metrics: - type: map_at_1 value: 58.165 - type: map_at_10 value: 69.596 - type: map_at_100 value: 70.317 - type: map_at_1000 value: 70.358 - type: map_at_3 value: 66.952 - type: map_at_5 value: 68.57600000000001 - type: mrr_at_1 value: 67.09 - type: mrr_at_10 value: 74.236 - type: mrr_at_100 value: 74.541 - type: mrr_at_1000 value: 74.554 - type: mrr_at_3 value: 72.815 - type: mrr_at_5 value: 73.719 - type: ndcg_at_1 value: 67.16 - type: ndcg_at_10 value: 74.18299999999999 - type: ndcg_at_100 value: 76.452 - type: ndcg_at_1000 value: 77.105 - type: ndcg_at_3 value: 70.881 - type: ndcg_at_5 value: 72.603 - type: precision_at_1 value: 67.16 - type: precision_at_10 value: 11.115 - type: precision_at_100 value: 1.3379999999999999 - type: precision_at_1000 value: 0.146 - type: precision_at_3 value: 30.607 - type: precision_at_5 value: 20.214 - type: recall_at_1 value: 58.165 - type: recall_at_10 value: 82.73700000000001 - type: recall_at_100 value: 91.767 - type: recall_at_1000 value: 95.94500000000001 - type: recall_at_3 value: 73.134 - type: recall_at_5 value: 77.98 - type: map_at_1 value: 2.0500000000000003 - type: map_at_10 value: 5.06 - type: map_at_100 value: 6.146999999999999 - type: map_at_1000 value: 6.368 - type: map_at_3 value: 3.6220000000000003 - type: map_at_5 value: 4.324999999999999 - type: mrr_at_1 value: 10.100000000000001 - type: mrr_at_10 value: 17.000999999999998 - type: mrr_at_100 value: 18.223 - type: mrr_at_1000 value: 18.329 - type: mrr_at_3 value: 14.267 - type: mrr_at_5 value: 15.626999999999999 - type: ndcg_at_1 value: 10.100000000000001 - type: ndcg_at_10 value: 9.467 - type: ndcg_at_100 value: 14.862 - type: ndcg_at_1000 value: 19.794999999999998 - type: ndcg_at_3 value: 8.405999999999999 - type: ndcg_at_5 value: 7.539 - type: precision_at_1 value: 10.100000000000001 - type: precision_at_10 value: 5.12 - type: precision_at_100 value: 1.3 - type: precision_at_1000 value: 0.249 - type: precision_at_3 value: 7.832999999999999 - type: precision_at_5 value: 6.72 - type: recall_at_1 value: 2.0500000000000003 - type: recall_at_10 value: 10.388 - type: recall_at_100 value: 26.401999999999997 - type: recall_at_1000 value: 50.707 - type: recall_at_3 value: 4.760000000000001 - type: recall_at_5 value: 6.8180000000000005 - type: map_at_1 value: 0.149 - type: map_at_10 value: 0.8999999999999999 - type: map_at_100 value: 3.943 - type: map_at_1000 value: 9.314 - type: map_at_3 value: 0.366 - type: map_at_5 value: 0.521 - type: mrr_at_1 value: 64.0 - type: mrr_at_10 value: 75.102 - type: mrr_at_100 value: 75.48 - type: mrr_at_1000 value: 75.48 - type: mrr_at_3 value: 73.333 - type: mrr_at_5 value: 74.233 - type: ndcg_at_1 value: 57.99999999999999 - type: ndcg_at_10 value: 49.122 - type: ndcg_at_100 value: 32.473 - type: ndcg_at_1000 value: 27.815 - type: ndcg_at_3 value: 55.00000000000001 - type: ndcg_at_5 value: 51.571 - type: precision_at_1 value: 64.0 - type: precision_at_10 value: 53.0 - type: precision_at_100 value: 33.2 - type: precision_at_1000 value: 13.308 - type: precision_at_3 value: 61.333000000000006 - type: precision_at_5 value: 56.00000000000001 - type: recall_at_1 value: 0.149 - type: recall_at_10 value: 1.176 - type: recall_at_100 value: 7.074 - type: recall_at_1000 value: 26.454 - type: recall_at_3 value: 0.406 - type: recall_at_5 value: 0.617 - task: type: Clustering dataset: name: MTEB RedditClustering type: None config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 24.5493374372718 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: None config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 38.00016239175056 - task: type: STS dataset: name: MTEB SICK-R type: None config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 71.89984343607804 - type: cos_sim_spearman value: 63.43467385580481 - type: euclidean_pearson value: 69.73579823240381 - type: euclidean_spearman value: 63.43475730968674 - type: manhattan_pearson value: 69.8228923194184 - type: manhattan_spearman value: 63.58845076244296 - task: type: STS dataset: name: MTEB STS12 type: None config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 65.4759399816162 - type: cos_sim_spearman value: 65.35498422056574 - type: euclidean_pearson value: 68.36154404872667 - type: euclidean_spearman value: 65.35640076124622 - type: manhattan_pearson value: 68.40246600630756 - type: manhattan_spearman value: 65.42497257424823 - task: type: STS dataset: name: MTEB STS13 type: None config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 55.68971853486872 - type: cos_sim_spearman value: 59.06834913713076 - type: euclidean_pearson value: 59.706488047671044 - type: euclidean_spearman value: 59.06838710110628 - type: manhattan_pearson value: 60.072587773127125 - type: manhattan_spearman value: 59.50320399915506 - task: type: STS dataset: name: MTEB STS14 type: None config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 57.557307550750295 - type: cos_sim_spearman value: 60.68273711090555 - type: euclidean_pearson value: 60.20869968120602 - type: euclidean_spearman value: 60.68272743643783 - type: manhattan_pearson value: 60.50545955587273 - type: manhattan_spearman value: 60.95213012572679 - task: type: STS dataset: name: MTEB STS15 type: None config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 58.18042156345806 - type: cos_sim_spearman value: 65.32579360012554 - type: euclidean_pearson value: 64.31792924588922 - type: euclidean_spearman value: 65.32579038611287 - type: manhattan_pearson value: 64.40233473846389 - type: manhattan_spearman value: 65.40214278585312 - task: type: STS dataset: name: MTEB STS16 type: None config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 44.50178221672106 - type: cos_sim_spearman value: 55.784111367310665 - type: euclidean_pearson value: 53.41468235269096 - type: euclidean_spearman value: 55.784384888922425 - type: manhattan_pearson value: 53.53280905200817 - type: manhattan_spearman value: 55.88198774771714 - task: type: STS dataset: name: MTEB STS17 (en-en) type: None config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 55.13631567761868 - type: cos_sim_spearman value: 65.57248978406166 - type: euclidean_pearson value: 64.03946971471025 - type: euclidean_spearman value: 65.57337456557754 - type: manhattan_pearson value: 63.61261893563087 - type: manhattan_spearman value: 65.1385835431766 - task: type: STS dataset: name: MTEB STS22 (en) type: None config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 29.45864805131749 - type: cos_sim_spearman value: 50.95973110970679 - type: euclidean_pearson value: 40.95705582088806 - type: euclidean_spearman value: 50.95973110970679 - type: manhattan_pearson value: 41.674733184754345 - type: manhattan_spearman value: 51.614159713070464 - task: type: STS dataset: name: MTEB STSBenchmark type: None config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 56.77386223789116 - type: cos_sim_spearman value: 59.293606859595485 - type: euclidean_pearson value: 60.19020154248288 - type: euclidean_spearman value: 59.29359527035356 - type: manhattan_pearson value: 60.282114876575186 - type: manhattan_spearman value: 59.37212976911096 - task: type: Reranking dataset: name: MTEB SciDocsRR type: None config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 61.571833060295766 - type: mrr value: 85.19842267391286 - task: type: Retrieval dataset: name: MTEB SciFact type: None config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 31.556 - type: map_at_10 value: 38.556000000000004 - type: map_at_100 value: 39.701 - type: map_at_1000 value: 39.775 - type: map_at_3 value: 36.342999999999996 - type: map_at_5 value: 37.702000000000005 - type: mrr_at_1 value: 33.0 - type: mrr_at_10 value: 40.035 - type: mrr_at_100 value: 41.053 - type: mrr_at_1000 value: 41.115 - type: mrr_at_3 value: 38.0 - type: mrr_at_5 value: 39.333 - type: ndcg_at_1 value: 33.0 - type: ndcg_at_10 value: 42.512 - type: ndcg_at_100 value: 48.231 - type: ndcg_at_1000 value: 50.283 - type: ndcg_at_3 value: 38.272 - type: ndcg_at_5 value: 40.589 - type: precision_at_1 value: 33.0 - type: precision_at_10 value: 5.867 - type: precision_at_100 value: 0.903 - type: precision_at_1000 value: 0.108 - type: precision_at_3 value: 15.0 - type: precision_at_5 value: 10.333 - type: recall_at_1 value: 31.556 - type: recall_at_10 value: 53.261 - type: recall_at_100 value: 80.072 - type: recall_at_1000 value: 96.39999999999999 - type: recall_at_3 value: 42.111 - type: recall_at_5 value: 47.556 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: None config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.28217821782178 - type: cos_sim_ap value: 52.992820561773925 - type: cos_sim_f1 value: 53.79000561482312 - type: cos_sim_precision value: 61.33162612035852 - type: cos_sim_recall value: 47.9 - type: dot_accuracy value: 99.28217821782178 - type: dot_ap value: 52.992820561773925 - type: dot_f1 value: 53.79000561482312 - type: dot_precision value: 61.33162612035852 - type: dot_recall value: 47.9 - type: euclidean_accuracy value: 99.28217821782178 - type: euclidean_ap value: 52.992820561773925 - type: euclidean_f1 value: 53.79000561482312 - type: euclidean_precision value: 61.33162612035852 - type: euclidean_recall value: 47.9 - type: manhattan_accuracy value: 99.2871287128713 - type: manhattan_ap value: 53.606023555965066 - type: manhattan_f1 value: 54.131054131054135 - type: manhattan_precision value: 62.913907284768214 - type: manhattan_recall value: 47.5 - type: max_accuracy value: 99.2871287128713 - type: max_ap value: 53.606023555965066 - type: max_f1 value: 54.131054131054135 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: None config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 36.32857312142378 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: None config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 27.504144051226 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: None config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 40.76457049974378 - type: mrr value: 41.07083174178763 - task: type: Summarization dataset: name: MTEB SummEval type: None config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.243788760312984 - type: cos_sim_spearman value: 31.03280248261321 - type: dot_pearson value: 30.243788764677664 - type: dot_spearman value: 31.061614308694534 - task: type: Retrieval dataset: name: MTEB Touche2020 type: None config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 1.6129999999999998 - type: map_at_10 value: 4.543 - type: map_at_100 value: 7.212000000000001 - type: map_at_1000 value: 8.495 - type: map_at_3 value: 2.6069999999999998 - type: map_at_5 value: 3.451 - type: mrr_at_1 value: 24.490000000000002 - type: mrr_at_10 value: 32.638 - type: mrr_at_100 value: 34.361000000000004 - type: mrr_at_1000 value: 34.39 - type: mrr_at_3 value: 28.912 - type: mrr_at_5 value: 31.156 - type: ndcg_at_1 value: 23.469 - type: ndcg_at_10 value: 13.056000000000001 - type: ndcg_at_100 value: 22.066 - type: ndcg_at_1000 value: 34.38 - type: ndcg_at_3 value: 16.495 - type: ndcg_at_5 value: 15.158 - type: precision_at_1 value: 24.490000000000002 - type: precision_at_10 value: 11.224 - type: precision_at_100 value: 5.061 - type: precision_at_1000 value: 1.2630000000000001 - type: precision_at_3 value: 15.645999999999999 - type: precision_at_5 value: 15.101999999999999 - type: recall_at_1 value: 1.6129999999999998 - type: recall_at_10 value: 8.486 - type: recall_at_100 value: 31.317 - type: recall_at_1000 value: 69.62899999999999 - type: recall_at_3 value: 3.421 - type: recall_at_5 value: 5.328 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: None config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.8716 - type: ap value: 13.588164805433928 - type: f1 value: 54.41439485369954 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: None config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 57.06564799094511 - type: f1 value: 57.20282472039746 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: None config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 23.406257160147327 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: None config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 81.0037551409668 - type: cos_sim_ap value: 55.47970632502296 - type: cos_sim_f1 value: 53.07814214311838 - type: cos_sim_precision value: 49.33152050759121 - type: cos_sim_recall value: 57.440633245382585 - type: dot_accuracy value: 81.0037551409668 - type: dot_ap value: 55.47970632502296 - type: dot_f1 value: 53.07814214311838 - type: dot_precision value: 49.33152050759121 - type: dot_recall value: 57.440633245382585 - type: euclidean_accuracy value: 81.0037551409668 - type: euclidean_ap value: 55.47970632502296 - type: euclidean_f1 value: 53.07814214311838 - type: euclidean_precision value: 49.33152050759121 - type: euclidean_recall value: 57.440633245382585 - type: manhattan_accuracy value: 81.0037551409668 - type: manhattan_ap value: 55.80897704541587 - type: manhattan_f1 value: 53.43965722447036 - type: manhattan_precision value: 48.67736339982654 - type: manhattan_recall value: 59.23482849604221 - type: max_accuracy value: 81.0037551409668 - type: max_ap value: 55.80897704541587 - type: max_f1 value: 53.43965722447036 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: None config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 83.57783211083944 - type: cos_sim_ap value: 71.5815660869356 - type: cos_sim_f1 value: 63.43217907021236 - type: cos_sim_precision value: 63.04860424431429 - type: cos_sim_recall value: 63.82044964582691 - type: dot_accuracy value: 83.57783211083944 - type: dot_ap value: 71.58156546802053 - type: dot_f1 value: 63.43217907021236 - type: dot_precision value: 63.04860424431429 - type: dot_recall value: 63.82044964582691 - type: euclidean_accuracy value: 83.57783211083944 - type: euclidean_ap value: 71.5815657331584 - type: euclidean_f1 value: 63.43217907021236 - type: euclidean_precision value: 63.04860424431429 - type: euclidean_recall value: 63.82044964582691 - type: manhattan_accuracy value: 83.69231963363993 - type: manhattan_ap value: 71.80465347479418 - type: manhattan_f1 value: 63.49836524988323 - type: manhattan_precision value: 64.2081234256927 - type: manhattan_recall value: 62.804126886356634 - type: max_accuracy value: 83.69231963363993 - type: max_ap value: 71.80465347479418 - type: max_f1 value: 63.49836524988323 ---
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
AndreasX/jina-embeddings-v2-base-es-Q2_K-GGUF
AndreasX
feature-extraction
[ "sentence-transformers", "gguf", "feature-extraction", "sentence-similarity", "mteb", "llama-cpp", "gguf-my-repo", "es", "en", "base_model:jinaai/jina-embeddings-v2-base-es", "base_model:quantized:jinaai/jina-embeddings-v2-base-es", "license:apache-2.0", "model-index", "autotrain_compatible", "region:us" ]
1,729
1,729
15
0
--- base_model: jinaai/jina-embeddings-v2-base-es language: - es - en license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - llama-cpp - gguf-my-repo inference: false model-index: - name: jina-embeddings-v2-base-es results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 74.25373134328358 - type: ap value: 37.05201236793268 - type: f1 value: 68.16770391201077 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 78.30885 - type: ap value: 73.01622441156408 - type: f1 value: 78.20769284466313 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 38.324 - type: f1 value: 37.89543008761673 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (es) type: mteb/amazon_reviews_multi config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 38.678000000000004 - type: f1 value: 38.122639506976 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 23.968999999999998 - type: map_at_10 value: 40.691 - type: map_at_100 value: 41.713 - type: map_at_1000 value: 41.719 - type: map_at_3 value: 35.42 - type: map_at_5 value: 38.442 - type: mrr_at_1 value: 24.395 - type: mrr_at_10 value: 40.853 - type: mrr_at_100 value: 41.869 - type: mrr_at_1000 value: 41.874 - type: mrr_at_3 value: 35.68 - type: mrr_at_5 value: 38.572 - type: ndcg_at_1 value: 23.968999999999998 - type: ndcg_at_10 value: 50.129999999999995 - type: ndcg_at_100 value: 54.364000000000004 - type: ndcg_at_1000 value: 54.494 - type: ndcg_at_3 value: 39.231 - type: ndcg_at_5 value: 44.694 - type: precision_at_1 value: 23.968999999999998 - type: precision_at_10 value: 8.036999999999999 - type: precision_at_100 value: 0.9860000000000001 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 16.761 - type: precision_at_5 value: 12.717 - type: recall_at_1 value: 23.968999999999998 - type: recall_at_10 value: 80.36999999999999 - type: recall_at_100 value: 98.578 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 50.28399999999999 - type: recall_at_5 value: 63.585 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 41.54886683150053 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 32.186028697637234 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 61.19432643698725 - type: mrr value: 75.28646176845622 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 86.3828259381228 - type: cos_sim_spearman value: 83.04647058342209 - type: euclidean_pearson value: 84.02895346096244 - type: euclidean_spearman value: 82.34524978635342 - type: manhattan_pearson value: 84.35030723233426 - type: manhattan_spearman value: 83.17177464337936 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 85.25649350649351 - type: f1 value: 85.22320474023192 - task: type: Clustering dataset: name: MTEB BigPatentClustering type: jinaai/big-patent-clustering config: default split: test revision: 62d5330920bca426ce9d3c76ea914f15fc83e891 metrics: - type: v_measure value: 20.42929408254094 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.165318177498136 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 28.89030154229562 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 30.119 - type: map_at_10 value: 42.092 - type: map_at_100 value: 43.506 - type: map_at_1000 value: 43.631 - type: map_at_3 value: 38.373000000000005 - type: map_at_5 value: 40.501 - type: mrr_at_1 value: 38.196999999999996 - type: mrr_at_10 value: 48.237 - type: mrr_at_100 value: 48.914 - type: mrr_at_1000 value: 48.959 - type: mrr_at_3 value: 45.279 - type: mrr_at_5 value: 47.11 - type: ndcg_at_1 value: 38.196999999999996 - type: ndcg_at_10 value: 48.849 - type: ndcg_at_100 value: 53.713 - type: ndcg_at_1000 value: 55.678000000000004 - type: ndcg_at_3 value: 43.546 - type: ndcg_at_5 value: 46.009 - type: precision_at_1 value: 38.196999999999996 - type: precision_at_10 value: 9.642000000000001 - type: precision_at_100 value: 1.5190000000000001 - type: precision_at_1000 value: 0.199 - type: precision_at_3 value: 21.65 - type: precision_at_5 value: 15.708 - type: recall_at_1 value: 30.119 - type: recall_at_10 value: 61.788 - type: recall_at_100 value: 82.14399999999999 - type: recall_at_1000 value: 95.003 - type: recall_at_3 value: 45.772 - type: recall_at_5 value: 53.04600000000001 - type: map_at_1 value: 28.979 - type: map_at_10 value: 37.785000000000004 - type: map_at_100 value: 38.945 - type: map_at_1000 value: 39.071 - type: map_at_3 value: 35.083999999999996 - type: map_at_5 value: 36.571999999999996 - type: mrr_at_1 value: 36.242000000000004 - type: mrr_at_10 value: 43.552 - type: mrr_at_100 value: 44.228 - type: mrr_at_1000 value: 44.275999999999996 - type: mrr_at_3 value: 41.359 - type: mrr_at_5 value: 42.598 - type: ndcg_at_1 value: 36.242000000000004 - type: ndcg_at_10 value: 42.94 - type: ndcg_at_100 value: 47.343 - type: ndcg_at_1000 value: 49.538 - type: ndcg_at_3 value: 39.086999999999996 - type: ndcg_at_5 value: 40.781 - type: precision_at_1 value: 36.242000000000004 - type: precision_at_10 value: 7.954999999999999 - type: precision_at_100 value: 1.303 - type: precision_at_1000 value: 0.178 - type: precision_at_3 value: 18.556 - type: precision_at_5 value: 13.145999999999999 - type: recall_at_1 value: 28.979 - type: recall_at_10 value: 51.835 - type: recall_at_100 value: 70.47 - type: recall_at_1000 value: 84.68299999999999 - type: recall_at_3 value: 40.410000000000004 - type: recall_at_5 value: 45.189 - type: map_at_1 value: 37.878 - type: map_at_10 value: 49.903 - type: map_at_100 value: 50.797000000000004 - type: map_at_1000 value: 50.858000000000004 - type: map_at_3 value: 46.526 - type: map_at_5 value: 48.615 - type: mrr_at_1 value: 43.135 - type: mrr_at_10 value: 53.067 - type: mrr_at_100 value: 53.668000000000006 - type: mrr_at_1000 value: 53.698 - type: mrr_at_3 value: 50.449 - type: mrr_at_5 value: 52.117000000000004 - type: ndcg_at_1 value: 43.135 - type: ndcg_at_10 value: 55.641 - type: ndcg_at_100 value: 59.427 - type: ndcg_at_1000 value: 60.655 - type: ndcg_at_3 value: 49.969 - type: ndcg_at_5 value: 53.075 - type: precision_at_1 value: 43.135 - type: precision_at_10 value: 8.997 - type: precision_at_100 value: 1.1809999999999998 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 22.215 - type: precision_at_5 value: 15.586 - type: recall_at_1 value: 37.878 - type: recall_at_10 value: 69.405 - type: recall_at_100 value: 86.262 - type: recall_at_1000 value: 95.012 - type: recall_at_3 value: 54.458 - type: recall_at_5 value: 61.965 - type: map_at_1 value: 24.853 - type: map_at_10 value: 32.402 - type: map_at_100 value: 33.417 - type: map_at_1000 value: 33.498 - type: map_at_3 value: 30.024 - type: map_at_5 value: 31.407 - type: mrr_at_1 value: 26.667 - type: mrr_at_10 value: 34.399 - type: mrr_at_100 value: 35.284 - type: mrr_at_1000 value: 35.345 - type: mrr_at_3 value: 32.109 - type: mrr_at_5 value: 33.375 - type: ndcg_at_1 value: 26.667 - type: ndcg_at_10 value: 36.854 - type: ndcg_at_100 value: 42.196 - type: ndcg_at_1000 value: 44.303 - type: ndcg_at_3 value: 32.186 - type: ndcg_at_5 value: 34.512 - type: precision_at_1 value: 26.667 - type: precision_at_10 value: 5.559 - type: precision_at_100 value: 0.88 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 13.333 - type: precision_at_5 value: 9.379 - type: recall_at_1 value: 24.853 - type: recall_at_10 value: 48.636 - type: recall_at_100 value: 73.926 - type: recall_at_1000 value: 89.94 - type: recall_at_3 value: 36.266 - type: recall_at_5 value: 41.723 - type: map_at_1 value: 14.963999999999999 - type: map_at_10 value: 22.591 - type: map_at_100 value: 23.735999999999997 - type: map_at_1000 value: 23.868000000000002 - type: map_at_3 value: 20.093 - type: map_at_5 value: 21.499 - type: mrr_at_1 value: 18.407999999999998 - type: mrr_at_10 value: 26.863 - type: mrr_at_100 value: 27.87 - type: mrr_at_1000 value: 27.947 - type: mrr_at_3 value: 24.254 - type: mrr_at_5 value: 25.784000000000002 - type: ndcg_at_1 value: 18.407999999999998 - type: ndcg_at_10 value: 27.549 - type: ndcg_at_100 value: 33.188 - type: ndcg_at_1000 value: 36.312 - type: ndcg_at_3 value: 22.862 - type: ndcg_at_5 value: 25.130999999999997 - type: precision_at_1 value: 18.407999999999998 - type: precision_at_10 value: 5.087 - type: precision_at_100 value: 0.923 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 10.987 - type: precision_at_5 value: 8.209 - type: recall_at_1 value: 14.963999999999999 - type: recall_at_10 value: 38.673 - type: recall_at_100 value: 63.224999999999994 - type: recall_at_1000 value: 85.443 - type: recall_at_3 value: 25.840000000000003 - type: recall_at_5 value: 31.503999999999998 - type: map_at_1 value: 27.861000000000004 - type: map_at_10 value: 37.562 - type: map_at_100 value: 38.906 - type: map_at_1000 value: 39.021 - type: map_at_3 value: 34.743 - type: map_at_5 value: 36.168 - type: mrr_at_1 value: 34.455999999999996 - type: mrr_at_10 value: 43.428 - type: mrr_at_100 value: 44.228 - type: mrr_at_1000 value: 44.278 - type: mrr_at_3 value: 41.001 - type: mrr_at_5 value: 42.315000000000005 - type: ndcg_at_1 value: 34.455999999999996 - type: ndcg_at_10 value: 43.477 - type: ndcg_at_100 value: 48.953 - type: ndcg_at_1000 value: 51.19200000000001 - type: ndcg_at_3 value: 38.799 - type: ndcg_at_5 value: 40.743 - type: precision_at_1 value: 34.455999999999996 - type: precision_at_10 value: 7.902000000000001 - type: precision_at_100 value: 1.244 - type: precision_at_1000 value: 0.161 - type: precision_at_3 value: 18.511 - type: precision_at_5 value: 12.859000000000002 - type: recall_at_1 value: 27.861000000000004 - type: recall_at_10 value: 55.36 - type: recall_at_100 value: 78.384 - type: recall_at_1000 value: 93.447 - type: recall_at_3 value: 41.926 - type: recall_at_5 value: 47.257 - type: map_at_1 value: 26.375 - type: map_at_10 value: 35.571000000000005 - type: map_at_100 value: 36.785000000000004 - type: map_at_1000 value: 36.905 - type: map_at_3 value: 32.49 - type: map_at_5 value: 34.123999999999995 - type: mrr_at_1 value: 32.647999999999996 - type: mrr_at_10 value: 40.598 - type: mrr_at_100 value: 41.484 - type: mrr_at_1000 value: 41.546 - type: mrr_at_3 value: 37.9 - type: mrr_at_5 value: 39.401 - type: ndcg_at_1 value: 32.647999999999996 - type: ndcg_at_10 value: 41.026 - type: ndcg_at_100 value: 46.365 - type: ndcg_at_1000 value: 48.876 - type: ndcg_at_3 value: 35.843 - type: ndcg_at_5 value: 38.118 - type: precision_at_1 value: 32.647999999999996 - type: precision_at_10 value: 7.443 - type: precision_at_100 value: 1.18 - type: precision_at_1000 value: 0.158 - type: precision_at_3 value: 16.819 - type: precision_at_5 value: 11.985999999999999 - type: recall_at_1 value: 26.375 - type: recall_at_10 value: 52.471000000000004 - type: recall_at_100 value: 75.354 - type: recall_at_1000 value: 92.35 - type: recall_at_3 value: 37.893 - type: recall_at_5 value: 43.935 - type: map_at_1 value: 25.012666666666668 - type: map_at_10 value: 33.685833333333335 - type: map_at_100 value: 34.849250000000005 - type: map_at_1000 value: 34.970083333333335 - type: map_at_3 value: 31.065083333333334 - type: map_at_5 value: 32.494416666666666 - type: mrr_at_1 value: 29.772666666666662 - type: mrr_at_10 value: 37.824666666666666 - type: mrr_at_100 value: 38.66741666666666 - type: mrr_at_1000 value: 38.72916666666666 - type: mrr_at_3 value: 35.54575 - type: mrr_at_5 value: 36.81524999999999 - type: ndcg_at_1 value: 29.772666666666662 - type: ndcg_at_10 value: 38.78241666666666 - type: ndcg_at_100 value: 43.84591666666667 - type: ndcg_at_1000 value: 46.275416666666665 - type: ndcg_at_3 value: 34.33416666666667 - type: ndcg_at_5 value: 36.345166666666664 - type: precision_at_1 value: 29.772666666666662 - type: precision_at_10 value: 6.794916666666667 - type: precision_at_100 value: 1.106416666666667 - type: precision_at_1000 value: 0.15033333333333335 - type: precision_at_3 value: 15.815083333333336 - type: precision_at_5 value: 11.184166666666664 - type: recall_at_1 value: 25.012666666666668 - type: recall_at_10 value: 49.748500000000014 - type: recall_at_100 value: 72.11341666666667 - type: recall_at_1000 value: 89.141 - type: recall_at_3 value: 37.242999999999995 - type: recall_at_5 value: 42.49033333333333 - type: map_at_1 value: 23.177 - type: map_at_10 value: 29.310000000000002 - type: map_at_100 value: 30.188 - type: map_at_1000 value: 30.29 - type: map_at_3 value: 27.356 - type: map_at_5 value: 28.410999999999998 - type: mrr_at_1 value: 26.074 - type: mrr_at_10 value: 32.002 - type: mrr_at_100 value: 32.838 - type: mrr_at_1000 value: 32.909 - type: mrr_at_3 value: 30.317 - type: mrr_at_5 value: 31.222 - type: ndcg_at_1 value: 26.074 - type: ndcg_at_10 value: 32.975 - type: ndcg_at_100 value: 37.621 - type: ndcg_at_1000 value: 40.253 - type: ndcg_at_3 value: 29.452 - type: ndcg_at_5 value: 31.020999999999997 - type: precision_at_1 value: 26.074 - type: precision_at_10 value: 5.077 - type: precision_at_100 value: 0.8049999999999999 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 12.526000000000002 - type: precision_at_5 value: 8.588999999999999 - type: recall_at_1 value: 23.177 - type: recall_at_10 value: 41.613 - type: recall_at_100 value: 63.287000000000006 - type: recall_at_1000 value: 83.013 - type: recall_at_3 value: 31.783 - type: recall_at_5 value: 35.769 - type: map_at_1 value: 15.856 - type: map_at_10 value: 22.651 - type: map_at_100 value: 23.649 - type: map_at_1000 value: 23.783 - type: map_at_3 value: 20.591 - type: map_at_5 value: 21.684 - type: mrr_at_1 value: 19.408 - type: mrr_at_10 value: 26.51 - type: mrr_at_100 value: 27.356 - type: mrr_at_1000 value: 27.439999999999998 - type: mrr_at_3 value: 24.547 - type: mrr_at_5 value: 25.562 - type: ndcg_at_1 value: 19.408 - type: ndcg_at_10 value: 27.072000000000003 - type: ndcg_at_100 value: 31.980999999999998 - type: ndcg_at_1000 value: 35.167 - type: ndcg_at_3 value: 23.338 - type: ndcg_at_5 value: 24.94 - type: precision_at_1 value: 19.408 - type: precision_at_10 value: 4.9590000000000005 - type: precision_at_100 value: 0.8710000000000001 - type: precision_at_1000 value: 0.132 - type: precision_at_3 value: 11.138 - type: precision_at_5 value: 7.949000000000001 - type: recall_at_1 value: 15.856 - type: recall_at_10 value: 36.578 - type: recall_at_100 value: 58.89 - type: recall_at_1000 value: 81.743 - type: recall_at_3 value: 25.94 - type: recall_at_5 value: 30.153999999999996 - type: map_at_1 value: 25.892 - type: map_at_10 value: 33.899 - type: map_at_100 value: 34.955000000000005 - type: map_at_1000 value: 35.066 - type: map_at_3 value: 31.41 - type: map_at_5 value: 32.669 - type: mrr_at_1 value: 30.224 - type: mrr_at_10 value: 37.936 - type: mrr_at_100 value: 38.777 - type: mrr_at_1000 value: 38.85 - type: mrr_at_3 value: 35.821 - type: mrr_at_5 value: 36.894 - type: ndcg_at_1 value: 30.224 - type: ndcg_at_10 value: 38.766 - type: ndcg_at_100 value: 43.806 - type: ndcg_at_1000 value: 46.373999999999995 - type: ndcg_at_3 value: 34.325 - type: ndcg_at_5 value: 36.096000000000004 - type: precision_at_1 value: 30.224 - type: precision_at_10 value: 6.446000000000001 - type: precision_at_100 value: 1.0 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 15.392 - type: precision_at_5 value: 10.671999999999999 - type: recall_at_1 value: 25.892 - type: recall_at_10 value: 49.573 - type: recall_at_100 value: 71.885 - type: recall_at_1000 value: 89.912 - type: recall_at_3 value: 37.226 - type: recall_at_5 value: 41.74 - type: map_at_1 value: 23.915 - type: map_at_10 value: 33.613 - type: map_at_100 value: 35.333999999999996 - type: map_at_1000 value: 35.563 - type: map_at_3 value: 31.203999999999997 - type: map_at_5 value: 32.479 - type: mrr_at_1 value: 29.447000000000003 - type: mrr_at_10 value: 38.440000000000005 - type: mrr_at_100 value: 39.459 - type: mrr_at_1000 value: 39.513999999999996 - type: mrr_at_3 value: 36.495 - type: mrr_at_5 value: 37.592 - type: ndcg_at_1 value: 29.447000000000003 - type: ndcg_at_10 value: 39.341 - type: ndcg_at_100 value: 45.382 - type: ndcg_at_1000 value: 47.921 - type: ndcg_at_3 value: 35.671 - type: ndcg_at_5 value: 37.299 - type: precision_at_1 value: 29.447000000000003 - type: precision_at_10 value: 7.648000000000001 - type: precision_at_100 value: 1.567 - type: precision_at_1000 value: 0.241 - type: precision_at_3 value: 17.194000000000003 - type: precision_at_5 value: 12.253 - type: recall_at_1 value: 23.915 - type: recall_at_10 value: 49.491 - type: recall_at_100 value: 76.483 - type: recall_at_1000 value: 92.674 - type: recall_at_3 value: 38.878 - type: recall_at_5 value: 43.492 - type: map_at_1 value: 20.283 - type: map_at_10 value: 26.851000000000003 - type: map_at_100 value: 27.973 - type: map_at_1000 value: 28.087 - type: map_at_3 value: 24.887 - type: map_at_5 value: 25.804 - type: mrr_at_1 value: 22.366 - type: mrr_at_10 value: 28.864 - type: mrr_at_100 value: 29.903000000000002 - type: mrr_at_1000 value: 29.988 - type: mrr_at_3 value: 27.017999999999997 - type: mrr_at_5 value: 27.813 - type: ndcg_at_1 value: 22.366 - type: ndcg_at_10 value: 30.898999999999997 - type: ndcg_at_100 value: 36.176 - type: ndcg_at_1000 value: 39.036 - type: ndcg_at_3 value: 26.932000000000002 - type: ndcg_at_5 value: 28.416999999999998 - type: precision_at_1 value: 22.366 - type: precision_at_10 value: 4.824 - type: precision_at_100 value: 0.804 - type: precision_at_1000 value: 0.116 - type: precision_at_3 value: 11.459999999999999 - type: precision_at_5 value: 7.8740000000000006 - type: recall_at_1 value: 20.283 - type: recall_at_10 value: 41.559000000000005 - type: recall_at_100 value: 65.051 - type: recall_at_1000 value: 86.47200000000001 - type: recall_at_3 value: 30.524 - type: recall_at_5 value: 34.11 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 11.326 - type: map_at_10 value: 19.357 - type: map_at_100 value: 21.014 - type: map_at_1000 value: 21.188000000000002 - type: map_at_3 value: 16.305 - type: map_at_5 value: 17.886 - type: mrr_at_1 value: 24.820999999999998 - type: mrr_at_10 value: 36.150999999999996 - type: mrr_at_100 value: 37.080999999999996 - type: mrr_at_1000 value: 37.123 - type: mrr_at_3 value: 32.952999999999996 - type: mrr_at_5 value: 34.917 - type: ndcg_at_1 value: 24.820999999999998 - type: ndcg_at_10 value: 27.131 - type: ndcg_at_100 value: 33.841 - type: ndcg_at_1000 value: 37.159 - type: ndcg_at_3 value: 22.311 - type: ndcg_at_5 value: 24.026 - type: precision_at_1 value: 24.820999999999998 - type: precision_at_10 value: 8.450000000000001 - type: precision_at_100 value: 1.557 - type: precision_at_1000 value: 0.218 - type: precision_at_3 value: 16.612 - type: precision_at_5 value: 12.808 - type: recall_at_1 value: 11.326 - type: recall_at_10 value: 32.548 - type: recall_at_100 value: 55.803000000000004 - type: recall_at_1000 value: 74.636 - type: recall_at_3 value: 20.549 - type: recall_at_5 value: 25.514 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 7.481 - type: map_at_10 value: 15.043999999999999 - type: map_at_100 value: 20.194000000000003 - type: map_at_1000 value: 21.423000000000002 - type: map_at_3 value: 11.238 - type: map_at_5 value: 12.828999999999999 - type: mrr_at_1 value: 54.50000000000001 - type: mrr_at_10 value: 64.713 - type: mrr_at_100 value: 65.216 - type: mrr_at_1000 value: 65.23 - type: mrr_at_3 value: 62.74999999999999 - type: mrr_at_5 value: 63.87500000000001 - type: ndcg_at_1 value: 43.375 - type: ndcg_at_10 value: 32.631 - type: ndcg_at_100 value: 36.338 - type: ndcg_at_1000 value: 43.541000000000004 - type: ndcg_at_3 value: 36.746 - type: ndcg_at_5 value: 34.419 - type: precision_at_1 value: 54.50000000000001 - type: precision_at_10 value: 24.825 - type: precision_at_100 value: 7.698 - type: precision_at_1000 value: 1.657 - type: precision_at_3 value: 38.917 - type: precision_at_5 value: 32.35 - type: recall_at_1 value: 7.481 - type: recall_at_10 value: 20.341 - type: recall_at_100 value: 41.778 - type: recall_at_1000 value: 64.82 - type: recall_at_3 value: 12.748000000000001 - type: recall_at_5 value: 15.507000000000001 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 46.580000000000005 - type: f1 value: 41.5149462395095 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 61.683 - type: map_at_10 value: 73.071 - type: map_at_100 value: 73.327 - type: map_at_1000 value: 73.341 - type: map_at_3 value: 71.446 - type: map_at_5 value: 72.557 - type: mrr_at_1 value: 66.44200000000001 - type: mrr_at_10 value: 77.725 - type: mrr_at_100 value: 77.89399999999999 - type: mrr_at_1000 value: 77.898 - type: mrr_at_3 value: 76.283 - type: mrr_at_5 value: 77.29700000000001 - type: ndcg_at_1 value: 66.44200000000001 - type: ndcg_at_10 value: 78.43 - type: ndcg_at_100 value: 79.462 - type: ndcg_at_1000 value: 79.754 - type: ndcg_at_3 value: 75.53800000000001 - type: ndcg_at_5 value: 77.332 - type: precision_at_1 value: 66.44200000000001 - type: precision_at_10 value: 9.878 - type: precision_at_100 value: 1.051 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 29.878 - type: precision_at_5 value: 18.953 - type: recall_at_1 value: 61.683 - type: recall_at_10 value: 90.259 - type: recall_at_100 value: 94.633 - type: recall_at_1000 value: 96.60499999999999 - type: recall_at_3 value: 82.502 - type: recall_at_5 value: 86.978 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 17.724 - type: map_at_10 value: 29.487999999999996 - type: map_at_100 value: 31.243 - type: map_at_1000 value: 31.419999999999998 - type: map_at_3 value: 25.612000000000002 - type: map_at_5 value: 27.859 - type: mrr_at_1 value: 35.802 - type: mrr_at_10 value: 44.684000000000005 - type: mrr_at_100 value: 45.578 - type: mrr_at_1000 value: 45.621 - type: mrr_at_3 value: 42.361 - type: mrr_at_5 value: 43.85 - type: ndcg_at_1 value: 35.802 - type: ndcg_at_10 value: 37.009 - type: ndcg_at_100 value: 43.903 - type: ndcg_at_1000 value: 47.019 - type: ndcg_at_3 value: 33.634 - type: ndcg_at_5 value: 34.965 - type: precision_at_1 value: 35.802 - type: precision_at_10 value: 10.386 - type: precision_at_100 value: 1.7309999999999999 - type: precision_at_1000 value: 0.231 - type: precision_at_3 value: 22.84 - type: precision_at_5 value: 17.037 - type: recall_at_1 value: 17.724 - type: recall_at_10 value: 43.708000000000006 - type: recall_at_100 value: 69.902 - type: recall_at_1000 value: 88.51 - type: recall_at_3 value: 30.740000000000002 - type: recall_at_5 value: 36.742000000000004 - task: type: Clustering dataset: name: MTEB FloresClusteringS2S type: jinaai/flores_clustering config: default split: test revision: 480b580487f53a46f881354a8348335d4edbb2de metrics: - type: v_measure value: 39.79120149869612 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 34.801 - type: map_at_10 value: 50.42100000000001 - type: map_at_100 value: 51.254 - type: map_at_1000 value: 51.327999999999996 - type: map_at_3 value: 47.56 - type: map_at_5 value: 49.379 - type: mrr_at_1 value: 69.602 - type: mrr_at_10 value: 76.385 - type: mrr_at_100 value: 76.668 - type: mrr_at_1000 value: 76.683 - type: mrr_at_3 value: 75.102 - type: mrr_at_5 value: 75.949 - type: ndcg_at_1 value: 69.602 - type: ndcg_at_10 value: 59.476 - type: ndcg_at_100 value: 62.527 - type: ndcg_at_1000 value: 64.043 - type: ndcg_at_3 value: 55.155 - type: ndcg_at_5 value: 57.623000000000005 - type: precision_at_1 value: 69.602 - type: precision_at_10 value: 12.292 - type: precision_at_100 value: 1.467 - type: precision_at_1000 value: 0.167 - type: precision_at_3 value: 34.634 - type: precision_at_5 value: 22.728 - type: recall_at_1 value: 34.801 - type: recall_at_10 value: 61.458 - type: recall_at_100 value: 73.363 - type: recall_at_1000 value: 83.43 - type: recall_at_3 value: 51.951 - type: recall_at_5 value: 56.82000000000001 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 67.46079999999999 - type: ap value: 61.81278199159353 - type: f1 value: 67.26505019954826 - task: type: Reranking dataset: name: MTEB MIRACL type: jinaai/miracl config: default split: test revision: d28a029f35c4ff7f616df47b0edf54e6882395e6 metrics: - type: map value: 73.90464144118539 - type: mrr value: 82.44674693216022 - task: type: Retrieval dataset: name: MTEB MIRACLRetrieval type: jinaai/miracl config: default split: test revision: None metrics: - type: map_at_1 value: 21.299 - type: map_at_10 value: 70.547 - type: map_at_100 value: 72.394 - type: map_at_1000 value: 72.39999999999999 - type: map_at_3 value: 41.317 - type: map_at_5 value: 53.756 - type: mrr_at_1 value: 72.84 - type: mrr_at_10 value: 82.466 - type: mrr_at_100 value: 82.52199999999999 - type: mrr_at_1000 value: 82.52199999999999 - type: mrr_at_3 value: 80.607 - type: mrr_at_5 value: 82.065 - type: ndcg_at_1 value: 72.994 - type: ndcg_at_10 value: 80.89 - type: ndcg_at_100 value: 83.30199999999999 - type: ndcg_at_1000 value: 83.337 - type: ndcg_at_3 value: 70.357 - type: ndcg_at_5 value: 72.529 - type: precision_at_1 value: 72.994 - type: precision_at_10 value: 43.056 - type: precision_at_100 value: 4.603 - type: precision_at_1000 value: 0.461 - type: precision_at_3 value: 61.626000000000005 - type: precision_at_5 value: 55.525000000000006 - type: recall_at_1 value: 21.299 - type: recall_at_10 value: 93.903 - type: recall_at_100 value: 99.86699999999999 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 46.653 - type: recall_at_5 value: 65.72200000000001 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 90.37163702690378 - type: f1 value: 90.18615216514222 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (es) type: mteb/mtop_domain config: es split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.88992661774515 - type: f1 value: 89.3738963046966 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 71.97218422252622 - type: f1 value: 54.03096570916335 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (es) type: mteb/mtop_intent config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 68.75917278185457 - type: f1 value: 49.144083814705844 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.75991930060525 - type: f1 value: 69.37993796176502 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (es) type: mteb/amazon_massive_intent config: es split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.93006052454606 - type: f1 value: 66.04029135274683 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.81977135171486 - type: f1 value: 74.10477122507747 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (es) type: mteb/amazon_massive_scenario config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.23402824478816 - type: f1 value: 71.75572665880296 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 32.189750849969215 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 28.78357393555938 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.605612998328358 - type: mrr value: 31.595529205695833 - task: type: Retrieval dataset: name: MTEB MintakaESRetrieval type: jinaai/mintakaqa config: default split: test revision: None metrics: - type: map_at_1 value: 16.213 - type: map_at_10 value: 24.079 - type: map_at_100 value: 25.039 - type: map_at_1000 value: 25.142999999999997 - type: map_at_3 value: 21.823 - type: map_at_5 value: 23.069 - type: mrr_at_1 value: 16.213 - type: mrr_at_10 value: 24.079 - type: mrr_at_100 value: 25.039 - type: mrr_at_1000 value: 25.142999999999997 - type: mrr_at_3 value: 21.823 - type: mrr_at_5 value: 23.069 - type: ndcg_at_1 value: 16.213 - type: ndcg_at_10 value: 28.315 - type: ndcg_at_100 value: 33.475 - type: ndcg_at_1000 value: 36.838 - type: ndcg_at_3 value: 23.627000000000002 - type: ndcg_at_5 value: 25.879 - type: precision_at_1 value: 16.213 - type: precision_at_10 value: 4.183 - type: precision_at_100 value: 0.6709999999999999 - type: precision_at_1000 value: 0.095 - type: precision_at_3 value: 9.612 - type: precision_at_5 value: 6.865 - type: recall_at_1 value: 16.213 - type: recall_at_10 value: 41.832 - type: recall_at_100 value: 67.12 - type: recall_at_1000 value: 94.843 - type: recall_at_3 value: 28.837000000000003 - type: recall_at_5 value: 34.323 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 4.692 - type: map_at_10 value: 10.783 - type: map_at_100 value: 13.447999999999999 - type: map_at_1000 value: 14.756 - type: map_at_3 value: 7.646 - type: map_at_5 value: 9.311 - type: mrr_at_1 value: 42.415000000000006 - type: mrr_at_10 value: 50.471 - type: mrr_at_100 value: 51.251999999999995 - type: mrr_at_1000 value: 51.292 - type: mrr_at_3 value: 48.4 - type: mrr_at_5 value: 49.809 - type: ndcg_at_1 value: 40.867 - type: ndcg_at_10 value: 30.303 - type: ndcg_at_100 value: 27.915 - type: ndcg_at_1000 value: 36.734 - type: ndcg_at_3 value: 35.74 - type: ndcg_at_5 value: 33.938 - type: precision_at_1 value: 42.415000000000006 - type: precision_at_10 value: 22.105 - type: precision_at_100 value: 7.173 - type: precision_at_1000 value: 2.007 - type: precision_at_3 value: 33.437 - type: precision_at_5 value: 29.349999999999998 - type: recall_at_1 value: 4.692 - type: recall_at_10 value: 14.798 - type: recall_at_100 value: 28.948 - type: recall_at_1000 value: 59.939 - type: recall_at_3 value: 8.562 - type: recall_at_5 value: 11.818 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 27.572999999999997 - type: map_at_10 value: 42.754 - type: map_at_100 value: 43.8 - type: map_at_1000 value: 43.838 - type: map_at_3 value: 38.157000000000004 - type: map_at_5 value: 40.9 - type: mrr_at_1 value: 31.373 - type: mrr_at_10 value: 45.321 - type: mrr_at_100 value: 46.109 - type: mrr_at_1000 value: 46.135 - type: mrr_at_3 value: 41.483 - type: mrr_at_5 value: 43.76 - type: ndcg_at_1 value: 31.373 - type: ndcg_at_10 value: 50.7 - type: ndcg_at_100 value: 55.103 - type: ndcg_at_1000 value: 55.955999999999996 - type: ndcg_at_3 value: 42.069 - type: ndcg_at_5 value: 46.595 - type: precision_at_1 value: 31.373 - type: precision_at_10 value: 8.601 - type: precision_at_100 value: 1.11 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 19.399 - type: precision_at_5 value: 14.224 - type: recall_at_1 value: 27.572999999999997 - type: recall_at_10 value: 72.465 - type: recall_at_100 value: 91.474 - type: recall_at_1000 value: 97.78099999999999 - type: recall_at_3 value: 50.087 - type: recall_at_5 value: 60.516000000000005 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.525 - type: map_at_10 value: 84.417 - type: map_at_100 value: 85.07000000000001 - type: map_at_1000 value: 85.085 - type: map_at_3 value: 81.45 - type: map_at_5 value: 83.317 - type: mrr_at_1 value: 81.17999999999999 - type: mrr_at_10 value: 87.34100000000001 - type: mrr_at_100 value: 87.461 - type: mrr_at_1000 value: 87.46199999999999 - type: mrr_at_3 value: 86.372 - type: mrr_at_5 value: 87.046 - type: ndcg_at_1 value: 81.17999999999999 - type: ndcg_at_10 value: 88.144 - type: ndcg_at_100 value: 89.424 - type: ndcg_at_1000 value: 89.517 - type: ndcg_at_3 value: 85.282 - type: ndcg_at_5 value: 86.874 - type: precision_at_1 value: 81.17999999999999 - type: precision_at_10 value: 13.385 - type: precision_at_100 value: 1.533 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.29 - type: precision_at_5 value: 24.546 - type: recall_at_1 value: 70.525 - type: recall_at_10 value: 95.22500000000001 - type: recall_at_100 value: 99.572 - type: recall_at_1000 value: 99.98899999999999 - type: recall_at_3 value: 87.035 - type: recall_at_5 value: 91.526 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 48.284384328108736 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 56.02508021518392 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.023000000000001 - type: map_at_10 value: 10.046 - type: map_at_100 value: 11.802999999999999 - type: map_at_1000 value: 12.074 - type: map_at_3 value: 7.071 - type: map_at_5 value: 8.556 - type: mrr_at_1 value: 19.8 - type: mrr_at_10 value: 30.105999999999998 - type: mrr_at_100 value: 31.16 - type: mrr_at_1000 value: 31.224 - type: mrr_at_3 value: 26.633000000000003 - type: mrr_at_5 value: 28.768 - type: ndcg_at_1 value: 19.8 - type: ndcg_at_10 value: 17.358 - type: ndcg_at_100 value: 24.566 - type: ndcg_at_1000 value: 29.653000000000002 - type: ndcg_at_3 value: 16.052 - type: ndcg_at_5 value: 14.325 - type: precision_at_1 value: 19.8 - type: precision_at_10 value: 9.07 - type: precision_at_100 value: 1.955 - type: precision_at_1000 value: 0.318 - type: precision_at_3 value: 14.933 - type: precision_at_5 value: 12.68 - type: recall_at_1 value: 4.023000000000001 - type: recall_at_10 value: 18.398 - type: recall_at_100 value: 39.683 - type: recall_at_1000 value: 64.625 - type: recall_at_3 value: 9.113 - type: recall_at_5 value: 12.873000000000001 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 87.90508618312852 - type: cos_sim_spearman value: 83.01323463129205 - type: euclidean_pearson value: 84.35845059002891 - type: euclidean_spearman value: 82.85508559018527 - type: manhattan_pearson value: 84.3682368950498 - type: manhattan_spearman value: 82.8619728517302 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 89.28294535873366 - type: cos_sim_spearman value: 81.61879268131732 - type: euclidean_pearson value: 85.99053604863724 - type: euclidean_spearman value: 80.95176684739084 - type: manhattan_pearson value: 85.98054086663903 - type: manhattan_spearman value: 80.9911070430335 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 86.15898098455258 - type: cos_sim_spearman value: 86.8247985072307 - type: euclidean_pearson value: 86.25342429918649 - type: euclidean_spearman value: 87.13468603023252 - type: manhattan_pearson value: 86.2006134067688 - type: manhattan_spearman value: 87.06135811996896 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 85.57403998481877 - type: cos_sim_spearman value: 83.55947075172618 - type: euclidean_pearson value: 84.97097562965358 - type: euclidean_spearman value: 83.6287075601467 - type: manhattan_pearson value: 84.87092197104133 - type: manhattan_spearman value: 83.53783891641335 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.14632780204231 - type: cos_sim_spearman value: 88.74903634923868 - type: euclidean_pearson value: 88.03922995855112 - type: euclidean_spearman value: 88.72852190525855 - type: manhattan_pearson value: 87.9694791024271 - type: manhattan_spearman value: 88.66461452107418 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.75989818558652 - type: cos_sim_spearman value: 86.03107893122942 - type: euclidean_pearson value: 85.21908960133018 - type: euclidean_spearman value: 85.93012720153482 - type: manhattan_pearson value: 85.1969170195502 - type: manhattan_spearman value: 85.8975254197784 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 89.16803898789955 - type: cos_sim_spearman value: 88.56139047950525 - type: euclidean_pearson value: 88.09685325747859 - type: euclidean_spearman value: 88.0457609458947 - type: manhattan_pearson value: 88.07054413001431 - type: manhattan_spearman value: 88.10784098889314 - task: type: STS dataset: name: MTEB STS17 (es-en) type: mteb/sts17-crosslingual-sts config: es-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 86.7160384474547 - type: cos_sim_spearman value: 86.4899235500562 - type: euclidean_pearson value: 85.90854477703468 - type: euclidean_spearman value: 86.16085009124498 - type: manhattan_pearson value: 85.9249735317884 - type: manhattan_spearman value: 86.25038421339116 - task: type: STS dataset: name: MTEB STS17 (es-es) type: mteb/sts17-crosslingual-sts config: es-es split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 89.37914622360788 - type: cos_sim_spearman value: 88.24619159322809 - type: euclidean_pearson value: 89.00538382632769 - type: euclidean_spearman value: 88.44675863524736 - type: manhattan_pearson value: 88.97372120683606 - type: manhattan_spearman value: 88.33509324222129 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 66.22181360203069 - type: cos_sim_spearman value: 65.6218291833768 - type: euclidean_pearson value: 67.14543788822508 - type: euclidean_spearman value: 65.21269939987857 - type: manhattan_pearson value: 67.03304607195636 - type: manhattan_spearman value: 65.18885316423805 - task: type: STS dataset: name: MTEB STS22 (es) type: mteb/sts22-crosslingual-sts config: es split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 65.71694059677084 - type: cos_sim_spearman value: 67.96591844540954 - type: euclidean_pearson value: 65.6964079162296 - type: euclidean_spearman value: 67.53027948900173 - type: manhattan_pearson value: 65.93545097673741 - type: manhattan_spearman value: 67.7261811805062 - task: type: STS dataset: name: MTEB STS22 (es-en) type: mteb/sts22-crosslingual-sts config: es-en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 75.43544796375058 - type: cos_sim_spearman value: 78.80462701160789 - type: euclidean_pearson value: 76.19135575163138 - type: euclidean_spearman value: 78.4974732597096 - type: manhattan_pearson value: 76.3254742699264 - type: manhattan_spearman value: 78.51884307690416 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 87.46805293607684 - type: cos_sim_spearman value: 87.83792784689113 - type: euclidean_pearson value: 87.3872143683234 - type: euclidean_spearman value: 87.61611384542778 - type: manhattan_pearson value: 87.38542672601992 - type: manhattan_spearman value: 87.61423971087297 - task: type: STS dataset: name: MTEB STSES type: PlanTL-GOB-ES/sts-es config: default split: test revision: 0912bb6c9393c76d62a7c5ee81c4c817ff47c9f4 metrics: - type: cos_sim_pearson value: 82.55286866116202 - type: cos_sim_spearman value: 80.22150503320272 - type: euclidean_pearson value: 83.27223445187087 - type: euclidean_spearman value: 80.59078590992925 - type: manhattan_pearson value: 83.23095887013197 - type: manhattan_spearman value: 80.87994285189795 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 79.29717302265792 - type: mrr value: 94.02156304117088 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 49.9 - type: map_at_10 value: 58.626 - type: map_at_100 value: 59.519999999999996 - type: map_at_1000 value: 59.55200000000001 - type: map_at_3 value: 56.232000000000006 - type: map_at_5 value: 57.833 - type: mrr_at_1 value: 52.333 - type: mrr_at_10 value: 60.039 - type: mrr_at_100 value: 60.732 - type: mrr_at_1000 value: 60.75899999999999 - type: mrr_at_3 value: 58.278 - type: mrr_at_5 value: 59.428000000000004 - type: ndcg_at_1 value: 52.333 - type: ndcg_at_10 value: 62.67 - type: ndcg_at_100 value: 66.465 - type: ndcg_at_1000 value: 67.425 - type: ndcg_at_3 value: 58.711999999999996 - type: ndcg_at_5 value: 60.958999999999996 - type: precision_at_1 value: 52.333 - type: precision_at_10 value: 8.333 - type: precision_at_100 value: 1.027 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 22.778000000000002 - type: precision_at_5 value: 15.267 - type: recall_at_1 value: 49.9 - type: recall_at_10 value: 73.394 - type: recall_at_100 value: 90.43299999999999 - type: recall_at_1000 value: 98.167 - type: recall_at_3 value: 63.032999999999994 - type: recall_at_5 value: 68.444 - task: type: Clustering dataset: name: MTEB SpanishNewsClusteringP2P type: jinaai/spanish_news_clustering config: default split: test revision: b5edc3d3d7c12c7b9f883e9da50f6732f3624142 metrics: - type: v_measure value: 48.30543557796266 - task: type: Retrieval dataset: name: MTEB SpanishPassageRetrievalS2P type: jinaai/spanish_passage_retrieval config: default split: test revision: None metrics: - type: map_at_1 value: 14.443 - type: map_at_10 value: 28.736 - type: map_at_100 value: 34.514 - type: map_at_1000 value: 35.004000000000005 - type: map_at_3 value: 20.308 - type: map_at_5 value: 25.404 - type: mrr_at_1 value: 50.29900000000001 - type: mrr_at_10 value: 63.757 - type: mrr_at_100 value: 64.238 - type: mrr_at_1000 value: 64.24600000000001 - type: mrr_at_3 value: 59.480999999999995 - type: mrr_at_5 value: 62.924 - type: ndcg_at_1 value: 50.29900000000001 - type: ndcg_at_10 value: 42.126999999999995 - type: ndcg_at_100 value: 57.208000000000006 - type: ndcg_at_1000 value: 60.646 - type: ndcg_at_3 value: 38.722 - type: ndcg_at_5 value: 40.007999999999996 - type: precision_at_1 value: 50.29900000000001 - type: precision_at_10 value: 19.82 - type: precision_at_100 value: 4.82 - type: precision_at_1000 value: 0.5910000000000001 - type: precision_at_3 value: 31.537 - type: precision_at_5 value: 28.262999999999998 - type: recall_at_1 value: 14.443 - type: recall_at_10 value: 43.885999999999996 - type: recall_at_100 value: 85.231 - type: recall_at_1000 value: 99.07000000000001 - type: recall_at_3 value: 22.486 - type: recall_at_5 value: 33.035 - type: map_at_1 value: 15.578 - type: map_at_10 value: 52.214000000000006 - type: map_at_100 value: 64.791 - type: map_at_1000 value: 64.791 - type: map_at_3 value: 33.396 - type: map_at_5 value: 41.728 - type: mrr_at_1 value: 73.653 - type: mrr_at_10 value: 85.116 - type: mrr_at_100 value: 85.205 - type: mrr_at_1000 value: 85.205 - type: mrr_at_3 value: 84.631 - type: mrr_at_5 value: 85.05 - type: ndcg_at_1 value: 76.64699999999999 - type: ndcg_at_10 value: 70.38600000000001 - type: ndcg_at_100 value: 82.27600000000001 - type: ndcg_at_1000 value: 82.27600000000001 - type: ndcg_at_3 value: 70.422 - type: ndcg_at_5 value: 69.545 - type: precision_at_1 value: 76.64699999999999 - type: precision_at_10 value: 43.653 - type: precision_at_100 value: 7.718999999999999 - type: precision_at_1000 value: 0.772 - type: precision_at_3 value: 64.671 - type: precision_at_5 value: 56.766000000000005 - type: recall_at_1 value: 15.578 - type: recall_at_10 value: 67.459 - type: recall_at_100 value: 100.0 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 36.922 - type: recall_at_5 value: 49.424 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.81683168316832 - type: cos_sim_ap value: 95.61502659412484 - type: cos_sim_f1 value: 90.6813627254509 - type: cos_sim_precision value: 90.86345381526104 - type: cos_sim_recall value: 90.5 - type: dot_accuracy value: 99.8039603960396 - type: dot_ap value: 95.36783483182609 - type: dot_f1 value: 89.90825688073394 - type: dot_precision value: 91.68399168399168 - type: dot_recall value: 88.2 - type: euclidean_accuracy value: 99.81188118811882 - type: euclidean_ap value: 95.51583052324564 - type: euclidean_f1 value: 90.46214355948868 - type: euclidean_precision value: 88.97485493230174 - type: euclidean_recall value: 92.0 - type: manhattan_accuracy value: 99.8079207920792 - type: manhattan_ap value: 95.44030644653718 - type: manhattan_f1 value: 90.37698412698413 - type: manhattan_precision value: 89.66535433070865 - type: manhattan_recall value: 91.10000000000001 - type: max_accuracy value: 99.81683168316832 - type: max_ap value: 95.61502659412484 - type: max_f1 value: 90.6813627254509 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 55.39046705023096 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 33.57429225651293 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 50.17622570658746 - type: mrr value: 50.99844293778118 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.97416289382191 - type: cos_sim_spearman value: 29.871890597161432 - type: dot_pearson value: 28.768845892613644 - type: dot_spearman value: 28.872458999448686 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.22599999999999998 - type: map_at_10 value: 1.646 - type: map_at_100 value: 9.491 - type: map_at_1000 value: 23.75 - type: map_at_3 value: 0.588 - type: map_at_5 value: 0.9129999999999999 - type: mrr_at_1 value: 84.0 - type: mrr_at_10 value: 89.889 - type: mrr_at_100 value: 89.889 - type: mrr_at_1000 value: 89.889 - type: mrr_at_3 value: 89.667 - type: mrr_at_5 value: 89.667 - type: ndcg_at_1 value: 75.0 - type: ndcg_at_10 value: 67.368 - type: ndcg_at_100 value: 52.834 - type: ndcg_at_1000 value: 49.144 - type: ndcg_at_3 value: 72.866 - type: ndcg_at_5 value: 70.16 - type: precision_at_1 value: 84.0 - type: precision_at_10 value: 71.8 - type: precision_at_100 value: 54.04 - type: precision_at_1000 value: 21.709999999999997 - type: precision_at_3 value: 77.333 - type: precision_at_5 value: 74.0 - type: recall_at_1 value: 0.22599999999999998 - type: recall_at_10 value: 1.9029999999999998 - type: recall_at_100 value: 13.012 - type: recall_at_1000 value: 46.105000000000004 - type: recall_at_3 value: 0.63 - type: recall_at_5 value: 1.0030000000000001 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.5 - type: map_at_10 value: 8.193999999999999 - type: map_at_100 value: 14.01 - type: map_at_1000 value: 15.570999999999998 - type: map_at_3 value: 4.361000000000001 - type: map_at_5 value: 5.9270000000000005 - type: mrr_at_1 value: 16.326999999999998 - type: mrr_at_10 value: 33.326 - type: mrr_at_100 value: 34.592 - type: mrr_at_1000 value: 34.592 - type: mrr_at_3 value: 29.252 - type: mrr_at_5 value: 30.680000000000003 - type: ndcg_at_1 value: 15.306000000000001 - type: ndcg_at_10 value: 19.819 - type: ndcg_at_100 value: 33.428000000000004 - type: ndcg_at_1000 value: 45.024 - type: ndcg_at_3 value: 19.667 - type: ndcg_at_5 value: 19.625 - type: precision_at_1 value: 16.326999999999998 - type: precision_at_10 value: 18.367 - type: precision_at_100 value: 7.367 - type: precision_at_1000 value: 1.496 - type: precision_at_3 value: 23.128999999999998 - type: precision_at_5 value: 21.633 - type: recall_at_1 value: 1.5 - type: recall_at_10 value: 14.362 - type: recall_at_100 value: 45.842 - type: recall_at_1000 value: 80.42 - type: recall_at_3 value: 5.99 - type: recall_at_5 value: 8.701 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.04740000000001 - type: ap value: 13.58661943759992 - type: f1 value: 53.727487131754195 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.06395019807584 - type: f1 value: 61.36753664680866 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 40.19881263066229 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 85.19401561661799 - type: cos_sim_ap value: 71.62462506173092 - type: cos_sim_f1 value: 66.0641327225455 - type: cos_sim_precision value: 62.234662934453 - type: cos_sim_recall value: 70.3957783641161 - type: dot_accuracy value: 84.69333015437802 - type: dot_ap value: 69.83805526490895 - type: dot_f1 value: 64.85446235265817 - type: dot_precision value: 59.59328028293546 - type: dot_recall value: 71.13456464379946 - type: euclidean_accuracy value: 85.38475293556655 - type: euclidean_ap value: 72.05594596250286 - type: euclidean_f1 value: 66.53543307086615 - type: euclidean_precision value: 62.332872291378514 - type: euclidean_recall value: 71.34564643799473 - type: manhattan_accuracy value: 85.3907134767837 - type: manhattan_ap value: 72.04585410650152 - type: manhattan_f1 value: 66.57132642116554 - type: manhattan_precision value: 60.704194740273856 - type: manhattan_recall value: 73.6939313984169 - type: max_accuracy value: 85.3907134767837 - type: max_ap value: 72.05594596250286 - type: max_f1 value: 66.57132642116554 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.30414871735165 - type: cos_sim_ap value: 86.4398673359918 - type: cos_sim_f1 value: 78.9243598692186 - type: cos_sim_precision value: 75.47249350101876 - type: cos_sim_recall value: 82.7071142593163 - type: dot_accuracy value: 89.26145845461248 - type: dot_ap value: 86.32172118414802 - type: dot_f1 value: 78.8277467755645 - type: dot_precision value: 75.79418662497335 - type: dot_recall value: 82.11425931629196 - type: euclidean_accuracy value: 89.24205378973105 - type: euclidean_ap value: 86.23988673522649 - type: euclidean_f1 value: 78.67984857951413 - type: euclidean_precision value: 75.2689684269742 - type: euclidean_recall value: 82.41453649522637 - type: manhattan_accuracy value: 89.18189932859859 - type: manhattan_ap value: 86.21003833972824 - type: manhattan_f1 value: 78.70972564850115 - type: manhattan_precision value: 76.485544094145 - type: manhattan_recall value: 81.0671388974438 - type: max_accuracy value: 89.30414871735165 - type: max_ap value: 86.4398673359918 - type: max_f1 value: 78.9243598692186 - task: type: Clustering dataset: name: MTEB WikiCitiesClustering type: jinaai/cities_wiki_clustering config: default split: test revision: ddc9ee9242fa65332597f70e967ecc38b9d734fa metrics: - type: v_measure value: 73.254610626148 - task: type: Retrieval dataset: name: MTEB XMarketES type: jinaai/xmarket_ml config: default split: test revision: 705db869e8107dfe6e34b832af90446e77d813e3 metrics: - type: map_at_1 value: 5.506 - type: map_at_10 value: 11.546 - type: map_at_100 value: 14.299999999999999 - type: map_at_1000 value: 15.146999999999998 - type: map_at_3 value: 8.748000000000001 - type: map_at_5 value: 10.036000000000001 - type: mrr_at_1 value: 17.902 - type: mrr_at_10 value: 25.698999999999998 - type: mrr_at_100 value: 26.634 - type: mrr_at_1000 value: 26.704 - type: mrr_at_3 value: 23.244999999999997 - type: mrr_at_5 value: 24.555 - type: ndcg_at_1 value: 17.902 - type: ndcg_at_10 value: 19.714000000000002 - type: ndcg_at_100 value: 25.363000000000003 - type: ndcg_at_1000 value: 30.903999999999996 - type: ndcg_at_3 value: 17.884 - type: ndcg_at_5 value: 18.462 - type: precision_at_1 value: 17.902 - type: precision_at_10 value: 10.467 - type: precision_at_100 value: 3.9699999999999998 - type: precision_at_1000 value: 1.1320000000000001 - type: precision_at_3 value: 14.387 - type: precision_at_5 value: 12.727 - type: recall_at_1 value: 5.506 - type: recall_at_10 value: 19.997999999999998 - type: recall_at_100 value: 42.947 - type: recall_at_1000 value: 67.333 - type: recall_at_3 value: 11.158 - type: recall_at_5 value: 14.577000000000002 - task: type: Retrieval dataset: name: MTEB XPQAESRetrieval type: jinaai/xpqa config: default split: test revision: None metrics: - type: map_at_1 value: 32.53 - type: map_at_10 value: 58.68600000000001 - type: map_at_100 value: 60.45399999999999 - type: map_at_1000 value: 60.51499999999999 - type: map_at_3 value: 50.356 - type: map_at_5 value: 55.98 - type: mrr_at_1 value: 61.791 - type: mrr_at_10 value: 68.952 - type: mrr_at_100 value: 69.524 - type: mrr_at_1000 value: 69.538 - type: mrr_at_3 value: 67.087 - type: mrr_at_5 value: 68.052 - type: ndcg_at_1 value: 61.791 - type: ndcg_at_10 value: 65.359 - type: ndcg_at_100 value: 70.95700000000001 - type: ndcg_at_1000 value: 71.881 - type: ndcg_at_3 value: 59.999 - type: ndcg_at_5 value: 61.316 - type: precision_at_1 value: 61.791 - type: precision_at_10 value: 18.184 - type: precision_at_100 value: 2.317 - type: precision_at_1000 value: 0.245 - type: precision_at_3 value: 42.203 - type: precision_at_5 value: 31.374999999999996 - type: recall_at_1 value: 32.53 - type: recall_at_10 value: 73.098 - type: recall_at_100 value: 94.029 - type: recall_at_1000 value: 99.842 - type: recall_at_3 value: 54.525 - type: recall_at_5 value: 63.796 --- # AndreasX/jina-embeddings-v2-base-es-Q2_K-GGUF This model was converted to GGUF format from [`jinaai/jina-embeddings-v2-base-es`](https://huggingface.co/jinaai/jina-embeddings-v2-base-es) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/jinaai/jina-embeddings-v2-base-es) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo AndreasX/jina-embeddings-v2-base-es-Q2_K-GGUF --hf-file jina-embeddings-v2-base-es-q2_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo AndreasX/jina-embeddings-v2-base-es-Q2_K-GGUF --hf-file jina-embeddings-v2-base-es-q2_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo AndreasX/jina-embeddings-v2-base-es-Q2_K-GGUF --hf-file jina-embeddings-v2-base-es-q2_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo AndreasX/jina-embeddings-v2-base-es-Q2_K-GGUF --hf-file jina-embeddings-v2-base-es-q2_k.gguf -c 2048 ```
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
OrcaDB/gte-base-en-v1.5
OrcaDB
sentence-similarity
[ "transformers", "safetensors", "new", "feature-extraction", "sentence-transformers", "gte", "mteb", "transformers.js", "sentence-similarity", "custom_code", "en", "arxiv:2407.19669", "arxiv:2308.03281", "license:apache-2.0", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,731
1,731
62,202
0
--- language: - en library_name: transformers license: apache-2.0 tags: - sentence-transformers - gte - mteb - transformers.js - sentence-similarity model-index: - name: gte-base-en-v1.5 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 74.7910447761194 - type: ap value: 37.053785713650626 - type: f1 value: 68.51101510998551 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 93.016875 - type: ap value: 89.17750268426342 - type: f1 value: 92.9970977240524 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 53.312000000000005 - type: f1 value: 52.98175784163017 - task: type: Retrieval dataset: name: MTEB ArguAna type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 38.193 - type: map_at_10 value: 54.848 - type: map_at_100 value: 55.388000000000005 - type: map_at_1000 value: 55.388999999999996 - type: map_at_3 value: 50.427 - type: map_at_5 value: 53.105000000000004 - type: mrr_at_1 value: 39.047 - type: mrr_at_10 value: 55.153 - type: mrr_at_100 value: 55.686 - type: mrr_at_1000 value: 55.688 - type: mrr_at_3 value: 50.676 - type: mrr_at_5 value: 53.417 - type: ndcg_at_1 value: 38.193 - type: ndcg_at_10 value: 63.486 - type: ndcg_at_100 value: 65.58 - type: ndcg_at_1000 value: 65.61 - type: ndcg_at_3 value: 54.494 - type: ndcg_at_5 value: 59.339 - type: precision_at_1 value: 38.193 - type: precision_at_10 value: 9.075 - type: precision_at_100 value: 0.9939999999999999 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.096 - type: precision_at_5 value: 15.619 - type: recall_at_1 value: 38.193 - type: recall_at_10 value: 90.754 - type: recall_at_100 value: 99.431 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 66.28699999999999 - type: recall_at_5 value: 78.094 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 47.508221208908964 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 42.04668382560096 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 61.828759903716815 - type: mrr value: 74.37343358395991 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 85.03673698773017 - type: cos_sim_spearman value: 83.6470866785058 - type: euclidean_pearson value: 82.64048673096565 - type: euclidean_spearman value: 83.63142367101115 - type: manhattan_pearson value: 82.71493099760228 - type: manhattan_spearman value: 83.60491704294326 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 86.73376623376623 - type: f1 value: 86.70294049278262 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 40.31923804167062 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 37.552547125348454 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: mteb/cqadupstack-android config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: map_at_1 value: 30.567 - type: map_at_10 value: 41.269 - type: map_at_100 value: 42.689 - type: map_at_1000 value: 42.84 - type: map_at_3 value: 37.567 - type: map_at_5 value: 39.706 - type: mrr_at_1 value: 37.053000000000004 - type: mrr_at_10 value: 46.900999999999996 - type: mrr_at_100 value: 47.662 - type: mrr_at_1000 value: 47.713 - type: mrr_at_3 value: 43.801 - type: mrr_at_5 value: 45.689 - type: ndcg_at_1 value: 37.053000000000004 - type: ndcg_at_10 value: 47.73 - type: ndcg_at_100 value: 53.128 - type: ndcg_at_1000 value: 55.300000000000004 - type: ndcg_at_3 value: 42.046 - type: ndcg_at_5 value: 44.782 - type: precision_at_1 value: 37.053000000000004 - type: precision_at_10 value: 9.142 - type: precision_at_100 value: 1.485 - type: precision_at_1000 value: 0.197 - type: precision_at_3 value: 20.076 - type: precision_at_5 value: 14.535 - type: recall_at_1 value: 30.567 - type: recall_at_10 value: 60.602999999999994 - type: recall_at_100 value: 83.22800000000001 - type: recall_at_1000 value: 96.696 - type: recall_at_3 value: 44.336999999999996 - type: recall_at_5 value: 51.949 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval type: mteb/cqadupstack-english config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: map_at_1 value: 28.538000000000004 - type: map_at_10 value: 38.757999999999996 - type: map_at_100 value: 40.129 - type: map_at_1000 value: 40.262 - type: map_at_3 value: 35.866 - type: map_at_5 value: 37.417 - type: mrr_at_1 value: 36.051 - type: mrr_at_10 value: 44.868 - type: mrr_at_100 value: 45.568999999999996 - type: mrr_at_1000 value: 45.615 - type: mrr_at_3 value: 42.558 - type: mrr_at_5 value: 43.883 - type: ndcg_at_1 value: 36.051 - type: ndcg_at_10 value: 44.584 - type: ndcg_at_100 value: 49.356 - type: ndcg_at_1000 value: 51.39 - type: ndcg_at_3 value: 40.389 - type: ndcg_at_5 value: 42.14 - type: precision_at_1 value: 36.051 - type: precision_at_10 value: 8.446 - type: precision_at_100 value: 1.411 - type: precision_at_1000 value: 0.19 - type: precision_at_3 value: 19.639 - type: precision_at_5 value: 13.796 - type: recall_at_1 value: 28.538000000000004 - type: recall_at_10 value: 54.99000000000001 - type: recall_at_100 value: 75.098 - type: recall_at_1000 value: 87.848 - type: recall_at_3 value: 42.236000000000004 - type: recall_at_5 value: 47.377 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval type: mteb/cqadupstack-gaming config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 37.188 - type: map_at_10 value: 50.861000000000004 - type: map_at_100 value: 51.917 - type: map_at_1000 value: 51.964999999999996 - type: map_at_3 value: 47.144000000000005 - type: map_at_5 value: 49.417 - type: mrr_at_1 value: 42.571 - type: mrr_at_10 value: 54.086999999999996 - type: mrr_at_100 value: 54.739000000000004 - type: mrr_at_1000 value: 54.762 - type: mrr_at_3 value: 51.285000000000004 - type: mrr_at_5 value: 53.0 - type: ndcg_at_1 value: 42.571 - type: ndcg_at_10 value: 57.282 - type: ndcg_at_100 value: 61.477000000000004 - type: ndcg_at_1000 value: 62.426 - type: ndcg_at_3 value: 51.0 - type: ndcg_at_5 value: 54.346000000000004 - type: precision_at_1 value: 42.571 - type: precision_at_10 value: 9.467 - type: precision_at_100 value: 1.2550000000000001 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 23.114 - type: precision_at_5 value: 16.250999999999998 - type: recall_at_1 value: 37.188 - type: recall_at_10 value: 73.068 - type: recall_at_100 value: 91.203 - type: recall_at_1000 value: 97.916 - type: recall_at_3 value: 56.552 - type: recall_at_5 value: 64.567 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval type: mteb/cqadupstack-gis config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: map_at_1 value: 25.041000000000004 - type: map_at_10 value: 33.86 - type: map_at_100 value: 34.988 - type: map_at_1000 value: 35.064 - type: map_at_3 value: 31.049 - type: map_at_5 value: 32.845 - type: mrr_at_1 value: 26.893 - type: mrr_at_10 value: 35.594 - type: mrr_at_100 value: 36.617 - type: mrr_at_1000 value: 36.671 - type: mrr_at_3 value: 33.051 - type: mrr_at_5 value: 34.61 - type: ndcg_at_1 value: 26.893 - type: ndcg_at_10 value: 38.674 - type: ndcg_at_100 value: 44.178 - type: ndcg_at_1000 value: 46.089999999999996 - type: ndcg_at_3 value: 33.485 - type: ndcg_at_5 value: 36.402 - type: precision_at_1 value: 26.893 - type: precision_at_10 value: 5.989 - type: precision_at_100 value: 0.918 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 14.2 - type: precision_at_5 value: 10.26 - type: recall_at_1 value: 25.041000000000004 - type: recall_at_10 value: 51.666000000000004 - type: recall_at_100 value: 76.896 - type: recall_at_1000 value: 91.243 - type: recall_at_3 value: 38.035999999999994 - type: recall_at_5 value: 44.999 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval type: mteb/cqadupstack-mathematica config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: map_at_1 value: 15.909999999999998 - type: map_at_10 value: 23.901 - type: map_at_100 value: 25.165 - type: map_at_1000 value: 25.291000000000004 - type: map_at_3 value: 21.356 - type: map_at_5 value: 22.816 - type: mrr_at_1 value: 20.025000000000002 - type: mrr_at_10 value: 28.382 - type: mrr_at_100 value: 29.465000000000003 - type: mrr_at_1000 value: 29.535 - type: mrr_at_3 value: 25.933 - type: mrr_at_5 value: 27.332 - type: ndcg_at_1 value: 20.025000000000002 - type: ndcg_at_10 value: 29.099000000000004 - type: ndcg_at_100 value: 35.127 - type: ndcg_at_1000 value: 38.096000000000004 - type: ndcg_at_3 value: 24.464 - type: ndcg_at_5 value: 26.709 - type: precision_at_1 value: 20.025000000000002 - type: precision_at_10 value: 5.398 - type: precision_at_100 value: 0.9690000000000001 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 11.774 - type: precision_at_5 value: 8.632 - type: recall_at_1 value: 15.909999999999998 - type: recall_at_10 value: 40.672000000000004 - type: recall_at_100 value: 66.855 - type: recall_at_1000 value: 87.922 - type: recall_at_3 value: 28.069 - type: recall_at_5 value: 33.812 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval type: mteb/cqadupstack-physics config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: map_at_1 value: 30.175 - type: map_at_10 value: 41.36 - type: map_at_100 value: 42.701 - type: map_at_1000 value: 42.817 - type: map_at_3 value: 37.931 - type: map_at_5 value: 39.943 - type: mrr_at_1 value: 35.611 - type: mrr_at_10 value: 46.346 - type: mrr_at_100 value: 47.160000000000004 - type: mrr_at_1000 value: 47.203 - type: mrr_at_3 value: 43.712 - type: mrr_at_5 value: 45.367000000000004 - type: ndcg_at_1 value: 35.611 - type: ndcg_at_10 value: 47.532000000000004 - type: ndcg_at_100 value: 53.003 - type: ndcg_at_1000 value: 55.007 - type: ndcg_at_3 value: 42.043 - type: ndcg_at_5 value: 44.86 - type: precision_at_1 value: 35.611 - type: precision_at_10 value: 8.624 - type: precision_at_100 value: 1.332 - type: precision_at_1000 value: 0.169 - type: precision_at_3 value: 20.083000000000002 - type: precision_at_5 value: 14.437 - type: recall_at_1 value: 30.175 - type: recall_at_10 value: 60.5 - type: recall_at_100 value: 83.399 - type: recall_at_1000 value: 96.255 - type: recall_at_3 value: 45.448 - type: recall_at_5 value: 52.432 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval type: mteb/cqadupstack-programmers config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: map_at_1 value: 22.467000000000002 - type: map_at_10 value: 33.812999999999995 - type: map_at_100 value: 35.248000000000005 - type: map_at_1000 value: 35.359 - type: map_at_3 value: 30.316 - type: map_at_5 value: 32.233000000000004 - type: mrr_at_1 value: 28.310999999999996 - type: mrr_at_10 value: 38.979 - type: mrr_at_100 value: 39.937 - type: mrr_at_1000 value: 39.989999999999995 - type: mrr_at_3 value: 36.244 - type: mrr_at_5 value: 37.871 - type: ndcg_at_1 value: 28.310999999999996 - type: ndcg_at_10 value: 40.282000000000004 - type: ndcg_at_100 value: 46.22 - type: ndcg_at_1000 value: 48.507 - type: ndcg_at_3 value: 34.596 - type: ndcg_at_5 value: 37.267 - type: precision_at_1 value: 28.310999999999996 - type: precision_at_10 value: 7.831 - type: precision_at_100 value: 1.257 - type: precision_at_1000 value: 0.164 - type: precision_at_3 value: 17.275 - type: precision_at_5 value: 12.556999999999999 - type: recall_at_1 value: 22.467000000000002 - type: recall_at_10 value: 54.14099999999999 - type: recall_at_100 value: 79.593 - type: recall_at_1000 value: 95.063 - type: recall_at_3 value: 38.539 - type: recall_at_5 value: 45.403 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: mteb/cqadupstack config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 24.18591666666667 - type: map_at_10 value: 33.84258333333333 - type: map_at_100 value: 35.11391666666666 - type: map_at_1000 value: 35.23258333333333 - type: map_at_3 value: 30.764249999999997 - type: map_at_5 value: 32.52333333333334 - type: mrr_at_1 value: 28.54733333333333 - type: mrr_at_10 value: 37.81725 - type: mrr_at_100 value: 38.716499999999996 - type: mrr_at_1000 value: 38.77458333333333 - type: mrr_at_3 value: 35.157833333333336 - type: mrr_at_5 value: 36.69816666666667 - type: ndcg_at_1 value: 28.54733333333333 - type: ndcg_at_10 value: 39.51508333333334 - type: ndcg_at_100 value: 44.95316666666666 - type: ndcg_at_1000 value: 47.257083333333334 - type: ndcg_at_3 value: 34.205833333333324 - type: ndcg_at_5 value: 36.78266666666667 - type: precision_at_1 value: 28.54733333333333 - type: precision_at_10 value: 7.082583333333334 - type: precision_at_100 value: 1.1590833333333332 - type: precision_at_1000 value: 0.15516666666666662 - type: precision_at_3 value: 15.908750000000001 - type: precision_at_5 value: 11.505416666666669 - type: recall_at_1 value: 24.18591666666667 - type: recall_at_10 value: 52.38758333333333 - type: recall_at_100 value: 76.13666666666667 - type: recall_at_1000 value: 91.99066666666667 - type: recall_at_3 value: 37.78333333333334 - type: recall_at_5 value: 44.30141666666666 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval type: mteb/cqadupstack-stats config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: map_at_1 value: 21.975 - type: map_at_10 value: 29.781000000000002 - type: map_at_100 value: 30.847 - type: map_at_1000 value: 30.94 - type: map_at_3 value: 27.167 - type: map_at_5 value: 28.633999999999997 - type: mrr_at_1 value: 24.387 - type: mrr_at_10 value: 32.476 - type: mrr_at_100 value: 33.337 - type: mrr_at_1000 value: 33.403 - type: mrr_at_3 value: 29.881999999999998 - type: mrr_at_5 value: 31.339 - type: ndcg_at_1 value: 24.387 - type: ndcg_at_10 value: 34.596 - type: ndcg_at_100 value: 39.635 - type: ndcg_at_1000 value: 42.079 - type: ndcg_at_3 value: 29.516 - type: ndcg_at_5 value: 31.959 - type: precision_at_1 value: 24.387 - type: precision_at_10 value: 5.6129999999999995 - type: precision_at_100 value: 0.8909999999999999 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 12.73 - type: precision_at_5 value: 9.171999999999999 - type: recall_at_1 value: 21.975 - type: recall_at_10 value: 46.826 - type: recall_at_100 value: 69.554 - type: recall_at_1000 value: 87.749 - type: recall_at_3 value: 33.016 - type: recall_at_5 value: 38.97 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval type: mteb/cqadupstack-tex config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 15.614 - type: map_at_10 value: 22.927 - type: map_at_100 value: 24.185000000000002 - type: map_at_1000 value: 24.319 - type: map_at_3 value: 20.596 - type: map_at_5 value: 21.854000000000003 - type: mrr_at_1 value: 18.858 - type: mrr_at_10 value: 26.535999999999998 - type: mrr_at_100 value: 27.582 - type: mrr_at_1000 value: 27.665 - type: mrr_at_3 value: 24.295 - type: mrr_at_5 value: 25.532 - type: ndcg_at_1 value: 18.858 - type: ndcg_at_10 value: 27.583000000000002 - type: ndcg_at_100 value: 33.635 - type: ndcg_at_1000 value: 36.647 - type: ndcg_at_3 value: 23.348 - type: ndcg_at_5 value: 25.257 - type: precision_at_1 value: 18.858 - type: precision_at_10 value: 5.158 - type: precision_at_100 value: 0.964 - type: precision_at_1000 value: 0.13999999999999999 - type: precision_at_3 value: 11.092 - type: precision_at_5 value: 8.1 - type: recall_at_1 value: 15.614 - type: recall_at_10 value: 37.916 - type: recall_at_100 value: 65.205 - type: recall_at_1000 value: 86.453 - type: recall_at_3 value: 26.137 - type: recall_at_5 value: 31.087999999999997 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval type: mteb/cqadupstack-unix config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: map_at_1 value: 23.078000000000003 - type: map_at_10 value: 31.941999999999997 - type: map_at_100 value: 33.196999999999996 - type: map_at_1000 value: 33.303 - type: map_at_3 value: 28.927000000000003 - type: map_at_5 value: 30.707 - type: mrr_at_1 value: 26.866 - type: mrr_at_10 value: 35.557 - type: mrr_at_100 value: 36.569 - type: mrr_at_1000 value: 36.632 - type: mrr_at_3 value: 32.897999999999996 - type: mrr_at_5 value: 34.437 - type: ndcg_at_1 value: 26.866 - type: ndcg_at_10 value: 37.372 - type: ndcg_at_100 value: 43.248 - type: ndcg_at_1000 value: 45.632 - type: ndcg_at_3 value: 31.852999999999998 - type: ndcg_at_5 value: 34.582 - type: precision_at_1 value: 26.866 - type: precision_at_10 value: 6.511 - type: precision_at_100 value: 1.078 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 14.582999999999998 - type: precision_at_5 value: 10.634 - type: recall_at_1 value: 23.078000000000003 - type: recall_at_10 value: 50.334 - type: recall_at_100 value: 75.787 - type: recall_at_1000 value: 92.485 - type: recall_at_3 value: 35.386 - type: recall_at_5 value: 42.225 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval type: mteb/cqadupstack-webmasters config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: map_at_1 value: 22.203999999999997 - type: map_at_10 value: 31.276 - type: map_at_100 value: 32.844 - type: map_at_1000 value: 33.062999999999995 - type: map_at_3 value: 27.733999999999998 - type: map_at_5 value: 29.64 - type: mrr_at_1 value: 27.272999999999996 - type: mrr_at_10 value: 36.083 - type: mrr_at_100 value: 37.008 - type: mrr_at_1000 value: 37.076 - type: mrr_at_3 value: 33.004 - type: mrr_at_5 value: 34.664 - type: ndcg_at_1 value: 27.272999999999996 - type: ndcg_at_10 value: 37.763000000000005 - type: ndcg_at_100 value: 43.566 - type: ndcg_at_1000 value: 46.356 - type: ndcg_at_3 value: 31.673000000000002 - type: ndcg_at_5 value: 34.501 - type: precision_at_1 value: 27.272999999999996 - type: precision_at_10 value: 7.470000000000001 - type: precision_at_100 value: 1.502 - type: precision_at_1000 value: 0.24 - type: precision_at_3 value: 14.756 - type: precision_at_5 value: 11.225 - type: recall_at_1 value: 22.203999999999997 - type: recall_at_10 value: 51.437999999999995 - type: recall_at_100 value: 76.845 - type: recall_at_1000 value: 94.38600000000001 - type: recall_at_3 value: 34.258 - type: recall_at_5 value: 41.512 - task: type: Retrieval dataset: name: MTEB CQADupstackWordpressRetrieval type: mteb/cqadupstack-wordpress config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 17.474 - type: map_at_10 value: 26.362999999999996 - type: map_at_100 value: 27.456999999999997 - type: map_at_1000 value: 27.567999999999998 - type: map_at_3 value: 23.518 - type: map_at_5 value: 25.068 - type: mrr_at_1 value: 18.669 - type: mrr_at_10 value: 27.998 - type: mrr_at_100 value: 28.953 - type: mrr_at_1000 value: 29.03 - type: mrr_at_3 value: 25.230999999999998 - type: mrr_at_5 value: 26.654 - type: ndcg_at_1 value: 18.669 - type: ndcg_at_10 value: 31.684 - type: ndcg_at_100 value: 36.864999999999995 - type: ndcg_at_1000 value: 39.555 - type: ndcg_at_3 value: 26.057000000000002 - type: ndcg_at_5 value: 28.587 - type: precision_at_1 value: 18.669 - type: precision_at_10 value: 5.3420000000000005 - type: precision_at_100 value: 0.847 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 11.583 - type: precision_at_5 value: 8.466 - type: recall_at_1 value: 17.474 - type: recall_at_10 value: 46.497 - type: recall_at_100 value: 69.977 - type: recall_at_1000 value: 89.872 - type: recall_at_3 value: 31.385999999999996 - type: recall_at_5 value: 37.283 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 17.173 - type: map_at_10 value: 30.407 - type: map_at_100 value: 32.528 - type: map_at_1000 value: 32.698 - type: map_at_3 value: 25.523 - type: map_at_5 value: 28.038 - type: mrr_at_1 value: 38.958 - type: mrr_at_10 value: 51.515 - type: mrr_at_100 value: 52.214000000000006 - type: mrr_at_1000 value: 52.237 - type: mrr_at_3 value: 48.502 - type: mrr_at_5 value: 50.251000000000005 - type: ndcg_at_1 value: 38.958 - type: ndcg_at_10 value: 40.355000000000004 - type: ndcg_at_100 value: 47.68 - type: ndcg_at_1000 value: 50.370000000000005 - type: ndcg_at_3 value: 33.946 - type: ndcg_at_5 value: 36.057 - type: precision_at_1 value: 38.958 - type: precision_at_10 value: 12.508 - type: precision_at_100 value: 2.054 - type: precision_at_1000 value: 0.256 - type: precision_at_3 value: 25.581 - type: precision_at_5 value: 19.256999999999998 - type: recall_at_1 value: 17.173 - type: recall_at_10 value: 46.967 - type: recall_at_100 value: 71.47200000000001 - type: recall_at_1000 value: 86.238 - type: recall_at_3 value: 30.961 - type: recall_at_5 value: 37.539 - task: type: Retrieval dataset: name: MTEB DBPedia type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 8.999 - type: map_at_10 value: 18.989 - type: map_at_100 value: 26.133 - type: map_at_1000 value: 27.666 - type: map_at_3 value: 13.918 - type: map_at_5 value: 16.473 - type: mrr_at_1 value: 66.25 - type: mrr_at_10 value: 74.161 - type: mrr_at_100 value: 74.516 - type: mrr_at_1000 value: 74.524 - type: mrr_at_3 value: 72.875 - type: mrr_at_5 value: 73.613 - type: ndcg_at_1 value: 54.37499999999999 - type: ndcg_at_10 value: 39.902 - type: ndcg_at_100 value: 44.212 - type: ndcg_at_1000 value: 51.62 - type: ndcg_at_3 value: 45.193 - type: ndcg_at_5 value: 42.541000000000004 - type: precision_at_1 value: 66.25 - type: precision_at_10 value: 30.425 - type: precision_at_100 value: 9.754999999999999 - type: precision_at_1000 value: 2.043 - type: precision_at_3 value: 48.25 - type: precision_at_5 value: 40.65 - type: recall_at_1 value: 8.999 - type: recall_at_10 value: 24.133 - type: recall_at_100 value: 49.138999999999996 - type: recall_at_1000 value: 72.639 - type: recall_at_3 value: 15.287999999999998 - type: recall_at_5 value: 19.415 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 46.38999999999999 - type: f1 value: 41.444205512055234 - task: type: Retrieval dataset: name: MTEB FEVER type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 87.35000000000001 - type: map_at_10 value: 92.837 - type: map_at_100 value: 92.996 - type: map_at_1000 value: 93.006 - type: map_at_3 value: 92.187 - type: map_at_5 value: 92.595 - type: mrr_at_1 value: 93.864 - type: mrr_at_10 value: 96.723 - type: mrr_at_100 value: 96.72500000000001 - type: mrr_at_1000 value: 96.72500000000001 - type: mrr_at_3 value: 96.64 - type: mrr_at_5 value: 96.71499999999999 - type: ndcg_at_1 value: 93.864 - type: ndcg_at_10 value: 94.813 - type: ndcg_at_100 value: 95.243 - type: ndcg_at_1000 value: 95.38600000000001 - type: ndcg_at_3 value: 94.196 - type: ndcg_at_5 value: 94.521 - type: precision_at_1 value: 93.864 - type: precision_at_10 value: 10.951 - type: precision_at_100 value: 1.1400000000000001 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 35.114000000000004 - type: precision_at_5 value: 21.476 - type: recall_at_1 value: 87.35000000000001 - type: recall_at_10 value: 96.941 - type: recall_at_100 value: 98.397 - type: recall_at_1000 value: 99.21600000000001 - type: recall_at_3 value: 95.149 - type: recall_at_5 value: 96.131 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 24.476 - type: map_at_10 value: 40.11 - type: map_at_100 value: 42.229 - type: map_at_1000 value: 42.378 - type: map_at_3 value: 34.512 - type: map_at_5 value: 38.037 - type: mrr_at_1 value: 47.839999999999996 - type: mrr_at_10 value: 57.053 - type: mrr_at_100 value: 57.772 - type: mrr_at_1000 value: 57.799 - type: mrr_at_3 value: 54.552 - type: mrr_at_5 value: 56.011 - type: ndcg_at_1 value: 47.839999999999996 - type: ndcg_at_10 value: 48.650999999999996 - type: ndcg_at_100 value: 55.681000000000004 - type: ndcg_at_1000 value: 57.979 - type: ndcg_at_3 value: 43.923 - type: ndcg_at_5 value: 46.037 - type: precision_at_1 value: 47.839999999999996 - type: precision_at_10 value: 13.395000000000001 - type: precision_at_100 value: 2.0660000000000003 - type: precision_at_1000 value: 0.248 - type: precision_at_3 value: 29.064 - type: precision_at_5 value: 22.006 - type: recall_at_1 value: 24.476 - type: recall_at_10 value: 56.216 - type: recall_at_100 value: 81.798 - type: recall_at_1000 value: 95.48299999999999 - type: recall_at_3 value: 39.357 - type: recall_at_5 value: 47.802 - task: type: Retrieval dataset: name: MTEB HotpotQA type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 42.728 - type: map_at_10 value: 57.737 - type: map_at_100 value: 58.531 - type: map_at_1000 value: 58.594 - type: map_at_3 value: 54.869 - type: map_at_5 value: 56.55 - type: mrr_at_1 value: 85.456 - type: mrr_at_10 value: 90.062 - type: mrr_at_100 value: 90.159 - type: mrr_at_1000 value: 90.16 - type: mrr_at_3 value: 89.37899999999999 - type: mrr_at_5 value: 89.81 - type: ndcg_at_1 value: 85.456 - type: ndcg_at_10 value: 67.755 - type: ndcg_at_100 value: 70.341 - type: ndcg_at_1000 value: 71.538 - type: ndcg_at_3 value: 63.735 - type: ndcg_at_5 value: 65.823 - type: precision_at_1 value: 85.456 - type: precision_at_10 value: 13.450000000000001 - type: precision_at_100 value: 1.545 - type: precision_at_1000 value: 0.16999999999999998 - type: precision_at_3 value: 38.861000000000004 - type: precision_at_5 value: 24.964 - type: recall_at_1 value: 42.728 - type: recall_at_10 value: 67.252 - type: recall_at_100 value: 77.265 - type: recall_at_1000 value: 85.246 - type: recall_at_3 value: 58.292 - type: recall_at_5 value: 62.41100000000001 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 87.4836 - type: ap value: 82.29552224030336 - type: f1 value: 87.42791432227448 - task: type: Retrieval dataset: name: MTEB MSMARCO type: mteb/msmarco config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 23.015 - type: map_at_10 value: 35.621 - type: map_at_100 value: 36.809 - type: map_at_1000 value: 36.853 - type: map_at_3 value: 31.832 - type: map_at_5 value: 34.006 - type: mrr_at_1 value: 23.738999999999997 - type: mrr_at_10 value: 36.309999999999995 - type: mrr_at_100 value: 37.422 - type: mrr_at_1000 value: 37.461 - type: mrr_at_3 value: 32.592999999999996 - type: mrr_at_5 value: 34.736 - type: ndcg_at_1 value: 23.724999999999998 - type: ndcg_at_10 value: 42.617 - type: ndcg_at_100 value: 48.217999999999996 - type: ndcg_at_1000 value: 49.309 - type: ndcg_at_3 value: 34.905 - type: ndcg_at_5 value: 38.769 - type: precision_at_1 value: 23.724999999999998 - type: precision_at_10 value: 6.689 - type: precision_at_100 value: 0.9480000000000001 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.89 - type: precision_at_5 value: 10.897 - type: recall_at_1 value: 23.015 - type: recall_at_10 value: 64.041 - type: recall_at_100 value: 89.724 - type: recall_at_1000 value: 98.00999999999999 - type: recall_at_3 value: 43.064 - type: recall_at_5 value: 52.31099999999999 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 96.49794801641588 - type: f1 value: 96.28931114498003 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 82.81121751025992 - type: f1 value: 63.18740125901853 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 77.66644250168123 - type: f1 value: 74.93211186867839 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 81.77202420981843 - type: f1 value: 81.63681969283554 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 34.596687684870645 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 32.26965660101405 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.33619694846802 - type: mrr value: 32.53719657720334 - task: type: Retrieval dataset: name: MTEB NFCorpus type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 6.0729999999999995 - type: map_at_10 value: 13.245999999999999 - type: map_at_100 value: 16.747999999999998 - type: map_at_1000 value: 18.163 - type: map_at_3 value: 10.064 - type: map_at_5 value: 11.513 - type: mrr_at_1 value: 49.536 - type: mrr_at_10 value: 58.092 - type: mrr_at_100 value: 58.752 - type: mrr_at_1000 value: 58.78 - type: mrr_at_3 value: 56.398 - type: mrr_at_5 value: 57.389 - type: ndcg_at_1 value: 47.059 - type: ndcg_at_10 value: 35.881 - type: ndcg_at_100 value: 32.751999999999995 - type: ndcg_at_1000 value: 41.498000000000005 - type: ndcg_at_3 value: 42.518 - type: ndcg_at_5 value: 39.550999999999995 - type: precision_at_1 value: 49.536 - type: precision_at_10 value: 26.316 - type: precision_at_100 value: 8.084 - type: precision_at_1000 value: 2.081 - type: precision_at_3 value: 39.938 - type: precision_at_5 value: 34.056 - type: recall_at_1 value: 6.0729999999999995 - type: recall_at_10 value: 16.593 - type: recall_at_100 value: 32.883 - type: recall_at_1000 value: 64.654 - type: recall_at_3 value: 11.174000000000001 - type: recall_at_5 value: 13.528 - task: type: Retrieval dataset: name: MTEB NQ type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 30.043 - type: map_at_10 value: 45.318999999999996 - type: map_at_100 value: 46.381 - type: map_at_1000 value: 46.412 - type: map_at_3 value: 40.941 - type: map_at_5 value: 43.662 - type: mrr_at_1 value: 33.98 - type: mrr_at_10 value: 47.870000000000005 - type: mrr_at_100 value: 48.681999999999995 - type: mrr_at_1000 value: 48.703 - type: mrr_at_3 value: 44.341 - type: mrr_at_5 value: 46.547 - type: ndcg_at_1 value: 33.98 - type: ndcg_at_10 value: 52.957 - type: ndcg_at_100 value: 57.434 - type: ndcg_at_1000 value: 58.103 - type: ndcg_at_3 value: 44.896 - type: ndcg_at_5 value: 49.353 - type: precision_at_1 value: 33.98 - type: precision_at_10 value: 8.786 - type: precision_at_100 value: 1.1280000000000001 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 20.577 - type: precision_at_5 value: 14.942 - type: recall_at_1 value: 30.043 - type: recall_at_10 value: 73.593 - type: recall_at_100 value: 93.026 - type: recall_at_1000 value: 97.943 - type: recall_at_3 value: 52.955 - type: recall_at_5 value: 63.132 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: mteb/quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.808 - type: map_at_10 value: 84.675 - type: map_at_100 value: 85.322 - type: map_at_1000 value: 85.33800000000001 - type: map_at_3 value: 81.68900000000001 - type: map_at_5 value: 83.543 - type: mrr_at_1 value: 81.5 - type: mrr_at_10 value: 87.59700000000001 - type: mrr_at_100 value: 87.705 - type: mrr_at_1000 value: 87.70599999999999 - type: mrr_at_3 value: 86.607 - type: mrr_at_5 value: 87.289 - type: ndcg_at_1 value: 81.51 - type: ndcg_at_10 value: 88.41799999999999 - type: ndcg_at_100 value: 89.644 - type: ndcg_at_1000 value: 89.725 - type: ndcg_at_3 value: 85.49900000000001 - type: ndcg_at_5 value: 87.078 - type: precision_at_1 value: 81.51 - type: precision_at_10 value: 13.438 - type: precision_at_100 value: 1.532 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.363 - type: precision_at_5 value: 24.57 - type: recall_at_1 value: 70.808 - type: recall_at_10 value: 95.575 - type: recall_at_100 value: 99.667 - type: recall_at_1000 value: 99.98899999999999 - type: recall_at_3 value: 87.223 - type: recall_at_5 value: 91.682 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 58.614831329137715 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 66.86580408560826 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: mteb/scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 5.093 - type: map_at_10 value: 13.014000000000001 - type: map_at_100 value: 15.412999999999998 - type: map_at_1000 value: 15.756999999999998 - type: map_at_3 value: 9.216000000000001 - type: map_at_5 value: 11.036999999999999 - type: mrr_at_1 value: 25.1 - type: mrr_at_10 value: 37.133 - type: mrr_at_100 value: 38.165 - type: mrr_at_1000 value: 38.198 - type: mrr_at_3 value: 33.217 - type: mrr_at_5 value: 35.732 - type: ndcg_at_1 value: 25.1 - type: ndcg_at_10 value: 21.918000000000003 - type: ndcg_at_100 value: 30.983 - type: ndcg_at_1000 value: 36.629 - type: ndcg_at_3 value: 20.544999999999998 - type: ndcg_at_5 value: 18.192 - type: precision_at_1 value: 25.1 - type: precision_at_10 value: 11.44 - type: precision_at_100 value: 2.459 - type: precision_at_1000 value: 0.381 - type: precision_at_3 value: 19.267 - type: precision_at_5 value: 16.16 - type: recall_at_1 value: 5.093 - type: recall_at_10 value: 23.215 - type: recall_at_100 value: 49.902 - type: recall_at_1000 value: 77.403 - type: recall_at_3 value: 11.733 - type: recall_at_5 value: 16.372999999999998 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 82.9365442977452 - type: cos_sim_spearman value: 79.36960687383745 - type: euclidean_pearson value: 79.6045204840714 - type: euclidean_spearman value: 79.26382712751337 - type: manhattan_pearson value: 79.4805084789529 - type: manhattan_spearman value: 79.21847863209523 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 83.27906192961453 - type: cos_sim_spearman value: 74.38364712099211 - type: euclidean_pearson value: 78.54358927241223 - type: euclidean_spearman value: 74.22185560806376 - type: manhattan_pearson value: 78.50904327377751 - type: manhattan_spearman value: 74.2627500781748 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 84.66863742649639 - type: cos_sim_spearman value: 84.70630905216271 - type: euclidean_pearson value: 84.64498334705334 - type: euclidean_spearman value: 84.87204770690148 - type: manhattan_pearson value: 84.65774227976077 - type: manhattan_spearman value: 84.91251851797985 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 83.1577763924467 - type: cos_sim_spearman value: 80.10314039230198 - type: euclidean_pearson value: 81.51346991046043 - type: euclidean_spearman value: 80.08678485109435 - type: manhattan_pearson value: 81.57058914661894 - type: manhattan_spearman value: 80.1516230725106 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.40310839662533 - type: cos_sim_spearman value: 87.16293477217867 - type: euclidean_pearson value: 86.50688711184775 - type: euclidean_spearman value: 87.08651444923031 - type: manhattan_pearson value: 86.54674677557857 - type: manhattan_spearman value: 87.15079017870971 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.32886275207817 - type: cos_sim_spearman value: 85.0190460590732 - type: euclidean_pearson value: 84.42553652784679 - type: euclidean_spearman value: 85.20027364279328 - type: manhattan_pearson value: 84.42926246281078 - type: manhattan_spearman value: 85.20187419804306 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 90.76732216967812 - type: cos_sim_spearman value: 90.63701653633909 - type: euclidean_pearson value: 90.26678186114682 - type: euclidean_spearman value: 90.67288073455427 - type: manhattan_pearson value: 90.20772020584582 - type: manhattan_spearman value: 90.60764863983702 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 69.09280387698125 - type: cos_sim_spearman value: 68.62743151172162 - type: euclidean_pearson value: 69.89386398104689 - type: euclidean_spearman value: 68.71191066733556 - type: manhattan_pearson value: 69.92516500604872 - type: manhattan_spearman value: 68.80452846992576 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 86.13178592019887 - type: cos_sim_spearman value: 86.03947178806887 - type: euclidean_pearson value: 85.87029414285313 - type: euclidean_spearman value: 86.04960843306998 - type: manhattan_pearson value: 85.92946858580146 - type: manhattan_spearman value: 86.12575341860442 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 85.16657063002837 - type: mrr value: 95.73671063867141 - task: type: Retrieval dataset: name: MTEB SciFact type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 63.510999999999996 - type: map_at_10 value: 72.76899999999999 - type: map_at_100 value: 73.303 - type: map_at_1000 value: 73.32499999999999 - type: map_at_3 value: 70.514 - type: map_at_5 value: 71.929 - type: mrr_at_1 value: 66.333 - type: mrr_at_10 value: 73.75 - type: mrr_at_100 value: 74.119 - type: mrr_at_1000 value: 74.138 - type: mrr_at_3 value: 72.222 - type: mrr_at_5 value: 73.122 - type: ndcg_at_1 value: 66.333 - type: ndcg_at_10 value: 76.774 - type: ndcg_at_100 value: 78.78500000000001 - type: ndcg_at_1000 value: 79.254 - type: ndcg_at_3 value: 73.088 - type: ndcg_at_5 value: 75.002 - type: precision_at_1 value: 66.333 - type: precision_at_10 value: 9.833 - type: precision_at_100 value: 1.093 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 28.222 - type: precision_at_5 value: 18.333 - type: recall_at_1 value: 63.510999999999996 - type: recall_at_10 value: 87.98899999999999 - type: recall_at_100 value: 96.5 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 77.86699999999999 - type: recall_at_5 value: 82.73899999999999 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.78514851485149 - type: cos_sim_ap value: 94.94214383862038 - type: cos_sim_f1 value: 89.02255639097744 - type: cos_sim_precision value: 89.2462311557789 - type: cos_sim_recall value: 88.8 - type: dot_accuracy value: 99.78217821782178 - type: dot_ap value: 94.69965247836805 - type: dot_f1 value: 88.78695208970439 - type: dot_precision value: 90.54054054054053 - type: dot_recall value: 87.1 - type: euclidean_accuracy value: 99.78118811881188 - type: euclidean_ap value: 94.9865187695411 - type: euclidean_f1 value: 88.99950223992036 - type: euclidean_precision value: 88.60257680872151 - type: euclidean_recall value: 89.4 - type: manhattan_accuracy value: 99.78811881188119 - type: manhattan_ap value: 95.0021236766459 - type: manhattan_f1 value: 89.12071535022356 - type: manhattan_precision value: 88.54886475814413 - type: manhattan_recall value: 89.7 - type: max_accuracy value: 99.78811881188119 - type: max_ap value: 95.0021236766459 - type: max_f1 value: 89.12071535022356 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 68.93190546593995 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 37.602808534760655 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 52.29214480978073 - type: mrr value: 53.123169722434426 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.967800769650022 - type: cos_sim_spearman value: 31.168490040206926 - type: dot_pearson value: 30.888603021128553 - type: dot_spearman value: 31.028241262520385 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: mteb/trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.22300000000000003 - type: map_at_10 value: 1.781 - type: map_at_100 value: 9.905999999999999 - type: map_at_1000 value: 23.455000000000002 - type: map_at_3 value: 0.569 - type: map_at_5 value: 0.918 - type: mrr_at_1 value: 84.0 - type: mrr_at_10 value: 91.067 - type: mrr_at_100 value: 91.067 - type: mrr_at_1000 value: 91.067 - type: mrr_at_3 value: 90.667 - type: mrr_at_5 value: 91.067 - type: ndcg_at_1 value: 78.0 - type: ndcg_at_10 value: 73.13499999999999 - type: ndcg_at_100 value: 55.32 - type: ndcg_at_1000 value: 49.532 - type: ndcg_at_3 value: 73.715 - type: ndcg_at_5 value: 72.74199999999999 - type: precision_at_1 value: 84.0 - type: precision_at_10 value: 78.8 - type: precision_at_100 value: 56.32 - type: precision_at_1000 value: 21.504 - type: precision_at_3 value: 77.333 - type: precision_at_5 value: 78.0 - type: recall_at_1 value: 0.22300000000000003 - type: recall_at_10 value: 2.049 - type: recall_at_100 value: 13.553 - type: recall_at_1000 value: 46.367999999999995 - type: recall_at_3 value: 0.604 - type: recall_at_5 value: 1.015 - task: type: Retrieval dataset: name: MTEB Touche2020 type: mteb/touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 3.0380000000000003 - type: map_at_10 value: 10.188 - type: map_at_100 value: 16.395 - type: map_at_1000 value: 18.024 - type: map_at_3 value: 6.236 - type: map_at_5 value: 7.276000000000001 - type: mrr_at_1 value: 34.694 - type: mrr_at_10 value: 46.292 - type: mrr_at_100 value: 47.446 - type: mrr_at_1000 value: 47.446 - type: mrr_at_3 value: 41.156 - type: mrr_at_5 value: 44.32 - type: ndcg_at_1 value: 32.653 - type: ndcg_at_10 value: 25.219 - type: ndcg_at_100 value: 37.802 - type: ndcg_at_1000 value: 49.274 - type: ndcg_at_3 value: 28.605999999999998 - type: ndcg_at_5 value: 26.21 - type: precision_at_1 value: 34.694 - type: precision_at_10 value: 21.837 - type: precision_at_100 value: 7.776 - type: precision_at_1000 value: 1.522 - type: precision_at_3 value: 28.571 - type: precision_at_5 value: 25.306 - type: recall_at_1 value: 3.0380000000000003 - type: recall_at_10 value: 16.298000000000002 - type: recall_at_100 value: 48.712 - type: recall_at_1000 value: 83.16799999999999 - type: recall_at_3 value: 7.265000000000001 - type: recall_at_5 value: 9.551 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 83.978 - type: ap value: 24.751887949330015 - type: f1 value: 66.8685134049279 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.573288058856825 - type: f1 value: 61.973261751726604 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 48.75483298792469 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.36824223639506 - type: cos_sim_ap value: 75.53126388573047 - type: cos_sim_f1 value: 67.9912831688245 - type: cos_sim_precision value: 66.11817501869858 - type: cos_sim_recall value: 69.9736147757256 - type: dot_accuracy value: 86.39804494248078 - type: dot_ap value: 75.27598891718046 - type: dot_f1 value: 67.91146284159763 - type: dot_precision value: 63.90505003490807 - type: dot_recall value: 72.45382585751979 - type: euclidean_accuracy value: 86.36228169517793 - type: euclidean_ap value: 75.51438087434647 - type: euclidean_f1 value: 68.02370523061066 - type: euclidean_precision value: 66.46525679758308 - type: euclidean_recall value: 69.65699208443272 - type: manhattan_accuracy value: 86.46361089586935 - type: manhattan_ap value: 75.50800785730111 - type: manhattan_f1 value: 67.9220437187253 - type: manhattan_precision value: 67.79705573080967 - type: manhattan_recall value: 68.04749340369392 - type: max_accuracy value: 86.46361089586935 - type: max_ap value: 75.53126388573047 - type: max_f1 value: 68.02370523061066 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.80350836341057 - type: cos_sim_ap value: 85.51101933260743 - type: cos_sim_f1 value: 77.9152271629704 - type: cos_sim_precision value: 75.27815662910056 - type: cos_sim_recall value: 80.74376347397599 - type: dot_accuracy value: 88.84425815966158 - type: dot_ap value: 85.49726945962519 - type: dot_f1 value: 77.94445269567801 - type: dot_precision value: 75.27251864601261 - type: dot_recall value: 80.81305820757623 - type: euclidean_accuracy value: 88.80350836341057 - type: euclidean_ap value: 85.4882880790211 - type: euclidean_f1 value: 77.87063284615103 - type: euclidean_precision value: 74.61022927689595 - type: euclidean_recall value: 81.42901139513397 - type: manhattan_accuracy value: 88.7161873714441 - type: manhattan_ap value: 85.45753871906821 - type: manhattan_f1 value: 77.8686401480111 - type: manhattan_precision value: 74.95903683123174 - type: manhattan_recall value: 81.01324299353249 - type: max_accuracy value: 88.84425815966158 - type: max_ap value: 85.51101933260743 - type: max_f1 value: 77.94445269567801 --- <!-- **English** | [中文](./README_zh.md) --> # gte-base-en-v1.5 We introduce `gte-v1.5` series, upgraded `gte` embeddings that support the context length of up to **8192**, while further enhancing model performance. The models are built upon the `transformer++` encoder [backbone](https://huggingface.co/Alibaba-NLP/new-impl) (BERT + RoPE + GLU). The `gte-v1.5` series achieve state-of-the-art scores on the MTEB benchmark within the same model size category and prodvide competitive on the LoCo long-context retrieval tests (refer to [Evaluation](#evaluation)). We also present the [`gte-Qwen1.5-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct), a SOTA instruction-tuned multi-lingual embedding model that ranked 2nd in MTEB and 1st in C-MTEB. <!-- Provide a longer summary of what this model is. --> - **Developed by:** Institute for Intelligent Computing, Alibaba Group - **Model type:** Text Embeddings - **Paper:** [mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval](https://arxiv.org/pdf/2407.19669) <!-- - **Demo [optional]:** [More Information Needed] --> ### Model list | Models | Language | Model Size | Max Seq. Length | Dimension | MTEB-en | LoCo | |:-----: | :-----: |:-----: |:-----: |:-----: | :-----: | :-----: | |[`gte-Qwen1.5-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct)| Multiple | 7720 | 32768 | 4096 | 67.34 | 87.57 | |[`gte-large-en-v1.5`](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | English | 434 | 8192 | 1024 | 65.39 | 86.71 | |[`gte-base-en-v1.5`](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | English | 137 | 8192 | 768 | 64.11 | 87.44 | ## How to Get Started with the Model Use the code below to get started with the model. ```python # Requires transformers>=4.36.0 import torch.nn.functional as F from transformers import AutoModel, AutoTokenizer input_texts = [ "what is the capital of China?", "how to implement quick sort in python?", "Beijing", "sorting algorithms" ] model_path = 'Alibaba-NLP/gte-base-en-v1.5' tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModel.from_pretrained(model_path, trust_remote_code=True) # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=8192, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = outputs.last_hidden_state[:, 0] # (Optionally) normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:1] @ embeddings[1:].T) * 100 print(scores.tolist()) ``` **It is recommended to install xformers and enable unpadding for acceleration, refer to [enable-unpadding-and-xformers](https://huggingface.co/Alibaba-NLP/new-impl#recommendation-enable-unpadding-and-acceleration-with-xformers).** Use with `sentence-transformers`: ```python # Requires sentence_transformers>=2.7.0 from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim sentences = ['That is a happy person', 'That is a very happy person'] model = SentenceTransformer('Alibaba-NLP/gte-base-en-v1.5', trust_remote_code=True) embeddings = model.encode(sentences) print(cos_sim(embeddings[0], embeddings[1])) ``` Use with `transformers.js`: ```js // npm i @xenova/transformers import { pipeline, dot } from '@xenova/transformers'; // Create feature extraction pipeline const extractor = await pipeline('feature-extraction', 'Alibaba-NLP/gte-base-en-v1.5', { quantized: false, // Comment out this line to use the quantized version }); // Generate sentence embeddings const sentences = [ "what is the capital of China?", "how to implement quick sort in python?", "Beijing", "sorting algorithms" ] const output = await extractor(sentences, { normalize: true, pooling: 'cls' }); // Compute similarity scores const [source_embeddings, ...document_embeddings ] = output.tolist(); const similarities = document_embeddings.map(x => 100 * dot(source_embeddings, x)); console.log(similarities); // [34.504930869007296, 64.03973265120138, 19.520042686034362] ``` ## Training Details ### Training Data - Masked language modeling (MLM): `c4-en` - Weak-supervised contrastive pre-training (CPT): [GTE](https://arxiv.org/pdf/2308.03281.pdf) pre-training data - Supervised contrastive fine-tuning: [GTE](https://arxiv.org/pdf/2308.03281.pdf) fine-tuning data ### Training Procedure To enable the backbone model to support a context length of 8192, we adopted a multi-stage training strategy. The model first undergoes preliminary MLM pre-training on shorter lengths. And then, we resample the data, reducing the proportion of short texts, and continue the MLM pre-training. The entire training process is as follows: - MLM-2048: lr 5e-4, mlm_probability 0.3, batch_size 4096, num_steps 70000, rope_base 10000 - [MLM-8192](https://huggingface.co/Alibaba-NLP/gte-en-mlm-base): lr 5e-5, mlm_probability 0.3, batch_size 1024, num_steps 20000, rope_base 500000 - CPT: max_len 512, lr 2e-4, batch_size 32768, num_steps 100000 - Fine-tuning: TODO ## Evaluation ### MTEB The results of other models are retrieved from [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard). The gte evaluation setting: `mteb==1.2.0, fp16 auto mix precision, max_length=8192`, and set ntk scaling factor to 2 (equivalent to rope_base * 2). | Model Name | Param Size (M) | Dimension | Sequence Length | Average (56) | Class. (12) | Clust. (11) | Pair Class. (3) | Reran. (4) | Retr. (15) | STS (10) | Summ. (1) | |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | [**gte-large-en-v1.5**](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | 434 | 1024 | 8192 | **65.39** | 77.75 | 47.95 | 84.63 | 58.50 | 57.91 | 81.43 | 30.91 | | [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) | 335 | 1024 | 512 | 64.68 | 75.64 | 46.71 | 87.2 | 60.11 | 54.39 | 85 | 32.71 | | [multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) | 560 | 1024 | 514 | 64.41 | 77.56 | 47.1 | 86.19 | 58.58 | 52.47 | 84.78 | 30.39 | | [bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5)| 335 | 1024 | 512 | 64.23 | 75.97 | 46.08 | 87.12 | 60.03 | 54.29 | 83.11 | 31.61 | | [**gte-base-en-v1.5**](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | 137 | 768 | 8192 | **64.11** | 77.17 | 46.82 | 85.33 | 57.66 | 54.09 | 81.97 | 31.17 | | [bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)| 109 | 768 | 512 | 63.55 | 75.53 | 45.77 | 86.55 | 58.86 | 53.25 | 82.4 | 31.07 | ### LoCo | Model Name | Dimension | Sequence Length | Average (5) | QsmsumRetrieval | SummScreenRetrieval | QasperAbastractRetrieval | QasperTitleRetrieval | GovReportRetrieval | |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | [gte-qwen1.5-7b](https://huggingface.co/Alibaba-NLP/gte-qwen1.5-7b) | 4096 | 32768 | 87.57 | 49.37 | 93.10 | 99.67 | 97.54 | 98.21 | | [gte-large-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-v1.5) |1024 | 8192 | 86.71 | 44.55 | 92.61 | 99.82 | 97.81 | 98.74 | | [gte-base-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-v1.5) | 768 | 8192 | 87.44 | 49.91 | 91.78 | 99.82 | 97.13 | 98.58 | ## Citation If you find our paper or models helpful, please consider citing them as follows: ``` @misc{zhang2024mgte, title={mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval}, author={Xin Zhang and Yanzhao Zhang and Dingkun Long and Wen Xie and Ziqi Dai and Jialong Tang and Huan Lin and Baosong Yang and Pengjun Xie and Fei Huang and Meishan Zhang and Wenjie Li and Min Zhang}, year={2024}, eprint={2407.19669}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2407.19669}, } @misc{li2023gte, title={Towards General Text Embeddings with Multi-stage Contrastive Learning}, author={Zehan Li and Xin Zhang and Yanzhao Zhang and Dingkun Long and Pengjun Xie and Meishan Zhang}, year={2023}, eprint={2308.03281}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2308.03281}, } ```
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-526066
fine-tuned
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "Science", "Research", "Verification", "Dataset", "AI", "custom_code", "en", "dataset:fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-526066", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,716
1,716
9
0
--- datasets: - fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-526066 - allenai/c4 language: - en license: apache-2.0 pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - Science - Research - Verification - Dataset - AI --- This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: scientific claim verification ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-526066', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
[ "TEXT_CLASSIFICATION" ]
[ "SCIFACT" ]
Non_BioNLP
pankajrajdeo/UMLS-ED-Bioformer-8L-V-1.25-SpecialTokensUntrained
pankajrajdeo
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:187491593", "loss:CustomTripletLoss", "arxiv:1908.10084", "arxiv:1703.07737", "base_model:pankajrajdeo/UMLS-ED-Bioformer-8L-V-1.25", "base_model:finetune:pankajrajdeo/UMLS-ED-Bioformer-8L-V-1.25", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,736
1,736
9
0
--- base_model: - pankajrajdeo/UMLS-ED-Bioformer-8L-V-1.25 library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:187491593 - loss:CustomTripletLoss widget: - source_sentence: Hylocharis xantusii sentences: - Xantus's hummingbird - C5721346 - C1623532 - Iole viridescens viridescens - source_sentence: HTLV1+2 RNA XXX Ql PCR sentences: - HTLV 1+2 RNA:MevcEşik:Zmlı:XXX:Srl:Prob.amf.hdf - Nota de progreso:Tipo:Punto temporal:{Configuración}:Documento:Pain medicine - C0368469 - C4070921 - source_sentence: Degeneração Nigroestriatal sentences: - C0270733 - hiperinsulinismo debido a deficiencia de 3-hidroxiacil-coenzima A deshidrogenasa de cadena corta - Striatonigral atrophy - C4303473 - source_sentence: Clostridioides difficile As:titer:moment:serum:semikwantitatief sentences: - Dehidroepiandrosteron:MevcEşik:Zmlı:İdrar:Srl - C0485219 - C0364328 - Clostridium difficile Ac:Título:Pt:Soro:Qn - source_sentence: E Vicotrat sentences: - C2742706 - C2350910 - germanium L-cysteine alpha-tocopherol complex - Eosine I Bluish, Dipotassium Salt --- # SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 512 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("pankajrajdeo/937457_bioformer_8L") # Run inference sentences = [ 'E Vicotrat', 'Eosine I Bluish, Dipotassium Salt', 'C2742706', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 512] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 187,491,593 training samples * Columns: <code>anchor</code>, <code>positive</code>, <code>negative_id</code>, <code>positive_id</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative_id | positive_id | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 13.27 tokens</li><li>max: 247 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 12.25 tokens</li><li>max: 157 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 6.27 tokens</li><li>max: 7 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 6.49 tokens</li><li>max: 7 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 13.53 tokens</li><li>max: 118 tokens</li></ul> | * Samples: | anchor | positive | negative_id | positive_id | negative | |:----------------------------------------------|:------------------------------------------------------------------------------------------------|:----------------------|:----------------------|:------------------------------------------------------------------------------------------------| | <code>Zaburzenie metabolizmu minerałów</code> | <code>Distúrbio não especificado do metabolismo de minerais</code> | <code>C2887914</code> | <code>C0154260</code> | <code>Acute alcoholic hepatic failure</code> | | <code>testy funkčnosti placenty</code> | <code>Metoder som brukes til å vurdere morkakefunksjon.</code> | <code>C2350391</code> | <code>C0032049</code> | <code>Hjärtmuskelscintigrafi</code> | | <code>Tsefapiriin:Susc:Pt:Is:OrdQn</code> | <code>cefapirina:susceptibilidad:punto en el tiempo:cepa clínica:ordinal o cuantitativo:</code> | <code>C0942365</code> | <code>C0801894</code> | <code>2 proyecciones:hallazgo:punto en el tiempo:tobillo.izquierdo:Narrativo:radiografía</code> | * Loss: <code>__main__.CustomTripletLoss</code> with these parameters: ```json { "distance_metric": "TripletDistanceMetric.EUCLIDEAN", "triplet_margin": 5 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 50 - `learning_rate`: 2e-05 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `fp16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 50 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `eval_use_gather_object`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | |:------:|:------:|:-------------:| | 0.0003 | 1000 | 0.9785 | | 0.0005 | 2000 | 0.925 | | 0.0008 | 3000 | 0.8548 | | 0.0011 | 4000 | 0.7979 | | 0.0013 | 5000 | 0.7635 | | 0.0016 | 6000 | 0.7176 | | 0.0019 | 7000 | 0.6813 | | 0.0021 | 8000 | 0.6225 | | 0.0024 | 9000 | 0.6135 | | 0.0027 | 10000 | 0.5827 | | 0.0029 | 11000 | 0.5695 | | 0.0032 | 12000 | 0.5152 | | 0.0035 | 13000 | 0.5213 | | 0.0037 | 14000 | 0.4895 | | 0.0040 | 15000 | 0.4942 | | 0.0043 | 16000 | 0.4819 | | 0.0045 | 17000 | 0.4799 | | 0.0048 | 18000 | 0.4572 | | 0.0051 | 19000 | 0.4396 | | 0.0053 | 20000 | 0.4389 | | 0.0056 | 21000 | 0.4269 | | 0.0059 | 22000 | 0.4155 | | 0.0061 | 23000 | 0.4034 | | 0.0064 | 24000 | 0.4067 | | 0.0067 | 25000 | 0.401 | | 0.0069 | 26000 | 0.376 | | 0.0072 | 27000 | 0.3715 | | 0.0075 | 28000 | 0.3788 | | 0.0077 | 29000 | 0.362 | | 0.0080 | 30000 | 0.3644 | | 0.0083 | 31000 | 0.3487 | | 0.0085 | 32000 | 0.3432 | | 0.0088 | 33000 | 0.3394 | | 0.0091 | 34000 | 0.3423 | | 0.0093 | 35000 | 0.3314 | | 0.0096 | 36000 | 0.3447 | | 0.0099 | 37000 | 0.3206 | | 0.0101 | 38000 | 0.3283 | | 0.0104 | 39000 | 0.3183 | | 0.0107 | 40000 | 0.3167 | | 0.0109 | 41000 | 0.3169 | | 0.0112 | 42000 | 0.3122 | | 0.0115 | 43000 | 0.3022 | | 0.0117 | 44000 | 0.3066 | | 0.0120 | 45000 | 0.3002 | | 0.0123 | 46000 | 0.3003 | | 0.0125 | 47000 | 0.2907 | | 0.0128 | 48000 | 0.2843 | | 0.0131 | 49000 | 0.2905 | | 0.0133 | 50000 | 0.2816 | | 0.0136 | 51000 | 0.2959 | | 0.0139 | 52000 | 0.2765 | | 0.0141 | 53000 | 0.2813 | | 0.0144 | 54000 | 0.2715 | | 0.0147 | 55000 | 0.2826 | | 0.0149 | 56000 | 0.2845 | | 0.0152 | 57000 | 0.2709 | | 0.0155 | 58000 | 0.2704 | | 0.0157 | 59000 | 0.2667 | | 0.0160 | 60000 | 0.2589 | | 0.0163 | 61000 | 0.2574 | | 0.0165 | 62000 | 0.2598 | | 0.0168 | 63000 | 0.2427 | | 0.0171 | 64000 | 0.2505 | | 0.0173 | 65000 | 0.265 | | 0.0176 | 66000 | 0.263 | | 0.0179 | 67000 | 0.2521 | | 0.0181 | 68000 | 0.2532 | | 0.0184 | 69000 | 0.256 | | 0.0187 | 70000 | 0.2599 | | 0.0189 | 71000 | 0.2558 | | 0.0192 | 72000 | 0.2526 | | 0.0195 | 73000 | 0.2402 | | 0.0197 | 74000 | 0.2471 | | 0.0200 | 75000 | 0.24 | | 0.0203 | 76000 | 0.2562 | | 0.0205 | 77000 | 0.2398 | | 0.0208 | 78000 | 0.2622 | | 0.0211 | 79000 | 0.235 | | 0.0213 | 80000 | 0.2421 | | 0.0216 | 81000 | 0.2378 | | 0.0219 | 82000 | 0.2323 | | 0.0221 | 83000 | 0.232 | | 0.0224 | 84000 | 0.2319 | | 0.0227 | 85000 | 0.2361 | | 0.0229 | 86000 | 0.2252 | | 0.0232 | 87000 | 0.2282 | | 0.0235 | 88000 | 0.2213 | | 0.0237 | 89000 | 0.2228 | | 0.0240 | 90000 | 0.2265 | | 0.0243 | 91000 | 0.2375 | | 0.0245 | 92000 | 0.2328 | | 0.0248 | 93000 | 0.2318 | | 0.0251 | 94000 | 0.2321 | | 0.0253 | 95000 | 0.2205 | | 0.0256 | 96000 | 0.2319 | | 0.0259 | 97000 | 0.2193 | | 0.0261 | 98000 | 0.2188 | | 0.0264 | 99000 | 0.2196 | | 0.0267 | 100000 | 0.2223 | | 0.0269 | 101000 | 0.2268 | | 0.0272 | 102000 | 0.219 | | 0.0275 | 103000 | 0.206 | | 0.0277 | 104000 | 0.2154 | | 0.0280 | 105000 | 0.2261 | | 0.0283 | 106000 | 0.2112 | | 0.0285 | 107000 | 0.2015 | | 0.0288 | 108000 | 0.2115 | | 0.0291 | 109000 | 0.2145 | | 0.0293 | 110000 | 0.2142 | | 0.0296 | 111000 | 0.2217 | | 0.0299 | 112000 | 0.213 | | 0.0301 | 113000 | 0.2089 | | 0.0304 | 114000 | 0.2089 | | 0.0307 | 115000 | 0.2027 | | 0.0309 | 116000 | 0.217 | | 0.0312 | 117000 | 0.2008 | | 0.0315 | 118000 | 0.2035 | | 0.0317 | 119000 | 0.208 | | 0.0320 | 120000 | 0.2006 | | 0.0323 | 121000 | 0.2089 | | 0.0325 | 122000 | 0.212 | | 0.0328 | 123000 | 0.2074 | | 0.0331 | 124000 | 0.203 | | 0.0333 | 125000 | 0.2038 | | 0.0336 | 126000 | 0.1979 | | 0.0339 | 127000 | 0.197 | | 0.0341 | 128000 | 0.1947 | | 0.0344 | 129000 | 0.2034 | | 0.0347 | 130000 | 0.1924 | | 0.0349 | 131000 | 0.1957 | | 0.0352 | 132000 | 0.1894 | | 0.0355 | 133000 | 0.1934 | | 0.0357 | 134000 | 0.1933 | | 0.0360 | 135000 | 0.1953 | | 0.0363 | 136000 | 0.192 | | 0.0365 | 137000 | 0.1871 | | 0.0368 | 138000 | 0.2053 | | 0.0371 | 139000 | 0.1971 | | 0.0373 | 140000 | 0.1904 | | 0.0376 | 141000 | 0.1891 | | 0.0379 | 142000 | 0.1876 | | 0.0381 | 143000 | 0.1875 | | 0.0384 | 144000 | 0.194 | | 0.0387 | 145000 | 0.1932 | | 0.0389 | 146000 | 0.1895 | | 0.0392 | 147000 | 0.1937 | | 0.0395 | 148000 | 0.1888 | | 0.0397 | 149000 | 0.1836 | | 0.0400 | 150000 | 0.1886 | | 0.0403 | 151000 | 0.183 | | 0.0405 | 152000 | 0.1896 | | 0.0408 | 153000 | 0.1851 | | 0.0411 | 154000 | 0.1844 | | 0.0413 | 155000 | 0.184 | | 0.0416 | 156000 | 0.1846 | | 0.0419 | 157000 | 0.1876 | | 0.0421 | 158000 | 0.1848 | | 0.0424 | 159000 | 0.1824 | | 0.0427 | 160000 | 0.1844 | | 0.0429 | 161000 | 0.1864 | | 0.0432 | 162000 | 0.1726 | | 0.0435 | 163000 | 0.1838 | | 0.0437 | 164000 | 0.1818 | | 0.0440 | 165000 | 0.1811 | | 0.0443 | 166000 | 0.176 | | 0.0445 | 167000 | 0.1831 | | 0.0448 | 168000 | 0.1791 | | 0.0451 | 169000 | 0.182 | | 0.0453 | 170000 | 0.1814 | | 0.0456 | 171000 | 0.1783 | | 0.0459 | 172000 | 0.1771 | | 0.0461 | 173000 | 0.1806 | | 0.0464 | 174000 | 0.1821 | | 0.0467 | 175000 | 0.1805 | | 0.0469 | 176000 | 0.1698 | | 0.0472 | 177000 | 0.1796 | | 0.0475 | 178000 | 0.1774 | | 0.0477 | 179000 | 0.1703 | | 0.0480 | 180000 | 0.179 | | 0.0483 | 181000 | 0.1839 | | 0.0485 | 182000 | 0.1695 | | 0.0488 | 183000 | 0.1681 | | 0.0491 | 184000 | 0.1783 | | 0.0493 | 185000 | 0.1792 | | 0.0496 | 186000 | 0.1664 | | 0.0499 | 187000 | 0.1711 | | 0.0501 | 188000 | 0.168 | | 0.0504 | 189000 | 0.1722 | | 0.0507 | 190000 | 0.1776 | | 0.0509 | 191000 | 0.1704 | | 0.0512 | 192000 | 0.161 | | 0.0515 | 193000 | 0.1719 | | 0.0517 | 194000 | 0.1679 | | 0.0520 | 195000 | 0.1731 | | 0.0523 | 196000 | 0.1778 | | 0.0525 | 197000 | 0.1658 | | 0.0528 | 198000 | 0.1607 | | 0.0531 | 199000 | 0.1682 | | 0.0533 | 200000 | 0.1675 | | 0.0536 | 201000 | 0.1708 | | 0.0539 | 202000 | 0.1694 | | 0.0541 | 203000 | 0.1767 | | 0.0544 | 204000 | 0.1665 | | 0.0547 | 205000 | 0.1695 | | 0.0549 | 206000 | 0.1693 | | 0.0552 | 207000 | 0.1697 | | 0.0555 | 208000 | 0.1721 | | 0.0557 | 209000 | 0.1633 | | 0.0560 | 210000 | 0.1712 | | 0.0563 | 211000 | 0.1712 | | 0.0565 | 212000 | 0.1646 | | 0.0568 | 213000 | 0.1639 | | 0.0571 | 214000 | 0.1692 | | 0.0573 | 215000 | 0.1694 | | 0.0576 | 216000 | 0.1684 | | 0.0579 | 217000 | 0.1608 | | 0.0581 | 218000 | 0.1663 | | 0.0584 | 219000 | 0.1669 | | 0.0587 | 220000 | 0.1671 | | 0.0589 | 221000 | 0.1632 | | 0.0592 | 222000 | 0.1642 | | 0.0595 | 223000 | 0.1619 | | 0.0597 | 224000 | 0.1672 | | 0.0600 | 225000 | 0.1704 | | 0.0603 | 226000 | 0.1602 | | 0.0605 | 227000 | 0.1548 | | 0.0608 | 228000 | 0.1631 | | 0.0611 | 229000 | 0.1555 | | 0.0613 | 230000 | 0.1666 | | 0.0616 | 231000 | 0.1611 | | 0.0619 | 232000 | 0.1504 | | 0.0621 | 233000 | 0.159 | | 0.0624 | 234000 | 0.1642 | | 0.0627 | 235000 | 0.1573 | | 0.0629 | 236000 | 0.1612 | | 0.0632 | 237000 | 0.1649 | | 0.0635 | 238000 | 0.1687 | | 0.0637 | 239000 | 0.1601 | | 0.0640 | 240000 | 0.1592 | | 0.0643 | 241000 | 0.1606 | | 0.0645 | 242000 | 0.1545 | | 0.0648 | 243000 | 0.1646 | | 0.0651 | 244000 | 0.1576 | | 0.0653 | 245000 | 0.1514 | | 0.0656 | 246000 | 0.1606 | | 0.0659 | 247000 | 0.1517 | | 0.0661 | 248000 | 0.1503 | | 0.0664 | 249000 | 0.1627 | | 0.0667 | 250000 | 0.1555 | | 0.0669 | 251000 | 0.1566 | | 0.0672 | 252000 | 0.1624 | | 0.0675 | 253000 | 0.1495 | | 0.0677 | 254000 | 0.1535 | | 0.0680 | 255000 | 0.1492 | | 0.0683 | 256000 | 0.1494 | | 0.0685 | 257000 | 0.1708 | | 0.0688 | 258000 | 0.1563 | | 0.0691 | 259000 | 0.1541 | | 0.0693 | 260000 | 0.1568 | | 0.0696 | 261000 | 0.1535 | | 0.0699 | 262000 | 0.1519 | | 0.0701 | 263000 | 0.1571 | | 0.0704 | 264000 | 0.1536 | | 0.0707 | 265000 | 0.147 | | 0.0709 | 266000 | 0.147 | | 0.0712 | 267000 | 0.1537 | | 0.0715 | 268000 | 0.1527 | | 0.0717 | 269000 | 0.1545 | | 0.0720 | 270000 | 0.1523 | | 0.0723 | 271000 | 0.1539 | | 0.0725 | 272000 | 0.1561 | | 0.0728 | 273000 | 0.1513 | | 0.0731 | 274000 | 0.1571 | | 0.0733 | 275000 | 0.1577 | | 0.0736 | 276000 | 0.1613 | | 0.0739 | 277000 | 0.1523 | | 0.0741 | 278000 | 0.1468 | | 0.0744 | 279000 | 0.1534 | | 0.0747 | 280000 | 0.1544 | | 0.0749 | 281000 | 0.1552 | | 0.0752 | 282000 | 0.1514 | | 0.0755 | 283000 | 0.1504 | | 0.0757 | 284000 | 0.149 | | 0.0760 | 285000 | 0.1537 | | 0.0763 | 286000 | 0.1527 | | 0.0765 | 287000 | 0.1482 | | 0.0768 | 288000 | 0.1503 | | 0.0771 | 289000 | 0.1476 | | 0.0773 | 290000 | 0.1535 | | 0.0776 | 291000 | 0.1575 | | 0.0779 | 292000 | 0.1465 | | 0.0781 | 293000 | 0.147 | | 0.0784 | 294000 | 0.147 | | 0.0787 | 295000 | 0.1484 | | 0.0789 | 296000 | 0.1502 | | 0.0792 | 297000 | 0.147 | | 0.0795 | 298000 | 0.1544 | | 0.0797 | 299000 | 0.156 | | 0.0800 | 300000 | 0.1445 | | 0.0803 | 301000 | 0.143 | | 0.0805 | 302000 | 0.1541 | | 0.0808 | 303000 | 0.159 | | 0.0811 | 304000 | 0.1434 | | 0.0813 | 305000 | 0.1511 | | 0.0816 | 306000 | 0.1473 | | 0.0819 | 307000 | 0.1514 | | 0.0821 | 308000 | 0.1491 | | 0.0824 | 309000 | 0.1443 | | 0.0827 | 310000 | 0.1496 | | 0.0829 | 311000 | 0.1535 | | 0.0832 | 312000 | 0.152 | | 0.0835 | 313000 | 0.1496 | | 0.0837 | 314000 | 0.1521 | | 0.0840 | 315000 | 0.1459 | | 0.0843 | 316000 | 0.1449 | | 0.0845 | 317000 | 0.148 | | 0.0848 | 318000 | 0.1566 | | 0.0851 | 319000 | 0.149 | | 0.0853 | 320000 | 0.1502 | | 0.0856 | 321000 | 0.1501 | | 0.0859 | 322000 | 0.1447 | | 0.0861 | 323000 | 0.1468 | | 0.0864 | 324000 | 0.1474 | | 0.0867 | 325000 | 0.1455 | | 0.0869 | 326000 | 0.1374 | | 0.0872 | 327000 | 0.1397 | | 0.0875 | 328000 | 0.1468 | | 0.0877 | 329000 | 0.1436 | | 0.0880 | 330000 | 0.1523 | | 0.0883 | 331000 | 0.1407 | | 0.0885 | 332000 | 0.1446 | | 0.0888 | 333000 | 0.1476 | | 0.0891 | 334000 | 0.1487 | | 0.0893 | 335000 | 0.1486 | | 0.0896 | 336000 | 0.1564 | | 0.0899 | 337000 | 0.1487 | | 0.0901 | 338000 | 0.1492 | | 0.0904 | 339000 | 0.1469 | | 0.0907 | 340000 | 0.1487 | | 0.0909 | 341000 | 0.1513 | | 0.0912 | 342000 | 0.151 | | 0.0915 | 343000 | 0.14 | | 0.0917 | 344000 | 0.1487 | | 0.0920 | 345000 | 0.1527 | | 0.0923 | 346000 | 0.1419 | | 0.0925 | 347000 | 0.1541 | | 0.0928 | 348000 | 0.1426 | | 0.0931 | 349000 | 0.1426 | | 0.0933 | 350000 | 0.1503 | | 0.0936 | 351000 | 0.1392 | | 0.0939 | 352000 | 0.1505 | | 0.0941 | 353000 | 0.1452 | | 0.0944 | 354000 | 0.1462 | | 0.0947 | 355000 | 0.1412 | | 0.0949 | 356000 | 0.1438 | | 0.0952 | 357000 | 0.1457 | | 0.0955 | 358000 | 0.1414 | | 0.0957 | 359000 | 0.1458 | | 0.0960 | 360000 | 0.1477 | | 0.0963 | 361000 | 0.1423 | | 0.0965 | 362000 | 0.1498 | | 0.0968 | 363000 | 0.1426 | | 0.0971 | 364000 | 0.1469 | | 0.0973 | 365000 | 0.136 | | 0.0976 | 366000 | 0.142 | | 0.0979 | 367000 | 0.138 | | 0.0981 | 368000 | 0.1439 | | 0.0984 | 369000 | 0.1402 | | 0.0987 | 370000 | 0.1431 | | 0.0989 | 371000 | 0.1382 | | 0.0992 | 372000 | 0.1456 | | 0.0995 | 373000 | 0.1364 | | 0.0997 | 374000 | 0.1424 | | 0.1000 | 375000 | 0.1499 | | 0.1003 | 376000 | 0.1471 | | 0.1005 | 377000 | 0.1401 | | 0.1008 | 378000 | 0.1365 | | 0.1011 | 379000 | 0.1434 | | 0.1013 | 380000 | 0.1422 | | 0.1016 | 381000 | 0.1318 | | 0.1019 | 382000 | 0.15 | | 0.1021 | 383000 | 0.1437 | | 0.1024 | 384000 | 0.138 | | 0.1027 | 385000 | 0.1394 | | 0.1029 | 386000 | 0.1446 | | 0.1032 | 387000 | 0.1327 | | 0.1035 | 388000 | 0.1448 | | 0.1037 | 389000 | 0.142 | | 0.1040 | 390000 | 0.1446 | | 0.1043 | 391000 | 0.1409 | | 0.1045 | 392000 | 0.1444 | | 0.1048 | 393000 | 0.1353 | | 0.1051 | 394000 | 0.1484 | | 0.1053 | 395000 | 0.1464 | | 0.1056 | 396000 | 0.1293 | | 0.1059 | 397000 | 0.1393 | | 0.1061 | 398000 | 0.1393 | | 0.1064 | 399000 | 0.1473 | | 0.1067 | 400000 | 0.1412 | | 0.1069 | 401000 | 0.1315 | | 0.1072 | 402000 | 0.1419 | | 0.1075 | 403000 | 0.1366 | | 0.1077 | 404000 | 0.1426 | | 0.1080 | 405000 | 0.1401 | | 0.1083 | 406000 | 0.1367 | | 0.1085 | 407000 | 0.139 | | 0.1088 | 408000 | 0.1376 | | 0.1091 | 409000 | 0.1354 | | 0.1093 | 410000 | 0.1405 | | 0.1096 | 411000 | 0.1341 | | 0.1099 | 412000 | 0.1454 | | 0.1101 | 413000 | 0.1375 | | 0.1104 | 414000 | 0.1431 | | 0.1107 | 415000 | 0.1344 | | 0.1109 | 416000 | 0.1313 | | 0.1112 | 417000 | 0.1464 | | 0.1115 | 418000 | 0.1363 | | 0.1117 | 419000 | 0.1346 | | 0.1120 | 420000 | 0.1381 | | 0.1123 | 421000 | 0.1331 | | 0.1125 | 422000 | 0.1349 | | 0.1128 | 423000 | 0.1377 | | 0.1131 | 424000 | 0.1414 | | 0.1133 | 425000 | 0.1366 | | 0.1136 | 426000 | 0.1319 | | 0.1139 | 427000 | 0.1387 | | 0.1141 | 428000 | 0.138 | | 0.1144 | 429000 | 0.1351 | | 0.1147 | 430000 | 0.1373 | | 0.1149 | 431000 | 0.131 | | 0.1152 | 432000 | 0.1302 | | 0.1155 | 433000 | 0.1317 | | 0.1157 | 434000 | 0.1332 | | 0.1160 | 435000 | 0.1344 | | 0.1163 | 436000 | 0.1425 | | 0.1165 | 437000 | 0.1276 | | 0.1168 | 438000 | 0.1314 | | 0.1171 | 439000 | 0.1238 | | 0.1173 | 440000 | 0.1291 | | 0.1176 | 441000 | 0.1311 | | 0.1179 | 442000 | 0.1222 | | 0.1181 | 443000 | 0.1311 | | 0.1184 | 444000 | 0.1423 | | 0.1187 | 445000 | 0.1308 | | 0.1189 | 446000 | 0.1317 | | 0.1192 | 447000 | 0.1369 | | 0.1195 | 448000 | 0.1282 | | 0.1197 | 449000 | 0.1376 | | 0.1200 | 450000 | 0.1253 | | 0.1203 | 451000 | 0.1271 | | 0.1205 | 452000 | 0.131 | | 0.1208 | 453000 | 0.1316 | | 0.1211 | 454000 | 0.1353 | | 0.1213 | 455000 | 0.1277 | | 0.1216 | 456000 | 0.1238 | | 0.1219 | 457000 | 0.1271 | | 0.1221 | 458000 | 0.1319 | | 0.1224 | 459000 | 0.1281 | | 0.1227 | 460000 | 0.1305 | | 0.1229 | 461000 | 0.1376 | | 0.1232 | 462000 | 0.1333 | | 0.1235 | 463000 | 0.1211 | | 0.1237 | 464000 | 0.1211 | | 0.1240 | 465000 | 0.1286 | | 0.1243 | 466000 | 0.1329 | | 0.1245 | 467000 | 0.1227 | | 0.1248 | 468000 | 0.1283 | | 0.1251 | 469000 | 0.1275 | | 0.1253 | 470000 | 0.1362 | | 0.1256 | 471000 | 0.1293 | | 0.1259 | 472000 | 0.1264 | | 0.1261 | 473000 | 0.1241 | | 0.1264 | 474000 | 0.118 | | 0.1267 | 475000 | 0.1279 | | 0.1269 | 476000 | 0.1267 | | 0.1272 | 477000 | 0.1294 | | 0.1275 | 478000 | 0.1299 | | 0.1277 | 479000 | 0.1323 | | 0.1280 | 480000 | 0.1284 | | 0.1283 | 481000 | 0.1299 | | 0.1285 | 482000 | 0.1255 | | 0.1288 | 483000 | 0.1289 | | 0.1291 | 484000 | 0.1256 | | 0.1293 | 485000 | 0.1274 | | 0.1296 | 486000 | 0.1279 | | 0.1299 | 487000 | 0.1234 | | 0.1301 | 488000 | 0.1299 | | 0.1304 | 489000 | 0.1257 | | 0.1307 | 490000 | 0.1195 | | 0.1309 | 491000 | 0.1265 | | 0.1312 | 492000 | 0.1249 | | 0.1315 | 493000 | 0.1254 | | 0.1317 | 494000 | 0.1299 | | 0.1320 | 495000 | 0.1255 | | 0.1323 | 496000 | 0.1316 | | 0.1325 | 497000 | 0.1303 | | 0.1328 | 498000 | 0.1213 | | 0.1331 | 499000 | 0.1182 | | 0.1333 | 500000 | 0.12 | | 0.1336 | 501000 | 0.1193 | | 0.1339 | 502000 | 0.1241 | | 0.1341 | 503000 | 0.1258 | | 0.1344 | 504000 | 0.1279 | | 0.1347 | 505000 | 0.1293 | | 0.1349 | 506000 | 0.1278 | | 0.1352 | 507000 | 0.1241 | | 0.1355 | 508000 | 0.1221 | | 0.1357 | 509000 | 0.1213 | | 0.1360 | 510000 | 0.1232 | | 0.1363 | 511000 | 0.1278 | | 0.1365 | 512000 | 0.1208 | | 0.1368 | 513000 | 0.1203 | | 0.1371 | 514000 | 0.1251 | | 0.1373 | 515000 | 0.1207 | | 0.1376 | 516000 | 0.1233 | | 0.1379 | 517000 | 0.1287 | | 0.1381 | 518000 | 0.1255 | | 0.1384 | 519000 | 0.1234 | | 0.1387 | 520000 | 0.1198 | | 0.1389 | 521000 | 0.1274 | | 0.1392 | 522000 | 0.1209 | | 0.1395 | 523000 | 0.116 | | 0.1397 | 524000 | 0.1154 | | 0.1400 | 525000 | 0.1197 | | 0.1403 | 526000 | 0.1249 | | 0.1405 | 527000 | 0.1127 | | 0.1408 | 528000 | 0.1221 | | 0.1411 | 529000 | 0.122 | | 0.1413 | 530000 | 0.1251 | | 0.1416 | 531000 | 0.123 | | 0.1419 | 532000 | 0.1222 | | 0.1421 | 533000 | 0.1205 | | 0.1424 | 534000 | 0.1196 | | 0.1427 | 535000 | 0.1172 | | 0.1429 | 536000 | 0.1185 | | 0.1432 | 537000 | 0.1249 | | 0.1435 | 538000 | 0.123 | | 0.1437 | 539000 | 0.1227 | | 0.1440 | 540000 | 0.1198 | | 0.1443 | 541000 | 0.1219 | | 0.1445 | 542000 | 0.1183 | | 0.1448 | 543000 | 0.1203 | | 0.1451 | 544000 | 0.117 | | 0.1453 | 545000 | 0.1157 | | 0.1456 | 546000 | 0.1175 | | 0.1459 | 547000 | 0.1178 | | 0.1461 | 548000 | 0.1155 | | 0.1464 | 549000 | 0.1233 | | 0.1467 | 550000 | 0.1127 | | 0.1469 | 551000 | 0.12 | | 0.1472 | 552000 | 0.1229 | | 0.1475 | 553000 | 0.1211 | | 0.1477 | 554000 | 0.1125 | | 0.1480 | 555000 | 0.1178 | | 0.1483 | 556000 | 0.1178 | | 0.1485 | 557000 | 0.1132 | | 0.1488 | 558000 | 0.1119 | | 0.1491 | 559000 | 0.1157 | | 0.1493 | 560000 | 0.1197 | | 0.1496 | 561000 | 0.1151 | | 0.1499 | 562000 | 0.1217 | | 0.1501 | 563000 | 0.1146 | | 0.1504 | 564000 | 0.1202 | | 0.1507 | 565000 | 0.1165 | | 0.1509 | 566000 | 0.1179 | | 0.1512 | 567000 | 0.115 | | 0.1515 | 568000 | 0.1195 | | 0.1517 | 569000 | 0.1258 | | 0.1520 | 570000 | 0.1139 | | 0.1523 | 571000 | 0.1158 | | 0.1525 | 572000 | 0.1194 | | 0.1528 | 573000 | 0.1131 | | 0.1531 | 574000 | 0.1132 | | 0.1533 | 575000 | 0.1198 | | 0.1536 | 576000 | 0.116 | | 0.1539 | 577000 | 0.1173 | | 0.1541 | 578000 | 0.1175 | | 0.1544 | 579000 | 0.1128 | | 0.1547 | 580000 | 0.1127 | | 0.1549 | 581000 | 0.1168 | | 0.1552 | 582000 | 0.1131 | | 0.1555 | 583000 | 0.1213 | | 0.1557 | 584000 | 0.1182 | | 0.1560 | 585000 | 0.1146 | | 0.1563 | 586000 | 0.1189 | | 0.1565 | 587000 | 0.1153 | | 0.1568 | 588000 | 0.1136 | | 0.1571 | 589000 | 0.1121 | | 0.1573 | 590000 | 0.1082 | | 0.1576 | 591000 | 0.1116 | | 0.1579 | 592000 | 0.113 | | 0.1581 | 593000 | 0.1148 | | 0.1584 | 594000 | 0.1085 | | 0.1587 | 595000 | 0.119 | | 0.1589 | 596000 | 0.1073 | | 0.1592 | 597000 | 0.1157 | | 0.1595 | 598000 | 0.1142 | | 0.1597 | 599000 | 0.1125 | | 0.1600 | 600000 | 0.1112 | | 0.1603 | 601000 | 0.1122 | | 0.1605 | 602000 | 0.1173 | | 0.1608 | 603000 | 0.113 | | 0.1611 | 604000 | 0.1068 | | 0.1613 | 605000 | 0.1131 | | 0.1616 | 606000 | 0.1132 | | 0.1619 | 607000 | 0.1142 | | 0.1621 | 608000 | 0.1169 | | 0.1624 | 609000 | 0.1094 | | 0.1627 | 610000 | 0.1206 | | 0.1629 | 611000 | 0.1129 | | 0.1632 | 612000 | 0.1177 | | 0.1635 | 613000 | 0.1101 | | 0.1637 | 614000 | 0.1102 | | 0.1640 | 615000 | 0.1074 | | 0.1643 | 616000 | 0.1156 | | 0.1645 | 617000 | 0.1061 | | 0.1648 | 618000 | 0.1112 | | 0.1651 | 619000 | 0.1166 | | 0.1653 | 620000 | 0.1035 | | 0.1656 | 621000 | 0.1153 | | 0.1659 | 622000 | 0.1105 | | 0.1661 | 623000 | 0.1128 | | 0.1664 | 624000 | 0.1052 | | 0.1667 | 625000 | 0.1146 | | 0.1669 | 626000 | 0.1092 | | 0.1672 | 627000 | 0.1137 | | 0.1675 | 628000 | 0.1139 | | 0.1677 | 629000 | 0.11 | | 0.1680 | 630000 | 0.1062 | | 0.1683 | 631000 | 0.1136 | | 0.1685 | 632000 | 0.1124 | | 0.1688 | 633000 | 0.1087 | | 0.1691 | 634000 | 0.1109 | | 0.1693 | 635000 | 0.1124 | | 0.1696 | 636000 | 0.1074 | | 0.1699 | 637000 | 0.106 | | 0.1701 | 638000 | 0.1102 | | 0.1704 | 639000 | 0.1127 | | 0.1707 | 640000 | 0.108 | | 0.1709 | 641000 | 0.1047 | | 0.1712 | 642000 | 0.107 | | 0.1715 | 643000 | 0.1135 | | 0.1717 | 644000 | 0.1138 | | 0.1720 | 645000 | 0.1087 | | 0.1723 | 646000 | 0.1067 | | 0.1725 | 647000 | 0.1116 | | 0.1728 | 648000 | 0.1107 | | 0.1731 | 649000 | 0.1105 | | 0.1733 | 650000 | 0.1143 | | 0.1736 | 651000 | 0.1098 | | 0.1739 | 652000 | 0.1055 | | 0.1741 | 653000 | 0.1089 | | 0.1744 | 654000 | 0.1047 | | 0.1747 | 655000 | 0.1003 | | 0.1749 | 656000 | 0.1043 | | 0.1752 | 657000 | 0.1112 | | 0.1755 | 658000 | 0.1054 | | 0.1757 | 659000 | 0.1145 | | 0.1760 | 660000 | 0.1093 | | 0.1763 | 661000 | 0.1102 | | 0.1765 | 662000 | 0.1102 | | 0.1768 | 663000 | 0.1086 | | 0.1771 | 664000 | 0.108 | | 0.1773 | 665000 | 0.1046 | | 0.1776 | 666000 | 0.1064 | | 0.1779 | 667000 | 0.1014 | | 0.1781 | 668000 | 0.1039 | | 0.1784 | 669000 | 0.1132 | | 0.1787 | 670000 | 0.1076 | | 0.1789 | 671000 | 0.1075 | | 0.1792 | 672000 | 0.1089 | | 0.1795 | 673000 | 0.1109 | | 0.1797 | 674000 | 0.1035 | | 0.1800 | 675000 | 0.105 | | 0.1803 | 676000 | 0.108 | | 0.1805 | 677000 | 0.1088 | | 0.1808 | 678000 | 0.1094 | | 0.1811 | 679000 | 0.1019 | | 0.1813 | 680000 | 0.1054 | | 0.1816 | 681000 | 0.1041 | | 0.1819 | 682000 | 0.1086 | | 0.1821 | 683000 | 0.1126 | | 0.1824 | 684000 | 0.0996 | | 0.1827 | 685000 | 0.1019 | | 0.1829 | 686000 | 0.1013 | | 0.1832 | 687000 | 0.1043 | | 0.1835 | 688000 | 0.1045 | | 0.1837 | 689000 | 0.1076 | | 0.1840 | 690000 | 0.1046 | | 0.1843 | 691000 | 0.1096 | | 0.1845 | 692000 | 0.0994 | | 0.1848 | 693000 | 0.1049 | | 0.1851 | 694000 | 0.1104 | | 0.1853 | 695000 | 0.1089 | | 0.1856 | 696000 | 0.1039 | | 0.1859 | 697000 | 0.1035 | | 0.1861 | 698000 | 0.1056 | | 0.1864 | 699000 | 0.1058 | | 0.1867 | 700000 | 0.1074 | | 0.1869 | 701000 | 0.1074 | | 0.1872 | 702000 | 0.1122 | | 0.1875 | 703000 | 0.1013 | | 0.1877 | 704000 | 0.1029 | | 0.1880 | 705000 | 0.0997 | | 0.1883 | 706000 | 0.1052 | | 0.1885 | 707000 | 0.1135 | | 0.1888 | 708000 | 0.1114 | | 0.1891 | 709000 | 0.111 | | 0.1893 | 710000 | 0.104 | | 0.1896 | 711000 | 0.1018 | | 0.1899 | 712000 | 0.1077 | | 0.1901 | 713000 | 0.103 | | 0.1904 | 714000 | 0.1083 | | 0.1907 | 715000 | 0.1042 | | 0.1909 | 716000 | 0.1078 | | 0.1912 | 717000 | 0.1014 | | 0.1915 | 718000 | 0.1022 | | 0.1917 | 719000 | 0.1023 | | 0.1920 | 720000 | 0.1041 | | 0.1923 | 721000 | 0.0982 | | 0.1925 | 722000 | 0.1094 | | 0.1928 | 723000 | 0.1085 | | 0.1931 | 724000 | 0.1033 | | 0.1933 | 725000 | 0.1042 | | 0.1936 | 726000 | 0.105 | | 0.1939 | 727000 | 0.1047 | | 0.1941 | 728000 | 0.1014 | | 0.1944 | 729000 | 0.1029 | | 0.1947 | 730000 | 0.1003 | | 0.1949 | 731000 | 0.1071 | | 0.1952 | 732000 | 0.1 | | 0.1955 | 733000 | 0.1074 | | 0.1957 | 734000 | 0.1097 | | 0.1960 | 735000 | 0.1059 | | 0.1963 | 736000 | 0.1042 | | 0.1965 | 737000 | 0.1039 | | 0.1968 | 738000 | 0.104 | | 0.1971 | 739000 | 0.1031 | | 0.1973 | 740000 | 0.1016 | | 0.1976 | 741000 | 0.1039 | | 0.1979 | 742000 | 0.1023 | | 0.1981 | 743000 | 0.0954 | | 0.1984 | 744000 | 0.1035 | | 0.1987 | 745000 | 0.102 | | 0.1989 | 746000 | 0.1081 | | 0.1992 | 747000 | 0.1083 | | 0.1995 | 748000 | 0.1049 | | 0.1997 | 749000 | 0.0957 | | 0.2000 | 750000 | 0.104 | | 0.2003 | 751000 | 0.1074 | | 0.2005 | 752000 | 0.1007 | | 0.2008 | 753000 | 0.1022 | | 0.2011 | 754000 | 0.0987 | | 0.2013 | 755000 | 0.1054 | | 0.2016 | 756000 | 0.0981 | | 0.2019 | 757000 | 0.0948 | | 0.2021 | 758000 | 0.0991 | | 0.2024 | 759000 | 0.1004 | | 0.2027 | 760000 | 0.1111 | | 0.2029 | 761000 | 0.0993 | | 0.2032 | 762000 | 0.1038 | | 0.2035 | 763000 | 0.103 | | 0.2037 | 764000 | 0.105 | | 0.2040 | 765000 | 0.1027 | | 0.2043 | 766000 | 0.0977 | | 0.2045 | 767000 | 0.1067 | | 0.2048 | 768000 | 0.1 | | 0.2051 | 769000 | 0.1039 | | 0.2053 | 770000 | 0.0986 | | 0.2056 | 771000 | 0.1035 | | 0.2059 | 772000 | 0.1013 | | 0.2061 | 773000 | 0.1006 | | 0.2064 | 774000 | 0.1056 | | 0.2067 | 775000 | 0.0997 | | 0.2069 | 776000 | 0.0976 | | 0.2072 | 777000 | 0.0957 | | 0.2075 | 778000 | 0.0996 | | 0.2077 | 779000 | 0.1043 | | 0.2080 | 780000 | 0.0936 | | 0.2083 | 781000 | 0.1004 | | 0.2085 | 782000 | 0.1002 | | 0.2088 | 783000 | 0.101 | | 0.2091 | 784000 | 0.1018 | | 0.2093 | 785000 | 0.0955 | | 0.2096 | 786000 | 0.0933 | | 0.2099 | 787000 | 0.1031 | | 0.2101 | 788000 | 0.1016 | | 0.2104 | 789000 | 0.0948 | | 0.2107 | 790000 | 0.1 | | 0.2109 | 791000 | 0.1032 | | 0.2112 | 792000 | 0.0992 | | 0.2115 | 793000 | 0.098 | | 0.2117 | 794000 | 0.0935 | | 0.2120 | 795000 | 0.0975 | | 0.2123 | 796000 | 0.101 | | 0.2125 | 797000 | 0.0968 | | 0.2128 | 798000 | 0.0955 | | 0.2131 | 799000 | 0.0987 | | 0.2133 | 800000 | 0.0991 | | 0.2136 | 801000 | 0.0949 | | 0.2139 | 802000 | 0.0899 | | 0.2141 | 803000 | 0.1008 | | 0.2144 | 804000 | 0.0943 | | 0.2147 | 805000 | 0.1011 | | 0.2149 | 806000 | 0.0978 | | 0.2152 | 807000 | 0.1021 | | 0.2155 | 808000 | 0.0967 | | 0.2157 | 809000 | 0.0989 | | 0.2160 | 810000 | 0.1007 | | 0.2163 | 811000 | 0.0965 | | 0.2165 | 812000 | 0.0983 | | 0.2168 | 813000 | 0.0965 | | 0.2171 | 814000 | 0.095 | | 0.2173 | 815000 | 0.1011 | | 0.2176 | 816000 | 0.0987 | | 0.2179 | 817000 | 0.0999 | | 0.2181 | 818000 | 0.0952 | | 0.2184 | 819000 | 0.094 | | 0.2187 | 820000 | 0.0981 | | 0.2189 | 821000 | 0.0937 | | 0.2192 | 822000 | 0.0962 | | 0.2195 | 823000 | 0.096 | | 0.2197 | 824000 | 0.091 | | 0.2200 | 825000 | 0.0973 | | 0.2203 | 826000 | 0.0993 | | 0.2205 | 827000 | 0.104 | | 0.2208 | 828000 | 0.0964 | | 0.2211 | 829000 | 0.1015 | | 0.2213 | 830000 | 0.0903 | | 0.2216 | 831000 | 0.0967 | | 0.2219 | 832000 | 0.1029 | | 0.2221 | 833000 | 0.0936 | | 0.2224 | 834000 | 0.0993 | | 0.2227 | 835000 | 0.0864 | | 0.2229 | 836000 | 0.0954 | | 0.2232 | 837000 | 0.0972 | | 0.2235 | 838000 | 0.0974 | | 0.2237 | 839000 | 0.0986 | | 0.2240 | 840000 | 0.0947 | | 0.2243 | 841000 | 0.0999 | | 0.2245 | 842000 | 0.0975 | | 0.2248 | 843000 | 0.0955 | | 0.2251 | 844000 | 0.0968 | | 0.2253 | 845000 | 0.0894 | | 0.2256 | 846000 | 0.096 | | 0.2259 | 847000 | 0.101 | | 0.2261 | 848000 | 0.094 | | 0.2264 | 849000 | 0.0937 | | 0.2267 | 850000 | 0.1052 | | 0.2269 | 851000 | 0.0888 | | 0.2272 | 852000 | 0.0898 | | 0.2275 | 853000 | 0.0908 | | 0.2277 | 854000 | 0.0963 | | 0.2280 | 855000 | 0.0971 | | 0.2283 | 856000 | 0.0968 | | 0.2285 | 857000 | 0.0978 | | 0.2288 | 858000 | 0.0946 | | 0.2291 | 859000 | 0.1004 | | 0.2293 | 860000 | 0.0923 | | 0.2296 | 861000 | 0.0929 | | 0.2299 | 862000 | 0.0952 | | 0.2301 | 863000 | 0.0948 | | 0.2304 | 864000 | 0.0936 | | 0.2307 | 865000 | 0.092 | | 0.2309 | 866000 | 0.0894 | | 0.2312 | 867000 | 0.0922 | | 0.2315 | 868000 | 0.0946 | | 0.2317 | 869000 | 0.0967 | | 0.2320 | 870000 | 0.0965 | | 0.2323 | 871000 | 0.0966 | | 0.2325 | 872000 | 0.0927 | | 0.2328 | 873000 | 0.0931 | | 0.2331 | 874000 | 0.0901 | | 0.2333 | 875000 | 0.0929 | | 0.2336 | 876000 | 0.096 | | 0.2339 | 877000 | 0.0912 | | 0.2341 | 878000 | 0.0915 | | 0.2344 | 879000 | 0.095 | | 0.2347 | 880000 | 0.0938 | | 0.2349 | 881000 | 0.0987 | | 0.2352 | 882000 | 0.0955 | | 0.2355 | 883000 | 0.091 | | 0.2357 | 884000 | 0.0909 | | 0.2360 | 885000 | 0.094 | | 0.2363 | 886000 | 0.095 | | 0.2365 | 887000 | 0.0923 | | 0.2368 | 888000 | 0.0986 | | 0.2371 | 889000 | 0.0945 | | 0.2373 | 890000 | 0.0951 | | 0.2376 | 891000 | 0.0922 | | 0.2379 | 892000 | 0.0896 | | 0.2381 | 893000 | 0.095 | | 0.2384 | 894000 | 0.0915 | | 0.2387 | 895000 | 0.0907 | | 0.2389 | 896000 | 0.0917 | | 0.2392 | 897000 | 0.091 | | 0.2395 | 898000 | 0.093 | | 0.2397 | 899000 | 0.0993 | | 0.2400 | 900000 | 0.0988 | | 0.2403 | 901000 | 0.093 | | 0.2405 | 902000 | 0.0905 | | 0.2408 | 903000 | 0.0968 | | 0.2411 | 904000 | 0.0918 | | 0.2413 | 905000 | 0.0937 | | 0.2416 | 906000 | 0.0971 | | 0.2419 | 907000 | 0.0896 | | 0.2421 | 908000 | 0.0936 | | 0.2424 | 909000 | 0.0923 | | 0.2427 | 910000 | 0.0959 | | 0.2429 | 911000 | 0.0901 | | 0.2432 | 912000 | 0.0937 | | 0.2435 | 913000 | 0.0968 | | 0.2437 | 914000 | 0.0889 | | 0.2440 | 915000 | 0.0921 | | 0.2443 | 916000 | 0.0945 | | 0.2445 | 917000 | 0.088 | | 0.2448 | 918000 | 0.0916 | | 0.2451 | 919000 | 0.0975 | | 0.2453 | 920000 | 0.085 | | 0.2456 | 921000 | 0.0903 | | 0.2459 | 922000 | 0.0988 | | 0.2461 | 923000 | 0.0846 | | 0.2464 | 924000 | 0.0937 | | 0.2467 | 925000 | 0.0951 | | 0.2469 | 926000 | 0.092 | | 0.2472 | 927000 | 0.0989 | | 0.2475 | 928000 | 0.0835 | | 0.2477 | 929000 | 0.0925 | | 0.2480 | 930000 | 0.0953 | | 0.2483 | 931000 | 0.0885 | | 0.2485 | 932000 | 0.0887 | | 0.2488 | 933000 | 0.0868 | | 0.2491 | 934000 | 0.0882 | | 0.2493 | 935000 | 0.0933 | | 0.2496 | 936000 | 0.0896 | | 0.2499 | 937000 | 0.0917 | </details> ### Framework Versions - Python: 3.12.2 - Sentence Transformers: 3.2.1 - Transformers: 4.44.2 - PyTorch: 2.5.0 - Accelerate: 1.0.1 - Datasets: 3.0.2 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CustomTripletLoss ```bibtex @misc{hermans2017defense, title={In Defense of the Triplet Loss for Person Re-Identification}, author={Alexander Hermans and Lucas Beyer and Bastian Leibe}, year={2017}, eprint={1703.07737}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION" ]
[ "PCR" ]
BioNLP
carlfeynman/reproduce-static-retrieval-mrl-en-v1
carlfeynman
sentence-similarity
[ "sentence-transformers", "safetensors", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:68534726", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "en", "dataset:sentence-transformers/gooaq", "dataset:sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1", "dataset:sentence-transformers/s2orc", "dataset:sentence-transformers/all-nli", "dataset:sentence-transformers/paq", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,737
1,737
0
0
--- datasets: - sentence-transformers/gooaq - sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1 - sentence-transformers/s2orc - sentence-transformers/all-nli - sentence-transformers/paq language: - en library_name: sentence-transformers license: apache-2.0 metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:68534726 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss widget: - source_sentence: how to sign legal documents as power of attorney? sentences: - 'After the principal''s name, write “by” and then sign your own name. Under or after the signature line, indicate your status as POA by including any of the following identifiers: as POA, as Agent, as Attorney in Fact or as Power of Attorney.' - Most earthquakes occur along the edge of the oceanic and continental plates. The earth's crust (the outer layer of the planet) is made up of several pieces, called plates. The plates under the oceans are called oceanic plates and the rest are continental plates. - Go to System -> VDOM -> VDOM2 and select 'Delete'. This VDOM is now successfully removed from the configuration. - source_sentence: what is upwork sentences: - Upwork, formerly Elance-oDesk, is a global freelancing platform where businesses and independent professionals connect and collaborate remotely.In 2015, Elance-oDesk was rebranded as Upwork. It is based out of Mountain View and San Francisco, California.pwork has nine million registered freelancers and four million registered clients. Three million jobs are posted annually, worth a total of $1 billion USD, making it the world's largest freelancer marketplace. - Upwork, formerly Elance-oDesk, is a global freelancing platform where businesses and independent professionals connect and collaborate remotely.In 2015, Elance-oDesk was rebranded as Upwork. It is based out of Mountain View and San Francisco, California.pwork has nine million registered freelancers and four million registered clients. Three million jobs are posted annually, worth a total of $1 billion USD, making it the world's largest freelancer marketplace. - 'That is, while fructose consumption may increase uric acid levels, to actually precipitate a gout attack, you need to deviate from the narrow band of normal blood pH range: 7.35 to 7.45. Ideally you wanna be at 7.45 or slightly above.' - source_sentence: how many km is a mile sentences: - Periodontal disease is a bacterial infection of the gums and bone that if not treated, can cause you to lose your teeth. Medical research is now showing that these bacteria in your mouth can also travel through your bloodstream into other organs in the body. - Master the formula for converting kilometers to miles. 1 kilometer is equal to 0.621371 miles (often shortened to .62).1 mile is equal to 1.609344 kilometers. Thus, to convert kilometers to miles, simply multiply the number of kilometers by 0.62137. For example, let's say you start with 5 kilometers. People are often interested in this conversion because they want to know how many miles are in a 5K run. The formula is 5 X 0.62137= 3.1 miles. - To find out how many kilometers in miles, multiply by this factor or simply use the converter below. 1 Mile = 1.609344 Kilometers. Mile is an imperial and US customary length unit and equals to 5280 feet. The abbreviation is mi. Kilometer is a metric length unit and equals to 1000 meters. - source_sentence: A group of children walking on a trail. sentences: - The man is performing. - Children are walking. - The people are adults. - source_sentence: A boy with a basketballs glowers at the camera. sentences: - The boy is smiling - The boy scowls - Surfer in red catches a wave. model-index: - name: '[REPRODUCE] Static Embeddings with BERT uncased tokenizer finetuned on various datasets' results: - task: type: information-retrieval name: Information Retrieval dataset: name: NanoClimateFEVER type: NanoClimateFEVER metrics: - type: cosine_accuracy@1 value: 0.32 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.54 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.64 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.82 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.32 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.152 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.11199999999999999 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.15666666666666665 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.25 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.31633333333333336 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.44133333333333336 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.35027529831718174 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.4537698412698412 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.2754610667422747 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: NanoDBPedia type: NanoDBPedia metrics: - type: cosine_accuracy@1 value: 0.64 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.88 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.92 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.94 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.64 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.6066666666666667 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.5479999999999999 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.45399999999999996 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.05820050708225643 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.1660478879214754 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.2233296888728599 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.32642161484749216 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.5611886908023029 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.7551904761904763 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.42159733554382045 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: NanoFEVER type: NanoFEVER metrics: - type: cosine_accuracy@1 value: 0.54 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.82 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.84 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.94 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.54 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2733333333333334 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.18 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09999999999999998 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.5066666666666666 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.7566666666666667 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.8033333333333332 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9033333333333333 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.7223300246075101 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.6857460317460319 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.6591296848555135 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: NanoFiQA2018 type: NanoFiQA2018 metrics: - type: cosine_accuracy@1 value: 0.22 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.44 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.5 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.64 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.22 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.18666666666666668 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.132 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09799999999999999 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.12688888888888888 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.29007936507936505 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.3347460317460317 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.453015873015873 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.33206103177846985 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.34974603174603175 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.2723064374777477 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: NanoHotpotQA type: NanoHotpotQA metrics: - type: cosine_accuracy@1 value: 0.66 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.82 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.86 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.94 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.66 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.35999999999999993 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.264 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.14799999999999996 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.33 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.54 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.66 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.74 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.6507660730204244 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.746690476190476 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.5743825107321581 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: NanoMSMARCO type: NanoMSMARCO metrics: - type: cosine_accuracy@1 value: 0.16 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.44 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.54 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.66 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.16 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.14666666666666667 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.10800000000000001 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.066 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.16 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.44 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.54 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.66 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.4069260774532657 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.3269126984126984 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.34104660879940385 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: NanoNFCorpus type: NanoNFCorpus metrics: - type: cosine_accuracy@1 value: 0.4 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.54 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.6 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.7 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.4 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.34666666666666673 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.3 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.24400000000000002 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.06140064224956239 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.09381944627241434 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.11465220470723159 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.13758064454249494 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.3251344168353932 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.49083333333333345 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.15346080343511273 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: NanoNQ type: NanoNQ metrics: - type: cosine_accuracy@1 value: 0.2 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.46 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.58 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.68 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.2 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.15333333333333332 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.12000000000000002 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.07400000000000001 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.19 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.44 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.55 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.67 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.4284752232212853 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.3555714285714285 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.35954687250943856 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: NanoQuoraRetrieval type: NanoQuoraRetrieval metrics: - type: cosine_accuracy@1 value: 0.8 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.92 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.96 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.98 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.8 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.35999999999999993 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.23999999999999996 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.12799999999999997 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.7106666666666667 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.8653333333333333 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.9226666666666667 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9593333333333334 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.874423773707081 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.8666666666666666 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.8354028527028526 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: NanoSCIDOCS type: NanoSCIDOCS metrics: - type: cosine_accuracy@1 value: 0.28 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.52 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.62 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.72 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.28 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.22666666666666666 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.184 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.14 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.059666666666666666 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.1416666666666667 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.18966666666666665 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.2886666666666667 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.2657817193581118 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.4188571428571429 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.20270708890067454 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: NanoArguAna type: NanoArguAna metrics: - type: cosine_accuracy@1 value: 0.12 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.48 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.6 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.68 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.12 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.15999999999999998 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.12 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.068 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.12 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.48 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.6 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.68 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.4064179360568565 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.31785714285714284 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.33454708384798976 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: NanoSciFact type: NanoSciFact metrics: - type: cosine_accuracy@1 value: 0.52 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.64 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.68 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.74 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.52 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.22 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.14400000000000002 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.08 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.485 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.61 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.655 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.72 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.6053823991819648 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.5862222222222221 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.5721097562068183 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: NanoTouche2020 type: NanoTouche2020 metrics: - type: cosine_accuracy@1 value: 0.5918367346938775 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9183673469387755 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9795918367346939 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 1.0 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.5918367346938775 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.5850340136054422 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.6000000000000001 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.5204081632653061 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.0405610423291237 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.12039267252775386 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.20296687044371778 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.3313283589291373 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.5594653746925154 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.749514091350826 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.4414984325557448 name: Cosine Map@100 - task: type: nano-beir name: Nano BEIR dataset: name: NanoBEIR mean type: NanoBEIR_mean metrics: - type: cosine_accuracy@1 value: 0.41937205651491377 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.6475667189952904 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.7168916797488225 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.8030769230769231 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.41937205651491377 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2942333856619571 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.23784615384615387 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.17172370486656197 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.23120905747819215 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.399538926035975 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.4702072919822955 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.5623856275385894 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.4991252337717202 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.5464290448780245 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.41870742571611924 name: Cosine Map@100 --- # [REPRODUCE] Static Embeddings with BERT uncased tokenizer finetuned on various datasets This is a [sentence-transformers](https://www.SBERT.net) model trained on the [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq), [msmarco](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1), [s2orc](https://huggingface.co/datasets/sentence-transformers/s2orc), [allnli](https://huggingface.co/datasets/sentence-transformers/all-nli) and [paq](https://huggingface.co/datasets/sentence-transformers/paq) datasets. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** inf tokens - **Output Dimensionality:** 1024 dimensions - **Similarity Function:** Cosine Similarity - **Training Datasets:** - [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq) - [msmarco](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1) - [s2orc](https://huggingface.co/datasets/sentence-transformers/s2orc) - [allnli](https://huggingface.co/datasets/sentence-transformers/all-nli) - [paq](https://huggingface.co/datasets/sentence-transformers/paq) - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): StaticEmbedding( (embedding): EmbeddingBag(30522, 1024, mode='mean') ) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("carlfeynman/reproduce-static-retrieval-mrl-en-v1") # Run inference sentences = [ 'A boy with a basketballs glowers at the camera.', 'The boy scowls', 'The boy is smiling', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Datasets: `NanoClimateFEVER`, `NanoDBPedia`, `NanoFEVER`, `NanoFiQA2018`, `NanoHotpotQA`, `NanoMSMARCO`, `NanoNFCorpus`, `NanoNQ`, `NanoQuoraRetrieval`, `NanoSCIDOCS`, `NanoArguAna`, `NanoSciFact` and `NanoTouche2020` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 | |:--------------------|:-----------------|:------------|:-----------|:-------------|:-------------|:------------|:-------------|:-----------|:-------------------|:------------|:------------|:------------|:---------------| | cosine_accuracy@1 | 0.32 | 0.64 | 0.54 | 0.22 | 0.66 | 0.16 | 0.4 | 0.2 | 0.8 | 0.28 | 0.12 | 0.52 | 0.5918 | | cosine_accuracy@3 | 0.54 | 0.88 | 0.82 | 0.44 | 0.82 | 0.44 | 0.54 | 0.46 | 0.92 | 0.52 | 0.48 | 0.64 | 0.9184 | | cosine_accuracy@5 | 0.64 | 0.92 | 0.84 | 0.5 | 0.86 | 0.54 | 0.6 | 0.58 | 0.96 | 0.62 | 0.6 | 0.68 | 0.9796 | | cosine_accuracy@10 | 0.82 | 0.94 | 0.94 | 0.64 | 0.94 | 0.66 | 0.7 | 0.68 | 0.98 | 0.72 | 0.68 | 0.74 | 1.0 | | cosine_precision@1 | 0.32 | 0.64 | 0.54 | 0.22 | 0.66 | 0.16 | 0.4 | 0.2 | 0.8 | 0.28 | 0.12 | 0.52 | 0.5918 | | cosine_precision@3 | 0.2 | 0.6067 | 0.2733 | 0.1867 | 0.36 | 0.1467 | 0.3467 | 0.1533 | 0.36 | 0.2267 | 0.16 | 0.22 | 0.585 | | cosine_precision@5 | 0.152 | 0.548 | 0.18 | 0.132 | 0.264 | 0.108 | 0.3 | 0.12 | 0.24 | 0.184 | 0.12 | 0.144 | 0.6 | | cosine_precision@10 | 0.112 | 0.454 | 0.1 | 0.098 | 0.148 | 0.066 | 0.244 | 0.074 | 0.128 | 0.14 | 0.068 | 0.08 | 0.5204 | | cosine_recall@1 | 0.1567 | 0.0582 | 0.5067 | 0.1269 | 0.33 | 0.16 | 0.0614 | 0.19 | 0.7107 | 0.0597 | 0.12 | 0.485 | 0.0406 | | cosine_recall@3 | 0.25 | 0.166 | 0.7567 | 0.2901 | 0.54 | 0.44 | 0.0938 | 0.44 | 0.8653 | 0.1417 | 0.48 | 0.61 | 0.1204 | | cosine_recall@5 | 0.3163 | 0.2233 | 0.8033 | 0.3347 | 0.66 | 0.54 | 0.1147 | 0.55 | 0.9227 | 0.1897 | 0.6 | 0.655 | 0.203 | | cosine_recall@10 | 0.4413 | 0.3264 | 0.9033 | 0.453 | 0.74 | 0.66 | 0.1376 | 0.67 | 0.9593 | 0.2887 | 0.68 | 0.72 | 0.3313 | | **cosine_ndcg@10** | **0.3503** | **0.5612** | **0.7223** | **0.3321** | **0.6508** | **0.4069** | **0.3251** | **0.4285** | **0.8744** | **0.2658** | **0.4064** | **0.6054** | **0.5595** | | cosine_mrr@10 | 0.4538 | 0.7552 | 0.6857 | 0.3497 | 0.7467 | 0.3269 | 0.4908 | 0.3556 | 0.8667 | 0.4189 | 0.3179 | 0.5862 | 0.7495 | | cosine_map@100 | 0.2755 | 0.4216 | 0.6591 | 0.2723 | 0.5744 | 0.341 | 0.1535 | 0.3595 | 0.8354 | 0.2027 | 0.3345 | 0.5721 | 0.4415 | #### Nano BEIR * Dataset: `NanoBEIR_mean` * Evaluated with [<code>NanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.NanoBEIREvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.4194 | | cosine_accuracy@3 | 0.6476 | | cosine_accuracy@5 | 0.7169 | | cosine_accuracy@10 | 0.8031 | | cosine_precision@1 | 0.4194 | | cosine_precision@3 | 0.2942 | | cosine_precision@5 | 0.2378 | | cosine_precision@10 | 0.1717 | | cosine_recall@1 | 0.2312 | | cosine_recall@3 | 0.3995 | | cosine_recall@5 | 0.4702 | | cosine_recall@10 | 0.5624 | | **cosine_ndcg@10** | **0.4991** | | cosine_mrr@10 | 0.5464 | | cosine_map@100 | 0.4187 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Datasets #### gooaq * Dataset: [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq) at [b089f72](https://huggingface.co/datasets/sentence-transformers/gooaq/tree/b089f728748a068b7bc5234e5bcf5b25e3c8279c) * Size: 3,012,496 training samples * Columns: <code>question</code> and <code>answer</code> * Approximate statistics based on the first 1000 samples: | | question | answer | |:--------|:-----------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 18 characters</li><li>mean: 43.23 characters</li><li>max: 96 characters</li></ul> | <ul><li>min: 55 characters</li><li>mean: 253.36 characters</li><li>max: 371 characters</li></ul> | * Samples: | question | answer | |:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>what is the difference between broilers and layers?</code> | <code>An egg laying poultry is called egger or layer whereas broilers are reared for obtaining meat. So a layer should be able to produce more number of large sized eggs, without growing too much. On the other hand, a broiler should yield more meat and hence should be able to grow well.</code> | | <code>what is the difference between chronological order and spatial order?</code> | <code>As a writer, you should always remember that unlike chronological order and the other organizational methods for data, spatial order does not take into account the time. Spatial order is primarily focused on the location. All it does is take into account the location of objects and not the time.</code> | | <code>is kamagra same as viagra?</code> | <code>Kamagra is thought to contain the same active ingredient as Viagra, sildenafil citrate. In theory, it should work in much the same way as Viagra, taking about 45 minutes to take effect, and lasting for around 4-6 hours. However, this will vary from person to person.</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 1024, 512, 256, 128, 64, 32 ], "matryoshka_weights": [ 1, 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` #### msmarco * Dataset: [msmarco](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1) at [84ed2d3](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1/tree/84ed2d35626f617d890bd493b4d6db69a741e0e2) * Size: 502,939 training samples * Columns: <code>query</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | query | positive | negative | |:--------|:------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 11 characters</li><li>mean: 33.26 characters</li><li>max: 197 characters</li></ul> | <ul><li>min: 96 characters</li><li>mean: 356.24 characters</li><li>max: 1006 characters</li></ul> | <ul><li>min: 68 characters</li><li>mean: 327.52 characters</li><li>max: 995 characters</li></ul> | * Samples: | query | positive | negative | |:---------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>when was the sullivan acts</code> | <code>Sullivan Act Tim Sullivan, a major Irish criminal passed the Sullivan Act in 1911 to help his constituents rob strangers or to help them against Italian incomers. That is the crux of story that goes with a very early gun control law.</code> | <code>Sullivan Act Tim Sullivan, a major Irish criminal passed the Sullivan Act in 1911 to help his constituents rob strangers or to help them against Italian incomers. That is the crux of story that goes with a very early gun control law.</code> | | <code>can lavender grow indoors</code> | <code>Growing Lavender Indoors. People ALWAYS ask if you can grow lavender indoors. Well, you can, but most Lavender does best outside. Here is our winter experiment to show you what it would look like. This is one of our 4 Lavender Babies from Fall 2010. Our test specimen is L. x intermedia 'Grosso'.</code> | <code>Lavender can be grown indoors with a bit of effort to keep it in the conditions it loves to thrive. First off begin with choosing a variety that is better able to tolerate the conditions inside a home. To successfully grow Lavender indoors you need to create optimal growing conditions which is hard to do inside a house.</code> | | <code>what kind of barley do you malt</code> | <code>Barley is a wonderfully versatile cereal grain with a rich nutlike flavor and an appealing chewy, pasta-like consistency. Its appearance resembles wheat berries, although it is slightly lighter in color. Sprouted barley is naturally high in maltose, a sugar that serves as the basis for both malt syrup sweetener.</code> | <code>Specialty grains that can be used in this way are usually barley, malted or unmalted, that has been treated differently at the malting company. Crystal malt is one of the specialty grains. It is available in a whole range of colors, from 20 to 120 Lovibond. Crystal malt is malted barley that is heated while wet.</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 1024, 512, 256, 128, 64, 32 ], "matryoshka_weights": [ 1, 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` #### s2orc * Dataset: [s2orc](https://huggingface.co/datasets/sentence-transformers/s2orc) at [8cfc394](https://huggingface.co/datasets/sentence-transformers/s2orc/tree/8cfc394e83b2ebfcf38f90b508aea383df742439) * Size: 90,000 training samples * Columns: <code>title</code> and <code>abstract</code> * Approximate statistics based on the first 1000 samples: | | title | abstract | |:--------|:------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 31 characters</li><li>mean: 80.02 characters</li><li>max: 185 characters</li></ul> | <ul><li>min: 84 characters</li><li>mean: 635.31 characters</li><li>max: 1023 characters</li></ul> | * Samples: | title | abstract | |:----------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Modeling Method of Flow Diversion of the Three Outlets in Jingjiang Reach Under Unsteady Flow Conditions</code> | <code>The Yangtze River Flood Protection Physical Model is built under the financial support of World Bank loan.Based on theoretical analysis and experimental study,a modeling method of flow diversion of the three outlets in Jingjiang Reach under unsteady flow conditions was established for the model.Validation tests under both steady and unsteady flow conditions manifested that with this modeling method,the experimental flow diversion proves to be consistent with that of the prototype and therefore meets the requirements for precision.Being validated,this modeling method has been applied to Yangtze River Flood Protection Physical Model to study the flood routing features in Jingjiang reach.</code> | | <code>Enlightening on medical administration by clinical governance in British</code> | <code>Medical quality and safety were the responsibilities of medical system in view of British clinical governance. Medical regulation institutes were considered to be built and be authorized regulation rights. British medical administration was introduced and its enlightening in China was mentioned.</code> | | <code>APPLICATION OF A FUZZY MULTI-CRITERIA DECISION-MAKING MODEL FOR SHIPPING COMPANY PERFORMANCE EVALUATION</code> | <code>Combining fuzzy set theory, Analytic Hierarchy Process (AHP) and concept of entropy, a fuzzy Multiple Criteria Decision-Making (MCDM) model for shipping company performance evaluation is proposed. First, the AHP is used to construct subjective weights for all criteria and sub-criteria. Then, linguistic values characterized by triangular fuzzy numbers and trapezoidal fuzzy numbers are used to denote the evaluation values of all alternatives with respect to various subjective and objective criteria. Finally, the aggregation fuzzy assessment of different shipping companies is ranked to determine the best selection. Utilizing this fuzzy MCDM model, the decision-maker's fuzzy assessment and the trade-off between various evaluations criteria can be taken into account in the aggregation process, thus ensuring more effective and accurate decision-making.</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 1024, 512, 256, 128, 64, 32 ], "matryoshka_weights": [ 1, 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` #### allnli * Dataset: [allnli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab) * Size: 557,850 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 characters</li><li>mean: 34.88 characters</li><li>max: 193 characters</li></ul> | <ul><li>min: 15 characters</li><li>mean: 46.49 characters</li><li>max: 181 characters</li></ul> | <ul><li>min: 16 characters</li><li>mean: 50.47 characters</li><li>max: 204 characters</li></ul> | * Samples: | anchor | positive | negative | |:---------------------------------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------------| | <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>A person is at a diner, ordering an omelette.</code> | | <code>Children smiling and waving at camera</code> | <code>There are children present</code> | <code>The kids are frowning</code> | | <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> | <code>The boy skates down the sidewalk.</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 1024, 512, 256, 128, 64, 32 ], "matryoshka_weights": [ 1, 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` #### paq * Dataset: [paq](https://huggingface.co/datasets/sentence-transformers/paq) at [74601d8](https://huggingface.co/datasets/sentence-transformers/paq/tree/74601d8d731019bc9c627ffc4271cdd640e1e748) * Size: 64,371,441 training samples * Columns: <code>query</code> and <code>answer</code> * Approximate statistics based on the first 1000 samples: | | query | answer | |:--------|:------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 25 characters</li><li>mean: 50.56 characters</li><li>max: 104 characters</li></ul> | <ul><li>min: 509 characters</li><li>mean: 620.96 characters</li><li>max: 773 characters</li></ul> | * Samples: | query | answer | |:----------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>in veetla visheshanga ganesh is the husband of</code> | <code>Veetla Visheshanga a song which reminds Ganga's memory. She is actually not Ganga but Gowri and her lover is the groom named Ganesh. When both were about to marry they were stopped by some goons because of which Gowri fell from the mountain but survived with injuries. Gopal who found the truth brought Ganesh to unite them. Gopal insists Gowri to marry Ganesh as both of them are lovers to which Gowri unwillingly accepts. But while Ganesh tries to tie the Mangal Sutra, Gowri stops him and she goes to Gopal saying that he may not need her but she needs him</code> | | <code>when did simon property group became a publicly traded company</code> | <code>of the S&P 100. Simon Property Group has been the subject of several lawsuits and investigations regarding civil rights and discrimination. Simon Property Group was formed in 1993 when the majority of the shopping center interests of Melvin Simon & Associates became a publicly traded company. Melvin Simon & Associates, owned by brothers Melvin Simon and Herbert Simon, was founded in 1960 in Indianapolis, Indiana, and had long been one of the top shopping center developers in the United States. In 1996, Simon DeBartolo Group was created when Simon Property merged with former rival DeBartolo Realty Corp. This was shortly</code> | | <code>what was the nationality of antoine faivre</code> | <code>Theosophy (Boehmian) below. "Theosophy": The scholar of esotericism Wouter Hanegraaff described Christian theosophy as "one of the major currents in the history of Western esotericism". Christian theosophy is an under-researched area; a general history of it has never been written. The French scholar Antoine Faivre had a specific interest in the theosophers and illuminists of the eighteenth and nineteenth centuries. He wrote his doctoral thesis on Karl von Eckartshausen and Christian theosophy. Scholars of esotericism have argued that Faivre's definition of Western esotericism relies on his own specialist focus on Christian theosophy, Renaissance Hermeticism, and Romantic "Naturphilosophie" and therefore creates an "ideal"</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 1024, 512, 256, 128, 64, 32 ], "matryoshka_weights": [ 1, 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Evaluation Datasets #### gooaq * Dataset: [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq) at [b089f72](https://huggingface.co/datasets/sentence-transformers/gooaq/tree/b089f728748a068b7bc5234e5bcf5b25e3c8279c) * Size: 3,012,496 evaluation samples * Columns: <code>question</code> and <code>answer</code> * Approximate statistics based on the first 1000 samples: | | question | answer | |:--------|:-----------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 18 characters</li><li>mean: 43.17 characters</li><li>max: 98 characters</li></ul> | <ul><li>min: 51 characters</li><li>mean: 254.12 characters</li><li>max: 360 characters</li></ul> | * Samples: | question | answer | |:-----------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>how do i program my directv remote with my tv?</code> | <code>['Press MENU on your remote.', 'Select Settings & Help > Settings > Remote Control > Program Remote.', 'Choose the device (TV, audio, DVD) you wish to program. ... ', 'Follow the on-screen prompts to complete programming.']</code> | | <code>are rodrigues fruit bats nocturnal?</code> | <code>Before its numbers were threatened by habitat destruction, storms, and hunting, some of those groups could number 500 or more members. Sunrise, sunset. Rodrigues fruit bats are most active at dawn, at dusk, and at night.</code> | | <code>why does your heart rate increase during exercise bbc bitesize?</code> | <code>During exercise there is an increase in physical activity and muscle cells respire more than they do when the body is at rest. The heart rate increases during exercise. The rate and depth of breathing increases - this makes sure that more oxygen is absorbed into the blood, and more carbon dioxide is removed from it.</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 1024, 512, 256, 128, 64, 32 ], "matryoshka_weights": [ 1, 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` #### msmarco * Dataset: [msmarco](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1) at [84ed2d3](https://huggingface.co/datasets/sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1/tree/84ed2d35626f617d890bd493b4d6db69a741e0e2) * Size: 502,939 evaluation samples * Columns: <code>query</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | query | positive | negative | |:--------|:------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 characters</li><li>mean: 33.36 characters</li><li>max: 137 characters</li></ul> | <ul><li>min: 67 characters</li><li>mean: 347.87 characters</li><li>max: 906 characters</li></ul> | <ul><li>min: 57 characters</li><li>mean: 318.18 characters</li><li>max: 906 characters</li></ul> | * Samples: | query | positive | negative | |:-------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>is cabinet refacing worth the cost?</code> | <code>Fans of refacing say this mini-makeover can give a kitchen a whole new look at a much lower cost than installing all-new cabinets. Cabinet refacing can save up to 50 percent compared to the cost of replacing, says Cheryl Catalano, owner of Kitchen Solvers, a cabinet refacing franchise in Napierville, Illinois. From.</code> | <code>Most cabinet refacing projects cost about $4,000 to $10,000. The price varies based on the materials you select and the size and configuration of your kitchen. Wood veneer doors, for example, will cost less than solid wood doors.</code> | | <code>is the fovea ethmoidalis a bone</code> | <code>Ethmoid bone/fovea ethmoidalis. The medial portion of the ethmoid bone is a cruciate membranous bone composed of the crista galli, cribriform plate, and perpendicular ethmoidal plate. The crista is a thick piece of bone, shaped like a “cock's comb,” that projects intracranially and attaches to the falx cerebri.</code> | <code>Ethmoid bone/fovea ethmoidalis. The medial portion of the ethmoid bone is a cruciate membranous bone composed of the crista galli, cribriform plate, and perpendicular ethmoidal plate. The crista is a thick piece of bone, shaped like a “cock's comb,” that projects intracranially and attaches to the falx cerebri.</code> | | <code>average pitches per inning</code> | <code>The likelihood of a pitcher completing nine innings if he throws an average of 14 pitches or less per inning is reinforced by the totals of the 89 games in which pitchers did actually complete nine innings of work.</code> | <code>The likelihood of a pitcher completing nine innings if he throws an average of 14 pitches or less per inning is reinforced by the totals of the 89 games in which pitchers did actually complete nine innings of work.</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 1024, 512, 256, 128, 64, 32 ], "matryoshka_weights": [ 1, 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` #### s2orc * Dataset: [s2orc](https://huggingface.co/datasets/sentence-transformers/s2orc) at [8cfc394](https://huggingface.co/datasets/sentence-transformers/s2orc/tree/8cfc394e83b2ebfcf38f90b508aea383df742439) * Size: 10,000 evaluation samples * Columns: <code>title</code> and <code>abstract</code> * Approximate statistics based on the first 1000 samples: | | title | abstract | |:--------|:------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 31 characters</li><li>mean: 80.04 characters</li><li>max: 198 characters</li></ul> | <ul><li>min: 96 characters</li><li>mean: 653.93 characters</li><li>max: 1023 characters</li></ul> | * Samples: | title | abstract | |:-------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Screen Printing Ink Film Thickness Analysis of the Passive RFID Tag Antenna</code> | <code>The relationship between the screen mesh and the theoretical and practical ink film thickness was analyzed based on the main influencing factors of the ink film thickness by screen printing.A calculation model for the ink thickness was established based on the screen under static and compressive deformation.The relation curve between the screen mesh and the ink film thickness was fitted and the suitable printing craft parameter was chosen to print two kinds of RFID tag antennas.The fluctuation of the antenna resistance was analyzed to demonstrate the reliability of the passive RFID tag antenna manufactured by screen printing technology.</code> | | <code>Subclinical organ damage and cardiovascular risk prediction</code> | <code>AbstractTraditional cardiovascular risk factors have poor prognostic value for individuals and screening for subclinical organ damage has been recommended in hypertension in recent guidelines. The aim of this review was to investigate the clinical impact of the additive prognostic information provided by measuring subclinical organ damage. We have (i) reviewed recent studies linking markers of subclinical organ damage in the heart, blood vessels and kidney to cardiovascular risk; (ii) discussed the evidence for improvement in cardiovascular risk prediction using markers of subclinical organ damage; (iii) investigated which and how many markers to measure and (iv) finally discussed whether measuring subclinical organ damage provided benefits beyond risk prediction. In conclusion, more studies and if possible randomized studies are needed to investigate (i) the importance of markers of subclinical organ damage for risk discrimination, calibration and reclassification; and (ii) the econom...</code> | | <code>A Novel Approach to Simulate Climate Change Impacts on Vascular Epiphytes: Case Study in Taiwan</code> | <code>In the wet tropics, epiphytes form a conspicuous layer in the forest canopy, support abundant coexisting biota, and are known to have a critical influence on forest hydrology and nutrient cycling. Since canopy-dwelling plants have no vascular connection to the ground or their host plants, they are likely more sensitive to environmental changes than their soil-rooted counterparts, subsequently regarded as one of the groups most vulnerable to global climate change. Epiphytes have adapted to life in highly dynamic forest canopies by producing many, mostly wind-dispersed, seeds or spores. Consequently, epiphytes should colonize trees rapidly, which, in addition to atmospheric sensitivity and short life cycles, make epiphytes suitable climate change indicators. In this study, we assess the impact of climate change on Taiwanese epiphytes using a modeling approach.</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 1024, 512, 256, 128, 64, 32 ], "matryoshka_weights": [ 1, 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` #### allnli * Dataset: [allnli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab) * Size: 6,584 evaluation samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 15 characters</li><li>mean: 72.82 characters</li><li>max: 300 characters</li></ul> | <ul><li>min: 12 characters</li><li>mean: 34.11 characters</li><li>max: 126 characters</li></ul> | <ul><li>min: 11 characters</li><li>mean: 36.38 characters</li><li>max: 121 characters</li></ul> | * Samples: | anchor | positive | negative | |:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|:--------------------------------------------------------| | <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>The men are fighting outside a deli.</code> | | <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> | <code>Two kids in jackets walk to school.</code> | | <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> | <code>A woman drinks her coffee in a small cafe.</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 1024, 512, 256, 128, 64, 32 ], "matryoshka_weights": [ 1, 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` #### paq * Dataset: [paq](https://huggingface.co/datasets/sentence-transformers/paq) at [74601d8](https://huggingface.co/datasets/sentence-transformers/paq/tree/74601d8d731019bc9c627ffc4271cdd640e1e748) * Size: 64,371,441 evaluation samples * Columns: <code>query</code> and <code>answer</code> * Approximate statistics based on the first 1000 samples: | | query | answer | |:--------|:-----------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 25 characters</li><li>mean: 51.3 characters</li><li>max: 108 characters</li></ul> | <ul><li>min: 504 characters</li><li>mean: 623.09 characters</li><li>max: 835 characters</li></ul> | * Samples: | query | answer | |:---------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>when did season 3 of the voice brasil start</code> | <code>The Voice Brasil (season 3) The third season of "The Voice Brasil", premiered on Rede Globo on September 18, 2014 in the 10:30 p.m. (BRT/AMT) slot immediately following the primetime telenovela "Império". The 22- and 24-year-old sertanejo duo Danilo Reis e Rafael won the competition on December 25, 2014 with 43% of the votes cast. This marked Lulu Santos' first win as a coach, the first stolen artist to win a Brazilian season of "The Voice", and the first time in any "The Voice" franchise that a duo won the competition. Online applications for "The Voice Brasil" were open on</code> | | <code>when did the little ranger first come out</code> | <code>Gang" theme song was an instrumental medley of "London Bridge", "Here We Go Round the Mulberry Bush" and "The Farmer in the Dell". It remained in use until the series ended in 1944. The Little Ranger The Little Ranger is a 1938 "Our Gang" short comedy film directed by Gordon Douglas. It was the 169th short in the "Our Gang" series, and the first produced by Metro-Goldwyn-Mayer, who purchased the rights to the series from creator Hal Roach. Snubbed by his girlfriend Darla, Alfalfa accepts the invitation of tomboyish Muggsy to attend the local picture show. While watching the adventures</code> | | <code>what is the name of rachel's sister in ninjaaiden</code> | <code>her among ten female characters who have never been featured on their games' cover arts, Samir Torres of VentureBeat wrote that while "Team Ninja sexualy exploits all of their female characters, yet Rachel somehow got axed from every modern "Ninja Gaiden" box art." Rachel (Ninja Gaiden) In 2004's "Ninja Gaiden", Rachel is a fiend hunter whom the game's protagonist Ryu Hayabusa meets in the Holy Vigoor Empire, where she is on a mission to destroy the fiends, as well as find her missing sister, Alma, who has become a Greater Fiend. Soon after they first meet, she is captured but</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 1024, 512, 256, 128, 64, 32 ], "matryoshka_weights": [ 1, 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16384 - `per_device_eval_batch_size`: 4096 - `learning_rate`: 0.2 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16384 - `per_device_eval_batch_size`: 4096 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 0.2 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | gooaq loss | msmarco loss | s2orc loss | allnli loss | paq loss | NanoClimateFEVER_cosine_ndcg@10 | NanoDBPedia_cosine_ndcg@10 | NanoFEVER_cosine_ndcg@10 | NanoFiQA2018_cosine_ndcg@10 | NanoHotpotQA_cosine_ndcg@10 | NanoMSMARCO_cosine_ndcg@10 | NanoNFCorpus_cosine_ndcg@10 | NanoNQ_cosine_ndcg@10 | NanoQuoraRetrieval_cosine_ndcg@10 | NanoSCIDOCS_cosine_ndcg@10 | NanoArguAna_cosine_ndcg@10 | NanoSciFact_cosine_ndcg@10 | NanoTouche2020_cosine_ndcg@10 | NanoBEIR_mean_cosine_ndcg@10 | |:------:|:----:|:-------------:|:----------:|:------------:|:----------:|:-----------:|:--------:|:-------------------------------:|:--------------------------:|:------------------------:|:---------------------------:|:---------------------------:|:--------------------------:|:---------------------------:|:---------------------:|:---------------------------------:|:--------------------------:|:--------------------------:|:--------------------------:|:-----------------------------:|:----------------------------:| | 0.0002 | 1 | 43.5181 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | | 0.0597 | 250 | 17.804 | 2.1081 | 12.8291 | 10.8194 | 14.2895 | 5.3792 | 0.3202 | 0.5446 | 0.6721 | 0.3176 | 0.6222 | 0.3867 | 0.3022 | 0.3952 | 0.8741 | 0.2474 | 0.3986 | 0.5913 | 0.5463 | 0.4783 | | 0.1195 | 500 | 9.6842 | 1.6991 | 12.2374 | 10.6084 | 13.9790 | 4.7183 | 0.3148 | 0.5759 | 0.7063 | 0.3640 | 0.6250 | 0.3846 | 0.2832 | 0.4168 | 0.8659 | 0.2537 | 0.3744 | 0.5732 | 0.5509 | 0.4837 | | 0.1792 | 750 | 8.7691 | 1.6922 | 12.0631 | 10.3970 | 12.4485 | 4.4473 | 0.3496 | 0.5664 | 0.7157 | 0.3179 | 0.6585 | 0.3826 | 0.2934 | 0.4040 | 0.8782 | 0.2523 | 0.3845 | 0.5962 | 0.5502 | 0.4884 | | 0.2389 | 1000 | 8.606 | 1.6685 | 11.7765 | 10.2828 | 12.4139 | 4.2823 | 0.3509 | 0.5636 | 0.7026 | 0.3249 | 0.6562 | 0.4049 | 0.3123 | 0.4174 | 0.8673 | 0.2657 | 0.3969 | 0.5582 | 0.5514 | 0.4902 | | 0.2987 | 1250 | 8.4178 | 1.6072 | 11.7581 | 9.2590 | 12.8865 | 4.2231 | 0.3341 | 0.5587 | 0.7103 | 0.3354 | 0.6534 | 0.4033 | 0.3116 | 0.4294 | 0.8663 | 0.2718 | 0.4048 | 0.5891 | 0.5466 | 0.4934 | | 0.3584 | 1500 | 8.1084 | 1.6751 | 11.8237 | 9.8291 | 11.5805 | 4.1559 | 0.3345 | 0.5668 | 0.7094 | 0.3287 | 0.6535 | 0.3948 | 0.3311 | 0.4098 | 0.8632 | 0.2649 | 0.4171 | 0.5913 | 0.5514 | 0.4936 | | 0.4182 | 1750 | 7.9489 | 1.5858 | 11.8367 | 9.8385 | 13.0328 | 4.0980 | 0.3543 | 0.5464 | 0.6984 | 0.3158 | 0.6582 | 0.3862 | 0.3233 | 0.4201 | 0.8665 | 0.2743 | 0.3924 | 0.5909 | 0.5577 | 0.4911 | | 0.4779 | 2000 | 8.2594 | 1.6123 | 11.8052 | 9.9075 | 11.3651 | 4.0788 | 0.3491 | 0.5551 | 0.7208 | 0.3235 | 0.6570 | 0.4058 | 0.3220 | 0.4215 | 0.8801 | 0.2629 | 0.4143 | 0.5998 | 0.5514 | 0.4972 | | 0.5376 | 2250 | 8.299 | 1.6416 | 11.7180 | 9.9462 | 10.7895 | 4.0423 | 0.3636 | 0.5582 | 0.7071 | 0.3048 | 0.6649 | 0.3951 | 0.3248 | 0.4316 | 0.8804 | 0.2561 | 0.4252 | 0.6036 | 0.5484 | 0.4972 | | 0.5974 | 2500 | 7.7807 | 1.6518 | 11.7898 | 9.9235 | 11.1670 | 4.0001 | 0.3639 | 0.5556 | 0.7288 | 0.3148 | 0.6525 | 0.3979 | 0.3178 | 0.4436 | 0.8860 | 0.2593 | 0.4208 | 0.5935 | 0.5581 | 0.4994 | | 0.6571 | 2750 | 7.8997 | 1.5797 | 11.6813 | 9.5124 | 11.4893 | 3.9633 | 0.3465 | 0.5562 | 0.7084 | 0.3101 | 0.6631 | 0.4102 | 0.3194 | 0.4410 | 0.8805 | 0.2566 | 0.4261 | 0.5983 | 0.5552 | 0.4978 | | 0.7168 | 3000 | 8.0204 | 1.5620 | 11.6746 | 9.6655 | 10.8783 | 3.9539 | 0.3439 | 0.5569 | 0.7295 | 0.3173 | 0.6606 | 0.4129 | 0.3180 | 0.4521 | 0.8888 | 0.2576 | 0.4012 | 0.6065 | 0.5560 | 0.5001 | | 0.7766 | 3250 | 8.0225 | 1.4596 | 11.5664 | 9.6954 | 10.9838 | 3.9493 | 0.3496 | 0.5626 | 0.7239 | 0.3330 | 0.6551 | 0.4197 | 0.3129 | 0.4491 | 0.8893 | 0.2726 | 0.4061 | 0.6103 | 0.5555 | 0.5031 | | 0.8363 | 3500 | 7.6933 | 1.5522 | 11.6974 | 9.1753 | 11.2026 | 3.9082 | 0.3581 | 0.5570 | 0.7170 | 0.3216 | 0.6492 | 0.4018 | 0.3204 | 0.4360 | 0.8841 | 0.2675 | 0.4031 | 0.6052 | 0.5553 | 0.4982 | | 0.8961 | 3750 | 7.711 | 1.5267 | 11.6615 | 9.4673 | 11.3195 | 3.8847 | 0.3563 | 0.5613 | 0.7162 | 0.3265 | 0.6497 | 0.4109 | 0.3253 | 0.4384 | 0.8713 | 0.2657 | 0.4195 | 0.6058 | 0.5566 | 0.5003 | | 0.9558 | 4000 | 7.8549 | 1.5300 | 11.6244 | 9.1383 | 11.0781 | 3.8785 | 0.3533 | 0.5609 | 0.7153 | 0.3285 | 0.6528 | 0.4069 | 0.3250 | 0.4382 | 0.8744 | 0.2642 | 0.4068 | 0.5961 | 0.5595 | 0.4986 | | 1.0 | 4185 | - | - | - | - | - | - | 0.3503 | 0.5612 | 0.7223 | 0.3321 | 0.6508 | 0.4069 | 0.3251 | 0.4285 | 0.8744 | 0.2658 | 0.4064 | 0.6054 | 0.5595 | 0.4991 | ### Framework Versions - Python: 3.10.15 - Sentence Transformers: 3.3.1 - Transformers: 4.47.1 - PyTorch: 2.4.1 - Accelerate: 1.1.1 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION" ]
[ "CRAFT" ]
Non_BioNLP
croissantllm/base_155k
croissantllm
text2text-generation
[ "transformers", "pytorch", "llama", "text-generation", "legal", "code", "text-generation-inference", "art", "text2text-generation", "fr", "en", "dataset:cerebras/SlimPajama-627B", "dataset:uonlp/CulturaX", "dataset:pg19", "dataset:bigcode/starcoderdata", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,705
1,706
8
0
--- datasets: - cerebras/SlimPajama-627B - uonlp/CulturaX - pg19 - bigcode/starcoderdata language: - fr - en license: mit pipeline_tag: text2text-generation tags: - legal - code - text-generation-inference - art --- # CroissantLLM - Base (155k steps) This model is part of the CroissantLLM initiative, and corresponds to the checkpoint after 155k steps (2.44 T) tokens. To play with the final model, we recommend using the Chat version: https://huggingface.co/croissantllm/CroissantLLMChat-v0.1. ## Abstract We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware. To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources. To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French Language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases, and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models, and strong translation models. We evaluate our model through the FMTI framework, and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives. This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models. ## Citation Our work can be cited as: ```bash Coming soon ``` ## Usage This model is a base model, that is, it is not finetuned for Chat function and works best with few-shot prompting strategies. ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "croissantllm/base_155k" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto") inputs = tokenizer("I am so tired I could sleep right now. -> Je suis si fatigué que je pourrais m'endormir maintenant. He is heading to the market. -> Il va au marché. We are running on the beach. ->", return_tensors="pt").to(model.device) tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60, temperature=0.5) print(tokenizer.decode(tokens[0])) # remove bos token inputs = tokenizer("Capitales: France -> Paris, Italie -> Rome, Allemagne -> Berlin, Espagne ->", return_tensors="pt", add_special_tokens=True).to(model.device) tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60) print(tokenizer.decode(tokens[0])) ```
[ "TRANSLATION" ]
[ "CRAFT" ]
Non_BioNLP
EleutherAI/pythia-70m-deduped
EleutherAI
text-generation
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "causal-lm", "pythia", "en", "dataset:EleutherAI/the_pile_deduplicated", "arxiv:2304.01373", "arxiv:2101.00027", "arxiv:2201.07311", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
1,676
1,688
191,714
25
--- datasets: - EleutherAI/the_pile_deduplicated language: - en license: apache-2.0 tags: - pytorch - causal-lm - pythia --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf). It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. We also provide 154 intermediate checkpoints per model, hosted on Hugging Face as branches. The Pythia model suite was designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. <details> <summary style="font-weight:600">Details on previous early release and naming convention.</summary> Previously, we released an early version of the Pythia suite to the public. However, we decided to retrain the model suite to address a few hyperparameter discrepancies. This model card <a href="#changelog">lists the changes</a>; see appendix B in the Pythia paper for further discussion. We found no difference in benchmark performance between the two Pythia versions. The old models are [still available](https://huggingface.co/models?other=pythia_v0), but we suggest the retrained suite if you are just starting to use Pythia.<br> **This is the current release.** Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. </details> <br> # Pythia-70M-deduped ## Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. [See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation details. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther. ai](mailto:[email protected]). <figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ## Uses and Limitations ### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. We also provide 154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints `step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to `step143000`. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-70M-deduped for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please conduct your own risk and bias assessment. ### Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not a in itself a product and cannot be used for human-facing interactions. For example, the model may generate harmful or offensive text. Please evaluate the risks associated with your particular use case. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-70M-deduped has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-70M-deduped will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “follow” human instructions. ### Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token used by the model need not produce the most “accurate” text. Never rely on Pythia-70M-deduped to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-70M-deduped may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-70M-deduped. ### Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br> For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). ## Training ### Training data Pythia-70M-deduped was trained on the Pile **after the dataset has been globally deduplicated**.<br> [The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/). ### Training procedure All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training, from `step1000` to `step143000` (which is the same as `main`). In addition, we also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models trained for 143000 steps at a batch size of 2M (2,097,152 tokens).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX- 20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ## Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. <details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge—Easy Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/> </details> ## Changelog This section compares differences between previously released [Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current models. See Appendix B of the Pythia paper for further discussion of these changes and the motivation behind them. We found that retraining Pythia had no impact on benchmark performance. - All model sizes are now trained with uniform batch size of 2M tokens. Previously, the models of size 160M, 410M, and 1.4B parameters were trained with batch sizes of 4M tokens. - We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64, 128,256,512} in addition to every 1000 training steps. - Flash Attention was used in the new retrained suite. - We remedied a minor inconsistency that existed in the original suite: all models of size 2.8B parameters or smaller had a learning rate (LR) schedule which decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and 12B models all used an LR schedule which decayed to a minimum LR of 0. In the redone training runs, we rectified this inconsistency: all models now were trained with LR decaying to a minimum of 0.1× their maximum LR. ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. <figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
[ "QUESTION_ANSWERING", "TRANSLATION" ]
[ "SCIQ" ]
Non_BioNLP
anuccikpmg/multilingual-e5-large-instruct-openvino-int8
anuccikpmg
feature-extraction
[ "sentence-transformers", "openvino", "xlm-roberta", "feature-extraction", "mteb", "transformers", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "arxiv:2402.05672", "arxiv:2401.00368", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,737
1,737
9
0
--- language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: mit tags: - mteb - sentence-transformers - transformers model-index: - name: multilingual-e5-large-instruct results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.23880597014924 - type: ap value: 39.07351965022687 - type: f1 value: 70.04836733862683 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (de) type: mteb/amazon_counterfactual config: de split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 66.71306209850107 - type: ap value: 79.01499914759529 - type: f1 value: 64.81951817560703 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en-ext) type: mteb/amazon_counterfactual config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 73.85307346326837 - type: ap value: 22.447519885878737 - type: f1 value: 61.0162730745633 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (ja) type: mteb/amazon_counterfactual config: ja split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.04925053533191 - type: ap value: 23.44983217128922 - type: f1 value: 62.5723230907759 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 96.28742500000001 - type: ap value: 94.8449918887462 - type: f1 value: 96.28680923610432 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 56.716 - type: f1 value: 55.76510398266401 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (de) type: mteb/amazon_reviews_multi config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 52.99999999999999 - type: f1 value: 52.00829994765178 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (es) type: mteb/amazon_reviews_multi config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.806000000000004 - type: f1 value: 48.082345914983634 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.507999999999996 - type: f1 value: 47.68752844642045 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (ja) type: mteb/amazon_reviews_multi config: ja split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.709999999999994 - type: f1 value: 47.05870376637181 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 44.662000000000006 - type: f1 value: 43.42371965372771 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 31.721 - type: map_at_10 value: 49.221 - type: map_at_100 value: 49.884 - type: map_at_1000 value: 49.888 - type: map_at_3 value: 44.31 - type: map_at_5 value: 47.276 - type: mrr_at_1 value: 32.432 - type: mrr_at_10 value: 49.5 - type: mrr_at_100 value: 50.163000000000004 - type: mrr_at_1000 value: 50.166 - type: mrr_at_3 value: 44.618 - type: mrr_at_5 value: 47.541 - type: ndcg_at_1 value: 31.721 - type: ndcg_at_10 value: 58.384 - type: ndcg_at_100 value: 61.111000000000004 - type: ndcg_at_1000 value: 61.187999999999995 - type: ndcg_at_3 value: 48.386 - type: ndcg_at_5 value: 53.708999999999996 - type: precision_at_1 value: 31.721 - type: precision_at_10 value: 8.741 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 20.057 - type: precision_at_5 value: 14.609 - type: recall_at_1 value: 31.721 - type: recall_at_10 value: 87.411 - type: recall_at_100 value: 99.075 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 60.171 - type: recall_at_5 value: 73.044 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 46.40419580759799 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 40.48593255007969 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 63.889179122289995 - type: mrr value: 77.61146286769556 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 88.15075203727929 - type: cos_sim_spearman value: 86.9622224570873 - type: euclidean_pearson value: 86.70473853624121 - type: euclidean_spearman value: 86.9622224570873 - type: manhattan_pearson value: 86.21089380980065 - type: manhattan_spearman value: 86.75318154937008 - task: type: BitextMining dataset: name: MTEB BUCC (de-en) type: mteb/bucc-bitext-mining config: de-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.65553235908142 - type: f1 value: 99.60681976339595 - type: precision value: 99.58246346555325 - type: recall value: 99.65553235908142 - task: type: BitextMining dataset: name: MTEB BUCC (fr-en) type: mteb/bucc-bitext-mining config: fr-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.26260180497468 - type: f1 value: 99.14520507740848 - type: precision value: 99.08650671362535 - type: recall value: 99.26260180497468 - task: type: BitextMining dataset: name: MTEB BUCC (ru-en) type: mteb/bucc-bitext-mining config: ru-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 98.07412538967787 - type: f1 value: 97.86629719431936 - type: precision value: 97.76238309664012 - type: recall value: 98.07412538967787 - task: type: BitextMining dataset: name: MTEB BUCC (zh-en) type: mteb/bucc-bitext-mining config: zh-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.42074776197998 - type: f1 value: 99.38564156573635 - type: precision value: 99.36808846761454 - type: recall value: 99.42074776197998 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 85.73376623376623 - type: f1 value: 85.68480707214599 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 40.935218072113855 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 36.276389017675264 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 27.764166666666668 - type: map_at_10 value: 37.298166666666674 - type: map_at_100 value: 38.530166666666666 - type: map_at_1000 value: 38.64416666666667 - type: map_at_3 value: 34.484833333333334 - type: map_at_5 value: 36.0385 - type: mrr_at_1 value: 32.93558333333333 - type: mrr_at_10 value: 41.589749999999995 - type: mrr_at_100 value: 42.425333333333334 - type: mrr_at_1000 value: 42.476333333333336 - type: mrr_at_3 value: 39.26825 - type: mrr_at_5 value: 40.567083333333336 - type: ndcg_at_1 value: 32.93558333333333 - type: ndcg_at_10 value: 42.706583333333334 - type: ndcg_at_100 value: 47.82483333333333 - type: ndcg_at_1000 value: 49.95733333333334 - type: ndcg_at_3 value: 38.064750000000004 - type: ndcg_at_5 value: 40.18158333333333 - type: precision_at_1 value: 32.93558333333333 - type: precision_at_10 value: 7.459833333333334 - type: precision_at_100 value: 1.1830833333333335 - type: precision_at_1000 value: 0.15608333333333332 - type: precision_at_3 value: 17.5235 - type: precision_at_5 value: 12.349833333333333 - type: recall_at_1 value: 27.764166666666668 - type: recall_at_10 value: 54.31775 - type: recall_at_100 value: 76.74350000000001 - type: recall_at_1000 value: 91.45208333333332 - type: recall_at_3 value: 41.23425 - type: recall_at_5 value: 46.73983333333334 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 12.969 - type: map_at_10 value: 21.584999999999997 - type: map_at_100 value: 23.3 - type: map_at_1000 value: 23.5 - type: map_at_3 value: 18.218999999999998 - type: map_at_5 value: 19.983 - type: mrr_at_1 value: 29.316 - type: mrr_at_10 value: 40.033 - type: mrr_at_100 value: 40.96 - type: mrr_at_1000 value: 41.001 - type: mrr_at_3 value: 37.123 - type: mrr_at_5 value: 38.757999999999996 - type: ndcg_at_1 value: 29.316 - type: ndcg_at_10 value: 29.858 - type: ndcg_at_100 value: 36.756 - type: ndcg_at_1000 value: 40.245999999999995 - type: ndcg_at_3 value: 24.822 - type: ndcg_at_5 value: 26.565 - type: precision_at_1 value: 29.316 - type: precision_at_10 value: 9.186 - type: precision_at_100 value: 1.6549999999999998 - type: precision_at_1000 value: 0.22999999999999998 - type: precision_at_3 value: 18.436 - type: precision_at_5 value: 13.876 - type: recall_at_1 value: 12.969 - type: recall_at_10 value: 35.142 - type: recall_at_100 value: 59.143 - type: recall_at_1000 value: 78.594 - type: recall_at_3 value: 22.604 - type: recall_at_5 value: 27.883000000000003 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.527999999999999 - type: map_at_10 value: 17.974999999999998 - type: map_at_100 value: 25.665 - type: map_at_1000 value: 27.406000000000002 - type: map_at_3 value: 13.017999999999999 - type: map_at_5 value: 15.137 - type: mrr_at_1 value: 62.5 - type: mrr_at_10 value: 71.891 - type: mrr_at_100 value: 72.294 - type: mrr_at_1000 value: 72.296 - type: mrr_at_3 value: 69.958 - type: mrr_at_5 value: 71.121 - type: ndcg_at_1 value: 50.875 - type: ndcg_at_10 value: 38.36 - type: ndcg_at_100 value: 44.235 - type: ndcg_at_1000 value: 52.154 - type: ndcg_at_3 value: 43.008 - type: ndcg_at_5 value: 40.083999999999996 - type: precision_at_1 value: 62.5 - type: precision_at_10 value: 30.0 - type: precision_at_100 value: 10.038 - type: precision_at_1000 value: 2.0869999999999997 - type: precision_at_3 value: 46.833000000000006 - type: precision_at_5 value: 38.800000000000004 - type: recall_at_1 value: 8.527999999999999 - type: recall_at_10 value: 23.828 - type: recall_at_100 value: 52.322 - type: recall_at_1000 value: 77.143 - type: recall_at_3 value: 14.136000000000001 - type: recall_at_5 value: 17.761 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 51.51 - type: f1 value: 47.632159862049896 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 60.734 - type: map_at_10 value: 72.442 - type: map_at_100 value: 72.735 - type: map_at_1000 value: 72.75 - type: map_at_3 value: 70.41199999999999 - type: map_at_5 value: 71.80499999999999 - type: mrr_at_1 value: 65.212 - type: mrr_at_10 value: 76.613 - type: mrr_at_100 value: 76.79899999999999 - type: mrr_at_1000 value: 76.801 - type: mrr_at_3 value: 74.8 - type: mrr_at_5 value: 76.12400000000001 - type: ndcg_at_1 value: 65.212 - type: ndcg_at_10 value: 77.988 - type: ndcg_at_100 value: 79.167 - type: ndcg_at_1000 value: 79.452 - type: ndcg_at_3 value: 74.362 - type: ndcg_at_5 value: 76.666 - type: precision_at_1 value: 65.212 - type: precision_at_10 value: 10.003 - type: precision_at_100 value: 1.077 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 29.518 - type: precision_at_5 value: 19.016 - type: recall_at_1 value: 60.734 - type: recall_at_10 value: 90.824 - type: recall_at_100 value: 95.71600000000001 - type: recall_at_1000 value: 97.577 - type: recall_at_3 value: 81.243 - type: recall_at_5 value: 86.90299999999999 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 23.845 - type: map_at_10 value: 39.281 - type: map_at_100 value: 41.422 - type: map_at_1000 value: 41.593 - type: map_at_3 value: 34.467 - type: map_at_5 value: 37.017 - type: mrr_at_1 value: 47.531 - type: mrr_at_10 value: 56.204 - type: mrr_at_100 value: 56.928999999999995 - type: mrr_at_1000 value: 56.962999999999994 - type: mrr_at_3 value: 54.115 - type: mrr_at_5 value: 55.373000000000005 - type: ndcg_at_1 value: 47.531 - type: ndcg_at_10 value: 47.711999999999996 - type: ndcg_at_100 value: 54.510999999999996 - type: ndcg_at_1000 value: 57.103 - type: ndcg_at_3 value: 44.145 - type: ndcg_at_5 value: 45.032 - type: precision_at_1 value: 47.531 - type: precision_at_10 value: 13.194 - type: precision_at_100 value: 2.045 - type: precision_at_1000 value: 0.249 - type: precision_at_3 value: 29.424 - type: precision_at_5 value: 21.451 - type: recall_at_1 value: 23.845 - type: recall_at_10 value: 54.967 - type: recall_at_100 value: 79.11399999999999 - type: recall_at_1000 value: 94.56700000000001 - type: recall_at_3 value: 40.256 - type: recall_at_5 value: 46.215 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 37.819 - type: map_at_10 value: 60.889 - type: map_at_100 value: 61.717999999999996 - type: map_at_1000 value: 61.778 - type: map_at_3 value: 57.254000000000005 - type: map_at_5 value: 59.541 - type: mrr_at_1 value: 75.638 - type: mrr_at_10 value: 82.173 - type: mrr_at_100 value: 82.362 - type: mrr_at_1000 value: 82.37 - type: mrr_at_3 value: 81.089 - type: mrr_at_5 value: 81.827 - type: ndcg_at_1 value: 75.638 - type: ndcg_at_10 value: 69.317 - type: ndcg_at_100 value: 72.221 - type: ndcg_at_1000 value: 73.382 - type: ndcg_at_3 value: 64.14 - type: ndcg_at_5 value: 67.07600000000001 - type: precision_at_1 value: 75.638 - type: precision_at_10 value: 14.704999999999998 - type: precision_at_100 value: 1.698 - type: precision_at_1000 value: 0.185 - type: precision_at_3 value: 41.394999999999996 - type: precision_at_5 value: 27.162999999999997 - type: recall_at_1 value: 37.819 - type: recall_at_10 value: 73.52499999999999 - type: recall_at_100 value: 84.875 - type: recall_at_1000 value: 92.559 - type: recall_at_3 value: 62.092999999999996 - type: recall_at_5 value: 67.907 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 94.60079999999999 - type: ap value: 92.67396345347356 - type: f1 value: 94.5988098167121 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 21.285 - type: map_at_10 value: 33.436 - type: map_at_100 value: 34.63 - type: map_at_1000 value: 34.681 - type: map_at_3 value: 29.412 - type: map_at_5 value: 31.715 - type: mrr_at_1 value: 21.848 - type: mrr_at_10 value: 33.979 - type: mrr_at_100 value: 35.118 - type: mrr_at_1000 value: 35.162 - type: mrr_at_3 value: 30.036 - type: mrr_at_5 value: 32.298 - type: ndcg_at_1 value: 21.862000000000002 - type: ndcg_at_10 value: 40.43 - type: ndcg_at_100 value: 46.17 - type: ndcg_at_1000 value: 47.412 - type: ndcg_at_3 value: 32.221 - type: ndcg_at_5 value: 36.332 - type: precision_at_1 value: 21.862000000000002 - type: precision_at_10 value: 6.491 - type: precision_at_100 value: 0.935 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 13.744 - type: precision_at_5 value: 10.331999999999999 - type: recall_at_1 value: 21.285 - type: recall_at_10 value: 62.083 - type: recall_at_100 value: 88.576 - type: recall_at_1000 value: 98.006 - type: recall_at_3 value: 39.729 - type: recall_at_5 value: 49.608000000000004 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.92612859097127 - type: f1 value: 93.82370333372853 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (de) type: mteb/mtop_domain config: de split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.67681036911807 - type: f1 value: 92.14191382411472 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (es) type: mteb/mtop_domain config: es split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.26817878585723 - type: f1 value: 91.92824250337878 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (fr) type: mteb/mtop_domain config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.96554963983714 - type: f1 value: 90.02859329630792 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (hi) type: mteb/mtop_domain config: hi split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 90.02509860164935 - type: f1 value: 89.30665159182062 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (th) type: mteb/mtop_domain config: th split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 87.55515370705244 - type: f1 value: 87.94449232331907 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 82.4623803009576 - type: f1 value: 66.06738378772725 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (de) type: mteb/mtop_intent config: de split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 79.3716539870386 - type: f1 value: 60.37614033396853 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (es) type: mteb/mtop_intent config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 80.34022681787857 - type: f1 value: 58.302008026952 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (fr) type: mteb/mtop_intent config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 76.72095208268087 - type: f1 value: 59.64524724009049 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (hi) type: mteb/mtop_intent config: hi split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 77.87020437432773 - type: f1 value: 57.80202694670567 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (th) type: mteb/mtop_intent config: th split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 77.73598553345387 - type: f1 value: 58.19628250675031 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (af) type: mteb/amazon_massive_intent config: af split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.6630800268998 - type: f1 value: 65.00996668051691 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (am) type: mteb/amazon_massive_intent config: am split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.7128446536651 - type: f1 value: 57.95860594874963 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ar) type: mteb/amazon_massive_intent config: ar split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.61129791526563 - type: f1 value: 59.75328290206483 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (az) type: mteb/amazon_massive_intent config: az split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.00134498991257 - type: f1 value: 67.0230483991802 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (bn) type: mteb/amazon_massive_intent config: bn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.54068594485541 - type: f1 value: 65.54604628946976 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (cy) type: mteb/amazon_massive_intent config: cy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.032952252858095 - type: f1 value: 58.715741857057104 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (da) type: mteb/amazon_massive_intent config: da split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.80901143241427 - type: f1 value: 68.33963989243877 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (de) type: mteb/amazon_massive_intent config: de split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.47141896435777 - type: f1 value: 69.56765020308262 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (el) type: mteb/amazon_massive_intent config: el split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.2373907195696 - type: f1 value: 69.04529836036467 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 77.05783456624076 - type: f1 value: 74.69430584708174 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (es) type: mteb/amazon_massive_intent config: es split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.82111634162744 - type: f1 value: 70.77228952803762 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fa) type: mteb/amazon_massive_intent config: fa split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.25353059852051 - type: f1 value: 71.05310103416411 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fi) type: mteb/amazon_massive_intent config: fi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.28648285137861 - type: f1 value: 69.08020473732226 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fr) type: mteb/amazon_massive_intent config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.31540013449899 - type: f1 value: 70.9426355465791 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (he) type: mteb/amazon_massive_intent config: he split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.2151983860121 - type: f1 value: 67.52541755908858 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hi) type: mteb/amazon_massive_intent config: hi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.58372562205784 - type: f1 value: 69.49769064229827 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hu) type: mteb/amazon_massive_intent config: hu split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.9233355749832 - type: f1 value: 69.36311548259593 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hy) type: mteb/amazon_massive_intent config: hy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.07330195023538 - type: f1 value: 64.99882022345572 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (id) type: mteb/amazon_massive_intent config: id split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.62273032952253 - type: f1 value: 70.6394885471001 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (is) type: mteb/amazon_massive_intent config: is split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.77000672494957 - type: f1 value: 62.9368944815065 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (it) type: mteb/amazon_massive_intent config: it split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.453261600538 - type: f1 value: 70.85069934666681 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ja) type: mteb/amazon_massive_intent config: ja split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.6906523201076 - type: f1 value: 72.03249740074217 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (jv) type: mteb/amazon_massive_intent config: jv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.03631472763953 - type: f1 value: 59.3165215571852 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ka) type: mteb/amazon_massive_intent config: ka split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.913920645595155 - type: f1 value: 57.367337711611285 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (km) type: mteb/amazon_massive_intent config: km split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.42837928715535 - type: f1 value: 52.60527294970906 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (kn) type: mteb/amazon_massive_intent config: kn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.33490248823135 - type: f1 value: 63.213340969404065 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ko) type: mteb/amazon_massive_intent config: ko split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.58507061197041 - type: f1 value: 68.40256628040486 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (lv) type: mteb/amazon_massive_intent config: lv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.11230665770006 - type: f1 value: 66.44863577842305 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ml) type: mteb/amazon_massive_intent config: ml split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.70073974445192 - type: f1 value: 67.21291337273702 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (mn) type: mteb/amazon_massive_intent config: mn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.43913920645595 - type: f1 value: 64.09838087422806 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ms) type: mteb/amazon_massive_intent config: ms split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.80026899798251 - type: f1 value: 68.76986742962444 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (my) type: mteb/amazon_massive_intent config: my split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.78816408876934 - type: f1 value: 62.18781873428972 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nb) type: mteb/amazon_massive_intent config: nb split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.6577000672495 - type: f1 value: 68.75171511133003 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nl) type: mteb/amazon_massive_intent config: nl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.42501681237391 - type: f1 value: 71.18434963451544 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.64828513786146 - type: f1 value: 70.67741914007422 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pt) type: mteb/amazon_massive_intent config: pt split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.62811028917284 - type: f1 value: 71.36402039740959 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ro) type: mteb/amazon_massive_intent config: ro split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.88634835238736 - type: f1 value: 69.23701923480677 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ru) type: mteb/amazon_massive_intent config: ru split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.15938130464022 - type: f1 value: 71.87792218993388 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sl) type: mteb/amazon_massive_intent config: sl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.96301277740416 - type: f1 value: 67.29584200202983 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sq) type: mteb/amazon_massive_intent config: sq split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.49562878278412 - type: f1 value: 66.91716685679431 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sv) type: mteb/amazon_massive_intent config: sv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.6805648957633 - type: f1 value: 72.02723592594374 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sw) type: mteb/amazon_massive_intent config: sw split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.00605245460659 - type: f1 value: 60.16716669482932 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ta) type: mteb/amazon_massive_intent config: ta split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.90988567585742 - type: f1 value: 63.99405488777784 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (te) type: mteb/amazon_massive_intent config: te split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.62273032952253 - type: f1 value: 65.17213906909481 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (th) type: mteb/amazon_massive_intent config: th split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.50907868190988 - type: f1 value: 69.15165697194853 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tl) type: mteb/amazon_massive_intent config: tl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.30733019502352 - type: f1 value: 66.69024007380474 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tr) type: mteb/amazon_massive_intent config: tr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.24277067921989 - type: f1 value: 68.80515408492947 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ur) type: mteb/amazon_massive_intent config: ur split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.49831876260929 - type: f1 value: 64.83778567111116 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (vi) type: mteb/amazon_massive_intent config: vi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.28782784129119 - type: f1 value: 69.3294186700733 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.315400134499 - type: f1 value: 71.22674385243207 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-TW) type: mteb/amazon_massive_intent config: zh-TW split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.37794216543377 - type: f1 value: 68.96962492838232 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (af) type: mteb/amazon_massive_scenario config: af split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.33557498318764 - type: f1 value: 72.28949738478356 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (am) type: mteb/amazon_massive_scenario config: am split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 65.84398117014123 - type: f1 value: 64.71026362091463 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ar) type: mteb/amazon_massive_scenario config: ar split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.76462676529925 - type: f1 value: 69.8229667407667 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (az) type: mteb/amazon_massive_scenario config: az split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.02420981842636 - type: f1 value: 71.76576384895898 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (bn) type: mteb/amazon_massive_scenario config: bn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.7572293207801 - type: f1 value: 72.76840765295256 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (cy) type: mteb/amazon_massive_scenario config: cy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.02286482851379 - type: f1 value: 66.17237947327872 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (da) type: mteb/amazon_massive_scenario config: da split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.60928043039678 - type: f1 value: 77.27094731234773 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (de) type: mteb/amazon_massive_scenario config: de split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.68325487558843 - type: f1 value: 77.97530399082261 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (el) type: mteb/amazon_massive_scenario config: el split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.13315400134498 - type: f1 value: 75.97558584796424 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 80.47410894418292 - type: f1 value: 80.52244841473792 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (es) type: mteb/amazon_massive_scenario config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.9670477471419 - type: f1 value: 77.37318805793146 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fa) type: mteb/amazon_massive_scenario config: fa split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 78.09683927370544 - type: f1 value: 77.69773737430847 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fi) type: mteb/amazon_massive_scenario config: fi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.20847343644922 - type: f1 value: 75.17071738727348 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fr) type: mteb/amazon_massive_scenario config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.07464694014796 - type: f1 value: 77.16136207698571 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (he) type: mteb/amazon_massive_scenario config: he split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.53396099529255 - type: f1 value: 73.58296404484122 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hi) type: mteb/amazon_massive_scenario config: hi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.75319435104237 - type: f1 value: 75.24674707850833 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hu) type: mteb/amazon_massive_scenario config: hu split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.0948217888366 - type: f1 value: 76.47559490205028 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hy) type: mteb/amazon_massive_scenario config: hy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.07599193006052 - type: f1 value: 70.76028043093511 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (id) type: mteb/amazon_massive_scenario config: id split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.10490921318089 - type: f1 value: 77.01215275283272 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (is) type: mteb/amazon_massive_scenario config: is split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.25756556825824 - type: f1 value: 70.20605314648762 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (it) type: mteb/amazon_massive_scenario config: it split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.08137188971082 - type: f1 value: 77.3899269057439 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ja) type: mteb/amazon_massive_scenario config: ja split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 79.35440484196369 - type: f1 value: 79.58964690002772 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (jv) type: mteb/amazon_massive_scenario config: jv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.42299932750504 - type: f1 value: 68.07844356925413 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ka) type: mteb/amazon_massive_scenario config: ka split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.15669132481507 - type: f1 value: 65.89383352608513 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (km) type: mteb/amazon_massive_scenario config: km split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.11432414256894 - type: f1 value: 57.69910594559806 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (kn) type: mteb/amazon_massive_scenario config: kn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.24747814391392 - type: f1 value: 70.42455553830918 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ko) type: mteb/amazon_massive_scenario config: ko split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.46267652992603 - type: f1 value: 76.8854559308316 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (lv) type: mteb/amazon_massive_scenario config: lv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.24815063887021 - type: f1 value: 72.77805034658074 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ml) type: mteb/amazon_massive_scenario config: ml split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.11566913248151 - type: f1 value: 73.86147988001356 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (mn) type: mteb/amazon_massive_scenario config: mn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.0168123739072 - type: f1 value: 69.38515920054571 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ms) type: mteb/amazon_massive_scenario config: ms split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.41156691324814 - type: f1 value: 73.43474953408237 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (my) type: mteb/amazon_massive_scenario config: my split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.39609952925353 - type: f1 value: 67.29731681109291 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nb) type: mteb/amazon_massive_scenario config: nb split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.20914593140552 - type: f1 value: 77.07066497935367 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nl) type: mteb/amazon_massive_scenario config: nl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 78.52387357094821 - type: f1 value: 78.5259569473291 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.6913248150639 - type: f1 value: 76.91201656350455 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pt) type: mteb/amazon_massive_scenario config: pt split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.1217215870881 - type: f1 value: 77.41179937912504 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ro) type: mteb/amazon_massive_scenario config: ro split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.25891055817083 - type: f1 value: 75.8089244542887 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ru) type: mteb/amazon_massive_scenario config: ru split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.70679219905851 - type: f1 value: 78.21459594517711 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sl) type: mteb/amazon_massive_scenario config: sl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.83523873570948 - type: f1 value: 74.86847028401978 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sq) type: mteb/amazon_massive_scenario config: sq split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.71755211835911 - type: f1 value: 74.0214326485662 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sv) type: mteb/amazon_massive_scenario config: sv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 79.06523201075991 - type: f1 value: 79.10545620325138 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sw) type: mteb/amazon_massive_scenario config: sw split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.91862811028918 - type: f1 value: 66.50386121217983 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ta) type: mteb/amazon_massive_scenario config: ta split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.93140551445865 - type: f1 value: 70.755435928495 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (te) type: mteb/amazon_massive_scenario config: te split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.40753194351042 - type: f1 value: 71.61816115782923 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (th) type: mteb/amazon_massive_scenario config: th split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.1815736381977 - type: f1 value: 75.08016717887205 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tl) type: mteb/amazon_massive_scenario config: tl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.86482851378614 - type: f1 value: 72.39521180006291 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tr) type: mteb/amazon_massive_scenario config: tr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.46940147948891 - type: f1 value: 76.70044085362349 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ur) type: mteb/amazon_massive_scenario config: ur split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.89307330195024 - type: f1 value: 71.5721825332298 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (vi) type: mteb/amazon_massive_scenario config: vi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.7511768661735 - type: f1 value: 75.17918654541515 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 78.69535978480162 - type: f1 value: 78.90019070153316 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-TW) type: mteb/amazon_massive_scenario config: zh-TW split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.45729657027572 - type: f1 value: 76.19578371794672 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 36.92715354123554 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 35.53536244162518 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 33.08507884504006 - type: mrr value: 34.32436977159129 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.935 - type: map_at_10 value: 13.297 - type: map_at_100 value: 16.907 - type: map_at_1000 value: 18.391 - type: map_at_3 value: 9.626999999999999 - type: map_at_5 value: 11.190999999999999 - type: mrr_at_1 value: 46.129999999999995 - type: mrr_at_10 value: 54.346000000000004 - type: mrr_at_100 value: 55.067 - type: mrr_at_1000 value: 55.1 - type: mrr_at_3 value: 51.961 - type: mrr_at_5 value: 53.246 - type: ndcg_at_1 value: 44.118 - type: ndcg_at_10 value: 35.534 - type: ndcg_at_100 value: 32.946999999999996 - type: ndcg_at_1000 value: 41.599000000000004 - type: ndcg_at_3 value: 40.25 - type: ndcg_at_5 value: 37.978 - type: precision_at_1 value: 46.129999999999995 - type: precision_at_10 value: 26.842 - type: precision_at_100 value: 8.427 - type: precision_at_1000 value: 2.128 - type: precision_at_3 value: 37.977 - type: precision_at_5 value: 32.879000000000005 - type: recall_at_1 value: 5.935 - type: recall_at_10 value: 17.211000000000002 - type: recall_at_100 value: 34.33 - type: recall_at_1000 value: 65.551 - type: recall_at_3 value: 10.483 - type: recall_at_5 value: 13.078999999999999 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 35.231 - type: map_at_10 value: 50.202000000000005 - type: map_at_100 value: 51.154999999999994 - type: map_at_1000 value: 51.181 - type: map_at_3 value: 45.774 - type: map_at_5 value: 48.522 - type: mrr_at_1 value: 39.687 - type: mrr_at_10 value: 52.88 - type: mrr_at_100 value: 53.569 - type: mrr_at_1000 value: 53.58500000000001 - type: mrr_at_3 value: 49.228 - type: mrr_at_5 value: 51.525 - type: ndcg_at_1 value: 39.687 - type: ndcg_at_10 value: 57.754000000000005 - type: ndcg_at_100 value: 61.597 - type: ndcg_at_1000 value: 62.18900000000001 - type: ndcg_at_3 value: 49.55 - type: ndcg_at_5 value: 54.11899999999999 - type: precision_at_1 value: 39.687 - type: precision_at_10 value: 9.313 - type: precision_at_100 value: 1.146 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 22.229 - type: precision_at_5 value: 15.939 - type: recall_at_1 value: 35.231 - type: recall_at_10 value: 78.083 - type: recall_at_100 value: 94.42099999999999 - type: recall_at_1000 value: 98.81 - type: recall_at_3 value: 57.047000000000004 - type: recall_at_5 value: 67.637 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 71.241 - type: map_at_10 value: 85.462 - type: map_at_100 value: 86.083 - type: map_at_1000 value: 86.09700000000001 - type: map_at_3 value: 82.49499999999999 - type: map_at_5 value: 84.392 - type: mrr_at_1 value: 82.09 - type: mrr_at_10 value: 88.301 - type: mrr_at_100 value: 88.383 - type: mrr_at_1000 value: 88.384 - type: mrr_at_3 value: 87.37 - type: mrr_at_5 value: 88.035 - type: ndcg_at_1 value: 82.12 - type: ndcg_at_10 value: 89.149 - type: ndcg_at_100 value: 90.235 - type: ndcg_at_1000 value: 90.307 - type: ndcg_at_3 value: 86.37599999999999 - type: ndcg_at_5 value: 87.964 - type: precision_at_1 value: 82.12 - type: precision_at_10 value: 13.56 - type: precision_at_100 value: 1.539 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.88 - type: precision_at_5 value: 24.92 - type: recall_at_1 value: 71.241 - type: recall_at_10 value: 96.128 - type: recall_at_100 value: 99.696 - type: recall_at_1000 value: 99.994 - type: recall_at_3 value: 88.181 - type: recall_at_5 value: 92.694 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 56.59757799655151 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 64.27391998854624 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.243 - type: map_at_10 value: 10.965 - type: map_at_100 value: 12.934999999999999 - type: map_at_1000 value: 13.256 - type: map_at_3 value: 7.907 - type: map_at_5 value: 9.435 - type: mrr_at_1 value: 20.9 - type: mrr_at_10 value: 31.849 - type: mrr_at_100 value: 32.964 - type: mrr_at_1000 value: 33.024 - type: mrr_at_3 value: 28.517 - type: mrr_at_5 value: 30.381999999999998 - type: ndcg_at_1 value: 20.9 - type: ndcg_at_10 value: 18.723 - type: ndcg_at_100 value: 26.384999999999998 - type: ndcg_at_1000 value: 32.114 - type: ndcg_at_3 value: 17.753 - type: ndcg_at_5 value: 15.558 - type: precision_at_1 value: 20.9 - type: precision_at_10 value: 9.8 - type: precision_at_100 value: 2.078 - type: precision_at_1000 value: 0.345 - type: precision_at_3 value: 16.900000000000002 - type: precision_at_5 value: 13.88 - type: recall_at_1 value: 4.243 - type: recall_at_10 value: 19.885 - type: recall_at_100 value: 42.17 - type: recall_at_1000 value: 70.12 - type: recall_at_3 value: 10.288 - type: recall_at_5 value: 14.072000000000001 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 85.84209174935282 - type: cos_sim_spearman value: 81.73248048438833 - type: euclidean_pearson value: 83.02810070308149 - type: euclidean_spearman value: 81.73248295679514 - type: manhattan_pearson value: 82.95368060376002 - type: manhattan_spearman value: 81.60277910998718 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 88.52628804556943 - type: cos_sim_spearman value: 82.5713913555672 - type: euclidean_pearson value: 85.8796774746988 - type: euclidean_spearman value: 82.57137506803424 - type: manhattan_pearson value: 85.79671002960058 - type: manhattan_spearman value: 82.49445981618027 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 86.23682503505542 - type: cos_sim_spearman value: 87.15008956711806 - type: euclidean_pearson value: 86.79805401524959 - type: euclidean_spearman value: 87.15008956711806 - type: manhattan_pearson value: 86.65298502699244 - type: manhattan_spearman value: 86.97677821948562 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 85.63370304677802 - type: cos_sim_spearman value: 84.97105553540318 - type: euclidean_pearson value: 85.28896108687721 - type: euclidean_spearman value: 84.97105553540318 - type: manhattan_pearson value: 85.09663190337331 - type: manhattan_spearman value: 84.79126831644619 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 90.2614838800733 - type: cos_sim_spearman value: 91.0509162991835 - type: euclidean_pearson value: 90.33098317533373 - type: euclidean_spearman value: 91.05091625871644 - type: manhattan_pearson value: 90.26250435151107 - type: manhattan_spearman value: 90.97999594417519 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 85.80480973335091 - type: cos_sim_spearman value: 87.313695492969 - type: euclidean_pearson value: 86.49267251576939 - type: euclidean_spearman value: 87.313695492969 - type: manhattan_pearson value: 86.44019901831935 - type: manhattan_spearman value: 87.24205395460392 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 90.05662789380672 - type: cos_sim_spearman value: 90.02759424426651 - type: euclidean_pearson value: 90.4042483422981 - type: euclidean_spearman value: 90.02759424426651 - type: manhattan_pearson value: 90.51446975000226 - type: manhattan_spearman value: 90.08832889933616 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 67.5975528273532 - type: cos_sim_spearman value: 67.62969861411354 - type: euclidean_pearson value: 69.224275734323 - type: euclidean_spearman value: 67.62969861411354 - type: manhattan_pearson value: 69.3761447059927 - type: manhattan_spearman value: 67.90921005611467 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 87.11244327231684 - type: cos_sim_spearman value: 88.37902438979035 - type: euclidean_pearson value: 87.86054279847336 - type: euclidean_spearman value: 88.37902438979035 - type: manhattan_pearson value: 87.77257757320378 - type: manhattan_spearman value: 88.25208966098123 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 85.87174608143563 - type: mrr value: 96.12836872640794 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 57.760999999999996 - type: map_at_10 value: 67.258 - type: map_at_100 value: 67.757 - type: map_at_1000 value: 67.78800000000001 - type: map_at_3 value: 64.602 - type: map_at_5 value: 65.64 - type: mrr_at_1 value: 60.667 - type: mrr_at_10 value: 68.441 - type: mrr_at_100 value: 68.825 - type: mrr_at_1000 value: 68.853 - type: mrr_at_3 value: 66.444 - type: mrr_at_5 value: 67.26100000000001 - type: ndcg_at_1 value: 60.667 - type: ndcg_at_10 value: 71.852 - type: ndcg_at_100 value: 73.9 - type: ndcg_at_1000 value: 74.628 - type: ndcg_at_3 value: 67.093 - type: ndcg_at_5 value: 68.58 - type: precision_at_1 value: 60.667 - type: precision_at_10 value: 9.6 - type: precision_at_100 value: 1.0670000000000002 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 26.111 - type: precision_at_5 value: 16.733 - type: recall_at_1 value: 57.760999999999996 - type: recall_at_10 value: 84.967 - type: recall_at_100 value: 93.833 - type: recall_at_1000 value: 99.333 - type: recall_at_3 value: 71.589 - type: recall_at_5 value: 75.483 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.66633663366336 - type: cos_sim_ap value: 91.17685358899108 - type: cos_sim_f1 value: 82.16818642350559 - type: cos_sim_precision value: 83.26488706365504 - type: cos_sim_recall value: 81.10000000000001 - type: dot_accuracy value: 99.66633663366336 - type: dot_ap value: 91.17663411119032 - type: dot_f1 value: 82.16818642350559 - type: dot_precision value: 83.26488706365504 - type: dot_recall value: 81.10000000000001 - type: euclidean_accuracy value: 99.66633663366336 - type: euclidean_ap value: 91.17685189882275 - type: euclidean_f1 value: 82.16818642350559 - type: euclidean_precision value: 83.26488706365504 - type: euclidean_recall value: 81.10000000000001 - type: manhattan_accuracy value: 99.66633663366336 - type: manhattan_ap value: 91.2241619496737 - type: manhattan_f1 value: 82.20472440944883 - type: manhattan_precision value: 86.51933701657458 - type: manhattan_recall value: 78.3 - type: max_accuracy value: 99.66633663366336 - type: max_ap value: 91.2241619496737 - type: max_f1 value: 82.20472440944883 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 66.85101268897951 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 42.461184054706905 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 51.44542568873886 - type: mrr value: 52.33656151854681 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.75982974997539 - type: cos_sim_spearman value: 30.385405026539914 - type: dot_pearson value: 30.75982433546523 - type: dot_spearman value: 30.385405026539914 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.22799999999999998 - type: map_at_10 value: 2.064 - type: map_at_100 value: 13.056000000000001 - type: map_at_1000 value: 31.747999999999998 - type: map_at_3 value: 0.67 - type: map_at_5 value: 1.097 - type: mrr_at_1 value: 90.0 - type: mrr_at_10 value: 94.667 - type: mrr_at_100 value: 94.667 - type: mrr_at_1000 value: 94.667 - type: mrr_at_3 value: 94.667 - type: mrr_at_5 value: 94.667 - type: ndcg_at_1 value: 86.0 - type: ndcg_at_10 value: 82.0 - type: ndcg_at_100 value: 64.307 - type: ndcg_at_1000 value: 57.023999999999994 - type: ndcg_at_3 value: 85.816 - type: ndcg_at_5 value: 84.904 - type: precision_at_1 value: 90.0 - type: precision_at_10 value: 85.8 - type: precision_at_100 value: 66.46 - type: precision_at_1000 value: 25.202 - type: precision_at_3 value: 90.0 - type: precision_at_5 value: 89.2 - type: recall_at_1 value: 0.22799999999999998 - type: recall_at_10 value: 2.235 - type: recall_at_100 value: 16.185 - type: recall_at_1000 value: 53.620999999999995 - type: recall_at_3 value: 0.7040000000000001 - type: recall_at_5 value: 1.172 - task: type: BitextMining dataset: name: MTEB Tatoeba (sqi-eng) type: mteb/tatoeba-bitext-mining config: sqi-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.75 - type: precision value: 96.45 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (fry-eng) type: mteb/tatoeba-bitext-mining config: fry-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.54913294797689 - type: f1 value: 82.46628131021194 - type: precision value: 81.1175337186898 - type: recall value: 85.54913294797689 - task: type: BitextMining dataset: name: MTEB Tatoeba (kur-eng) type: mteb/tatoeba-bitext-mining config: kur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81.21951219512195 - type: f1 value: 77.33333333333334 - type: precision value: 75.54878048780488 - type: recall value: 81.21951219512195 - task: type: BitextMining dataset: name: MTEB Tatoeba (tur-eng) type: mteb/tatoeba-bitext-mining config: tur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.6 - type: f1 value: 98.26666666666665 - type: precision value: 98.1 - type: recall value: 98.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (deu-eng) type: mteb/tatoeba-bitext-mining config: deu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 99.5 - type: f1 value: 99.33333333333333 - type: precision value: 99.25 - type: recall value: 99.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (nld-eng) type: mteb/tatoeba-bitext-mining config: nld-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.2 - type: precision value: 96.89999999999999 - type: recall value: 97.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (ron-eng) type: mteb/tatoeba-bitext-mining config: ron-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.18333333333334 - type: precision value: 96.88333333333333 - type: recall value: 97.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (ang-eng) type: mteb/tatoeba-bitext-mining config: ang-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.61194029850746 - type: f1 value: 72.81094527363183 - type: precision value: 70.83333333333333 - type: recall value: 77.61194029850746 - task: type: BitextMining dataset: name: MTEB Tatoeba (ido-eng) type: mteb/tatoeba-bitext-mining config: ido-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.91666666666667 - type: precision value: 91.08333333333334 - type: recall value: 93.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (jav-eng) type: mteb/tatoeba-bitext-mining config: jav-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.29268292682927 - type: f1 value: 85.27642276422765 - type: precision value: 84.01277584204414 - type: recall value: 88.29268292682927 - task: type: BitextMining dataset: name: MTEB Tatoeba (isl-eng) type: mteb/tatoeba-bitext-mining config: isl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 95.0 - type: precision value: 94.46666666666668 - type: recall value: 96.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (slv-eng) type: mteb/tatoeba-bitext-mining config: slv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.681652490887 - type: f1 value: 91.90765492102065 - type: precision value: 91.05913325232888 - type: recall value: 93.681652490887 - task: type: BitextMining dataset: name: MTEB Tatoeba (cym-eng) type: mteb/tatoeba-bitext-mining config: cym-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.17391304347827 - type: f1 value: 89.97101449275361 - type: precision value: 88.96811594202899 - type: recall value: 92.17391304347827 - task: type: BitextMining dataset: name: MTEB Tatoeba (kaz-eng) type: mteb/tatoeba-bitext-mining config: kaz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.43478260869566 - type: f1 value: 87.72173913043478 - type: precision value: 86.42028985507245 - type: recall value: 90.43478260869566 - task: type: BitextMining dataset: name: MTEB Tatoeba (est-eng) type: mteb/tatoeba-bitext-mining config: est-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.4 - type: f1 value: 88.03 - type: precision value: 86.95 - type: recall value: 90.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (heb-eng) type: mteb/tatoeba-bitext-mining config: heb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.4 - type: f1 value: 91.45666666666666 - type: precision value: 90.525 - type: recall value: 93.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (gla-eng) type: mteb/tatoeba-bitext-mining config: gla-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81.9059107358263 - type: f1 value: 78.32557872364869 - type: precision value: 76.78260286824823 - type: recall value: 81.9059107358263 - task: type: BitextMining dataset: name: MTEB Tatoeba (mar-eng) type: mteb/tatoeba-bitext-mining config: mar-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.3 - type: f1 value: 92.58333333333333 - type: precision value: 91.73333333333332 - type: recall value: 94.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (lat-eng) type: mteb/tatoeba-bitext-mining config: lat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 79.10000000000001 - type: f1 value: 74.50500000000001 - type: precision value: 72.58928571428571 - type: recall value: 79.10000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (bel-eng) type: mteb/tatoeba-bitext-mining config: bel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.6 - type: f1 value: 95.55 - type: precision value: 95.05 - type: recall value: 96.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (pms-eng) type: mteb/tatoeba-bitext-mining config: pms-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.0952380952381 - type: f1 value: 77.98458049886621 - type: precision value: 76.1968253968254 - type: recall value: 82.0952380952381 - task: type: BitextMining dataset: name: MTEB Tatoeba (gle-eng) type: mteb/tatoeba-bitext-mining config: gle-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.9 - type: f1 value: 84.99190476190476 - type: precision value: 83.65 - type: recall value: 87.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (pes-eng) type: mteb/tatoeba-bitext-mining config: pes-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.7 - type: f1 value: 94.56666666666666 - type: precision value: 94.01666666666667 - type: recall value: 95.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (nob-eng) type: mteb/tatoeba-bitext-mining config: nob-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.6 - type: f1 value: 98.2 - type: precision value: 98.0 - type: recall value: 98.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (bul-eng) type: mteb/tatoeba-bitext-mining config: bul-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.6 - type: f1 value: 94.38333333333334 - type: precision value: 93.78333333333335 - type: recall value: 95.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cbk-eng) type: mteb/tatoeba-bitext-mining config: cbk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.4 - type: f1 value: 84.10380952380952 - type: precision value: 82.67 - type: recall value: 87.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (hun-eng) type: mteb/tatoeba-bitext-mining config: hun-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.5 - type: f1 value: 94.33333333333334 - type: precision value: 93.78333333333333 - type: recall value: 95.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (uig-eng) type: mteb/tatoeba-bitext-mining config: uig-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.4 - type: f1 value: 86.82000000000001 - type: precision value: 85.64500000000001 - type: recall value: 89.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (rus-eng) type: mteb/tatoeba-bitext-mining config: rus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.1 - type: f1 value: 93.56666666666668 - type: precision value: 92.81666666666666 - type: recall value: 95.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (spa-eng) type: mteb/tatoeba-bitext-mining config: spa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.9 - type: f1 value: 98.6 - type: precision value: 98.45 - type: recall value: 98.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (hye-eng) type: mteb/tatoeba-bitext-mining config: hye-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.01347708894879 - type: f1 value: 93.51752021563343 - type: precision value: 92.82794249775381 - type: recall value: 95.01347708894879 - task: type: BitextMining dataset: name: MTEB Tatoeba (tel-eng) type: mteb/tatoeba-bitext-mining config: tel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.00854700854701 - type: f1 value: 96.08262108262107 - type: precision value: 95.65527065527067 - type: recall value: 97.00854700854701 - task: type: BitextMining dataset: name: MTEB Tatoeba (afr-eng) type: mteb/tatoeba-bitext-mining config: afr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.5 - type: f1 value: 95.39999999999999 - type: precision value: 94.88333333333333 - type: recall value: 96.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (mon-eng) type: mteb/tatoeba-bitext-mining config: mon-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.5909090909091 - type: f1 value: 95.49242424242425 - type: precision value: 94.9621212121212 - type: recall value: 96.5909090909091 - task: type: BitextMining dataset: name: MTEB Tatoeba (arz-eng) type: mteb/tatoeba-bitext-mining config: arz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.90566037735849 - type: f1 value: 81.85883997204752 - type: precision value: 80.54507337526205 - type: recall value: 84.90566037735849 - task: type: BitextMining dataset: name: MTEB Tatoeba (hrv-eng) type: mteb/tatoeba-bitext-mining config: hrv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.5 - type: f1 value: 96.75 - type: precision value: 96.38333333333333 - type: recall value: 97.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (nov-eng) type: mteb/tatoeba-bitext-mining config: nov-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.7704280155642 - type: f1 value: 82.99610894941635 - type: precision value: 81.32295719844358 - type: recall value: 86.7704280155642 - task: type: BitextMining dataset: name: MTEB Tatoeba (gsw-eng) type: mteb/tatoeba-bitext-mining config: gsw-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.52136752136752 - type: f1 value: 61.89662189662191 - type: precision value: 59.68660968660969 - type: recall value: 67.52136752136752 - task: type: BitextMining dataset: name: MTEB Tatoeba (nds-eng) type: mteb/tatoeba-bitext-mining config: nds-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.2 - type: f1 value: 86.32 - type: precision value: 85.015 - type: recall value: 89.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (ukr-eng) type: mteb/tatoeba-bitext-mining config: ukr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.0 - type: f1 value: 94.78333333333333 - type: precision value: 94.18333333333334 - type: recall value: 96.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (uzb-eng) type: mteb/tatoeba-bitext-mining config: uzb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.8785046728972 - type: f1 value: 80.54517133956385 - type: precision value: 79.154984423676 - type: recall value: 83.8785046728972 - task: type: BitextMining dataset: name: MTEB Tatoeba (lit-eng) type: mteb/tatoeba-bitext-mining config: lit-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.60000000000001 - type: f1 value: 92.01333333333334 - type: precision value: 91.28333333333333 - type: recall value: 93.60000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (ina-eng) type: mteb/tatoeba-bitext-mining config: ina-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.1 - type: f1 value: 96.26666666666667 - type: precision value: 95.85000000000001 - type: recall value: 97.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (lfn-eng) type: mteb/tatoeba-bitext-mining config: lfn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.3 - type: f1 value: 80.67833333333333 - type: precision value: 79.03928571428571 - type: recall value: 84.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (zsm-eng) type: mteb/tatoeba-bitext-mining config: zsm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.3 - type: f1 value: 96.48333333333332 - type: precision value: 96.08333333333331 - type: recall value: 97.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (ita-eng) type: mteb/tatoeba-bitext-mining config: ita-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.7 - type: f1 value: 94.66666666666667 - type: precision value: 94.16666666666667 - type: recall value: 95.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (cmn-eng) type: mteb/tatoeba-bitext-mining config: cmn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.2 - type: f1 value: 96.36666666666667 - type: precision value: 95.96666666666668 - type: recall value: 97.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (lvs-eng) type: mteb/tatoeba-bitext-mining config: lvs-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.3 - type: f1 value: 92.80666666666667 - type: precision value: 92.12833333333333 - type: recall value: 94.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (glg-eng) type: mteb/tatoeba-bitext-mining config: glg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.0 - type: f1 value: 96.22333333333334 - type: precision value: 95.875 - type: recall value: 97.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (ceb-eng) type: mteb/tatoeba-bitext-mining config: ceb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.33333333333333 - type: f1 value: 70.78174603174602 - type: precision value: 69.28333333333332 - type: recall value: 74.33333333333333 - task: type: BitextMining dataset: name: MTEB Tatoeba (bre-eng) type: mteb/tatoeba-bitext-mining config: bre-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 37.6 - type: f1 value: 32.938348952090365 - type: precision value: 31.2811038961039 - type: recall value: 37.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (ben-eng) type: mteb/tatoeba-bitext-mining config: ben-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.5 - type: f1 value: 89.13333333333333 - type: precision value: 88.03333333333333 - type: recall value: 91.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (swg-eng) type: mteb/tatoeba-bitext-mining config: swg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.14285714285714 - type: f1 value: 77.67857142857143 - type: precision value: 75.59523809523809 - type: recall value: 82.14285714285714 - task: type: BitextMining dataset: name: MTEB Tatoeba (arq-eng) type: mteb/tatoeba-bitext-mining config: arq-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 69.0450054884742 - type: f1 value: 63.070409283362075 - type: precision value: 60.58992781824835 - type: recall value: 69.0450054884742 - task: type: BitextMining dataset: name: MTEB Tatoeba (kab-eng) type: mteb/tatoeba-bitext-mining config: kab-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.1 - type: f1 value: 57.848333333333336 - type: precision value: 55.69500000000001 - type: recall value: 63.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (fra-eng) type: mteb/tatoeba-bitext-mining config: fra-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 95.01666666666667 - type: precision value: 94.5 - type: recall value: 96.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (por-eng) type: mteb/tatoeba-bitext-mining config: por-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.89999999999999 - type: f1 value: 94.90666666666667 - type: precision value: 94.425 - type: recall value: 95.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tat-eng) type: mteb/tatoeba-bitext-mining config: tat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.6 - type: f1 value: 84.61333333333333 - type: precision value: 83.27 - type: recall value: 87.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (oci-eng) type: mteb/tatoeba-bitext-mining config: oci-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.4 - type: f1 value: 71.90746031746032 - type: precision value: 70.07027777777778 - type: recall value: 76.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (pol-eng) type: mteb/tatoeba-bitext-mining config: pol-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.89999999999999 - type: f1 value: 97.26666666666667 - type: precision value: 96.95 - type: recall value: 97.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (war-eng) type: mteb/tatoeba-bitext-mining config: war-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 78.8 - type: f1 value: 74.39555555555555 - type: precision value: 72.59416666666667 - type: recall value: 78.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (aze-eng) type: mteb/tatoeba-bitext-mining config: aze-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.19999999999999 - type: f1 value: 93.78999999999999 - type: precision value: 93.125 - type: recall value: 95.19999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (vie-eng) type: mteb/tatoeba-bitext-mining config: vie-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.1 - type: precision value: 96.75 - type: recall value: 97.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (nno-eng) type: mteb/tatoeba-bitext-mining config: nno-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.6 - type: f1 value: 94.25666666666666 - type: precision value: 93.64166666666668 - type: recall value: 95.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cha-eng) type: mteb/tatoeba-bitext-mining config: cha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 56.934306569343065 - type: f1 value: 51.461591936044485 - type: precision value: 49.37434827945776 - type: recall value: 56.934306569343065 - task: type: BitextMining dataset: name: MTEB Tatoeba (mhr-eng) type: mteb/tatoeba-bitext-mining config: mhr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 20.200000000000003 - type: f1 value: 16.91799284049284 - type: precision value: 15.791855158730158 - type: recall value: 20.200000000000003 - task: type: BitextMining dataset: name: MTEB Tatoeba (dan-eng) type: mteb/tatoeba-bitext-mining config: dan-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.2 - type: f1 value: 95.3 - type: precision value: 94.85 - type: recall value: 96.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (ell-eng) type: mteb/tatoeba-bitext-mining config: ell-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.3 - type: f1 value: 95.11666666666667 - type: precision value: 94.53333333333333 - type: recall value: 96.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (amh-eng) type: mteb/tatoeba-bitext-mining config: amh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.88095238095238 - type: f1 value: 87.14285714285714 - type: precision value: 85.96230158730161 - type: recall value: 89.88095238095238 - task: type: BitextMining dataset: name: MTEB Tatoeba (pam-eng) type: mteb/tatoeba-bitext-mining config: pam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 24.099999999999998 - type: f1 value: 19.630969083349783 - type: precision value: 18.275094905094907 - type: recall value: 24.099999999999998 - task: type: BitextMining dataset: name: MTEB Tatoeba (hsb-eng) type: mteb/tatoeba-bitext-mining config: hsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.4368530020704 - type: f1 value: 79.45183870649709 - type: precision value: 77.7432712215321 - type: recall value: 83.4368530020704 - task: type: BitextMining dataset: name: MTEB Tatoeba (srp-eng) type: mteb/tatoeba-bitext-mining config: srp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.8 - type: f1 value: 94.53333333333333 - type: precision value: 93.91666666666666 - type: recall value: 95.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (epo-eng) type: mteb/tatoeba-bitext-mining config: epo-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.8 - type: f1 value: 98.48333333333332 - type: precision value: 98.33333333333334 - type: recall value: 98.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (kzj-eng) type: mteb/tatoeba-bitext-mining config: kzj-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 17.5 - type: f1 value: 14.979285714285714 - type: precision value: 14.23235060690943 - type: recall value: 17.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (awa-eng) type: mteb/tatoeba-bitext-mining config: awa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.93939393939394 - type: f1 value: 91.991341991342 - type: precision value: 91.05339105339105 - type: recall value: 93.93939393939394 - task: type: BitextMining dataset: name: MTEB Tatoeba (fao-eng) type: mteb/tatoeba-bitext-mining config: fao-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.31297709923665 - type: f1 value: 86.76844783715012 - type: precision value: 85.63613231552164 - type: recall value: 89.31297709923665 - task: type: BitextMining dataset: name: MTEB Tatoeba (mal-eng) type: mteb/tatoeba-bitext-mining config: mal-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 99.12663755458514 - type: f1 value: 98.93255701115964 - type: precision value: 98.83551673944687 - type: recall value: 99.12663755458514 - task: type: BitextMining dataset: name: MTEB Tatoeba (ile-eng) type: mteb/tatoeba-bitext-mining config: ile-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.0 - type: f1 value: 89.77999999999999 - type: precision value: 88.78333333333333 - type: recall value: 92.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (bos-eng) type: mteb/tatoeba-bitext-mining config: bos-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.89265536723164 - type: f1 value: 95.85687382297553 - type: precision value: 95.33898305084746 - type: recall value: 96.89265536723164 - task: type: BitextMining dataset: name: MTEB Tatoeba (cor-eng) type: mteb/tatoeba-bitext-mining config: cor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 14.6 - type: f1 value: 11.820611790170615 - type: precision value: 11.022616224355355 - type: recall value: 14.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cat-eng) type: mteb/tatoeba-bitext-mining config: cat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.89999999999999 - type: f1 value: 94.93333333333334 - type: precision value: 94.48666666666666 - type: recall value: 95.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (eus-eng) type: mteb/tatoeba-bitext-mining config: eus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.6 - type: f1 value: 84.72333333333334 - type: precision value: 83.44166666666666 - type: recall value: 87.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (yue-eng) type: mteb/tatoeba-bitext-mining config: yue-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.8 - type: f1 value: 93.47333333333333 - type: precision value: 92.875 - type: recall value: 94.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (swe-eng) type: mteb/tatoeba-bitext-mining config: swe-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.6 - type: f1 value: 95.71666666666665 - type: precision value: 95.28333333333335 - type: recall value: 96.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (dtp-eng) type: mteb/tatoeba-bitext-mining config: dtp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 17.8 - type: f1 value: 14.511074040901628 - type: precision value: 13.503791000666002 - type: recall value: 17.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (kat-eng) type: mteb/tatoeba-bitext-mining config: kat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.10187667560321 - type: f1 value: 92.46648793565683 - type: precision value: 91.71134941912423 - type: recall value: 94.10187667560321 - task: type: BitextMining dataset: name: MTEB Tatoeba (jpn-eng) type: mteb/tatoeba-bitext-mining config: jpn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.0 - type: f1 value: 96.11666666666666 - type: precision value: 95.68333333333334 - type: recall value: 97.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (csb-eng) type: mteb/tatoeba-bitext-mining config: csb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 72.72727272727273 - type: f1 value: 66.58949745906267 - type: precision value: 63.86693017127799 - type: recall value: 72.72727272727273 - task: type: BitextMining dataset: name: MTEB Tatoeba (xho-eng) type: mteb/tatoeba-bitext-mining config: xho-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.14084507042254 - type: f1 value: 88.26291079812206 - type: precision value: 87.32394366197182 - type: recall value: 90.14084507042254 - task: type: BitextMining dataset: name: MTEB Tatoeba (orv-eng) type: mteb/tatoeba-bitext-mining config: orv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 64.67065868263472 - type: f1 value: 58.2876627696987 - type: precision value: 55.79255774165953 - type: recall value: 64.67065868263472 - task: type: BitextMining dataset: name: MTEB Tatoeba (ind-eng) type: mteb/tatoeba-bitext-mining config: ind-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.6 - type: f1 value: 94.41666666666667 - type: precision value: 93.85 - type: recall value: 95.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (tuk-eng) type: mteb/tatoeba-bitext-mining config: tuk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 55.172413793103445 - type: f1 value: 49.63992493549144 - type: precision value: 47.71405113769646 - type: recall value: 55.172413793103445 - task: type: BitextMining dataset: name: MTEB Tatoeba (max-eng) type: mteb/tatoeba-bitext-mining config: max-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.46478873239437 - type: f1 value: 73.4417616811983 - type: precision value: 71.91607981220658 - type: recall value: 77.46478873239437 - task: type: BitextMining dataset: name: MTEB Tatoeba (swh-eng) type: mteb/tatoeba-bitext-mining config: swh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.61538461538461 - type: f1 value: 80.91452991452994 - type: precision value: 79.33760683760683 - type: recall value: 84.61538461538461 - task: type: BitextMining dataset: name: MTEB Tatoeba (hin-eng) type: mteb/tatoeba-bitext-mining config: hin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.2 - type: f1 value: 97.6 - type: precision value: 97.3 - type: recall value: 98.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (dsb-eng) type: mteb/tatoeba-bitext-mining config: dsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 75.5741127348643 - type: f1 value: 72.00417536534445 - type: precision value: 70.53467872883321 - type: recall value: 75.5741127348643 - task: type: BitextMining dataset: name: MTEB Tatoeba (ber-eng) type: mteb/tatoeba-bitext-mining config: ber-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 62.2 - type: f1 value: 55.577460317460314 - type: precision value: 52.98583333333333 - type: recall value: 62.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (tam-eng) type: mteb/tatoeba-bitext-mining config: tam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.18241042345277 - type: f1 value: 90.6468124709167 - type: precision value: 89.95656894679696 - type: recall value: 92.18241042345277 - task: type: BitextMining dataset: name: MTEB Tatoeba (slk-eng) type: mteb/tatoeba-bitext-mining config: slk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 95.13333333333333 - type: precision value: 94.66666666666667 - type: recall value: 96.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (tgl-eng) type: mteb/tatoeba-bitext-mining config: tgl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.8 - type: f1 value: 95.85000000000001 - type: precision value: 95.39999999999999 - type: recall value: 96.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (ast-eng) type: mteb/tatoeba-bitext-mining config: ast-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.1259842519685 - type: f1 value: 89.76377952755905 - type: precision value: 88.71391076115485 - type: recall value: 92.1259842519685 - task: type: BitextMining dataset: name: MTEB Tatoeba (mkd-eng) type: mteb/tatoeba-bitext-mining config: mkd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.1 - type: f1 value: 92.49 - type: precision value: 91.725 - type: recall value: 94.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (khm-eng) type: mteb/tatoeba-bitext-mining config: khm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.5623268698061 - type: f1 value: 73.27364463791058 - type: precision value: 71.51947852086357 - type: recall value: 77.5623268698061 - task: type: BitextMining dataset: name: MTEB Tatoeba (ces-eng) type: mteb/tatoeba-bitext-mining config: ces-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.56666666666666 - type: precision value: 96.16666666666667 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tzl-eng) type: mteb/tatoeba-bitext-mining config: tzl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 66.34615384615384 - type: f1 value: 61.092032967032964 - type: precision value: 59.27197802197802 - type: recall value: 66.34615384615384 - task: type: BitextMining dataset: name: MTEB Tatoeba (urd-eng) type: mteb/tatoeba-bitext-mining config: urd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.41190476190476 - type: precision value: 92.7 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ara-eng) type: mteb/tatoeba-bitext-mining config: ara-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.10000000000001 - type: f1 value: 91.10000000000001 - type: precision value: 90.13333333333333 - type: recall value: 93.10000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (kor-eng) type: mteb/tatoeba-bitext-mining config: kor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.97333333333334 - type: precision value: 91.14166666666667 - type: recall value: 93.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (yid-eng) type: mteb/tatoeba-bitext-mining config: yid-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.21698113207547 - type: f1 value: 90.3796046720575 - type: precision value: 89.56367924528303 - type: recall value: 92.21698113207547 - task: type: BitextMining dataset: name: MTEB Tatoeba (fin-eng) type: mteb/tatoeba-bitext-mining config: fin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.6 - type: f1 value: 96.91666666666667 - type: precision value: 96.6 - type: recall value: 97.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (tha-eng) type: mteb/tatoeba-bitext-mining config: tha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.44525547445255 - type: f1 value: 96.71532846715328 - type: precision value: 96.35036496350365 - type: recall value: 97.44525547445255 - task: type: BitextMining dataset: name: MTEB Tatoeba (wuu-eng) type: mteb/tatoeba-bitext-mining config: wuu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.1 - type: f1 value: 92.34000000000002 - type: precision value: 91.49166666666667 - type: recall value: 94.1 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 3.2910000000000004 - type: map_at_10 value: 10.373000000000001 - type: map_at_100 value: 15.612 - type: map_at_1000 value: 17.06 - type: map_at_3 value: 6.119 - type: map_at_5 value: 7.917000000000001 - type: mrr_at_1 value: 44.897999999999996 - type: mrr_at_10 value: 56.054 - type: mrr_at_100 value: 56.82000000000001 - type: mrr_at_1000 value: 56.82000000000001 - type: mrr_at_3 value: 52.381 - type: mrr_at_5 value: 53.81 - type: ndcg_at_1 value: 42.857 - type: ndcg_at_10 value: 27.249000000000002 - type: ndcg_at_100 value: 36.529 - type: ndcg_at_1000 value: 48.136 - type: ndcg_at_3 value: 33.938 - type: ndcg_at_5 value: 29.951 - type: precision_at_1 value: 44.897999999999996 - type: precision_at_10 value: 22.653000000000002 - type: precision_at_100 value: 7.000000000000001 - type: precision_at_1000 value: 1.48 - type: precision_at_3 value: 32.653 - type: precision_at_5 value: 27.755000000000003 - type: recall_at_1 value: 3.2910000000000004 - type: recall_at_10 value: 16.16 - type: recall_at_100 value: 43.908 - type: recall_at_1000 value: 79.823 - type: recall_at_3 value: 7.156 - type: recall_at_5 value: 10.204 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.05879999999999 - type: ap value: 14.609748142799111 - type: f1 value: 54.878956295843096 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 64.61799660441426 - type: f1 value: 64.8698191961434 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.32860036611885 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 88.34714192048638 - type: cos_sim_ap value: 80.26732975975634 - type: cos_sim_f1 value: 73.53415148134374 - type: cos_sim_precision value: 69.34767360299276 - type: cos_sim_recall value: 78.25857519788919 - type: dot_accuracy value: 88.34714192048638 - type: dot_ap value: 80.26733698491206 - type: dot_f1 value: 73.53415148134374 - type: dot_precision value: 69.34767360299276 - type: dot_recall value: 78.25857519788919 - type: euclidean_accuracy value: 88.34714192048638 - type: euclidean_ap value: 80.26734337771738 - type: euclidean_f1 value: 73.53415148134374 - type: euclidean_precision value: 69.34767360299276 - type: euclidean_recall value: 78.25857519788919 - type: manhattan_accuracy value: 88.30541813196639 - type: manhattan_ap value: 80.19415808104145 - type: manhattan_f1 value: 73.55143870713441 - type: manhattan_precision value: 73.25307511122743 - type: manhattan_recall value: 73.85224274406332 - type: max_accuracy value: 88.34714192048638 - type: max_ap value: 80.26734337771738 - type: max_f1 value: 73.55143870713441 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.81061047075717 - type: cos_sim_ap value: 87.11747055081017 - type: cos_sim_f1 value: 80.04355498817256 - type: cos_sim_precision value: 78.1165262000733 - type: cos_sim_recall value: 82.06806282722513 - type: dot_accuracy value: 89.81061047075717 - type: dot_ap value: 87.11746902745236 - type: dot_f1 value: 80.04355498817256 - type: dot_precision value: 78.1165262000733 - type: dot_recall value: 82.06806282722513 - type: euclidean_accuracy value: 89.81061047075717 - type: euclidean_ap value: 87.11746919324248 - type: euclidean_f1 value: 80.04355498817256 - type: euclidean_precision value: 78.1165262000733 - type: euclidean_recall value: 82.06806282722513 - type: manhattan_accuracy value: 89.79508673885202 - type: manhattan_ap value: 87.11074390832218 - type: manhattan_f1 value: 80.13002540726349 - type: manhattan_precision value: 77.83826945412311 - type: manhattan_recall value: 82.56082537727133 - type: max_accuracy value: 89.81061047075717 - type: max_ap value: 87.11747055081017 - type: max_f1 value: 80.13002540726349 --- ## Multilingual-E5-large-instruct [Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/pdf/2402.05672). Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024 This model has 24 layers and the embedding size is 1024. ## Usage Below are examples to encode queries and passages from the MS-MARCO passage ranking dataset. ### Transformers ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] def get_detailed_instruct(task_description: str, query: str) -> str: return f'Instruct: {task_description}\nQuery: {query}' # Each query must come with a one-sentence instruction that describes the task task = 'Given a web search query, retrieve relevant passages that answer the query' queries = [ get_detailed_instruct(task, 'how much protein should a female eat'), get_detailed_instruct(task, '南瓜的家常做法') ] # No need to add instruction for retrieval documents documents = [ "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅" ] input_texts = queries + documents tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-large-instruct') model = AutoModel.from_pretrained('intfloat/multilingual-e5-large-instruct') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) # => [[91.92852783203125, 67.580322265625], [70.3814468383789, 92.1330795288086]] ``` ### Sentence Transformers ```python from sentence_transformers import SentenceTransformer def get_detailed_instruct(task_description: str, query: str) -> str: return f'Instruct: {task_description}\nQuery: {query}' # Each query must come with a one-sentence instruction that describes the task task = 'Given a web search query, retrieve relevant passages that answer the query' queries = [ get_detailed_instruct(task, 'how much protein should a female eat'), get_detailed_instruct(task, '南瓜的家常做法') ] # No need to add instruction for retrieval documents documents = [ "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅" ] input_texts = queries + documents model = SentenceTransformer('intfloat/multilingual-e5-large-instruct') embeddings = model.encode(input_texts, convert_to_tensor=True, normalize_embeddings=True) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) # [[91.92853546142578, 67.5802993774414], [70.38143157958984, 92.13307189941406]] ``` ## Supported Languages This model is initialized from [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and continually trained on a mixture of multilingual datasets. It supports 100 languages from xlm-roberta, but low-resource languages may see performance degradation. ## Training Details **Initialization**: [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) **First stage**: contrastive pre-training with 1 billion weakly supervised text pairs. **Second stage**: fine-tuning on datasets from the [E5-mistral](https://arxiv.org/abs/2401.00368) paper. ## MTEB Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## FAQ **1. Do I need to add instructions to the query?** Yes, this is how the model is trained, otherwise you will see a performance degradation. The task definition should be a one-sentence instruction that describes the task. This is a way to customize text embeddings for different scenarios through natural language instructions. Please check out [unilm/e5/utils.py](https://github.com/microsoft/unilm/blob/9c0f1ff7ca53431fe47d2637dfe253643d94185b/e5/utils.py#L106) for instructions we used for evaluation. On the other hand, there is no need to add instructions to the document side. **2. Why are my reproduced results slightly different from reported in the model card?** Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences. **3. Why does the cosine similarity scores distribute around 0.7 to 1.0?** This is a known and expected behavior as we use a low temperature 0.01 for InfoNCE contrastive loss. For text embedding tasks like text retrieval or semantic similarity, what matters is the relative order of the scores instead of the absolute values, so this should not be an issue. ## Citation If you find our paper or models helpful, please consider cite as follows: ``` @article{wang2024multilingual, title={Multilingual E5 Text Embeddings: A Technical Report}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2402.05672}, year={2024} } ``` ## Limitations Long texts will be truncated to at most 512 tokens.
[ "SEMANTIC_SIMILARITY", "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
MilosKosRad/ScientificNLIsrb
MilosKosRad
null
[ "safetensors", "deberta-v2", "sr", "base_model:microsoft/deberta-v3-large", "base_model:finetune:microsoft/deberta-v3-large", "region:us" ]
1,742
1,742
2
0
--- base_model: microsoft/deberta-v3-large language: sr metrics: - accuracy - precision - recall - f1 --- # srbNLI: Serbian Natural Language Inference Model ## Model Overview srbNLI is a fine-tuned Natural Language Inference (NLI) model for Serbian, created by adapting the SciFact dataset. The model is based on state-of-the-art transformer architectures. It is trained to recognize relationships between claims and evidence in Serbian text, with applications in scientific claim verification and potential expansion to broader claim verification tasks. ## Key Details - **Model Type**: Transformer-based - **Language**: Serbian - **Task**: Natural Language Inference (NLI), Textual Entailment, Claim Verification - **Dataset**: srbSciFact (automatically translated SciFact dataset) - **Fine-tuning**: Fine-tuned on Serbian NLI data (support, contradiction, and neutral categories). - **Metrics**: Accuracy, Precision, Recall, F1-score ## Motivation This model addresses the lack of NLI datasets and models for Serbian, a low-resource language. It provides a tool for textual entailment and claim verification, especially for scientific claims, with broader potential for misinformation detection and automated fact-checking. ## Training - **Base Models Used**: DeBERTa-v3-large - **Training Data**: Automatically translated SciFact dataset - **Fine-tuning**: Conducted on a single DGX NVIDIA A100 GPU (40 GB) - **Hyperparameters**: Optimized learning rate, batch size, weight decay, epochs, and early stopping ## Evaluation The model was evaluated using standard NLI metrics (accuracy, precision, recall, F1-score). It was also compared to the GPT-4o model for generalization capabilities. ## Use Cases - **Claim Verification**: Scientific claims and general domain claims in Serbian - **Misinformation Detection**: Identifying contradictions or support between claims and evidence - **Cross-lingual Applications**: Potential for cross-lingual claim verification with multilingual models ## Future Work - Improving accuracy with human-corrected translations and Serbian-specific datasets - Expanding to general-domain claim verification - Enhancing multilingual NLI capabilities ## Results Comparison The table below presents a comparison of the fine-tuned models (DeBERTa-v3-large, RoBERTa-large, BERTić, GPT-4o, and others) on the srbSciFact dataset, focusing on key metrics: Accuracy (Acc), Precision (P), Recall (R), and F1-score (F1). The models were evaluated on their ability to classify relationships between claims and evidence in Serbian text. | Model | Accuracy | Precision (P) | Recall (R) | F1-score (F1) | |----------------------|----------|---------------|------------|---------------| | **DeBERTa-v3-large** | 0.70 | 0.86 | 0.82 | 0.84 | | **RoBERTa-large** | 0.57 | 0.63 | 0.76 | 0.69 | | **BERTić (Serbian)** | 0.56 | 0.56 | 0.37 | 0.44 | | **GPT-4o (English)** | 0.66 | 0.70 | 0.77 | 0.78 | | **mDeBERTa-base** | 0.63 | 0.92 | 0.75 | 0.83 | | **XLM-RoBERTa-large** | 0.64 | 0.89 | 0.77 | 0.83 | | **mBERT-cased** | 0.48 | 0.76 | 0.50 | 0.60 | | **mBERT-uncased** | 0.57 | 0.45 | 0.61 | 0.52 | ### Observations - **DeBERTa-v3-large** performed the best overall, with an accuracy of 0.70 and an F1-score of 0.84. - **RoBERTa-large** and **BERTić** showed lower performance, especially in recall, suggesting challenges in handling complex linguistic inference in Serbian. - **GPT-4o** outperforms all fine-tuned models in F1-score when the prompt is in English, but the **DeBERTa-v3-large** model slightly outperforms GPT-4o when the prompt is in Serbian. - **mDeBERTa-base** and **XLM-RoBERTa-large** exhibited strong cross-lingual performance, with F1-scores of 0.83 and 0.83, respectively. This demonstrates the potential of adapting advanced transformer models to Serbian while highlighting areas for future improvement, such as refining translations and expanding domain-specific data. ---
[ "TEXTUAL_ENTAILMENT", "TRANSLATION" ]
[ "SCIFACT" ]
Non_BioNLP
mogaio/pr_ebsa_fr_tran_merged25_e1_beginning_offsets
mogaio
text-classification
[ "setfit", "safetensors", "xlm-roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2", "model-index", "region:us" ]
1,702
1,702
73
0
--- base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2 library_name: setfit metrics: - accuracy_score - classification_report pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: 'Adil Hussain Adil Hussain est reconnaissant d''avoir reçu l''enseignement de l''acteur Naseeruddin Shah à l''époque où il fréquentait l''École nationale d''art dramatique' - text: 'Bien que leurs opinions sur la question de savoir si les migrants sont un avantage ou un fardeau soient plus mitigées, de nettes majorités d''électeurs de toute la ville de New York, de la banlieue et du nord de l''État ont déclaré que l''État devrait essayer de ralentir l''afflux de migrants, plutôt que d''en accepter davantage et de s''efforcer d''assimiler les nouveaux arrivants Les démocrates aspirent à renverser six circonscriptions détenues par les républicains que M. Biden a remportées en 2020, notamment celle de M Les républicains se sont emparés de la crise des migrants, donnant un avant-goût des campagnes de l''année prochaine Les républicains ont surenchéri : Elise Stefanik, la New-Yorkaise qui dirige la conférence du parti démocrate à la Chambre des représentants, Suite à la page suivante a déclaré à Politico la semaine dernière que le parti allait consacrer 100 millions de dollars aux campagnes dans les circonscriptions de New York Des problèmes à venir pour les démocrates de New York en 2024 ? Les dirigeants démocrates de New York se débattent depuis des mois avec le problème de l''hébergement des dizaines de milliers de migrants qui ont été transportés par bus jusqu''à New York et laissés à sa charge Des problèmes à venir pour les démocrates de New York en 2024 ? Les dirigeants démocrates de New York se débattent depuis des mois avec le problème de l''hébergement des dizaines de milliers de migrants qui ont été transportés par bus jusqu''à New York et laissés à sa charge. Mais une autre préoccupation se profile alors que la crise se poursuit sans qu''aucune issue ne soit en vue : les retombées potentielles pour leur parti lors des élections de l''année prochaine Les républicains ont tendance à se sentir en sécurité lorsqu''ils parlent d''immigration - comme les démocrates le font pour l''avortement - et sont clairement à l''attaque sur la question des migrants à New York, tandis que les démocrates sont sur la défensive, a déclaré Kyle Kondik, directeur de la communication pour le Centre de politique de l''Université de Virginie, au réseau USA Today Plus de 100 000 migrants ont été transportés à New York depuis la frontière sud depuis le printemps 2022. Environ 60 000 d''entre eux sont hébergés dans la ville, et plus de 2 100 ont été transportés dans des hôtels situés dans sept comtés au nord de la ville, de Yonkers à la périphérie de Buffalo, où ils sont logés aux frais de la ville Les démocrates doivent y remporter des victoires pour gagner cinq sièges à la Chambre et faire du député Hakeem Jeffries, de Brooklyn, le prochain président de la Chambre des représentants Les publicités d''attaque des républicains s''écrivent pratiquement d''elles-mêmes à partir d''un flot de titres et d''images télévisées, alors que le gouverneur Kathy Hochul, le maire de New York Eric Adams et le président Joe Biden - tous démocrates - se rejettent mutuellement la faute et s''échangent des coups de feu pour savoir qui devrait en faire le plus Isaac Goldberg, un stratège démocrate qui a travaillé sur plusieurs campagnes électorales à New York, a affirmé qu''il était beaucoup trop tôt pour prédire l''impact politique de la crise des migrants, soulignant que les élections de 2024 n''auront lieu que dans 14 mois et que de nombreuses questions tout aussi urgentes pourraient se poser' - text: 'LE CANDIDAT A LA PRESIDENCE RAMASWAMY VEUT METTRE FIN AU SYSTEME DE VISA H-1B AUX ETATS-UNIS Décrivant le programme de visas H-1B comme une forme de "servitude", Vivek Ramaswamy, candidat républicain indien-américain à l''élection présidentielle, a promis de "vider" le système basé sur la loterie et de le remplacer par un système d''admission méritocratique s''il remporte les élections présidentielles de 2024' - text: 'Smith Hal Sparks Catherine Zeta-Jones son-Sampras Chris Owen Donald Glover ("Queer as Folk") a 54 ans Smith Hal Sparks Catherine Zeta-Jones son-Sampras Chris Owen Donald Glover ("Queer as Folk") a 54 ans. a 54 ans. Acteur ("Je sais ce que vous avez fait l''été dernier") a 50 ans' - text: 'Trump profiter de sa célébrité jusqu''à la Maison-Blanche. "Cela a tué Howard parce qu''il était le roi de tous les médias Il a poursuivi en disant que Trump ne laisserait pas ses partisans s''approcher de l''une de ses propriétés. "Les gens qui votent pour Trump, pour la plupart, ne les laisseraient même pas entrer dans un putain d''hôtel [ "Si être réveillé signifie que je ne peux pas soutenir Trump, ce que je pense que cela signifie, ou que je soutiens les personnes qui veulent être transgenres ou que je suis pour le vaccin, appelez-moi réveillé comme vous le voulez" "Les gens qui votent pour Trump, pour la plupart, ne les laisseraient même pas entrer dans un putain d''hôtel [...]. Allez à Mar-a-lago, voyez s''il y a des gens qui vous ressemblent" Stern a également abordé les affirmations de Trump et de ses partisans selon lesquelles Joe Biden a remporté l''élection américaine de 2020 grâce à des votes frauduleux "Et soudain, Trump a transformé Howard, qui était le roi de tous les médias, en prince Harry de tous les médias. Tout le monde s''en fout "Trump avait l''habitude de participer à l''émission de Stern chaque semaine. Ils étaient amis. Alors cette idée que Trump est le pire type qui ait jamais marché sur la surface de la terre, pourquoi traîniez-vous avec lui ?" M Mais Stern, qui par le passé a été accusé de racisme et de sexisme dans nombre de ses sketches à l''antenne, a été un critique virulent de Trump tout au long de sa présidence et, plus récemment, alors qu''il se prépare à se présenter à nouveau en 2024. En 2021, M "Combien de temps allons-nous continuer à élire des gens qui ont perdu l''élection ?" Il a poursuivi en qualifiant les partisans de Trump de "nigauds". "Mon Dieu, j''ai l''impression d''être dans une nation de nigauds. J''espère qu''il y a encore des gens brillants et dynamiques qui aiment ce pays", a-t-il déclaré Alors cette idée que Trump est le pire type qui ait jamais marché sur la surface de la terre, pourquoi traîniez-vous avec lui ?" M. Failla a déclaré que cela avait "tué" M Si "woke" signifie que je ne peux pas soutenir Trump, ce que je pense que cela signifie, ou que je soutiens les personnes qui veulent être transgenres ou que je suis pour le vaccin, appelez-moi "woke" comme vous voulez Celui qui se décrit comme le "roi de tous les médias" a critiqué ouvertement l''ancien président américain Donald Trump, les anti-vaxx et, plus récemment, Lauren Boebert, qu''il a critiquée pour son comportement obscène dans un théâtre de Denver au début du mois "L''omnipotence médiatique de Donald Trump a brisé Howard Stern. C''est très important", a déclaré Failla dans la vidéo (selon OK ! Magazine). "Trump avait l''habitude de participer à l''émission de Stern chaque semaine L''aversion d''Howard Stern pour Donald Trump, c''est "tout l''ego". Si "woke" signifie que je ne peux pas soutenir Trump, ce que je pense que cela signifie, ou que je soutiens les personnes qui veulent être transgenres ou que je suis pour le vaccin, appelez-moi "woke" comme vous voulez Trump l''année prochaine. "Je sais que je lui botterai le cul", a-t-il déclaré aux auditeurs. L''année suivante, Stern a déclaré qu''il envisageait de se lancer dans la course à la présidence "pour que le pays soit à nouveau juste" En réponse, Trump a partagé sur sa plateforme Truth Social un clip de Fox News dans lequel l''animateur Jimmy Failla critique Stern. "L''omnipotence médiatique de Donald Trump a brisé Howard Stern "Je vais faire la chose très simple qui remettra le pays sur le droit chemin : un vote, une personne", a expliqué Stern, affirmant que Trump a en fait perdu l''élection de 2016 contre Hillary Clinton qui a remporté le vote populaire - mais pas le collège électoral' inference: true model-index: - name: SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy_score value: 0.9434954007884363 name: Accuracy_Score - type: classification_report value: '0': precision: 0.9361702127659575 recall: 0.9322033898305084 f1-score: 0.9341825902335456 support: 236 '1': precision: 0.9333333333333333 recall: 0.9302325581395349 f1-score: 0.9317803660565723 support: 301 '2': precision: 0.9646017699115044 recall: 0.9732142857142857 f1-score: 0.9688888888888889 support: 224 accuracy: 0.9434954007884363 macro avg: precision: 0.9447017720035985 recall: 0.945216744561443 f1-score: 0.9449506150596689 support: 761 weighted avg: precision: 0.9434169513880108 recall: 0.9434954007884363 f1-score: 0.9434482162802315 support: 761 name: Classification_Report --- # SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 128 tokens - **Number of Classes:** 3 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | pos | <ul><li>"Les PHL lèvent 1,26 milliard de dollars grâce aux obligations en dollars de détail\nLE GOUVERNEMENT PHILIPPIN a levé 1,26 milliard de dollars lors de la première émission d'obligations de détail en dollars (RDB) sous l'administration Marcos, a déclaré le ministère des Finances (DoF)"</li><li>"Atom Egoyan revient à Salomé, l'opéra qu'il a monté en 1996, avec Seven Veils\nAtom Egoyan n'a pas été surpris lorsque la Canadian Opera Company lui a demandé de remonter Salomé pour la saison 2022-23 Atom Egoyan revient à Salomé, l'opéra qu'il a monté en 1996, avec Seven Veils\nAtom Egoyan n'a pas été surpris lorsque la Canadian Opera Company lui a demandé de remonter Salomé pour la saison 2022-23. Avec ses éléments de film et de vidéo, son interprétation psychologique et sombre de l'opéra de Richard Strauss avait un solide palmarès de reprises - depuis sa création en 1996, elle avait été présentée deux fois de plus à la COC et avait été reprise par plusieurs autres compagnies"</li><li>'Paul Simon présente un documentaire sur sa carrière\nAprès un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public Paul Simon présente un documentaire sur sa carrière\nAprès un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public.\nTORONTO >> Après un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public "Il n\'y a pas de raison que vous soyez épuisés", a dit Simon à la foule après la première du documentaire d\'Alex Gibney "In Restless Dreams : The Music of Paul Simon" d\'Alex Gibney, dimanche au Festival international du film de Toronto.\nSimon, âgé de 81 ans, n\'avait pas regardé le film avant la première, et il ne l\'a pas regardé non plus dimanche TORONTO >> Après un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public.\n"Il n\'y a pas de raison que vous soyez épuisés", a dit Simon à la foule après la première du documentaire d\'Alex Gibney "In Restless Dreams : The Music of Paul Simon" d\'Alex Gibney, dimanche au Festival international du film de Toronto'</li></ul> | | neg | <ul><li>'Le groupe Al-Mostaqilla de l\'université du Koweït a appelé les étudiants à organiser un sit-in à l\'université du Koweït lundi pour protester contre la décision de mettre fin aux classes mixtes La décision a été prise la semaine dernière par le nouveau ministre de l\'éducation, Adel Al-Mane, et le directeur par intérim de l\'université du Koweït, Fayez Al-Dhafiri, et mise en œuvre mercredi, trois jours seulement avant le début de la nouvelle année universitaire à la faculté de droit L\'association a également demandé au gouvernement de "cesser ses interventions politiques et médiatiques injustifiées" dans les affaires de l\'université du Koweït.\nL\'association a appelé le directeur par intérim de l\'université du Koweït à ne pas céder aux pressions politiques et médiatiques et à s\'efforcer de protéger l\'indépendance de l\'université Dhafiri a déclaré que la décision avait été prise en application de la loi de 1996 qui interdisait l\'enseignement mixte à l\'université du Koweït, malgré une décision de la Cour constitutionnelle de 2015 autorisant l\'enseignement mixte lorsqu\'il était nécessaire et dans des cas exceptionnels Parallèlement, l\'association des professeurs de l\'université du Koweït a publié samedi une déclaration demandant aux députés et au gouvernement de "cesser d\'interférer dans les affaires de l\'université du Koweït" et de maintenir l\'indépendance de l\'université "L\'université du Koweït était, est et sera toujours le porte-drapeau de la connaissance et des valeurs, à l\'abri de toute influence extérieure Le député Abdulwahab Al-Essa a reproché à l\'administration de l\'université du Koweït d\'avoir succombé à la pression politique au détriment de l\'intérêt public, ajoutant que l\'université du Koweït avait appliqué correctement une décision de la cour constitutionnelle autorisant les classes mixtes chaque fois que cela était nécessaire'</li><li>"L'immigration étant l'un des défis les plus difficiles à relever pour le président Joe Biden et apparaissant comme un enjeu majeur des élections de l'année prochaine, l'administration délocalise essentiellement la question en s'appuyant sur les pays d'Amérique centrale et d'Amérique du Sud pour empêcher les migrants de se diriger vers le nord"</li><li>'Lors d\'une réunion d\'information mardi, le porte-parole de l\'armée, le lieutenant-colonel Richard Hecht, a suggéré que les Palestiniens tentent de quitter la bande de Gaza par le poste-frontière de Rafah, en Égypte.\nLa perspective d\'un exode des habitants de Gaza vers le territoire égyptien a alarmé les autorités égyptiennes La question qui se pose est de savoir si Israël lancera une offensive terrestre dans la bande de Gaza, une bande de terre de 25 miles de long coincée entre Israël, l\'Égypte et la mer Méditerranée, où vivent 2,3 millions de personnes et qui est gouvernée par le Hamas depuis 2007 Israël pilonne la bande de Gaza ; les habitants se précipitent pour se mettre à l\'abri\nJERUSALEM - Les avions de combat israéliens ont bombardé la bande de Gaza quartier par quartier mardi, réduisant les bâtiments en ruines et poussant les habitants à se précipiter pour se mettre à l\'abri dans ce minuscule territoire isolé, alors qu\'Israël promet des représailles pour l\'attaque surprise du Hamas du week-end qui "se répercuteront Les autorités égyptiennes discutent avec Israël et les États-Unis afin de mettre en place des corridors humanitaires dans la bande de Gaza pour acheminer l\'aide, a déclaré un responsable égyptien. Des négociations sont en cours avec les Israéliens pour que la zone autour du point de passage de Rafah entre l\'Égypte et Gaza soit déclarée "zone d\'interdiction de feu", a déclaré le responsable, sous couvert d\'anonymat car il n\'était pas autorisé à parler aux médias'</li></ul> | | obj | <ul><li>"L'économie pèse sur les Américains Ils sont plus nombreux à faire confiance à Trump qu'à Biden pour alléger leur fardeau\nWASHINGTON - Linda Muñoz a peur de l'économie Trump, le candidat républicain à la primaire de 2024, pour améliorer l'économie, avec une marge de 47 % à 36 %. L'écart est de 46 %-26 % en faveur de M. Trump parmi les électeurs indépendants Presque tous les républicains interrogés ont exprimé leur pessimisme à l'égard de l'économie, selon le sondage : 96 % d'entre eux estiment que la situation se dégrade au lieu de s'améliorer Le logement. L'essence. Tous ces éléments poussent les gens à s'endetter de plus en plus, disent-ils.\nSelon le sondage, près de 70 % des Américains estiment que la situation économique se dégrade, tandis que 22 % seulement estiment qu'elle s'améliore L'économie pèse sur les Américains Ils sont plus nombreux à faire confiance à Trump qu'à Biden pour alléger leur fardeau\nWASHINGTON - Linda Muñoz a peur de l'économie. Elle a puisé dans son épargne d'urgence cette année. Et elle ne croit pas que le président Joe Biden ressente sa douleur L'épicerie. Le logement. L'essence. Tous ces éléments poussent les gens à s'endetter de plus en plus, disent-ils.\nSelon le sondage, près de 70 % des Américains estiment que la situation économique se dégrade, tandis que 22 % seulement estiment qu'elle s'améliore"</li><li>'Le Pentagone va interroger d\'autres militaires sur l\'attentat suicide de l\'aéroport de Kaboul en 2021\nLe commandement central du Pentagone a ordonné l\'audition d\'une vingtaine de militaires supplémentaires qui se trouvaient à l\'aéroport de Kaboul lorsque des kamikazes ont attaqué pendant le retrait chaotique des forces américaines d\'Afghanistan, alors que les critiques persistent sur le fait que l\'attaque meurtrière aurait pu être stoppée Certaines familles des personnes tuées ou blessées se sont plaintes que le Pentagone n\'avait pas fait preuve de suffisamment de transparence au sujet de l\'attentat à la bombe qui a tué 170 Afghans\net 13 militaires américains.\nL\'enquête du commandement central américain a conclu en novembre 2021 qu\'étant donné la détérioration de la sécurité à la porte de l\'Abbaye de l\'aéroport alors que les Afghans cherchaient de plus en plus à fuir, "l\'attaque n\'aurait pas pu être évitée au niveau tactique sans dégrader la mission visant à maximiser le nombre d\'évacués" Le Pentagone a déclaré que l\'examen de l\'attentat suicide n\'avait révélé aucune identification préalable d\'un attaquant possible ni aucune demande d\'"escalade des règles d\'engagement existantes" régissant l\'utilisation de la force par les troupes américaines'</li><li>'Les retombées de la guerre se répercutent sur les lieux de travail aux États-Unis.\nNEW YORK - Les retombées de la guerre entre Israël et le Hamas se sont répandues sur les lieux de travail partout dans le monde, les dirigeants de grandes entreprises exprimant leur point de vue tandis que les travailleurs se plaignent de ne pas être entendus "À quoi me sert mon travail si je compromets ma propre morale et mon éthique ?\nL\'un des conflits les plus importants s\'est produit chez Starbucks après que Starbucks Workers United, un syndicat représentant 9 000 travailleurs dans plus de 360 magasins aux États-Unis, a tweeté "Solidarité avec la Palestine" deux jours après l\'attaque du Hamas. Le tweet a été supprimé au bout de 40 minutes, mais l\'entreprise a déclaré qu\'il avait donné lieu à plus de 1 000 plaintes, à des actes de vandalisme et à des affrontements dans ses magasins NEW YORK - Les retombées de la guerre entre Israël et le Hamas se sont répandues sur les lieux de travail partout dans le monde, les dirigeants de grandes entreprises exprimant leur point de vue tandis que les travailleurs se plaignent de ne pas être entendus'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy_Score | Classification_Report | |:--------|:---------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | **all** | 0.9435 | {'0': {'precision': 0.9361702127659575, 'recall': 0.9322033898305084, 'f1-score': 0.9341825902335456, 'support': 236}, '1': {'precision': 0.9333333333333333, 'recall': 0.9302325581395349, 'f1-score': 0.9317803660565723, 'support': 301}, '2': {'precision': 0.9646017699115044, 'recall': 0.9732142857142857, 'f1-score': 0.9688888888888889, 'support': 224}, 'accuracy': 0.9434954007884363, 'macro avg': {'precision': 0.9447017720035985, 'recall': 0.945216744561443, 'f1-score': 0.9449506150596689, 'support': 761}, 'weighted avg': {'precision': 0.9434169513880108, 'recall': 0.9434954007884363, 'f1-score': 0.9434482162802315, 'support': 761}} | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("mogaio/pr_ebsa_fr_tran_merged25_e1_beginning_offsets") # Run inference preds = model("Adil Hussain Adil Hussain est reconnaissant d'avoir reçu l'enseignement de l'acteur Naseeruddin Shah à l'époque où il fréquentait l'École nationale d'art dramatique") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:---------|:-----| | Word count | 9 | 247.2638 | 2089 | | Label | Training Sample Count | |:------|:----------------------| | neg | 913 | | obj | 1216 | | pos | 911 | ### Training Hyperparameters - batch_size: (8, 8) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 1 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0013 | 1 | 0.3703 | - | | 0.0658 | 50 | 0.3145 | - | | 0.1316 | 100 | 0.1839 | - | | 0.1974 | 150 | 0.2558 | - | | 0.2632 | 200 | 0.2683 | - | | 0.3289 | 250 | 0.1572 | - | | 0.3947 | 300 | 0.1953 | - | | 0.4605 | 350 | 0.171 | - | | 0.5263 | 400 | 0.2326 | - | | 0.5921 | 450 | 0.1762 | - | | 0.6579 | 500 | 0.2818 | - | | 0.7237 | 550 | 0.2733 | - | | 0.7895 | 600 | 0.195 | - | | 0.8553 | 650 | 0.2104 | - | | 0.9211 | 700 | 0.2124 | - | | 0.9868 | 750 | 0.0818 | - | | 1.0526 | 800 | 0.1046 | - | | 1.1184 | 850 | 0.1633 | - | | 1.1842 | 900 | 0.3207 | - | | 1.25 | 950 | 0.2703 | - | | 1.3158 | 1000 | 0.1934 | - | | 1.3816 | 1050 | 0.2547 | - | | 1.4474 | 1100 | 0.0933 | - | | 1.5132 | 1150 | 0.2102 | - | | 1.5789 | 1200 | 0.0699 | - | | 1.6447 | 1250 | 0.1778 | - | | 1.7105 | 1300 | 0.1796 | - | | 1.7763 | 1350 | 0.0221 | - | | 1.8421 | 1400 | 0.2154 | - | | 1.9079 | 1450 | 0.1683 | - | | 1.9737 | 1500 | 0.3096 | - | | 2.0395 | 1550 | 0.201 | - | | 2.1053 | 1600 | 0.1954 | - | | 2.1711 | 1650 | 0.2301 | - | | 2.2368 | 1700 | 0.1141 | - | | 2.3026 | 1750 | 0.1949 | - | | 2.3684 | 1800 | 0.164 | - | | 2.4342 | 1850 | 0.2307 | - | | 2.5 | 1900 | 0.1912 | - | | 2.5658 | 1950 | 0.2349 | - | | 2.6316 | 2000 | 0.0922 | - | | 2.6974 | 2050 | 0.0702 | - | | 2.7632 | 2100 | 0.1089 | - | | 2.8289 | 2150 | 0.1711 | - | | 2.8947 | 2200 | 0.1432 | - | | 2.9605 | 2250 | 0.2739 | - | | 3.0263 | 2300 | 0.1889 | - | | 3.0921 | 2350 | 0.1036 | - | | 3.1579 | 2400 | 0.1372 | - | | 3.2237 | 2450 | 0.028 | - | | 3.2895 | 2500 | 0.1739 | - | | 3.3553 | 2550 | 0.142 | - | | 3.4211 | 2600 | 0.0838 | - | | 3.4868 | 2650 | 0.0657 | - | | 3.5526 | 2700 | 0.0054 | - | | 3.6184 | 2750 | 0.0426 | - | | 3.6842 | 2800 | 0.1974 | - | | 3.75 | 2850 | 0.0279 | - | | 3.8158 | 2900 | 0.1326 | - | | 3.8816 | 2950 | 0.1614 | - | | 3.9474 | 3000 | 0.1251 | - | | 4.0132 | 3050 | 0.1174 | - | | 4.0789 | 3100 | 0.1948 | - | | 4.1447 | 3150 | 0.0555 | - | | 4.2105 | 3200 | 0.0064 | - | | 4.2763 | 3250 | 0.064 | - | | 4.3421 | 3300 | 0.0013 | - | | 4.4079 | 3350 | 0.135 | - | | 4.4737 | 3400 | 0.0574 | - | | 4.5395 | 3450 | 0.174 | - | | 4.6053 | 3500 | 0.2199 | - | | 4.6711 | 3550 | 0.387 | - | | 4.7368 | 3600 | 0.114 | - | | 4.8026 | 3650 | 0.0853 | - | | 4.8684 | 3700 | 0.0325 | - | | 4.9342 | 3750 | 0.019 | - | | 5.0 | 3800 | 0.0572 | - | | 0.0013 | 1 | 0.1435 | - | | 0.0658 | 50 | 0.0969 | - | | 0.1316 | 100 | 0.1085 | - | | 0.1974 | 150 | 0.0271 | - | | 0.2632 | 200 | 0.0138 | - | | 0.3289 | 250 | 0.058 | - | | 0.3947 | 300 | 0.1205 | - | | 0.4605 | 350 | 0.0788 | - | | 0.5263 | 400 | 0.1449 | - | | 0.5921 | 450 | 0.0383 | - | | 0.6579 | 500 | 0.0338 | - | | 0.7237 | 550 | 0.1253 | - | | 0.7895 | 600 | 0.069 | - | | 0.8553 | 650 | 0.104 | - | | 0.9211 | 700 | 0.0462 | - | | 0.9868 | 750 | 0.1975 | - | | 1.0526 | 800 | 0.0241 | - | | 1.1184 | 850 | 0.0426 | - | | 1.1842 | 900 | 0.0519 | - | | 1.25 | 950 | 0.0815 | - | | 1.3158 | 1000 | 0.1839 | - | | 1.3816 | 1050 | 0.0198 | - | | 1.4474 | 1100 | 0.0128 | - | | 1.5132 | 1150 | 0.1645 | - | | 1.5789 | 1200 | 0.0019 | - | | 1.6447 | 1250 | 0.0557 | - | | 1.7105 | 1300 | 0.0098 | - | | 1.7763 | 1350 | 0.001 | - | | 1.8421 | 1400 | 0.1557 | - | | 1.9079 | 1450 | 0.1286 | - | | 1.9737 | 1500 | 0.094 | - | | 2.0395 | 1550 | 0.0059 | - | | 2.1053 | 1600 | 0.0227 | - | | 2.1711 | 1650 | 0.0899 | - | | 2.2368 | 1700 | 0.0053 | - | | 2.3026 | 1750 | 0.0021 | - | | 2.3684 | 1800 | 0.0114 | - | | 2.4342 | 1850 | 0.1163 | - | | 2.5 | 1900 | 0.0959 | - | | 2.5658 | 1950 | 0.0252 | - | | 2.6316 | 2000 | 0.0921 | - | | 2.6974 | 2050 | 0.1159 | - | | 2.7632 | 2100 | 0.0026 | - | | 2.8289 | 2150 | 0.1211 | - | | 2.8947 | 2200 | 0.1843 | - | | 2.9605 | 2250 | 0.0014 | - | | 3.0263 | 2300 | 0.0085 | - | | 3.0921 | 2350 | 0.0839 | - | | 3.1579 | 2400 | 0.2372 | - | | 3.2237 | 2450 | 0.0213 | - | | 3.2895 | 2500 | 0.0155 | - | | 3.3553 | 2550 | 0.1128 | - | | 3.4211 | 2600 | 0.0945 | - | | 3.4868 | 2650 | 0.0917 | - | | 3.5526 | 2700 | 0.0011 | - | | 3.6184 | 2750 | 0.0024 | - | | 3.6842 | 2800 | 0.0044 | - | | 3.75 | 2850 | 0.121 | - | | 3.8158 | 2900 | 0.0056 | - | | 3.8816 | 2950 | 0.003 | - | | 3.9474 | 3000 | 0.0899 | - | | 4.0132 | 3050 | 0.0157 | - | | 4.0789 | 3100 | 0.1188 | - | | 4.1447 | 3150 | 0.001 | - | | 4.2105 | 3200 | 0.0222 | - | | 4.2763 | 3250 | 0.1209 | - | | 4.3421 | 3300 | 0.1085 | - | | 4.4079 | 3350 | 0.0054 | - | | 4.4737 | 3400 | 0.0009 | - | | 4.5395 | 3450 | 0.0015 | - | | 4.6053 | 3500 | 0.003 | - | | 4.6711 | 3550 | 0.0009 | - | | 4.7368 | 3600 | 0.0003 | - | | 4.8026 | 3650 | 0.0009 | - | | 4.8684 | 3700 | 0.03 | - | | 4.9342 | 3750 | 0.1206 | - | | 5.0 | 3800 | 0.0003 | - | | 0.0013 | 1 | 0.2045 | - | | 0.0658 | 50 | 0.0078 | - | | 0.1316 | 100 | 0.0087 | - | | 0.1974 | 150 | 0.0386 | - | | 0.2632 | 200 | 0.1015 | - | | 0.3289 | 250 | 0.0022 | - | | 0.3947 | 300 | 0.0291 | - | | 0.4605 | 350 | 0.0013 | - | | 0.5263 | 400 | 0.0022 | - | | 0.5921 | 450 | 0.1324 | - | | 0.6579 | 500 | 0.113 | - | | 0.7237 | 550 | 0.0011 | - | | 0.7895 | 600 | 0.1723 | - | | 0.8553 | 650 | 0.0049 | - | | 0.9211 | 700 | 0.206 | - | | 0.9868 | 750 | 0.1683 | - | | 1.0526 | 800 | 0.0954 | - | | 1.1184 | 850 | 0.018 | - | | 1.1842 | 900 | 0.1854 | - | | 1.25 | 950 | 0.0342 | - | | 1.3158 | 1000 | 0.0015 | - | | 1.3816 | 1050 | 0.0062 | - | | 1.4474 | 1100 | 0.1187 | - | | 1.5132 | 1150 | 0.0048 | - | | 1.5789 | 1200 | 0.0011 | - | | 1.6447 | 1250 | 0.002 | - | | 1.7105 | 1300 | 0.092 | - | | 1.7763 | 1350 | 0.1245 | - | | 1.8421 | 1400 | 0.0009 | - | | 1.9079 | 1450 | 0.1185 | - | | 1.9737 | 1500 | 0.0017 | - | | 2.0395 | 1550 | 0.008 | - | | 2.1053 | 1600 | 0.0049 | - | | 2.1711 | 1650 | 0.0083 | - | | 2.2368 | 1700 | 0.0026 | - | | 2.3026 | 1750 | 0.0081 | - | | 2.3684 | 1800 | 0.0036 | - | | 2.4342 | 1850 | 0.0016 | - | | 2.5 | 1900 | 0.0017 | - | | 2.5658 | 1950 | 0.0014 | - | | 2.6316 | 2000 | 0.0017 | - | | 2.6974 | 2050 | 0.002 | - | | 2.7632 | 2100 | 0.1022 | - | | 2.8289 | 2150 | 0.0004 | - | | 2.8947 | 2200 | 0.0007 | - | | 2.9605 | 2250 | 0.0794 | - | | 3.0263 | 2300 | 0.0183 | - | | 3.0921 | 2350 | 0.0377 | - | | 3.1579 | 2400 | 0.029 | - | | 3.2237 | 2450 | 0.0003 | - | | 3.2895 | 2500 | 0.0961 | - | | 3.3553 | 2550 | 0.0008 | - | | 3.4211 | 2600 | 0.0873 | - | | 3.4868 | 2650 | 0.0501 | - | | 3.5526 | 2700 | 0.0029 | - | | 3.6184 | 2750 | 0.0008 | - | | 3.6842 | 2800 | 0.0004 | - | | 3.75 | 2850 | 0.0011 | - | | 3.8158 | 2900 | 0.0518 | - | | 3.8816 | 2950 | 0.0002 | - | | 3.9474 | 3000 | 0.1115 | - | | 4.0132 | 3050 | 0.0129 | - | | 4.0789 | 3100 | 0.0005 | - | | 4.1447 | 3150 | 0.0012 | - | | 4.2105 | 3200 | 0.1086 | - | | 4.2763 | 3250 | 0.0199 | - | | 4.3421 | 3300 | 0.0004 | - | | 4.4079 | 3350 | 0.0001 | - | | 4.4737 | 3400 | 0.0832 | - | | 4.5395 | 3450 | 0.0003 | - | | 4.6053 | 3500 | 0.0041 | - | | 4.6711 | 3550 | 0.1146 | - | | 4.7368 | 3600 | 0.0027 | - | | 4.8026 | 3650 | 0.0002 | - | | 4.8684 | 3700 | 0.0544 | - | | 4.9342 | 3750 | 0.0002 | - | | 5.0 | 3800 | 0.0046 | - | | 0.0013 | 1 | 0.0015 | - | | 0.0658 | 50 | 0.1973 | - | | 0.1316 | 100 | 0.0106 | - | | 0.1974 | 150 | 0.0744 | - | | 0.2632 | 200 | 0.1033 | - | | 0.3289 | 250 | 0.0425 | - | | 0.3947 | 300 | 0.1125 | - | | 0.4605 | 350 | 0.0018 | - | | 0.5263 | 400 | 0.0019 | - | | 0.5921 | 450 | 0.0002 | - | | 0.6579 | 500 | 0.0007 | - | | 0.7237 | 550 | 0.1393 | - | | 0.7895 | 600 | 0.0002 | - | | 0.8553 | 650 | 0.0043 | - | | 0.9211 | 700 | 0.0339 | - | | 0.9868 | 750 | 0.0002 | - | | 0.0013 | 1 | 0.0007 | - | | 0.0658 | 50 | 0.0419 | - | | 0.1316 | 100 | 0.0068 | - | | 0.1974 | 150 | 0.1401 | - | | 0.2632 | 200 | 0.0423 | - | | 0.3289 | 250 | 0.1122 | - | | 0.3947 | 300 | 0.0037 | - | | 0.4605 | 350 | 0.005 | - | | 0.5263 | 400 | 0.0006 | - | | 0.5921 | 450 | 0.0006 | - | | 0.6579 | 500 | 0.0016 | - | | 0.7237 | 550 | 0.1244 | - | | 0.7895 | 600 | 0.0016 | - | | 0.8553 | 650 | 0.0028 | - | | 0.9211 | 700 | 0.002 | - | | 0.9868 | 750 | 0.057 | - | | 0.0013 | 1 | 0.1396 | - | | 0.0658 | 50 | 0.0366 | - | | 0.1316 | 100 | 0.0021 | - | | 0.1974 | 150 | 0.1088 | - | | 0.2632 | 200 | 0.0449 | - | | 0.3289 | 250 | 0.0187 | - | | 0.3947 | 300 | 0.0017 | - | | 0.4605 | 350 | 0.1262 | - | | 0.5263 | 400 | 0.0052 | - | | 0.5921 | 450 | 0.1188 | - | | 0.6579 | 500 | 0.0002 | - | | 0.7237 | 550 | 0.0006 | - | | 0.7895 | 600 | 0.0758 | - | | 0.8553 | 650 | 0.025 | - | | 0.9211 | 700 | 0.0052 | - | | 0.9868 | 750 | 0.1985 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.1 - Sentence Transformers: 2.2.2 - Transformers: 4.35.2 - PyTorch: 2.1.0+cu121 - Datasets: 2.15.0 - Tokenizers: 0.15.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION" ]
[ "CAS" ]
Non_BioNLP
sabafallah/bge-large-en-v1.5-Q4_K_M-GGUF
sabafallah
feature-extraction
[ "sentence-transformers", "gguf", "feature-extraction", "sentence-similarity", "transformers", "mteb", "llama-cpp", "gguf-my-repo", "en", "base_model:BAAI/bge-large-en-v1.5", "base_model:quantized:BAAI/bge-large-en-v1.5", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,725
1,739
24
0
--- base_model: BAAI/bge-large-en-v1.5 language: - en license: mit tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - mteb - llama-cpp - gguf-my-repo model-index: - name: bge-large-en-v1.5 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 75.8507462686567 - type: ap value: 38.566457320228245 - type: f1 value: 69.69386648043475 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 92.416675 - type: ap value: 89.1928861155922 - type: f1 value: 92.39477019574215 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.175999999999995 - type: f1 value: 47.80712792870253 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 40.184999999999995 - type: map_at_10 value: 55.654 - type: map_at_100 value: 56.25 - type: map_at_1000 value: 56.255 - type: map_at_3 value: 51.742999999999995 - type: map_at_5 value: 54.129000000000005 - type: mrr_at_1 value: 40.967 - type: mrr_at_10 value: 55.96 - type: mrr_at_100 value: 56.54900000000001 - type: mrr_at_1000 value: 56.554 - type: mrr_at_3 value: 51.980000000000004 - type: mrr_at_5 value: 54.44 - type: ndcg_at_1 value: 40.184999999999995 - type: ndcg_at_10 value: 63.542 - type: ndcg_at_100 value: 65.96499999999999 - type: ndcg_at_1000 value: 66.08699999999999 - type: ndcg_at_3 value: 55.582 - type: ndcg_at_5 value: 59.855000000000004 - type: precision_at_1 value: 40.184999999999995 - type: precision_at_10 value: 8.841000000000001 - type: precision_at_100 value: 0.987 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.238 - type: precision_at_5 value: 15.405 - type: recall_at_1 value: 40.184999999999995 - type: recall_at_10 value: 88.407 - type: recall_at_100 value: 98.72 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 66.714 - type: recall_at_5 value: 77.027 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 48.567077926750066 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 43.19453389182364 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 64.46555939623092 - type: mrr value: 77.82361605768807 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 84.9554128814735 - type: cos_sim_spearman value: 84.65373612172036 - type: euclidean_pearson value: 83.2905059954138 - type: euclidean_spearman value: 84.52240782811128 - type: manhattan_pearson value: 82.99533802997436 - type: manhattan_spearman value: 84.20673798475734 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 87.78896103896103 - type: f1 value: 87.77189310964883 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 39.714538337650495 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 36.90108349284447 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 32.795 - type: map_at_10 value: 43.669000000000004 - type: map_at_100 value: 45.151 - type: map_at_1000 value: 45.278 - type: map_at_3 value: 40.006 - type: map_at_5 value: 42.059999999999995 - type: mrr_at_1 value: 39.771 - type: mrr_at_10 value: 49.826 - type: mrr_at_100 value: 50.504000000000005 - type: mrr_at_1000 value: 50.549 - type: mrr_at_3 value: 47.115 - type: mrr_at_5 value: 48.832 - type: ndcg_at_1 value: 39.771 - type: ndcg_at_10 value: 50.217999999999996 - type: ndcg_at_100 value: 55.454 - type: ndcg_at_1000 value: 57.37 - type: ndcg_at_3 value: 44.885000000000005 - type: ndcg_at_5 value: 47.419 - type: precision_at_1 value: 39.771 - type: precision_at_10 value: 9.642000000000001 - type: precision_at_100 value: 1.538 - type: precision_at_1000 value: 0.198 - type: precision_at_3 value: 21.268 - type: precision_at_5 value: 15.536 - type: recall_at_1 value: 32.795 - type: recall_at_10 value: 62.580999999999996 - type: recall_at_100 value: 84.438 - type: recall_at_1000 value: 96.492 - type: recall_at_3 value: 47.071000000000005 - type: recall_at_5 value: 54.079 - type: map_at_1 value: 32.671 - type: map_at_10 value: 43.334 - type: map_at_100 value: 44.566 - type: map_at_1000 value: 44.702999999999996 - type: map_at_3 value: 40.343 - type: map_at_5 value: 41.983 - type: mrr_at_1 value: 40.764 - type: mrr_at_10 value: 49.382 - type: mrr_at_100 value: 49.988 - type: mrr_at_1000 value: 50.03300000000001 - type: mrr_at_3 value: 47.293 - type: mrr_at_5 value: 48.51 - type: ndcg_at_1 value: 40.764 - type: ndcg_at_10 value: 49.039 - type: ndcg_at_100 value: 53.259 - type: ndcg_at_1000 value: 55.253 - type: ndcg_at_3 value: 45.091 - type: ndcg_at_5 value: 46.839999999999996 - type: precision_at_1 value: 40.764 - type: precision_at_10 value: 9.191 - type: precision_at_100 value: 1.476 - type: precision_at_1000 value: 0.19499999999999998 - type: precision_at_3 value: 21.72 - type: precision_at_5 value: 15.299 - type: recall_at_1 value: 32.671 - type: recall_at_10 value: 58.816 - type: recall_at_100 value: 76.654 - type: recall_at_1000 value: 89.05999999999999 - type: recall_at_3 value: 46.743 - type: recall_at_5 value: 51.783 - type: map_at_1 value: 40.328 - type: map_at_10 value: 53.32599999999999 - type: map_at_100 value: 54.37499999999999 - type: map_at_1000 value: 54.429 - type: map_at_3 value: 49.902 - type: map_at_5 value: 52.002 - type: mrr_at_1 value: 46.332 - type: mrr_at_10 value: 56.858 - type: mrr_at_100 value: 57.522 - type: mrr_at_1000 value: 57.54899999999999 - type: mrr_at_3 value: 54.472 - type: mrr_at_5 value: 55.996 - type: ndcg_at_1 value: 46.332 - type: ndcg_at_10 value: 59.313 - type: ndcg_at_100 value: 63.266999999999996 - type: ndcg_at_1000 value: 64.36 - type: ndcg_at_3 value: 53.815000000000005 - type: ndcg_at_5 value: 56.814 - type: precision_at_1 value: 46.332 - type: precision_at_10 value: 9.53 - type: precision_at_100 value: 1.238 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 24.054000000000002 - type: precision_at_5 value: 16.589000000000002 - type: recall_at_1 value: 40.328 - type: recall_at_10 value: 73.421 - type: recall_at_100 value: 90.059 - type: recall_at_1000 value: 97.81 - type: recall_at_3 value: 59.009 - type: recall_at_5 value: 66.352 - type: map_at_1 value: 27.424 - type: map_at_10 value: 36.332 - type: map_at_100 value: 37.347 - type: map_at_1000 value: 37.422 - type: map_at_3 value: 33.743 - type: map_at_5 value: 35.176 - type: mrr_at_1 value: 29.153000000000002 - type: mrr_at_10 value: 38.233 - type: mrr_at_100 value: 39.109 - type: mrr_at_1000 value: 39.164 - type: mrr_at_3 value: 35.876000000000005 - type: mrr_at_5 value: 37.169000000000004 - type: ndcg_at_1 value: 29.153000000000002 - type: ndcg_at_10 value: 41.439 - type: ndcg_at_100 value: 46.42 - type: ndcg_at_1000 value: 48.242000000000004 - type: ndcg_at_3 value: 36.362 - type: ndcg_at_5 value: 38.743 - type: precision_at_1 value: 29.153000000000002 - type: precision_at_10 value: 6.315999999999999 - type: precision_at_100 value: 0.927 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 15.443000000000001 - type: precision_at_5 value: 10.644 - type: recall_at_1 value: 27.424 - type: recall_at_10 value: 55.364000000000004 - type: recall_at_100 value: 78.211 - type: recall_at_1000 value: 91.74600000000001 - type: recall_at_3 value: 41.379 - type: recall_at_5 value: 47.14 - type: map_at_1 value: 19.601 - type: map_at_10 value: 27.826 - type: map_at_100 value: 29.017 - type: map_at_1000 value: 29.137 - type: map_at_3 value: 25.125999999999998 - type: map_at_5 value: 26.765 - type: mrr_at_1 value: 24.005000000000003 - type: mrr_at_10 value: 32.716 - type: mrr_at_100 value: 33.631 - type: mrr_at_1000 value: 33.694 - type: mrr_at_3 value: 29.934 - type: mrr_at_5 value: 31.630999999999997 - type: ndcg_at_1 value: 24.005000000000003 - type: ndcg_at_10 value: 33.158 - type: ndcg_at_100 value: 38.739000000000004 - type: ndcg_at_1000 value: 41.495 - type: ndcg_at_3 value: 28.185 - type: ndcg_at_5 value: 30.796 - type: precision_at_1 value: 24.005000000000003 - type: precision_at_10 value: 5.908 - type: precision_at_100 value: 1.005 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 13.391 - type: precision_at_5 value: 9.876 - type: recall_at_1 value: 19.601 - type: recall_at_10 value: 44.746 - type: recall_at_100 value: 68.82300000000001 - type: recall_at_1000 value: 88.215 - type: recall_at_3 value: 31.239 - type: recall_at_5 value: 37.695 - type: map_at_1 value: 30.130000000000003 - type: map_at_10 value: 40.96 - type: map_at_100 value: 42.282 - type: map_at_1000 value: 42.392 - type: map_at_3 value: 37.889 - type: map_at_5 value: 39.661 - type: mrr_at_1 value: 36.958999999999996 - type: mrr_at_10 value: 46.835 - type: mrr_at_100 value: 47.644 - type: mrr_at_1000 value: 47.688 - type: mrr_at_3 value: 44.562000000000005 - type: mrr_at_5 value: 45.938 - type: ndcg_at_1 value: 36.958999999999996 - type: ndcg_at_10 value: 47.06 - type: ndcg_at_100 value: 52.345 - type: ndcg_at_1000 value: 54.35 - type: ndcg_at_3 value: 42.301 - type: ndcg_at_5 value: 44.635999999999996 - type: precision_at_1 value: 36.958999999999996 - type: precision_at_10 value: 8.479000000000001 - type: precision_at_100 value: 1.284 - type: precision_at_1000 value: 0.163 - type: precision_at_3 value: 20.244 - type: precision_at_5 value: 14.224999999999998 - type: recall_at_1 value: 30.130000000000003 - type: recall_at_10 value: 59.27 - type: recall_at_100 value: 81.195 - type: recall_at_1000 value: 94.21199999999999 - type: recall_at_3 value: 45.885 - type: recall_at_5 value: 52.016 - type: map_at_1 value: 26.169999999999998 - type: map_at_10 value: 36.451 - type: map_at_100 value: 37.791000000000004 - type: map_at_1000 value: 37.897 - type: map_at_3 value: 33.109 - type: map_at_5 value: 34.937000000000005 - type: mrr_at_1 value: 32.877 - type: mrr_at_10 value: 42.368 - type: mrr_at_100 value: 43.201 - type: mrr_at_1000 value: 43.259 - type: mrr_at_3 value: 39.763999999999996 - type: mrr_at_5 value: 41.260000000000005 - type: ndcg_at_1 value: 32.877 - type: ndcg_at_10 value: 42.659000000000006 - type: ndcg_at_100 value: 48.161 - type: ndcg_at_1000 value: 50.345 - type: ndcg_at_3 value: 37.302 - type: ndcg_at_5 value: 39.722 - type: precision_at_1 value: 32.877 - type: precision_at_10 value: 7.9 - type: precision_at_100 value: 1.236 - type: precision_at_1000 value: 0.158 - type: precision_at_3 value: 17.846 - type: precision_at_5 value: 12.9 - type: recall_at_1 value: 26.169999999999998 - type: recall_at_10 value: 55.35 - type: recall_at_100 value: 78.755 - type: recall_at_1000 value: 93.518 - type: recall_at_3 value: 40.176 - type: recall_at_5 value: 46.589000000000006 - type: map_at_1 value: 27.15516666666667 - type: map_at_10 value: 36.65741666666667 - type: map_at_100 value: 37.84991666666666 - type: map_at_1000 value: 37.96316666666667 - type: map_at_3 value: 33.74974999999999 - type: map_at_5 value: 35.3765 - type: mrr_at_1 value: 32.08233333333334 - type: mrr_at_10 value: 41.033833333333334 - type: mrr_at_100 value: 41.84524999999999 - type: mrr_at_1000 value: 41.89983333333333 - type: mrr_at_3 value: 38.62008333333333 - type: mrr_at_5 value: 40.03441666666666 - type: ndcg_at_1 value: 32.08233333333334 - type: ndcg_at_10 value: 42.229 - type: ndcg_at_100 value: 47.26716666666667 - type: ndcg_at_1000 value: 49.43466666666667 - type: ndcg_at_3 value: 37.36408333333333 - type: ndcg_at_5 value: 39.6715 - type: precision_at_1 value: 32.08233333333334 - type: precision_at_10 value: 7.382583333333334 - type: precision_at_100 value: 1.16625 - type: precision_at_1000 value: 0.15408333333333332 - type: precision_at_3 value: 17.218 - type: precision_at_5 value: 12.21875 - type: recall_at_1 value: 27.15516666666667 - type: recall_at_10 value: 54.36683333333333 - type: recall_at_100 value: 76.37183333333333 - type: recall_at_1000 value: 91.26183333333333 - type: recall_at_3 value: 40.769916666666674 - type: recall_at_5 value: 46.702333333333335 - type: map_at_1 value: 25.749 - type: map_at_10 value: 33.001999999999995 - type: map_at_100 value: 33.891 - type: map_at_1000 value: 33.993 - type: map_at_3 value: 30.703999999999997 - type: map_at_5 value: 31.959 - type: mrr_at_1 value: 28.834 - type: mrr_at_10 value: 35.955 - type: mrr_at_100 value: 36.709 - type: mrr_at_1000 value: 36.779 - type: mrr_at_3 value: 33.947 - type: mrr_at_5 value: 35.089 - type: ndcg_at_1 value: 28.834 - type: ndcg_at_10 value: 37.329 - type: ndcg_at_100 value: 41.79 - type: ndcg_at_1000 value: 44.169000000000004 - type: ndcg_at_3 value: 33.184999999999995 - type: ndcg_at_5 value: 35.107 - type: precision_at_1 value: 28.834 - type: precision_at_10 value: 5.7669999999999995 - type: precision_at_100 value: 0.876 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 14.213000000000001 - type: precision_at_5 value: 9.754999999999999 - type: recall_at_1 value: 25.749 - type: recall_at_10 value: 47.791 - type: recall_at_100 value: 68.255 - type: recall_at_1000 value: 85.749 - type: recall_at_3 value: 36.199 - type: recall_at_5 value: 41.071999999999996 - type: map_at_1 value: 17.777 - type: map_at_10 value: 25.201 - type: map_at_100 value: 26.423999999999996 - type: map_at_1000 value: 26.544 - type: map_at_3 value: 22.869 - type: map_at_5 value: 24.023 - type: mrr_at_1 value: 21.473 - type: mrr_at_10 value: 29.12 - type: mrr_at_100 value: 30.144 - type: mrr_at_1000 value: 30.215999999999998 - type: mrr_at_3 value: 26.933 - type: mrr_at_5 value: 28.051 - type: ndcg_at_1 value: 21.473 - type: ndcg_at_10 value: 30.003 - type: ndcg_at_100 value: 35.766 - type: ndcg_at_1000 value: 38.501000000000005 - type: ndcg_at_3 value: 25.773000000000003 - type: ndcg_at_5 value: 27.462999999999997 - type: precision_at_1 value: 21.473 - type: precision_at_10 value: 5.482 - type: precision_at_100 value: 0.975 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 12.205 - type: precision_at_5 value: 8.692 - type: recall_at_1 value: 17.777 - type: recall_at_10 value: 40.582 - type: recall_at_100 value: 66.305 - type: recall_at_1000 value: 85.636 - type: recall_at_3 value: 28.687 - type: recall_at_5 value: 33.089 - type: map_at_1 value: 26.677 - type: map_at_10 value: 36.309000000000005 - type: map_at_100 value: 37.403999999999996 - type: map_at_1000 value: 37.496 - type: map_at_3 value: 33.382 - type: map_at_5 value: 34.98 - type: mrr_at_1 value: 31.343 - type: mrr_at_10 value: 40.549 - type: mrr_at_100 value: 41.342 - type: mrr_at_1000 value: 41.397 - type: mrr_at_3 value: 38.029 - type: mrr_at_5 value: 39.451 - type: ndcg_at_1 value: 31.343 - type: ndcg_at_10 value: 42.1 - type: ndcg_at_100 value: 47.089999999999996 - type: ndcg_at_1000 value: 49.222 - type: ndcg_at_3 value: 36.836999999999996 - type: ndcg_at_5 value: 39.21 - type: precision_at_1 value: 31.343 - type: precision_at_10 value: 7.164 - type: precision_at_100 value: 1.0959999999999999 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 16.915 - type: precision_at_5 value: 11.940000000000001 - type: recall_at_1 value: 26.677 - type: recall_at_10 value: 55.54599999999999 - type: recall_at_100 value: 77.094 - type: recall_at_1000 value: 92.01 - type: recall_at_3 value: 41.191 - type: recall_at_5 value: 47.006 - type: map_at_1 value: 24.501 - type: map_at_10 value: 33.102 - type: map_at_100 value: 34.676 - type: map_at_1000 value: 34.888000000000005 - type: map_at_3 value: 29.944 - type: map_at_5 value: 31.613999999999997 - type: mrr_at_1 value: 29.447000000000003 - type: mrr_at_10 value: 37.996 - type: mrr_at_100 value: 38.946 - type: mrr_at_1000 value: 38.995000000000005 - type: mrr_at_3 value: 35.079 - type: mrr_at_5 value: 36.69 - type: ndcg_at_1 value: 29.447000000000003 - type: ndcg_at_10 value: 39.232 - type: ndcg_at_100 value: 45.247 - type: ndcg_at_1000 value: 47.613 - type: ndcg_at_3 value: 33.922999999999995 - type: ndcg_at_5 value: 36.284 - type: precision_at_1 value: 29.447000000000003 - type: precision_at_10 value: 7.648000000000001 - type: precision_at_100 value: 1.516 - type: precision_at_1000 value: 0.23900000000000002 - type: precision_at_3 value: 16.008 - type: precision_at_5 value: 11.779 - type: recall_at_1 value: 24.501 - type: recall_at_10 value: 51.18899999999999 - type: recall_at_100 value: 78.437 - type: recall_at_1000 value: 92.842 - type: recall_at_3 value: 35.808 - type: recall_at_5 value: 42.197 - type: map_at_1 value: 22.039 - type: map_at_10 value: 30.377 - type: map_at_100 value: 31.275 - type: map_at_1000 value: 31.379 - type: map_at_3 value: 27.98 - type: map_at_5 value: 29.358 - type: mrr_at_1 value: 24.03 - type: mrr_at_10 value: 32.568000000000005 - type: mrr_at_100 value: 33.403 - type: mrr_at_1000 value: 33.475 - type: mrr_at_3 value: 30.436999999999998 - type: mrr_at_5 value: 31.796000000000003 - type: ndcg_at_1 value: 24.03 - type: ndcg_at_10 value: 35.198 - type: ndcg_at_100 value: 39.668 - type: ndcg_at_1000 value: 42.296 - type: ndcg_at_3 value: 30.709999999999997 - type: ndcg_at_5 value: 33.024 - type: precision_at_1 value: 24.03 - type: precision_at_10 value: 5.564 - type: precision_at_100 value: 0.828 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 13.309000000000001 - type: precision_at_5 value: 9.39 - type: recall_at_1 value: 22.039 - type: recall_at_10 value: 47.746 - type: recall_at_100 value: 68.23599999999999 - type: recall_at_1000 value: 87.852 - type: recall_at_3 value: 35.852000000000004 - type: recall_at_5 value: 41.410000000000004 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 15.692999999999998 - type: map_at_10 value: 26.903 - type: map_at_100 value: 28.987000000000002 - type: map_at_1000 value: 29.176999999999996 - type: map_at_3 value: 22.137 - type: map_at_5 value: 24.758 - type: mrr_at_1 value: 35.57 - type: mrr_at_10 value: 47.821999999999996 - type: mrr_at_100 value: 48.608000000000004 - type: mrr_at_1000 value: 48.638999999999996 - type: mrr_at_3 value: 44.452000000000005 - type: mrr_at_5 value: 46.546 - type: ndcg_at_1 value: 35.57 - type: ndcg_at_10 value: 36.567 - type: ndcg_at_100 value: 44.085 - type: ndcg_at_1000 value: 47.24 - type: ndcg_at_3 value: 29.964000000000002 - type: ndcg_at_5 value: 32.511 - type: precision_at_1 value: 35.57 - type: precision_at_10 value: 11.485 - type: precision_at_100 value: 1.9619999999999997 - type: precision_at_1000 value: 0.256 - type: precision_at_3 value: 22.237000000000002 - type: precision_at_5 value: 17.471999999999998 - type: recall_at_1 value: 15.692999999999998 - type: recall_at_10 value: 43.056 - type: recall_at_100 value: 68.628 - type: recall_at_1000 value: 86.075 - type: recall_at_3 value: 26.918999999999997 - type: recall_at_5 value: 34.14 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 9.53 - type: map_at_10 value: 20.951 - type: map_at_100 value: 30.136000000000003 - type: map_at_1000 value: 31.801000000000002 - type: map_at_3 value: 15.021 - type: map_at_5 value: 17.471999999999998 - type: mrr_at_1 value: 71.0 - type: mrr_at_10 value: 79.176 - type: mrr_at_100 value: 79.418 - type: mrr_at_1000 value: 79.426 - type: mrr_at_3 value: 78.125 - type: mrr_at_5 value: 78.61200000000001 - type: ndcg_at_1 value: 58.5 - type: ndcg_at_10 value: 44.106 - type: ndcg_at_100 value: 49.268 - type: ndcg_at_1000 value: 56.711999999999996 - type: ndcg_at_3 value: 48.934 - type: ndcg_at_5 value: 45.826 - type: precision_at_1 value: 71.0 - type: precision_at_10 value: 35.0 - type: precision_at_100 value: 11.360000000000001 - type: precision_at_1000 value: 2.046 - type: precision_at_3 value: 52.833 - type: precision_at_5 value: 44.15 - type: recall_at_1 value: 9.53 - type: recall_at_10 value: 26.811 - type: recall_at_100 value: 55.916999999999994 - type: recall_at_1000 value: 79.973 - type: recall_at_3 value: 16.413 - type: recall_at_5 value: 19.980999999999998 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 51.519999999999996 - type: f1 value: 46.36601294761231 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 74.413 - type: map_at_10 value: 83.414 - type: map_at_100 value: 83.621 - type: map_at_1000 value: 83.635 - type: map_at_3 value: 82.337 - type: map_at_5 value: 83.039 - type: mrr_at_1 value: 80.19800000000001 - type: mrr_at_10 value: 87.715 - type: mrr_at_100 value: 87.778 - type: mrr_at_1000 value: 87.779 - type: mrr_at_3 value: 87.106 - type: mrr_at_5 value: 87.555 - type: ndcg_at_1 value: 80.19800000000001 - type: ndcg_at_10 value: 87.182 - type: ndcg_at_100 value: 87.90299999999999 - type: ndcg_at_1000 value: 88.143 - type: ndcg_at_3 value: 85.60600000000001 - type: ndcg_at_5 value: 86.541 - type: precision_at_1 value: 80.19800000000001 - type: precision_at_10 value: 10.531 - type: precision_at_100 value: 1.113 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 32.933 - type: precision_at_5 value: 20.429 - type: recall_at_1 value: 74.413 - type: recall_at_10 value: 94.363 - type: recall_at_100 value: 97.165 - type: recall_at_1000 value: 98.668 - type: recall_at_3 value: 90.108 - type: recall_at_5 value: 92.52 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 22.701 - type: map_at_10 value: 37.122 - type: map_at_100 value: 39.178000000000004 - type: map_at_1000 value: 39.326 - type: map_at_3 value: 32.971000000000004 - type: map_at_5 value: 35.332 - type: mrr_at_1 value: 44.753 - type: mrr_at_10 value: 53.452 - type: mrr_at_100 value: 54.198 - type: mrr_at_1000 value: 54.225 - type: mrr_at_3 value: 50.952 - type: mrr_at_5 value: 52.464 - type: ndcg_at_1 value: 44.753 - type: ndcg_at_10 value: 45.021 - type: ndcg_at_100 value: 52.028 - type: ndcg_at_1000 value: 54.596000000000004 - type: ndcg_at_3 value: 41.622 - type: ndcg_at_5 value: 42.736000000000004 - type: precision_at_1 value: 44.753 - type: precision_at_10 value: 12.284 - type: precision_at_100 value: 1.955 - type: precision_at_1000 value: 0.243 - type: precision_at_3 value: 27.828999999999997 - type: precision_at_5 value: 20.061999999999998 - type: recall_at_1 value: 22.701 - type: recall_at_10 value: 51.432 - type: recall_at_100 value: 77.009 - type: recall_at_1000 value: 92.511 - type: recall_at_3 value: 37.919000000000004 - type: recall_at_5 value: 44.131 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 40.189 - type: map_at_10 value: 66.24600000000001 - type: map_at_100 value: 67.098 - type: map_at_1000 value: 67.149 - type: map_at_3 value: 62.684 - type: map_at_5 value: 64.974 - type: mrr_at_1 value: 80.378 - type: mrr_at_10 value: 86.127 - type: mrr_at_100 value: 86.29299999999999 - type: mrr_at_1000 value: 86.297 - type: mrr_at_3 value: 85.31400000000001 - type: mrr_at_5 value: 85.858 - type: ndcg_at_1 value: 80.378 - type: ndcg_at_10 value: 74.101 - type: ndcg_at_100 value: 76.993 - type: ndcg_at_1000 value: 77.948 - type: ndcg_at_3 value: 69.232 - type: ndcg_at_5 value: 72.04599999999999 - type: precision_at_1 value: 80.378 - type: precision_at_10 value: 15.595999999999998 - type: precision_at_100 value: 1.7840000000000003 - type: precision_at_1000 value: 0.191 - type: precision_at_3 value: 44.884 - type: precision_at_5 value: 29.145 - type: recall_at_1 value: 40.189 - type: recall_at_10 value: 77.981 - type: recall_at_100 value: 89.21 - type: recall_at_1000 value: 95.48299999999999 - type: recall_at_3 value: 67.326 - type: recall_at_5 value: 72.863 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 92.84599999999999 - type: ap value: 89.4710787567357 - type: f1 value: 92.83752676932258 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 23.132 - type: map_at_10 value: 35.543 - type: map_at_100 value: 36.702 - type: map_at_1000 value: 36.748999999999995 - type: map_at_3 value: 31.737 - type: map_at_5 value: 33.927 - type: mrr_at_1 value: 23.782 - type: mrr_at_10 value: 36.204 - type: mrr_at_100 value: 37.29 - type: mrr_at_1000 value: 37.330999999999996 - type: mrr_at_3 value: 32.458999999999996 - type: mrr_at_5 value: 34.631 - type: ndcg_at_1 value: 23.782 - type: ndcg_at_10 value: 42.492999999999995 - type: ndcg_at_100 value: 47.985 - type: ndcg_at_1000 value: 49.141 - type: ndcg_at_3 value: 34.748000000000005 - type: ndcg_at_5 value: 38.651 - type: precision_at_1 value: 23.782 - type: precision_at_10 value: 6.665 - type: precision_at_100 value: 0.941 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.776 - type: precision_at_5 value: 10.84 - type: recall_at_1 value: 23.132 - type: recall_at_10 value: 63.794 - type: recall_at_100 value: 89.027 - type: recall_at_1000 value: 97.807 - type: recall_at_3 value: 42.765 - type: recall_at_5 value: 52.11 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 94.59188326493388 - type: f1 value: 94.3842594786827 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 79.49384404924761 - type: f1 value: 59.7580539534629 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 77.56220578345663 - type: f1 value: 75.27228165561478 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 80.53463349024884 - type: f1 value: 80.4893958236536 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 32.56100273484962 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 31.470380028839607 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 32.06102792457849 - type: mrr value: 33.30709199672238 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.776999999999999 - type: map_at_10 value: 14.924000000000001 - type: map_at_100 value: 18.955 - type: map_at_1000 value: 20.538999999999998 - type: map_at_3 value: 10.982 - type: map_at_5 value: 12.679000000000002 - type: mrr_at_1 value: 47.988 - type: mrr_at_10 value: 57.232000000000006 - type: mrr_at_100 value: 57.818999999999996 - type: mrr_at_1000 value: 57.847 - type: mrr_at_3 value: 54.901999999999994 - type: mrr_at_5 value: 56.481 - type: ndcg_at_1 value: 46.594 - type: ndcg_at_10 value: 38.129000000000005 - type: ndcg_at_100 value: 35.54 - type: ndcg_at_1000 value: 44.172 - type: ndcg_at_3 value: 43.025999999999996 - type: ndcg_at_5 value: 41.052 - type: precision_at_1 value: 47.988 - type: precision_at_10 value: 28.111000000000004 - type: precision_at_100 value: 8.929 - type: precision_at_1000 value: 2.185 - type: precision_at_3 value: 40.144000000000005 - type: precision_at_5 value: 35.232 - type: recall_at_1 value: 6.776999999999999 - type: recall_at_10 value: 19.289 - type: recall_at_100 value: 36.359 - type: recall_at_1000 value: 67.54 - type: recall_at_3 value: 11.869 - type: recall_at_5 value: 14.999 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 31.108000000000004 - type: map_at_10 value: 47.126000000000005 - type: map_at_100 value: 48.171 - type: map_at_1000 value: 48.199 - type: map_at_3 value: 42.734 - type: map_at_5 value: 45.362 - type: mrr_at_1 value: 34.936 - type: mrr_at_10 value: 49.571 - type: mrr_at_100 value: 50.345 - type: mrr_at_1000 value: 50.363 - type: mrr_at_3 value: 45.959 - type: mrr_at_5 value: 48.165 - type: ndcg_at_1 value: 34.936 - type: ndcg_at_10 value: 55.028999999999996 - type: ndcg_at_100 value: 59.244 - type: ndcg_at_1000 value: 59.861 - type: ndcg_at_3 value: 46.872 - type: ndcg_at_5 value: 51.217999999999996 - type: precision_at_1 value: 34.936 - type: precision_at_10 value: 9.099 - type: precision_at_100 value: 1.145 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 21.456 - type: precision_at_5 value: 15.411 - type: recall_at_1 value: 31.108000000000004 - type: recall_at_10 value: 76.53999999999999 - type: recall_at_100 value: 94.39 - type: recall_at_1000 value: 98.947 - type: recall_at_3 value: 55.572 - type: recall_at_5 value: 65.525 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 71.56400000000001 - type: map_at_10 value: 85.482 - type: map_at_100 value: 86.114 - type: map_at_1000 value: 86.13 - type: map_at_3 value: 82.607 - type: map_at_5 value: 84.405 - type: mrr_at_1 value: 82.42 - type: mrr_at_10 value: 88.304 - type: mrr_at_100 value: 88.399 - type: mrr_at_1000 value: 88.399 - type: mrr_at_3 value: 87.37 - type: mrr_at_5 value: 88.024 - type: ndcg_at_1 value: 82.45 - type: ndcg_at_10 value: 89.06500000000001 - type: ndcg_at_100 value: 90.232 - type: ndcg_at_1000 value: 90.305 - type: ndcg_at_3 value: 86.375 - type: ndcg_at_5 value: 87.85300000000001 - type: precision_at_1 value: 82.45 - type: precision_at_10 value: 13.486999999999998 - type: precision_at_100 value: 1.534 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.813 - type: precision_at_5 value: 24.773999999999997 - type: recall_at_1 value: 71.56400000000001 - type: recall_at_10 value: 95.812 - type: recall_at_100 value: 99.7 - type: recall_at_1000 value: 99.979 - type: recall_at_3 value: 87.966 - type: recall_at_5 value: 92.268 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 57.241876648614145 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 64.66212576446223 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 5.308 - type: map_at_10 value: 13.803 - type: map_at_100 value: 16.176 - type: map_at_1000 value: 16.561 - type: map_at_3 value: 9.761000000000001 - type: map_at_5 value: 11.802 - type: mrr_at_1 value: 26.200000000000003 - type: mrr_at_10 value: 37.621 - type: mrr_at_100 value: 38.767 - type: mrr_at_1000 value: 38.815 - type: mrr_at_3 value: 34.117 - type: mrr_at_5 value: 36.107 - type: ndcg_at_1 value: 26.200000000000003 - type: ndcg_at_10 value: 22.64 - type: ndcg_at_100 value: 31.567 - type: ndcg_at_1000 value: 37.623 - type: ndcg_at_3 value: 21.435000000000002 - type: ndcg_at_5 value: 18.87 - type: precision_at_1 value: 26.200000000000003 - type: precision_at_10 value: 11.74 - type: precision_at_100 value: 2.465 - type: precision_at_1000 value: 0.391 - type: precision_at_3 value: 20.033 - type: precision_at_5 value: 16.64 - type: recall_at_1 value: 5.308 - type: recall_at_10 value: 23.794999999999998 - type: recall_at_100 value: 50.015 - type: recall_at_1000 value: 79.283 - type: recall_at_3 value: 12.178 - type: recall_at_5 value: 16.882 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 84.93231134675553 - type: cos_sim_spearman value: 81.68319292603205 - type: euclidean_pearson value: 81.8396814380367 - type: euclidean_spearman value: 81.24641903349945 - type: manhattan_pearson value: 81.84698799204274 - type: manhattan_spearman value: 81.24269997904105 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.73241671587446 - type: cos_sim_spearman value: 79.05091082971826 - type: euclidean_pearson value: 83.91146869578044 - type: euclidean_spearman value: 79.87978465370936 - type: manhattan_pearson value: 83.90888338917678 - type: manhattan_spearman value: 79.87482848584241 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 85.14970731146177 - type: cos_sim_spearman value: 86.37363490084627 - type: euclidean_pearson value: 83.02154218530433 - type: euclidean_spearman value: 83.80258761957367 - type: manhattan_pearson value: 83.01664495119347 - type: manhattan_spearman value: 83.77567458007952 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 83.40474139886784 - type: cos_sim_spearman value: 82.77768789165984 - type: euclidean_pearson value: 80.7065877443695 - type: euclidean_spearman value: 81.375940662505 - type: manhattan_pearson value: 80.6507552270278 - type: manhattan_spearman value: 81.32782179098741 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.08585968722274 - type: cos_sim_spearman value: 88.03110031451399 - type: euclidean_pearson value: 85.74012019602384 - type: euclidean_spearman value: 86.13592849438209 - type: manhattan_pearson value: 85.74404842369206 - type: manhattan_spearman value: 86.14492318960154 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.95069052788875 - type: cos_sim_spearman value: 86.4867991595147 - type: euclidean_pearson value: 84.31013325754635 - type: euclidean_spearman value: 85.01529258006482 - type: manhattan_pearson value: 84.26995570085374 - type: manhattan_spearman value: 84.96982104986162 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.54617647971897 - type: cos_sim_spearman value: 87.49834181751034 - type: euclidean_pearson value: 86.01015322577122 - type: euclidean_spearman value: 84.63362652063199 - type: manhattan_pearson value: 86.13807574475706 - type: manhattan_spearman value: 84.7772370721132 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 67.20047755786615 - type: cos_sim_spearman value: 67.05324077987636 - type: euclidean_pearson value: 66.91930642976601 - type: euclidean_spearman value: 65.21491856099105 - type: manhattan_pearson value: 66.78756851976624 - type: manhattan_spearman value: 65.12356257740728 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 86.19852871539686 - type: cos_sim_spearman value: 87.5161895296395 - type: euclidean_pearson value: 84.59848645207485 - type: euclidean_spearman value: 85.26427328757919 - type: manhattan_pearson value: 84.59747366996524 - type: manhattan_spearman value: 85.24045855146915 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.63320317811032 - type: mrr value: 96.26242947321379 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 60.928000000000004 - type: map_at_10 value: 70.112 - type: map_at_100 value: 70.59299999999999 - type: map_at_1000 value: 70.623 - type: map_at_3 value: 66.846 - type: map_at_5 value: 68.447 - type: mrr_at_1 value: 64.0 - type: mrr_at_10 value: 71.212 - type: mrr_at_100 value: 71.616 - type: mrr_at_1000 value: 71.64500000000001 - type: mrr_at_3 value: 68.77799999999999 - type: mrr_at_5 value: 70.094 - type: ndcg_at_1 value: 64.0 - type: ndcg_at_10 value: 74.607 - type: ndcg_at_100 value: 76.416 - type: ndcg_at_1000 value: 77.102 - type: ndcg_at_3 value: 69.126 - type: ndcg_at_5 value: 71.41300000000001 - type: precision_at_1 value: 64.0 - type: precision_at_10 value: 9.933 - type: precision_at_100 value: 1.077 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.556 - type: precision_at_5 value: 17.467 - type: recall_at_1 value: 60.928000000000004 - type: recall_at_10 value: 87.322 - type: recall_at_100 value: 94.833 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 72.628 - type: recall_at_5 value: 78.428 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.86237623762376 - type: cos_sim_ap value: 96.72586477206649 - type: cos_sim_f1 value: 93.01858362631845 - type: cos_sim_precision value: 93.4409687184662 - type: cos_sim_recall value: 92.60000000000001 - type: dot_accuracy value: 99.78019801980199 - type: dot_ap value: 93.72748205246228 - type: dot_f1 value: 89.04109589041096 - type: dot_precision value: 87.16475095785441 - type: dot_recall value: 91.0 - type: euclidean_accuracy value: 99.85445544554456 - type: euclidean_ap value: 96.6661459876145 - type: euclidean_f1 value: 92.58337481333997 - type: euclidean_precision value: 92.17046580773042 - type: euclidean_recall value: 93.0 - type: manhattan_accuracy value: 99.85445544554456 - type: manhattan_ap value: 96.6883549244056 - type: manhattan_f1 value: 92.57598405580468 - type: manhattan_precision value: 92.25422045680239 - type: manhattan_recall value: 92.9 - type: max_accuracy value: 99.86237623762376 - type: max_ap value: 96.72586477206649 - type: max_f1 value: 93.01858362631845 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 66.39930057069995 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 34.96398659903402 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 55.946944700355395 - type: mrr value: 56.97151398438164 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.541657650692905 - type: cos_sim_spearman value: 31.605804192286303 - type: dot_pearson value: 28.26905996736398 - type: dot_spearman value: 27.864801765851187 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.22599999999999998 - type: map_at_10 value: 1.8870000000000002 - type: map_at_100 value: 9.78 - type: map_at_1000 value: 22.514 - type: map_at_3 value: 0.6669999999999999 - type: map_at_5 value: 1.077 - type: mrr_at_1 value: 82.0 - type: mrr_at_10 value: 89.86699999999999 - type: mrr_at_100 value: 89.86699999999999 - type: mrr_at_1000 value: 89.86699999999999 - type: mrr_at_3 value: 89.667 - type: mrr_at_5 value: 89.667 - type: ndcg_at_1 value: 79.0 - type: ndcg_at_10 value: 74.818 - type: ndcg_at_100 value: 53.715999999999994 - type: ndcg_at_1000 value: 47.082 - type: ndcg_at_3 value: 82.134 - type: ndcg_at_5 value: 79.81899999999999 - type: precision_at_1 value: 82.0 - type: precision_at_10 value: 78.0 - type: precision_at_100 value: 54.48 - type: precision_at_1000 value: 20.518 - type: precision_at_3 value: 87.333 - type: precision_at_5 value: 85.2 - type: recall_at_1 value: 0.22599999999999998 - type: recall_at_10 value: 2.072 - type: recall_at_100 value: 13.013 - type: recall_at_1000 value: 43.462 - type: recall_at_3 value: 0.695 - type: recall_at_5 value: 1.139 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.328 - type: map_at_10 value: 9.795 - type: map_at_100 value: 15.801000000000002 - type: map_at_1000 value: 17.23 - type: map_at_3 value: 4.734 - type: map_at_5 value: 6.644 - type: mrr_at_1 value: 30.612000000000002 - type: mrr_at_10 value: 46.902 - type: mrr_at_100 value: 47.495 - type: mrr_at_1000 value: 47.495 - type: mrr_at_3 value: 41.156 - type: mrr_at_5 value: 44.218 - type: ndcg_at_1 value: 28.571 - type: ndcg_at_10 value: 24.806 - type: ndcg_at_100 value: 36.419000000000004 - type: ndcg_at_1000 value: 47.272999999999996 - type: ndcg_at_3 value: 25.666 - type: ndcg_at_5 value: 25.448999999999998 - type: precision_at_1 value: 30.612000000000002 - type: precision_at_10 value: 23.061 - type: precision_at_100 value: 7.714 - type: precision_at_1000 value: 1.484 - type: precision_at_3 value: 26.531 - type: precision_at_5 value: 26.122 - type: recall_at_1 value: 2.328 - type: recall_at_10 value: 16.524 - type: recall_at_100 value: 47.179 - type: recall_at_1000 value: 81.22200000000001 - type: recall_at_3 value: 5.745 - type: recall_at_5 value: 9.339 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.9142 - type: ap value: 14.335574772555415 - type: f1 value: 54.62839595194111 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 59.94340690435768 - type: f1 value: 60.286487936731916 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.26597708987974 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.48882398521786 - type: cos_sim_ap value: 79.04326607602204 - type: cos_sim_f1 value: 71.64566826860633 - type: cos_sim_precision value: 70.55512918905092 - type: cos_sim_recall value: 72.77044854881267 - type: dot_accuracy value: 84.19264469213805 - type: dot_ap value: 67.96360043562528 - type: dot_f1 value: 64.06418393006827 - type: dot_precision value: 58.64941898706424 - type: dot_recall value: 70.58047493403694 - type: euclidean_accuracy value: 87.45902127913214 - type: euclidean_ap value: 78.9742237648272 - type: euclidean_f1 value: 71.5553235908142 - type: euclidean_precision value: 70.77955601445535 - type: euclidean_recall value: 72.34828496042216 - type: manhattan_accuracy value: 87.41729749061214 - type: manhattan_ap value: 78.90073137580596 - type: manhattan_f1 value: 71.3942611553533 - type: manhattan_precision value: 68.52705653967483 - type: manhattan_recall value: 74.51187335092348 - type: max_accuracy value: 87.48882398521786 - type: max_ap value: 79.04326607602204 - type: max_f1 value: 71.64566826860633 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.68125897465751 - type: cos_sim_ap value: 85.6003454431979 - type: cos_sim_f1 value: 77.6957163958641 - type: cos_sim_precision value: 73.0110366307807 - type: cos_sim_recall value: 83.02279026793964 - type: dot_accuracy value: 87.7672992587418 - type: dot_ap value: 82.4971301112899 - type: dot_f1 value: 75.90528233151184 - type: dot_precision value: 72.0370626469368 - type: dot_recall value: 80.21250384970742 - type: euclidean_accuracy value: 88.4503434625684 - type: euclidean_ap value: 84.91949884748384 - type: euclidean_f1 value: 76.92365018444684 - type: euclidean_precision value: 74.53245721712759 - type: euclidean_recall value: 79.47336002463813 - type: manhattan_accuracy value: 88.47556952691427 - type: manhattan_ap value: 84.8963689101517 - type: manhattan_f1 value: 76.85901249256395 - type: manhattan_precision value: 74.31693989071039 - type: manhattan_recall value: 79.58115183246073 - type: max_accuracy value: 88.68125897465751 - type: max_ap value: 85.6003454431979 - type: max_f1 value: 77.6957163958641 --- # sabafallah/bge-large-en-v1.5-Q4_K_M-GGUF This model was converted to GGUF format from [`BAAI/bge-large-en-v1.5`](https://huggingface.co/BAAI/bge-large-en-v1.5) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/BAAI/bge-large-en-v1.5) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo sabafallah/bge-large-en-v1.5-Q4_K_M-GGUF --hf-file bge-large-en-v1.5-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo sabafallah/bge-large-en-v1.5-Q4_K_M-GGUF --hf-file bge-large-en-v1.5-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo sabafallah/bge-large-en-v1.5-Q4_K_M-GGUF --hf-file bge-large-en-v1.5-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo sabafallah/bge-large-en-v1.5-Q4_K_M-GGUF --hf-file bge-large-en-v1.5-q4_k_m.gguf -c 2048 ```
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
espnet/owsm_v3.2
espnet
automatic-speech-recognition
[ "espnet", "audio", "automatic-speech-recognition", "multilingual", "dataset:owsm_v3.1", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
1,724
1,724
15
5
--- datasets: - owsm_v3.1 language: multilingual license: cc-by-4.0 tags: - espnet - audio - automatic-speech-recognition --- ## ESPnet2 S2T model ### `espnet/owsm_v3.2` This model was trained by jctian98 using owsm_v3.1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout a25ff1b51af7fe346f692258c8f9613b89341c6d pip install -e . cd egs2/owsm_v3.1/s2t1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/owsm_v3.2 ``` ## Use this model You can use this model in your projects with the following code: ```python # make sure espnet is installed: pip install espnet from espnet2.bin.s2t_inference import Speech2Text model = Speech2Text.from_pretrained( "espnet/owsm_v3.2" ) speech, rate = soundfile.read("speech.wav") text, *_ = model(speech)[0] ``` ## S2T config <details><summary>expand</summary> ``` config: conf/train_s2t_ebf_conv2d_size768_e9_d9_piecewise_lr5e-4_warmup60k_flashattn.yaml print_config: false log_level: INFO drop_last_iter: false dry_run: false iterator_type: sequence valid_iterator_type: null output_dir: exp/s2t_train_s2t_ebf_conv2d_size768_e9_d9_piecewise_lr5e-4_warmup60k_flashattn_raw_bpe50000 ngpu: 1 seed: 42 num_workers: 4 num_att_plot: 0 dist_backend: nccl dist_init_method: file:///weka/home-pengyf/espnet-owsm-train/egs2/owsm_v3.2_punc_filt/s2t1/exp/s2t_train_s2t_ebf_conv2d_size768_e9_d9_piecewise_lr5e-4_warmup60k_flashattn_raw_bpe50000/.dist_init_52d4a52c-08da-49db-8587-caacc8d64465 dist_world_size: 16 dist_rank: 0 local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: slurm multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 45 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max - - valid - total_count - max keep_nbest_models: 5 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: true log_interval: 500 use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: true wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false use_lora: false save_lora_only: true lora_conf: {} pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: 15000 batch_size: 256 valid_batch_size: 512 batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/s2t_stats_raw_bpe50000/splits10/speech_shape - exp/s2t_stats_raw_bpe50000/splits10/text_prev_shape.bpe - exp/s2t_stats_raw_bpe50000/splits10/text_ctc_shape.bpe - exp/s2t_stats_raw_bpe50000/splits10/text_shape.bpe valid_shape_file: - exp/s2t_stats_raw_bpe50000/valid/speech_shape - exp/s2t_stats_raw_bpe50000/valid/text_prev_shape.bpe - exp/s2t_stats_raw_bpe50000/valid/text_ctc_shape.bpe - exp/s2t_stats_raw_bpe50000/valid/text_shape.bpe batch_type: unsorted valid_batch_type: null fold_length: - 80000 - 150 - 150 - 150 sort_in_batch: descending shuffle_within_batch: false sort_batch: descending multiple_iterator: true chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 chunk_excluded_key_prefixes: [] chunk_default_fs: null train_data_path_and_name_and_type: - - exp/s2t_stats_raw_bpe50000/splits10/wav.scp - speech - kaldi_ark - - exp/s2t_stats_raw_bpe50000/splits10/text.prev - text_prev - text - - exp/s2t_stats_raw_bpe50000/splits10/text.ctc - text_ctc - text - - exp/s2t_stats_raw_bpe50000/splits10/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - kaldi_ark - - dump/raw/dev/text.prev - text_prev - text - - dump/raw/dev/text.ctc - text_ctc - text - - dump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 allow_multi_rates: false valid_max_cache_size: null exclude_weight_decay: false exclude_weight_decay_conf: {} optim: adamw optim_conf: lr: 0.0005 betas: - 0.9 - 0.98 eps: 1.0e-06 weight_decay: 0.0 scheduler: piecewiselinearwarmuplr scheduler_conf: warmup_steps_list: - 0 - 30000 - 60000 warmup_lr_list: - 0.0 - 5.0e-05 - 0.0005 token_list: - <blank> - <unk> - <na> - <nospeech> - <abk> - <afr> - <amh> - <ara> - <asm> - <ast> - <aze> - <bak> - <bas> - <bel> - <ben> - <bos> - <bre> - <bul> - <cat> - <ceb> - <ces> - <chv> - <ckb> - <cmn> - <cnh> - <cym> - <dan> - <deu> - <dgd> - <div> - <ell> - <eng> - <epo> - <est> - <eus> - <fas> - <fil> - <fin> - <fra> - <frr> - <ful> - <gle> - <glg> - <grn> - <guj> - <hat> - <hau> - <heb> - <hin> - <hrv> - <hsb> - <hun> - <hye> - <ibo> - <ina> - <ind> - <isl> - <ita> - <jav> - <jpn> - <kab> - <kam> - <kan> - <kat> - <kaz> - <kea> - <khm> - <kin> - <kir> - <kmr> - <kor> - <lao> - <lav> - <lga> - <lin> - <lit> - <ltz> - <lug> - <luo> - <mal> - <mar> - <mas> - <mdf> - <mhr> - <mkd> - <mlt> - <mon> - <mri> - <mrj> - <mya> - <myv> - <nan> - <nep> - <nld> - <nno> - <nob> - <npi> - <nso> - <nya> - <oci> - <ori> - <orm> - <ory> - <pan> - <pol> - <por> - <pus> - <quy> - <roh> - <ron> - <rus> - <sah> - <sat> - <sin> - <skr> - <slk> - <slv> - <sna> - <snd> - <som> - <sot> - <spa> - <srd> - <srp> - <sun> - <swa> - <swe> - <swh> - <tam> - <tat> - <tel> - <tgk> - <tgl> - <tha> - <tig> - <tir> - <tok> - <tpi> - <tsn> - <tuk> - <tur> - <twi> - <uig> - <ukr> - <umb> - <urd> - <uzb> - <vie> - <vot> - <wol> - <xho> - <yor> - <yue> - <zho> - <zul> - <asr> - <st_ara> - <st_cat> - <st_ces> - <st_cym> - <st_deu> - <st_eng> - <st_est> - <st_fas> - <st_fra> - <st_ind> - <st_ita> - <st_jpn> - <st_lav> - <st_mon> - <st_nld> - <st_por> - <st_ron> - <st_rus> - <st_slv> - <st_spa> - <st_swe> - <st_tam> - <st_tur> - <st_vie> - <st_zho> - <notimestamps> - <0.00> - <0.02> - <0.04> - <0.06> - <0.08> - <0.10> - <0.12> - <0.14> - <0.16> - <0.18> - <0.20> - <0.22> - <0.24> - <0.26> - <0.28> - <0.30> - <0.32> - <0.34> - <0.36> - <0.38> - <0.40> - <0.42> - <0.44> - <0.46> - <0.48> - <0.50> - <0.52> - <0.54> - <0.56> - <0.58> - <0.60> - <0.62> - <0.64> - <0.66> - <0.68> - <0.70> - <0.72> - <0.74> - <0.76> - <0.78> - <0.80> - <0.82> - <0.84> - <0.86> - <0.88> - <0.90> - <0.92> - <0.94> - <0.96> - <0.98> - <1.00> - <1.02> - <1.04> - <1.06> - <1.08> - <1.10> - <1.12> - <1.14> - <1.16> - <1.18> - <1.20> - <1.22> - <1.24> - <1.26> - <1.28> - <1.30> - <1.32> - <1.34> - <1.36> - <1.38> - <1.40> - <1.42> - <1.44> - <1.46> - <1.48> - <1.50> - <1.52> - <1.54> - <1.56> - <1.58> - <1.60> - <1.62> - <1.64> - <1.66> - <1.68> - <1.70> - <1.72> - <1.74> - <1.76> - <1.78> - <1.80> - <1.82> - <1.84> - <1.86> - <1.88> - <1.90> - <1.92> - <1.94> - <1.96> - <1.98> - <2.00> - <2.02> - <2.04> - <2.06> - <2.08> - <2.10> - <2.12> - <2.14> - <2.16> - <2.18> - <2.20> - <2.22> - <2.24> - <2.26> - <2.28> - <2.30> - <2.32> - <2.34> - <2.36> - <2.38> - <2.40> - <2.42> - <2.44> - <2.46> - <2.48> - <2.50> - <2.52> - <2.54> - <2.56> - <2.58> - <2.60> - <2.62> - <2.64> - <2.66> - <2.68> - <2.70> - <2.72> - <2.74> - <2.76> - <2.78> - <2.80> - <2.82> - <2.84> - <2.86> - <2.88> - <2.90> - <2.92> - <2.94> - <2.96> - <2.98> - <3.00> - <3.02> - <3.04> - <3.06> - <3.08> - <3.10> - <3.12> - <3.14> - <3.16> - <3.18> - <3.20> - <3.22> - <3.24> - <3.26> - <3.28> - <3.30> - <3.32> - <3.34> - <3.36> - <3.38> - <3.40> - <3.42> - <3.44> - <3.46> - <3.48> - <3.50> - <3.52> - <3.54> - <3.56> - <3.58> - <3.60> - <3.62> - <3.64> - <3.66> - <3.68> - <3.70> - <3.72> - <3.74> - <3.76> - <3.78> - <3.80> - <3.82> - <3.84> - <3.86> - <3.88> - <3.90> - <3.92> - <3.94> - <3.96> - <3.98> - <4.00> - <4.02> - <4.04> - <4.06> - <4.08> - <4.10> - <4.12> - <4.14> - <4.16> - <4.18> - <4.20> - <4.22> - <4.24> - <4.26> - <4.28> - <4.30> - <4.32> - <4.34> - <4.36> - <4.38> - <4.40> - <4.42> - <4.44> - <4.46> - <4.48> - <4.50> - <4.52> - <4.54> - <4.56> - <4.58> - <4.60> - <4.62> - <4.64> - <4.66> - <4.68> - <4.70> - <4.72> - <4.74> - <4.76> - <4.78> - <4.80> - <4.82> - <4.84> - <4.86> - <4.88> - <4.90> - <4.92> - <4.94> - <4.96> - <4.98> - <5.00> - <5.02> - <5.04> - <5.06> - <5.08> - <5.10> - <5.12> - <5.14> - <5.16> - <5.18> - <5.20> - <5.22> - <5.24> - <5.26> - <5.28> - <5.30> - <5.32> - <5.34> - <5.36> - <5.38> - <5.40> - <5.42> - <5.44> - <5.46> - <5.48> - <5.50> - <5.52> - <5.54> - <5.56> - <5.58> - <5.60> - <5.62> - <5.64> - <5.66> - <5.68> - <5.70> - <5.72> - <5.74> - <5.76> - <5.78> - <5.80> - <5.82> - <5.84> - <5.86> - <5.88> - <5.90> - <5.92> - <5.94> - <5.96> - <5.98> - <6.00> - <6.02> - <6.04> - <6.06> - <6.08> - <6.10> - <6.12> - <6.14> - <6.16> - <6.18> - <6.20> - <6.22> - <6.24> - <6.26> - <6.28> - <6.30> - <6.32> - <6.34> - <6.36> - <6.38> - <6.40> - <6.42> - <6.44> - <6.46> - <6.48> - <6.50> - <6.52> - <6.54> - <6.56> - <6.58> - <6.60> - <6.62> - <6.64> - <6.66> - <6.68> - <6.70> - <6.72> - <6.74> - <6.76> - <6.78> - <6.80> - <6.82> - <6.84> - <6.86> - <6.88> - <6.90> - <6.92> - <6.94> - <6.96> - <6.98> - <7.00> - <7.02> - <7.04> - <7.06> - <7.08> - <7.10> - <7.12> - <7.14> - <7.16> - <7.18> - <7.20> - <7.22> - <7.24> - <7.26> - <7.28> - <7.30> - <7.32> - <7.34> - <7.36> - <7.38> - <7.40> - <7.42> - <7.44> - <7.46> - <7.48> - <7.50> - <7.52> - <7.54> - <7.56> - <7.58> - <7.60> - <7.62> - <7.64> - <7.66> - <7.68> - <7.70> - <7.72> - <7.74> - <7.76> - <7.78> - <7.80> - <7.82> - <7.84> - <7.86> - <7.88> - <7.90> - <7.92> - <7.94> - <7.96> - <7.98> - <8.00> - <8.02> - <8.04> - <8.06> - <8.08> - <8.10> - <8.12> - <8.14> - <8.16> - <8.18> - <8.20> - <8.22> - <8.24> - <8.26> - <8.28> - <8.30> - <8.32> - <8.34> - <8.36> - <8.38> - <8.40> - <8.42> - <8.44> - <8.46> - <8.48> - <8.50> - <8.52> - <8.54> - <8.56> - <8.58> - <8.60> - <8.62> - <8.64> - <8.66> - <8.68> - <8.70> - <8.72> - <8.74> - <8.76> - <8.78> - <8.80> - <8.82> - <8.84> - <8.86> - <8.88> - <8.90> - <8.92> - <8.94> - <8.96> - <8.98> - <9.00> - <9.02> - <9.04> - <9.06> - <9.08> - <9.10> - <9.12> - <9.14> - <9.16> - <9.18> - <9.20> - <9.22> - <9.24> - <9.26> - <9.28> - <9.30> - <9.32> - <9.34> - <9.36> - <9.38> - <9.40> - <9.42> - <9.44> - <9.46> - <9.48> - <9.50> - <9.52> - <9.54> - <9.56> - <9.58> - <9.60> - <9.62> - <9.64> - <9.66> - <9.68> - <9.70> - <9.72> - <9.74> - <9.76> - <9.78> - <9.80> - <9.82> - <9.84> - <9.86> - <9.88> - <9.90> - <9.92> - <9.94> - <9.96> - <9.98> - <10.00> - <10.02> - <10.04> - <10.06> - <10.08> - <10.10> - <10.12> - <10.14> - <10.16> - <10.18> - <10.20> - <10.22> - <10.24> - <10.26> - <10.28> - <10.30> - <10.32> - <10.34> - <10.36> - <10.38> - <10.40> - <10.42> - <10.44> - <10.46> - <10.48> - <10.50> - <10.52> - <10.54> - <10.56> - <10.58> - <10.60> - <10.62> - <10.64> - <10.66> - <10.68> - <10.70> - <10.72> - <10.74> - <10.76> - <10.78> - <10.80> - <10.82> - <10.84> - <10.86> - <10.88> - <10.90> - <10.92> - <10.94> - <10.96> - <10.98> - <11.00> - <11.02> - <11.04> - <11.06> - <11.08> - <11.10> - <11.12> - <11.14> - <11.16> - <11.18> - <11.20> - <11.22> - <11.24> - <11.26> - <11.28> - <11.30> - <11.32> - <11.34> - <11.36> - <11.38> - <11.40> - <11.42> - <11.44> - <11.46> - <11.48> - <11.50> - <11.52> - <11.54> - <11.56> - <11.58> - <11.60> - <11.62> - <11.64> - <11.66> - <11.68> - <11.70> - <11.72> - <11.74> - <11.76> - <11.78> - <11.80> - <11.82> - <11.84> - <11.86> - <11.88> - <11.90> - <11.92> - <11.94> - <11.96> - <11.98> - <12.00> - <12.02> - <12.04> - <12.06> - <12.08> - <12.10> - <12.12> - <12.14> - <12.16> - <12.18> - <12.20> - <12.22> - <12.24> - <12.26> - <12.28> - <12.30> - <12.32> - <12.34> - <12.36> - <12.38> - <12.40> - <12.42> - <12.44> - <12.46> - <12.48> - <12.50> - <12.52> - <12.54> - <12.56> - <12.58> - <12.60> - <12.62> - <12.64> - <12.66> - <12.68> - <12.70> - <12.72> - <12.74> - <12.76> - <12.78> - <12.80> - <12.82> - <12.84> - <12.86> - <12.88> - <12.90> - <12.92> - <12.94> - <12.96> - <12.98> - <13.00> - <13.02> - <13.04> - <13.06> - <13.08> - <13.10> - <13.12> - <13.14> - <13.16> - <13.18> - <13.20> - <13.22> - <13.24> - <13.26> - <13.28> - <13.30> - <13.32> - <13.34> - <13.36> - <13.38> - <13.40> - <13.42> - <13.44> - <13.46> - <13.48> - <13.50> - <13.52> - <13.54> - <13.56> - <13.58> - <13.60> - <13.62> - <13.64> - <13.66> - <13.68> - <13.70> - <13.72> - <13.74> - <13.76> - <13.78> - <13.80> - <13.82> - <13.84> - <13.86> - <13.88> - <13.90> - <13.92> - <13.94> - <13.96> - <13.98> - <14.00> - <14.02> - <14.04> - <14.06> - <14.08> - <14.10> - <14.12> - <14.14> - <14.16> - <14.18> - <14.20> - <14.22> - <14.24> - <14.26> - <14.28> - <14.30> - <14.32> - <14.34> - <14.36> - <14.38> - <14.40> - <14.42> - <14.44> - <14.46> - <14.48> - <14.50> - <14.52> - <14.54> - <14.56> - <14.58> - <14.60> - <14.62> - <14.64> - <14.66> - <14.68> - <14.70> - <14.72> - <14.74> - <14.76> - <14.78> - <14.80> - <14.82> - <14.84> - <14.86> - <14.88> - <14.90> - <14.92> - <14.94> - <14.96> - <14.98> - <15.00> - <15.02> - <15.04> - <15.06> - <15.08> - <15.10> - <15.12> - <15.14> - <15.16> - <15.18> - <15.20> - <15.22> - <15.24> - <15.26> - <15.28> - <15.30> - <15.32> - <15.34> - <15.36> - <15.38> - <15.40> - <15.42> - <15.44> - <15.46> - <15.48> - <15.50> - <15.52> - <15.54> - <15.56> - <15.58> - <15.60> - <15.62> - <15.64> - <15.66> - <15.68> - <15.70> - <15.72> - <15.74> - <15.76> - <15.78> - <15.80> - <15.82> - <15.84> - <15.86> - <15.88> - <15.90> - <15.92> - <15.94> - <15.96> - <15.98> - <16.00> - <16.02> - <16.04> - <16.06> - <16.08> - <16.10> - <16.12> - <16.14> - <16.16> - <16.18> - <16.20> - <16.22> - <16.24> - <16.26> - <16.28> - <16.30> - <16.32> - <16.34> - <16.36> - <16.38> - <16.40> - <16.42> - <16.44> - <16.46> - <16.48> - <16.50> - <16.52> - <16.54> - <16.56> - <16.58> - <16.60> - <16.62> - <16.64> - <16.66> - <16.68> - <16.70> - <16.72> - <16.74> - <16.76> - <16.78> - <16.80> - <16.82> - <16.84> - <16.86> - <16.88> - <16.90> - <16.92> - <16.94> - <16.96> - <16.98> - <17.00> - <17.02> - <17.04> - <17.06> - <17.08> - <17.10> - <17.12> - <17.14> - <17.16> - <17.18> - <17.20> - <17.22> - <17.24> - <17.26> - <17.28> - <17.30> - <17.32> - <17.34> - <17.36> - <17.38> - <17.40> - <17.42> - <17.44> - <17.46> - <17.48> - <17.50> - <17.52> - <17.54> - <17.56> - <17.58> - <17.60> - <17.62> - <17.64> - <17.66> - <17.68> - <17.70> - <17.72> - <17.74> - <17.76> - <17.78> - <17.80> - <17.82> - <17.84> - <17.86> - <17.88> - <17.90> - <17.92> - <17.94> - <17.96> - <17.98> - <18.00> - <18.02> - <18.04> - <18.06> - <18.08> - <18.10> - <18.12> - <18.14> - <18.16> - <18.18> - <18.20> - <18.22> - <18.24> - <18.26> - <18.28> - <18.30> - <18.32> - <18.34> - <18.36> - <18.38> - <18.40> - <18.42> - <18.44> - <18.46> - <18.48> - <18.50> - <18.52> - <18.54> - <18.56> - <18.58> - <18.60> - <18.62> - <18.64> - <18.66> - <18.68> - <18.70> - <18.72> - <18.74> - <18.76> - <18.78> - <18.80> - <18.82> - <18.84> - <18.86> - <18.88> - <18.90> - <18.92> - <18.94> - <18.96> - <18.98> - <19.00> - <19.02> - <19.04> - <19.06> - <19.08> - <19.10> - <19.12> - <19.14> - <19.16> - <19.18> - <19.20> - <19.22> - <19.24> - <19.26> - <19.28> - <19.30> - <19.32> - <19.34> - <19.36> - <19.38> - <19.40> - <19.42> - <19.44> - <19.46> - <19.48> - <19.50> - <19.52> - <19.54> - <19.56> - <19.58> - <19.60> - <19.62> - <19.64> - <19.66> - <19.68> - <19.70> - <19.72> - <19.74> - <19.76> - <19.78> - <19.80> - <19.82> - <19.84> - <19.86> - <19.88> - <19.90> - <19.92> - <19.94> - <19.96> - <19.98> - <20.00> - <20.02> - <20.04> - <20.06> - <20.08> - <20.10> - <20.12> - <20.14> - <20.16> - <20.18> - <20.20> - <20.22> - <20.24> - <20.26> - <20.28> - <20.30> - <20.32> - <20.34> - <20.36> - <20.38> - <20.40> - <20.42> - <20.44> - <20.46> - <20.48> - <20.50> - <20.52> - <20.54> - <20.56> - <20.58> - <20.60> - <20.62> - <20.64> - <20.66> - <20.68> - <20.70> - <20.72> - <20.74> - <20.76> - <20.78> - <20.80> - <20.82> - <20.84> - <20.86> - <20.88> - <20.90> - <20.92> - <20.94> - <20.96> - <20.98> - <21.00> - <21.02> - <21.04> - <21.06> - <21.08> - <21.10> - <21.12> - <21.14> - <21.16> - <21.18> - <21.20> - <21.22> - <21.24> - <21.26> - <21.28> - <21.30> - <21.32> - <21.34> - <21.36> - <21.38> - <21.40> - <21.42> - <21.44> - <21.46> - <21.48> - <21.50> - <21.52> - <21.54> - <21.56> - <21.58> - <21.60> - <21.62> - <21.64> - <21.66> - <21.68> - <21.70> - <21.72> - <21.74> - <21.76> - <21.78> - <21.80> - <21.82> - <21.84> - <21.86> - <21.88> - <21.90> - <21.92> - <21.94> - <21.96> - <21.98> - <22.00> - <22.02> - <22.04> - <22.06> - <22.08> - <22.10> - <22.12> - <22.14> - <22.16> - <22.18> - <22.20> - <22.22> - <22.24> - <22.26> - <22.28> - <22.30> - <22.32> - <22.34> - <22.36> - <22.38> - <22.40> - <22.42> - <22.44> - <22.46> - <22.48> - <22.50> - <22.52> - <22.54> - <22.56> - <22.58> - <22.60> - <22.62> - <22.64> - <22.66> - <22.68> - <22.70> - <22.72> - <22.74> - <22.76> - <22.78> - <22.80> - <22.82> - <22.84> - <22.86> - <22.88> - <22.90> - <22.92> - <22.94> - <22.96> - <22.98> - <23.00> - <23.02> - <23.04> - <23.06> - <23.08> - <23.10> - <23.12> - <23.14> - <23.16> - <23.18> - <23.20> - <23.22> - <23.24> - <23.26> - <23.28> - <23.30> - <23.32> - <23.34> - <23.36> - <23.38> - <23.40> - <23.42> - <23.44> - <23.46> - <23.48> - <23.50> - <23.52> - <23.54> - <23.56> - <23.58> - <23.60> - <23.62> - <23.64> - <23.66> - <23.68> - <23.70> - <23.72> - <23.74> - <23.76> - <23.78> - <23.80> - <23.82> - <23.84> - <23.86> - <23.88> - <23.90> - <23.92> - <23.94> - <23.96> - <23.98> - <24.00> - <24.02> - <24.04> - <24.06> - <24.08> - <24.10> - <24.12> - <24.14> - <24.16> - <24.18> - <24.20> - <24.22> - <24.24> - <24.26> - <24.28> - <24.30> - <24.32> - <24.34> - <24.36> - <24.38> - <24.40> - <24.42> - <24.44> - <24.46> - <24.48> - <24.50> - <24.52> - <24.54> - <24.56> - <24.58> - <24.60> - <24.62> - <24.64> - <24.66> - <24.68> - <24.70> - <24.72> - <24.74> - <24.76> - <24.78> - <24.80> - <24.82> - <24.84> - <24.86> - <24.88> - <24.90> - <24.92> - <24.94> - <24.96> - <24.98> - <25.00> - <25.02> - <25.04> - <25.06> - <25.08> - <25.10> - <25.12> - <25.14> - <25.16> - <25.18> - <25.20> - <25.22> - <25.24> - <25.26> - <25.28> - <25.30> - <25.32> - <25.34> - <25.36> - <25.38> - <25.40> - <25.42> - <25.44> - <25.46> - <25.48> - <25.50> - <25.52> - <25.54> - <25.56> - <25.58> - <25.60> - <25.62> - <25.64> - <25.66> - <25.68> - <25.70> - <25.72> - <25.74> - <25.76> - <25.78> - <25.80> - <25.82> - <25.84> - <25.86> - <25.88> - <25.90> - <25.92> - <25.94> - <25.96> - <25.98> - <26.00> - <26.02> - <26.04> - <26.06> - <26.08> - <26.10> - <26.12> - <26.14> - <26.16> - <26.18> - <26.20> - <26.22> - <26.24> - <26.26> - <26.28> - <26.30> - <26.32> - <26.34> - <26.36> - <26.38> - <26.40> - <26.42> - <26.44> - <26.46> - <26.48> - <26.50> - <26.52> - <26.54> - <26.56> - <26.58> - <26.60> - <26.62> - <26.64> - <26.66> - <26.68> - <26.70> - <26.72> - <26.74> - <26.76> - <26.78> - <26.80> - <26.82> - <26.84> - <26.86> - <26.88> - <26.90> - <26.92> - <26.94> - <26.96> - <26.98> - <27.00> - <27.02> - <27.04> - <27.06> - <27.08> - <27.10> - <27.12> - <27.14> - <27.16> - <27.18> - <27.20> - <27.22> - <27.24> - <27.26> - <27.28> - <27.30> - <27.32> - <27.34> - <27.36> - <27.38> - <27.40> - <27.42> - <27.44> - <27.46> - <27.48> - <27.50> - <27.52> - <27.54> - <27.56> - <27.58> - <27.60> - <27.62> - <27.64> - <27.66> - <27.68> - <27.70> - <27.72> - <27.74> - <27.76> - <27.78> - <27.80> - <27.82> - <27.84> - <27.86> - <27.88> - <27.90> - <27.92> - <27.94> - <27.96> - <27.98> - <28.00> - <28.02> - <28.04> - <28.06> - <28.08> - <28.10> - <28.12> - <28.14> - <28.16> - <28.18> - <28.20> - <28.22> - <28.24> - <28.26> - <28.28> - <28.30> - <28.32> - <28.34> - <28.36> - <28.38> - <28.40> - <28.42> - <28.44> - <28.46> - <28.48> - <28.50> - <28.52> - <28.54> - <28.56> - <28.58> - <28.60> - <28.62> - <28.64> - <28.66> - <28.68> - <28.70> - <28.72> - <28.74> - <28.76> - <28.78> - <28.80> - <28.82> - <28.84> - <28.86> - <28.88> - <28.90> - <28.92> - <28.94> - <28.96> - <28.98> - <29.00> - <29.02> - <29.04> - <29.06> - <29.08> - <29.10> - <29.12> - <29.14> - <29.16> - <29.18> - <29.20> - <29.22> - <29.24> - <29.26> - <29.28> - <29.30> - <29.32> - <29.34> - <29.36> - <29.38> - <29.40> - <29.42> - <29.44> - <29.46> - <29.48> - <29.50> - <29.52> - <29.54> - <29.56> - <29.58> - <29.60> - <29.62> - <29.64> - <29.66> - <29.68> - <29.70> - <29.72> - <29.74> - <29.76> - <29.78> - <29.80> - <29.82> - <29.84> - <29.86> - <29.88> - <29.90> - <29.92> - <29.94> - <29.96> - <29.98> - <30.00> - ▁ - ',' - . - ▁the - s - ▁of - 。 - ▁and - ▁to - ▁a - ▁in - '''' - ▁that - ▁I - ▁was - e - t - ▁it - '-' - ▁is - ▁you - en - ▁he - ▁de - ▁for - 的 - '?' - ▁with - ▁be - の - d - ▁his - ▁as - n - 、 - ▁on - ▁we - ▁had - a - ▁not - o - ▁at - ▁have - ▁die - i - ▁her - er - ▁so - ▁The - ▁und - ed - ▁this - m - ▁but - re - ing - が - ▁la - ▁by - ▁they - を - ▁an - ▁are - ▁all - ▁from - ▁der - ▁me - ▁which - は - es - に - ▁him - ▁she - ▁were - ▁my - y - ▁or - ▁one - ▁no - r - ▁do - ▁And - ▁our - ly - ▁" - ▁He - で - u - ▁would - ▁there - ▁zu - ▁their - 了 - ▁will - ▁said - ▁what - ▁them - te - ▁been - ▁out - ▁like - ▁know - ▁It - ▁who - ▁can - ▁if - ▁when - '!' - ▁up - ▁i - ▁ist - ▁about - ▁que - ▁more - ve - ▁un - ▁das - ▁es - ▁man - 在 - ▁Sie - と - а - ▁en - ▁ich - ar - ▁very - ▁some - ▁into - ▁в - ▁не - ▁your - ▁has - ▁sie - ▁time - ▁A - ▁could - 是 - ▁и - 和 - de - 我 - ▁den - in - ▁But - al - ▁ein - ▁von - k - ▁We - も - ; - ▁now - ▁nicht - ▁just - ▁over - ▁think - ▁then - ▁other - ▁than - ▁So - ▁see - 人 - ▁dass - 你 - ▁des - ll - ▁auf - ▁little - le - l - е - ▁l - ▁um - ▁any - ▁also - ▁mit - c - ▁eine - ▁us - ▁war - an - ▁на - ▁only - ▁did - na - ▁these - ▁two - ne - z - ▁Und - ▁In - ▁se - ▁people - ▁wir - ▁el - ▁sich - ta - ▁go - ▁well - to - ▁na - is - ▁с - ▁di - 上 - ten - から - ▁good - ▁y - ▁don - 不 - ▁что - у - se - ▁first - 他 - ▁much - ▁how - ':' - ▁where - g - ▁She - ▁down - ▁made - ▁should - ▁le - ▁You - 有 - st - ▁way - ▁upon - и - です - ▁before - ▁come - 中 - ▁get - ▁am - ▁als - ▁back - 'on' - ▁those - '"' - la - h - as - ra - ▁its - 大 - ▁great - ▁du - ▁La - ▁going - ▁say - da - ▁по - な - w - ▁für - ▁make - ▁because - ▁after - 一 - ▁ver - ▁such - ▁men - ▁right - ▁wie - é - ▁here - ▁through - ▁may - est - ▁long - ▁They - ▁even - м - ch - 啊 - ▁er - ge - 我们 - し - '1' - ▁must - ▁dem - ▁я - ▁never - ▁most - ▁im - ▁came - p - j - ▁per - ▁Ich - я - us - il - ▁day - 'no' - ▁mu - me - 他们 - ▁haben - ." - ▁life - ro - den - ▁This - ▁own - ▁et - b - ir - ce - f - т - ▁really - ▁je - ▁old - ▁many - ▁con - ▁ku - ▁again - ▁things - 地 - '2' - ni - os - 来 - ▁being - ma - ▁Mr - や - 说 - ▁les - ▁у - man - ▁Es - 就 - ▁There - ▁за - ▁might - it - ▁last - ▁take - ی - ▁too - ▁still - ga - か - х - る - be - ▁work - ▁new - we - ы - un - va - ▁sind - م - ▁это - ▁went - ’ - at - I - ▁hat - д - ね - ▁à - って - ▁aus - ▁himself - ▁every - 一个 - 这个 - do - ▁same - ▁thought - li - ▁want - ▁years - た - ci - ▁al - ka - ▁van - ▁ne - ▁yeah - 会 - ti - ه - ri - い - th - ▁del - ▁something - ▁Die - ▁ge - した - or - ▁look - ▁thing - する - ▁year - ▁world - ▁away - 这 - ▁part - ▁got - ▁da - et - 呢 - ho - ment - ▁three - ▁hand - ▁вы - ▁S - ▁That - ▁una - ▁а - н - ba - 子 - ▁place - ▁то - ▁without - 她 - ers - ▁kind - ng - ▁ya - je - ▁über - 小 - 到 - ▁under - ter - ja - л - с - й - el - ▁found - mo - 它 - ж - ur - して - 着 - ▁put - ▁while - ness - ▁Er - ▁off - о - お - ▁No - ▁werden - 出 - '3' - men - 对 - ung - ko - sa - em - 被 - ▁و - ка - lo - able - ▁het - ▁我 - ▁shall - ▁wenn - ▁What - ▁eyes - ▁einen - ▁auch - ▁another - si - 都 - ▁far - 下 - ▁saw - ▁een - go - この - 去 - ▁face - wa - ▁E - ▁always - ke - am - ▁give - 好 - ▁As - 日 - 手 - на - て - ▁yet - ▁oder - ▁tell - ity - 点 - 要 - ▁ever - he - zi - ▁once - ли - ▁Das - ▁ha - ▁God - では - ▁nothing - 年 - ry - ▁though - ation - ▁home - ▁If - ▁head - ▁diese - ▁aber - но - mi - ▁few - 里 - ▁El - ▁va - ▁love - der - ▁vor - 从 - ▁find - 时 - ha - za - ent - ▁young - ) - ▁business - ze - ▁house - um - ▁sein - ▁let - ton - ▁left - ▁better - ▁mean - ▁su - ▁night - 」 - ▁took - ▁si - ▁Then - di - ▁— - ca - り - د - ت - к - ▁against - ▁mind - ▁ko - ▁moment - ш - ▁dat - ver - ya - ▁each - x - 吗 - ▁done - so - ▁both - ▁من - ant - ▁Be - ▁course - ▁می - ted - 没有 - ▁To - ▁улица - ▁between - という - ▁wird - ▁дробь - 用 - ▁end - ч - ▁half - ▁looked - ▁door - ла - ▁te - ▁need - ▁nach - sch - co - vo - 家 - ▁nur - ن - ▁Re - ▁uns - ▁( - v - 也 - 就是 - ▁Aber - ▁Le - ▁v - ▁De - ▁как - ▁met - 「 - ▁por - ش - ▁einem - lu - ▁lot - ، - ▁ma - 想 - ▁heart - 为 - 水 - ▁seen - ▁einer - ▁When - ス - '4' - ler - ▁called - ▁در - ▁For - ▁los - ら - ▁told - ie - · - ▁pro - ю - ▁next - ▁heard - ul - ok - ▁An - ▁side - ▁father - ▁knew - ▁O - ▁你 - く - ▁whole - gen - ▁believe - ki - には - ▁seemed - 得 - ▁asked - 이 - ▁par - ▁son - ました - ▁о - ▁L - ▁quite - ▁mich - ia - son - ▁به - ▁para - ▁water - ・ - ye - 本 - tu - ▁ni - 回 - та - 多 - ▁Se - ▁set - ▁к - 者 - ▁können - ▁enough - ▁habe - その - ste - ▁mir - ▁op - ▁name - ▁best - 前 - ▁ال - ▁po - ▁having - 行 - ▁Do - ine - ▁sa - ▁around - ▁il - ▁noch - ▁Mrs - ▁different - A - '".' - 高 - ة - 过 - ます - ▁does - ▁woman - 分 - ▁uh - ▁com - ▁why - ▁room - ▁он - 当 - ▁wurde - ▁C - ▁call - ting - 这些 - г - 心 - ▁point - ▁His - ▁country - 的人 - ▁ba - id - ▁second - ▁che - 事 - ▁mi - ▁да - po - du - ▁Now - でも - ▁bir - ▁light - ▁Wir - з - ken - ▁ik - ですね - ▁pas - 性 - ▁anything - ▁hundred - ▁hatte - ▁En - ▁ja - by - ▁quarter - ▁kann - S - ▁days - ▁з - ad - ▁про - さん - ion - き - 生 - ck - ▁almost - ل - ▁мы - '5' - ▁four - 与 - ist - ▁при - ▁use - ▁high - ▁care - ak - ▁bei - ó - ▁yn - ▁ob - ي - 而 - 三 - vi - ▁sure - ▁mother - à - 山 - au - ▁wa - ▁sort - im - ▁sehr - ▁help - ▁small - в - ку - ▁fact - 所 - 力 - ▁bu - ▁soon - ▁от - ▁qui - ▁together - ▁five - ▁At - ▁Der - ▁actually - ا - 做 - 가 - ▁money - ▁pa - ▁until - ▁gave - ku - 打 - 什么 - ▁durch - land - ▁раз - ut - ▁full - ▁morning - 那 - ven - que - pe - ов - ▁used - ▁began - ▁est - ling - га - ▁hands - ▁і - ع - ▁question - 可以 - р - ▁Il - ちょっと - п - ▁Li - less - ▁doing - wi - ▁في - bo - ▁felt - ▁turned - 自己 - 很 - 把 - 能 - zu - ▁так - ▁pe - よ - lar - ▁را - ر - ▁words - hi - ▁pour - bi - ol - ▁bit - 最 - ▁talk - ▁hard - ▁since - ik - pa - ▁dans - ▁open - and - gi - ▁My - ▁didn - ▁real - ▁M - ки - しました - ом - led - 后 - ',"' - ▁که - ▁hier - nt - ▁twenty - 国 - ▁rest - 新 - ▁但是 - ▁ب - ▁number - 看 - kan - ▁All - ▁Well - す - 月 - ▁whom - ▁looking - ▁ah - ig - ▁white - ▁lo - ▁feel - ▁large - ▁power - ▁keep - ▁oh - fe - ▁big - ▁certain - ▁myself - 的时候 - ▁present - ▁New - ate - さ - 度 - ▁matter - ▁alle - ic - mu - ▁dieser - ▁pre - ない - ▁qu - ▁ka - ▁children - 吧 - the - ▁Ver - да - ▁une - oj - だ - 走 - ▁brought - ▁among - ▁till - ek - ▁voice - ju - み - ▁Oh - '6' - ▁па - ب - ▁taken - ▁order - ▁他 - ▁rather - و - ▁вот - ▁given - 更 - jo - ▁D - ▁все - ▁но - ▁gibt - ▁death - ▁girl - ▁case - ▁Miss - ▁Na - ▁person - ▁dis - ▁tu - ▁true - ▁word - 个 - ▁during - ▁nor - ▁strong - 斯 - ▁estas - 特 - ▁live - とか - dy - ▁important - 将 - ся - ル - ▁dann - ▁boy - ▁human - ▁poor - ▁keur - les - ▁On - ▁Lord - ▁za - ▁How - ▁already - ▁state - った - ▁mehr - 天 - ▁K - 名 - ▁ab - ▁hear - ▁أ - ▁-- - 道 - 内 - ty - つ - ▁B - ▁hij - ▁themselves - ▁round - ц - ▁air - ▁из - ▁waren - ive - ji - ▁friend - ▁forward - 现在 - ▁company - ▁Ma - 金 - ▁coming - 外 - ть - ▁Al - ▁stood - '7' - ▁within - ь - ▁この - ai - ▁thousand - ▁sur - ▁Her - ▁zum - ▁zijn - ▁change - ▁niet - ▁show - ம் - age - ▁herself - ▁sent - ▁often - ▁این - 可能 - ▁along - ▁One - 我的 - ▁از - ▁however - го - ▁T - ▁immer - ▁c - ben - sta - yo - í - 面 - ▁across - ▁است - O - う - ts - ada - ▁N - 并 - ▁co - zo - いた - T - ▁near - che - 老 - 再 - 海 - ▁cannot - ▁bi - tion - ▁idea - ▁others - ▁nu - ▁ser - ▁ta - ▁non - 向 - ▁voor - 尔 - ▁making - fa - ger - 给 - ▁sea - ▁Ne - として - 长 - 或 - ▁understand - nie - 比 - 成 - ので - ▁family - ▁six - ▁nature - ▁fire - á - ▁ki - ▁black - 日本 - ва - 已经 - ▁public - ▁whether - ▁Menschen - ▁etwas - 非常 - 何 - ▁least - 你的 - 可 - ▁amb - ها - ▁li - ▁gone - ▁Ge - š - 因为 - ▁ihre - ем - 以 - いい - 开始 - ▁wirklich - ▁reason - ▁ihr - ▁hope - ▁Co - 西 - ▁feet - P - ▁continue - ▁до - ить - ▁ce - ▁Un - 发 - io - ▁sense - ▁means - ▁sat - ▁wife - ны - 方 - イ - ▁everything - 像 - ▁body - ▁women - tes - ado - ▁child - ▁read - 您 - ▁less - ard - ▁w - ▁Ein - ▁short - wo - 还 - ▁times - ▁所以 - ▁behind - ▁dear - ç - ▁om - ▁growth - 不是 - ▁vi - ▁market - B - ▁possible - ▁St - ▁そして - ў - ▁turn - ля - б - ?" - ▁seine - ▁school - mer - ▁general - ring - 头 - ▁ihn - ▁这 - نا - 法 - ま - '8' - ă - E - 는 - ▁city - aj - ▁known - per - 先 - ren - ет - ny - 体 - ▁car - ▁become - ana - ▁close - ▁perhaps - ты - è - ▁pretty - 化 - ▁whose - ق - ▁Y - ous - ية - ▁Sa - ▁م - cu - わ - ▁tun - ▁под - ▁dan - ▁bin - 他的 - не - ▁au - 又 - ieren - 二 - ▁friends - ▁kaj - ▁las - ▁several - ▁able - ner - ▁U - lan - ▁won - ра - ク - ▁leave - ات - ▁Mo - س - ك - C - ▁alone - era - ле - bu - 지 - ▁Um - red - gel - ▁passed - 位 - ой - ▁itself - ▁Ha - ber - ▁Sir - ▁later - con - ▁Je - 花 - 无 - ated - 口 - ▁Ra - もう - ▁그 - ▁Ta - 入 - ▁és - ▁wo - ile - ▁says - ▁Was - ▁past - ▁، - zen - だった - ▁Yes - リ - ▁în - ▁line - ▁ask - ▁else - ▁either - ▁John - ア - 数 - 学 - ز - ▁became - ite - ish - ning - mos - ▁story - ▁Me - 于 - 这种 - ▁Wenn - val - 고 - bar - ▁Zeit - ー - ло - ▁meine - ▁speak - fer - up - ん - are - ▁unter - など - ▁bo - ال - ان - ari - 这是 - ▁bad - ▁run - ▁z - ▁above - ▁dé - ster - iz - ▁Ja - 物 - aba - ▁со - 开 - ▁yes - ▁probably - 重 - им - ä - і - '9' - gu - します - 今 - ▁là - 呀 - ▁один - ▁free - ▁wanted - ▁form - ▁mal - gan - '10' - ang - ▁future - ▁como - ▁Dr - ▁Pa - ▁Ah - 但 - ran - op - ▁sagte - 让 - 知道 - ▁Per - ▁indeed - ▁These - ▁hour - います - ▁start - tter - ▁wissen - qui - 部 - のは - ▁viel - ▁held - ance - ▁bring - 市 - port - ▁plus - 田 - ▁ў - ▁land - ト - 太 - ▁P - 所以 - ная - ный - ▁сто - ful - ма - des - ▁Da - ▁ihm - ait - ам - ▁fell - ▁getting - ▁dead - ▁Art - ▁würde - way - ▁keine - ú - ▁Vi - ot - sen - 知 - 万 - ▁saying - ▁b - ag - vu - ▁Ni - ▁bed - ▁они - ▁vous - にも - '20' - 白 - ▁dark - 主 - ft - カ - ми - ▁А - ▁thus - ▁wieder - 没 - ▁therefore - nya - 你们 - ▁же - んだ - bre - ▁book - ▁em - ▁lost - ▁thou - ere - ▁Si - ▁然后 - ▁lady - ▁cried - ▁sehen - ▁ground - 之 - ▁sagen - ▁мне - ▁einfach - 工作 - ▁거 - ▁Man - лі - chen - まで - end - ina - ▁ad - ▁ago - om - му - 听 - ▁ну - ▁today - 先生 - ▁red - ▁play - ▁бы - ▁lay - ▁было - ▁ты - ン - ▁fa - ▁F - 自 - 時 - ham - ▁для - ти - ▁started - 一些 - ▁early - ▁answer - 四 - 光 - ды - コ - 拉 - ini - nu - ▁truth - rs - ▁那 - 世界 - 路 - 도 - ▁clear - ▁towards - sten - ▁expect - ف - 身 - ▁его - ▁我们 - ▁front - ▁Let - ▁Or - ؟ - min - ▁kon - her - ▁andere - ez - ко - 就像 - ▁aan - ند - ▁diesem - ▁cost - han - fi - ▁есть - これ - ▁ما - ban - 定 - 德 - ▁Po - ▁Ba - வ - ▁با - ▁G - ▁два - め - ▁town - ix - sh - ▁الم - su - ご - ▁zwei - ▁further - ▁replied - ▁stand - している - しています - ita - ff - ▁answered - ▁gut - 流 - ker - 感 - ▁government - ▁нас - то - 那个 - 一样 - ▁fear - ин - ▁working - ▁дом - ▁taking - ▁fast - ая - ism - ▁Bu - ▁sun - ab - ▁table - ▁Con - ▁sometimes - 死 - ▁fine - ▁era - 放 - ▁return - 通 - ▁там - ▁Why - ▁kept - 利 - ap - 見 - lich - ley - day - ▁Not - ▁remember - nd - dan - ▁position - ru - gar - 你知道 - んで - 目 - tro - iert - ах - 作 - ▁ten - ک - ▁jetzt - ▁trying - tan - ▁zur - ча - ies - ▁soul - ▁wish - 明 - 这样 - 车 - 起 - ▁try - ▁bis - ▁seems - ▁ت - 还是 - いて - q - ▁continued - でした - 加 - された - ▁Du - ных - ит - ▁weil - ▁plan - ▁gi - ▁evening - ▁ye - 儿 - ▁subject - ▁если - ▁sound - ▁об - 安 - ▁hi - lle - ▁sin - ▁அ - ▁longer - ▁pay - ire - র - ▁nous - ']' - 两 - ▁doubt - ил - ▁beautiful - ль - ர் - え - ris - tor - 问题 - mal - ons - ▁Les - ▁cap - 合 - ▁king - ▁その - une - ▁ready - ▁interest - tra - ▁law - 에 - és - ▁earth - ▁top - elle - 马 - いる - ▁fi - ▁dit - ▁letter - ▁cold - ò - 中国 - ▁view - ▁manner - 어 - ▁system - ▁zij - ある - ня - ▁third - cy - ド - 都是 - ▁week - ▁pri - ischen - ▁art - ▁service - 爱 - де - ▁thinking - tar - ▁selbst - ▁Ab - 我们的 - ون - lin - 孩子 - ence - ▁[ - 克 - ▁deep - one - து - 需要 - ▁results - 美 - ▁cause - G - 火 - ラ - cia - ной - ▁anderen - ▁vol - kin - ▁comes - あ - ▁talking - ary - ▁happened - 南 - tt - ub - ▁machen - 王 - ▁natural - ron - ▁months - ex - od - ▁value - ▁happy - ▁husband - ▁post - ين - ▁să - ▁Ho - ما - ▁sir - ▁Mi - ▁Mar - ▁thy - 吃 - ure - 立 - ▁Ka - 戦 - ▁alles - ö - ▁English - ▁p - D - がある - ▁「 - ▁müssen - N - ▁zurück - M - ▁У - ▁아 - ▁Our - 同 - ▁viele - ▁sub - ор - ian - 看到 - F - ▁ex - ▁pass - ▁seven - ▁York - 跟 - ▁brother - igen - ▁capital - 全 - cher - マ - ▁Lo - ▁living - 平 - ▁low - ▁King - ▁sight - ▁Ar - ▁nie - ▁Vor - ise - 动 - ende - ▁meet - ни - ▁Sch - ні - 但是 - 正 - ▁hours - ▁certainly - ş - ▁watch - ▁support - 野 - mit - K - 意 - ия - ▁但 - ▁cu - 时间 - 使用 - ▁V - ê - ble - ging - 品 - ▁strange - ▁Ri - پ - 木 - кі - ā - ▁followed - ক - ige - ▁common - ▁hold - 人们 - ▁ca - ▁returned - ern - ▁R - 等 - ah - ▁suddenly - ▁account - len - ▁road - ud - ▁меня - ▁experience - fo - ▁có - وا - ▁After - ▁received - ロ - ▁guess - car - 生活 - ée - ▁dieses - 它们 - 真 - 気 - ▁arm - 을 - ▁thirty - ▁seeing - ▁б - if - ح - ische - cht - ▁social - ▁spirit - sel - ей - ▁cut - ست - ship - 区 - sto - ▁Dinge - ▁minutes - ▁Don - ▁terms - ▁feeling - ▁due - ▁maar - ▁blood - ▁denke - use - ▁diesen - ▁أن - 次 - ▁двадцать - ています - 文 - ▁eight - ▁food - ▁particular - ating - ▁spoke - 格 - ▁Fa - ▁очень - 五 - ▁eines - レ - ▁England - ▁Leute - 原 - خ - ▁result - こと - 元 - ▁inter - bel - ▁seem - ▁process - ▁Ca - ria - について - ▁tried - 带 - 钱 - ▁Go - フ - ndo - ▁horse - ▁Of - ▁effect - ▁Bo - ல் - 别 - ▁gu - ement - ▁Ihnen - のか - ▁arms - ▁beyond - 理 - ▁simple - だけ - ▁Welt - ▁By - ▁Pro - 快 - ac - ன் - ▁With - ę - ▁muss - ый - タ - ▁Ro - ▁ei - 自己的 - let - ▁reached - ▁hair - ungen - なかった - ▁она - ▁els - ele - ন - 量 - king - ▁когда - '...' - 米 - ブ - ▁single - 号 - ▁tra - 長 - ▁move - こ - 形 - 一下 - те - ▁tr - になる - _ - ▁maybe - ama - 人が - ier - ▁makes - L - ▁deal - 研究 - ▁doesn - ▁period - ▁Su - mente - ▁eye - cha - ▁mais - 何か - ▁wrong - ▁Va - за - nen - ▁wurden - los - ▁information - ба - zer - ▁và - ▁Te - че - ale - 然后 - ▁Hi - gre - ▁Jo - ры - tel - van - 通过 - vis - ▁Z - ef - 门 - ▁sleep - ▁Here - ▁Can - 店 - ▁yourself - af - аў - ▁Wa - ira - 只是 - ர - ▁یک - ▁foot - ▁self - シ - 住 - ▁è - ▁Di - ž - room - ▁Am - 气 - tic - ▁tôi - ▁пере - del - ▁whatever - ▁mo - ▁はい - ан - ම - ну - ▁W - ▁mine - ▁Pe - ▁chance - ص - H - じ - 公司 - wn - ▁denn - 谁 - んです - 州 - 公 - ▁Ad - ▁attention - ▁seiner - ▁schon - ▁master - č - ▁wild - ナ - ▁wind - 声 - wer - pi - ▁suppose - ▁rose - ▁fall - そう - ре - ▁ihnen - ▁walk - ▁más - ▁problem - ▁unsere - ار - cur - ▁higher - もの - れ - The - las - 的话 - fu - ali - ▁нь - ▁necessary - ▁character - 女 - 八 - ▁што - он - っていう - ▁history - غ - line - 台 - ▁daughter - ▁J - ▁fair - どう - ▁Some - ▁force - té - ▁Wie - 半 - tre - ▁respect - ▁yo - ▁customers - ▁fifty - 那些 - ду - ▁space - 送 - ▁data - är - ▁appeared - ▁river - چ - また - ura - sha - ば - ▁carried - 社会 - 第 - க் - 味 - ▁Mae - 代 - ▁share - ▁ju - ▁jeung - ▁sit - ano - ▁pu - ий - ▁три - ▁toward - 出来 - ▁window - вер - ▁lui - ▁нет - mes - ally - ている - 時間 - 叫 - ج - 科 - или - ▁haar - க - ▁example - க்க - ks - ور - ▁Aus - 情 - ▁opportunity - ach - ▁ho - ▁act - ▁ningal - ▁London - ▁diwawancara - 球 - ▁oloh - ▁wartawan - ▁glad - ▁dipoto - 人の - 信 - 们 - ই - ▁except - 色 - V - ▁ke - ▁pot - mar - 布 - ▁church - ▁Tom - ▁outside - ▁level - っ - ф - tas - ati - 那么 - ▁following - ▁Ihre - 不会 - ▁camp - ミ - nde - ম - ▁Ga - ▁party - ▁ou - ▁او - hu - ▁во - ▁gold - ام - ▁mar - ウ - 喜欢 - ▁Ti - 使 - ▁einige - ▁Ku - ward - ▁drei - ▁send - ди - ▁안 - 今天 - 如果 - ▁stuff - ▁especially - ▁avec - 该 - ید - ▁opened - ▁couple - ▁lived - ▁уже - ▁lower - ▁ought - ок - ▁근데 - né - ке - र - ত - ▁чтобы - par - ▁stay - ▁bar - ities - 希望 - ▁Christ - ▁knowledge - ras - cer - cho - ▁control - ide - ▁added - 空 - 六 - ল - U - lie - 政府 - eur - ▁miles - ным - 怎么 - hen - ▁Who - са - 集 - ▁mein - 川 - ▁blue - па - ▁none - 朝 - 问 - 话 - への - 有一个 - 应该 - ní - 教 - ▁lie - 相 - 他们的 - ▁daß - ша - ▁Auf - ▁green - ▁mis - ▁stop - ding - ▁beginning - ▁뭐 - ▁damit - 北 - 食 - ▁một - ض - ving - ” - 越 - ▁будет - ▁entre - una - ▁eighteen - во - mon - ▁product - ▁Also - ▁late - ▁exactly - ということで - ici - lla - nda - ▁ran - ста - ▁bien - ▁Mal - sche - ▁paper - ▁okay - R - ice - けど - ▁fellow - ▁easy - ого - ▁weiß - ▁este - ▁Mu - 是一个 - don - ky - ▁save - ▁comp - res - ▁quickly - 的是 - ▁Car - 总 - ize - 也是 - 言 - ところ - nes - 的事情 - 真的 - 这么 - オ - ▁على - 十 - 正在 - 不要 - ▁dollars - ando - ▁wonder - 美国 - ▁purpose - ط - ذ - ри - це - 干 - ▁American - 能够 - ▁court - 夜 - 其他 - ▁rich - 期 - ns - 近 - ные - ▁forth - ズ - ▁och - 我想 - 的な - ▁figure - ▁demand - よく - ▁nun - 林 - ▁nearly - む - ▁Com - ▁wat - 大学 - ▁wall - ▁French - ▁Europe - ие - ▁building - uri - ▁pleasure - ato - ць - heit - こう - for - およそ - î - ▁fer - ▁С - ▁comme - ▁questions - ல - になって - 请 - ய - mas - 认为 - ▁Ko - 或者 - ▁step - ன - やっぱり - ▁strength - ▁local - ▁sweet - ▁ن - ani - ую - ▁uit - ен - 器 - 石 - گ - tin - ү - 只 - tal - 处 - 发现 - yi - 後 - ▁situation - tur - ▁expected - 才 - ▁although - すると - ▁main - ▁только - ット - த - ges - ▁consider - へ - ▁grow - ▁zoo - 東京 - stand - ▁success - 军 - uch - ▁hem - ▁health - lor - ▁group - ▁street - ▁moved - ▁oku - stre - 深 - ▁var - ▁Я - ram - ▁entered - ということです - eth - 不能 - ту - 九 - ப் - ▁sont - ▁bright - nk - ien - 之前 - ▁nine - ▁lives - 的东西 - бы - ▁pi - '30' - time - ▁doctor - と思います - ▁performance - ▁av - ndi - ▁team - ▁werde - ро - 线 - 受 - can - ა - ▁pen - wan - ą - You - sion - 学校 - чы - ▁progress - 觉得 - ▁secret - 阿 - med - 直 - 找 - ▁goes - ▁silence - ▁music - 船 - 은 - 다 - мо - न - pan - ами - 事情 - ▁Tu - 局 - ▁Leben - ida - ▁пять - ▁afraid - ▁hu - ▁class - ▁object - ▁difficult - chi - ▁impact - すごい - ව - ▁learn - ▁greater - ▁könnte - 制 - 音 - ийн - ü - ou - ▁instead - ▁Gu - ட - 见 - ante - 〉 - ▁Als - ▁forty - 神 - ▁distance - ▁floor - 写 - ▁peace - sti - ▁office - ▁key - ▁И - ▁follow - ▁потому - ▁modern - 解 - side - ▁четыре - ▁Et - ction - ッ - ▁seinen - ▁eat - ▁din - ▁trouble - ib - ▁Ur - 指 - 进 - kt - ول - 成为 - ía - 落 - できる - gra - ▁тридцать - 所有 - ین - ḥ - ▁drive - 歌 - 几 - ▁interesting - ß - tte - 关于 - mp - 张 - ▁game - ▁nga - 感じ - ▁provide - ▁Hu - ▁〈 - ▁Zu - eg - tri - ▁standing - 曲 - kel - サ - ▁remain - ▁Bi - ▁estis - ▁Ben - ал - ▁dog - iza - ▁просто - ▁smile - eren - ▁г - ▁sister - ▁break - 给我 - 大家 - ció - ▁Mary - لا - ▁please - form - aka - ▁hatten - バ - ▁dar - ized - ু - lang - bra - ▁был - ▁length - っている - ▁grew - 選手 - ▁private - ville - 之后 - ▁news - ▁これ - 私 - vel - 讲 - ى - ых - ▁Ju - ▁如果 - 取 - ▁ara - ▁increase - ▁mon - ца - یم - ですよね - ▁wouldn - ries - ▁không - メ - ▁mouth - ▁cho - ▁project - ▁bear - ▁Fe - ▁mag - ero - ې - ▁Lady - ▁final - ▁design - ▁mai - ▁gab - ▁base - ст - out - cent - 話 - gri - ▁walked - 本当に - ça - そして - ▁train - ations - ido - さんが - ▁sobre - ▁мо - iv - ▁action - 서 - се - ▁visit - ▁financial - ▁tre - pen - rea - house - ▁neither - ё - ité - ется - ▁més - のが - 卡 - ▁wait - ▁written - 波 - ного - 雨 - チ - tie - ▁grand - ▁сейчас - ▁laid - ▁sitting - ▁geht - ▁heavy - nia - ▁Teil - 完全 - ар - 风 - of - 接 - ха - dig - 如何 - ▁personal - ade - 急 - 进行 - キ - ▁kwa - rie - ▁семь - ▁ад - 間 - ▁trees - ▁amount - ▁While - 任何 - ▁couldn - ে - ▁happen - ше - ▁Mann - ▁ihrer - ned - エ - ▁ré - ▁ways - ▁ganz - ces - iri - ▁books - ▁которые - ▁ship - ▁ihren - 雪 - ▁пра - ▁special - より - 由 - ▁learned - дзе - ▁development - ▁died - ▁War - ▁San - ▁может - ▁begin - ción - ▁joy - ▁immediately - ▁الأ - же - 很多 - ▁stopped - ▁Mit - ▁dir - dd - vor - ▁这是 - ▁plain - 书 - 为什么 - ▁places - ▁spot - ▁board - ika - ▁running - ▁Where - ▁これは - ▁Land - ▁remained - ▁various - ▁konnte - ▁imp - ▁или - mie - ▁desire - 民 - ▁á - ▁Ki - ple - ▁yang - 率 - ▁fit - ▁sei - ▁vielleicht - 君 - ▁sales - ▁gegen - দ - ▁rate - liche - amente - isi - ▁Pi - ▁France - fen - tive - ▁fight - ip - ▁Ve - ▁film - ち - ▁warm - ▁seu - ▁loved - ▁married - clock - 提供 - mis - ▁Im - ▁perfect - かな - lik - ▁danger - ব - san - ɣ - パ - ▁sollte - ට - ▁George - ▁States - mel - ino - ▁caught - lä - ▁conversation - ata - ert - そういう - ▁safe - og - ğ - ins - ят - ▁wäre - ▁переулок - ▁spring - сан - ▁slowly - دا - ▁cool - 表 - ▁кто - ▁program - ▁kam - shi - ▁cette - 一种 - ▁рас - 倒 - ▁regard - õ - ł - 初 - ▁faith - 発 - ▁hall - 清 - ▁quiet - 村 - ии - 让我 - ▁dinner - ▁giving - log - ▁их - ral - ▁сорок - ▁ко - leg - 交 - ு - 早 - 安全 - 土 - uk - ▁However - ▁nice - 地方 - なく - ▁command - cio - das - ▁Ze - ▁straight - ▁гэта - 一点 - ▁gentleman - 选择 - 周 - 実 - ▁hay - 还有 - ▁changed - ▁trust - 东西 - ▁hot - ió - 反 - ona - ですが - とは - 笑 - น - را - さんの - ▁Dar - ▁waiting - 看看 - ▁job - 七 - ▁că - ය - ▁wollen - cla - ▁bank - ▁model - ▁еще - ▁heute - ▁write - rá - 事件 - ▁Pre - ▁möchte - ▁Ru - 巴 - ▁build - ▁built - 城 - low - ▁race - 百 - 国家 - ▁آن - ▁price - ▁picture - mor - '11' - ▁بود - 以上 - ▁나 - art - ▁United - ▁lips - cal - ▁ப - ▁Pri - ▁đ - ▁tree - ▁vom - 病 - ré - ska - ක - ē - 風 - ▁middle - ▁age - ▁broken - ▁impossible - ▁しかし - ▁mij - ski - ▁gehen - ▁quick - ▁direction - 的一个 - ▁society - ▁bat - my - ▁weiter - ▁simply - ih - ▁below - ▁placed - ▁wrote - 帮助 - ように - ▁test - ▁вас - 来说 - ▁study - ▁evil - 是的 - ▁talked - ▁Wo - ▁drew - ▁lang - ▁darüber - ئ - ▁showed - 県 - lichen - ▁của - zeit - 一起 - ▁corner - ▁برای - ▁opinion - ▁thank - 車 - ым - 甚至 - த்த - ▁color - ▁chúng - ▁Peter - 保 - される - 最后 - ▁presence - ▁ي - pro - ▁för - ▁kan - 기 - ▁station - 特别 - ow - ▁wide - yan - 街 - kon - 中的 - ▁naar - ▁minute - ▁hardly - ▁Paul - 低 - ▁piece - ▁க - ると - ▁zich - ▁eu - 命 - ют - dra - xi - ec - ▁filled - ▁expression - 买 - 足 - 场 - dem - ug - ▁chair - ther - ▁pan - ▁soft - ▁battle - ▁pero - 差 - int - ▁growing - ▁Gi - ▁national - 机 - ito - ▁America - 派 - like - ista - ң - ▁davon - ▁current - mus - ▁dies - 番 - ▁touch - だと - 存在 - ▁summer - schen - ▁இ - ▁similar - ▁Jahren - ▁pleased - 一直 - ▁Sam - ▁zusammen - 如此 - ▁cent - hr - ▁decided - ▁Lu - ▁На - 理解 - ▁beauty - ▁West - nis - ava - ▁mas - ぶ - 強 - fin - 员 - ▁Cha - цы - ▁因为 - ニ - ▁language - ▁ம - cro - ده - ▁ru - ▁어 - 活 - ▁muri - ▁lead - gang - ▁și - やって - ans - ө - dos - ▁village - ▁political - 失 - ▁dos - ▁fresh - field - 切 - ▁ohne - lt - ▁revenue - 室 - 男 - 士 - put - ▁ма - 河 - ▁isn - ties - pp - ▁dich - さんは - mm - 部分 - ▁girls - ▁Ke - 夫 - ▁super - kor - 继续 - ich - ▁risk - ながら - ▁sus - ▁Arbeit - ▁sign - よね - ず - ▁ill - اس - hy - ▁appearance - ▁snow - well - 森 - ▁add - ▁army - ▁Your - ator - ament - ▁sick - ▁những - ▁reach - gli - 古 - ▁condition - ▁parts - kla - ▁works - от - nce - ые - グ - ▁Tra - ▁month - 官 - win - хо - ▁everybody - ▁direct - ▁Jahre - んですけど - ▁mentioned - 座 - 管 - sse - ▁serve - ▁chi - 电 - ▁boys - ай - 線 - ▁现在 - ம - 实 - ▁ل - cra - мен - uka - 商 - ▁afternoon - そうです - んですね - ▁raised - 片 - pu - اء - ве - ▁likely - ▁aba - 島 - 报 - 推 - ob - ▁Will - 女性 - ▁anar - ue - ▁garden - ▁influence - ▁brain - 边 - ▁ни - 救 - 方面 - dir - まだ - ▁heaven - ▁шесть - ться - ▁fait - schaft - ▁buy - 少 - ▁worked - டி - ▁thoughts - アメリカ - のです - ме - ▁umu - 起来 - ▁boat - ▁hin - 却 - ▁Eine - 最近 - ions - 得到 - ▁allowed - ala - ▁Yet - ▁struck - ▁enemy - ث - 作为 - ▁Just - ▁ஒரு - ную - umu - ▁using - ▁cor - ▁Diese - 的地方 - ジ - ments - ili - ▁looks - ▁pain - тор - ▁meant - ать - தி - ded - リー - but - ▁terrible - ▁carry - ▁Yeah - ▁weeks - tā - あの - ▁darauf - 其 - ▁kein - wood - ▁pod - ツ - W - ible - sk - ▁David - ▁Kon - モ - ▁bra - ▁Ya - ▁cross - 实际上 - ants - ació - ▁born - 支持 - ▁South - あと - nte - ▁Nun - ▁duty - ような - ▁See - ▁Thank - というのは - 哈 - vin - ▁ourselves - 是什么 - ▁fourth - чи - iti - ped - 她的 - 难 - stra - ▁быть - 岁 - ▁worth - 를 - ▁chief - ▁grave - har - ▁Elle - ▁doch - ▁ஆ - ▁od - 到了 - ▁speaking - لی - ▁cre - 不知道 - 円 - ▁material - න - hin - ▁journey - ture - いく - ▁band - ney - 奥 - なんか - ▁wasn - ojn - ▁том - ▁Ce - ▁девять - 必须 - され - шы - ▁trade - ▁вам - ▁fond - 断 - ▁type - ▁according - 条 - сти - ▁kun - ▁dream - ▁considered - ▁easily - ▁iz - 》 - ▁H - var - ▁seinem - 松 - ▁forget - ▁writing - 达 - ▁Par - ▁labor - 根 - ros - ▁надо - ɛ - ▁paid - iya - тер - dia - wing - ぐらい - ▁products - ▁vast - ▁daar - ▁industry - ▁fun - ▁science - 青 - ▁zwischen - ா - それ - கள் - ев - ▁ба - 当然 - tz - gy - هم - oli - wy - ▁inside - ses - 省 - cken - dar - 院 - 尼 - гу - heid - ▁gran - 罗 - ▁não - ▁finally - als - ▁worden - स - ▁sta - ▁су - 首 - ▁uma - ▁dort - ▁generally - ای - ந்த - ial - ▁shot - َ - 给你 - ▁Paris - त - ı - ▁бол - 在一起 - ford - ▁focus - 無 - hol - mat - ▁tout - ▁field - ова - んですが - ▁enter - 在这里 - ▁einmal - ▁path - 肉 - よう - ▁supposed - lia - най - ps - berg - ves - يا - '50' - 强 - ▁individual - ▁пры - ▁vier - met - ก - dor - しか - 战 - ▁North - ▁tea - ▁Sta - tis - ▁sudden - 两个 - ▁Jack - 房 - ことが - ical - ▁major - ру - せ - 只有 - ут - ▁spent - ▁broke - ▁waar - 工 - んですよ - их - ů - ▁nichts - ▁principal - ▁Sp - nh - ▁esta - ә - だから - 成功 - ビ - ▁восемь - ▁আ - 一次 - trag - 客 - ▁led - ▁winter - ม - ▁Pu - ▁content - fall - ric - 朋友 - ▁exclaimed - ▁charge - looking - ろ - ▁І - していた - 决定 - ▁knows - ▁term - sin - 世 - ▁пятьдесят - ▁wonderful - 以后 - ▁wood - ▁evidence - oc - ssen - ▁deux - ▁пред - ▁moving - ▁receive - ход - вод - mmer - ats - 案 - 左 - bli - 远 - ▁laughed - ▁win - ▁bound - 取り - ний - ▁Because - rig - 这样的 - ▁sí - tru - ▁ただ - 眼 - ▁occasion - 权 - “ - ▁moral - kat - net - ▁entirely - rem - ▁rain - ▁difference - ep - ▁clean - ▁iron - rin - ட் - ▁meiner - ৰ - ▁significant - ケ - wr - ▁paar - ão - set - ▁Tag - ▁General - ▁area - ▁twelve - いました - にある - ▁actual - qu - ▁ama - sco - 转 - ser - られる - ▁latter - ▁بر - ▁hätte - sie - ened - ▁create - ここ - aga - ▁closed - ▁normal - 你可以 - ▁drink - ▁denen - via - ▁beside - ▁positive - ボ - жа - ▁kandi - eb - ▁В - ▁meeting - ▁della - ▁fu - 一个人 - ите - ▁pale - 回来 - ▁report - 传 - 热 - 問題 - 努力 - ▁opportunities - ム - bri - ▁Are - ▁broad - 나 - ein - 亚 - 星 - ▁usually - ская - ları - ▁May - tta - أ - ▁leaving - ph - э - 千 - ▁community - вы - 东 - ▁million - 画 - ola - 种 - ▁cry - ги - mbo - ева - های - ▁allow - ▁mà - ▁molt - ი - ▁Har - 收 - ▁usual - ▁ging - ▁那么 - 思 - isch - ▁size - ский - ▁arrived - ▁была - ▁К - эр - ette - اد - ▁unless - sed - ▁cases - ▁tears - ▁“ - ▁energy - 哪 - 常 - ▁season - ▁silent - ▁animal - ส - ど - å - kommen - ▁Pen - ū - '12' - ▁services - vid - ima - мі - nou - ▁чем - ▁hast - ▁bak - spe - llen - ▁habit - ▁killed - ▁covered - 咱们 - ▁haven - 見て - ▁nog - ▁crowd - ▁cat - чу - nas - ▁rep - poli - род - ▁sprechen - 以及 - pt - ▁complete - ▁бо - ▁этого - vers - ▁Captain - ▁youth - ius - ック - ▁hit - bor - ov - gehen - 自然 - ▁« - lé - zar - ▁막 - ி - たい - jar - ▁kleine - сы - ▁ngo - ▁Bar - ▁нам - ▁fifteen - rat - ami - lic - ▁sixty - ▁Two - stru - ▁global - pf - 许多 - ▁está - ▁đó - ▁shut - sar - び - ▁tatsächlich - ▁Hand - ▁ahead - ▁particularly - ▁From - ▁star - yon - ге - ков - ▁tax - ▁technology - ▁European - mak - ▁religion - uma - ▁وال - ula - 警察 - ▁Д - 代表 - 的に - ▁former - ▁gemacht - ネ - 上的 - '40' - ▁Frau - ▁China - ils - mba - ▁President - 拿 - ient - ▁approach - 妈妈 - 真正 - ▁turning - ▁ubu - wu - ▁مع - ▁große - ▁Jesus - 酒 - ە - bit - ▁somewhat - light - ▁merely - 不同 - ▁dazu - yu - 家庭 - 有什么 - ora - ▁Dan - 冷 - ▁cast - ын - ▁dels - ▁また - hand - かった - ほど - ோ - 节 - ▁cy - hab - sis - ▁horses - ▁miss - ハ - ți - ▁gar - ▁somebody - 대 - ▁Els - stu - ▁note - ර - ▁har - iki - rà - 变 - bro - ▁fe - ability - ping - ▁Cor - ▁glaube - ▁spend - ▁lines - نی - ▁someone - ▁mijn - бо - ▁noble - ஸ் - tres - ҡ - ▁needs - があります - ev - ▁fish - ▁Nach - ▁fort - 式 - ▁rise - ман - ▁appear - ということ - । - ▁ze - ное - ▁increased - ▁Have - け - dri - ara - ▁surprise - 对于 - ▁offer - лы - ▁letters - ▁attack - ▁bij - ▁stone - nos - みんな - ily - ▁Geschichte - ▁original - ▁discovered - 温 - 每 - ですか - 리 - ина - ▁sorry - ▁col - án - ரி - 让我们 - 装 - 重要 - 是否 - 以前 - 找到 - ▁circumstances - 系统 - ▁Father - 此 - tik - row - ком - ▁здесь - これは - ▁всё - ▁imagine - ад - 所有的 - ▁этом - ▁honor - ис - 今年 - mun - ▁tall - ▁тем - ▁sky - ▁gonna - ি - ▁needed - ▁trans - になった - illa - 感染 - ▁escape - ▁هو - ▁steps - 機 - ▁video - 第一 - ▁blind - なら - ▁sono - ▁huge - ô - ▁sand - ▁ideas - イン - ▁countries - ▁Frage - ▁rock - tura - 整个 - 皆さん - ▁notice - uta - ひ - ▁worse - ▁Nu - nel - board - hir - pped - 玩 - oni - ви - ▁iki - ▁parents - ▁instant - ▁تا - 李 - 員 - ▁James - ▁wise - ▁tend - pul - ▁bottom - ▁ersten - 语 - ▁slow - kle - ▁Men - dis - ас - bil - ▁где - 短 - ▁Their - ▁zou - ▁kids - ▁conditions - ▁того - ぎ - тай - all - 方法 - ▁porque - ▁هم - ▁based - ▁customer - मा - ▁entire - ▁marriage - ith - ně - 往 - ак - ▁ball - ▁store - leri - idad - ţi - ُ - ▁های - 了一个 - lü - ▁unto - ▁movement - 进入 - 如果你 - 昨日 - hl - ▁Fall - mont - ged - gal - ▁effort - rt - 場 - ▁serious - ▁Okay - 学生 - ität - ร - rwa - ද - 师 - aire - си - ці - ī - kar - ▁prepared - ▁Ex - gla - ▁während - ガ - স - 件 - 健康 - ▁reading - ▁się - 考え - ▁north - ▁cash - 思い - lay - hn - اب - ட்ட - cen - ▁الت - 其实 - ques - loo - ▁thick - ▁breath - ▁Even - ත - ▁دو - ▁advantage - ▁mort - vad - こんな - ual - ete - ▁würden - 大きな - rn - sam - mut - до - ▁draw - J - 零 - tus - ▁тут - じゃない - 关系 - ▁greatest - ▁credit - ddy - 兰 - ▁min - ▁sah - ▁Kinder - ▁fest - '!"' - ▁education - 算 - 歳 - 字 - â - そ - া - ▁watched - ▁changes - ノ - encia - وم - пер - 单 - ▁kar - 皮 - ▁Mon - rang - ▁points - 勝 - mic - 了解 - ▁bon - ▁رو - ▁welche - по - ▁storm - ▁completely - tag - 红 - 的问题 - ▁watching - huh - ▁த - ados - ▁guard - лю - itat - ▁nó - ▁property - 图 - 服 - ения - ▁members - ▁takes - ▁including - られた - ably - table - ified - зна - ▁Han - 包 - '15' - 草 - What - ▁без - kul - 教育 - 是不是 - ▁gives - mann - ▁tak - ▁Weise - ение - pon - े - ey - ▁listen - ▁laugh - ▁için - ân - ী - 脚 - ż - 要求 - ▁total - ため - ▁benefit - ved - ран - ▁我想 - wal - teil - たら - ちゃん - ▁genau - ador - 突然 - ▁إلى - nan - ▁scene - ▁можно - eri - ▁Herr - プ - ав - ▁bw - 队 - ▁police - ▁fortune - 만 - ón - ▁weather - ▁parte - ▁balance - ▁Ihr - 板 - вед - ell - ▁finden - 这一 - 包括 - vil - tant - ▁Wi - تر - ▁exp - imi - ▁mountain - ▁Robert - ▁час - ер - 非 - ná - ▁forest - ▁даже - ▁ac - 시 - ▁Mas - ▁research - 我就 - ▁app - 有人 - きました - 嘛 - osa - ▁list - ▁Za - ▁percent - рэ - といいます - ▁Ĝi - mb - inde - ▁named - 当时 - лен - 二十 - ▁som - 服务 - ▁daran - ир - ▁tone - ▁believed - ▁speech - ▁glass - 离 - ▁mere - ▁taste - ld - 場所 - ▁events - 血 - iko - night - との - ▁solid - ▁pie - ▁était - fre - ▁fallen - 未 - ast - ▁Charles - ▁Stadt - ว - ▁memory - ▁uz - ことを - lí - 我会 - ▁mad - eva - iga - ப்ப - мы - ▁tan - werk - тра - 连 - мер - ▁edge - ute - ார் - ▁шестьдесят - 自由 - press - ▁kommen - 纳 - ▁mhm - ▁لل - ▁markets - ▁south - ▁passage - ▁pra - 完 - бе - ▁ре - ▁afterwards - rei - ▁played - ▁Henry - 的方式 - ▁casa - 中心 - ол - ▁wenig - ▁shape - 关 - ious - ▁ха - ḍ - sia - 原因 - 右 - ▁economic - 参加 - spect - ▁Dann - ▁großen - 人的 - sp - ▁prove - lon - apa - ▁bri - ▁spread - 有点 - チーム - ▁everyone - 頭 - ▁fixed - ▁states - ▁dress - ▁animals - 人は - ▁comfort - 过去 - stan - ▁bag - 割 - ロシア - ▁meaning - ▁determined - 야 - 修 - ▁dry - رو - ▁người - ▁justice - ▁Which - 試合 - 大会 - ▁wollte - ும் - 屋 - ▁German - Oh - oma - ▁popular - ▁record - ▁box - ін - 华 - 技术 - ▁companies - asi - ei - ▁hebben - ▁بال - ▁그냥 - ् - ▁ont - ▁geben - 感觉 - 江 - ▁rec - 自分の - 待 - ▁surface - ▁shore - sent - ▁kill - ▁honour - recht - ▁understood - ign - ▁walls - ▁persons - 听到 - ▁cover - ▁lange - ▁который - ▁ook - 注意 - ▁sad - ▁natürlich - ▁narrow - ella - ▁ber - 密 - டு - 谷 - ▁silver - ▁ses - ▁Mor - ві - pre - ▁message - ▁grass - ▁Great - ▁ganze - 方式 - ▁gerade - 追 - ▁Nor - 来自 - ▁House - ön - 政治 - odd - 如 - 我认为 - fan - til - でしょう - kom - 在那里 - ぐ - ▁smoke - ▁嗯 - ▁brave - 影响 - cul - 全部 - ▁driven - ▁그런 - ła - 返 - ▁stream - ▁prevent - ▁19 - があった - ▁environment - вал - ▁sold - ще - pin - 吉 - ву - zin - ▁sollten - ِ - yn - ▁sondern - 로 - ls - ▁excited - ▁tri - ▁Right - eux - ▁confidence - ▁lassen - ▁pick - ▁gesagt - ▁Weg - 好的 - ▁X - 文化 - ▁region - ныя - 别人 - ▁Good - ▁Che - 管理 - ▁Did - 配 - ▁William - 里面 - ors - 人类 - ▁mass - ▁де - ▁unserer - 任 - 总是 - 我是 - 守 - ▁wished - ▁pleasant - ▁familiar - ▁weak - ▁کار - ret - эл - eu - 罪 - tä - zel - ın - 'off' - ▁п - 啦 - ▁anti - ▁تو - スト - ako - ▁được - ▁ancient - ▁wel - ▁perfectly - 東 - ▁ces - 调 - ▁Sal - ▁bạn - rum - 步 - 同じ - ▁University - ative - 程 - gera - 科学 - 跑 - ▁dia - cos - cing - তা - デ - ▁kwi - ▁operating - 時に - lit - ▁ibi - 因 - ▁search - ▁absolutely - ▁degree - vert - ▁telling - ته - ย - ▁offered - ▁jeder - 哥 - 让你 - 提 - sing - ▁seat - ▁человек - ▁Th - стра - ▁happiness - 这里 - ering - 井 - lli - 这是一个 - 城市 - 母 - ▁hurt - 至 - ▁efforts - سی - ظ - ▁Col - ール - тар - どんな - biri - ெ - ▁interested - ▁production - ▁proper - 不同的 - ▁Але - ais - iro - っと - law - 完成 - له - fil - ▁inform - 利用 - ग - 状況 - ▁darkness - ▁dafür - ▁accept - 通常 - ًا - 试图 - ▁double - ния - ▁як - きょう - ▁Jahr - เ - ▁neck - gin - nam - ea - ること - ▁produce - ▁내가 - 家族 - 非常に - ego - ▁quality - ▁Tro - ▁sau - 雷 - ▁한 - ▁throughout - ো - ación - 查 - ▁Christian - 精 - ría - ▁خ - ▁Tre - 改变 - fully - ▁international - It - 那里 - 々 - tir - ▁är - nder - مان - ▁favor - ▁Beispiel - ▁range - бу - 恩 - pot - ▁でも - ạ - である - uza - 黑 - лся - 演 - 今日 - ▁beneath - eta - ▁sharp - म - ▁Ch - ▁Like - ▁clothes - ▁Des - nze - ▁gute - ▁fla - 役 - 直接 - igi - 素 - ▁seventy - को - 'No' - ium - ▁shop - ্ - ▁captain - 坐 - හ - ポイント - ▁marry - ▁fill - ▁bil - によって - ▁şi - ▁いや - 各 - ▁century - ▁seva - こちら - ப - ▁vint - ▁Über - 福 - ▁attempt - 一番 - 対 - 未来 - thi - ▁Son - ற - sim - ری - tem - ▁finding - すごく - uku - 经济 - ▁repeated - ▁British - 気持ち - ▁explain - ▁loss - sur - rit - ▁stage - バー - vá - ela - ay - eau - ▁courage - nic - ▁vain - ▁opening - ▁sagt - '60' - ▁lose - ierte - 它的 - 想要 - ▁supply - ▁Б - க்கு - んですか - ▁structure - прав - 는데 - ▁že - ▁observed - pic - ▁شد - ▁drawing - ▁promise - ▁deliver - 老师 - qua - 熱 - itu - あれ - ▁putting - ▁author - ▁finished - dol - lig - ▁agree - ▁produced - over - になります - 付 - ▁тоже - 果 - ün - されました - ▁reply - ▁двести - ▁march - ▁deze - gas - ▁foreign - ▁gra - ني - 为了 - ▁Ende - cle - halt - shed - эн - ▁basis - ▁ничего - 留 - ▁plant - ▁soldiers - ▁heraus - ▁formed - وی - ▁profit - ო - '100' - ▁gesehen - 学习 - 比较 - 共 - ▁stock - ▁cha - ▁neue - 获得 - න් - ▁management - ▁issue - 似乎 - なので - ु - ▁equal - nden - tten - 无法 - لي - ▁noise - ి - ▁ф - ▁этот - ▁existence - 毛 - uff - 料 - zione - ena - ▁gli - ▁mil - ▁pentru - ▁statements - ください - tat - ane - ▁macht - ▁passion - ▁moon - ▁holding - つの - 前に - ▁laws - ▁numbers - und - ▁muy - ▁Geld - ල - ▁dropped - ▁cloud - лу - ी - wed - まして - みたいな - ▁beiden - jan - ill - ▁access - ச - vé - ▁10 - ▁20 - ▁Bei - 间 - ▁sword - 作品 - ▁warum - ▁конечно - аль - ▁exist - ▁song - لو - ▁mental - ▁stick - lung - 个人 - ▁houses - ▁Had - ▁Gar - ▁rule - iku - dat - ▁Though - ▁leading - 油 - work - об - 現在 - ▁threw - some - 破 - ▁additional - 数据 - ▁wij - ▁più - 少し - ▁Saint - sz - ს - ▁leaves - などの - ▁fünf - ▁Grund - あります - ▁address - cour - কে - ungs - ▁Denn - ▁addition - とも - ▁bas - ▁yellow - ▁flowers - ▁platform - ▁erste - ▁playing - ▁weight - ▁эти - ▁Street - 産 - ▁Queen - ▁knowing - 亲 - ул - 開 - ▁lying - テ - tung - ▁Dies - ▁trong - 型 - ▁proud - ск - ロー - ▁glance - ▁семьдесят - ▁island - மா - ▁ladies - 데 - вид - 党 - ▁decision - ▁weg - ▁எ - ades - gg - ນ - 退 - ですから - فت - それを - cie - 去年 - ▁speed - ▁ears - ▁sake - unk - ▁này - ▁cri - 维 - 可能会 - ▁– - ▁scarcely - ▁sell - ches - ▁clearly - ▁உ - cord - ▁seek - ▁choice - ▁Mother - став - би - ć - ▁brown - iger - ▁besser - 及 - 便 - 動 - 事故 - uko - ▁earlier - 接受 - 我在 - quer - stone - ▁Em - ▁powerful - ▁люди - chte - ග - ▁prop - ▁recent - じゃ - ▁orders - ▁ear - 」。 - ▁ihrem - ▁flat - ▁declared - ▁matters - ▁tam - ▁required - 骨 - ▁larger - ник - 夏 - pri - vre - ▁ziemlich - arbeit - ▁baby - ▁travel - ▁grace - ▁civil - ▁passing - ター - ▁firm - жи - ▁meinen - enden - 系 - 排 - ▁Every - ▁Aba - ▁aid - ▁さあ - 端 - 我觉得 - ists - cks - 界 - ▁faire - tum - 故事 - こちらの - 理由 - fra - ிய - 其中 - ille - ▁accident - ▁ph - ▁今 - 愛 - ▁thể - aŭ - zy - serv - による - 望 - war - ket - ▁possession - 说话 - ▁кон - ▁caused - ши - سا - ▁walking - ▁voi - 投 - そんな - cker - গ - ▁divine - れば - ですよ - ▁Los - ▁wine - 角 - 死亡 - aux - ▁judge - anya - ‘ - ▁Madame - ▁farm - ▁са - ▁bow - rus - spir - али - ▁learning - hor - 引 - lock - 超 - ポ - あった - ▁directly - व - ▁han - 個 - ▁shook - ヤ - 感到 - 午後 - ▁dire - 強い - gue - ▁oil - 之一 - ▁Anne - ぞ - ▁birds - ▁Gra - ▁tur - ▁remains - ▁Moment - 持 - ▁время - ▁あっ - ▁gro - tet - ரு - での - stro - 残 - ▁god - aient - 宝 - ▁sicher - цца - ▁shown - lam - ▁نه - 人生 - 一定 - চ - ▁gas - 养 - ▁contract - ▁mid - ▁هذا - 计划 - ▁på - ▁square - 证 - ▁Roman - ▁были - ▁drawn - ▁spite - ▁vision - ▁denken - 信息 - 费 - ▁avait - 横 - 另一个 - ▁Mark - ▁carefully - ▁tut - ▁coat - ente - ▁crea - ▁Come - cause - zwe - 登 - ▁flu - 卖 - 站 - า - ston - ▁Sant - ▁practice - ус - vol - ずっと - ▁willing - ▁task - mek - ▁prison - 地区 - 出去 - ▁opposite - ▁ту - ▁related - ▁military - bie - ▁center - ▁だから - 俺 - ese - ▁ol - 称 - ур - ▁Indian - 伤 - nem - লে - っています - ▁available - ▁club - یه - ▁instance - isa - rah - ▁진짜 - ▁smiled - ▁calm - 結構 - ▁Many - ▁curious - ▁af - ▁Haus - ▁understanding - 初めて - ▁shadow - ▁aware - ▁track - ▁hotel - ▁пре - 求 - 晚上 - ▁gehört - ▁شما - ▁hoe - ▁stories - ません - んですよね - きた - ▁bird - ▁prince - ched - دی - morrow - ێ - ім - 苦 - ▁Ort - ▁population - ▁engaged - ▁다 - ▁gij - ▁Bra - 兵 - vy - lı - 离开 - ▁spoken - 喝 - 잖아 - ▁больше - вар - kuru - av - ▁thin - uz - こういう - ゴ - û - ime - となる - dal - 一天 - ▁catch - ▁soll - ▁été - ▁Bir - ▁att - iĝis - ▁aux - ピ - 例 - ా - ог - ▁shows - chten - ண - 父 - 良 - 必要 - ダ - reg - ï - place - 町 - 然 - rio - ▁conduct - 确实 - ہ - 今回 - ▁surprised - auf - 你会 - ▁Washington - ▁successful - 几乎 - try - sol - されている - 法律 - '19' - ▁difficulty - ör - 跳 - ▁Prince - ▁media - ▁而且 - ▁college - क - ▁strategy - 自分 - ое - 回到 - ▁fue - com - ši - ▁religious - ▁scale - ▁uncle - 基 - 群 - 仍然 - 迪 - ▁О - ▁pocket - ос - ふ - ▁sing - ள - 支 - 容 - ▁centre - ▁آ - ification - ▁hill - ▁fly - ▁pure - ▁auto - ала - 牛 - ▁pla - ▁activity - 武 - ▁forma - ró - 也不 - ▁tired - inte - 发生 - ▁portion - ▁hung - оч - 한 - ica - ▁schnell - '18' - ▁regular - dde - ▁cyane - 把它 - eɣ - 药 - ▁đã - ▁East - ▁uno - ▁mari - ▁Tri - ດ - ▁leben - '0' - ktor - nge - ▁Daten - ▁لا - ство - ▁plans - ▁pel - 假 - ▁eighty - 解决 - plo - ง - 当你 - ▁noticed - ▁kommt - zon - ▁Unternehmen - ▁jeden - nnen - ▁Sache - 准备 - یت - ▁sal - 你就 - 関係 - лан - ు - ▁hearing - ▁happens - 照 - bert - 弱 - ▁лет - ▁Chi - act - ▁знаю - get - ▁бу - ூ - 视 - ▁Ci - ыл - bat - ▁Zi - vent - 段 - 秋 - фи - pper - vas - ▁mark - ▁advance - ▁Ik - ▁восемьдесят - دي - ▁إ - ▁gets - ▁creature - ▁suffering - кой - ▁todo - 選 - 满 - tori - ิ - či - ▁fully - rent - ło - 树 - '16' - ▁golden - してる - 인 - ▁hath - We - 就会 - ▁peculiar - ▁Spiel - 读 - ux - れる - ▁High - 你在 - ▁officers - br - られ - ▁vie - gestellt - თ - ▁improve - ▁Problem - ▁wichtig - kwa - ▁Church - bb - 最高 - тан - tern - 业 - bs - ▁faint - stri - ▁wear - ult - ▁forced - mul - 谈论 - ▁res - nal - ▁date - 書 - ற்ற - ▁són - ▁apart - ▁aussi - 市场 - lent - ▁Gefühl - 渡 - 春 - 税 - ▁School - 除了 - lem - されています - 最终 - 友 - 相信 - '",' - ▁fruit - she - gna - য় - ua - ана - ▁member - ल - ном - ▁served - ▁bread - 明天 - ▁Eu - ▁cal - ▁text - ▁pré - تی - ▁students - ▁species - ▁extra - قا - 约 - ▁First - ▁nos - ▁wants - ▁其实 - ▁proved - ▁яго - щ - ▁besides - ▁rein - ▁anybody - ▁pr - ▁fan - ға - ▁کرد - ▁excellent - ▁fate - 洗 - ▁lord - ▁impression - ▁Qui - 飞 - ▁අ - ▁pieces - abo - ▁host - ▁fool - 音乐 - ▁event - ▁юм - 关注 - ▁ச - ▁leur - ▁Kar - ▁areas - 项目 - ▁Smith - ▁много - 玉 - 一般 - ского - ▁importance - ▁vida - igkeit - ▁otherwise - ギ - ▁net - uh - ▁Van - ▁organ - 男性 - प - 해 - 类 - ▁burst - ore - bon - ▁ந - ▁Dia - ration - ▁measure - ś - дар - নি - dio - ierung - ▁commercial - ▁nti - ▁digital - اف - ▁geen - ざ - ▁waited - ▁그래서 - ▁reality - ▁coast - ray - 告诉 - ▁З - rate - nik - ▁Richard - 伊 - pier - ▁ему - ▁そう - nza - 忙 - อ - zz - ▁bal - ▁motion - rak - 拥有 - ▁với - ▁сказал - 背 - ▁Bre - ▁keeping - 라 - ▁check - dom - ▁image - 治 - part - তে - すること - 迫 - ssa - ▁block - ▁throw - lis - ct - 年前 - He - ▁heat - ▁breast - しい - ori - bed - ства - ▁tin - ▁Den - '17' - ▁brand - rü - vez - ffe - ていた - ▁因此 - vä - din - ▁intention - lly - ▁pride - amu - ▁machine - ▁làm - ائ - ▁forgotten - bol - もある - бор - nne - ım - ▁reasons - 穿 - ▁costs - 的时间 - '25' - 語 - ▁anxious - 联系 - ▁э - ▁bekommen - ▁eigenen - ▁stranger - ▁Cre - ṛ - ▁ordered - ▁сам - ▁Those - ▁Such - ير - ▁pounds - されて - けれども - ▁gentle - ▁height - jn - 击 - ▁kuri - 我们在 - 害 - 日本の - gh - ▁City - ▁bill - 错 - gir - ▁Hier - ▁skin - 주 - pos - ▁Wasser - ique - ydd - ▁policy - 可能性 - 经常 - 过来 - ▁тебя - nye - 选 - ▁angry - ▁كان - ▁seit - 万円 - ▁Park - ▁schwer - ▁deg - ▁fashion - раз - 后来 - ▁Hal - 之间 - 若 - ▁created - ▁ability - ▁байна - ▁девяносто - 香 - ▁meinem - ole - 脱 - els - ねえ - halten - ▁forms - ▁Ten - ▁alive - の中で - ▁потом - 久 - ▁mention - lijk - cz - 掉 - 娘 - ▁press - ▁remembered - ▁provided - 发展 - てる - pie - ด - bone - lei - нд - ත් - ▁các - بر - ▁Seite - pping - ▁kuba - ▁Far - 軍 - ▁servant - nni - ▁failed - rung - ▁nation - udi - 스 - emp - lum - ▁blow - люб - ked - ii - ▁med - ▁keinen - 駅 - ▁Мы - ウクライナ - ▁sum - 底 - からの - ▁enjoy - ▁comment - rar - ▁pair - ▁hospital - ▁тебе - kim - ▁вопрос - bal - 告 - ▁promised - ▁имени - ล - வா - ▁sol - ▁fat - ▁origin - 极 - 改 - ▁poet - ▁authority - uli - ▁aim - やはり - ▁pos - bot - ▁choose - 保持 - wel - ▁legal - ▁ordinary - ▁names - ▁Pat - uni - ▁nobody - ້ - yle - ology - ▁fingers - によりますと - än - ▁би - 是个 - its - 収 - tig - ▁Joe - ▁Pan - tă - ▁Person - ▁сегодня - ▁вообще - gle - ▁könnten - сть - ▁defend - ▁officer - izi - bau - tí - ▁freedom - ▁taught - ▁fancy - ▁patient - ude - ▁president - antes - 来了 - ▁beg - ▁chose - ▁meisten - cies - ▁nom - 助 - 塔 - ví - ▁correct - 奇 - ▁Kan - ▁valley - ▁teach - ▁Post - でしょうか - 育 - ▁ride - dad - ился - 親 - 進 - ▁crime - су - ▁investment - ▁És - ▁welcome - ▁asking - ▁dro - ▁cousin - 班 - ord - ▁style - llo - ▁gentlemen - iye - sem - ກ - ▁volume - če - ▁legs - 微 - ▁shoulder - ▁reform - ▁অ - できない - ▁letzten - lá - нем - 秒 - ño - ▁Buch - ▁Red - ▁Bur - 考 - rup - cor - 迷 - ▁Form - ▁tent - ▁review - ▁Unter - ▁sufficient - tische - mble - ▁delight - 普 - ▁active - 情報 - pus - kes - aus - wert - まず - 限 - ෙ - gia - 違う - ود - ▁desert - 已 - 歳の - ▁elle - ▁unser - ▁circle - 层 - '14' - мат - ▁rising - 怎么样 - ila - 倍 - ▁beat - tto - 也许 - 令人 - any - 医生 - 晚 - ▁feelings - ீ - 英 - ▁jedoch - ▁roof - ▁شده - kā - ▁tongue - 면 - ▁limit - ▁ninety - 田さん - ule - ▁grown - ▁него - ▁altogether - ▁Thus - ▁jo - のお - ▁dangerous - ▁ме - したい - ruh - 云 - 了吗 - ▁heads - še - ▁Au - right - ▁starting - ▁mountains - ▁devil - ▁такой - ▁rapid - ▁même - fort - wen - cí - ▁daily - hör - ▁аб - 様 - ▁murder - ের - uf - ▁이제 - zwa - ▁Black - gl - ▁gun - ega - 变得 - tim - கி - ▁alt - ▁kleinen - 料理 - ▁mistake - ▁ap - ешь - ▁agreed - gie - 组织 - ▁sounds - стро - more - ew - long - nar - ▁potential - 伯 - oso - bes - ▁Augen - ▁explained - द - ▁Па - ங்க - ▁quietly - ე - ▁park - nut - ел - 建 - jen - ▁busy - 出现 - ▁Dick - ▁honest - far - ▁greatly - ▁те - ▁planet - ▁ausge - ▁terror - ▁etwa - ži - ▁income - ▁twice - гийн - ней - 込み - ì - tom - ▁ay - ters - пол - ▁contra - ▁spa - ري - 好像 - 副 - 的小 - ▁hun - 结 - ыр - 尽 - ▁invest - なんです - ▁sorrow - ddi - ▁Only - 网 - ▁physical - ум - ▁highest - 托 - bona - пи - 应 - ▁suit - kte - ▁Perhaps - 他在 - каз - ▁safety - ▁людей - ются - ▁brief - প - aw - ▁royal - ▁sides - ▁pou - ▁basically - 菜 - வி - 到底 - 乱 - ▁empty - ▁sans - 《 - ▁streets - était - ▁exact - gor - はい - ▁sogar - ight - ▁glory - ив - iyor - ▁surely - cut - ▁ای - ийг - ▁anyone - ▁margin - pun - ▁passiert - ▁Os - иться - ▁pet - ▁Mais - 精神 - ▁liked - ▁meer - ▁struggle - dge - ▁River - ▁Old - lat - ▁district - ▁breakfast - ▁todos - ▁sex - ▁intended - بل - iamo - az - 注目 - ▁flow - ▁central - த் - ▁Jim - 市の - 景 - ▁fel - kal - geben - ▁fault - ▁exercise - ▁lack - ज - ▁Q - يد - гүй - 回答 - Fi - ▁target - ▁express - ▁себя - ▁Dis - 社 - 身上 - '200' - ▁problems - ээ - ▁capacity - ▁bent - ▁companion - ▁Cla - 女人 - ▁possibly - old - ▁touched - ▁dare - ▁join - ws - lief - ▁после - ▁refer - ▁seventeen - শ - ▁settled - 结果 - ▁Rome - ▁Martin - ▁count - сон - ▁そんな - 会社 - 电影 - ▁culture - ande - با - ming - ▁rode - ▁fur - ν - ▁보 - 多くの - ▁아니 - ▁temps - genommen - пе - 降 - mbe - rac - ▁bedeutet - ▁suffer - ε - 新的 - 造 - ▁ended - 政策 - ▁older - tika - uda - ▁ее - なぜ - dé - ▁Mil - ние - 怪 - 当中 - eye - ▁П - ▁почему - ▁aside - лись - ingen - gwa - का - ées - まあ - зе - ▁sentence - ▁previous - ▁described - ▁thrown - 源 - glo - ▁бар - ゆ - 付け - とき - 警 - セ - ▁gate - লা - 結果 - ▁driving - ▁nineteen - ニュース - tain - ▁sang - ▁Bel - ▁Cu - 节目 - したら - 团 - 場合 - 视频 - ▁qual - ▁false - 有很多 - gus - 样 - gem - パン - '80' - 级 - izing - 相当 - ат - ismus - ziehen - ▁それは - வு - igt - しっかり - 休 - ▁خود - じゃないですか - rad - 喜 - ▁somewhere - ▁calling - ono - ▁superior - ましょう - ▁そうですね - ▁Before - pens - пу - ▁später - 换 - ▁rights - ▁кар - ▁spo - ▁shoulders - ▁operations - 除 - gé - ▁lake - ▁mr - stor - わけ - ▁windows - 我要 - ा - ▁pressure - ▁judgment - ▁helped - ▁separate - ▁Ac - ▁Ser - hundert - ▁fin - 余 - 全国 - дан - 列 - ▁specific - ▁admit - おいしい - ▁Union - ▁dance - ▁famous - 消 - ▁بی - 乐 - ▁stars - ▁trial - ▁пока - cara - ской - 赤 - ▁release - iva - 企业 - ▁highly - 顿 - яр - ech - pit - mond - 刚 - eza - ▁becomes - ▁page - rk - ▁гор - anga - 顔 - ▁ها - zig - ▁кор - ▁ones - ▁Ph - 那是 - 这一点 - ▁neu - xe - 现 - ▁belief - vit - 黄 - ▁experiment - ▁series - ▁Г - ▁cuando - 鱼 - ugh - ецца - 好了 - alt - Ma - を受け - ▁extremely - ▁signal - ▁holy - ▁strike - ▁그거 - ▁ankaŭ - lieb - 側 - ità - 切り - ▁debt - たち - なんて - 요 - ▁нужно - ote - ▁gw - col - ▁Commission - 防 - 陈 - 更多 - ét - cas - ▁carriage - ▁Gre - ▁Frank - ▁presented - 下来 - спе - ▁первый - ▁fra - ▁Mont - وت - 愿 - non - pend - plan - 身体 - ▁grande - ▁també - ▁allen - ▁Ed - 社区 - ▁heb - ▁recently - gun - ▁bell - ŝ - yer - ▁如果你 - ised - дер - ▁obviously - keit - 枪 - 票 - 現場 - whi - ening - ි - ▁considerable - 分享 - ▁cruel - ▁Bill - ▁wahrscheinlich - 而不是 - 才能 - ▁Frauen - 沙 - ▁drop - 是在 - ▁மற்றும் - ▁portfolio - kir - ▁sacrifice - ን - 控制 - きます - 激 - ▁Israel - nje - ▁поэтому - லை - нов - ▁listening - знач - ▁hinter - 探 - ▁ange - 抗 - 散 - legen - 黒 - ▁connection - 支援 - ワ - 杰 - र् - ▁sought - rich - 谢谢 - ▁pointed - ▁refused - dus - ▁weit - sicht - duc - ▁truly - ▁India - 考虑 - 立ち - nor - our - ▁concerned - ibi - ▁palace - uru - дел - mine - ▁avoid - blo - ▁security - eix - ▁guide - voll - ij - fle - 双 - ▁Mag - になりました - ▁zo - 方向 - ▁Bro - 아 - ▁joined - ▁какой - Laughter - аны - 業 - かなり - pr - bet - 電 - ▁yesterday - 痛 - ▁argument - rik - ▁fairly - なって - ▁affairs - ▁career - uje - ▁pare - 在这个 - ▁Edward - eller - 父母 - のに - ▁él - ▁match - 毒 - ▁Por - ül - tent - த்து - ▁peut - ră - ▁我是 - 連 - 実は - ře - ▁sua - ▁때 - ▁Green - 衣 - setzen - ▁guys - git - ▁Thomas - ▁National - ▁nose - ▁vers - ▁fix - 电话 - ▁distant - ▁ideal - 静 - 走了 - ▁flight - heim - 看着 - 好好 - ▁troops - ▁average - ▁claim - ▁второй - ▁onder - pra - 并不 - জ - ▁faces - ▁artist - いろんな - эд - つか - ▁slightly - っていうのは - ngen - ▁এ - tha - tür - ál - ▁advanced - ҙ - ▁où - 旅 - ▁wealth - 当我 - ▁хорошо - ▁August - 处理 - ▁versch - ▁அவர் - ▁west - ▁sheet - horn - ▁الس - ▁issues - ▁Trump - iste - 这样做 - бер - xa - ▁method - ▁useful - ▁ocean - از - 做的 - ▁quatre - ▁naturally - mark - 言葉 - sal - 経済 - ▁source - ▁çok - hold - บ - ート - ▁richtig - ▁temple - die - ý - kli - ▁bevor - 末 - ность - ▁quarters - ▁thousands - ед - ベ - pil - 数字 - ここで - ▁Port - ont - tos - ▁думаю - ▁worst - ▁然而 - dag - ario - fini - ▁spiritual - ▁ذلك - ▁harm - ett - ▁بعد - ает - 内容 - 比如 - 存 - ▁gain - 越来越 - ngu - ара - もっと - ▁Bla - ▁Mhm - 活動 - 能力 - নে - ▁cas - ▁gay - ppen - 伝え - یا - ▁suggested - ▁loose - 他是 - nova - ose - ▁bereits - ▁可是 - ▁eh - ▁zal - 龙 - 政 - сэн - ▁tender - ▁Kopf - くる - 多少 - bye - term - leid - ▁occurred - сто - 肯定 - ▁discover - ▁moments - ▁prayer - ේ - ▁Mac - 直到 - 富 - 封 - بي - کن - dur - 游戏 - нее - 犯 - ▁features - ència - ▁rough - 几个 - ▁Wal - ▁title - டை - ▁erhalten - 剧 - ▁lamp - ify - 保护 - ▁bodies - ▁había - 状態 - ▁dr - со - 게 - ▁ships - ▁loud - 快乐 - ▁этой - ▁ප - 楼 - ▁White - 藤 - not - ust - 带着 - 唱 - ▁salt - ▁gathered - ▁fighting - 期待 - ▁wahr - ▁suggest - ès - ▁Ent - ▁Ol - ▁papers - ▁priest - 杀 - hora - ▁gray - ▁guidance - 性的 - лар - ▁unknown - endo - spiel - eh - ▁Cal - ▁fail - 回家 - それで - ि - ▁Bas - 居 - vir - ▁Wer - ▁bru - ▁200 - কা - ▁kinds - ▁student - ソ - нь - ▁online - dit - ▁qua - zt - なかなか - ▁Nothing - ▁servants - зи - ▁solo - ▁Lebens - ▁bridge - 一切 - 引き - tausend - ▁bringing - ▁EU - ▁interview - ますね - ▁extent - system - ▁cell - mbi - twa - ティ - 番組 - 日に - 让他 - 息 - ▁involved - hau - mara - ▁disease - ▁affection - ▁established - vie - ▁你知道 - ▁Grand - ो - roll - pl - ▁Inter - ▁saved - ила - por - 手机 - ▁weniger - று - 解释 - ▁Sunday - 止 - ▁allem - ▁pace - ▁Punkt - ▁novel - ことは - ун - ▁yr - ong - ▁Ob - 根据 - ём - ▁virtue - ค - iş - ▁dressed - nehmen - ère - ▁sixteen - ▁apparently - bare - ▁trail - 刘 - 苏 - gil - bas - ции - yen - ▁Professor - ボール - α - cin - fel - ▁về - ▁map - fl - fon - ▁conclusion - 想象 - ▁protest - стан - пла - ▁ở - ▁дело - ume - tif - ▁obliged - ▁rapidly - ▁ар - ▁becoming - ▁Duke - ▁handsome - ▁се - ▁هذه - ization - 是我 - ▁People - pat - ▁거야 - haus - ▁committed - 活动 - pel - ciones - বে - ▁pulled - ▁heeft - ▁cosa - ▁finger - ▁dachte - ▁existing - ▁bought - eḍ - を見 - سم - ▁showing - ▁mode - 前の - ▁Europa - ▁liegt - ▁daha - 梅 - istic - ▁torn - 今回の - из - 只要 - ска - ▁schw - ▁больш - ▁network - 停 - ▁dogs - һ - ▁raise - ян - ▁бер - ▁evidently - iem - ▁Х - カー - ▁bringen - ▁talent - ▁今天 - ▁வி - 茶 - 射 - 네 - 我不 - sky - 夫人 - ▁Mutter - ư - ▁hy - に対して - ▁advice - post - ▁ganzen - ▁slight - ▁Pol - gro - 妈 - ▁closely - 告诉我 - nin - ▁Philip - gs - 导 - ▁forces - ▁Louis - ट - وي - ▁じゃあ - ▁Fo - back - рт - tti - ▁California - ▁reflect - ▁donc - форм - ▁ئا - ▁happening - ▁eso - 产 - ▁pop - ▁まずは - aha - 第二 - 記録 - arm - 脸 - vien - nom - ema - gate - 领 - ņ - ▁woods - ▁عن - ▁一方 - lust - 姆 - ace - '70' - сты - دار - ▁az - ▁pray - ヒ - In - ▁hearts - اق - ča - ▁develop - 対応 - ちゃんと - ▁eleven - 特に - bla - ژ - 认识 - rom - ▁questo - 男人 - யா - tch - 的这个 - ▁labour - 饭 - ▁computer - なる - ▁Auto - 首先 - 呃 - ▁¿ - ▁roll - cions - ▁trick - ۆ - stein - 的生活 - nja - コン - ▁announced - ▁male - ▁Ye - ▁Spe - 旅行 - hood - ▁comfortable - ção - ▁carrer - ery - ා - ▁standard - cr - ▁facts - ности - ▁theory - りました - 医 - 狗 - 両 - হ - ▁Indians - ▁hoch - ▁Harry - 私の - 做了 - 更多的 - ▁Another - ▁dozen - ▁port - 一年 - 들 - bus - বা - 置 - ▁genug - igung - ▁mix - lah - gon - ▁vote - もあります - 的工作 - 设计 - ▁prefer - ▁tower - kana - mise - ▁shame - ▁cup - 流れ - ▁listened - ▁하 - ▁பா - ath - ▁definitely - ▁presently - stellen - ▁wohl - 仕事 - wis - ubu - それは - ▁hate - ▁indi - ක් - aya - ▁castle - ▁الح - 情况 - ▁represent - cé - ▁alla - зы - ▁olarak - ▁Pra - 睡 - ▁upper - ▁cities - uw - ▁aller - 主要 - さらに - ▁charm - ▁ĉi - ▁commission - ▁consideration - ▁birth - にして - ▁vielen - sign - '24' - 에서 - 私は - ▁excuse - ▁それ - ▁drove - ▁тогда - nyi - 挺 - 担心 - юць - ं - gt - ▁carrying - ட்டு - ingly - がありました - ▁Vater - ▁teeth - ▁пера - 示 - flu - ап - 英国 - esc - ▁stra - ▁tres - ▁native - 校 - 同时 - bin - 薬 - ▁Val - ( - ej - ▁Ste - 拍 - 第一次 - toj - ▁include - 我们可以 - ▁Sha - тэ - '13' - ▁sự - يت - ▁eigentlich - رس - ▁dim - ▁Für - mir - ında - von - ▁수 - аць - ▁respond - られて - ▁details - eurs - ▁extraordinary - uga - げ - ▁Most - тэй - ▁cya - ả - ▁шу - ▁좀 - ▁metal - tá - 니까 - न् - ▁laughing - 每个 - 程度 - ▁Elizabeth - oon - تا - 这次 - lip - өө - ▁reward - ▁iyi - rij - ▁tem - 続いて - ▁இந்த - 我现在 - ▁bekannt - angle - ▁bitter - ▁frei - ▁projects - 因为我 - 建议 - ison - ▁kunnen - ▁pur - amos - fri - urs - ▁element - ▁Yo - ▁như - ▁jemand - ▁„ - 生命 - 判断 - 粉 - ▁peu - ▁everywhere - ▁protection - ▁pity - ▁để - ▁kitchen - ▁Bru - ▁relief - flo - ▁revi - ▁detail - dic - ▁secure - ▁guy - ▁gift - ér - üh - tions - くらい - ▁klar - ▁compared - ▁System - ced - kam - ▁oli - ▁favour - 百分之 - olo - ▁Brown - ▁indem - ▁afford - ▁более - mia - ico - ต - ▁Earth - inter - ines - ▁bay - ▁succeeded - かもしれない - ▁Kind - ▁вер - ▁lies - wort - ▁suffered - をして - 確認 - مر - ▁gleich - 三个 - ▁expressed - ию - これを - ▁enemies - ▁contrary - ща - 姐 - ▁aniran - ▁transport - ▁appears - 까 - ▁gy - dro - tier - lea - tı - ▁jest - レー - burg - ここに - だって - 人に - abi - ▁chapter - える - comp - ▁Oder - ald - ▁seized - tó - сад - ▁smooth - ▁pal - ▁Regierung - şi - 上げ - ▁careful - ▁gr - ▁Ber - ians - ▁trip - 症 - いない - ▁manage - ▁treatment - ▁Jane - ▁anders - ▁अ - ▁businesses - یر - pli - Vo - 我说 - 再次 - ür - ▁measures - 运动 - ップ - ▁queen - ▁lovely - ▁constant - lim - ▁مو - ▁Master - cept - ▁ĝi - ▁Menge - ▁focused - 放在 - ▁wore - 慢 - 去了 - 折 - 博 - sid - ▁flesh - ු - bung - ▁beim - ступ - オリンピック - ▁проезд - ▁estate - 都会 - ▁actions - ▁College - ▁hang - でしょ - ▁discussion - ered - kus - य - 款 - tation - 刚刚 - ▁getan - シャ - 有些 - ▁rare - ▁smaller - 帮 - fold - 十分 - ▁imagination - rou - ▁permit - ▁Гэта - ▁overall - кова - ▁developed - させて - ▁Internet - 女孩 - gend - eln - ▁pli - ▁Gal - ▁falling - ▁female - ▁During - nta - ▁mü - 攻撃 - などを - ball - ▁Và - ▁sechs - ▁audience - ▁principle - fli - ▁kiss - ▁temper - 词 - ▁whispered - ▁liber - 組 - ht - book - ыш - уд - rim - ▁possess - ▁organization - なんですが - ▁lifted - 谈 - tiv - iß - aria - ▁protect - лася - 』 - ▁horror - tica - stat - ▁thế - ▁verstehen - haw - pha - ref - ▁Film - ▁மு - sko - فا - ப்பு - ▁cook - 致 - ▁frequently - elles - べ - ▁sé - 看起来 - ▁smiling - ▁excitement - 速 - rant - ▁теперь - vr - ▁chamber - ▁ice - ▁кол - マン - வை - ▁Uh - ome - ень - ಿ - ▁treat - ▁hidden - وس - shing - 聞いて - ▁realize - ▁skill - ▁instrument - ▁realized - ▁Said - ▁intelligence - ▁professional - уж - در - ▁smart - 哦 - 塞 - 落ち - тур - kü - شر - じゃあ - 逃 - 表示 - ▁primer - 核 - five - ▁incident - ▁Min - эт - fru - ▁affair - бра - run - さんに - pond - app - ▁très - ▁marked - ▁extreme - nga - ▁wake - ▁erst - 『 - ▁spi - ▁dreadful - ▁satisfied - 都有 - ké - 的歌 - 調 - ھ - ▁colour - 办 - nek - ▁Colonel - 萨 - ña - ▁recht - ▁picked - ▁milk - yor - reich - ▁fr - cke - Y - 食べ - isten - ት - itude - nger - ▁kingdom - ▁ف - 具 - 目前 - فر - zie - 回去 - 容疑者 - ▁đi - pad - ▁être - なんだ - だけで - ▁播放 - ▁пу - 先ほど - ő - زی - パー - このあと - ▁dabei - өн - olu - مس - wyd - ▁managed - ▁independent - 占 - ▁occupied - ▁gods - că - 滑 - お前 - ır - 氏 - cons - ▁crossed - ўся - жы - wie - ▁mé - bly - ▁alte - ▁bid - よりも - ▁مت - town - ▁medical - 网络 - ▁pull - 境 - ▁dengan - ▁accepted - lichkeit - ▁destroy - nken - bia - medi - ▁Hill - 以来 - ars - ▁division - ▁leg - ayo - ▁Would - ▁versuchen - ▁স - ން - тро - 事实 - 宮 - 自身 - みたい - ▁increasing - 复 - зі - 组 - ▁soldier - ▁будзе - ▁Natur - 運 - 大约 - لك - ▁majority - ▁Tell - がない - ▁wedi - ા - position - ▁پر - ▁resources - ▁möglich - ▁thanks - rod - ැ - лет - ▁appeal - 伦 - вой - stellt - '500' - ▁solution - つけ - tting - ▁şey - ист - ▁hacer - ▁ново - 冬 - ▁それでは - 変 - pass - hal - ▁них - 接種 - ▁systems - ache - ▁pin - ▁neuen - 一位 - ngi - 每天 - ▁Africa - ▁niemand - ▁hills - 尾 - これから - ▁capable - že - ▁meal - ▁role - phe - ْ - vě - 今の - ▁anger - ▁partner - 자 - ▁request - 分析 - ▁sto - ラン - сі - ▁Gesicht - ▁patients - ▁oo - ▁chu - ▁qué - ▁liberty - tud - ▁fleet - 展示 - este - wir - ▁frame - jas - ▁remarked - pia - ▁Michael - лә - ってる - テレビ - ▁pred - もし - ▁Mel - гі - ço - 患者 - 嗯 - ස - 人で - 人間 - ▁motor - ▁raz - ▁bore - نى - ▁powers - ▁mighty - 压 - band - ▁satisfaction - னை - ▁faithful - ▁还有 - ▁Ger - ▁iran - 兄弟 - ific - მ - 上帝 - களை - 堂 - avi - ▁training - 商品 - ▁awful - ▁Gott - ▁게 - しない - 影響 - 不想 - 给他 - ▁Э - ▁Stra - 不了 - 企業 - 猫 - ▁schools - ▁Joseph - ▁improvement - 创造 - 条件 - ▁dust - elo - 出了 - ever - 投票 - ա - ▁response - ▁Mer - ▁Hall - ington - 新しい - 難しい - ▁போ - 论 - ▁Det - even - ständig - ▁connected - divi - elijk - اه - 曾经 - iɣ - 的故事 - 対策 - фе - ▁나는 - か月 - 今月 - ▁chain - ▁farther - ▁Bal - ▁해 - ▁значит - 而且 - 言って - ган - ên - эх - ▁پ - 我也 - tek - ▁situa - 不管 - রা - していました - ▁Has - ▁treasure - ▁nta - ホ - ▁resist - оп - ▁Hy - од - аг - бай - ными - ▁Still - 的事 - யில் - ▁proof - chan - ▁solche - atu - 志 - punkt - ▁asleep - ▁wondering - '90' - ▁deeply - ▁conference - ик - Be - hm - ▁کا - と思って - ▁Una - dí - 赛 - พ - だろう - ▁families - kas - ▁gewesen - ▁الع - ▁parties - 調査 - 他说 - ▁contact - ອ - それが - 橋 - ▁Fort - ▁是的 - 私が - らない - taj - rte - ▁Maria - ага - 影 - ajn - ▁equally - ره - main - وب - ▁Freund - ▁Para - 不好 - ▁Bon - maze - ▁levels - ▁Sy - انی - 教授 - ▁триста - дал - ▁rot - зу - ▁gan - ▁strategic - rek - 跟你 - ▁esto - 年の - anza - ▁regarded - ▁leader - ▁begun - ▁سر - 一家 - 勒 - ▁log - star - 考えて - 的孩子 - 证明 - vita - شت - 观 - pose - ▁hell - ▁removed - Do - тал - 有一些 - 因此 - nä - دن - 心里 - ▁fields - ▁import - ▁Facebook - mwe - gira - というのが - ▁prior - ▁despair - cion - 増 - Z - ▁пад - 位置 - isha - 参与 - 打开 - hat - ▁eene - 调查 - 爸爸 - ▁expense - ▁sage - aff - ▁visible - 洛 - ▁continues - lerin - ▁cada - гал - ▁Esta - 選挙 - кан - 嫌 - ҫ - ▁воз - 馬 - 即使 - んですけれども - 事实上 - ▁себе - 合作 - stig - 职 - ▁anche - ▁dying - 索 - ▁yield - しく - stellung - ▁rid - 紧 - ▁function - ▁east - ▁Fragen - force - 一緒に - ▁plenty - ▁flag - 視 - 三十 - роз - dik - script - ▁Не - やすい - bl - ▁complex - ▁agreement - 游 - ▁statement - anda - 跟我 - 我不知道 - ▁untuk - ▁remarkable - 主义 - ▁educa - ▁እ - 壁 - ▁Plan - 悪 - nud - ▁sprang - ▁leicht - rimo - last - 普通 - ñ - 関 - にかけて - ▁buried - gent - ▁бул - ▁burning - 一条 - mobil - ▁Web - ▁fundamental - 参 - 厚 - nell - 莱 - 新たな - ▁jak - ▁setting - قد - ▁gene - sy - бі - 起こ - ▁دارد - ▁Monsieur - 箱 - ▁дела - uba - ▁weitere - 结婚 - ▁tro - готов - ▁잘 - cat - pes - ▁unseren - ڵ - ▁tail - ▁tempo - hang - ▁treated - そこ - ▁savage - ske - 提出 - ▁universal - ▁あの - ே - ▁всех - ▁私は - vet - ▁proportion - 道路 - ▁sẽ - ▁wisdom - ▁jam - ▁spirits - kins - 最大 - uti - wit - ▁nervous - 动物 - ▁vez - ▁kra - ово - ets - merk - uzi - ӱ - ▁explanation - ▁nest - ▁campaign - 世纪 - bwa - roj - bt - ▁bereit - ▁эта - ▁grup - ▁사 - nim - ▁roz - ▁mot - хи - ▁suspicion - می - einander - être - ▁nurse - 令 - لت - ▁أو - vou - ▁gör - 日の - 満 - ▁card - phi - トップ - າ - Ha - ▁slave - gam - тель - 礼 - 我可以 - ▁innocent - ▁violence - ▁entrance - よかった - ▁goed - rry - 天气 - shobora - ▁zei - 永远 - ▁divers - vier - ▁citizens - ▁rules - ▁helfen - tle - ▁desired - ▁Auch - 丸 - ▁Three - ▁confi - ▁bro - ▁manifest - ですけど - лась - ▁worthy - nit - きのう - ▁decide - ▁dari - ▁negative - না - ▁minds - haft - 僕 - としては - ▁dull - ▁conscious - ▁necessity - ▁row - ffer - ે - rian - aban - ▁earnings - wind - ▁сказать - ▁blo - 被害 - 念 - ▁zehn - top - بی - ▁Wort - ▁Fin - ▁我觉得 - dam - ▁depend - wand - の中 - ▁execution - 行动 - ▁evident - wick - 大人 - 言う - imu - ▁さらに - ▁ruin - ▁Fu - ▁friendship - ▁differ - ▁State - ▁какие - hel - mme - jí - اح - gol - eff - ▁nói - まる - ▁もう - ާ - ▁clever - 人员 - ▁aspect - tisch - 今後 - ▁via - ▁relationship - mini - ▁такое - ▁confident - ▁mor - ▁Sin - ▁describe - ▁Prozent - гла - nch - ▁multi - 岡 - なり - ете - ▁route - 招 - kazi - ▁чего - lè - 时代 - 记 - ▁effective - 衣服 - ディ - nak - ரா - ▁تر - ctor - ▁cab - ax - lf - ▁نو - ick - bury - bank - ▁Ali - ▁Männer - raw - esa - ejo - ▁dawn - ▁attitude - ▁section - 洋 - ▁kas - ▁над - iu - ▁мар - تم - 唔 - 你想 - ▁mile - ▁obtained - ож - kit - 浮 - sus - ▁ben - пом - お願いします - ▁س - 舞台 - ologie - ここから - ▁unable - ▁Kom - 啥 - টা - ▁figures - 魔 - ▁Nous - tü - ▁Cap - ▁staff - ough - hil - prim - str - ▁supper - ▁Zo - 一人 - ρ - ந்து - 航 - 週間 - ency - 靠 - ▁更に - ▁না - tage - ▁Day - 似 - ▁bija - 县 - ▁generation - ▁groups - 明日 - ▁lawyer - ▁core - 讨论 - 決め - ▁всего - ▁wounded - ▁Kur - ə - ह - ▁violent - öl - ▁flower - ppe - 中で - ▁mundo - 建立 - ▁alarm - ▁nächsten - 运 - ▁Ken - sä - ▁calls - ▁fri - ▁younger - eme - گی - ound - ▁twee - ั - 丁 - ▁sup - ▁pol - ров - ҡа - born - ▁كل - ▁Meinung - 重新 - 诺 - 智 - рын - ▁کند - ▁hoped - ▁challenge - ▁topic - ▁Aunt - 年代 - ель - ▁Hause - sty - ▁meat - 句 - ▁rear - 我们将 - про - ▁nehmen - вет - 撃 - ып - weg - ▁delivered - lab - ▁possessed - 的想法 - ã - ▁attend - 架 - ▁الن - 整 - ザ - 很好 - ▁hide - 婚 - phy - ▁fut - ▁stronger - ▁darin - 展 - ▁你们 - 很难 - ▁acquaintance - んだよ - ▁plate - 小姐 - sey - ▁Bay - ▁rooms - ▁arbeiten - ▁petit - jun - ▁union - ▁bottle - ที่ - 父亲 - эм - 感谢 - ො - ▁midst - hou - gni - 感情 - ときに - ▁inner - ▁adalah - ás - 是因为 - ۈ - ụ - ただ - 大的 - 亮 - ▁نمی - kira - ▁Rose - czy - air - のある - ế - ▁shock - стр - uto - ▁expectations - ▁funny - ribu - ▁communication - ▁ج - äh - ▁Ка - ▁constantly - bul - ▁rocks - essen - ▁economy - ▁somehow - ▁gaze - ▁flying - ▁بە - ▁bur - ります - ▁Video - ▁через - 铁 - mate - უ - يل - ▁crown - ▁bist - оў - X - ▁screen - ▁tip - 课 - ለ - ▁kennen - ▁verwenden - cade - gleich - 打ち - 取材 - lant - pak - ▁maintain - ▁wit - 人気 - ▁relation - ▁assured - 優勝 - মা - ▁너 - year - ▁effects - 刺 - worth - ▁sail - ▁내 - ছে - んだけど - ▁More - ▁corn - ▁pipe - ▁investments - autre - ხ - ▁relations - cket - ▁rates - ▁require - ▁affected - ▁kap - ▁prisoner - ▁الب - ▁Christmas - ▁حال - water - ён - 장 - ▁deck - 不到 - ▁absence - 年間 - 明白 - vari - ▁gente - 上がって - ▁hören - 皇 - жу - ▁dot - ▁included - アン - ▁shift - bei - ▁Raum - ▁دە - 施設 - を見て - oy - 在我 - ▁Flo - lassen - ▁knees - ▁aunt - ▁blame - னி - segu - ▁Google - ில் - ▁regret - ▁millions - ▁perform - ▁splendid - 歩 - ▁Never - ▁hart - ▁beide - 制作 - 可能是 - parti - ▁shade - kh - ▁nearer - ▁Amerika - ци - ▁dei - っていた - 軽 - ▁doors - ▁unique - 思考 - ే - スター - ид - 意识到 - 上がり - termin - ▁うん - ່ - aan - ▁està - オー - wydd - 备 - ów - ▁criminal - من - 不用 - つく - 即 - ▁recognized - ▁kur - ▁stones - ▁Jetzt - 麦 - 方が - 历史 - 曾 - peri - 女子 - 北京 - ால் - ▁triumph - ишь - path - 儿子 - 暗 - raz - 人たち - 楽 - hod - ▁kid - ▁sheep - ▁Once - 職 - 认 - 质 - ец - 硬 - ச் - ▁grey - ▁intent - ▁Que - vă - ▁ح - たま - ▁너무 - cock - ▁Island - àn - 旧 - yl - ▁Idee - zone - ▁Helen - gara - ▁immense - جا - 总统 - ▁tiny - ngo - த்தில் - 医院 - ▁uy - ▁grief - ▁Arthur - ▁суд - ▁manera - 都在 - ▁Look - при - ▁pat - ▁govern - '""' - ▁brilliant - ▁sons - ▁kurz - bili - ▁pictures - கு - 検査 - ▁ariko - dzi - ▁concerning - 届 - 公共 - ugu - いつも - рон - prov - uit - ▁encore - ▁ceased - ген - ▁Reihe - แ - yard - ले - ▁mortal - ▁demanded - ▁frightened - ▁ещё - 属 - ۇ - gged - 幸福 - 空间 - ▁yer - 込んで - list - 吸 - 権 - ▁threat - 移 - 者の - ▁бел - 地说 - ▁Uncle - ttle - ▁cheer - 印象 - lain - 轮 - ▁Second - тә - 也有 - 块 - श - ▁rent - enga - 恋 - ▁char - ▁largest - meye - 生物 - ▁necessarily - lus - 午前 - それから - ▁begann - tho - 地球 - というか - ▁Namen - ないと - 经 - ▁perspective - 是如何 - ▁stepped - 你能 - 汉 - ▁Wissen - өр - licht - ▁Bob - org - ▁tiene - ▁update - mount - lık - mina - ▁trois - ▁cabin - ゲーム - ▁کو - そうな - 湖 - 発表 - ▁آنها - fern - liegen - cré - tail - ▁wondered - ▁interests - pé - 乗 - vat - ஜ - ép - 일 - 疑 - сці - ▁quando - ▁disappeared - ▁Nicht - dul - кс - ▁schien - 予想 - ▁finish - が出 - setzt - ▁resolved - loc - ▁throat - ▁لم - '21' - 只能 - 宫 - ▁engine - ▁말 - ▁March - ierten - rick - isme - ▁Dem - ▁drug - ▁Krieg - прос - ▁tous - ▁seated - bald - 了一 - nc - ▁proceeded - 语言 - 共同 - کە - rust - عمل - ▁Dank - ▁tief - idi - ▁Del - ▁vessel - のように - tory - 曼 - ▁sc - ▁kal - ▁Körper - ண்ட - ▁Bank - லி - ▁delicate - 動き - 产生 - 不可 - قل - 随着 - 价 - рь - ▁brauchen - ▁critical - عة - azi - ▁foundation - ▁sport - клад - даг - үү - essa - ▁Germany - ▁scheme - 転 - tà - 結 - down - ▁mystery - となりました - nov - 원 - 叶 - rica - ▁admitted - uro - 灯 - ▁puis - 行为 - ▁stellen - ▁wave - 举 - ▁Arm - 轻 - gru - nig - ▁site - dü - 出場 - ▁amazing - ▁teaching - ▁awake - ▁veel - 产品 - ▁vy - ▁description - ▁pardon - 都不 - ▁troba - ▁numerous - ▁instinct - uj - どこ - ▁guns - のかな - treten - மை - late - ▁chosen - ▁yi - elt - oh - 作用 - ▁Tan - ▁cow - 増え - なんと - colo - gua - 作り - ▁Jean - ▁bare - টি - 魚 - ▁número - 土地 - 是为了 - ▁stark - 暴 - ▁paused - ular - ▁published - 避難 - ▁experienced - 映画 - ▁signs - кам - ыг - ▁khi - ▁fund - غا - 痛苦 - fy - ▁worship - ▁apartment - ο - ▁phone - ▁arrival - ▁cui - нес - ▁proposed - 床 - ห - 沉 - ▁steht - 清楚 - ▁plants - 你怎么 - iyo - 的一些 - 永 - いっぱい - ங்கள் - ▁crack - kora - 它是 - tique - ▁lag - ท - ▁objects - ▁whenever - ulo - ▁одна - ▁mille - ▁gained - 机会 - gger - ▁shelter - 恶 - ▁Энэ - eten - ▁singing - ▁nations - ъ - のでしょうか - hä - ▁Recht - ▁factors - ▁doen - gat - 難 - なくて - lap - 威 - лег - ▁conflict - ▁zero - 告诉你 - ▁turns - ▁profession - ▁problema - ▁Both - ▁Court - 做什么 - ubwo - ▁bold - halb - 하고 - ▁professor - 万人 - лт - ▁changing - やり - 池 - ワクチン - rous - ▁එ - ▁affect - ▁absolute - ▁İ - ▁grim - ▁gradually - ▁occur - fas - ▁hole - nau - 国内 - 宇宙 - 借 - 新闻 - ▁delle - amp - byo - ▁mysterious - ▁té - ▁Indeed - 延 - 唯一 - ▁fre - れた - 随 - ▁Luc - kken - 就可以 - pal - ▁remind - 股 - 表明 - ▁moi - Ar - ▁anyway - スタート - enda - 突 - 房子 - 確 - 就在 - ▁sieht - cient - ▁الج - 每个人都 - 熊 - ▁foolish - ▁같애 - ▁official - Qu - ▁gì - pla - ▁Gen - alo - ching - usa - ▁pushed - 事儿 - ▁Long - 环境 - ▁Take - ▁voices - ▁Sur - ▁reported - richtet - ▁خو - වි - erde - ▁instantly - 绝对 - ▁principles - 없 - ▁document - ▁сер - ▁Although - ▁التي - ▁Juli - ▁delay - ▁quan - ▁Asia - ് - ▁scientific - ▁praise - をする - を見せ - となった - ▁mercy - hren - ▁error - ▁silk - 领域 - test - ▁custom - ▁prices - овская - ĉ - fahren - ún - ksi - ▁mae - ăm - ▁estaba - そこに - 園 - ▁Het - nzi - مل - iyi - giye - ▁micro - тов - ▁També - ▁conscience - کر - ром - já - 据 - なんで - 受け - ▁Familie - ambi - ▁ён - tric - 、2 - iendo - 这不是 - さんと - ▁nel - ▁propri - یک - ▁Yu - とても - unt - 目的 - azione - ▁пар - 报告 - ▁completed - ▁giant - rank - дзі - нос - ▁friendly - ▁election - 当我们 - 所有这些 - 各种 - すぐ - kol - wali - ▁Mü - ▁voyage - ake - ▁stehen - 怕 - lama - ▁sympathy - уб - ι - 持ち - かもしれません - ▁universe - ▁direkt - anyi - ▁proceed - 愿意 - ▁council - ▁unbe - ▁tour - جر - ▁eating - 大事 - 사 - 年に - 也没有 - 地域 - 独 - ▁Very - 하 - ridge - ▁machte - してください - ▁Rat - ▁Ama - nej - ▁stairs - ও - kro - ory - dien - ▁우리 - ーン - と思う - ▁закон - ▁Spanish - سي - уул - 説明 - 有时 - гә - گو - できた - сла - 至少 - ප - ペ - ▁retreat - ▁engagement - nah - ▁failure - ischer - ▁worry - 、1 - ▁pensa - faced - 来的 - ▁software - 营 - 大きく - شی - pela - 焼き - サン - turn - mpa - 抱 - luk - させ - sho - ▁mia - 弹 - nto - cono - ▁буду - head - نت - ▁fought - ▁abantu - ▁domestic - ▁wider - igu - ▁countenance - rück - 寻找 - 众 - تي - 마 - قر - 注 - ▁sacred - ene - sul - vul - ui - 冲 - 論 - ▁гар - ash - ▁enormous - ▁immediate - hum - ▁elements - だと思います - رف - tia - ▁consequence - 関東 - 務 - 材 - 机构 - 蒂 - آ - ▁sooner - ▁teacher - ▁こちら - ▁Woche - ▁họ - どんどん - 真正的 - ▁باش - ▁mí - 選手が - ▁forgive - мә - ▁verschiedene - 酸 - していく - ▁brothers - äng - 你是 - ▁practical - ▁bald - ট - rí - نه - кт - ▁لە - 別 - ẓ - ▁soil - ▁tiu - 今日は - mittel - ologi - ▁sam - ▁modest - ▁eigene - 时候 - ではない - 价值 - rez - رب - ▁sem - ▁favorite - 坏 - プロ - ▁pause - コロナ - oto - ▁trou - ▁dreams - ▁実は - ▁prima - ▁alten - 受到 - dale - 鲁 - ▁سا - ▁quand - ▁Mc - وه - cus - ▁website - ▁że - 天気 - ▁tap - ▁wings - ▁mistress - 宗教 - 梦 - ▁currently - өл - ▁coal - 了我 - ▁sprach - ▁werd - bah - rze - ▁construction - ului - める - ▁Again - Al - 蒙 - 達 - ators - 身边 - மான - 态 - sehen - 圣 - ät - 势 - ▁Liebe - ▁employed - があって - 并且 - なのか - ▁approached - ▁Neu - ▁Wood - ▁mud - 全然 - ▁arch - ▁glauben - 做出 - 総 - quel - ▁horrible - ▁bearing - ▁magic - ije - ған - ▁По - umi - cked - chu - қ - ▁nahm - ▁anywhere - ọ - ▁Cas - ▁consent - ▁کنم - ▁discovery - சி - osi - 岛 - ▁poem - いません - ▁einge - mbro - rra - вор - ▁kiel - 吃饭 - ▁trace - dzie - ▁Tur - くて - ▁Star - ▁Op - ▁nou - র্ - aku - dda - La - دە - ほ - ▁ee - ここまで - 背景 - ▁Sometimes - ▁sigh - ▁hurried - ▁restaurant - weise - ご覧 - ident - ▁Dollar - ▁relative - త - ▁previously - ▁mission - ▁improved - 是一种 - ▁liebe - 胡 - fici - hem - ▁esp - ▁liberal - ም - ▁County - ically - ös - 人家 - tit - の方 - kri - 负 - ು - ▁waters - ▁clouds - మ - عد - 允许 - ▁thunder - ▁ease - 거든 - ▁கா - vingt - 盘 - 出て - 莫 - anar - uring - zeug - 姿 - 冰 - ▁которая - 委 - ▁Margaret - amo - 母亲 - ▁apply - ▁gal - ▁konnten - ▁però - 刻 - ▁Angst - ▁Wil - eks - амі - ▁lad - тым - 停止 - door - раб - ▁euch - ▁пол - ▁gently - 不太 - ▁utter - ▁benefits - ▁điều - ▁hal - sión - ▁World - uv - ▁bwa - 時代 - ▁consciousness - blick - cil - ▁Lee - ライ - 希 - ▁tradition - ▁hari - 豆 - ▁شود - いつ - ▁classes - lös - ▁nord - ▁많이 - ި - ▁coffee - ぼ - ▁rue - ものを - 一百 - 呼 - ▁odd - ▁solemn - ское - ▁informed - 全体 - ismo - în - ▁objection - ▁sunt - ▁Australia - nation - 撮影 - স্ - 顺 - ▁día - ▁stands - 下去 - ▁disse - ▁opera - ▁Tod - ▁luck - ▁Bun - ▁descend - rif - nica - рен - 尽管 - lot - 映像 - 作業 - lk - ▁discuss - zan - ▁ari - ▁Win - 觉 - hui - ▁guilty - isse - ▁mood - sprech - を使って - tech - pol - 전 - кри - ▁десять - ition - ongo - 子供 - 不安 - ▁yari - ▁Tal - ▁hasta - 赶 - ▁سال - 種 - ం - ▁cru - てきた - 減 - 许 - ことで - ▁Antwort - Well - ▁当然 - ▁print - 入り - ▁clients - zem - 部屋 - 傷 - ▁الش - ▁concern - ▁severe - ▁enable - ▁goal - ▁neza - кая - minister - からは - ▁counsel - 汽车 - كم - ▁yards - ▁surrounded - 住宅 - 連続 - ▁interpret - 病院 - 是谁 - ▁allein - 秘密 - ▁наш - aza - 殺 - يم - おります - adi - 一定要 - ▁سو - 大丈夫 - 这个问题 - மி - ▁combat - miss - ▁unu - ▁Canada - ▁reign - dica - cli - hop - 库 - egg - 你不 - ▁Minister - ▁radio - 港 - ప - ▁coach - 北海道 - 吹 - М - char - ▁rw - dhi - 早く - ▁За - これまで - ▁zeigen - etti - ാ - awa - 大量 - ▁hero - nner - ғы - ▁plej - ▁cel - ▁Hol - 最好的 - prob - ▁байсан - anc - ▁obtain - ▁phải - ied - leben - 枚 - ▁code - ▁König - ▁arrive - chtig - isto - ▁» - ▁assistance - ▁wil - ▁Chinese - lov - rā - ▁fal - ▁народ - かい - はこの - ▁那你 - ▁schi - ▁easier - 嘴 - のために - きて - ▁subjects - ▁Informationen - ▁verb - ▁located - 不仅 - py - ▁pack - 僕は - baza - ▁verschiedenen - шла - ▁valuable - mana - ▁stores - ▁Roedd - の中に - ▁Arab - kem - ▁returning - ▁escaped - hung - ▁convinced - 治疗 - ▁anxiety - ▁Up - lev - ış - 我知道 - ▁Ш - uc - ▁Fre - inda - cs - ▁Twi - zio - 中央 - ▁Non - ▁reference - gul - やる - гляд - ▁Since - ▁joke - 想到 - ▁notion - ▁introduced - 韓国 - 投资 - ▁ibyo - ▁reduce - ▁Ма - wala - ணி - ▁rushed - ▁eternal - ics - だよ - プレー - 根本 - ▁Char - ▁counter - ▁tight - enge - کی - ▁crew - prav - fite - rö - ▁rope - ▁recognize - ▁pressed - nny - ▁eager - などで - ▁witness - ラー - なんですけど - 中の - வர் - ▁Virginia - hö - ▁segment - 结束 - ▁cop - ▁kom - ▁intend - rer - 有了 - bun - 归 - след - cial - ▁refuse - ▁partly - ▁earn - ▁நி - fat - gü - ını - mā - சா - ▁Ihrer - нова - 優 - ė - ▁intelligent - ▁或者 - ▁monde - مي - ▁porta - 写真 - mě - fol - なんですね - rid - 的大 - dı - ▁acquisition - cou - ▁también - unda - bile - 持って - ▁collection - 子さん - ວ - ▁mod - ▁onze - qa - ša - ▁waste - 絶 - aja - hält - سب - ▁monsieur - づ - ▁nada - 同样 - ▁Each - かけ - ▁rank - ▁Congress - ▁куда - 戸 - ▁вос - tract - است - はない - 麻 - ệ - 刚才 - stal - ▁alors - stin - ▁перед - 勢 - ▁Mädchen - ▁miserable - ▁channel - ange - 某 - 意味 - ▁cela - ▁suis - ▁genius - 調べ - ▁einzige - のこと - 決 - 战争 - gno - ▁Ty - 鬼 - 幅 - ほとんど - 货 - ▁resolution - 好吧 - äl - yar - kä - ▁render - اً - 盗 - ذا - activ - ▁دست - ▁zeide - ▁variety - ress - ▁cyangwa - 랑 - ▁limited - аш - ▁Tür - iten - quen - хан - ▁associated - ▁score - рам - ▁kindly - ▁folks - ් - ▁photograph - abil - autres - ▁precious - oku - hle - 既 - ▁destroyed - bur - ▁تم - اع - 是你 - üm - 言い - ств - ▁absurd - май - 夢 - east - ▁不过 - 会有 - ек - 刀 - ▁Ak - ▁victory - ▁offering - 同意 - итель - 家里 - uer - ▁Russian - hari - 入れ - ▁quarrel - ▁stable - 技術 - ▁así - ▁confess - ის - ▁இது - ▁Catholic - 妻子 - anti - ▁stupid - 早上 - 灵 - ▁candle - kop - ▁Fle - ▁tek - ▁Sc - ▁studies - stimmt - ▁ස - ▁또 - ▁flew - めて - ▁worn - れて - これが - ders - ひと - ▁Haupt - ье - rri - ▁votre - دى - アメリカの - iḍ - ция - пор - ል - 過 - பி - еж - ▁Да - qué - herr - ▁verwendet - ▁heel - 顶 - ▁article - ▁эту - guard - ▁años - ▁herum - تون - 団 - ▁ließ - 都没有 - 我们有 - ▁succeed - ▁Sun - பெ - ング - 里的 - 国際 - 坂 - ▁banks - ▁syn - 十二 - மாக - staat - ▁Put - به - ▁vent - 那种 - ▁trop - bbi - ▁causes - ▁uniform - кон - 警戒 - ▁Tar - ▁deine - ▁curiosity - ▁creatures - gomba - tul - ▁bath - ▁jede - 和你 - tze - 高校 - ▁жа - indi - aren - 劳 - ▁этим - ▁clar - ▁کنند - ▁يمكن - ▁Cy - ится - ვ - ▁dignity - リン - ▁Ziel - 번 - ▁약간 - ▁мир - のような - ▁ghost - ▁sale - том - vý - ▁June - ▁мал - ▁Warum - ▁えっ - rb - 累 - 質 - rz - ▁album - 人々 - いきます - عا - 当時 - については - 彩 - ▁merit - ▁kwam - 露 - нага - தா - ame - cca - 潜 - 编 - кий - 子ども - が多い - liv - ников - ▁version - 高い - 危险 - ު - pada - инская - ▁internet - ▁Alice - ▁smell - ▁games - ▁Sen - ▁reflection - きょうは - ▁seines - schau - ▁cái - 食物 - St - ▁merchant - ▁পা - ▁Lea - ▁competitive - ▁forgot - лось - Co - 席 - 薄 - ▁continent - ぜひ - ▁recon - ▁activities - ▁climate - ▁gerne - ▁acht - kiri - 頂 - ▁Sol - 本当 - ▁wy - ▁web - ▁marketing - ché - ffen - дзя - ▁substance - こうした - ▁Staaten - 구 - ▁reco - 洞 - 岩 - 弟 - fect - 交通 - ▁mostly - rel - ▁lover - 人物 - 豪 - 上了 - kaj - über - ük - ていました - ско - ▁pricing - өг - ▁appropriate - ▁Air - dim - 充 - 丽 - 贵 - hé - ubi - 技 - 团队 - ▁versucht - ▁punishment - ▁nay - ▁September - 頃 - ▁verk - ここは - bou - 状 - ▁branches - ▁cred - ▁Tor - があり - ▁হ - ▁flor - 预 - ▁steel - ▁painful - ▁stated - 的声音 - lad - ▁Gehirn - ▁Nur - 护 - ▁tegen - ▁pitch - ▁今日は - 目の - ▁hanging - ▁tomb - 結婚 - 不可能 - ったら - ▁اما - ▁softly - ological - ites - 固 - 中に - 责任 - ▁letzte - izo - ▁cheap - からね - ರ - 哭 - pren - ▁Это - ▁cart - ▁appointed - არ - gere - ▁application - ▁lots - rada - யை - ▁hopes - ase - ς - ▁flash - ▁amongst - kö - 정 - を持って - ▁rush - ▁fragte - 镇 - kka - ▁Boston - ▁seus - 海外 - ▁Aquest - ▁thoroughly - 人民 - ▁Ap - ▁imagin - ліся - 耳 - venir - scha - bling - mbra - ▁bull - 大統領 - ▁aquí - ▁mismo - に入って - ▁yap - ▁decline - რა - 什么时候 - 你有 - ণ - rés - эй - ▁geworden - ▁reduced - ▁crisis - ぬ - kali - zna - 男子 - 袋 - 一只 - 妹 - emos - 和我 - An - ▁medicine - ▁fourteen - ▁tai - ▁remaining - ключ - som - ▁internal - ▁fatal - ▁pays - gui - ▁suggestion - zug - ыя - ▁directed - 反应 - ▁mill - ▁день - ▁assets - pă - ▁maid - ▁plane - ▁Ban - ை - 答え - نىڭ - nat - lina - ▁plu - that - 他们在 - ander - ▁Carl - ▁adult - ▁divided - مە - mata - ▁всегда - వ - 简单 - omo - pers - 敗 - ງ - acht - ள் - ▁feed - ▁darum - ために - hind - ことし - How - enza - ▁distinct - gis - そうですね - اش - vic - tec - aro - সা - 弄 - あり - ▁tools - ▁Lake - shire - тен - 说了 - 国民 - ▁burden - ▁Seine - ▁puede - ▁hon - ▁expenses - adas - zog - ▁Nacht - ▁accompanied - 舞 - ▁sous - ▁file - ▁combination - ▁waves - あっ - ▁sens - wachsen - ucht - ▁Bri - ▁начал - 陪 - bble - fä - ▁household - sca - ▁прав - 义 - ▁purchase - ones - ちゃう - ▁tremendous - аж - 했 - ▁sy - ▁führen - ▁Sea - ▁bud - ▁sap - ści - نگ - ▁Over - えて - ▁healthy - 府 - ው - 说的 - will - ▁attended - ▁whilst - ▁irgendwie - zing - ▁sank - 京 - ▁exchange - ▁behold - ▁addressed - 売 - eka - ▁pig - ▁пыта - ▁seg - мон - ▁Fla - дә - تن - 経験 - 先月 - زن - ▁kuko - ▁shining - ▁Papa - ▁நா - ▁angel - ▁іх - 补 - ▁degrees - ▁fierce - ▁যা - கொண்ட - ▁ehhe - nna - しながら - ▁uko - ▁wicked - cad - dina - 额 - 在他 - scheid - ▁Therefore - ▁Licht - ▁prisoners - ▁đến - ▁connect - ってこと - 比如说 - 甲 - 但我 - ▁Council - ▁cert - ▁slept - というところ - ▁Но - ▁wound - gging - clo - ▁humble - ▁stared - ▁punt - 振 - ڭ - ▁kindness - 实现 - nyuma - ▁leaders - ▁Bild - duk - kie - ▁Thor - дав - ago - دو - ▁library - ▁sala - ບ - ▁zien - ред - くなる - ▁keen - oka - پی - 混 - 贝 - ▁《 - 藏 - ▁tells - ▁まず - мет - 变化 - 坐在 - ӹ - 出し - 观察 - ▁assume - バス - 易 - ▁concept - 的一部分 - ▁sector - geführt - ▁fare - iden - лын - ▁wheel - four - ▁baş - ▁الإ - ▁flood - ▁poison - ▁Mus - 务 - ▁fled - ▁ça - ▁nodded - გ - ダー - ▁weary - ▁Show - ▁Vous - If - bir - ▁Alle - ▁我就 - blu - 尤其是 - 有多 - ▁rolled - ▁contained - rais - ▁будут - ▁copy - эг - ▁Nord - teur - ▁কি - ù - 值 - ▁Little - ▁против - غير - お店 - 盛り - schrei - wyr - scher - ▁komp - ▁Earl - ▁slide - 始め - রে - さまざまな - ▁Out - 消息 - ゲ - ▁هر - ▁phrase - 路上 - tzen - 開発 - ▁thrust - ał - ていく - ▁arranged - lumin - ▁remarks - muntu - 楽しみ - ▁могу - chant - 计 - sun - vano - ್ - 日本人 - 丹 - 如果我们 - ▁num - 你要 - ▁philosophy - に出 - ▁አ - 一直在 - Se - bila - kum - fie - ▁nk - 加入 - ▁princess - ▁atmosphere - ▁cave - aries - ▁rage - もちろん - あまり - ▁reden - 在这 - ▁tiempo - ière - folge - şe - ▁Bereich - 人口 - dienst - ▁rag - 我有 - ▁tal - ▁dollar - ▁veil - ▁push - gegangen - ▁leven - key - bild - ▁iets - uze - ▁dijo - ▁komen - 面前 - ▁Ihren - ▁kr - 寺 - 导致 - ▁Präsident - ▁throne - グループ - ▁schön - ▁нэг - ▁programs - ▁viv - ▁Wahl - tam - 翻 - én - 甘 - kl - мент - كن - лог - 设 - ▁enjoyed - ▁ŝi - eli - ıyor - rol - ▁closer - ▁Gro - fried - 基本上 - مه - 给了 - ▁ale - ി - த்தை - ▁هي - ▁dell - ایی - 害怕 - ▁tear - 人を - ▁sonra - ण - খ - 建筑 - ▁moet - 男孩 - 依 - ▁кажется - گر - ulu - 其他人 - ▁అ - 欢迎 - 也会 - pres - ▁Hay - ились - 捕 - ▁temp - tered - 抓 - ober - としています - diri - ▁poder - ▁grant - ▁있 - ▁regarding - 一场 - ▁Parlament - ▁string - ▁strongly - క - ello - 由于 - 第三 - ▁assist - 運転 - 盛 - ▁arose - ▁compa - ड - 要是 - ▁wholly - еш - 对我 - 学院 - 烟 - gesetzt - ▁虽然 - ▁meu - iel - ▁lights - gur - 来到 - 触 - 月に - had - fit - ▁revolution - 以为 - ▁یا - гор - erte - ▁Stimme - 弗 - agi - 了很多 - 忍 - ▁profound - छ - amba - ware - 关键 - shaka - вали - vra - 的朋友 - ▁confusion - 挑 - ▁retail - 되 - rien - তি - ▁Japan - 佐 - 正常 - itor - рос - ▁buildings - ▁тот - zähl - ▁jedes - ▁seriously - ▁privilege - ota - 니 - 了他 - ▁forever - 艺术 - mé - oz - пы - üt - ▁Rest - ▁sistema - 相手 - твор - ▁nombre - ях - ありません - したのは - гад - ▁これが - ▁đang - ▁الك - 학 - Л - ▁guests - 之间的 - 族 - ▁blessed - tse - 立て - ▁solutions - ▁department - そうなんです - ▁designed - ▁laughter - 领导 - vous - 額 - 捜査 - 放送 - pid - ▁beach - mma - ▁Johnson - ▁flame - dacht - ▁tide - ▁вельмі - ▁besteht - 止め - ▁hint - 植物 - ▁interior - لة - cum - ▁warning - 没有人 - தான் - 站在 - ▁disc - ▁fet - ▁observe - ▁zijne - ▁друг - サー - '23' - fte - ▁Cle - 进来 - 阳 - ▁É - 大阪 - ▁созда - De - 幸 - ▁hundreds - 監督 - kta - ▁Boy - ▁چه - ▁remote - lied - ▁должны - 功 - ▁вторая - вол - ▁Era - ▁scheint - роб - ▁permanent - ▁neat - 真是 - ▁pool - 今回は - ▁largely - zelf - ▁Schw - 寝 - ▁holiday - ▁mewn - lands - ▁roman - '*' - ▁sensation - 套 - 販売 - teri - ending - 艾 - ▁branch - ▁algo - ▁button - でき - frage - ▁shouted - ▁breaking - ▁Ag - すれば - grad - いって - ্যা - cel - kunda - च - ▁Could - 还没有 - hlen - ▁яв - ▁வே - ▁stir - 媒体 - Le - ▁continuing - asse - ▁intellectual - ▁pea - 典 - 叫做 - 我已经 - fus - ▁деле - 最初 - jem - 帰 - ▁tied - ▁tard - 太多 - を取り - ▁robot - 沢 - ряд - 行业 - ▁steady - こっち - 优 - يك - ▁operation - ▁companions - ▁pilot - ▁groß - phone - 幕 - 谢谢你 - ▁utterly - ▁lock - ▁Maybe - ▁devoted - ▁shouldn - anz - 专 - ▁duties - ike - 基本 - لى - ところで - ▁있어 - 唐 - 変化 - ▁palm - 徒 - ▁manchmal - ▁abandon - kola - ▁critic - вин - ▁contrast - ifica - ▁jou - ▁Scha - উ - ▁hurry - ▁từ - 模式 - ▁Lincoln - ▁remark - tari - ▁этих - ສ - tiva - 工具 - ্য - ▁zwar - ▁Dun - ▁lại - ファン - ▁partners - 凯 - 从未 - 塁 - ▁bigger - 다고 - ▁weakness - ▁Spain - ▁cattle - ▁Anti - ▁vos - たり - ▁launch - cinc - ktion - igno - ▁politics - 一旦 - ▁Joan - ▁views - ▁link - dian - ▁Santa - imo - apo - ▁Emperor - ▁morgen - ▁suo - ester - 司 - 很多人 - ▁lion - 坚持 - 重要的 - 伴 - ▁humanity - ▁поселок - ▁consum - ▁Under - 挑战 - 的名字 - igo - خر - ▁кра - دان - ▁Lage - tera - ▁adventure - ▁ends - ▁essential - рем - establish - はず - 秀 - ▁enterprise - ▁literature - ▁totally - ▁follows - 奖 - atur - ▁என்று - ▁свет - مة - ▁오 - ▁Parliament - ▁cancer - ▁Italian - 诗 - ▁Major - 描述 - ▁dessen - 遇到 - Ch - gio - ▁несколько - kov - got - 笔 - ▁cả - овать - ировать - 以外 - 在我们 - ▁Roger - ▁leap - ▁discussed - 不断 - つまり - Re - ▁Mittel - 怀疑 - grav - ▁katika - aquesta - ээр - звон - gos - จ - той - 替 - ▁studied - ▁две - ▁rief - 東京都 - ▁хочу - ▁Kra - ▁Mess - ▁souls - ▁minister - 莉 - vision - ▁heißt - σ - بو - いで - ▁loving - bata - 律 - ▁самом - тр - 介绍 - тив - ▁Gri - 是我们 - させる - 会议 - ▁প - sses - ▁Gespräch - cc - وق - тя - ▁initiatives - 最大的 - 本来 - coming - ▁bunch - ▁россии - iff - ▁distinguished - ▁Holy - ▁applied - ҡы - 传统 - ▁الق - ▁debate - ▁sofort - ▁primarily - ▁지금 - lier - ரை - نو - çe - grund - ван - そうだ - şa - ▁Selbst - ▁tren - اط - ▁Cro - овский - ▁але - ▁tema - 持续 - ▁Off - ▁distribution - ▁hace - ▁நான் - 试 - ありがとうございました - mea - 国会 - ▁делать - ▁интерес - ▁unhappy - 你也 - 测试 - ▁hinaus - 对他 - ▁そこで - ▁Probleme - ▁Him - ▁produ - ▁possibility - дать - などが - 今日の - ▁bleiben - tete - 搞 - ▁det - ёт - י - 当地 - год - пан - 独立 - ▁formal - ▁mano - rend - 으로 - ▁aquest - ▁anzu - ▁Italy - mmel - ▁rival - ły - ▁Las - るのは - lies - 如果我 - ▁lift - ▁Sto - 增加 - 胸 - あなた - বি - ▁symbol - சு - ▁April - 那样 - 걸 - 習 - ▁deposit - ▁paying - ▁rever - овой - ▁Abend - 季 - ▁Tôi - two - ▁shoot - 你必须 - жээ - ▁толькі - ▁presentation - dies - ▁lit - nach - 広 - camp - ▁trap - ▁interes - njye - ؤ - ▁hungry - ▁sugar - sak - やっぱ - füll - uye - ▁desk - ▁sud - 聊 - 虚 - 把他 - 的原因 - ேன் - ▁nächste - sek - лов - ▁ও - 御 - ▁musical - how - ▁faster - ▁border - 비 - 材料 - 的所有 - 在一个 - ▁stayed - iw - ▁composed - ▁annual - 炎 - ▁meng - 过了 - ▁hielt - gebracht - ▁bus - ▁Zukunft - liness - ▁coup - 合い - 联 - ▁続いては - 则 - mission - ligen - ▁physician - ís - ▁expedition - 戴 - 经历 - āk - ▁conviction - ▁vice - ▁我认为 - 银行 - ▁Texas - ▁wusste - யி - ▁такая - يف - ▁ああ - ▁parent - ▁sieben - ュ - ▁говорит - ▁architect - ден - ▁gibi - box - 谢 - ▁wishes - ▁glanced - 가지고 - служ - ป - geze - ▁delighted - arte - ▁survey - ▁Cho - 検 - リア - ▁innovation - рав - なの - ▁destruction - ▁loro - constru - ▁doctrine - ▁types - ▁Phil - ▁Greek - ehr - ட்டி - ▁singular - ▁visited - ĩ - ے - 我还 - ▁всем - ▁Von - ▁generous - lio - ▁deu - となっています - erie - ▁steam - ▁Musik - ▁Möglichkeit - ▁thì - ▁spending - raj - ums - handel - frau - ▁Š - 形式 - 부 - ▁kay - говорил - 简 - ▁tanto - ា - 这就是 - ▁accustomed - dá - aught - レン - ▁emotion - ことも - ▁mare - 现实 - ▁rifle - ▁dont - слов - んでしょうか - 話を - ▁செ - ▁stretched - ゴール - ▁tale - 誰 - ີ - ▁bomb - ova - 房间 - ▁eggs - 我没有 - ▁Shi - 途 - agen - xo - 하는 - ちゃ - 把我 - 好き - lect - ▁abge - ▁exception - іць - ▁knife - ▁못 - 犯罪 - ▁fran - ▁engineer - ▁Ven - '22' - ▁داد - லா - ▁define - ▁piano - '!?' - ister - jā - ент - 大多数 - ▁offen - 录 - bera - ▁halten - ▁стар - ▁recover - ▁transform - いろいろ - 問 - ▁rend - شه - ▁cottage - feld - lid - ▁Chicago - 标准 - ▁действительно - まさに - chy - бли - ▁perceived - ▁homes - 積 - шин - ほか - жал - Т - 地震 - oko - 记录 - 決勝 - 次の - ▁proposition - การ - 效 - ▁peer - ▁karo - ▁gross - ▁Pass - 練習 - 禁 - preci - شن - になり - oro - 超过 - ▁stiff - 面对 - ▁jsem - ▁fellows - ě - 几天 - ▁hor - seite - ▁vào - ▁Latin - 多い - ▁Irish - ▁gegenüber - ▁pretend - تى - ▁efficient - ▁juga - ▁notes - ▁heap - ▁ஏ - ▁poco - ▁Demà - 저 - ▁일 - 电视 - ▁hid - 确保 - ことに - ו - ▁striking - ▁eigen - шу - ▁प - ▁director - ▁Gesellschaft - ▁Any - atge - ezi - нул - Mo - isiert - 全球 - ▁operate - んじゃないか - ▁strain - 开发 - ▁foi - 证据 - ▁interrupted - 惊 - ▁articles - 善 - ▁également - ▁falsch - ▁leurs - иг - fod - 记得 - ▁expand - 让他们 - 多く - ▁knight - ▁extended - 眼睛 - 我们会 - ▁Politik - 台風 - stic - チャ - ▁Rock - ▁knock - やつ - لم - ▁Jones - ジャ - ▁sole - 有没有 - 年轻 - ▁kissed - 帮我 - ▁Kate - ▁الف - تح - ▁slo - いった - メートル - пад - ▁город - ▁Kü - 爆 - бр - ワン - ▁mess - ▁sleeping - お金 - 人も - ▁অঁ - ▁spre - 容易 - ▁понял - jes - ▁hadn - кол - ▁früher - ▁我要 - ▁dur - ▁frequent - ▁unit - bij - のほう - ▁movie - ▁دی - ▁feature - ▁anirà - ▁corre - 开心 - mah - gled - ▁lonely - ▁Please - буд - accord - ▁damage - irwa - 压力 - lib - 确定 - 際 - 行動 - ages - 卫 - ▁performed - mera - эс - るか - 辛 - 另一 - ▁fois - 例えば - いただきます - ▁할 - 习惯 - registr - wana - genera - iau - 没事 - ▁Fra - 风险 - 见过 - らず - 私も - жил - stead - ▁ここで - ▁değil - lau - にとって - ▁reader - aires - master - 第一个 - ▁southern - 以上の - ▁هستند - キロ - bran - hur - ▁ambition - ▁junge - ▁оста - 知识 - hus - ș - கை - ▁dice - ▁Chris - ▁vis - tischen - för - fiel - ▁yok - 予定 - 女儿 - ▁properly - тон - ▁хар - 拜 - 煮 - 平均 - dding - ै - フィ - ья - ▁hätten - ▁olma - うまく - ▁stem - 드 - ▁tidak - ind - ▁modo - ▁Amb - ▁servi - arch - ▁Sil - ▁proposal - 察 - ▁patience - iques - 合わせ - ▁sorts - ▁schwierig - ▁mate - eling - 错误 - ▁gew - ín - ▁гу - сов - 休息 - ▁crying - 気が - ▁ym - ▁disposition - 伝 - ▁именно - ▁mild - 我去 - ▁bisschen - ▁risks - raga - লি - フェ - 纸 - Ge - ▁shoes - ▁jeune - 最好 - ▁года - 라고 - 瓦 - ками - 把你 - 俩 - 比赛 - ▁Gold - ▁Energie - 여 - 显示 - zor - شا - quis - 開催 - ී - ▁Trans - ான் - يق - ன்ற - imp - 等等 - ▁thấy - ▁fand - ří - ▁trembling - ▁invited - ် - ▁shake - bound - 争 - ▁donde - ▁swept - ▁represented - 康 - ▁việc - ее - rap - ▁совет - ▁przy - politik - 乡 - ▁arise - ▁Kor - ▁handle - ▁nevertheless - 丝 - ▁বা - 因为他 - 必 - оль - 年轻人 - 集中 - ▁Este - ias - ▁prospect - ▁быў - یان - мар - ▁inquired - ▁Entwicklung - ▁überhaupt - уй - ▁ஒ - 更に - час - loop - burn - ▁Having - 含 - سر - ată - ▁noi - ▁생각 - ▁movements - 館 - ddle - ▁cheeks - ▁zaman - ▁مح - teilen - 一张 - enti - schrift - stehen - organiz - ▁Programm - ▁sul - тры - ▁yw - ▁Staat - ▁feared - ▁widow - tsi - ▁achieve - ▁momento - ▁priv - たくさん - nā - fun - tang - キー - ▁plot - 冒 - kara - ▁gemeinsam - юсь - 賞 - ▁და - ▁responsibility - 上面 - ▁jour - ▁responsible - イメージ - ▁driver - ▁Western - ensi - Q - ▁Millionen - ▁zag - 給 - ழ - ▁Egypt - 绝 - ▁reflected - 続 - 5% - 暖 - ому - 下面 - ▁ashamed - вая - ところが - ▁それが - 近く - ▁beginnen - ▁мог - ▁ain - 些 - 经过 - och - 困 - 計 - ▁distress - iad - 症状 - lut - 负责 - ▁victim - tors - alia - ▁poverty - ▁otro - ▁Act - шка - стоя - アル - лад - kita - ッと - ▁hollow - ▁mixed - уме - ино - 接触 - ▁attached - ▁Love - ▁guest - There - ▁beast - ガー - ▁thu - ▁queer - ▁你说 - ▁gesprochen - 試 - ளி - pho - ▁وأ - lose - ▁Herz - 大変 - ▁Komm - どうぞ - 变成 - π - оро - おい - 的一 - 声音 - jya - Go - Po - ▁uw - ▁einigen - と同じ - quet - ların - ▁muito - アップ - ▁crazy - रा - het - нии - ▁Sprache - ▁کنید - ▁slaves - ▁strip - 以降 - ▁wearing - 押し - ▁Ay - gala - ony - raum - ▁Ei - ▁context - ▁Ihrem - 更加 - سه - 思う - ▁location - ирован - ị - 欠 - ▁yu - wart - ▁hut - ously - rouw - ấ - ▁tomorrow - 상 - られない - ação - ▁所以我 - dou - ▁ເ - 照顾 - たちが - 絵 - گە - ▁Bau - دم - ▁premier - ▁intimate - ▁või - கள - 网站 - ▁acts - ้ - » - ▁salut - 強く - ▁Adam - 对你 - maz - ▁الخ - ▁stud - ▁terre - 缺 - 这个人 - ▁Ji - stä - ois - 象 - lage - bru - ▁Sel - rzy - dern - schuldig - ▁adopt - ▁hare - ▁Kat - laufen - になっています - 当他 - ▁bla - 介 - 温度 - vern - そういった - ▁되게 - ▁nachdem - 值得 - ▁Мо - ể - たのは - 本身 - ление - ▁writer - ▁transition - ▁creation - 看到了 - ▁Tas - '300' - 有关 - 不得不 - manda - bled - 怒 - 신 - ▁exciting - 杨 - 環境 - ▁heavily - 国の - уп - cl - ▁داشت - ▁fiscal - ▁Ter - دة - ▁yours - urt - ▁völlig - 困难 - аа - ổ - ▁Luft - ▁bene - kata - ▁bang - حق - 孙 - ▁lest - kı - 盖 - 埃 - rau - ▁concluded - ▁boats - وع - ká - 卵 - ▁inhabitants - 来看 - ▁bul - ▁unusual - نش - ļ - সে - ّ - ▁Billy - 不再 - ▁dal - хий - ▁traffic - ject - иль - 思って - nez - là - ▁Так - 我是说 - cking - ▁sempre - ▁chin - 냐 - ▁روی - ▁unserem - ню - ▁materials - ▁committee - ▁budget - 準備 - iers - 羽 - 陆 - лё - sept - 祝 - vant - ▁ру - ▁glorious - ▁وجود - ▁consistent - ▁belong - 彼 - ellen - mira - ▁이런 - ying - сын - rse - ▁brush - 成员 - ▁такие - ▁hombre - 怀 - 婚姻 - 免 - ▁Tat - ▁sisters - というふうに - ▁Us - ▁beste - kur - espera - ▁cells - ご紹介 - аз - よろしくお願いします - 专业 - ▁aho - 議員 - ▁biz - 心配 - したり - ▁mist - ▁hunting - ▁bya - 杯 - ▁gihe - Why - ▁таки - ▁industrial - ▁reasonable - lern - ▁musste - 国际 - தே - 津 - ▁causa - かり - vine - ▁кан - ▁forehead - ▁slipped - ▁abroad - ▁supported - 긴 - 发布 - цел - ▁Last - ▁Mars - fertig - ▁сделать - 張 - 野菜 - 図 - ▁сама - fang - かどうか - ▁Medi - ▁episode - ▁venture - 虫 - так - дум - ▁recall - tlich - ▁individuals - ▁ўсё - ▁knee - ▁spin - quin - ās - 来た - mad - 另外 - 義 - مت - ▁Ang - ▁importante - gut - ▁icy - ▁rhy - ▁chest - pati - ▁compelled - ▁spare - dle - 我自己 - бар - ▁multiple - سل - 所以我 - ▁jump - ▁biggest - ▁Schritt - ▁repair - его - ▁payment - ▁swift - ▁approaching - ▁вся - nti - 准 - キャ - ▁جو - ర - ▁сил - ▁まあ - ря - ▁manufacture - 这件事 - ▁combined - ▁perd - λ - franc - 価格 - ▁riding - area - ▁жизни - 過去 - ▁dove - ▁человека - ▁accomplished - ロン - 在哪里 - 情绪 - ▁difficulties - 到达 - まった - 墙 - できます - ёр - ▁Cat - ックス - ▁besten - るの - нут - ▁federal - 最後の - ▁Mein - ▁bush - ▁quit - ▁Pero - cció - schließen - лав - ▁consult - 还要 - 咱 - bio - ક - تە - rain - 看见 - 続け - мест - 约翰 - 偏 - ▁cũng - My - 形成 - here - ууд - ▁хотел - どういう - ▁ensure - うわ - 電話 - lation - lets - ▁maiden - 用户 - år - ソン - ▁الص - 瑞 - under - 仲 - 攻 - ▁obvious - 我々 - ने - cast - 接下来 - ມ - lak - ▁Governor - ▁Ham - いか - ▁mala - ▁Ко - 制造 - ุ - 想法 - ▁earnest - ▁tijd - 逃げ - парт - بار - thy - ▁فر - ▁kwe - ▁brings - ▁اون - тре - пал - 流行 - ▁oben - ń - ender - cis - ▁Boden - 虽然 - ▁Kol - ▁agent - ost - ▁Durch - ▁dif - DNA - raten - 十年 - eze - மே - ▁Rwanda - ▁fro - ▁icyo - 北朝鮮 - рез - altres - ▁gesch - OK - ▁Four - 妻 - Ж - ▁retired - 생 - ้า - ▁rude - ▁Geist - ▁жив - pè - ▁acquired - ▁actor - 牌 - 実際に - ▁stress - ▁regions - ают - 我很 - rec - だったら - 特殊 - くん - 鳥 - ▁rob - ▁Ram - ▁unfortunate - ▁armed - 益 - 制度 - おり - ▁funktioniert - 他会 - ▁Blick - ▁چې - erin - dwa - ▁useless - طور - ▁거기 - leb - idade - 分かって - ▁murmured - 显然 - ▁spell - のため - ▁aufge - ▁margins - ▁grateful - ▁institutions - aɣ - ▁그게 - ían - ▁selling - ▁Technologie - جد - になっている - ▁display - care - য়ে - ▁понимаю - ▁Berg - bag - 他们会 - 巨大的 - ▁helping - 려 - 昔 - ям - ▁Bor - ▁impulse - 那就是 - ▁passat - һы - ▁系 - thing - 悲 - krank - 極 - ▁reputation - ▁eens - ना - ▁manager - ▁Macht - ▁igihe - 有效 - ▁Prozess - Ne - chter - 移动 - 진 - ▁permitted - 一体 - 反对 - 摩 - 娜 - ▁Six - ▁refuge - 力を - ▁sollen - メンバー - ▁smoking - していて - ▁background - 棒 - 暮らし - др - ▁seldom - 疾病 - lines - ▁burned - ▁nella - ▁despite - 较 - 关心 - 并没有 - ât - ▁فقط - ▁cycle - 放弃 - ▁Besides - figur - ▁clo - رك - 一度 - ▁Sicherheit - ▁ښه - ▁Bible - cup - ▁Super - 您的 - ▁Thema - ませんでした - ▁estat - 死了 - கா - ▁charming - ものが - 乔 - ▁Während - ять - ▁obligation - ава - የ - ▁ignorant - 命令 - ▁admiration - ▁basket - ுள்ள - ▁சி - ▁especial - 磨 - меш - мас - 斗 - ▁pues - mmen - 南部 - 你说 - 効果 - ران - ясн - ▁entwickelt - ▁Russia - 表达 - ▁closing - 兄 - hak - ▁havas - ▁entonces - 你现在 - ▁traditional - ▁vrouw - ▁depends - ▁idle - ând - ▁pit - ▁nhiều - юцца - ▁nghĩ - ▁ком - lac - ಾ - ▁terra - ört - ▁plainly - ▁northern - رت - 斯特 - ▁repeat - chung - ▁Hat - ▁sports - ▁contest - வே - 健 - аться - 上海 - マイ - ▁locked - graph - ▁eventually - وان - ▁adopted - ▁авто - ▁Sand - bis - ▁avons - ▁প্র - ▁мой - 述 - 敢 - енно - amento - ▁eagerly - ▁лучше - ▁Chief - ▁contribution - 自我 - 我都 - ▁slip - 峰 - ▁thirteen - جي - ▁rail - ▁само - że - ின் - ▁这就是 - ▁vital - lais - ▁olan - 我相信 - ▁drinking - ▁Projekt - ата - rait - ു - ain - see - ▁blessing - bà - 某种 - ▁departure - ▁blank - 报道 - ▁continu - ત - ம்ப - ▁deeper - ▁competition - ▁Doctor - ▁maj - graf - 可是 - сь - ▁kre - 一方で - ▁Law - ू - ých - уш - थ - ▁sistem - ▁weißt - 公開 - ▁coin - ▁freu - データ - ▁suspect - በ - 了吧 - ますよね - ▁すごい - ్ - comb - cun - ▁erkennen - unter - ロシア軍 - kra - enzi - asa - 名字 - dank - ▁Gla - ▁яны - ข - ▁occasionally - ▁Finally - hören - হা - ▁彼は - ▁attractive - ▁gener - ▁pied - ▁그렇게 - ▁braucht - その後 - 活躍 - ▁prophet - 手を - esse - ▁tras - agenda - fal - 해서 - '!」' - hara - ▁pole - ▁sentiment - gues - 决 - ▁iyo - uten - ▁Tim - ▁inclined - 今夜 - obu - しても - ▁settlement - lando - ▁scho - лом - してきた - 例如 - has - 我只是 - ▁rub - foot - gura - лд - stad - ▁Clo - ▁llama - ▁bari - ▁Süd - ▁học - ▁glow - نس - 合わせて - mış - 一生 - ▁Ils - ▁করে - ız - ▁released - 崎 - ▁bones - ▁quant - cap - ▁settle - بول - ስ - ▁owner - ▁songs - ▁Wert - 降り - 亡 - jal - ▁owe - ▁为了 - てください - 戦争 - يس - ▁Jacob - чь - gelijk - 弾 - 미 - ▁comments - km - alter - ▁Werk - 圈 - 响 - ভ - ▁App - ▁granted - ▁こちらは - 過ぎ - ▁Vielleicht - でございます - ▁apple - ▁literally - boy - rai - ▁Pal - ▁100 - ▁Andrew - цу - laden - 意思 - пі - ▁alternative - ▁Eltern - ▁Anna - ▁commanded - ▁opposition - shyi - ▁그래 - ▁handed - ▁bitte - ▁rya - 沖縄 - 全て - oper - ckt - რ - ▁stroke - ▁naj - ▁poetry - 按 - සි - hanga - ▁painted - ▁burn - kehr - 章 - 菲 - și - 的新 - ▁ella - west - eerd - usi - ▁بد - 尝试 - koj - उ - ▁acest - ▁teachers - ▁Name - ▁prime - ▁Does - ▁Scott - ▁"" - 厳しい - ▁assure - 階 - rī - ▁select - ▁billion - 也可以 - vesti - stelle - daki - ▁versus - typ - ▁Plu - 細 - だな - ▁Glück - ▁Hon - 頼 - говор - pur - ▁cure - ▁dared - ▁insult - 投げ - ▁تع - 欧 - ▁Mul - 会議 - dung - ▁Gruppe - 姑娘 - ▁Mur - زه - ▁Team - 沈 - ▁infinite - —— - ▁opposed - カメラ - sicher - ▁hơn - tree - 每一个 - 雄 - esi - ▁Hor - вяз - ▁thrill - ▁irre - 检查 - ▁mucho - nett - وو - ન - ▁pursue - appar - ຫ - ▁leverage - ▁এই - 终于 - ús - bles - ▁vessels - ▁faut - uren - ieron - ▁feels - ▁teams - ▁adapt - ▁increases - خت - 観 - ▁kamen - ▁хотя - 生产 - ▁scar - கோ - matic - あって - نە - ▁إن - 足够 - nosti - ники - 、3 - most - ▁fifth - ▁county - 我们要 - ▁enhance - 費 - satz - 塩 - headed - 入って - ジェ - ▁응 - ▁shout - に対し - ▁chap - дын - ▁newspaper - ະ - ▁anni - 분 - elli - ▁reserve - 見せ - ▁TED - ▁fed - ロシアの - ▁должен - ▁mbere - ாக - ▁Gemeinschaft - gress - ▁vin - mili - ▁shirt - 製 - 大部分 - قي - смотр - ▁gap - cult - 后面 - 一方 - nant - ▁behavior - rè - ▁intense - ▁vague - пло - ्या - ▁باید - ▁twa - át - なし - 属于 - geld - èrent - ▁Arch - ▁goodness - дэг - ▁tio - ▁bara - jin - ▁folk - マスク - 発見 - ▁Laura - ▁lighted - ದ - ▁bet - aver - 運動 - ▁никто - ▁prize - ntu - цион - 裁 - 博士 - ▁Ist - 交流 - produkt - ▁Cli - ▁Rolle - umba - ▁obwohl - ტ - 音楽 - cinq - 完了 - ▁painting - 你自己 - lagen - ▁четырнадцать - 呼吸 - ▁pink - tics - ▁steep - ▁ведь - 冠 - ▁epi - anno - ▁client - ▁jobs - ort - の大 - ▁sia - ▁institution - ▁pistol - ▁signed - ▁defence - ▁работа - ▁lebt - ける - ▁withdraw - ▁troubled - ▁murmur - лин - ▁fins - 印 - ▁elder - ▁nad - 哪里 - ▁Qua - ▁Americans - wl - ▁primera - 闹 - ▁providing - ▁Stephen - 伤害 - 你还 - ▁Sein - nih - 承认 - ▁tele - ▁November - ▁offset - ▁auszu - ▁ĉe - 资源 - 爷 - ▁characters - ▁Sub - ▁questa - 染 - ▁jedem - ▁sou - アウト - jos - في - ▁vend - fund - みて - 仕 - ью - This - 敬 - යි - 追い - っていうこと - ▁fame - ▁Wirtschaft - 就不 - gne - ▁wooden - ▁Tage - ▁Dieser - pis - 後に - 気温 - ▁그때 - ったり - үүд - ▁seeking - யாக - ▁sufficiently - ▁sana - ▁wedding - ء - dine - このように - bord - ▁pages - ์ - denken - 有一 - ▁obey - ▁Kla - ద - rib - ecek - ▁случае - 不行 - ▁verloren - wacht - fla - ▁uttered - 判 - वा - 一部 - ▁regiment - ▁Om - ▁feast - ▁wegen - verse - 骑 - ӓ - greifen - Sa - 出す - ▁Wild - ▁notre - 別の - 寄 - 正如 - ▁observ - 映 - aron - 评论 - 史 - riye - ĝi - 环 - ▁creating - ense - ▁flung - enc - 十五 - nü - 우 - ▁Cette - ▁Angel - ▁mono - ▁hum - ▁desde - 宁 - ▁mayor - 意識 - mur - 行く - ქ - ▁Meine - ▁Ale - ▁sabe - füg - 老板 - დ - ▁Jag - 最も - ▁persönlich - ▁capture - vali - ந - 看到的 - ▁parole - 一名 - やった - 意外 - رد - ▁unten - 潮 - 平台 - ▁있는 - ▁zeggen - ▁Rus - ▁neben - ▁Mat - ▁Daniel - tore - cycl - liga - ▁expansion - ▁distinction - ▁sovereign - ▁نیست - ayı - 銀 - weisen - ▁Та - emi - ▁component - ▁Dy - ху - ▁mehrere - になると - ania - ▁entra - ▁cheerful - あす - ▁recovered - 这么多 - ▁curve - Da - ▁whisper - ▁tough - ் - genda - mento - 利益 - ▁neuro - ▁bark - maga - 答案 - ▁яе - ▁valor - ▁deserve - ▁муж - wyn - 律师 - ▁organic - ▁prze - わけです - 这位 - isation - чет - ▁vroeg - さま - ▁Schwe - 议 - بة - されます - ▁quelque - を受けて - ší - point - ▁ста - зд - ▁plötzlich - ▁persona - полит - 逆 - чер - とこ - せる - ť - ▁Gran - 犬 - ▁이렇게 - ▁پس - ▁plat - 法院 - ▁Kr - mov - ▁infrastructure - 原来 - صد - lop - 商业 - ▁همه - saba - それに - 湿 - 澳大利亚 - 对方 - nten - lyn - 腿 - 肯 - cret - ника - 呆 - 国王 - 目标 - ▁Other - 岸 - ege - So - ▁کنیم - ▁reckon - نم - 对我来说 - ▁seltsam - صل - ▁après - حا - 一件 - ские - ▁habits - ▁poi - ▁qualities - сте - 对吗 - ▁Bis - ▁нибудь - 思想 - ▁rendered - ош - 心理 - teen - ár - ▁fired - ▁entering - お母さん - Pa - qual - ▁biết - ▁depart - nio - ▁lately - 表情 - ▁ບໍ່ - ▁fence - いただいて - ▁stern - ▁decisions - ▁reaching - گاه - ▁observation - mere - که - ▁blanc - чин - ▁conserva - für - tine - ông - 話題 - ▁Jews - ヘ - が入って - étaient - ▁слова - hof - ヒット - ▁US - ▁Hände - ▁Dat - ▁spr - stoff - 我喜欢 - ▁pursuit - ▁Kal - ää - ▁shared - 抽 - ▁substantial - 날 - aged - estra - ▁hence - 全く - ▁dalam - ▁font - マー - ▁الر - 振り - 高兴 - ų - scal - ▁cents - bere - 鸟 - ▁towns - вел - ates - bwira - ▁minor - ▁خیلی - ▁Next - ners - koze - を中心に - 亿 - ▁Kirche - load - ağı - 拒绝 - rack - ương - ▁equipment - ▁rằng - 因为我们 - tanga - ▁свое - szy - ▁massive - ▁staring - izza - 不仅仅是 - ▁toujours - 照片 - tron - Di - ▁upward - ég - ▁cease - nost - ▁тех - kaba - 终 - あげ - ▁purposes - 遠 - تو - ▁நீ - stück - ▁Vol - ▁wow - ▁wretched - ffi - plaats - fulness - gebaut - 记者 - ▁gens - ▁dancing - ▁أنه - 不足 - ▁absolut - ▁lunch - ▁porte - ıl - 分からない - çı - ▁etc - ▁travers - 距離 - ▁investigation - liter - 骗 - ▁conceal - ▁bio - ான - ▁බ - qi - cà - idos - ▁wherever - 马上 - ▁Royal - ально - タイム - ▁declare - ▁conce - mani - 姐姐 - ವ - دل - ▁mounted - ▁шо - ▁гэж - 进去 - ▁accounts - ▁bullet - 广 - ましたね - 大雨 - ▁bless - ▁electric - wol - ה - ▁mask - ▁nav - ▁pound - куп - 'Yes' - ця - gian - ṭ - ക - お伝え - guru - ờ - ▁знаете - ▁unlike - 的情况 - 避 - ales - ▁kugira - cchi - ▁fragen - ▁கு - trau - ▁weder - wara - ▁plum - 跟他 - ▁gazed - 我们需要 - say - meter - のよ - 昨天 - avo - 暴力 - sini - ▁lernen - boot - 打电话 - ▁Umu - ▁authorities - ▁Dass - ▁butter - ▁desperate - staan - ▁historical - او - 我们现在 - schlossen - ▁essentially - ▁padre - att - ▁volumes - dress - zzi - ▁belonged - кай - 达到 - ▁load - ▁wet - 自分が - タイ - ▁zwanzig - ▁foe - jä - ▁вз - ggi - 他就 - ▁expensive - bour - boo - あとは - sinn - ▁yara - iet - ஹ - acce - gard - aud - ▁bosom - 透 - ▁момент - ▁уч - ▁Gemeinde - のも - вай - ▁Ён - ▁shadows - ▁tie - 表现 - ▁pine - どれ - 丈夫 - தை - ▁leadership - ▁fever - ▁acu - plica - ▁appreciate - 麻烦 - comm - ▁gesture - 拡大 - ▁discipline - ▁Bell - arse - ▁blieb - abu - 坚 - ▁первая - 人と - などに - 陸 - ▁rap - んじゃない - 吓 - ▁highlight - ▁philosopher - いや - ances - ▁третий - ▁rất - 创 - 小时 - ▁anderes - ▁Quin - ▁reduction - ▁misery - 我对 - ▁Bat - dek - 域 - 异 - pira - nahme - 戻 - ▁stuck - ▁lid - ▁essere - ▁এক - 異 - ительно - дол - ним - ▁минут - ▁volen - ▁torture - plat - сен - ようです - ▁قبل - iĝas - کس - ▁другой - バイ - aki - ▁Fran - '35' - äm - ▁Today - ▁abuse - إ - рак - ▁integr - ख - ▁fac - емся - ▁zeer - anne - сс - ▁হ্যাঁ - lek - 增长 - 聞き - ▁assumed - 大脑 - ▁phase - mä - ▁Life - ▁external - ▁gather - 播 - ▁dyn - 雅 - stell - bbed - なん - ▁acting - rado - ▁Produkt - bbe - мол - ბ - 団体 - ▁remove - front - ▁молод - ▁sending - bara - tak - ▁зам - tical - によると - らしい - ▁Platz - tritt - sab - ▁helpless - atory - 治療 - ios - dura - bug - 也没 - 因为他们 - 失败 - 提醒 - ▁bench - ▁genannt - 寒 - ▁ornament - ▁Eh - ▁July - ▁milit - ▁administration - 在你 - ▁cosas - һа - 者が - َّ - ▁returns - kna - 再び - ▁shortly - 功能 - ▁overcome - вести - gelegt - 市民 - ண் - 水平 - ▁saddle - ▁firmly - 我希望 - ொ - mona - cles - 約 - ▁quantity - ▁communities - ▁15 - 糖 - 医療 - ▁rồi - ▁деньги - の影響で - ▁grain - ▁الط - ▁extrem - 分かる - ▁хорош - 候 - 给我们 - ▁begged - gegeben - 赞 - にお - ▁расс - تل - ▁delightful - ▁câ - 这是我 - zimmer - 一份 - ▁между - оз - 世界上 - つき - ▁scattered - 少年 - 金融 - ▁beloved - ▁restrain - すでに - ▁sensible - ▁workers - ▁evolution - リーグ - ▁bod - ▁slope - ▁год - ▁planning - ▁dealing - ▁cách - ▁dread - ▁Army - ▁Grunde - ▁allows - gram - ssy - ▁Sim - inci - ▁moeten - innen - 동 - dah - ▁perché - னா - ▁Schule - ▁더 - 長い - ▁gaan - ▁Pas - ▁голос - ▁cloth - lement - ▁drunk - akt - jor - ▁Morgen - / - ▁Hugh - 广告 - ▁Big - 作った - tali - nate - ティー - しましょう - ▁kadar - ▁prepare - 束 - 始まり - もん - ских - itza - ▁đầu - لار - ▁ব - ▁rescue - ▁predict - したこと - 勤 - كل - ▁совершенно - ▁Middle - 留下 - nych - ▁aktiv - ▁schließlich - ▁cutting - уль - нар - ▁Francis - ▁为什么 - schauen - 策 - ?」 - ▁caso - Me - lled - ▁Tot - ▁Lor - 馆 - ▁lugar - ▁Bürger - 泰 - zia - ▁今日 - ▁methods - ▁Grant - ▁Wochen - 醒 - lee - euse - رق - ografi - ▁verlassen - lā - ▁Uni - センター - ▁toch - ▁Sed - ▁cum - ял - ▁Pla - ▁кого - ドル - stadt - 有个 - нда - ▁Afrika - فعل - ▁оно - ▁glücklich - ▁scorn - үн - druck - гра - 行って - 紙 - さんも - ধ - ▁dwell - ▁lecture - 치 - lerini - ▁primary - odo - лай - 生き - ▁cerca - 泡 - ▁zit - čč - өөр - 特朗普 - ช - یل - ▁exposed - 郎 - 类似 - ځ - ulation - ▁aloud - ▁united - ▁bowed - ▁Get - 意识 - 自分で - ▁separated - 接近 - 隆 - 意义 - 一段 - ▁khác - лож - 我能 - ▁مثل - 记住 - illo - ▁Mexico - эв - ừ - ▁Band - elen - ▁centuries - 左右 - ▁disgust - ▁mur - mist - ▁beam - вать - 財 - alis - ▁click - 秘 - 因为它 - 九州 - spec - 類 - eja - ▁alleen - ▁Grace - 什么样的 - maid - ස් - tun - ▁Ly - nyo - ▁losing - ried - 毎日 - ▁patron - tje - tivo - sert - ▁Its - hap - 监 - likuwa - вес - ެ - гло - 版 - ▁одно - ifa - ▁كانت - ▁tramp - ych - ît - ▁प्र - ▁claims - quelle - ич - ネット - ▁тысяча - ▁sera - ▁crowded - ▁begins - ைய - ▁하는 - 造成 - ▁runs - 見え - кә - ではなく - シー - ▁ciudad - кал - ▁infant - يو - 住民 - れない - ▁insisted - ▁ху - struktur - ▁pile - ках - тру - uwe - ją - 有一种 - chas - ▁hastily - овка - SNS - ▁zeigt - ▁schlecht - とともに - ▁nearest - ▁назад - ▁jumped - كي - ▁chỉ - ▁rang - まり - ộ - ▁doubtless - rug - ▁Camp - ▁insist - ▁Chapter - elde - ▁consumer - ▁nutzen - კ - 之外 - ▁Kam - ▁joint - ுவ - 트 - ▁ignorance - 食べて - tid - اج - ▁railway - 是有 - 隔 - ости - ά - ▁extend - 래 - ▁заб - 欲 - 睡觉 - ▁kaum - すぐに - 观众 - 殿 - 烧 - 因为你 - ▁religi - ctic - ▁cheek - 浪 - ▁gekommen - をお - чым - ▁பு - cido - gad - ▁Dienst - kup - bewusst - 帕 - ▁swear - 悪い - ▁рэ - ▁Young - ध - 沿 - ▁eki - tua - hole - ▁planned - әй - ▁issued - cción - вели - 乳 - mour - ваць - اص - ▁mensen - ც - ▁cam - ▁haste - ▁significantly - 新たに - 登場 - ला - ▁clerk - 若い - ▁ebi - ぶり - ▁portrait - ▁experiences - න්න - 魅力 - ▁هناك - dě - 项 - ▁baron - ▁измен - ▁могут - ног - 的影响 - ▁grasp - 穴 - 力量 - ▁transaction - dev - 正解 - ▁stretch - ▁opinions - 每个人 - ▁weapons - 個人 - 模 - 絶対 - 歌曲 - ▁fog - ▁inclu - gezogen - тын - さんです - っていうのが - ▁raising - dict - ೆ - ▁Tiu - ▁Av - овая - ▁señor - ▁outlook - 五十 - 成了 - 升 - ▁شهر - ▁continually - こそ - ▁fost - ▁30 - ▁Schüler - ▁improving - ▁Florida - ▁molto - ▁گفت - гер - 缩 - ▁好吧 - ▁شو - '2000' - 顾 - ▁Kunst - 件事 - psi - ▁Ireland - 간 - nega - geb - ▁fy - ▁deed - ▁cet - ▁möchten - رة - 報告 - ▁About - iness - dent - ▁hoffe - ▁utmost - 文字 - bbing - ssel - ▁consequences - ▁کی - '27' - наход - voj - ▁Eis - ▁Вы - ▁operational - 節 - 带来 - ▁frank - 聴 - 吸引 - ▁significa - ▁Poor - کرد - 宽 - ▁mejor - ▁Freunde - 部门 - tures - 内部 - ▁続いて - ▁großer - 蛋 - ▁bwo - ब - 裏 - しっかりと - ▁zonder - ▁apparent - स् - träge - ▁explore - ▁lesson - に対する - ▁Cri - 尚 - 客户 - 好きな - volle - ▁sounded - 会場 - boat - ▁Jackson - 阅读 - ▁trên - 供 - 当たり - ▁leads - ▁яшчэ - ▁irà - ▁বি - ▁chill - ▁fairy - さっき - wag - ▁emotional - avait - prä - bü - ▁convert - ▁handelt - 就是说 - ▁flatter - 长的 - tico - ▁magnificent - ienne - 제 - 抓住 - ▁ihres - ▁elekt - んだろう - 台湾 - ะ - ồ - ▁tipo - iente - ▁Lang - pas - йн - ▁ब - メディア - ไม่ - ох - වා - ▁initiat - ▁என்ன - ▁baba - ▁mainly - Ö - hearted - ่า - scribe - 宅 - ▁الذي - ▁challenges - lance - ▁Би - ▁verge - chin - zioni - 意味着 - وە - ▁neighborhood - ▁fou - ▁cards - ▁carbon - ilen - ો - ▁mankind - ▁includes - ▁todas - dition - 兴 - ▁gefunden - ▁خوب - kni - 할 - 了你 - ný - ▁missed - ▁oedd - platz - clin - ▁acquainted - ▁двенадцать - yol - 亲爱的 - tama - ▁Walter - 感染者 - ▁Ф - 经验 - поль - 今は - ▁mm - ▁Marie - ▁sexual - ▁absent - бил - baga - ▁wagon - fam - 码 - ▁yard - 瞬間 - ▁ним - 得到了 - ▁efficiency - стрел - ▁representative - 見える - ▁ellos - 最新 - 和他 - わり - ってきた - どの - ministr - ▁leaned - ▁công - ▁geç - ▁То - кла - قى - ▁Arbeits - ▁kw - ▁neces - ようと - гр - ▁Mensch - ▁ал - ▁Situation - ▁requires - そのまま - مو - colored - ▁spielen - 无论 - ▁Yuba - 肩 - 猛 - ▁pis - ots - 容疑者は - ລ - нан - илась - kura - ении - ▁arrow - ▁quelques - ▁están - ▁Sho - tě - mí - ▁ней - 周辺 - hü - ▁entfernt - ▁aren - يع - ▁Service - To - 来る - ▁daughters - ▁prin - ▁alike - 委员会 - 今まで - sser - ▁ɣer - ▁disorder - ▁confirm - ▁nhà - programm - ▁đây - 角度 - 更新 - یی - 師 - ता - くなって - ▁ваш - لب - 两个人 - ▁maig - ▁imagined - пов - حل - ebe - маг - sure - ຕ - ▁尽管 - 实际 - blau - ▁Leuten - tī - 而是 - 深刻 - ▁fuel - ▁говорить - ▁نام - ▁Sohn - icia - dru - ▁eher - ▁university - ▁nein - られました - ▁gall - ▁accordingly - bá - brook - 最終 - ▁Tru - 大きい - ▁Member - 园 - ▁گر - 地下 - ▁مر - 适合 - tele - スポーツ - ▁association - tty - 价格 - ฉัน - ▁thread - geboren - ▁هیچ - ▁safely - tenant - まずは - ▁我也 - ▁midnight - ▁ئە - ▁اند - нне - ી - ▁technical - 変わって - cola - бит - كى - ▁mijnheer - 의 - ▁cyf - stern - bura - ▁durant - you - ▁Stück - 一边 - 状态 - ▁cake - ▁decir - 覚 - 葉 - ▁considering - 聪明 - ▁attempted - licher - ▁fearful - ë - angan - ▁hörte - 爸 - ▁jamais - ▁straw - 标 - ▁chat - ▁factor - ledge - ▁gathering - uki - agara - ▁私 - かん - jet - ண்டு - ▁permet - ▁mirror - gers - 戏 - ▁жизнь - ▁vest - ▁آیا - ▁conclude - Ho - ▁besonders - るように - ▁altar - rab - ▁Cra - ▁pag - チャンス - pang - ▁bond - 相談 - ▁なるほど - 本人 - に向けて - wak - ▁Count - ี - stel - 周り - аю - ▁developing - она - できて - ▁abandoned - ▁얘기 - んの - ▁urbo - イベント - гон - ively - arme - 四个 - tiques - 糟糕 - ▁favorable - ▁mondo - zza - aven - bine - ピー - ▁Tages - ▁Turn - 控 - един - 瓶 - ▁никогда - ի - ▁incredible - ▁contre - スの - ▁cars - ▁breed - තු - جز - fach - ▁saint - ▁nennen - 様子 - フランス - ▁probable - ▁bez - lessly - Ro - ▁Betty - ▁sehe - ▁involve - dden - ▁enthusiasm - существ - shya - richten - swa - ▁persone - ▁huis - ▁gleichen - ▁October - ▁personas - 规 - 主人 - ▁ahora - 던 - ▁scrap - 下さい - mash - ▁Ann - ibwa - ▁тысяч - 武器 - ▁reports - ▁provisions - Ah - ▁ramp - 改めて - 担当 - ▁peak - エン - なんですよ - ▁gedacht - ▁arrest - ▁Af - ▁lands - ▁miracle - ▁images - üb - ▁دا - піс - ▁deny - வர - 決定 - 题 - ▁Nation - mart - fic - ▁hated - ノー - ▁occasions - ໄ - bringen - nă - ▁Erde - ▁Job - ▁contain - ▁rod - 档 - ▁Norman - ▁Dra - 수 - 何で - 齐 - ▁2017 - ▁treball - ▁гол - ▁пятнадцать - ▁Were - ▁crash - ▁maken - ▁bai - ине - 一致 - ▁noted - dau - ▁کل - ▁Princess - ▁employ - वि - ▁sym - ▁село - だけど - ▁historia - ость - ▁Jahrhundert - ▁urged - ▁comun - '26' - ▁umuntu - られている - ▁geschrieben - bern - ▁estimate - 면은 - ▁Schul - 살 - ▁하고 - used - ▁ありがとうございます - ▁badly - glich - ▁Zwei - 细 - stieg - That - kah - rav - ▁kings - ▁Hard - 太阳 - Li - ▁Gene - fallen - zak - 很快 - ▁Cur - ▁weet - ▁paint - ись - ▁implement - фі - ▁toutes - ▁rebel - の皆さん - انه - ▁concert - eti - ▁Francisco - 资金 - ▁secured - Mi - 他の - 搬 - ▁trend - ▁entend - ▁Alexander - ▁outcome - ▁contempla - atori - cole - ▁cares - ▁Barcelona - ▁misfortune - ▁revealed - 鸡 - ▁autor - sitz - ▁curl - igis - 動か - ▁cliff - ドイツ - ▁nào - ▁humor - acy - ▁Tatsache - liz - ▁craft - ▁illness - ステ - ▁мало - ▁semi - ▁Brief - rut - 政府は - ▁erreicht - ▁hervor - ▁Bol - 四十 - wig - issen - ▁mem - ▁wanna - یس - ▁menos - ▁Pierre - ▁Scotland - spel - ▁rolling - ▁Minuten - ▁आ - ▁sagten - 美元 - 腰 - rò - 大概 - 是由 - ▁retain - マーク - riff - この日 - ▁dining - ▁mrs - ▁missing - ブラ - 淡 - 一步 - 辞 - ▁guten - bos - ▁pipeline - ▁Mari - ▁multitude - یکی - 下午 - có - ▁plays - 소 - ▁也许 - god - ▁zweite - ப்பட்ட - ▁comparison - 개 - schein - 审 - سو - ▁má - kubi - ▁Bert - ▁amid - ▁Wind - ▁உள்ள - tava - quent - образ - 既然 - 伟大的 - 選手の - ▁shareholders - ables - と思った - bell - 러 - 长大 - ▁volta - ▁gebracht - ▁pār - ▁ответ - ▁Without - ச்ச - ▁horizon - Н - culo - ▁groot - ▁estos - ルー - ▁тэр - ▁recognition - 我真的 - ▁acted - gaan - ▁uru - 摇 - voca - ▁yell - lässt - ush - 了一些 - lusion - пам - 现场 - ▁năm - brach - ары - 的感觉 - arri - ッド - ▁robe - ologist - eke - ということは - ▁난 - ▁ciutat - ▁explo - ides - ▁toute - 行き - ▁Bericht - ▁recorded - ▁passa - lui - ▁statt - ▁sembla - чат - rov - これで - ไ - جل - ▁contempt - cco - inen - ▁studi - щи - bé - ▁estar - 羊 - ▁USA - 먹 - ▁backward - ▁wh - に戻 - ыз - ▁això - ▁фа - ▁sowohl - стаў - ▁dispute - かも - utse - 一つ - ▁каб - 干什么 - лам - гэ - ▁avant - ▁территория - uɣ - ▁음 - ▁gest - ▁gilt - ▁roads - ▁terwijl - খা - ৈ - ▁mistaken - frei - ▁poured - pris - ями - ▁Majesty - ▁Ruth - 孩子们 - cant - ្ - hom - ▁boots - вае - حد - ር - ▁surrender - ▁nascut - ām - gende - ▁wanting - 说我 - جه - ▁Rou - 名前 - ▁shaking - ▁omu - ▁femme - ▁couch - 战斗 - ▁mobile - енный - ますか - besch - ან - ▁brachte - bind - ▁глаза - ▁commitment - ▁ولكن - eco - 还没 - ▁자 - griff - ર - 几年 - ▁schaffen - ▁vriend - ▁ultimately - 世代 - トラ - 我们都 - 検討 - 操 - zik - util - 었 - ▁flock - vuga - 演讲 - dow - ッシュ - ▁treffen - ▁itu - ▁determine - ▁Wy - iam - ясь - пада - ▁loves - ▁È - 女士 - 遇 - ▁adding - ít - view - مار - ろう - ▁estava - ▁throwing - 使う - nol - keeper - ▁Everything - としても - ▁encouraged - ▁Sinn - ▁interessant - 我们必须 - ▁europe - 恢复 - ents - ▁merry - premi - ▁Tr - ▁عند - 成長 - ණ - uche - ▁gates - ▁drag - ▁trat - ▁End - ▁भ - ▁فکر - nutz - ▁manners - ▁кры - ▁vue - ちょうど - aks - ▁threatened - 惊讶 - ▁alguna - 刑 - 警察は - نفس - 你好 - ▁Gegen - ▁klein - гч - 不需要 - nici - ▁initial - ▁gloom - ▁Vereinigte - ▁borne - But - ▁preserve - ▁அவ - 作为一个 - ▁charged - ▁thinks - buka - 改善 - ▁shone - ▁Forschung - 期間 - ▁wander - ▁furniture - 猪 - ▁Club - 負け - 交易 - ころ - stream - ▁Richtung - ▁Volk - رج - 協力 - ▁poste - ▁temperature - '45' - ▁priests - 豊 - zira - 表現 - iser - خط - ▁millor - ന - 内心 - ▁Water - won - ▁objective - 收入 - cija - buch - 遺 - ▁Albert - 脑 - هی - ▁gukora - ▁relatively - شار - 互联网 - ▁lodge - dium - 牙 - 老人 - обра - ▁Gui - ▁leaning - をした - ▁steadily - ▁verd - ▁sil - ক্ষ - 从来没有 - 出生 - ▁materially - ▁grandfather - んじゃ - もいい - ▁Ek - ▁также - mik - ▁erreichen - ▁yıl - 还在 - ▁sink - yal - ▁Fri - ▁Nein - スーパー - ▁inferior - cita - ▁Alex - ▁cared - 扔 - onge - はどう - ▁schrecklich - ▁reveal - unu - ▁engage - ▁Aufgabe - ▁African - هایی - пуст - arra - ▁Fal - 好吗 - ▁vanished - cier - 我和 - ▁свою - 建物 - ▁但是我 - eo - 家人 - ими - mica - сел - äre - ▁verse - ▁poz - ましたが - ▁endlich - 使われ - ▁melancholy - 動画 - popul - вала - τ - ▁barn - ▁complet - ▁hello - 需求 - 过程 - ▁Wissenschaft - ▁povas - 通り - ▁gratitude - 圆 - 記 - ▁право - jon - ෝ - ▁你看 - ▁nag - fühlen - 欧洲 - 生まれ - 込む - fur - ▁Ana - ▁senses - ▁disgrace - mez - ▁camera - ▁сразу - ▁Cru - Don - ▁radi - ▁Verbindung - icht - ▁我说 - ▁vai - ▁referred - — - 用意 - ▁Down - ▁Fahr - ▁antwortete - yak - ▁langsam - ▁Akt - uous - ▁ili - 時期 - ▁groan - má - ▁commit - ▁Dorothy - 楽し - руш - ▁hunt - ▁compliment - ▁monument - 合う - ▁bishop - version - となって - ▁spur - кы - ▁regul - maal - ▁aquel - ▁Beau - 一会儿 - Ba - ыми - 将来 - ▁basic - zwi - いると - ppi - ▁climb - ード - ▁mateix - ▁Gesetz - ▁occupation - ▁Erfahrung - ▁genuine - ect - чай - 隣 - ▁بیا - شان - ▁terug - tare - யின் - 安排 - 候補 - 広がって - ং - ▁unge - ▁inventory - ▁detect - ばかり - zas - ▁rational - ▁receiving - anto - ▁جا - ▁purple - ▁ugly - ▁danke - โ - ▁nuestro - და - 午 - ▁Ferr - ▁reproach - ▁childhood - 家の - 任务 - ▁tutti - orang - ▁personally - ▁её - зда - 并不是 - ▁ته - ▁Une - 文件 - 以下 - 朱 - ▁spectacle - ▁contains - 宿 - ▁statue - 금 - けて - ▁inches - ▁Dum - ▁Worte - octubre - ▁artificial - ▁drama - ünde - ▁expecting - ▁brick - ▁Wu - 一件事 - 早期 - ▁Rück - ▁disappointed - ▁instruments - ▁strengthen - ▁freely - ▁falls - ▁дома - ありがとう - ないように - ▁Jerusalem - 特徴 - ▁apt - ▁wage - tial - 积 - ▁Lucy - 質問 - ▁leaf - ▁Spa - ▁slightest - ▁brow - пр - lou - ▁conception - spann - para - ▁телефон - ▁alter - ▁À - ▁employment - 坦 - 北部 - ▁capabilities - richt - زا - 晴れ - 不喜欢 - holen - dil - undzwanzig - ữ - eyed - ▁Berlin - ▁Matt - ▁tant - ▁exc - 施 - 残り - 你看 - hall - avuga - ▁Scho - даа - 단 - 爹 - ▁bé - raf - これも - ▁bestimmte - ▁kick - 一项 - ▁prompt - جان - 的方法 - ▁wilde - ڕ - 竹 - 献 - aĵo - فی - 办公室 - onne - öz - ▁その後 - وج - ▁disco - ▁vì - 印度 - ppel - falls - шко - கே - 软 - ▁Az - ▁Kas - こういった - 怎么办 - cep - fford - xed - بط - ▁afterward - ▁Valley - miş - schrieb - 保证 - ▁அது - ós - 奈 - 哥哥 - ▁これで - 案件 - Imana - ▁inte - 空気 - power - 支付 - nimmt - 具体 - するのは - ▁oath - ▁Say - ▁Through - ▁landscape - tatu - culture - mbre - 说什么 - дра - ▁Brit - tima - ark - ▁beaucoup - 办法 - ▁след - ▁originally - 一日 - ▁swell - ナー - ▁wollten - ления - ▁magazine - ▁timing - ▁unexpected - ▁startled - ▁байгаа - ores - ▁examined - ▁Viele - ▁pursued - っちゃう - ▁Einige - ▁Main - もらう - ▁2018 - ĝ - ▁гэтага - neɣ - тельно - ▁他说 - کار - гай - ▁лес - ▁Stunden - ▁puc - 場合は - krat - 腹 - ▁lunga - 我将 - ▁revenues - ফ - десят - ▁winds - رم - ember - ▁candidate - ▁December - ▁jemals - ▁bade - ▁giờ - ▁parla - 餐 - ができる - gga - гд - ▁lokal - ▁og - ▁стороны - grün - ▁Pop - ▁neun - ▁свой - nji - ▁examination - ř - 霍 - ▁yüz - ▁procure - 恐惧 - もらって - ვი - ドラマ - 是吧 - 雲 - ▁Arten - ู - ▁decent - 我们已经 - ▁بعض - ube - هن - ▁stout - を行う - ▁دوست - imana - ▁Suddenly - фа - ▁свои - 勝ち - ▁sharply - ▁hunger - flow - ニー - face - ▁ɣef - sac - ▁empire - rell - miz - ▁сколько - ▁farmer - ▁invisible - ার - 最後に - ▁قرار - ▁outline - '400' - mol - ▁collected - gabe - nın - ыч - 日間 - ▁Mount - ▁ак - ▁способ - ▁province - ▁liquid - య - ogo - houden - ▁admire - ▁fille - ▁shell - ▁Dave - ▁kumu - 尊重 - ▁gingen - based - 狂 - ▁hai - ▁wer - の方が - 不错 - ▁Rosa - ▁所以我们 - ँ - اری - ▁мор - ▁kui - ブル - জা - ▁انجام - 你这 - ▁encourage - ▁lleva - ▁marks - 搭 - '28' - ▁kot - 对话 - ▁employees - جو - こうやって - ▁Unterschied - ▁cyn - ▁noon - ▁совсем - lles - ▁stabil - ậ - ▁ша - 頑張って - ▁Reise - カル - 宗 - 踏 - するため - bahn - ▁charity - 贴 - を感じ - ▁рад - 挂 - ▁Tam - ▁tag - 份 - ▁führte - 時の - кат - ▁examples - 不过 - ▁Glo - ▁solve - 有一天 - 有趣的 - までの - ത - ▁otra - 政権 - ▁例えば - ▁jag - يات - lika - ▁drum - ▁resistance - ▁blush - ▁ridiculous - ▁chiefly - quir - どうして - ▁column - 後半 - ghi - ▁literary - dreh - ▁directions - ▁Region - ▁likewise - pä - teria - 什 - 毕业 - 一千 - 酷 - ន - riv - ▁Jordi - gres - ▁hecho - ▁ainsi - ▁ändern - ém - kunde - バン - 手に - ▁건 - kultur - ▁nap - ự - ▁relationships - 爱情 - 荒 - ይ - 分の - ası - ແ - ▁Sarah - friend - ович - 中间 - 所以你 - kap - 柱 - 斜 - 对不起 - lhe - まって - ▁twin - ▁introduce - ▁ده - '1000' - ▁damals - ēt - ково - ▁harder - ▁Give - ▁cómo - ▁avoir - 烈 - 桥 - ーム - ຍ - ▁meantime - ▁colonel - ▁attacked - 是不 - йте - слав - 小心 - ики - ▁sighed - ▁landing - nyama - lica - 同学 - там - 赵 - ありました - ▁你要 - ▁Mai - 户 - 画面 - hack - ▁hoping - inu - بن - ▁Đ - ▁moins - ▁detective - 방 - udo - 抜け - ối - ink - cata - 吴 - ▁whirl - ▁powder - ▁citizen - pher - ▁astonished - 撞 - ес - シン - ▁Gedanken - ▁Gib - 方は - ▁Bishop - 有可能 - ▁crawl - ▁поли - ▁unconscious - とした - 現 - ▁kendi - ترین - 拖 - ▁aufzu - 勝利 - ▁astonishment - ▁будем - 화 - ▁viz - を作って - 成长 - 佳 - ▁Word - ▁fears - ▁gehe - 发生了 - ▁items - ▁дальше - bira - rg - ▁Dame - స - ▁orange - ザー - ▁speaker - ▁hip - ▁dum - އި - ▁steal - anta - 押 - chel - ▁January - ▁mult - 结构 - ▁guerra - তো - ▁anchor - istes - ায় - ▁بس - cular - 自宅 - 这儿 - uter - жен - ▁мин - ▁Mike - ▁Chan - ▁cries - ▁やっぱり - 但他 - 大臣 - 何も - ▁Poli - トン - ▁prayers - ▁Ili - ▁dame - setzung - ▁kiu - ▁hunter - ▁Kraft - 逮捕 - imba - ▁gusa - ży - ▁spear - 荷 - はある - ミサイル - 세 - ▁rested - юр - ▁McC - 食べる - ckle - größte - ▁پا - ▁wandering - ▁確かに - ▁brutal - lash - ▁Fan - ▁descended - زم - ▁partie - 的那个 - 的信息 - ▁bwe - ▁climbed - ▁readily - 順 - まま - ▁Simon - ەت - ▁মা - ▁belonging - ▁Wat - ▁durante - スピード - 她说 - ▁monster - ▁나도 - eixen - ▁weird - کان - ▁你是 - лек - ▁пожалуйста - ▁ໄປ - 圧 - 母親 - wechsel - ști - ▁kinda - 経 - ▁breeze - ĝa - ▁dragged - 各位 - oval - ▁endure - ▁mail - 期间 - 表面 - ▁daher - play - iji - ▁inspired - дай - avu - ноў - ▁چند - ▁shield - ▁Zahl - geno - ▁Schrei - 细胞 - ▁landed - ▁stellte - 幼 - ▁dix - ▁باز - 卷 - うれしい - ▁Company - ▁ernst - ▁há - ▁chase - ▁mig - ▁depth - ▁anderer - ▁wolf - 移動 - ounce - ▁pointing - عل - ▁studio - 星期 - テン - 雨の - 大哥 - rash - нет - zunehmen - tara - krieg - emba - ▁তো - نج - ▁gotten - 说你 - お願い - ्य - 所做的 - ▁wreck - Е - oi - ikan - 怎样 - ▁messenger - 今度は - ▁bedroom - 适 - 食品 - ライン - 握 - 任何人 - 戦い - ▁compassion - ▁tran - uel - ▁conducted - рат - nahm - ▁pattern - 出身 - னர் - 柔 - ▁reaction - 锁 - ▁граждан - 训练 - ▁pessoas - ▁travail - 对此 - sell - игра - ▁Baron - anze - ছিল - 作家 - ▁ئۇ - 不应该 - ▁основ - ▁Reich - كون - geht - ▁возможно - ▁байдаг - ▁sig - iment - ▁Schiff - iere - 而言 - ▁我知道 - ッチ - 週 - ▁públic - ҙа - ▁devotion - ▁2016 - ruf - ▁الد - эк - laş - ▁facing - 込まれ - ▁Alter - ▁쫌 - ▁permission - ▁jur - ▁coward - zzo - ▁droit - ▁Py - 危 - ▁spoil - ▁aŭ - labor - zahl - kho - 不起 - ▁crop - opa - もない - ▁première - рок - なくなって - imiento - дам - 一块 - ▁ticket - ▁indicated - ▁jungen - politic - rán - ▁Mir - ▁Yn - を出 - rp - 気になる - ▁così - ▁bride - ▁planta - ▁nights - డ - ▁achieved - sucht - ▁vorbei - ▁Chúng - ▁cloak - evi - ▁satisfy - 보 - 说过 - ▁strangers - ګ - ▁papa - 记忆 - aca - cm - ▁于是 - ▁agreeable - with - ▁thine - ▁Regi - ▁abstract - ষ - の姿 - ▁Road - ▁seal - 革命 - eği - 意見 - ▁dirty - mäßig - ▁Gil - ▁editor - ▁ntabwo - خوا - نده - ுக்கு - ▁Bewegung - ▁калі - 围 - しまう - zed - 现代 - ▁photo - ▁Government - ▁hasn - ▁país - ▁Test - posa - ▁leisure - ▁honey - ▁” - がこの - ▁düşün - 家的 - ▁champion - away - acions - انت - кра - 满足 - гән - ▁而是 - 違い - ▁email - ҙә - Р - ▁точно - agost - ▁reli - 学家 - 正式 - ▁своей - だし - ▁disturbed - ▁früh - 중 - ▁motive - anı - ▁territory - 今天的 - ▁Britain - Bo - ▁perfection - 回来了 - rir - ▁drank - ▁چیزی - த்தி - اخت - 我们就 - wohn - issa - 销 - 搜索 - ▁polite - ▁hesitated - дем - 哲学 - Ü - ▁дума - ▁Feuer - ích - валі - ハン - 熟 - ▁models - ▁fox - 瀬 - ל - ▁Met - 激しい - ▁sci - 実際 - сл - 制限 - 隐 - ▁finance - 浅 - ▁patch - 갔 - ▁visual - ▁kuwa - 애 - ▁hopeless - gaben - ▁selected - лер - aşı - 危険 - ▁пи - аем - 还不 - 忘れ - ▁percentage - 筋 - ▁gegeben - ▁sources - ▁sowie - ▁Gleich - 阶段 - 禁止 - ▁trained - ▁سي - ▁fetch - ▁naked - መ - 够 - ▁Bow - ▁стал - ▁xwe - ▁18 - acious - ▁elsewhere - ifying - ▁usted - ▁mutual - onde - ▁sola - ▁cinema - ▁perceive - 済 - 不敢 - ▁slavery - Wa - дж - itaj - ckte - shin - ▁ёсць - cine - イギリス - 奇怪 - 儿童 - 적 - 安定 - 收到 - ▁journal - டா - 得点 - ▁Fer - ▁план - 漁 - wil - 他对 - ▁Julia - стве - 抑え - たちは - lings - ▁бес - ▁owned - ▁furnished - ▁например - 的歌曲 - ▁Beth - ▁pupil - 自信 - 赶紧 - ▁crossing - ழு - なんですけれども - 病人 - ▁obra - dz - цию - ▁islands - ▁sido - mwaka - ▁oak - baha - ▁mình - 聞いた - 出した - univers - sili - 地上 - ිය - يب - 兴奋 - même - ▁Fred - CO - 听说 - гар - зар - ▁circumstance - kleid - ationen - ▁führt - чка - ▁muttered - ▁intellect - ▁phil - 最初の - еть - ▁knocked - нуть - ▁gleam - тел - 掌 - mper - ▁disposed - 専門家 - サービス - ▁பொ - හි - niz - ▁Something - ▁oude - ビー - ▁maintained - 承 - volu - ▁Spring - ▁improvements - ずに - sprung - ▁niets - ▁bana - 社長 - пра - ホームラン - ▁byose - فق - üz - ▁dues - ▁那我 - ▁佢 - られています - 元気 - ising - ▁spun - きている - ▁இரு - metri - ▁voir - ో - ▁байв - ơ - бира - фер - ▁amusement - piece - ▁Gebäude - ▁peasant - 今週 - なんですか - oedd - häng - ▁invitation - ▁narrative - お父さん - 震 - 输 - 珠 - ▁regional - ▁laten - ▁continuous - ydi - mac - tief - 荣 - ▁schöne - ▁Doch - 続く - と一緒に - ▁père - volg - قص - オープン - 喂 - 돈 - かかる - ▁Gericht - レス - ▁Hund - ▁Ok - 穿过 - turi - 建造 - 갈 - ▁nooit - ử - 値 - volv - どちら - ▁buri - 特定 - ▁hitherto - 바 - 思い出 - 消防 - 快速 - 应该是 - ▁confused - ▁fein - чная - ▁sensitive - 訪れ - はもう - erweise - huit - siz - ▁departed - 打算 - mouth - ám - ▁Victoria - ▁governor - ▁amor - ▁sunshine - ▁thực - ções - 时期 - ▁loan - class - ▁resumed - ▁jusqu - うち - legt - ика - 能不能 - ▁50 - ▁grade - 采取 - rance - 处于 - にした - வெ - ▁regulation - 没有什么 - 鉄 - ▁sonst - ▁crystal - 市で - ▁goods - ▁gek - Any - ▁Part - lij - 情感 - ළ - тө - ▁disappointment - lir - 欢 - لف - მე - sburg - jja - 十八 - ▁monarch - ▁propose - aeth - 法国 - ▁whence - ▁sins - 子どもたち - ໂ - ▁кур - stimmen - 식 - frag - 指示 - ▁ils - 提到 - ▁suspected - ▁Diana - aĵoj - ▁abzu - 명 - genie - ति - 信号 - ▁territori - utu - diği - ▁medio - 感兴趣 - ▁Klasse - komen - ▁咁 - ඉ - phon - bula - أكثر - ▁instructions - ▁왜 - 执行 - öö - てしまう - ishi - rina - ▁deserted - ▁Hans - itas - 专家 - 偷 - 時は - ▁außer - ▁гэтым - 유 - ▁Universität - 积极 - ▁cigar - ▁jako - ▁havia - еб - Ә - ▁돼 - 巨大 - ▁Seg - 最重要的 - ▁swiftly - んでいる - ▁emotions - 凡 - 真的是 - κ - ▁civilization - nnes - ▁selfish - あるいは - ▁canoe - 今晚 - 隊 - فل - ▁peaceful - ▁attract - 注意到 - 強化 - ▁scenes - ▁Think - 恨 - дээ - ▁ул - 行われた - ▁Groß - 一切都 - ▁exhausted - 設置 - ▁nacht - عب - 我这 - центр - ▁hoofd - じゃん - dó - душ - ớ - 以至于 - hoo - ▁chicken - 上がる - ▁그러 - 虎 - ▁Central - 観光 - 假设 - deck - kwe - ▁lightly - ▁silly - ▁kama - ▁wilt - 作者 - އ - 被称为 - ▁innerhalb - trop - ▁Work - ▁homme - ▁schreiben - ▁finest - ятся - ▁deci - ▁thứ - ▁jaar - ▁viņa - ▁identify - ▁theatre - destin - 先週 - ▁дэ - ▁players - ▁moreover - вез - usta - 理论 - ▁Being - ▁Lin - ▁keeps - 粗 - 莎 - üste - нич - ▁bite - 责 - ▁Hilfe - ▁справ - შ - っちゃ - tex - gefallen - 明显 - ▁beings - ▁league - ▁existed - станов - ija - kis - 帯 - 形で - ▁gifts - ▁نظر - ▁erzählen - っしゃ - 当初 - ▁Seiten - ▁disturb - ▁Pin - rose - きょうの - нен - glia - ären - 我看 - ▁lucky - により - ▁Anfang - 旗 - tempe - モン - ▁processes - 批 - ▁Kultur - ▁namens - пон - 废 - ▁kn - minded - ニア - trat - gehalten - fro - ▁football - welt - ミリ - 忘记 - ▁supreme - 实际上是 - now - fire - ▁lesen - ▁Fair - ▁tutto - 願 - ▁ages - サッカー - 玛丽 - ▁hyn - ▁Dingen - 買 - ▁entscheiden - 的第一 - க்கி - Mar - posing - 課題 - coll - ভা - ▁روز - ನ - 见到 - 生活中 - 启 - ▁Brad - etta - person - cir - ▁происходит - 更好 - ▁cigarette - quadr - elijke - ▁marble - edifici - tego - iling - ▁стран - へと - publik - ▁уг - ウクライナの - بع - ▁Meanwhile - 减少 - лич - 対象 - open - 碎 - いけ - ▁Saturday - 租 - ▁Ха - рал - institu - ▁concealed - となり - பா - ▁accompany - egu - ▁fantastic - lă - ▁captured - 位于 - ände - 日中 - ▁distinguish - ▁stato - ▁Ordnung - 科技 - estudi - ▁Sport - зо - 迟 - kraft - ▁lessons - ▁timid - ▁بار - ▁universit - مى - 更好的 - нал - ▁Ellen - 银 - 揚げ - গে - ▁promote - 言った - 的眼睛 - レイ - 庭 - いわゆる - 浦 - ▁لكن - ▁technologies - 좋 - nym - rwanda - 月の - Pro - ய் - ▁بزرگ - ▁arrangement - нер - 仅仅 - inin - kula - ▁buying - ▁frown - ▁Chance - ▁pala - 相关 - よし - 漫 - ант - ▁présent - ▁zá - 的情况下 - મ - rul - ▁こんにちは - ▁Ton - 就要 - 書いて - ▁هست - 很重要 - lara - ▁Oxford - ▁allgemein - ▁dein - ▁lieben - 朗 - のだ - wang - ▁pendant - range - 目に - ▁Nähe - commerce - toria - 老公 - ▁Cam - ▁Christians - ▁irrit - ▁هل - ▁quart - кро - ▁Computer - ▁grandes - ße - ▁Durant - 和其他 - ▁Rom - ▁retir - ச்சி - zes - reba - ▁records - eld - 設 - stall - ▁corps - rats - ең - abilir - 住在 - schlagen - 大型 - ▁cultural - ▁Verhalten - unga - smith - ▁holds - лез - ▁senior - ▁collect - 的那 - 黑人 - ▁Hä - ▁meadow - christ - ▁deshalb - 有的 - 那天 - ▁expectation - ▁Ker - ▁handkerchief - ▁worried - 拼 - pir - klu - ▁Susan - ハー - 正确的 - いきたい - gab - ▁Dol - tention - uwa - 这项 - ▁memb - konstru - ەر - 摸 - いるんです - ▁hiding - をしている - ground - हरू - ▁Five - ন্ - iah - ▁switch - 我不会 - 彼女 - aves - ارت - ▁Тэр - ுடன் - ▁saß - ffin - 向き - ▁dwelling - ▁Erfolg - ыва - ె - ▁büyük - ነ - ▁Brazil - gelegen - ▁enabled - ▁span - ▁дээр - нат - tır - ▁Ned - ▁recovery - общ - barra - ▁western - iter - ▁dhateng - tör - 反対 - ätt - führung - ▁preparing - 很少 - undu - chie - ае - ያ - 과 - ▁denk - passen - moni - ▁Dio - rade - ▁vorher - бү - اک - ▁nhưng - 每次 - 庄 - ▁virus - ▁Max - ▁Isabel - ▁arī - ▁weapon - рова - ーション - 悔 - ▁beating - ▁examine - زار - ▁Weil - zó - ▁einzu - ▁asta - ▁Steve - ▁gering - ŭ - 統 - ものです - huri - pression - american - ▁monitor - にする - 懂 - とにかく - 匹 - baar - 子的 - ީ - 的行为 - 将军 - 严重 - ނ - 设备 - 下的 - ▁Pope - यो - ▁negro - ена - ▁ເອີ - 乗り - রি - 許 - ▁12 - cog - 穿着 - ▁petite - ▁Straße - nehm - ▁Wand - ▁befindet - ▁killing - 权利 - 学会 - ▁Blo - ▁nuestra - 予 - ▁prejudice - 发现了 - ▁stil - 信任 - 佩 - وح - 想想 - ▁vivid - を受けた - ーク - 出版 - umva - そういうこと - 小さな - 当て - ▁fishing - 券 - ▁Ŝi - гл - ▁Fox - informa - かね - 遊 - ▁thirst - ▁voy - ▁Nan - 進め - ▁dish - 変わり - ▁rug - ▁arts - 都市 - 来年 - 進んで - ▁hebt - 区域 - drücke - ▁зас - ▁inquiry - ほぼ - مون - 伺 - ▁surprising - ショ - 例子 - らせ - 辺 - ▁واقع - 表演 - ▁telephone - ▁beard - ▁давай - ▁analysis - 提高 - ▁urban - ▁важно - ▁heir - ▁preferred - itud - ▁assurance - จะ - come - ▁Winter - тка - ▁tenía - 可能性がある - വ - crib - って言って - yobozi - ▁anh - كا - ▁bunu - ▁гэты - 議 - 神经 - ▁сай - 研 - ▁можна - सा - 付き - ▁laut - 的机会 - baba - 员工 - 傻 - 千万 - ▁دې - ▁complain - 开放 - ของ - kru - ▁glimpse - 规则 - ▁политик - سە - 敷 - bab - rons - ▁characteristic - ▁establishment - ▁drill - を変え - ▁belongs - 봐 - 責任 - 鹿 - ▁Upon - oba - ographic - skri - 것 - 彼の - 订 - ټ - ▁наверное - Ubu - ▁eaten - 計画 - ▁தொ - ▁sais - ▁Gebiet - ▁specifically - ても - rê - ▁essay - ▁complicated - leh - 误 - ▁какая - onda - 宣 - かない - それでも - ▁Gründe - ▁bend - ▁În - 伸 - ▁menschen - ▁Deshalb - isierung - ▁Thanks - 巡 - 脳 - 就算 - 軍事 - 見事 - ▁akk - rite - 佛 - ▁километр - ▁schedule - cura - 正是 - ▁Unterstützung - 丢 - 宣布 - ера - 工程 - macht - ▁Hoch - 运行 - 要去 - worm - ▁توان - glio - 遗 - ▁moest - соб - ▁власти - ජ - ▁vow - 抜 - ▁revenge - ▁fortunate - ▁چا - ▁precisely - 融 - わない - ▁Fuß - 梦想 - ▁sed - ▁име - 测 - cam - ▁Dieses - களில் - uno - නි - 時間が - sman - organisation - ▁гро - ▁solicit - ▁зачем - ▁Bedeutung - 開始 - iriza - 长期 - となります - ▁partnership - چى - 子里 - 株 - ▁solchen - 看来 - ▁legend - stock - ▁fabric - ▁anticipated - wagen - ▁Bezug - ▁approval - ▁whistle - 下降 - ▁Moses - 电脑 - ▁Flu - ウェ - ▁sino - ҙы - ▁sailed - ▁tool - ▁barba - wch - ▁Nick - ▁preserved - ಸ - гре - 所以我们 - ▁тысячи - 発射 - ▁erfolgreich - larını - 防止 - ▁Са - ▁facility - 해야 - ▁shooting - ▁composition - odi - 蒸 - ▁vậy - 不一样 - ▁Ross - 内で - 助け - が出て - ▁Ara - ▁gün - ór - ▁starts - ▁استفاده - ▁Brother - ▁segui - 这么说 - empl - ▁darling - ▁Esto - idas - ▁heavens - 纽约 - ▁aveva - цэ - شک - 窗 - 연 - ల - ▁propos - ▁boast - ▁häufig - ▁hefyd - ▁одиннадцать - сказ - 民主 - 証 - ▁zona - 来て - lian - ▁snake - 東北 - وز - ▁definite - ▁gebe - ▁chemical - 生气 - freund - ▁asset - 生意 - ლი - 私たち - moto - 显 - ▁harsh - ▁zeker - ▁таксама - સ - ▁Frei - ▁minimum - ▁provides - klo - の話 - 我跟 - ่ - ilor - ▁device - 失去 - 菌 - ▁attain - ▁graceful - ▁আছে - ல்ல - ▁wereld - ходит - ▁parallel - ач - 隠 - ▁febrer - 这意味着 - 取得 - 按照 - iyorum - 职业 - ▁clothing - を作る - ▁confession - 闪 - зем - ▁нашей - ▁свобод - zte - ▁accent - ▁выступ - ▁ongoing - ▁indifferent - patri - pá - ▁transformation - 恐 - しまって - ▁அந்த - ells - stimul - 実験 - 支え - ▁bulun - quí - 它会 - ▁اگر - ▁Пра - 私たちは - ән - ホテル - ▁constitution - иш - ړ - ▁cub - බ - ló - ▁panel - раж - うん - 粒 - ▁gracious - 是他 - ▁gallant - zam - 浸 - үүл - hard - 歴史 - iber - 跟着 - ▁Maar - ▁Familien - 種類 - 目を - 权力 - ovan - Ä - しようと - krit - ▁values - 공 - 软件 - 関係者 - ▁strict - ▁patent - ▁poc - こちらです - 読 - ▁shine - сол - ğer - iller - हा - 와 - ▁rien - ▁compare - ▁pret - 選手は - kil - ▁Seven - fassen - ▁같은 - El - ▁trait - tè - اره - 减 - 展開 - η - ▁давайте - known - dors - ▁dig - ▁scholar - ▁سن - ▁identity - ▁moderate - действ - word - pole - ▁practically - ▁isso - ▁русск - 变成了 - ▁visitor - ▁são - ▁struggling - reichen - リスク - ▁Wahrheit - 愤怒 - 做到 - viol - ▁soup - とっても - ▁interfere - ▁dressing - 艺 - ▁rarely - ▁بیشتر - ▁ჰო - ologia - ▁pint - صف - 不在 - aş - ▁lightning - ▁verändert - ▁mamma - مال - cot - ▁本当に - ▁той - 老婆 - 民族 - ▁finds - ▁colors - থ - roy - ▁crept - archi - ▁doorway - blas - bringt - ▁bị - kuwa - 黄色 - ▁Carolina - ▁execute - ▁transfer - は何 - ებ - 宇 - 言われて - 促 - るという - ▁curse - ски - ▁Gut - 得多 - মি - ▁profitability - 签 - ▁rabbit - 资 - ▁swung - ▁defeat - ▁wären - 湯 - maakt - هر - cı - copi - ▁په - ▁medium - yong - lö - маш - autobús - ▁Autor - けども - rok - プーチン大統領 - بت - past - ได้ - trud - ▁doctors - iche - cional - 说明 - uber - ▁sober - ▁Whit - ▁insurance - ▁wrath - すぎ - nwa - ▁أي - isieren - 让人 - ▁solar - igh - ශ - ▁zweiten - قی - ordi - 公司的 - 就能 - case - ▁Heute - ▁â - 不得 - иа - vê - سى - ▁trends - ▁cael - ▁german - ▁outward - 的世界 - といった - eaux - ▁player - ğu - ▁Tak - ▁nieder - したと - ▁submit - ▁Times - бур - 同志 - 军队 - hoor - ▁arrested - ▁Glen - 的手 - abwe - の一 - ▁Geschäft - jahr - ▁railroad - されていた - みると - Come - раг - 成本 - 需 - 迎 - ▁arme - ▁successfully - ▁یې - 배 - ▁favourite - ▁princip - ▁hanno - kolo - ▁quel - граф - ▁Stein - けどね - 的关系 - ▁Preis - ▁объ - 昼 - ▁Merc - 撮 - wahr - crat - sort - tip - ▁imbere - ▁bowl - 或者是 - ▁gazing - зов - ▁frankly - ▁Angeles - 무 - děl - ▁festival - 数量 - 体験 - ▁месяц - 胜 - 三年 - ▁madame - なければ - abana - 祖 - них - 鞋 - ▁Dit - acak - ▁riches - ść - ▁Sara - кор - ົ - ▁breathing - ▁marched - capacit - ▁angels - ▁hatred - tering - iaeth - ҥ - ▁shudder - iens - ▁touching - ▁Typ - ▁upstairs - ▁вами - эконом - 두 - ▁tudi - ▁Venim - 番の - appel - nuncia - メッセージ - weit - orden - firm - ▁decrease - eck - ři - 不够 - ▁aby - เป็น - ează - මා - ▁можа - 出てくる - ▁invent - 庁 - gani - ▁vede - ▁despre - NHK - ていて - 込んだ - ▁Iran - ivo - ▁belt - ▁graduate - ▁elegant - ▁Rand - chip - ▁introduction - 回転 - 無理 - ▁extensive - ▁Ball - lari - ちゃった - teg - ына - bridge - ▁split - mesi - ▁tonight - ▁hinein - ▁torment - рук - пес - おか - fica - ▁entertain - lima - を使った - 년 - ▁наши - ▁pump - 要么 - 猜 - ▁болон - セット - ▁cleared - ▁трэба - 抢 - ▁stamp - voir - வில் - ēja - 大体 - レベル - ▁beasts - ӧ - ですし - прост - 焼 - ▁времени - მო - erfolg - ія - ево - ര - undi - нік - 上の - 晓 - ▁passes - сал - 的节目 - ▁номер - ▁session - nemen - ▁sustain - 者は - ▁которых - кров - ▁jealous - 枝 - ▁acquisitions - ええ - 增 - 何を - 在这儿 - 那么多 - 药物 - 著 - 摆 - 応援 - ▁compete - ▁outer - 日から - bak - tiga - रि - ▁Whi - ▁обо - '29' - য়া - ▁hook - ிருந்த - পর - 泉 - しょう - ▁immortal - ってくる - assi - 全員 - ▁provision - ▁beaten - ▁прямо - 放心 - ▁thành - ▁Personen - ▁attracted - システム - 까지 - ār - کش - And - ▁exceedingly - ▁caution - ebilir - 意见 - алі - ebb - ▁verbunden - coloured - работ - দা - ▁ndetse - dır - そこで - ちなみに - ▁cose - ило - ▁manufacturing - әр - غر - 大事な - ▁говорят - ▁Town - gana - 特別 - 回目 - вят - hound - nsen - んじゃないかな - 金メダル - 了这个 - ▁circ - ▁Party - ▁zuvor - ▁carpet - ▁restored - kia - ток - ался - ますので - 改革 - の問題 - たちの - ▁Emily - ▁Free - embe - 先に - jär - dog - ▁کرده - ますよ - 沃 - note - 億円 - ▁wives - liko - ▁política - มี - 德国 - ском - ލ - صور - ▁commander - ご覧ください - 购买 - ancy - 飞机 - respon - 途中 - ▁gyda - cide - 的脸 - ▁relax - 敌人 - ▁взгляд - 間に - 집 - どうですか - 的心 - gum - めた - ▁факт - lauf - count - تج - ▁Ebene - ▁determination - ▁founded - GAAP - それぞれ - ▁Entscheidung - tē - ▁sixth - etto - ▁Gas - investiga - arian - kou - عاد - lever - ppy - yim - 怎么了 - ▁دیگر - aran - 附近 - 発生 - 息子 - ▁Vis - ▁rapport - ▁speaks - ingham - ▁Bücher - ▁generate - 歯 - علم - ало - En - ▁behalf - ▁pap - ますが - ▁Gall - 插 - slav - лем - ஃப - ▁encounter - 你别 - ▁Way - 把这个 - ▁гос - ▁сме - ▁wana - のない - スタジオ - schap - 時点で - ですけれども - imos - ▁تمام - のうち - ではありません - が起き - ▁acknowledge - ਾ - ▁disaster - езд - تان - ▁elect - енные - 鮮 - றை - ▁Time - igne - έ - وك - ঁ - ▁pains - ▁যে - というもの - 確保 - ▁roar - ところに - vara - 地元の - ▁Tour - ope - ▁indicate - ▁مق - 销售 - ีย - just - ▁trot - 问题是 - μ - じゃなくて - ▁мяне - とする - 钟 - 飲 - ▁Pot - ▁elbow - fè - hul - ▁Ces - ▁mom - ▁discharge - ▁profess - motiv - 一九 - ▁kleiner - 但这 - ▁impressed - 何が - ỗ - ürü - 再说 - 忘了 - 逊 - ▁Blut - 用的 - スタ - 你去 - 終わり - ▁moja - ▁None - ම් - ▁Software - liste - 理想 - ▁Medien - ▁regards - ений - 自民党 - ▁放一下 - ▁constitute - 两天 - ▁exists - 把自己 - ▁hedge - ▁euro - 被告 - rika - ▁monk - ның - ▁multe - 汁 - lj - ▁pris - ీ - しかった - ▁prend - ▁яна - jung - 难道 - ▁Dor - 播放 - ▁Ĉi - '%' - ▁Aquesta - 適 - ▁restless - を迎え - ▁vergessen - ▁Nie - 体を - 独自 - ▁Jen - たっぷり - ▁соглас - ▁говорю - ▁grandmother - ▁punto - ▁appointment - ▁wordt - vide - 地元 - üyor - ▁näher - るような - ▁operator - ▁jury - 阻止 - ▁pense - мын - ▁television - ▁знал - ▁casual - ▁دارم - ▁funeral - 邀请 - 体验 - リング - ▁さて - ules - ▁tribes - ▁dip - ▁Sud - ▁aggressive - 連絡 - 逼 - صر - ▁我在 - ▁ነው - ▁bueno - ▁Mill - 同样的 - 裂 - cía - ▁awkward - ▁ceremony - YouTube - ▁занима - ▁tat - るのか - ▁Dutch - ですので - дь - ▁그니까 - ▁drugs - ▁menyang - лат - ▁grounds - ▁parece - ▁Geschichten - ▁cau - ▁Sar - ▁vorstellen - ▁properties - 历 - ということを - ▁testing - gation - 没有任何 - poko - mund - ▁answers - 执 - ▁Dal - ▁jug - ▁erinnern - ▁этому - ▁спасибо - ▁remainder - sail - ▁Blue - 一定会 - ছ - ▁thro - sley - 一句 - fred - hale - taba - ▁место - optim - ▁стол - ▁lean - 資 - ట - ▁Wilson - ▁suchen - gā - gabo - print - ▁radical - ді - 了他的 - shaped - almente - قة - ए - ▁stole - овых - მა - 前面 - ▁höher - ▁shiver - wegen - ▁chez - maya - agon - simil - ▁Heaven - ▁чувств - anye - ▁lloc - 壮 - mother - ▁Kampf - 中継 - cell - dum - ▁differences - ▁whereas - '2019' - osos - wenden - を持つ - ▁relieved - ▁coisa - ▁scarce - 壊 - 径 - ▁oameni - 多分 - ▁consists - 工作的 - பு - кус - 周围 - ▁tranquil - gado - ▁flank - fälle - 모 - えた - ▁dedica - ▁Sh - ▁Rep - ▁опять - yana - druk - ibility - ທ - ▁binnen - ufu - ▁Donald - ▁folgen - ستان - wunde - 公平 - ibu - ▁Zeitpunkt - 早速 - めちゃくちゃ - 爬 - ▁lodging - 道德 - ▁invention - 我的意思是 - チャー - ▁owing - idades - 用于 - ▁Instead - yê - ▁مورد - гы - ▁trifle - ▁baz - 年龄 - 业务 - 氷 - ũ - bí - 公众 - mmy - 分け - るんだ - ▁jail - ▁barrier - ▁positions - чен - ök - 敵 - ▁Barr - чных - 组成 - ▁concerns - エネルギー - ▁letting - ▁při - ▁echo - ▁баш - 载 - 灰 - piti - tour - тся - ▁envelope - obli - 太太 - 很容易 - が発生 - க்கும் - ▁prominent - radi - ▁Among - ▁Mah - ғ - 恒 - ା - ▁Cel - 首都 - кин - 경 - пос - ▁Home - ▁depths - ▁volunteer - ▁Hauptstadt - ▁omdat - ▁indulge - 的意思 - Lo - ▁reminded - regen - zustellen - جم - ▁Lassen - ▁Mun - 받 - 低い - agu - 过程中 - ▁Ideen - 侧 - 旁边 - ▁traf - ▁blanket - ▁profitable - ▁Ari - energie - 闭 - ▁Dev - arna - 范 - 慢慢 - ▁dislike - tale - ▁Wall - ▁dien - ▁trusted - ▁Make - ▁Mich - ▁covering - るのが - 京都 - 你不能 - hill - ▁comprend - ▁parted - 厂 - 見た - ▁perception - 生活的 - нская - が行われ - もありました - emt - лас - 识 - 一本 - ▁Möglichkeiten - ▁darf - ▁достаточно - ▁vollständig - ▁Essen - кие - ގެ - ▁выбор - していきます - 全国の - われ - 賀 - 狼 - 競技 - カレー - ▁Amazon - 腕 - ▁grows - ԥ - нае - 默 - 국 - 吐 - γ - ▁Pacific - 월 - ▁feeble - 一段时间 - ▁elected - ▁Handel - ▁dive - ▁hadden - ▁Back - lag - ▁attendant - ▁Rob - މ - ▁endeavor - ▁Spirit - vē - ▁soci - ▁Soon - 恵 - tanz - labora - ▁Хо - 很大的 - klar - てきました - かわいい - fia - ▁tribe - ▁celebrated - 攻め - ති - ▁anticipate - かけて - 桃 - ▁passions - film - воль - ▁ett - дон - ▁chứ - 她们 - kuba - грамм - ▁имеет - ▁denied - 故 - ▁traveller - ▁Slo - ння - ▁cyo - ப்பா - ▁mechanical - ▁creative - ▁complaint - ▁Ба - ▁Viņš - ccion - ▁tis - 败 - Ч - شي - Univers - ▁Run - meister - ▁gö - ▁endeavour - 购 - ▁Sav - させた - ▁robust - ヨ - ▁meines - ලා - 同事 - に行く - ますと - ▁nunca - ▁hijo - ▁fury - ▁Much - 養 - 机器 - ▁بازی - 不如 - ということなんです - ▁oleh - 出演 - まれ - ▁después - 継 - 門 - ▁Center - 人数 - ▁replace - leigh - ▁definition - gren - ▁wichtige - ▁Florence - ▁нельзя - ▁Englishman - 必ず - ected - ▁esti - ▁option - shore - コース - uld - ப்பி - ▁jaw - ຄ - alan - 乎 - ▁siempre - 过的 - ▁былі - ▁Lehrer - vig - ык - 始 - だね - ▁Versuch - ▁hinzu - ▁collar - equip - ▁ähnlich - ▁gleiche - 也不会 - ывать - ▁funds - ▁appetite - ▁является - бан - кар - 詰め - ▁limbs - ダイ - 夫妻 - ări - წ - ۰ - stimmung - ▁folly - ▁собира - 動物 - かかって - ▁excess - ▁Şi - 主任 - ▁Buck - пен - 所说的 - fish - bred - ▁còn - ▁bloom - ▁tones - 等待 - wari - 错了 - gion - オン - 复杂 - ▁애들 - パス - zw - 他们是 - ▁Ralph - 解決 - ▁vigorous - スク - नि - を作り - ▁undertake - 上で - ஷ - 濃 - ר - gge - bikorwa - ▁Hey - ▁Himmel - までに - ▁Negro - big - 应用程序 - 香港 - regel - коп - りの - acre - 方針 - edd - imento - dha - ▁després - ▁Pod - ▁wash - lē - lun - お客さん - كر - 王子 - ▁таким - mates - ▁joining - ▁fatto - ству - 으면 - ▁applications - ▁nail - ▁desires - ற்க - 我不想 - gué - tiu - ▁Radio - ▁disappear - 实验 - മ - ▁Should - ovi - 奶 - anima - ▁Kel - ▁autumn - ▁Revolution - loft - న - 找到了 - 会談 - がいい - ▁общем - 泥 - mination - ▁plea - ▁gwe - 上げて - ▁temptation - ziel - ▁saa - 輝 - ▁perpetual - джа - occupa - ▁tables - ▁lend - の世界 - rop - حر - ▁Austria - 红色 - ▁studying - ゾ - を示 - そんなに - ని - ▁Rue - ▁label - ▁以及 - 限制 - child - ▁fitted - 友達 - раш - ▁daylight - ▁pearl - ▁taxes - dauer - moi - ঘ - 議論 - 崩 - ▁Shall - wise - ▁competi - ▁convey - ▁setzte - ▁deutlich - 알 - ▁romantic - ▁uncertain - 紧张 - ▁тому - 引起 - 層 - ▁shriek - ▁Regel - ▁obscure - каў - ▁уу - schlag - 谈谈 - 答应 - ▁Canadian - ▁далее - 混乱 - gone - ▁altijd - க்கப்பட்ட - 辺り - 带到 - Let - း - ▁distinctly - 让她 - ための - ▁draußen - Emp - ▁образом - ▁weren - mora - 보고 - ▁beheld - оны - atan - ว่า - ▁saving - ▁visitors - ▁seed - ▁hastened - ▁Japanese - ▁resting - imit - ▁fazer - 같 - ヨーロッパ - ませ - ▁کردن - 一気に - 敏 - さを - インド - ▁chapel - 各地 - യ - ▁Fehler - に関する - ▁Var - ҳ - ຮ - ぐらいの - ▁prevented - ნ - 航空 - っていく - ▁leather - фон - 珍 - guer - ▁adjusted - 雑 - евская - ▁romance - ▁ўжо - 核心 - ▁formerly - нэ - ▁laying - பே - дом - ඩ - کا - 負 - ▁maison - 定义 - رە - țe - ▁brands - empe - голов - ▁Polizei - 模型 - ▁Uhr - вен - しよう - ▁собственно - gina - ▁alien - ▁package - 肥 - ▁продолжа - を超える - ▁Bett - ures - тик - ▁administra - ▁Nar - ▁Clara - rand - ▁عنوان - dock - puesto - ▁Ash - 读者 - মে - ▁termina - isk - ții - Ver - 分钟 - ▁fühlte - トリ - 物质 - енных - 史上 - berry - ▁newspapers - ở - hte - ủ - でしたが - аар - 面临 - homme - ogen - ▁poll - 创建 - borough - anzwe - бой - ▁unpleasant - ▁depuis - ▁tire - ▁domina - ▁skirt - 我们也 - ▁parish - ▁öffentlichen - ▁pří - 乌 - ▁searching - ベスト - үр - ▁glaubt - maker - ▁Open - 发出 - ▁municipal - 버 - 基本的 - ▁delivering - ▁Fisch - ▁Bud - ▁formula - ▁seconds - ▁rela - ▁такого - ▁Nobody - 就说 - 後ろ - ▁democracy - ちょ - पा - さあ - 付近 - 予報 - ▁cá - ▁eben - She - ▁formation - ▁feu - ▁Stand - 我不能 - ▁uwo - imper - ▁абсолютно - ▁ஆஹ் - dapat - 发生的 - ▁geweest - ▁ox - ▁achter - ▁gale - ▁Quan - вых - egi - ▁الآن - ▁каждый - 障害 - ▁troubles - изм - 离婚 - ▁Monte - würdig - eight - 稳定 - шта - litz - ▁Vielen - ▁clima - 追求 - usu - লো - ṣ - ランナー - kül - 得很 - ▁olduğu - ▁continua - ktu - 问你 - 我不是 - ▁nas - ▁stake - 严 - جی - 億 - ▁그리고 - ▁profile - ▁kannst - 在美国 - ▁prac - ▁блок - ▁вместе - 为什么要 - ▁surrounding - してた - uge - Applause - stav - ▁Ever - 的女人 - ▁rivers - ▁должно - tò - ▁funding - ены - ▁Zum - ▁Several - aciones - 体制 - kati - ели - ▁scandal - zogen - 速度 - 上昇 - onder - 牧 - ▁ihe - 再開 - 当他们 - тыр - ▁promo - ▁churches - あんまり - 可怕的 - ▁tienen - ▁humour - ▁gerek - mast - män - ▁دور - hun - ▁mock - 添 - 担 - ▁injury - besitz - 作って - ▁Jewish - 真的很 - ර් - num - ▁offers - şti - 在他们 - altra - бри - пле - ▁condi - ▁rat - 市場 - шә - ▁чт - жив - ▁sicherlich - 文章 - ▁blew - ▁subtle - ▁dalla - そば - ▁компани - ใน - ▁flashed - ▁Netz - nça - 投入 - ▁самое - ▁delivery - ▁ấy - 使って - ▁Fel - 微信 - ▁reception - änder - ▁habla - 质量 - ivity - फ - lha - ▁residence - ▁uses - 領 - ▁contracts - loj - hut - ▁начина - ▁Schi - ▁stomach - ▁cable - ▁puzzled - 还能 - ▁kad - ▁communicate - ▁Ul - 这些人 - 倉 - нулся - oce - gelassen - 是非常 - ▁사람 - 寄せ - 才是 - ▁forecast - ▁passionate - ▁туда - 公園 - テーマ - رسی - ▁doubtful - 勝負 - ப்பட - ▁murderer - ▁riesige - 仁 - ▁familia - 简单的 - ▁Wald - ▁zuerst - ▁Partei - ▁offensichtlich - ▁Od - ▁echt - ▁президент - ாள் - ほら - ▁없어 - ▁Foto - 避免 - क् - 兴趣 - ▁Fenster - 对我们 - 専門 - ▁libro - ▁accused - ▁Georgia - kamer - 个月 - ência - кер - ▁результат - つい - инский - ▁varia - ▁writers - ▁Spo - iadau - ▁parce - ▁إذا - 取り組み - 里边 - 暑さ - 撤 - ▁erwähnt - ▁двух - важ - ▁accelerate - ▁nnyo - ▁copper - ▁scratch - ▁stolen - 忘 - ї - 很大 - ▁cual - Sch - ▁facilities - 黑暗 - ▁Williams - stö - ▁layer - ▁Mira - ▁Hotel - 有时候 - 外面 - 十六 - 闻 - 妙 - ▁breit - ▁призна - ▁upset - ▁swim - 进行了 - wiesen - ▁International - ļa - steigen - kund - rock - amendement - ▁хоёр - ▁wrapped - рг - 埋 - 操作 - ▁corporate - 颜色 - 시간 - ▁bother - ▁programme - ▁Ehe - ▁abbiamo - ▁Markt - ▁popula - スタッフ - ό - mming - ▁Barbara - kauf - ▁hinder - amour - ஓ - ▁antwoordde - 姓 - 临 - шо - ▁Landes - ▁shalt - jer - дым - ▁Ren - 現在の - drin - isan - ▁stopping - ▁unglaublich - 部队 - 職員 - ウン - 程序 - ▁haut - ợ - ▁überall - らし - ▁parto - 忠 - ▁Anthony - 威胁 - ەکە - を行って - ピン - rique - ▁silently - 逆に - ライブ - ▁leaped - sters - 機能 - ▁peril - projekt - 一句话 - ワールドカップ - ▁wax - のもの - ▁Wieder - ベル - 遅 - ▁offence - ▁Gor - ístic - 演员 - ▁Presently - ▁exquisite - 联邦 - দে - ▁acid - daten - 育て - قابل - を引き - ガス - 않 - dores - ▁poden - ▁machines - hana - 玛 - media - ▁golf - posta - пят - ▁شخص - clair - 眠 - ▁Din - ▁Mis - ▁doom - いきましょう - ▁பல - cru - geschlagen - ▁wire - 授 - だが - ▁amigo - ▁Damit - 腾 - ▁Kir - ▁otros - ▁clinical - ась - zaba - ▁Tele - ▁Egyptian - cada - ▁Text - ▁ansehen - ▁advantages - ▁stirred - ▁Pur - hla - кө - りとか - غل - 叔 - 多了 - стру - 基于 - Un - ▁Southern - クラ - ▁fastened - 買い - ▁consequently - 彼女は - zoek - ▁disguise - 教えて - ▁nei - ▁washed - ▁Emma - ಕ - 没想到 - 反正 - かせ - 安心 - ▁nerves - 時から - ▁Tier - 某个 - ▁choses - ному - gestalt - ▁teh - mato - ▁containing - 的力量 - すいません - machen - mple - ▁damp - 距离 - nego - 那边 - 教会 - ilo - ▁Asked - 都可以 - myaka - ▁deadly - 做的事情 - ▁cotton - moor - 伸び - ▁아니야 - ▁Partner - いまして - 하지 - gó - گیر - 더라고 - ▁accomplish - ▁February - ▁truck - ▁Republican - ▁Know - 去找 - plic - ▁seventh - せて - 空气 - 初の - ▁Cross - タン - 물 - 猎 - ▁foarte - ▁ascend - liği - muz - 感受 - 攻击 - ▁sue - ▁путин - 分かります - 事業 - 教堂 - 하면 - wesen - ▁Lyn - onia - ▁Stre - taka - ▁процесс - 野さん - ▁defense - zina - сем - Te - 岁的 - 湾 - 好听 - хай - 海洋 - ▁số - ▁Organisation - ▁offices - ▁assembled - ▁skull - െ - ▁supplies - ▁понятно - ▁هغه - ▁resemble - ▁disp - 世界的 - وار - த்தின் - 遺体 - 在他的 - ▁زندگی - цев - یز - dust - سته - وف - ɣa - ▁bias - ▁minut - 真实 - વ - eres - heben - ▁interven - 孩子的 - 对自己 - ▁Vin - äch - よい - 拿着 - 因素 - ▁egy - ına - ▁bundle - 立即 - ▁nonsense - rico - ▁weer - ▁esteem - lini - するなど - 看了 - ▁john - ligt - 主席 - ▁rede - ハイ - 大谷 - ▁meaningful - ▁dreißig - ▁entertainment - ▁мил - ▁prey - һә - がん - ূ - ▁کښې - 嘅 - 疯狂 - ▁ruler - ▁bears - іх - ▁reverse - 自动 - ދ - ▁vz - ▁Polly - ▁embrace - ▁furnish - яць - ▁conta - ▁Nevertheless - 히 - kore - 彼此 - ▁bestimmten - ▁mechanism - ▁Islam - ▁latest - ▁witch - ▁часть - çon - ▁sketch - 最高気温 - ▁цен - 谱 - ▁fatigue - 确 - 切れ - abel - 題 - ▁tendency - bby - ▁whip - под - ▁persuaded - だったり - 在那 - 一時 - tuk - 報 - 下雨 - ▁neighbourhood - ▁Later - іцца - zeichne - っちゃった - 你需要 - ▁Dear - Who - нев - いき - idée - ▁definitiv - чал - ரிய - éra - automat - いう - ordnung - றி - 联合 - ▁پیش - lug - 新型コロナ - からも - ▁шестнадцать - ▁Power - ▁машин - schul - ▁proces - こんなに - ▁Gw - 刺激 - ▁mutter - ▁gesamte - 疼 - ță - 聚 - ▁mio - ▁Cer - っか - 方も - ▁loĝ - ▁seda - ▁enjoyment - ▁Monday - ▁procession - kai - 通知 - илось - ▁limits - ▁Department - ▁шмат - werfen - ものの - valu - ▁repent - カード - лиз - ▁confined - 化学 - ▁effet - ▁engineering - лей - 的书 - 今シーズン - 的好 - кино - 初めての - neu - 思います - illes - ▁onto - ▁Design - ▁своих - ▁prudent - 去做 - ▁grupo - چه - طو - kind - ▁unterstützen - 防衛 - 催 - ▁virtues - ▁seves - 彼ら - と思うんですけど - 杜 - たく - ことができる - 上班 - дук - ついて - ▁tank - ▁steer - angira - kür - 성 - ▁rays - 俗 - izen - digit - ▁tener - ほん - umwa - 再见 - 别的 - ▁Morris - ▁представи - ashobora - ▁sama - ▁благо - ▁bob - 会見 - ▁бог - ▁temporary - 出して - 角色 - 医学 - ▁elephant - ▁obstacle - ▁pře - ▁gloomy - нымі - cian - さい - ▁Mountain - 也不是 - ▁commenced - べき - ຈ - 某些 - sı - ▁streams - ▁wunderbar - ▁agony - 華 - ▁partially - ৃ - ୍ - ĝe - ▁Telefon - 級 - ▁беларус - ▁тринадцать - 公开 - ▁добра - цэн - fahr - ▁Он - رات - 好处 - ▁natives - ango - ▁엄청 - 你觉得 - geni - ▁verme - ließ - 伏 - ▁relevant - ▁Lösung - しまった - ▁grava - 实在 - コー - 蓝 - 仙 - فة - ▁Matthew - ▁Neither - ▁Danke - ımı - は今 - clou - 회 - ▁tube - 十一 - ▁dua - ัง - 漂亮 - 実現 - tina - 電気 - ▁одной - 飛び - 今も - の一部 - レース - үл - ▁Así - ▁minu - stab - ▁suitable - ▁Esperanto - ▁brains - ▁nuclear - CE - ▁convenient - ަ - 赫 - ላ - ▁sunset - сөн - ্ব - کو - ▁pillow - 東京オリンピック - के - ▁masses - 送り - self - ంట - 剂 - ▁Society - ▁launched - ▁Bilder - 場で - 话题 - Au - ▁맞아 - ▁говоря - ▁rural - 切って - ▁Parti - igheid - мир - ensor - য - 具有 - 情報を - ▁buryo - ▁Bart - 東部 - 指摘 - 挑戦 - ▁Ran - industrie - dala - lp - ▁Ntabwo - ▁entwickeln - ▁guter - ▁helps - 児 - 咲 - ▁bewegen - చ - をつけて - круг - 规定 - 帝国 - ▁winning - mpi - 出来的 - тат - 的钱 - 否 - 規 - pē - 基因 - 跡 - ▁generations - 你了 - 따 - もあり - bird - ▁Kindern - onna - もらった - ▁кстати - ▁succession - yin - えっ - ▁glasses - ▁Sco - 对吧 - ▁roots - ▁masters - いいですね - 気温が - ▁aceste - 巨 - ▁таких - ▁inevitable - omu - ▁равно - რი - ▁schlimm - ▁verdad - ジュ - abandi - ▁lingvo - organ - court - ▁Kri - 閉 - ▁Napoleon - 的能力 - ▁либо - tero - ▁participate - leute - ▁Rod - ▁música - contra - uso - きています - ▁extract - bach - ▁можем - ▁seize - ▁associate - 目標 - ▁Johnny - elte - 前半 - ▁Stone - ▁größer - 苹果 - מ - ▁tragedy - ▁equity - 这对 - üs - kut - dded - িত - fashioned - ▁fruits - 紹介 - 風が - ула - plant - ▁Jerry - ▁chairs - тэр - ▁Kaj - ▁deeds - ▁байх - 闲 - ▁групп - ▁vulgar - ▁Prä - ▁earnestly - дыр - 序 - ▁Ме - imme - ▁боль - acle - gora - ▁independence - arı - 信仰 - ▁jewel - 一半 - ▁standards - 来月 - ▁staying - ユ - ▁commune - 遭 - öd - 孤独 - 第二个 - 機関 - ▁yeux - 迅速 - 重要な - ▁insight - ▁longing - 愉快 - ▁trug - කර - 保存 - ▁recommend - ▁breathe - shyira - ▁från - 으니까 - ▁welke - ▁contents - もともと - ächtig - ▁plusieurs - 添加 - 他也 - ▁Museum - ▁occupy - пуска - 関連 - kreis - ▁länger - ▁veces - ▁Ny - gize - ▁geheel - aktion - ▁Jimmy - timo - él - gid - pack - ▁wandered - าย - uß - ▁söyle - ciò - yne - mpe - 漏 - 克斯 - 性格 - mena - neb - ание - ރ - 徐 - brechen - ▁بسیار - chet - ▁abruptly - рас - ▁Herrn - рог - 缓 - ▁sunk - づけ - いきました - lick - をかけ - lığı - ິ - ▁allowing - tile - yah - lish - ஞ்ச - 都没 - ▁Temp - ▁indication - над - ▁Ту - acco - 采 - 점 - ▁trabajo - hnen - ▁pushing - 她在 - なあ - wich - ▁shepherd - ▁neighbors - ▁guarantee - ▁argue - 身份 - ▁tay - sleeve - 最多 - fitting - fed - 我必须 - ▁mwen - ▁hı - 科学家 - ▁solitary - ▁testimony - ▁constitu - 是这样 - ▁Jeff - ▁Ча - ▁claimed - polis - 保護 - ▁respectable - 哪个 - ▁export - ▁გა - ىنى - є - が見 - ▁barrel - ▁sob - သ - ▁vader - ▁thereby - ▁орган - 我来 - 輪 - ▁numéro - ▁commonly - ▁resolve - ĉa - masa - 補 - ▁anymore - chester - source - ▁zéro - ovat - ▁großartig - ▁Mid - まだまだ - ▁Religion - ▁transparent - ▁которое - 渴望 - ▁Third - iska - ▁summit - лө - rufen - が続く - ▁murdered - ▁Secretary - 奇怪的 - әк - loca - rgi - アイ - ▁category - riz - 卢 - ▁sü - ▁Freude - ছি - ▁quello - 自治体 - stick - gefühl - ▁blossom - ින් - ▁bose - вяр - ▁правильно - ▁outrage - ▁Taylor - မ - ▁raison - ▁Jan - 种族 - مۇ - ▁dolan - 混ぜ - alität - と呼ばれる - ▁иван - 側の - 日本は - ▁Yi - ளை - ولا - ▁guilt - ▁Rad - ▁Lind - stärk - 的这种 - erei - aquest - ▁satu - ▁slain - ▁scri - ▁repose - ほかの - うちの - ▁Wol - 滴 - ват - 创业 - щу - 今日も - 評価 - نظر - 胖 - ▁needle - 你可能 - ▁francs - 男が - ▁journalist - ランド - 영 - であり - 開け - ваў - '150' - ▁alterna - のではないか - 誰か - ▁sao - 地位 - 演奏 - ▁serving - неш - zir - ▁mingled - anu - ▁Anda - ▁dreamed - beeld - دون - сна - 町の - ▁Amy - ▁integration - ▁dividend - доў - ▁Shakespeare - ▁nouvelle - いただく - ▁salvation - ற்று - ▁Modell - ▁größten - ▁Mund - 协 - zier - 喺 - ▁laŭ - 计算 - ▁Link - gewalt - ▁nephew - 汇 - 結局 - Il - gari - ワー - ▁مرد - ▁Wales - 緊張 - ▁chính - ▁pulling - ▁sadly - 征 - ▁Gü - 았 - いよいよ - камі - ගේ - ▁وقتی - 划 - 有些人 - ▁presents - ▁grip - ▁consumers - ▁absorbed - お伝えします - 采访 - 依然 - стер - ▁frost - práv - ▁ruined - 蜜 - ▁قو - mati - 驚 - zima - heure - ошел - ▁বল - ▁replaced - ▁bezeichnet - ▁monkey - 高さ - arma - aq - ža - ▁nerve - лах - 田中 - ▁nhất - 相对 - ▁travelling - ジャンプ - human - ▁Hart - ▁Guard - руу - ▁Vorstellung - ▁Stelle - だという - ▁ساخت - ▁stare - 報道 - 手段 - 在我的 - ▁munsi - 、4 - ▁channels - யும் - ugi - 的吗 - ària - このまま - 连接 - 生産 - нные - ▁Hello - ໃ - 内の - фо - ▁nobles - rump - の映像 - 课程 - 我们是 - ▁Tisch - ▁politique - больш - なきゃいけない - ▁indignation - ký - grade - ▁unfortunately - 习 - 離 - ировал - 粘 - ▁princes - ▁наша - ということが - ▁Charlie - ▁affectionate - みます - પ - なんですよね - ▁hogy - bier - これだけ - 早就 - ▁nämlich - ▁اینجا - 仲間 - ▁газ - posició - 屈 - ▁slender - vuit - ▁Kit - ゼ - ▁হয় - 反応 - 这样一个 - abantu - ▁careless - 独特 - 扬 - ập - ▁cunning - ▁links - ured - shot - ▁perfekt - ተ - ▁floating - ▁veröffentlicht - lice - ▁relativ - uck - ئو - 卫生 - るんです - ▁Omu - руч - ▁sweep - 納 - 思った - 飲み - скі - 悲伤 - hielt - ▁momentum - ▁brass - 銃 - ▁چیز - ▁اس - ▁لدي - ▁effectively - ▁сами - зал - 重大 - ▁теле - ▁okul - اور - 竞争 - ▁assent - ▁repar - ▁Studenten - 病毒 - 说是 - はありません - がいる - ▁страны - ▁stond - ▁Bildung - زو - ▁politician - 为你 - ستا - ▁Außerdem - ▁Ра - 知って - ほどの - داری - wasser - thon - ▁canal - ごろ - ▁tales - ▁options - ▁neglect - ▁Baby - ▁aquella - ▁Quel - 縮 - ▁betray - ▁strangely - ▁typically - ▁fires - عرف - şte - ほしい - ▁demands - いるのは - new - 数学 - ▁عم - стой - ▁Science - ▁Ĉu - still - 灭 - 开了 - ▁scientists - ▁farewell - ▁veni - اپ - 前进 - ▁protected - 没什么 - 尖 - 分子 - ▁status - ▁Anwendung - ▁Eve - ▁fiction - ▁mă - ▁четыреста - ▁muốn - ▁wrap - ▁tobacco - ijs - ыс - cultura - 事態 - unta - 반 - ▁toda - কার - лон - ▁Linie - ▁widely - 村さん - 르 - ▁supporting - ▁recollection - ▁کم - 解除 - volution - ▁tenir - ▁podemos - ▁derzeit - российск - ▁waist - ▁besondere - 書き - してくれ - るため - Fa - லாம் - chick - omen - read - quatre - ▁arguments - ▁Samuel - 祭 - айте - prop - ▁wilderness - ▁Petr - чын - ▁surgeon - ▁pronounced - agit - 化的 - ▁olduğunu - ▁идет - ▁aya - treiben - ▁victims - penny - ▁algun - ąc - Ja - ǧ - ▁dacă - ▁tribu - ▁adorn - iest - ▁cheese - が今 - ケース - 咋 - 現地 - ▁Bridge - gyf - ▁fum - ▁arrangements - 事実 - 跟我说 - ▁kullan - なくても - ▁dumb - 成立 - 这是一种 - ▁sunlight - ▁memories - ▁لقد - 委員会 - تس - ryo - ▁prevail - 暑 - uburyo - みんなで - ▁vehicle - 有多少 - ▁heels - 飛 - やら - ▁верх - ▁limp - 炸 - ▁când - avaient - ▁timber - 这么做 - ▁passengers - vind - 能量 - ▁armies - ▁foul - ▁steamer - gebiet - ▁Juan - ▁suite - dür - ▁assault - を奪 - 的国家 - ში - ▁creo - making - 指導 - 强烈 - kelijk - クリ - にいる - ▁bushes - ▁Ohio - rno - ▁yeni - кую - 斯坦 - ն - 관 - ▁alas - ▁Hoffnung - ▁Nhưng - tano - ▁exceed - বার - фор - ▁vive - ▁Whether - ▁intervals - 概念 - ības - 法官 - 講 - ▁Sä - drückt - ▁brows - ▁notwithstanding - 川さん - ▁sauce - 杉 - ▁duck - ▁harvest - ▁Straßen - সি - aye - ▁domain - ▁opge - қа - fact - があると - ▁supplied - ▁betrachten - 给他们 - などと - crit - imwe - ▁Rei - цыя - ▁eby - rating - ▁willen - 毕竟 - 你对 - lehr - bonye - はおよそ - 熟悉 - 多年 - が多く - なくなる - tamente - 点击 - ggs - を得 - ▁cluster - 鳴 - zorg - ▁친구 - ▁Bruce - oza - ファ - ▁condemn - ▁resident - ▁boss - ▁gravely - やめ - ▁tune - ▁начальник - ▁telefon - ▁Steuer - ваю - 每年 - ன்ன - ndra - ▁regi - برا - 調整 - kala - ▁sabi - シュ - 术 - 柳 - 워 - ▁separa - ▁그럼 - 奶奶 - 传播 - 做得 - いますが - ▁sebagai - いいですか - pac - ▁Philadelphia - 차 - ▁Friday - 봤 - ▁Bö - sozi - やすく - 批判 - ▁interessante - سف - ▁scent - ▁Gesundheit - 堡 - ஸ - ▁Herausforderung - ▁Patienten - ▁technique - ▁Stan - 的过程 - ▁format - 了啊 - inta - веч - ▁decidi - 下了 - 灵魂 - 電力 - ▁cabinet - ▁هە - ▁Weißt - 高度 - 邪 - ▁Senator - ologische - ▁grin - られます - ▁roses - Ц - ደ - ඒ - Mail - ▁represents - ▁investors - conf - lett - ▁inch - ateur - ▁suspicious - ▁serpent - ▁condemned - 写了 - メダル - ▁loyal - gründe - ▁exposure - ▁penetrate - 勉強 - க்கா - ▁promotion - presi - pence - ▁gym - مَ - 森林 - 上去 - inya - ▁mischief - 増加 - ▁году - 向け - ▁daring - ьте - guin - ▁register - 喊 - だい - cloth - ▁confirmed - ▁nivel - ▁egal - 明らかに - gling - ▁Ва - ▁eastern - を入れて - ▁commence - いたします - ээс - ума - trieb - 十九 - шен - ັນ - ▁Abraham - ▁boom - ▁guards - umber - contro - 评 - née - ▁construct - лийн - ્ - ▁blast - ambo - ▁Bil - 世界の - ▁inspiration - 两年 - 狭 - histoire - вым - 也在 - ▁conversion - namen - خبر - ▁செய்ய - ▁courts - рус - なります - fér - 外交 - シーン - 心情 - Applaus - ▁könig - 你应该 - positiv - ▁Romans - ▁Fil - っていました - ▁tête - ▁executed - யான - рад - ▁eerste - री - ▁wool - ▁کردم - ▁chimney - ▁решил - 鲜 - ▁burnt - 我们正在 - 当年 - ▁documents - ▁Clark - િ - ▁wid - لق - ▁Med - 页 - ▁villages - solut - ▁costume - 英雄 - 운 - ▁units - களுக்கு - ▁requirements - Union - すべて - 姿勢 - іў - ▁coarse - 活着 - gau - あら - ▁enorme - ▁பிர - 甜 - позиц - ジョ - ▁savings - legi - ▁emerge - 互 - ▁addressing - 滚 - ▁veu - ▁nasıl - 许多人 - öhn - 和我们 - houd - ısı - ▁آب - ▁Bin - ▁producing - ▁pockets - 誰も - chia - oud - ▁Não - ▁Senate - ▁mama - 都要 - gă - ▁abundant - ▁chances - ▁bolt - reiz - ▁Kim - ▁boxes - ▁правда - 方の - ▁erfahren - 投稿 - 民主党 - dici - сә - を確認 - ▁Position - 横浜 - うまい - ncer - ▁sailor - ▁Folge - ▁unver - ▁license - ▁nên - ▁conservative - For - ▁хүн - ▁langen - ▁Vall - 承诺 - ▁ora - ▁scared - ▁Wolf - 所有人 - 头发 - ljen - ▁uncertainties - 披露 - tuur - 所谓的 - ფ - 側に - 的研究 - या - ▁весь - ▁leer - tega - ▁menschliche - ▁Nay - ▁mère - ビル - سان - ▁certainty - ây - zil - タイミング - 樹 - ▁které - nika - ニング - tato - ▁Bundes - ▁forests - 世界中 - を獲得 - ▁Common - ahan - ▁Titel - ▁calmly - ▁можете - ▁legislation - 创新 - 続き - 农 - はまだ - δ - ▁crimes - entes - ▁почти - ብ - ▁exemple - ▁Eric - ▁passieren - ▁washing - 第二天 - glass - ▁currency - ▁obedience - そこから - gele - ▁لأن - ▁muß - 什么事 - ▁wages - kari - 的那样 - なくなった - ▁тү - 詳しい - fik - メン - ▁risen - anzi - ތ - はその - どっち - 鼓 - ▁Gir - жо - জন - ▁Richtig - 使い - 的一种 - ▁Kro - wende - 딱 - ▁discussions - hst - ▁harmoni - ▁organism - بد - работать - ▁personality - ▁Laufe - んですけども - artig - ▁functions - ▁команд - versa - ▁valid - kti - ▁Bevölkerung - ▁delicious - ▁Obwohl - ▁seats - ▁جدا - ▁Hab - ▁hail - 白人 - そんなこと - ▁dash - の影響 - ▁bargain - ▁Zimmer - ▁Apple - ▁видел - ▁укра - ▁zumindest - ▁spricht - 始めた - gamba - を加え - ▁charges - AI - kud - ▁tenderness - ▁сум - 日子 - 会在 - 凭 - ▁virgin - مات - erna - 门口 - ▁которой - ▁befinden - ▁großartige - ▁weigh - istoj - chem - ▁lofty - about - サイ - 五年 - ▁Aufmerksamkeit - ▁prayed - 做好 - 的一切 - ▁differently - kke - ▁mache - 否则 - ▁criticism - の間 - ▁republic - 毎 - ▁bulk - 交给 - made - ▁vivi - ▁devices - ▁warned - рай - 相互 - жан - ▁danach - 块钱 - 妮 - think - овал - ▁Ober - 召 - ▁loaded - ▁பிற - rog - ▁detailed - 茨 - ▁executive - ினார் - 複 - rechte - ▁بۆ - ▁echte - arbeiten - flex - ▁jemanden - ▁лично - quar - 一开始 - の声 - ifer - ▁tett - きっかけ - ▁önce - lma - ▁Moreover - utilis - 地点 - mite - agne - fest - ▁famille - 我今天 - ẽ - ▁sustainable - くれ - tekereza - performing - ▁armen - empat - ager - ereza - 鼓励 - ▁სა - ▁Sicht - 削 - ▁семнадцать - 倾 - ▁повер - 这是我们 - ワクチン接種 - 彼は - ▁contribute - ▁haber - ▁nawe - еры - に来て - するという - Saxon - ▁động - 少ない - ▁declined - ▁dense - 秦 - 首相 - ▁gaat - 满意 - angi - ▁quella - нию - ует - ▁printed - ழி - 力が - ූ - дог - vio - ▁проект - леп - 巧 - ▁Turk - だろ - ▁других - 是要 - ▁meetings - ධ - lateral - verein - মান - ▁estan - Ab - 楽しい - 宋 - 前から - 符 - ▁divide - 是怎么 - ան - ▁schoon - 非常感谢 - ▁Lieutenant - 最後 - ▁promptly - ▁gallop - ▁Studie - ▁Kab - ▁corridor - ▁chart - ▁energ - meli - 今朝 - ▁erhielt - جن - rated - 走到 - baren - ▁Kno - covered - ▁sermon - ▁Și - ▁rece - kumi - ▁wrought - ▁insan - ▁officials - 品牌 - ▁positively - ▁woe - ▁před - 她是 - なか - ▁Kandi - ▁diverse - ソース - ▁Ell - யே - ▁challenging - ▁ascertain - порт - ▁gardens - ガン - 获 - 肌 - ▁neutral - ▁diu - ▁Ад - ▁Point - 한테 - ▁retire - ਰ - ▁amendment - ▁Бо - 이야 - '600' - ▁kaufen - ▁dragon - 称之为 - ▁Sus - ▁быстро - lasse - を探 - oid - ▁dependent - ▁partir - 進出 - ▁Fue - ▁Northern - 誘 - 懸念 - クション - ▁briefly - ▁назва - скай - ост - 手术 - ▁entitled - 诉 - ▁endless - 果たして - беж - ▁celle - 이랑 - ▁Boot - ί - 仍 - 桑 - ▁clay - ▁centro - ▁rhe - ▁obeyed - üsse - ▁approved - ▁knot - ▁kurze - کت - obo - 听起来 - 续 - ミス - ▁владимир - seits - 出现在 - мах - 可以在 - ▁кабинет - 写的 - 什么意思 - тели - 效果 - 细节 - ديد - ▁Live - 生徒 - Č - までは - ▁Bruder - ▁Herzen - market - 日常 - ▁nang - 幸せ - ▁deals - гын - сид - marca - リード - かし - りません - ались - empresa - ▁Jung - ▁producer - 聖 - 穷 - found - як - ங்கு - ▁Pour - ▁announcement - ああ - ▁quest - 去看 - 通信 - 豚 - entre - ▁globe - ▁paw - ▁barely - িয়া - rühr - ▁Eastern - 裁判 - 文学 - 妹妹 - が必要 - abord - ▁visitar - 背后 - 笑声 - ▁sweat - stau - ▁sees - ▁Tochter - ですからね - ň - ▁kang - ▁measured - ▁آم - ▁humans - ▁donne - すこと - ▁languages - 推荐 - 答 - сю - ną - ▁adam - ▁الث - ▁colours - 不住 - கால - さが - ▁Dur - экс - ▁Harvard - ▁Remember - ▁Mitarbeiter - 剑 - やってる - ▁curtain - قط - ▁сур - ▁Bad - ▁gelang - 针 - ▁hano - remo - ▁Funk - チョ - 訳 - pian - 解説 - ▁amused - フォ - ahu - ತ - ▁fountain - ▁그랬 - بح - ことです - шча - ▁terme - 的任何 - ▁achievement - nico - ▁Wit - 捨て - 说他 - 係 - 必要な - 不知 - 呼びかけ - дает - ▁Ple - ▁Empire - ▁holes - ▁seas - '00' - もんね - ▁periods - assa - 臨 - 冇 - deki - その時 - stup - طي - 工资 - 享受 - 进一步 - どうしても - ppo - ▁schlechte - 面白い - 焦 - ▁Pel - šu - ▁counted - ரோ - することで - hra - 柄 - උ - called - menti - ▁convict - 事业 - 礼物 - ▁Früh - ▁movies - зон - ▁attempts - ▁nouveau - ▁conform - name - space - と言って - ▁brood - шир - ▁Bab - 破坏 - くない - ▁displayed - ▁soit - 我没 - ର - 聊天 - руб - ▁Regen - 区の - abb - ▁schwarze - lade - ▁cats - kere - ▁sofa - ▁procedure - овый - احت - ению - igita - 構 - ▁ample - ▁Tommy - اخ - 臣 - yat - ວ່າ - 普遍 - あなたの - ▁underlying - ▁guardian - 探索 - utilitza - ajo - iwa - ▁Atlantic - ▁rigid - ▁Feld - ▁Bos - 医疗 - ▁consisted - ▁сделал - ▁pressing - téri - ونه - lager - nice - 桌子 - 東京の - 汗 - ▁들어 - ▁pensar - ▁estamos - ▁باشد - 广泛 - ▁помог - 大家都 - 到这里 - ▁specimen - дах - کاری - чил - 出现了 - ▁acquire - ▁injured - няя - ▁считаю - Lachen - すばらしい - ▁simplicity - とう - ▁Fol - ▁flows - psy - систем - غان - 认真 - に行って - ▁полу - ං - ▁revolt - блі - ▁называ - ▁experiments - ということですね - ▁Book - 窓 - тора - ▁проста - ▁ventured - ▁jsme - ▁interessiert - ▁lamb - 发表 - ▁pillar - ▁Gun - ▁Bald - 襲 - 肺 - 行った - 液 - ルド - 均 - ங்கள - ▁dearest - dora - 体育 - risto - há - ▁erklären - ِي - haltung - lijke - 十七 - ▁multaj - 合同 - ütze - schloss - களின் - ▁bust - itari - ggy - ▁Pennsylvania - ▁Jersey - 充满 - ▁afite - 不必 - 温暖 - 俺は - īt - ▁jerk - ვე - ▁casi - wissenschaft - argent - ின - 日本代表 - ▁irregular - 입 - زي - munt - ▁пост - ▁defect - 应用 - ▁پای - nomen - მი - ▁Number - ▁ليس - 工場 - 変え - ▁garments - ▁bitterly - 和平 - helm - ▁identified - 芸 - ▁Dort - 想知道 - ▁convention - ▁власть - xes - るから - 夕方 - ି - lg - ▁resto - ▁Carr - bora - ▁Mut - بان - つけて - 審 - ყ - 数は - ▁trembled - 的观点 - ▁flee - ழ் - mbwa - ▁стала - حة - ▁bars - лив - 哪儿 - 也就是 - 凉 - ▁revolver - kubwa - ▁Material - ▁clutch - 住了 - ▁servir - ▁ranks - лаг - が高い - кія - ▁resort - ▁rushing - ▁responded - 扎 - ▁gebruik - ▁pirate - ▁Eight - ▁лишь - ▁Kosten - ょ - トロ - 我们知道 - ▁mach - fähig - ▁trova - வில்லை - ▁hoog - ▁prote - කා - ▁restore - ▁painter - 家伙 - ▁وقت - տ - 亡くなった - ▁writes - 会長 - 但它 - 半分 - сту - طة - からです - シーズン - ▁Sing - ▁zunächst - ograph - ▁plen - ▁ау - ▁Obama - ▁typical - ▁Adams - ▁luxury - يز - паль - ▁milli - တ - ▁advertising - 購入 - ▁eighth - ▁compte - なと思って - aidd - ▁Haar - ▁blaze - ▁soixante - 危機 - ▁invite - ▁idol - ▁sphere - 往往 - крат - 発言 - 総理 - ▁colleagues - ması - ▁مست - ▁Kin - まさか - ▁secretary - 安妮 - ▁doit - 舍 - 聞 - ffy - ▁sai - ▁sailors - ▁Certainly - ▁sağ - ▁від - ▁які - länder - 的儿子 - भ - 想像 - 岸田総理 - போது - 恐怖 - ▁espe - impa - ▁ваше - ▁интересно - ▁behaviour - 那儿 - ▁moonlight - 間違い - 前線 - 好奇 - 一群 - ▁logic - ▁idee - ▁Mad - 桌 - ▁preparation - ▁Journal - ▁Sydney - 西日本 - pelled - ▁nosotros - ▁Ford - fata - ▁faded - ▁чалавек - ங்கி - magnet - ▁consist - ▁сюда - ▁ladder - воз - ▁followers - ▁intens - ▁пап - ▁derived - may - nade - 是最 - ▁crois - 庆 - ▁чуть - ாய் - wur - 想起 - ▁Real - ▁math - 梁 - empre - と発表しました - まとめ - ▁mug - ▁roughly - ▁Rick - osta - markt - bek - ▁forming - ▁avenue - ▁prohibi - ▁investing - ▁thence - ▁alert - クト - ▁bleibt - ▁frozen - cem - mpel - рә - stamm - 行われ - ▁Ну - 瓜 - ▁allí - fekt - وش - ▁cooking - ▁entreat - なりました - ▁maka - ▁helpful - 他不 - ▁borrow - ▁jack - Weiß - ▁كما - ехал - ▁vengeance - ▁». - ▁Generation - 通報 - ▁права - 一个非常 - 臭 - ▁нэр - jwe - 差し - ▁хэл - wah - ▁cresc - ▁hoop - ▁Leistung - ▁Castle - ▁reprodu - neuf - 捜索 - 我一直 - ▁lantern - ▁сказала - ▁weeping - gré - łu - ▁oogen - ▁quien - Est - ▁zweitausend - 食事 - 飯 - කිය - がい - بات - 账 - ▁Gel - ▁Büro - ぜ - 收集 - ▁같이 - торы - juli - 深い - ♪ - ▁sharing - ▁flames - 巻 - ▁provin - ▁Abantu - あなたは - 事を - مز - 聞く - ष - 교 - 掛け - 明确 - ▁instances - 替え - ▁الل - 抵 - zek - ▁Jer - ▁invented - ▁lively - èn - ▁Pont - дя - kopf - カン - فه - 夫婦 - ▁ĝin - ▁survive - ța - lekt - もあった - ▁getroffen - anthrop - ▁diminish - ▁дур - ▁genoeg - メニュー - ▁কর - orienta - ごはん - 很有 - 一个小 - 別に - 실 - ▁alcohol - ▁cavalry - ▁Keep - ▁stap - 値上げ - banga - ▁employer - ▁lawn - 没错 - ump - ▁myth - ▁agents - の部分 - ▁Standard - führen - ▁Hence - ▁selection - ▁Kil - ▁blows - ▁Jason - ▁jsou - ▁heavenly - 予選 - ▁democratic - セン - fizi - ▁folded - ▁fais - ▁Henri - とり - ▁выход - ▁pove - ▁hamwe - チェック - ▁Place - 姑 - つ目 - ▁chaque - läuft - 干吗 - 一定是 - ▁geven - lep - warf - ады - рин - ಲ - ろうと - 小说 - ▁zelfs - ▁segon - bê - ▁summoned - ▁elaborate - 植 - ▁мед - ▁condu - 타 - ▁fancied - life - ▁wirst - ▁homo - ▁Institute - ▁aussehen - кән - сред - オミクロン株 - ▁comprehend - 赢 - ▁hob - ▁curs - ▁одного - ▁biblioteca - 카 - bari - 这个时候 - しまいました - 예 - suit - энд - ▁менән - гаар - ukuri - 一面 - ▁solitude - 確かに - schule - олог - ▁Nancy - そもそも - 行政 - іст - ▁Robin - 编辑 - 睡眠 - つける - ▁nka - ▁Beziehung - 这条 - ウィ - ▁Knight - fond - 撒 - ▁восемнадцать - '5000' - ▁baka - ▁своего - ▁passe - 俄罗斯 - ▁gelernt - ▁Gesundheits - ▁canvas - ▁Zeug - 月から - ▁namely - 警視庁 - ▁Druck - ▁ruhig - ▁Pferd - ▁illegal - ▁recognised - قول - nine - ▁induced - zī - ▁porter - 代の - 海岸 - ▁понимаете - ▁hammer - 雇 - آور - ▁그렇 - ▁programa - 盤 - 练 - find - チーズ - ▁brethren - 美丽 - ▁victor - Ka - けん - енко - はね - ▁Korea - リスト - ▁взял - 对他们 - するために - ▁estado - zicht - ▁Meta - 国家的 - шь - ▁secrets - ▁наших - 陣 - အ - ▁betrachtet - ▁divert - شته - ▁Mitte - 署 - stop - ziert - ỏ - пис - shim - ▁Light - hend - いち - ▁newly - kha - 取った - api - рост - ▁insbesondere - 基地 - чыць - ▁Stunde - ไป - yez - ▁ibya - ாத - ▁lên - 爵 - 交谈 - Ca - 兹 - ▁dramatic - 車が - ▁wept - 囲 - ▁igual - 一回 - 正确 - ▁älter - ▁alarmed - ▁pasa - ément - 外国 - ӑ - ▁вдруг - 要做 - ▁fils - ▁Cape - 相反 - 略 - 他没有 - ান - のみ - エリア - ▁exploit - ▁gesamten - ▁celui - みよう - цо - пала - 椅子 - ▁visto - нг - బ - losen - nisse - ▁conceived - 刑事 - 払 - 这边 - ▁Protestant - уют - 纯 - 保险 - ▁принима - ▁relate - ▁Jun - cation - ▁warrant - plu - ▁لن - ▁regularly - 白色 - ▁trebuie - ▁Australian - раст - reka - 男女 - ▁seul - 道理 - antic - 負担 - 続けて - 体の - ▁circuit - ▁Geschäfts - 農 - ▁sitzen - kota - ▁خواهم - 分から - ▁Australien - rani - 川の - 这一切 - ▁believing - ▁Health - ▁jours - 地面 - 刷 - どうやって - ▁ripe - ▁faculty - 违 - ▁thumb - ▁convince - 笑顔 - arde - ску - ▁boil - ▁Head - ていきます - ▁erect - 出る - ▁thither - 当時の - 发生的事情 - ▁jene - annya - 0,000 - ▁anyhow - ▁lado - ▁assistant - 였 - ▁Christianity - antoj - geschichte - ışı - сом - ▁escrit - ▁Denken - ▁assembly - ▁tạo - 千葉県 - tah - 緑 - 讲述 - 这场 - 知らない - ▁Farbe - ▁discourse - тек - 真っ - ▁correspond - 載 - 爱你 - кли - 仅 - ▁vivo - 被害者 - ▁Això - ▁giới - しかない - ▁swing - migrant - 阵 - ▁prendre - 免费 - чик - 的身体 - ב - ▁divorce - 的主要 - 簡単に - 協 - ▁purse - ▁impatient - 很快就 - ▁alma - ▁третья - ▁Ivan - ▁kant - ▁confessed - plex - 擦 - ▁تت - 真相 - ▁Nova - ▁кем - ພ - ಮ - ▁advancing - ▁aujourd - ▁rip - ▁trials - насць - ▁Terra - ▁rider - ▁Prime - ▁hired - 感覚 - ▁situas - ▁plastic - рев - ▁snap - ▁blade - ▁giá - uloj - ーター - 对待 - もので - kaza - ▁mob - glück - ▁такі - 般 - state - ▁Looking - ුව - 物語 - 在她 - ▁öffentliche - niu - ▁año - ▁уверен - ▁teatre - ție - ▁Papier - にくい - phin - ▁Schwester - 损 - ▁aspects - ukan - mura - ▁wheat - мысл - まい - wissenschaftlich - ▁pueden - ▁Hin - ▁Gru - 組織 - ▁niece - ▁acknowledged - ▁erzählt - ▁flushed - 女王 - ▁mussten - провод - ですかね - ް - ▁illustrate - ▁luego - ▁soziale - 比べ - ▁spielt - ች - さすが - バッター - ▁maxim - ▁occasional - ģi - ▁discern - ▁farmers - দের - ▁telegraph - IT - bec - ▁revelation - 的位置 - лап - ▁awoke - ▁Gan - ▁Як - ▁organized - ▁oncle - trägt - 선 - ▁Hur - ▁decorat - fection - philosoph - mettre - ID - ▁vertical - ヌ - ▁uncomfortable - ▁rosa - дель - جمع - takse - 英语 - ▁moves - скую - 骂 - ▁imperial - ▁shower - нный - ▁ceiling - ▁кому - ▁lasted - ▁essence - ▁viņš - ▁정도 - Podcast - насці - ▁akan - වෙ - 进步 - ியா - ▁bisa - ▁rev - ▁Rachel - と思いますね - 公民 - 上に - ▁lagi - 待って - 嘉 - estima - ▁Cloud - еп - 帮你 - ▁Kunden - 双方 - тары - êm - 前回 - fug - гур - ▁museum - ▁Labor - ランキング - ▁Gilbert - ▁энэ - bö - ▁apprehension - ▁gossip - ▁конце - 讯 - 跨 - ▁плат - ▁banda - 访问 - たった - 立刻 - 犯人 - ▁которую - вил - ▁Kommission - 出席 - ▁Spar - ái - пав - 你认为 - 심 - ▁Viņa - ▁знаешь - 这将 - ▁interval - ɛa - cze - 这种情况 - ▁admired - хі - ница - cate - 始まった - 原则 - 備 - 镜 - ▁saints - 年前に - ▁Call - 들이 - familie - bing - ស - ▁hohe - ▁하나 - ▁trouve - ▁Fanny - '31' - Й - ▁emphasis - ▁ekonomi - ▁செய்த - そのあと - けが - ▁причин - 残って - rita - į - ▁absorb - زان - おく - fusion - ருக்கு - ▁Ray - ▁Morgan - チェ - ▁Kap - ঠ - ▁Größe - ▁erinnert - ▁drops - ▁jewels - ড - lista - ▁reeds - ▁Jeder - لە - 本日 - lec - ▁agua - ▁depression - ayi - ▁плохо - 时刻 - っぱ - ジョン - ора - 咖啡 - ▁trata - ères - 医師 - ▁interact - ▁react - 勇 - ijo - ▁получа - ▁highway - ້າ - 徳 - Where - bä - 州的 - ▁panic - пут - ▁bitten - landa - දි - ▁funciona - ▁төр - 非常非常 - せい - ▁pledge - はこちら - 覆 - 帽子 - ▁hình - ▁Charlotte - ▁questioned - しょ - ▁என்ற - Ş - デン - шым - ▁enfants - ▁subsequent - ▁perish - 电子 - 食材 - wys - стей - 取って - ▁долго - ▁Després - blich - ▁exhibit - ▁tape - ボー - 结合 - きっと - ▁déjà - 劲 - ▁bestehen - 生存 - ▁spar - craft - てくれる - 那时 - ивать - ▁gush - eko - 寻 - 生活在 - ▁muchos - ▁pobre - рек - வீ - したもの - kende - ▁induce - 十三 - hugu - ▁neglected - 永远不会 - ▁мои - 我告诉你 - ▁upright - ڪ - ▁analog - ▁какое - ▁contributed - ▁random - seb - реш - ▁cô - 回目の - ▁dried - トラック - 入れて - ▁Public - 家长 - 啲 - すと - ▁Cambridge - ▁colony - roh - ▁отлич - ▁Maß - ▁impre - かつて - ▁Ergebnisse - ープ - してくれる - つけた - ▁knights - ▁wheels - daj - amerikanische - 했어 - ▁vanity - 寸 - ▁impacted - قدم - weil - ▁Ты - ▁realm - ▁apostle - 发生在 - が出る - ▁ratio - を入れ - ▁Mind - มา - ▁gleichzeitig - nomi - nish - ▁Alles - issi - ▁tumble - 打席 - ▁Play - ▁owners - ▁yid - を与え - ▁grove - ▁Bush - մ - あん - Music - っこ - meze - おはようございます - ▁няма - ▁lease - ▁Unsere - ину - ▁Best - ▁notwendig - ▁цяпер - ▁indifference - 有所 - ▁neues - 中国の - ▁자기 - ования - ▁guessed - ▁swallow - mbang - 明け - ប - 突破 - 宝宝 - ▁attacks - ▁brute - 基础 - ▁summon - 赌 - 日本で - ثر - 链 - ▁persuade - ▁advised - いなかった - schutz - ansah - ▁Bord - ▁incorpora - ▁quoi - ▁drie - 宾 - ирует - ▁versuchte - 込め - зя - ▁neighbor - 分かった - ▁saat - ać - hita - 降る - 滞 - ▁собой - みました - かって - ▁бүр - ▁pension - ▁Euro - ▁Massachusetts - 选举 - 受け入れ - agira - ▁renewed - ▁perfume - ▁downward - ▁metge - において - ▁boven - ▁kenne - ▁lowest - кажу - 个小时 - 私たちの - ▁reside - 조 - 移民 - 这里的 - steuer - といえば - ማ - ▁Glauben - лова - dığı - ставил - ▁assert - 绿 - ▁meanwhile - ▁shrink - のことを - クリーム - ▁මේ - 设置 - ▁palabra - ねぇ - ▁profund - ▁irgendeine - cida - どうも - 屋さん - ▁rond - ▁flourish - 司法 - ▁Kein - ▁practices - думал - стью - 它在 - たって - の動き - ▁cannon - ▁Mä - どのように - ▁mum - ҵ - komeza - ddu - rato - ▁transmit - 入院 - ▁кир - ▁nếu - ▁demonstrate - 阿姨 - 如果他们 - かというと - ▁parliament - ▁deploy - бод - ក - 入った - ▁prosperity - üş - hub - ské - ပ - ▁happily - ▁threshold - ▁Davis - りたい - 売り - ▁picking - ▁segments - 仮 - ôt - 非常重要 - ▁potentially - 女性が - ▁resource - 家に - るために - ▁artists - ▁mga - ▁varied - ▁Interesse - idio - ▁계속 - dida - ▁escort - ▁Monate - ▁niba - AS - 百万 - א - ルール - bac - rci - ▁größere - ▁nahe - ずつ - ▁Ganz - ▁zeigte - ▁быць - ыт - ▁المن - 殺害 - ▁Mexican - 立つ - ▁demon - ▁conversations - ggle - анд - хар - ▁peng - 韩 - ▁altre - خواه - ▁Gelegenheit - 重点 - جميع - 生まれた - දු - しろ - yamba - ▁ئى - ▁ກະ - ホール - ▁anim - 置いて - ▁recalled - лоп - liest - 泽 - ▁Bitte - ▁buiten - ▁scream - сот - اي - ▁entweder - ▁çalış - ▁augment - 各国 - ception - 覚えて - AR - 帰って - ▁Auswirkungen - antwoord - ▁Creek - 自分たち - 等着 - 位の - ▁ogni - 術 - ▁situated - ▁promises - ứ - 男性が - 截 - хүү - ▁categories - يح - ▁shouting - 病気 - чна - ▁plunged - してきました - ▁granda - ▁destiny - ▁брат - رک - 詳しく - ▁losses - ▁Tin - ▁participa - 这两个 - excel - ču - 泣 - ▁Zustand - ▁diejenigen - ▁inclination - 終 - ▁Death - ▁gains - タイプ - ▁bauen - ▁anxiously - 留在 - мін - 将其 - लाई - bian - зан - 后的 - ▁Uganda - 弯 - ▁simp - siniz - ję - 绿色 - 涉及 - ையும் - petu - ▁lieutenant - 关闭 - ▁captive - ▁devas - ▁roused - ▁preparations - ▁sentiments - 曜 - 我们对 - ▁assez - 달 - ▁Vorteil - arak - コーナー - 対戦 - ▁instruct - ▁sahen - ▁Fernseh - ▁sterben - ùng - fau - ▁twi - てくる - '&' - なさい - ▁Stellen - chief - 所在 - ▁pode - mēr - ホーム - ドラ - ▁zwölf - ▁история - ▁девятнадцать - 見ると - ▁Read - ▁tarde - ▁Drei - ование - brü - ▁Wait - 不去 - ponent - tesse - ▁Press - daw - 単 - 伙伴 - ▁thankful - ▁дня - どういうこと - зь - kond - cina - 互相 - 伙 - ▁livre - ▁Vir - 一周 - 琴 - aju - නා - ▁resulted - дт - ▁amazement - 一首 - ▁Cent - ▁Таму - ▁Board - ▁só - ▁ruins - けば - 我们应该 - ▁computers - ën - ốc - šo - üü - 髪 - ▁Douglas - ▁terribly - ▁aliaj - ▁Ski - année - ▁theater - ▁stran - tious - ố - ▁renew - ▁красно - ▁менее - をし - 平等 - raba - 用来 - 下げ - zugehen - altro - 演技 - ポン - ▁attraction - ▁كنت - nywa - 맞 - るんですね - ▁symptoms - ▁feather - ▁Howard - ishwa - ▁아이 - ▁observations - ▁contented - 汤 - くれて - ▁primitive - duction - ▁slim - toxic - ▁cordial - ▁indignant - ھە - اث - ▁crush - ▁Greg - 声が - ▁Dean - schaffen - '800' - だんだん - ▁особенно - 沉默 - ▁rely - 这事 - bund - ▁publish - 兼 - 耐 - ▁fist - шей - 我应该 - 계 - ٹ - 氏は - 来源 - ▁sommes - 戒 - ປ - 徹底 - ▁avail - ▁Allen - مب - ▁Judge - сць - ativ - ▁adventures - さない - ▁frontier - 浴 - 现在是 - 的男人 - てくれ - дея - 童 - Ç - ▁marine - ▁colon - link - 美国人 - 也很 - ющих - wald - പ - たくさんの - 行為 - ▁trim - нак - ▁selten - લ - 쪽 - শি - 美丽的 - 盟 - diplom - 若者 - ▁bisher - ▁recollect - 那个人 - stände - 微笑 - ▁Ng - usia - 피 - brid - 监狱 - ▁politischen - なと思います - 明白了 - ▁degli - なった - 我从 - ▁hideous - 是如此 - ▁oke - afrika - ▁belle - ось - 香り - いただきたい - しまいます - ▁Beginn - valo - ▁awakened - ▁принят - worthy - 寄り - 法案 - 他还 - ▁Artikel - 他自己 - ▁curt - ໍ - ▁ладно - に住む - ▁només - 发生了什么 - ką - ש - طر - 거 - ілі - 議会 - 怖い - 填 - しゃべ - キン - ▁harmony - ▁genauso - なのに - 追加 - ▁twist - pfen - ▁tại - ▁Monat - ým - 如果他 - 作る - ఇ - ▁спросил - lī - ▁variation - ▁разговор - istan - 堆 - mou - ▁viene - цыі - ▁presume - 人気の - 많 - ▁pretended - vā - ▁minha - 进了 - devi - 剤 - ▁undoubtedly - ается - 回り - ▁Teile - ▁sis - 進む - クラス - 厳 - ▁Straf - ▁Pap - 他们都 - ▁Claud - 思いました - ▁bills - িক - 毁 - 炭 - 桂 - ホントに - ティング - ▁troviĝas - ▁earliest - ▁آخر - つつ - ▁user - ▁Year - ▁выгляд - ▁verde - 情况下 - 花了 - ▁campus - ▁каза - plication - ▁peep - ▁আর - ▁Natürlich - رى - 嫁 - ▁depending - 的结果 - рис - 焦虑 - 仔细 - дат - ▁checked - ▁forbid - ▁saber - 一系列 - 是我的 - ▁Quant - ▁Dans - デザイン - 夺 - つもり - みましょう - ▁conquer - 投手 - নের - ▁inflict - ▁Bull - ▁nowhere - ▁simul - 诚 - ▁четвёртый - ▁footsteps - ▁mới - ▁проблема - ▁visits - を集め - ▁prepara - 就有 - 所以这 - 就这样 - ダンス - 靴 - ▁threatening - ceea - ề - anten - евич - 措置 - ▁hoy - 高齢者 - を終え - bola - ▁entdeckt - どおり - 威尔 - ņu - ▁mineral - ▁dispers - ▁donné - ▁Sure - ▁dedicated - ▁Cook - 庫 - პ - They - ▁кара - gramm - ついた - ▁مهم - 迈 - داد - に入る - ▁sample - 问我 - 鼻 - گان - 页面 - ือ - ffel - ▁deceived - ▁doth - ▁flies - selt - 公主 - зак - ▁Lia - গা - ▁Dale - ▁readers - ▁birthday - ▁кре - ставить - 一部分 - melde - ▁Mond - 恐らく - ▁recruit - ▁niedrig - 都知道 - まん - ▁diverses - ▁dropping - ▁groote - ▁снова - ▁într - 危机 - ▁Madam - ▁warmth - 慌 - digen - 得意 - すべき - 付出 - るんですよ - 你们的 - ▁Roma - ▁extension - 奴 - '99' - ▁velvet - 告诉我们 - 当局 - rama - 热情 - 下一个 - ▁limb - ሰ - ▁phenomena - ▁villa - お話 - فن - альный - ▁그러면 - ▁высок - 共和党 - ▁incredibly - というのも - ▁gewe - ▁uncertainty - 行われる - 韦 - sett - ▁instruction - 広島 - することが - ▁Philipp - ▁doll - ▁vile - ▁calcul - ▁Ergebnis - ▁chef - 본 - クイズ - ▁headquarters - ▁Mach - ▁advocate - анг - いるので - ▁Seele - ▁tread - gericht - 主题 - ▁Tony - 巻き - инг - フリー - 平和 - bó - 杀了 - ట్ - ▁openly - даў - 恰 - 苦し - ▁twilight - うちに - ▁expedi - 资本 - ▁tengo - ▁Stock - ▁busca - ▁anguish - ▁planted - ▁dentro - バック - ▁thief - ▁tracks - ▁йо - ▁bout - ▁şu - ▁frightful - ▁linen - ▁Element - 認 - ▁dost - ▁weiteren - 不愿意 - larda - ▁trenta - ▁persist - All - 闘 - zept - ▁адзін - ▁comrades - ffle - 超え - кос - 敌 - バラ - ▁pastor - ความ - 失敗 - Th - ▁стара - ضا - ▁Graham - ▁filling - нні - 緊急 - ▁Klein - ▁findet - ▁mixture - ▁bronze - ▁Low - ▁moeder - ▁Però - 和他的 - ▁dingen - ▁constrain - ▁spark - ຊ - 逐 - 都能 - kulu - ▁göster - ▁marvel - 営業 - fter - ▁unwilling - ▁großes - ▁dopo - ▁tự - тин - いたのは - 芝 - ▁travelled - hield - 野球 - ▁magistrate - step - 見えます - ▁judges - ▁innocence - enfant - llan - ▁пятьсот - これまでに - ▁государств - 消失 - 版本 - ۔ - ▁maintenance - médi - ▁refresh - вит - jah - ▁Add - jās - 墓 - 孔 - 작 - 每天都 - ▁dug - 新型コロナウイルス - フォー - дали - ▁lieber - bber - ▁Nicholas - ▁Schlaf - ▁Stern - ▁Justice - 究 - जा - пля - ▁trabaja - 車の - ▁precede - 大幅 - ▁babies - ▁proceeding - prüf - obten - кнул - 半年 - 这就是为什么 - ží - 自ら - ▁позвол - ▁diversos - 味が - 阴 - ▁separation - ▁ashes - ▁бизнес - ▁Actualment - ▁clan - arlo - rene - だけでなく - зер - ▁Quer - ách - 来週 - 墨 - ▁Städte - ▁admirable - ▁Bezirk - ▁pencil - 获得了 - Tech - 中学 - 貴 - EC - 胎 - ▁Everybody - ▁Fro - ▁triumphant - 走り - ▁kau - 廷 - 싶 - 다가 - ▁motionless - ▁punish - ▁spy - 分かりました - здоров - 업 - ▁ebenfalls - 気に - ▁oldest - داخل - ▁завтра - ▁interna - telli - tòria - ▁factory - креп - 手里 - 導 - ▁thoughtful - ▁struggled - 的爱 - ▁Front - 보다 - 纠 - முறை - 突き - ▁Junge - ▁erstaunlich - ▁ursprünglich - ▁heartily - ▁Shu - ▁goals - ▁eenige - ▁representation - leta - ște - ющие - ▁Sab - ▁Lewis - تاب - るんですけど - ▁realmente - ▁trading - ▁Sally - 慎 - ▁Figur - ским - انگ - ▁habitual - ▁mijne - ことが分かりました - ত্র - mail - 主義 - lıyor - ▁européenne - ▁satisfactory - ▁conven - 的历史 - ▁questi - ▁nacional - mula - オンライン - 最初に - бро - cover - ▁Group - 歩いて - рот - ▁throng - 高橋 - ▁hesitation - 区别 - 記念 - おっ - おう - ▁strat - osten - 一个问题 - 我发现 - Wi - クロ - ▁Lehr - ▁şekilde - いている - ложил - ▁tenia - кас - صب - 告诉他 - ▁cần - 捉え - ▁cơ - ▁dashed - ▁publication - 経営 - ▁strategies - ▁dood - 丰富 - ▁conquest - 工人 - вшись - 申请 - 考える - грам - ▁wherein - 白い - ▁Diumenge - ▁Fähigkeit - dī - ▁большой - لىق - ferenz - ▁rings - 碰 - お天気 - 特别是 - 如今 - ▁framework - ▁violently - ▁suburb - zieh - 僕が - ▁Satan - gner - ▁거지 - 高中 - ▁gravity - 飞行 - ▁Jugend - ナイ - 这也是 - lja - ▁madre - 奔 - ▁fünfzig - 福岡 - கிற - ▁Code - schritt - ▁crimson - ▁Square - ▁шаг - ▁கட - ▁platja - nick - ようになった - ▁exterior - ▁Avenue - ация - 幻 - ▁নাই - ▁Gö - ▁scor - ড় - 垃圾 - 矛盾 - คุณ - 胜利 - ர்கள் - ▁swimming - 続ける - jih - ▁Meer - nata - 我怎么 - 感觉到 - 護 - ▁motives - 働き - 娃 - ▁Què - ▁المح - ▁reverence - ▁unbedingt - ▁nhận - ▁zeal - lch - ▁honestly - ▁Nachrichten - ▁cage - 紫 - 什么呢 - ▁இருந்த - iously - ▁suicide - ▁Krist - гэр - хә - 採 - ▁transactions - роў - ▁jacket - の前に - ئی - ▁expanding - 您可以 - dā - 生きて - ▁retro - крас - ▁стоит - 工业 - ceastă - ▁pomp - ていない - ▁dön - imbu - ▁visiting - ланд - 做一个 - ▁basa - eqq - ▁Market - ョ - larında - ambu - үүн - 债 - 찮 - ическая - ▁membres - 栄 - ニューヨーク - rrington - mpamvu - tius - profe - 学者 - ▁sites - kasi - ▁ажил - 聞こえ - gesellschaft - ▁crushed - ▁dialogue - ▁exit - ▁Villa - ssé - いえ - ▁Marc - ▁här - ▁ancora - ▁answering - 色の - ▁бас - ▁reject - ▁Gray - ▁betrayed - 僕も - ัน - ▁hinunter - ิน - hart - kunst - てた - liver - ▁каким - 用の - 疑い - 不久 - ▁robber - 悩 - fällt - лес - kuza - డు - 我记得 - атор - ބ - tık - قوم - ▁jealousy - 走进 - ▁Пры - 奉 - foc - ▁Caesar - ▁curiously - ần - indu - 蔵 - ▁nuevo - したということです - ▁tierra - ▁beef - ▁Zugang - もう一度 - なこと - ▁мяс - ▁altered - 到来 - ものは - ▁Republic - illon - ▁pelo - ▁wretch - のほうが - ▁variable - nier - サイド - 過ごし - vri - ου - خانه - ▁Burn - ▁array - ▁Welche - とおり - 겠다 - 诊 - ▁baa - ▁mereka - 始まる - груз - ▁Ave - ▁blij - ები - lje - 声明 - 什么是 - ▁reuni - ੀ - present - 再生 - ▁Länder - ▁Glas - ından - PS - 接着 - 読み - ının - するよう - ием - 従 - ▁давно - ω - ▁conceive - ▁Slide - 創 - ريد - ▁accurate - スタン - Artagnan - ▁speziell - 并在 - ▁decay - ▁wann - ▁Rang - ▁goat - 飲食店 - ម - ▁encouraging - ▁konuş - ▁oogenblik - ອງ - 方々 - ▁productivity - わせ - 幸运 - らが - ▁significance - 呼び - ▁mögen - 実施 - ucc - サイト - 衰 - ▁Tagen - 财产 - ையில் - 状態で - ▁bureau - 司令 - ▁hopefully - tiği - подоб - 讨厌 - の間に - ▁Fluss - acity - ▁Bern - போ - ▁exile - त् - ▁вопросы - ▁Ky - ▁strictly - ины - ▁schlug - ▁адна - рв - ▁Inhalt - عتقد - 井さん - ▁Lawrence - ▁enlighten - やん - ▁emo - woman - ▁firing - ▁groupe - ▁cob - ▁mandat - лэг - енная - ▁urgent - ▁trente - цей - モデル - ▁managing - ijwe - 构建 - ▁Muslim - ▁karşı - 続いては - Je - ▁deaf - ▁Што - itse - 위 - 劳动 - گرد - ▁què - ▁impressive - áz - ▁justified - ▁blown - ▁jonge - ▁inquire - ▁dedi - 谈话 - 泊 - վ - ▁arab - ▁Germans - 叔叔 - ▁wounds - 除非 - どこに - êr - 皇上 - ▁слово - 为什么不 - ▁актив - けない - もらえ - дов - ezza - ▁mister - 翔 - デビュー - ▁pluck - ▁verrückt - 営 - ▁Dad - ▁sozialen - roc - 全面 - schiff - bereit - дад - ▁скоро - 砂 - 小组 - ▁menjadi - 上げる - gero - school - ເ - قت - angka - 预测 - ▁원래 - bron - 一刻 - ▁Till - zieren - getragen - ▁sali - ▁drain - ▁mọi - ▁bedeuten - рей - ▁bessere - ▁gesto - ▁tunnel - 开车 - ▁ngày - بال - ận - ▁sống - 直播 - ▁sulla - ▁Sommer - 誓 - ▁Schmerz - ▁связан - ▁furious - 播客 - ▁descent - ▁manier - '36' - 措施 - суд - 方便 - 僕の - ▁scripture - 物を - ▁Gefahr - 你知道吗 - ەوە - 吃了 - ▁violet - ▁signifi - 附 - ▁Kontrolle - ▁possibilities - ▁hiện - 했는데 - льны - 将在 - ண்ண - ▁Einzel - ▁ceva - 、5 - ▁incremental - るので - あんた - шке - ▁Baum - schätz - 感染拡大 - ▁дерев - থা - ▁Hut - ebwa - ▁schm - ▁attribute - なんとか - ▁Human - linie - 机器人 - 也就是说 - ▁grieve - ▁equi - ▁Gewalt - ▁Fund - 었어 - ▁weiterhin - roning - ▁général - nutri - ▁organizations - ▁gust - urteil - 跟你说 - ัก - なお - ▁zullen - 変わる - ▁mould - ▁aroused - ▁camel - 一些人 - ▁agency - ▁Estas - ▁gesund - ▁slid - 詰 - ▁beggar - ▁poly - 크 - ▁Аднак - ▁pent - 承担 - ▁envy - ▁opponent - ということになります - ▁discussing - 競 - ▁compensation - ▁encuentra - ▁yourselves - sieg - の情報 - 是关于 - 大量的 - ography - Ko - ▁unjust - ▁Richter - ▁Пер - cji - ением - ▁lace - руп - 把它们 - ಯ - kunft - ▁Tha - 地域の - ▁idiot - ьян - 相关的 - ▁flowing - ▁drivers - 荡 - ▁Gabriel - ▁ваши - ▁packed - 杂 - 편 - せず - ▁இட - rata - 求め - kole - 東海 - شو - ▁concentra - ▁Jar - ▁tenemos - ▁District - という感じ - ▁Ble - 災害 - 长时间 - ▁Cried - ▁kya - 起了 - яз - アー - ▁Gast - ▁vraiment - 调整 - ▁стать - ▁Arzt - そうで - ▁strongest - 提供了 - ▁decades - 余り - 好多 - õh - ▁Adel - に向かって - ▁многие - ▁reckless - ▁prost - 小さい - ▁Reform - バランス - ถ - ких - ▁rattle - できません - ▁harbor - තා - ▁мала - ե - ▁তিনি - ▁Tio - Ich - 考えた - ôn - iker - fant - 么 - baye - avais - යක් - ەی - ▁Grundlage - езж - ▁erwartet - ▁ministers - 神秘 - ▁Begriff - ▁bells - ▁calculated - ▁بخش - 指出 - ▁Luther - ▁claro - рать - ķ - connais - ▁Publikum - ▁كۆ - mill - 小学校 - 旁 - ▁Hope - 利亚 - ちゃって - ▁dusk - 中国的 - ▁grab - nigh - ▁Бу - terior - ▁Mister - கூட - wür - 飾 - ▁Finger - azioni - ▁pesa - でしたね - 」「 - 房间里 - 赢得 - 郑 - вис - ないので - ▁loans - ▁Quelle - ▁слуша - ▁waved - ゅう - ▁increasingly - 自体 - hafte - ▁получил - ▁purely - ▁tick - ▁exclude - ▁другие - ▁witnesses - ▁Listen - ற்ப - 何度も - unuz - ▁fosse - ▁feathers - 戈 - sprach - ▁Gruppen - Sp - 技能 - 大多数人 - ▁fem - 上升 - 尽可能 - ▁affirm - これまでの - cott - 還 - ▁россия - drag - pata - ▁Care - ▁ultimate - をかけて - 西方 - ▁historian - 同情 - ▁beginnt - ▁Freiheit - 财 - ▁Social - ▁Reaktion - லே - شى - ▁moan - ▁mô - ▁uneasy - ▁substitute - ▁bewegt - やかな - ▁کے - 場面 - ▁tenen - ▁stroll - ▁кил - 善良 - adores - kling - zahlen - ▁недо - ▁Studien - ▁Rechts - ることで - ▁Holland - ්‍ය - bû - 活用 - ▁anos - ▁dünya - 固定 - が一番 - ▁Mississippi - ▁ĉiuj - brau - ▁lowered - 藤さん - 赚 - ▁Vila - потреб - ライト - ▁якія - ында - ▁Women - بدأ - ▁disciples - цов - gekommen - ▁Bernard - ▁platforms - ▁студ - ▁encara - ▁стране - 所说 - willig - ▁porch - かれ - 天然 - 让它 - ▁wishing - teilung - ▁بش - ▁withdrew - ▁Nó - 自動車 - нский - үүр - ▁corpse - ▁Feind - 先輩 - 估计 - eira - ▁ocup - 很高兴 - ▁Oak - 惠 - 这只是 - handlung - ▁customs - ▁mule - ▁musician - ▁تلك - ▁funktionieren - 漫画 - 季節 - ▁العالم - 协议 - ▁stages - ▁Song - ▁zouden - ▁ຢູ່ - どころ - ▁ehrlich - ▁iemand - ▁stove - 见面 - ▁posi - 人たちが - үз - 待ち - onic - 卑 - 和她 - فهم - 火车 - ▁narra - 跌 - 石油 - ▁Sex - 要請 - 으 - ▁france - Ra - ▁emergency - வின் - 超级 - amerika - 劇 - луу - ▁юу - 卓 - ▁différent - ndre - していると - ▁overhead - ▁lustig - kwi - ▁trin - ▁madam - rono - ním - 就是你 - ▁ebenso - lique - ▁Mitglied - ることができ - რე - ▁beschäftigt - ▁tarafından - ▁sincere - 警告 - ショット - ▁hire - ▁parting - 的数据 - 的目标 - ▁illusion - def - தாக - 这本书 - ▁Sonne - ▁severely - ▁wist - ▁Greeks - ▁Jede - 浜 - ޭ - ▁flexibility - 可怕 - ▁должна - ▁побед - ▁decree - uble - ▁toilet - ▁sustained - fällig - ▁Zusammen - 泳 - ▁tornar - ု - ▁어떻게 - 敲 - ▁котором - ▁tú - čin - ▁cinco - ▁maintenant - ▁nime - ▁positioned - оо - そうか - گیری - 竜 - алась - ▁exceptional - 改变了 - 搜 - ▁Until - 皆 - ▁basin - 標 - ▁matt - meni - ▁Others - pik - ▁Kollegen - trans - 公园 - ▁проблем - ▁Oliver - ondo - ▁Ibi - 失去了 - の名前 - ன்று - ▁distributed - 扱 - 对象 - ▁proceedings - istische - ▁가서 - pois - 不见 - ▁purchased - ▁troop - ▁enjoying - 咯 - かける - ほうがいい - ▁prospects - それだけ - كە - しております - ▁memiliki - шло - いただきました - 漂 - awo - ▁equivalent - ▁personne - ▁Bio - ▁menschlichen - 海上 - ▁också - witt - なもの - ften - 黑色 - ▁artistic - кру - ކ - 经营 - 蜂 - 扣 - ▁petition - ▁cellar - хват - 肖 - ▁Pil - 弁 - 天天 - ▁решение - exist - 見つかった - 年轻的 - 未来的 - ▁yapı - ▁Commun - ▁Must - ▁classic - ▁hẽe - ▁majesty - ▁것도 - ▁entertained - мыш - яў - ▁좋아 - валь - remos - ▁può - ▁skills - ▁warriors - ▁diseases - igihugu - ▁tumult - 早い - mala - 谈到 - ▁Kingdom - るよ - ▁users - 躲 - ▁verlieren - வும் - 频道 - ▁spät - ▁Oku - ыць - ▁llega - ▁mouse - ▁considerably - ▁electro - ▁fazla - ▁sav - ஐ - ▁Kennedy - حي - ▁yea - sistent - ▁grau - ▁aliment - ▁Mamma - صح - ▁glare - zieht - ▁cuatro - 妇 - 後の - 平时 - 教え - ▁книг - を目指す - дей - ▁запад - corn - ике - ▁Schutz - ▁هنا - 即将 - 際に - луч - ▁erwarten - ابت - ▁encountered - ▁hue - voor - ▁Fou - 警方 - 瑟 - ▁wealthy - ありますね - ▁shaken - ▁posted - にとっては - ▁install - ็ - 在你的 - ▁Start - ▁roi - 郊 - グラ - نان - ▁antaŭ - 经理 - 大好き - ▁pall - ▁relatives - ▁spraw - الي - ▁invested - ▁hizo - ▁mußte - いたら - ▁مختلف - ▁adjust - ▁angesehen - 单位 - 的那种 - ▁advise - imiz - ▁zulk - フル - 富士 - ▁Haben - 鈴木 - 梨 - ▁misschien - 财富 - ▁Unit - ▁erg - 还会 - 知道了 - دام - liku - 的样子 - ▁colonies - ▁поле - 考えられ - ič - ▁Mademoiselle - 陛下 - ▁Hallo - facebook - ▁comedy - ▁ҡу - ▁crow - ▁Höhe - 指挥 - ▁Guy - おっしゃって - казать - ▁рыб - objet - 缺乏 - 十四 - lwa - ▁denkt - three - pfel - ▁проблемы - 遅れ - してほしい - rush - وض - シェ - ▁Mitglieder - ちゃんの - ▁dismay - čí - ▁muscles - ▁произ - ▁besoin - ვა - 鉄道 - wehr - ▁recommendation - ▁shoe - 父親 - kosten - 英里 - ▁diamond - ▁imperfect - ļu - ipun - ダメ - clip - вест - ▁kort - ▁sour - していない - lessness - 悩み - vuze - valent - ▁schu - ▁Schn - ▁Syria - ▁ibintu - ▁lifetime - ▁notable - 规模 - ▁whale - 冻 - ▁Constitution - ▁Roy - 平衡 - 回应 - 急に - めっちゃ - ▁открыт - зве - なるほど - sätze - abend - كت - ආ - 混合 - ▁Find - ▁مردم - ▁voz - تها - ո - ▁translation - ĝo - 하게 - ▁Anzahl - шиг - 乘 - escola - رض - '198' - чо - まえ - ▁converted - ubwi - ▁excessive - 兆 - ▁compound - ▁flour - தன் - நேர - دىن - 掘 - ▁emperor - bě - ▁Sinne - ረ - ▁vilaĝo - ▁explosion - 든 - кет - ්‍ර - ▁weep - ▁secretly - сек - lý - ▁reporter - ▁confine - łem - نة - 陷入 - qü - ubuzima - 脸上 - ▁buena - ▁echter - ავ - ▁Schlüssel - ▁militar - ▁amafaranga - ▁ninth - ▁benshi - 仪 - kräfte - υ - ▁cargo - ▁laat - ▁align - ▁wink - ▁Tout - gata - 中国人 - ▁confer - ŵ - ผ - вання - ▁woke - وری - 最低 - vollen - ▁viu - roma - ▁이러 - 並 - lege - დი - ▁blend - ▁Brand - noma - ▁proven - ▁түүний - ნა - トー - ▁swarm - द् - kera - paro - 大手 - ▁Imp - kaya - ▁destined - 有机会 - асці - pool - бол - ▁банк - 回復 - ▁Schuld - ▁sailing - 纪 - ▁difícil - ▁руковод - 劝 - ▁fünfzehn - 的日子 - ▁aange - ▁hostile - мур - mede - ▁Führung - ެއް - duce - ▁работы - ▁footprint - енным - ▁davant - 雲が - 遥 - ▁Jordan - kommt - ▁germ - ▁Ни - ▁然后他 - ペン - определен - ▁migra - 籍 - ▁Lebensmittel - ▁catching - ▁الناس - ▁legte - ▁erstellen - 重复 - ▁fragments - ▁моя - ▁پێ - ிலும் - ▁près - 備え - 非洲 - emma - siga - 类型的 - 不懂 - ▁dismissed - 信じ - ▁Karl - ▁атрыма - หน - ▁دارند - ▁selle - ▁exhaust - ▁Marion - 警察官 - 他人 - ▁electricity - ▁Jak - situ - いるという - dige - leit - جة - ▁coordina - 创作 - 离开了 - visor - 谋 - ▁deliberately - ▁glowing - 燃 - 会社の - ให้ - 予約 - ▁fraud - koresha - 加速 - ▁север - のもと - ▁خا - ▁membre - ర్ - lò - ▁yonder - 山口 - 一声 - ▁constructed - ▁corrupt - ▁madness - ▁crest - ▁দেখ - このような - ffa - ▁natur - ▁مخ - 如果您 - lv - 賃 - ▁থেকে - ▁हो - ▁дараа - ▁Kern - 制裁 - энне - acağı - 的学生 - силь - もそう - ▁Page - ▁defeated - ▁creep - ▁Erinnerung - ▁gallery - ▁eve - body - dron - 外国人 - ▁кооператив - ▁ojos - полага - культур - коло - 今後の - 他们说 - ические - коў - ▁Vergangenheit - ▁welfare - 完全に - ▁disagreeable - ジャー - ▁jongen - アジア - tanto - طف - 你没有 - เขา - 寻求 - ৰা - ▁retained - jj - griffen - ▁komt - ৰি - ション - 饮 - 状况 - ่อ - フト - 我妈 - barkeit - 一片 - センチ - ▁Ef - ▁donner - 届け - ▁Fest - ধা - 这张 - 障碍 - ард - ▁femmes - たん - ▁vot - ▁monta - 矿 - ▁கொ - ▁comparatively - ▁policies - くれた - gic - ▁என் - 減少 - 同時に - mord - 課 - ▁зада - ▁lump - 商店 - yaga - 给她 - 各地で - tafel - '.000' - ▁blowing - ▁Beste - ▁miteinander - すべての - ▁Miller - ▁кроме - の上に - 脉 - ▁откуда - 纵 - ழை - ▁Nda - 广播 - こちらは - ப்பட்டது - 西部 - ▁observing - ▁earthly - ▁marketplace - ▁отдел - ırı - ▁있잖아 - licit - 在家 - SA - ارة - ▁loudly - ▁negoci - ▁phenomenon - ▁hacia - aktiv - ▁wach - بوو - ရ - 思われ - んじゃないですか - ▁bass - ▁Bul - 不上 - ▁corruption - ▁punished - 現実 - ▁expanded - ▁yielded - ▁inquiries - bereich - ▁competitors - fläche - sanga - ▁Battle - ▁lifting - ▁சரி - hib - Ү - 쓰 - 傾向 - ▁Bag - 宣言 - ▁devour - bintu - otto - ▁Louise - 掉了 - ▁dios - 勢い - これらの - град - こうして - ▁tragic - rima - 缘 - ▁landlord - ▁compris - Hi - ҙе - 自転車 - ▁täglich - tili - ▁deemed - ▁tempest - ▁rejected - corre - ਸ - ▁توسط - slag - ▁över - ▁أنا - 不出 - ▁assassin - 姉 - finanz - ▁реально - ▁있고 - ▁bonds - ▁mois - лены - ▁Tiere - ▁Sonnen - వా - ▁genetic - ▁Garden - ▁Vice - ▁shells - 还有一个 - ▁According - ▁excellence - ▁никаких - onym - ▁arbeitete - ▁urge - زل - 也要 - 女性の - ▁broader - 婴儿 - muka - ▁süre - 見る - ▁داشته - пита - мал - ▁захо - ▁respective - ▁motiva - 全ての - 年生 - ▁Motor - ▁devant - рма - しましたが - 仇 - メーカー - ކަ - 疫情 - control - ▁руки - шёл - 稳 - ストレート - 胃 - ▁underneath - ▁tới - を使う - ▁сторон - valuation - ▁мной - ront - شكل - bruch - ▁Catherine - ▁слож - ▁ribbon - ▁chains - ▁Umwelt - ваецца - ▁dicht - 所以他们 - 弃 - mania - 당 - ▁greeted - ▁joc - 所以他 - ▁gehören - zeichen - held - 绑 - یە - چە - 開かれ - fähr - ▁язык - rusha - 大家好 - ▁whoever - 栏 - 取れ - yam - 興味 - gereza - ▁haunt - ▁circum - 安静 - 翼 - ੇ - ▁formidable - 向你 - ▁revel - ▁прошло - ▁Kong - ▁Oni - klag - 是吗 - ▁Hind - 選手たち - father - 都被 - өрө - জি - ▁gladly - 是真的 - キュ - ▁quanto - oxid - 出た - 認識 - 鍋 - 俊 - avant - ▁regulatory - 观点 - ▁вещи - 图像 - 建设 - ▁več - 狙 - 智慧 - 损失 - itel - ▁pleasing - であれば - ▁Gesetze - ▁mentre - ▁participation - ▁Hawk - ләр - 很棒 - ▁опас - ▁gaf - 很久 - iq - wandel - 細胞 - ▁Frederick - 체 - 我们还 - ▁côté - 收听 - issant - ▁personnes - ▁decade - ▁yaşa - ごと - pte - ▁Bou - 大学の - 最大の - ▁twisted - ▁keiner - ▁heroic - 一个月 - 两种 - カラ - 見込みです - zeichnung - 智能 - 店舗 - uke - ▁Anem - ▁passer - muş - ▁timp - ▁прад - タル - ▁Finanz - ▁Charakter - ▁tenth - gles - дор - ▁arrange - tino - 彼らは - ▁могли - гран - うえで - ▁attending - ▁nhìn - 就没有 - пись - ▁gevoel - брос - ▁Deutschland - 投資 - šanas - طب - ходить - ▁месте - ▁triste - 事物 - ▁damn - ▁convent - スマホ - ▁exclamation - ▁staircase - geber - chron - ▁wenige - てくれた - 主动 - cela - uci - ▁엄마 - ▁breathed - ▁lovers - ▁adjustment - hne - 会儿 - 耀 - いるのか - алга - をお伝えします - 士兵 - ▁misma - ちゃんが - 费用 - ifik - ▁environmental - 줄 - ▁nữa - ▁polític - ویل - 场景 - ́ - гул - ▁Armee - зор - ▁injustice - たかった - ▁deinen - زند - 不像 - ときは - determin - 訓練 - ▁Änderung - 高校生 - 使用的 - ピッチャー - hô - ▁muchas - теля - おき - wał - ира - ▁amazed - 矢 - ― - ▁Dissabte - ▁tien - 公式 - ▁vrai - 広がり - ▁attach - ţe - 酒店 - ▁cruelty - ▁avoided - ▁бал - 主要是 - 和他们 - ▁genom - учен - 碗 - ▁ambitious - ▁danced - фр - 佐藤 - 昭和 - 記者 - ▁اینکه - ▁但是你 - ▁особ - விட்ட - ▁acute - ▁fee - ▁Franklin - anna - ▁benutzt - vallen - сар - ▁bestand - ▁maintaining - ▁seeds - 帝 - ▁amerikanischen - ▁அறி - るんですか - ▁downstairs - ▁Те - ▁Besuch - ▁있었 - ▁você - ▁marvellous - ▁quaranta - 掌握 - ▁Schei - ▁twe - ▁بالا - 就不会 - ▁Liste - バル - 开始了 - fari - ▁Fried - 我们今天 - ր - 郡 - 남 - ▁lớn - ј - aggio - ▁Leonard - ▁Vari - '700' - かれた - ▁نوع - 患 - firma - 伝統 - ▁Ring - ▁wrist - 更大的 - хэн - ▁Ти - geist - DA - ▁qo - 干嘛 - beera - কি - 受伤 - 援 - ستر - ▁bắt - ▁đề - ▁fos - 和一个 - ▁Maz - esprit - に入れ - 陰 - technologie - いま - cito - パラリンピック - 优势 - せない - ▁Whatever - grand - ▁negli - ަށް - ▁compromise - cula - ▁augen - улы - 首脳 - offici - ▁останов - Man - ھا - 大丈夫です - ▁clasped - の予想 - 旋 - かなと思います - ▁мире - ▁evolu - acqua - ▁000 - gression - 然后我们 - ▁abode - три - 教室 - ▁بم - アフリカ - 缝 - ▁Armen - 一万 - ▁Gordon - ▁Ве - ▁ranch - 葬 - числ - лага - 範囲 - ▁moss - ▁havis - ansi - ▁observer - 々と - ▁shallow - ashaka - ▁یاد - ふだん - 蛇 - 扇 - 担任 - ▁Things - vēl - 故意 - ▁placing - ▁schönen - ▁độ - reef - το - дин - 簡単 - ▁Objekt - ▁Lou - ▁situé - の一つ - ▁reporting - পা - ▁precaution - ▁Sylvia - وط - ghan - ▁judged - овы - ▁Fern - 終わった - niej - ▁довольно - lif - теп - ▁muse - ▁Harris - と共に - führer - yên - 桜 - 去年の - ▁agitation - 伟 - おととい - ▁داده - ▁nationale - へん - 很有趣 - ▁Ry - 강 - ▁erschien - 装置 - участ - をしていた - 命运 - zego - ▁lime - 我认为这 - 我看到 - 群体 - kiwa - ▁الأمر - きれい - 多么 - ▁ancestors - வாக - ▁بودند - ▁ولی - ▁treaty - ▁heroes - われた - ▁이거 - 労働 - 訪問 - 军事 - ▁gewinnen - 五个 - ▁그래도 - dama - ▁smallest - isé - amiento - пры - 统治 - 私人 - 小学 - ▁thorough - ▁рассказыва - ▁சொல்ல - gesetz - 没关系 - ▁shocked - ▁destination - ▁último - ▁ardent - 堪 - 価値 - виз - ▁protested - ▁Crow - irira - 把他们 - 菅 - ▁neighbours - ▁sahip - ▁ceux - 小孩 - ワイ - ▁ثم - 就是我 - 转向 - 爽 - ▁divid - ▁akken - 针对 - 走向 - deel - お二人 - 符合 - 演出 - 感謝 - eleva - ▁nehme - ▁вось - ▁arbeitet - ▁خلال - ▁сильно - نْ - 邦 - ▁زمان - 不容易 - ▁نشان - ゼロ - 記憶 - ▁scout - ▁normalerweise - ▁conclu - '75' - ▁Яны - ▁Nummer - 的天气 - 女朋友 - かかり - بخ - szcz - 妖 - ▁trước - 概 - ким - を見つけ - ▁foam - に行った - ▁punct - 都很 - ▁Grey - kret - ▁banish - ▁товарищество - ▁fragment - см - ▁calculation - ▁insect - ▁warten - stunde - tuig - 可以看到 - ёл - ▁concrete - ▁Bach - what - 合って - గ - ▁villain - 変わった - لية - ▁Suche - É - ▁gezeigt - ходят - ▁eran - ▁fantas - 消费 - ▁большая - ▁senza - ائل - maß - ▁lösen - ▁Marquis - ▁своим - あんな - 吃的 - ▁conquered - なきゃ - 琳 - ▁bản - ▁ilk - brand - 指导 - ▁dunkle - ▁hither - விட - ▁أنها - どうなる - 涉 - 链接 - нула - ▁resta - ▁Angelegenheit - ▁slumber - ▁hommes - ▁chatter - ▁geheim - ▁seulement - posé - ▁attentive - ▁souvent - 終了 - 時間を - ▁machinery - ▁sinner - ▁пара - 沿着 - ordre - 出来了 - това - ღ - ▁Temple - ▁мира - arrêt - わけですね - ▁шар - ▁sentir - ▁dwelt - 一把 - ▁chronic - ግ - ▁зрения - 赤ちゃん - 朋友们 - 企 - ▁Fire - лик - 파 - ▁manuscript - 運営 - 調べています - んだよね - 有限 - යේ - 这些东西 - ▁کردند - தற்கு - تور - ▁terrorist - ▁disclose - ança - ▁restrict - சை - ивает - vocation - ähne - чным - 鶏 - ▁Answered - ▁podcast - 交代 - ▁слыш - ▁Epi - 月份 - qq - 乾 - 歴 - ▁Tie - seks - ▁thật - ▁Schulen - chang - өм - чески - 新型コロナウイルスの - नी - ıldı - 空港 - ▁alliance - ▁guerre - を示しました - ▁dishes - 組み - ▁medal - 目が - 亏 - 集团 - 落とし - ▁Brig - の結果 - 把她 - 근 - 优秀 - ruka - इ - 年以上 - ▁scope - 边缘 - ▁остальн - ▁interpretation - пар - ますけど - だということです - ▁Lü - ▁drowned - ▁человече - プレゼント - ディー - 蓝色 - 的部分 - ▁знает - ▁quasi - 2% - aigua - އް - 始めました - мей - ▁torch - ▁shopping - 基金 - ildi - 一大 - 준 - 五百 - ▁sinking - limi - 让自己 - ▁entry - 会不会 - ▁cinquanta - ▁شيء - ▁但是在 - おいしそう - ▁хотите - ▁Burg - 脏 - lise - stack - 女の子 - اعت - ▁nueva - ▁riep - poro - 工程师 - ▁afforded - 交渉 - 相撲 - ▁زمین - 上がった - çar - 唉 - قام - ▁dealt - 罗马 - ▁libre - ▁terrified - んですかね - ニュー - boj - ங்களை - 雨が - 的家庭 - ▁reducing - 冲突 - problem - ▁campo - ▁telah - ຂ - 繰 - χ - коммерческое - ▁unterstützt - あさって - laki - gula - ▁Steven - ▁recu - 挤 - ▁embargo - ▁slaughter - 丑 - kono - 特征 - ура - ▁offended - 好不好 - ▁tracta - 谷歌 - ▁இருக்க - ▁Ressourcen - ▁família - ▁поскольку - pio - ▁이게 - ▁probability - jó - キング - 山さん - ▁Alan - ▁otras - ▁عليه - ▁تخ - শু - article - でいい - 술 - сси - 范围 - ▁faz - ▁reasoning - ▁seasonal - ってきました - ▁Bess - 築 - ▁learnt - werp - ▁patterns - 年度 - ▁Komp - zehn - mien - клон - вр - ▁raus - 茂 - ▁которого - ▁садовое - schläge - ▁reinforce - 爆発 - ▁Harold - ▁estoy - ரே - ▁cùng - ▁aufgrund - ▁completion - ▁dernier - проч - দিন - شون - 一代 - 引き続き - ระ - ▁fees - ▁eldest - 你不是 - 司机 - ▁Moon - ▁mevrouw - ▁Fur - 放下 - ▁parler - 黄金 - ▁provinces - strahl - টে - رخ - 统 - 왔 - vies - 凝 - และ - 大きさ - filter - ▁clue - 達成 - 申し - 把握 - ▁Fli - してます - ▁Risiko - 屏幕 - ▁partit - 检 - ário - 傘 - 的照片 - ▁subscribe - ▁publicly - 野党 - ınız - ují - ▁queste - ▁Michigan - ▁Clinton - いだ - itate - ▁façon - ▁Beruf - iteit - ▁Opfer - шил - ▁shillings - ▁offerings - ţii - 彻底 - ▁chiar - ▁stolz - 見ていきます - ▁shaft - 내 - 在我们的 - ▁所以说 - ▁comer - 奏 - ▁kamp - ▁тийм - ▁verder - ▁carrier - ▁قال - てしまった - んだから - ▁நட - 산 - ทํา - ▁Behandlung - ▁borders - ▁جهان - ▁alongside - ӗ - ▁договор - ▁Präsidenten - 我们能 - ▁insects - ▁recommended - ▁fluid - 妇女 - discipl - ுடைய - ▁mole - 最後は - 올 - ▁другом - 店の - كس - ▁göre - ▁Gefängnis - ▁consolation - ▁Einfluss - ▁sper - ▁alguns - ▁kelkaj - Su - wanga - ▁naught - ▁tako - ▁profond - hängen - いただき - こん - ▁bump - برد - 非常好 - альная - 的父亲 - ▁Orleans - ▁garrison - ▁bekam - garde - ▁freundlich - பட - ▁whither - ▁Со - ▁mater - ▁recording - ▁edil - ический - いただいた - 想过 - dió - 孝 - ▁offend - ▁submitted - ▁États - ektiv - 什么呀 - ่น - စ - ▁hazard - 他们可以 - introdu - чака - 入れた - ను - ▁prevailed - ▁goede - ▁algunos - ▁visi - 呗 - ánh - issait - ▁causing - 制定 - 千葉 - いましたが - stricken - ▁projet - ▁fortnight - 上下 - 在家里 - spiegel - modul - Comp - がこちら - ▁serait - ෂ - ▁clam - Krista - ▁Pala - feuer - ancia - 事務所 - ▁истории - 唤 - コメント - ▁клуб - ▁legisla - 扰 - ▁altri - ことによって - 讨 - ▁parlament - 代码 - ▁добро - ▁ashore - ▁tempted - ït - 标志 - еду - すぎる - 畑 - ▁количество - 帅 - ▁شروع - ▁schwarzen - 尊 - ෙන් - 高く - gebildet - nvention - 好啊 - ▁announce - ▁sinh - ▁отношения - ▁geschickt - లు - 去吧 - ▁جای - ▁True - 完美 - تنا - 魂 - ▁unnecessary - ▁frustra - marin - を続けて - اني - 曹 - 燃料 - ▁acum - ▁obliga - මු - 鼠 - JR - ций - ފ - ල් - ▁dispens - مم - ▁tart - ▁oriental - ▁вс - čno - 損 - バイデン大統領 - ▁actua - wärts - ▁impatience - ▁torrent - ▁bamb - いやいや - 垂 - 부터 - ▁faintly - ajā - мян - schütt - 知事 - 我把 - 迎えた - ▁Barn - ▁pill - つながって - ▁поводу - ▁assigned - ▁regain - latin - èl - ▁шест - ▁Marsh - ▁Row - ▁drawer - klop - 且 - ▁Gang - 문 - ▁franchise - ▁attained - ▁числе - ▁Normal - ▁controlled - 分开 - んでしょう - حص - ▁Psy - лучш - kiza - berries - 社員 - ați - ▁hwn - sional - わず - ▁swamp - 绕 - っち - ត - ▁Diskussion - ▁Hälfte - ▁promising - ▁части - gada - ▁работу - ▁capa - 我给你 - لات - 二零 - ▁verses - 这个词 - ▁battery - ▁усё - seid - زر - あい - ▁இரண்டு - 的文章 - ▁preference - ツアー - ▁Holz - گا - ź - ওয়া - ▁legacy - ▁salon - 相似 - ▁emerged - schalt - ▁greeting - ▁trumpet - 夸 - حمل - 温泉 - ▁настоящ - wedd - 離れた - ▁feeding - 立場 - аваны - ከ - 我们如何 - ▁Schwarz - ▁Roll - ▁failing - гьы - ▁guided - 診 - 降低 - ▁mı - މަ - kuta - ▁nhân - 銀行 - tagon - قات - uganda - ▁پار - ユー - wain - ච - ▁Argument - ục - vär - といって - ▁Tā - ▁spared - ▁күр - 男性は - ▁Deci - 宝贝 - んだな - bett - ▁Hunt - ▁fram - 時には - ▁pursuing - ▁thời - ▁Hong - ▁Surely - ▁seeming - 引用 - 터 - ▁Association - ▁pauvre - itatea - ▁unterschiedlich - اك - ▁neighbour - ▁дуу - 艺术家 - ▁voller - ▁guarded - たんですけど - ซ - 立って - ▁resent - 新型コロナの - ▁гри - 相手の - ▁работает - ▁нормально - ▁Alfred - polo - 疯 - unddreißig - 오 - 踏み - トマト - ▁сви - 凶 - 有用 - dorf - '34' - эш - iện - ▁Jacqu - 从小 - 也不能 - ▁என - troph - abri - ▁verbracht - ▁jolly - jana - ▁Quando - ▁boldly - 他被 - 糸 - ▁dominion - ▁housing - makers - ジー - 始めて - ▁Pick - න්නේ - 驾驶 - ▁martyr - ▁brisk - вший - ▁бара - ワーク - ▁hush - 储 - 弥 - 매 - ▁Greece - 在这些 - ▁Rub - messen - されていました - 那个时候 - 麦克 - ▁durchgeführt - ▁rocky - '65' - ራ - ▁intercourse - 滋 - 梅雨 - ▁morir - 金属 - தில் - quarter - プラス - ▁implementation - 새 - 练习 - 策略 - ▁clergy - ầ - ▁sõ - ▁bending - 饿 - ▁Sorgen - ▁statisti - син - ▁2015 - っていうか - يط - ▁тобой - Orient - 膨 - mén - 批评 - ▁exhibition - ▁tossed - スペイン - ▁preceding - ▁Nell - 主义者 - ıcı - чны - ▁Brian - ▁League - 强大的 - ▁cambia - ▁när - 崩れ - ▁twentieth - ▁همچنین - ▁chuyện - ▁Darwin - ▁hohen - ▁illustration - ▁nama - ▁supplement - ▁Karriere - ▁sadece - 感じる - 补充 - ▁Sche - ancien - conte - ▁carro - 在这种情况下 - иров - ▁Anderson - 反映 - ▁weißen - ▁cuộc - 卒業 - 物価 - ▁Mehr - を求め - ▁merge - 次は - رن - ▁المع - ▁Sí - யோ - જ - ඳ - 男朋友 - 有名 - bic - ▁Histori - ▁lahko - ッツ - 埼玉県 - ገ - ▁collapse - ▁которым - 这件事情 - という事 - があるんです - ▁петр - مور - visa - 奪 - 年前の - ▁Sup - ठ - ▁screw - ûn - ▁Gottes - 为自己 - 实施 - ище - ▁Neben - 石头 - 很明显 - ို - ặ - ▁characteristics - 不少 - ould - たこと - ▁traces - τα - 正直 - ▁guitar - līdz - ▁tropical - 剪 - ▁desirable - 子供たち - 規模 - 入れる - 芸人 - ▁starke - тая - ▁Tradition - ţă - ▁இல்ல - '32' - ▁mystic - ацыі - ▁unua - therapy - ▁begonnen - みたいに - ▁хочет - ▁potent - ▁Molly - ▁unmöglich - ▁wildly - 旨 - ▁Arnold - 彼得 - hoff - ▁Rio - ▁creek - 集まって - ▁searched - سبب - bali - ▁remembrance - 規制 - 给大家 - 機会 - るべき - ▁обще - kute - ▁Zahlen - ▁standpoint - ▁Brook - kku - лена - umbi - ▁gusta - 夏の - ▁އެ - 몇 - 용 - ボールを - ▁shan - 大声 - 爷爷 - sichtig - ▁генерал - 平静 - 廊 - ▁fé - ▁Waffen - нк - コード - NG - 强调 - ▁tremble - 计算机 - ਕ - ▁ఆఁ - ▁Coast - 你这个 - ▁demonstration - 有着 - それも - ▁일단 - мак - Now - เล - ▁requested - ▁Civil - ▁Fen - tím - ▁logical - 度过 - 的精神 - чит - ਹ - ▁anfangen - న్నా - ителей - ▁Ці - werken - 现象 - 迹 - ▁unity - ή - ▁요즘 - menya - ▁dormi - klad - 면서 - heden - ural - ▁richtige - 雨雲が - ▁declaration - ▁fir - ▁herunter - excursió - 依赖 - 夕 - シュート - ▁mesmo - ▁fueron - のですが - 结束了 - われる - ▁Agnes - 国連 - ▁александр - 烂 - ▁incapable - ▁Madrid - 3% - ▁reserved - ▁så - аваць - ▁será - かく - ږ - teeka - ▁weekend - ▁suck - 举行 - жим - त्र - ▁alchemist - ▁conspicuous - ▁Hör - とみられる - ▁Kann - ▁vrij - ▁Short - gence - ▁vault - aime - ▁олон - ▁تغییر - 战略 - 天下 - ▁waving - 保留 - фу - 寂 - vió - fahrt - ▁Entscheidungen - 两千 - ▁mansion - avoir - ▁collective - 差不多 - ▁alto - 血液 - 我再 - ▁garanti - ▁surroundings - 恐れ - ▁flush - ▁Mariya - ▁froh - いますね - າຍ - ▁serie - luck - ▁Nature - ▁بشكل - road - ladı - 太平洋 - ▁внимание - 訴え - conscious - ▁cough - ▁dahin - мова - ▁Во - ▁bởi - ▁gedaan - issement - 説 - zugeben - ▁einzigen - ▁acaba - ▁Edith - ▁சில - 建て - 阳光 - NATO - ror - ▁pueblo - ▁verändern - ▁befand - ▁但是他 - エル - ってくれ - ▁Columbia - ▁строительный - ▁Griff - ▁நகர - ▁Ding - haupt - 动作 - んだって - Instagram - ▁loin - ▁praktisch - ▁traveling - ێک - 年齢 - lili - 食べた - ▁pregunta - ▁tym - hour - ▁места - asyon - ▁Zeiten - चा - ▁Missouri - ▁всю - 楽しめる - ▁Information - ▁يتم - プレ - ▁seguir - ▁attorney - ▁আমি - ensemble - ▁successor - 无论是 - ellement - рван - 1% - 咩 - ▁zač - әл - 俳優 - ▁amusing - sobola - ▁نیز - 怎么说 - 全力 - ▁mensch - ▁abundance - ▁reconcile - middel - 皇帝 - ▁gasped - ▁கரு - アプリ - 大切な - ▁لأ - ▁sólo - diye - 出てきた - ▁Kent - 天空 - ніка - 日曜日 - ▁fascinating - ▁мнение - ▁скажу - ▁лицо - chor - なんでしょうか - ▁yabo - куль - 构 - ▁tribunal - ▁yaptı - ▁Bour - ▁despatch - ▁Tun - セル - 算是 - 沟通 - ▁Daw - 磁 - 芬 - ս - 愚蠢 - 适应 - 说得 - ▁الْ - unternehmen - ▁contribu - ▁halted - 发明 - อง - ▁Marian - ఆ - 原谅 - ▁دهد - 揺れ - kurikira - ラーメン - 起きた - ailleurs - 見えて - pek - ▁pious - ▁Force - ▁accordance - korera - 怎么会 - Look - خور - 測 - ▁efter - ijas - ново - 編 - hala - 怎么能 - ▁sûr - ▁già - 文明 - 哪些 - 賛 - ድ - ▁Illinois - かけた - ギャ - кен - ▁riu - それぞれの - ドリ - ▁hayi - ▁poate - 居民 - 守り - ▁columns - метр - ▁Bien - 生日 - ając - サル - ▁pow - ▁Martha - ▁voix - ▁adoption - 宏 - ▁Fourth - ▁থাক - ▁смог - ▁kinderen - àtic - ▁comunica - 耶 - 扭 - ▁вроде - 戻って - ёс - ەکان - 一座 - お互い - ▁естественно - ▁speculation - ▁suspended - ▁vaccin - っちゃって - ▁bloody - ▁kämpfen - tré - ▁precipita - бен - ющий - 素晴らしい - ▁barri - ▁Mini - 社交 - くなった - ▁Spaniards - インタビュー - ▁lebih - ప్ - ▁Cousin - ▁honourable - 解放 - ▁estão - ▁compass - чих - stained - 剩 - ▁cathedral - ▁نیاز - 你已经 - ▁месца - ▁baseball - ▁говорили - pov - нә - furi - schafft - 亲自 - bizi - home - ▁ayuda - ってください - putation - ▁tersebut - ▁আৰু - నా - ▁детей - ▁protein - ستی - ▁tamam - бле - gisha - ற் - ▁Seit - ▁слишком - 프 - 原子 - نَ - てほしい - ▁yose - रे - ▁numero - 芽 - ہیں - ▁йә - ▁mogelijk - ▁тое - riko - 有两个 - urage - ល - シリーズ - 契約 - 醉 - logue - ▁Rev - ▁bonnet - ▁edition - 足を - ▁Hamilton - ▁bằng - ург - ▁routine - 送到 - ベース - ▁район - ▁شوند - ▁Iraq - ރު - تك - படி - евский - ▁gelesen - ුණ - 专门 - été - ▁Warren - jski - aggi - 互动 - 時間に - ▁gospel - կ - 描 - ▁bodily - ▁clung - ▁пример - pane - 辈 - 형 - ▁mature - ▁Spitze - 近くの - ▁Frieden - ▁eagle - そこまで - ▁dla - führ - ▁frog - 对她 - ▁proto - bouw - 我们没有 - ▁Wohn - ināt - 昆 - ▁ermöglicht - 便宜 - ▁Beach - 伸ばし - عرض - 买了 - けさ - ▁diamonds - ▁happier - rote - あふれ - ▁vater - 見られ - ▁partial - 窗户 - ▁außerhalb - ▁respected - ▁halbe - ユニ - 思いを - დე - sız - 意味で - ▁часто - 当她 - ▁Really - тара - ▁Botschaft - aura - laba - кина - ▁возник - るもの - に乗って - esco - ▁ລະ - ▁Mak - ▁Mikro - ▁Top - ▁brig - ▁Night - 第四 - 仔 - 扫 - ေ - 困惑 - anstalt - ▁thorn - ▁gewisse - ▁bike - ▁höchst - 肝 - ▁aunque - ▁zufällig - овым - 陽性 - 官员 - বু - 普通の - ▁пиш - ▁winding - दि - ▁Made - ▁بده - しゃ - ▁seixanta - ▁knit - ▁Ron - 分手 - ▁leisten - 才会 - ▁errand - ▁Beweise - 保守 - ▁Sound - ▁moyen - ▁comprehensive - ▁товарищ - ▁Jahrzehnt - ことから - 水分 - eṭṭ - トル - 多久 - 辛苦 - ▁halo - wirtschaft - ▁veya - ▁Robinson - を取って - ▁exert - ▁کسی - 晴 - ▁resulting - ▁ejemplo - ▁ensuite - urwa - ▁pepper - 直後 - kê - 倒れ - ▁problème - 申 - 塗 - tandukanye - 铺 - ▁стен - 近くに - 有趣 - ▁respectful - ▁بىر - ട - твар - ▁голову - こちらも - ▁joyous - म् - 唯一的 - ▁смотрите - 炮 - ▁hymn - луча - 也不知道 - ▁relacion - ▁необходимо - 把这 - guha - 净 - ▁offensive - 冷静 - 视为 - कार - 锅 - ▁먹고 - ▁manche - ▁چرا - ▁êtes - ▁wrth - 发挥 - يون - 这个国家 - ▁нават - 指定 - ▁Victor - ▁Ez - ▁willingly - ▁Privat - двига - ފަ - ▁childish - 絶対に - 历史上 - ▁tiger - richtung - ▁Га - 㗎 - ▁guk - となっている - ▁siya - 思维 - меж - ▁edit - ▁één - 的发展 - ▁удар - ▁precise - 近い - ことがある - ▁nineteenth - นี้ - ▁çocuk - ▁unabhängig - ন্দ - ▁witnessed - оцен - ▁gyfer - abanya - bala - леч - 八十 - ▁scarlet - ▁başka - 是这样的 - ▁welcher - عي - 篇 - VTR - ▁Allgemeinen - ▁درباره - 集団 - ▁tinha - 一緒 - 来ました - ▁nieuwe - ▁Monaten - ▁organiza - − - 仏 - 厉害 - 贷款 - 居住 - ▁Schlag - 看你 - ▁Amerikaner - ▁Rede - 殺人 - 去世 - zaj - 烦 - 比較 - 재 - meza - خارج - кала - ▁денег - TO - もう一つ - 听众 - Һ - 我们想 - ▁Traum - ▁negotiations - wuchs - ▁duke - ▁handful - ▁Mara - 覆盖 - 行きます - শা - 通過 - ▁tact - ▁старо - 上来 - ▁Proto - 孤 - ▁celebra - ▁academic - ▁existe - ▁Wohl - ▁моей - ▁Barb - いたい - ▁mächtig - autor - ▁generated - ▁Tema - ▁அவரது - cushion - ▁kwenye - ▁aynı - menge - に参加 - uzo - 凤 - ▁doğru - ▁Suppose - وارد - ▁demonstr - 雰囲気 - 哇 - ▁хол - ▁تنها - ▁consented - の方に - 東日本 - mada - マンション - ▁Alas - 为此 - 带来了 - ▁vehicles - нец - ▁спас - party - ና - ▁понять - 維持 - ▁deserved - ▁Kreis - enca - ად - ော - 開幕 - 日目 - ▁eighteenth - komst - ▁herein - マル - 贾 - 更有 - ▁dealer - 胆 - 的专辑 - ▁adequate - рыва - ▁счита - ▁behave - гуля - ränk - ▁preacher - ▁accomplishment - を行い - ▁jemandem - 推动 - 裕 - ありますか - يرة - pí - ҭ - 原さん - يى - ਨ - ▁herauszufinden - 贸易 - ▁отвеча - 参考 - ▁Pie - ▁Rücken - ▁banc - 邮 - 饼 - ▁sympathetic - ▁prose - 将是 - čen - ▁Grad - ▁entgegen - ▁tất - 能源 - 提升 - Ame - '46' - ▁Ella - ▁speedily - ブリ - ▁линия - ාව - 六个 - ▁возможность - หล - город - ▁boiling - ▁harbour - ▁premium - ▁loyalty - ▁investigate - ído - ▁Strom - kennen - ▁justify - raient - 透明 - ▁ungefähr - ▁approximately - 통 - 疲れ - ▁komun - bwiye - ほうが - ▁europea - ダン - 比例 - ▁Nachricht - ▁mieux - ▁suited - ありがとうございます - ▁Pete - haired - lässig - ▁Nel - 子どもの - たくない - لَ - ஸ்ட - 盆 - ▁Benjamin - いきたいと思います - жиг - pera - ▁Ruhe - ミン - хад - рет - 広い - natured - ▁liquor - ▁dürfen - ▁tính - ▁parle - 状況を - 走吧 - ▁يكون - ▁Ал - kapa - ▁Moore - ▁thanked - aña - ▁breathless - ▁нашего - wadde - ▁üle - нием - ▁Jungen - 炉 - ▁mortgage - চি - 成熟 - 实践 - вернул - луг - ▁rejoice - 风格 - ▁первых - ▁Kongress - 竟然 - ▁heiße - 怎么回事 - 相比 - ões - ▁traurig - Ex - шай - 弁護士 - йын - AP - サイズ - ▁Version - bati - dav - ▁yaba - presa - 年ぶりの - 祈祷 - åg - ▁dreaming - けれど - 種目 - 高速 - ▁Allan - 不对 - ▁deiner - cross - சே - очка - 充分 - ▁climbing - ケーキ - ква - 始まって - ▁единств - ▁Nelson - саб - ▁මෙ - ▁commend - 脂 - どこか - ▁usage - 為 - 楽しんで - 真实的 - ptic - ერ - ▁potatoes - ▁spreading - MS - ▁assumptions - 在于 - ▁nước - ▁sakj - ▁böse - niveau - 少女 - 芳 - ▁youngest - 评估 - ▁robbed - 常に - ▁reluctant - 見られます - ▁clearing - 空間 - 高级 - спор - ▁genial - 想定 - ▁Beck - ţie - ▁хто - いこう - buk - ▁peuvent - ▁strive - ▁theories - というのを - таш - ▁Como - ▁dụng - 양 - ▁waarom - ▁حول - 触れ - 险 - ▁slap - 勃 - ▁Unfortunately - 死者 - ▁dirt - ünü - けた - 限定 - 真ん中 - 他把 - 評 - zional - ▁nuit - 句话 - ることができる - ▁Colorado - ▁ehemalige - ▁equality - ▁Krebs - ▁kişi - graben - ▁wichtiger - ▁Gy - ▁Qual - ▁allies - ▁inspire - ▁делает - ności - ▁Realität - 成分 - ほう - 踊 - 叙 - ▁uncommon - 悬 - ▁siege - тим - हि - ▁Umgebung - 了一下 - chod - meɣ - らせて - ▁quá - kurs - ではなくて - ▁milion - ▁Gent - 其中一个 - 慮 - flamm - indre - กล - ▁subsequently - Domin - ▁anterior - ▁horrid - ția - 释放 - னு - カリ - 会让 - ▁Blake - 日まで - venue - 肯定是 - 仕事を - 国防 - ▁sergeant - іль - oloji - 实验室 - пот - ▁Eva - ологи - イタリア - ▁shrill - 这就是我 - 食感 - ▁tej - 更高 - くれる - 伟大 - 龍 - ▁Navy - 做到这一点 - schluss - ▁einiger - ▁beobachtet - ▁гаражно - ▁salva - 美しい - 自从 - BA - فض - fanya - ▁학교 - 発達 - ški - Reg - ▁singer - 心脏 - ▁sina - ▁Tea - 稼 - luft - ľ - ▁fulfilled - ▁anstatt - ▁Mme - ▁sombre - 売れ - ▁forbidden - グル - ▁Ruf - 不确定 - 毎年 - 这也 - 縁 - 售 - ▁remedy - ▁violin - 太陽 - ▁එක - ゆっくり - ▁ноль - ▁foresee - ʻ - 長く - ▁наступ - ▁үҙ - ▁sprak - τη - ▁rebellion - 週末 - ▁tế - ▁Try - 也就 - রু - ▁sanction - ご飯 - anje - ▁знаем - ▁точки - ▁kubera - 拿出 - ▁vacant - ▁ມັນ - ▁Antonio - ▁prev - さんから - ▁heures - ▁oficial - yama - ▁Herbert - 丘 - cado - 着急 - ▁sweetness - ▁来一首 - 可以说 - 或者说 - ▁naval - ▁veins - 蓄 - ਦ - ▁Ländern - خان - ▁arribar - ▁meek - 大使 - ▁Divendres - ▁Tanzania - いかがですか - 伦敦 - 头脑 - 歌手 - 人群 - 知道你 - ▁страна - ▁Silver - ▁actively - ないか - ▁Wesentlichen - ▁sect - ராக - ▁Jew - treffen - 言われた - ▁neden - 他已经 - ހ - ▁случай - ჩ - ▁некалькі - schieden - Franc - 听着 - 团体 - ▁corpo - straße - ியது - ▁hilft - ţa - ▁telegram - が好き - 作戦 - オーストラリア - ▁насколько - ▁deprived - biro - 听听 - 正好 - лева - ▁hesitate - 可怜的 - быт - 虐待 - ▁arbitra - ▁confront - ▁Gli - 一遍 - 如果你想 - ▁priority - 有利 - ▁wanneer - ▁hindi - 返し - hwa - 年後 - ▁float - வ் - 次に - ▁nơi - ▁explica - pfa - ダウン - ▁swore - уха - 延長 - ێن - もらいたい - lette - ▁kiuj - ありますよね - ▁luz - ▁регион - ▁experts - 系列 - 共产党 - ▁tissue - ▁schwach - バッ - ▁komm - を使い - ▁haunted - ▁années - かぶ - عَ - 喊道 - zaam - anın - ▁pinch - bericht - するか - ▁đổi - ▁desolate - 的经历 - казаць - ▁üç - 働く - 年ぶりに - gesprochen - ▁amidst - 强大 - ▁figured - ▁добры - ▁tooth - ▁massa - 離れ - ▁Cole - ▁solemnly - 三振 - 很好的 - த்துக் - ▁encontrar - ண்டி - 动机 - ▁cultivated - embla - てない - ▁veux - ▁Marilla - ▁resolute - іі - неп - ▁jaroj - grado - ペース - 帰り - ▁weinig - ▁hearth - ئة - 眉 - ▁ошиб - 成绩 - ▁حتى - ▁poble - ▁dieselbe - ▁exchanged - ▁prova - 頑 - ▁هستم - ▁Rahmen - ▁emerging - ▁Dokument - ▁cảm - ▁Cela - メジャー - 拉斯 - の裏 - wirken - ▁Industrie - ▁Zweck - 然后你 - 安慰 - 上空 - гээр - лым - ▁Яна - などは - 娱乐 - ▁contradict - جار - 你真的 - ▁Haushalt - ுகிறது - ▁reflecting - ▁filo - 他们没有 - ▁Hij - ▁тихо - かす - Con - ▁Verfügung - ▁innovative - ▁disconnect - schel - ▁میں - 失望 - ▁circul - رح - ▁nhau - 感激 - aurait - 効 - kabi - ▁erzählte - 棋 - ▁remembering - ▁retour - ▁habia - 旦 - icamente - Esp - ▁cuenta - perto - 你跟 - ればいい - ▁никак - gewicht - ▁konzentrieren - ▁Ausbildung - スープ - 报纸 - ▁தலை - ছা - 場所に - щик - 的生命 - 될 - 有点像 - 几个月 - ných - ▁Universitat - haven - ẩ - ▁möglicherweise - ▁posterior - 夏天 - ▁specially - 復 - ática - ▁Hü - ▁verdade - ▁normally - écri - ▁correspondence - 今から - 聚集 - 向上 - ▁relates - utiliz - 写作 - ▁союз - ▁Kun - 時半 - ▁thereof - бат - modell - ▁gotta - 手術 - ۋ - かわい - ▁slice - ▁Zug - 変更 - جب - ませんか - Can - дні - Ө - ゲスト - 的内容 - тав - аас - ▁Kirk - ▁contend - ▁persistent - ▁suffice - ▁consumption - ▁gloves - ▁coral - лык - というような - ານ - ▁circular - 里克 - 会合 - kwiye - လ - 抚 - ▁хоть - плы - ▁الان - に加え - っつ - ▁දෙ - ▁erinnere - 合适 - ▁څه - 联盟 - ▁cautious - iyordu - ▁bitterness - 覧 - ▁ببین - General - samkeit - garten - ህ - 仓 - ▁tìm - ▁australi - ▁Oz - ▁répondit - ாங்க - 分配 - 成就 - 了起来 - ▁watu - ▁ddim - 物种 - щих - kire - ▁ruled - ▁Pep - ▁fanden - ▁quarterly - 来讲 - 基準 - ▁construir - mita - ▁baixa - profit - ▁Fähigkeiten - アイデア - ▁diwujudk - 違います - を巡って - 知り - oxy - ▁Kli - 两次 - ▁viola - 涨 - овали - curi - ätig - смотрел - 働いて - ▁dicho - ں - 克里斯 - ▁secondary - ▁mujer - ▁impine - ラスト - 惑 - くり - ▁greatness - 키 - 牢 - 授業 - ▁viva - ▁Wur - 尼亚 - ▁exertion - ▁identific - ▁curtains - vana - оруж - රි - ▁lachen - 場所で - vojo - mö - ▁yapma - Pf - ▁cooper - 我还是 - ▁corresponding - ▁Khan - ▁pony - ▁Annie - ▁shrewd - ▁найти - leistung - もあって - ▁diğer - شە - 見てみましょう - ▁относ - ▁bidding - 服装 - 国外 - ▁alsof - 否定 - erade - ▁parlour - ▁কথা - Risas - ▁gamb - 教师 - 他想 - 協議 - ▁bekommt - ▁regulate - 一口 - 杀死 - ▁сказали - ▁آر - ▁Түүний - 罚 - ▁tightly - asso - імі - ій - ▁salary - ሽ - 気象庁 - park - kab - 筋肉 - 人士 - ▁privat - 識 - ▁nachdenken - ▁Karte - ▁rhai - ▁евро - ▁меньше - ツー - AD - 杆 - ▁gefährlich - ▁schätze - 四年 - gä - زى - ▁disagree - uvi - ▁starve - давать - ▁dagegen - ▁ufite - 見通し - ▁большое - ▁süß - ibyo - ▁slay - әһе - лекс - ▁vais - ▁summary - vēr - даж - bourg - හා - 純 - びっくり - 一辈子 - 企画 - vidi - ▁வந்த - Hey - 大卫 - 玩儿 - ▁câu - ▁occurrence - 咪 - 끝 - 纷 - 研究所 - 的确 - ▁assuming - ▁어디 - ▁roast - ▁склада - ▁Ernest - ▁Gate - ▁carries - ▁parcel - ▁бага - ▁திரு - 煙 - 摔 - phra - ▁Dorf - 分かり - 你不会 - قان - の仕事 - ▁spielte - ▁Planeten - 随后 - ▁pavement - 現状 - 共有 - 提案 - 随时 - ▁falta - pida - juk - ▁Mabel - ▁cautiously - 呂 - θ - 脾气 - 期望 - 浪费 - ranye - 地图 - ウイルス - 神奈川県 - chair - ▁elfu - বো - この時間 - ▁yake - ▁remorse - gá - rong - ▁uitge - いくと - ▁обрат - ographie - ▁benim - пок - 六十 - クリスマス - ثلاث - ▁되는 - fico - ▁proced - ▁있는데 - ▁McG - 逮捕された - ▁integrated - ▁fiery - brun - ▁influenced - あた - 誤 - ▁ນີ້ - 仕組み - ▁Haut - いかない - ▁relieve - 准确 - zuführen - volk - ▁مې - beck - ▁steward - grä - ▁temperament - ligi - 七十 - 緩和 - 阻 - 鎖 - 培训 - ▁Donc - 到现在 - ▁Stimm - گه - 电台 - ▁picturesque - ▁Zeichen - ضحك - ▁fing - ▁دارید - ▁manière - ▁transportation - чку - скры - ▁dritte - кур - 主意 - flug - ▁diferents - ▁stray - 消费者 - ▁Kamera - 两人 - ▁Afghanistan - 拔 - gah - ▁рядом - కు - த்திற்கு - ▁señora - になってる - ▁umut - طل - ಂ - ▁distin - 华盛顿 - ▁Außen - ▁Molt - ▁converse - යා - 業者 - тэн - ▁моему - 为我们 - হু - ических - ▁stalk - ついに - وجه - ▁combine - 奴隶 - ▁supposing - రా - ▁Beziehungen - ▁insanlar - ▁bloß - 董事会 - 볼 - مدينة - ▁знаком - خص - ▁trotz - ▁inventor - 供給 - 不是我 - জে - ▁arriva - 緊急事態宣言 - ▁artillery - ▁chariot - ジャン - ▁cabeza - ▁productive - ▁Gerald - ▁Persian - ▁Corn - ▁fapt - 廃 - 渋谷 - ulated - ▁machten - ög - 50% - ▁Martí - 避け - ▁banner - נ - ှ - さえ - ▁nebo - ▁Sozial - もうちょっと - ▁پو - wohl - ▁verbessern - ▁мозг - ▁kommer - ▁impart - 太好了 - 겠지 - ゅ - 에는 - マス - ▁juist - ▁அமை - ▁показал - 勇敢 - ▁Within - ▁shelf - kami - ▁Schle - ่ง - ▁ответил - ▁Pray - drop - 洁 - альна - ▁tira - ファイ - funa - ▁ўз - 歇 - ▁Kansas - ▁genre - 统计 - ▁fulfil - ▁Tä - 온 - lardan - 広がる - leiter - ▁determina - 叫我 - कि - ▁vedere - 拍摄 - иков - 低気圧 - ▁Thursday - ою - 的父母 - stol - とかも - jährige - 给自己 - hali - anoj - Gelächter - ▁раньше - лося - bow - related - いらっしゃる - يه - 不是说 - ▁zufrieden - 違反 - ▁který - ▁aufgenommen - ▁آل - лаш - 其实是 - 提前 - 翻译 - ती - ▁commanding - ▁Einer - ▁spider - ▁كه - 出于 - ové - 金曜日 - ▁Pakistan - ▁niemals - ừng - マジ - ▁intensity - лыг - ▁passar - ىدۇ - ▁conventional - ▁bewildered - ▁hört - 選ばれ - ▁allowance - ҳа - ▁lorsqu - 震惊 - そこは - ▁tiek - iez - 名古屋 - ̣ - ވަ - ▁كبير - 队长 - யல் - ▁focusing - ▁draft - だよね - 対策を - 浩 - 抵抗 - ▁أحد - ▁Cast - 澤 - iĝo - දා - ▁ringing - ▁youthful - 響 - ▁geloof - ▁canvi - ▁vegeta - 代理 - ▁reconciliation - ▁мужчин - ▁jungle - 民間 - šķ - ▁trov - ▁matin - ▁sentimental - の写真 - 世界各地 - ▁trouver - ወ - ▁Struktur - ▁awhile - ▁jaun - ▁steeds - くなります - организаци - 昏 - ▁français - 钻 - れている - ▁helmet - ▁progressive - tuv - ppet - 温柔 - ▁elderly - ▁Beat - も多い - 置き - ড়া - Rires - ً - ▁Auge - imbi - ▁bộ - ▁dimensions - ckel - 亿美元 - ▁corporation - 两位 - 日々 - ▁zahlreiche - ▁saith - ▁Allah - clama - ას - ▁Besitz - ▁năng - ▁Verwaltung - ▁fugitive - ▁скажем - ▁optimistic - 予算 - ▁parade - が非常に - 不明白 - лета - ▁mining - ▁thay - ▁Mission - っていうのを - ობ - ▁einzelne - 烤 - えない - ▁apenas - ▁Selle - gemeen - ▁Turning - 为他 - ▁круп - ▁сторону - ▁glancing - 길 - 安全保障 - 充满了 - ▁Vincent - ▁pensi - ▁muti - 側は - 揭 - ար - 在过去的 - fí - 的看法 - ▁intervention - 偶 - 古代 - 奋 - 執 - mädchen - ▁resemblance - articula - வரை - ▁nosso - ▁контрол - 类型 - ▁kubona - ▁polic - 晶 - ▁재밌 - ▁стало - 这个世界 - 太郎 - 驚き - ▁plague - 你先 - 的变化 - 全世界 - ▁Venus - ▁verstanden - 狙い - ▁recognise - 最喜欢的 - 増えて - pta - ▁diversaj - を認め - maschine - 梯 - ▁pourquoi - ▁Joel - ▁drap - ▁leise - ▁terrace - に移 - ▁hablar - 保健 - あすの - ▁disput - ▁traitor - seitig - 躺在 - ▁fuera - torial - 膜 - ப்படுகிறது - 開いて - ▁Ocean - kama - щен - ▁debat - らん - その中で - مەن - 一歩 - 的时刻 - ▁delayed - を食べ - ת - ਿ - ▁funkci - ▁Erfahrungen - ▁seule - няў - 头上 - ▁sullen - 誉 - ▁riot - ▁каких - ▁Castell - vè - estro - kenntnis - 这个故事 - ▁அதிக - 赤い - ダブル - ▁mau - 我又 - 一个很好的 - енә - 言うと - ▁тупик - coli - ▁Sala - тыш - ارات - 時代の - ▁assisted - চ্ছ - ▁sever - 过去了 - ▁Milliarden - についても - ▁Peru - කි - istischen - ību - を含む - କ - ▁định - ▁Fortschritt - льная - రు - 我爸 - ことができます - эння - ູ - 干净 - ▁Spaß - だいぶ - ▁Golden - ▁schneller - ▁successive - ▁marsh - ▁adhere - âr - िक - ▁cavern - ▁дух - ▁discretion - ▁flexible - 少なくとも - 耳朵 - ▁invasion - шат - ▁gaining - 有任何 - ▁плане - ▁Fitz - ▁ними - рыв - ӡ - ▁gigantic - ▁важны - ▁пойм - ències - ителя - ۋا - ▁undertaking - schicht - ▁impress - 大雪 - ▁onward - ▁Committee - 唱的歌 - ▁loath - ▁나오 - ▁wengi - ▁Öl - ▁awaiting - 主張 - लि - 取决于 - ▁Experiment - 高騰 - ▁таго - nombr - ▁아무 - ▁prv - 電車 - tically - ĕ - ▁bail - ▁milieu - ▁streng - gekomen - ▁leiden - ▁تحت - තර - halen - ▁dorthin - 本部 - ないという - мес - Смех - ▁scrutin - 在线 - 皆さんに - 円安 - ሁ - ▁Öffentlichkeit - әт - kili - 的角色 - ▁sujet - ság - ▁Dijous - étude - 物理 - ER - rito - 我先 - ▁subsist - ボン - 沼 - ▁gefragt - ▁maître - ▁veteran - 観客 - ▁diversity - umbu - িয়ে - ▁rechten - ▁menace - nimi - ▁unfold - ▁seien - ▁Academy - ▁sekolah - ▁accounting - ▁Miriam - たいと - 勇气 - んや - ▁fright - 控え - 上学 - そうですよね - ▁Wissenschaftler - ▁Leon - ▁retorted - овыя - ▁Feder - ▁importantly - топ - 也好 - ▁folgte - urile - 拳 - ▁Chamber - ▁terrific - ▁incentive - が続き - ▁gezicht - ▁Tau - ▁вокруг - ▁plunder - ▁показыва - preis - ▁rubbed - нікі - 上げた - 纹 - ▁comparable - ▁roedd - ▁кост - ▁descri - 部长 - baho - ▁quiere - ▁conveyed - zette - ▁byinshi - ▁Murray - ▁пусть - ▁virtual - 驱 - しまして - ▁Ро - ▁dominant - 稿 - дала - ▁đâu - ▁courtesy - 合法 - ▁энерг - ▁orbit - ▁получается - ▁policeman - خاطر - ▁болно - bă - 体重 - 注意力 - 織 - 됐 - 变得更 - ▁laatste - 郭 - ▁Richmond - 是一名 - ▁Produktion - ▁смысле - あたり - 动力 - ▁hiyo - 試験 - 痴 - 柏 - ▁harp - ▁angrily - ▁стали - ったこと - ▁deliberate - ▁мае - 地球上 - appelle - 丰 - ▁yani - 柜 - үй - ▁Patrick - ▁begon - ▁Sau - reɣ - ▁vegetables - を挙げ - 느 - ▁mysteries - ▁voran - ▁nossa - ▁speeches - さぁ - ▁yella - ▁Hinter - ▁už - ▁sotto - ▁bestowed - 起きて - ▁beautifully - ▁Denk - 侦 - ましたけど - ச்சு - হে - xel - 第五 - 的音乐 - шчы - లో - цый - ▁Nta - に入った - この辺り - ▁Daher - ▁algorithm - ▁begannen - Friend - 私たちが - ▁Questo - カット - ▁عام - ▁Cher - '37' - 見えない - ▁gesti - ▁commis - grond - 가지구 - ほんとに - ▁preach - хоз - ▁airport - '3000' - ▁hierher - アナ - ▁doubted - 所属 - zusetzen - ▁Jeanne - つながり - 滅 - ации - 隐藏 - ذهب - 差异 - ნი - ško - ایم - éré - ▁Kentucky - 逐渐 - umugore - ▁fiercely - ▁sensor - ிருக்கிற - වේ - iều - ▁известно - ▁dreary - ▁embark - ▁energetic - zze - чар - ▁deceive - 比我 - ▁Alma - ▁پیدا - ▁Vision - А - ▁chorus - に入り - おいしく - 鍵 - 坡 - ▁overwhelming - ▁Hintergrund - 建設 - ▁немного - มัน - AT - hog - kiye - page - ちゃんは - ▁Sachen - ▁yali - ▁deepest - reiche - щие - ▁capability - 誕生 - ывают - viel - ▁übrig - 图书馆 - ন্ত - logi - ▁Flug - コロナ禍で - 阿尔 - ▁junto - ာ - 绘 - っていうふうに - ▁Untersuchung - 但在 - astro - 免疫 - tuma - ES - streit - 政治家 - ってきて - ▁communicated - ▁Gregory - 遊び - ▁ئۆ - letter - лян - 工夫 - ▁provoca - 我们看到 - 工事 - ▁hospitality - 信心 - ванне - ір - ޮ - ▁refrain - ▁meditation - ▁phát - ▁Luke - ▁الغ - 祥 - スキー - ▁futuro - 给予 - ▁самый - 歳で - سون - glaub - выше - 桶 - risti - ង - 舌 - kämpft - 牺牲 - なぁ - 的例子 - を目指し - ভি - тир - ▁Firefox - ▁diferentes - gescho - 観測 - ▁erklärt - ধ্য - をやって - 下载 - 的基础 - ▁живот - 供应 - ▁prudence - 法庭 - ▁hoarse - 便利 - ▁joyful - ▁raid - bril - geza - öpfe - ▁woorden - ▁вполне - ▁incessant - ▁Field - 爆炸 - 逆転 - има - 並み - ▁países - ▁bestow - 著名的 - 成为一个 - ▁Biz - 打って - ▁Microsoft - ▁увидел - ▁பகுதி - 资料 - を持ち - ▁complexion - すみません - 導入 - 上次 - larga - ▁Operation - ▁quiero - したという - ママ - お話を - خواهی - ▁пятый - ▁necessit - ▁Methode - 的女儿 - sprache - 大小 - ▁tante - alina - хгүй - ▁approve - OS - 就是这样 - 伪 - 董 - ษ - 找不到 - пир - 陽 - 深入 - ▁travaille - ▁gast - ▁rhan - ▁infinitely - ستخدم - ヘル - ▁Graf - white - ▁Sehen - ▁vient - ▁أخرى - ▁calendar - geschäft - ▁compact - ▁Institution - 方がいい - カップ - леш - बा - みてください - ▁redeem - ď - 旧統一教会 - 皮肤 - 艰难 - schijn - 三百 - лок - ないといけない - ▁Half - ывает - gelöst - 姿を - いける - ▁swallowed - ▁warrior - ложен - وص - 坊 - ▁உம்ம் - ▁Foundation - ▁Melbourne - バーディー - ▁lazy - 産業 - ▁trauma - ▁splash - 中には - ▁cheerfully - ӧр - 就这么 - ▁Pay - نز - эж - ▁wanjye - 纪念 - 之一是 - ▁hỏi - ▁газет - 婆 - oiled - ▁geschlossen - ▁vocal - NA - 中でも - ▁mujeres - ▁realidad - ▁conversa - பர - ▁Fällen - ▁Shan - morph - ಗ - 咬 - ▁наблюда - ▁adapted - ▁deinem - tijd - ▁välja - 熱中症 - 串 - 它们是 - ▁வரு - いること - ▁наше - ▁eagerness - 请你 - ▁şeyler - 並んで - ښ - 천 - ▁አዎ - ▁isolated - ▁prairie - ▁carved - ▁industries - ▁яму - ▁veure - ▁verdienen - ▁pouvoir - の様子 - 複数の - ▁impatiently - ▁futbol - дают - 俺が - ▁そのため - 是非 - ▁eternity - ごめん - ▁Theorie - хим - 外出 - ▁hafi - ▁pulse - 支配 - 严重的 - ▁Калі - ▁probabil - 前後 - ▁Business - 最新的歌 - ▁comrade - ▁morality - ▁chemin - アリ - gou - みんなが - chá - vina - ▁Instrument - 使える - ゃ - ▁strand - vik - ▁dret - 又是 - 马克 - 행 - ▁invariably - ▁Portugal - ىدى - ▁хүр - ▁Té - ▁melted - 遇到了 - 言われ - 袖 - ▁Dw - の中から - ່າ - が確認され - 履 - 贡献 - ঙ্গ - ▁Orte - ▁рабоч - овые - ▁اول - ▁crude - stuk - 摘 - ▁Foot - 會 - ▁difficile - كار - ▁transferred - lph - 不一定 - ший - ▁setanta - ▁мере - material - 買って - ▁Manchmal - ▁sponsor - ▁verkaufen - ulin - しん - がたくさん - ▁disposal - ▁международн - 多少钱 - マネ - ▁quarante - gültig - ▁erected - гээ - ▁frighten - ▁hurriedly - 中止 - ▁Torre - ▁eerst - ième - 箭 - ▁trousers - ▁Aspekt - ▁draught - rischen - fowl - ▁Worten - ▁sunny - ▁rencontre - ▁Lle - ロケ - 市では - ME - із - ▁dije - 残る - īja - ▁Tower - 艘 - ▁pê - ▁사람이 - ▁fürchte - 多大 - '33' - ▁argued - යට - 某人 - sigur - grown - ılı - cò - igeze - 唱歌 - ಪ - 次々と - ▁fulfill - 遍 - ▁mesure - gawa - ▁Ми - 筹 - 놀 - ▁Vertrauen - équipe - ▁Kontakt - ▁Isaac - 的目的 - ▁constitutional - ▁حرف - الات - 人工 - ▁erhob - ▁swinging - ▁dissolve - ▁feedback - ▁noranta - 复杂的 - ▁общественн - geworfen - ▁Despite - IC - ▁erster - を発表 - ໍາ - لې - 尘 - 试着 - 信用 - ▁tribute - ấu - gezeichnet - なのです - dito - 经历了 - ▁города - 电子邮件 - ▁schützen - 冒险 - లే - لعب - ▁diffus - ▁extending - ▁квартал - 瞬间 - сидел - ▁sử - liwa - ▁zusätzliche - ▁thoughtfully - ▁dolor - পে - 都内の - ▁behandelt - ▁electronic - 先ほどの - ▁fossil - ▁Quina - 災 - teko - ▁acres - ▁گروه - に関しては - ▁camino - 戦闘 - vangen - ▁stirring - ▁mourn - ▁appreciation - انية - 回忆 - ткі - 出口 - ▁consul - லோ - ▁направлен - igkeiten - ைக் - ▁Testament - ▁representing - ▁marching - يين - ҩ - 외 - 警報 - vī - ▁multipli - 我开始 - ▁grind - ▁Meister - 这份 - ▁женщин - ▁barren - 我确实 - тей - じゃないか - ▁тяжел - 良い - بور - weite - 难以 - chè - 挣 - હ - 稍微 - ▁elevated - 大規模な - まれた - ▁molecule - ▁rwose - ▁slechts - 惜 - ▁influen - icul - ▁bast - 務め - ▁món - 说的是 - ▁unseen - ▁stain - ई - ステージ - 培养 - 決まって - '48' - ▁möglichen - はどんな - clav - நிலை - わし - ってしまう - ▁کجا - ▁trivial - がついて - ▁historic - ▁vierzig - ▁accessible - ▁skies - ▁همین - 一篇 - ▁figura - ▁interaction - ucu - ▁Forest - ▁leider - ▁também - 泪 - ▁Technik - ▁dernière - ▁reicht - ワード - ▁komme - 钢 - 闷 - ▁agencies - ▁polished - ▁предлага - ēr - ▁verstand - ▁chocolate - 沮丧 - 乔治 - しているということです - ▁stump - ▁primo - ▁odor - ▁knelt - ▁dirigi - デー - 杰克 - 客人 - 因为她 - ▁puzzle - が出た - 怨 - 盐 - 与此同时 - ▁کمی - ▁quina - 问他 - ▁اسم - ▁gender - 或许 - ▁counting - 今大会 - ▁އަ - ▁Tamen - दा - ▁треб - пс - ▁науч - ▁exhibited - umuryango - 包含 - ▁llarg - ▁Kanada - 결 - ようになって - bella - ▁rejoined - ▁кап - いただきたいと思います - ▁fain - ▁vinden - ▁এবং - ▁penalty - ັກ - ▁بأن - にあります - ▁Mala - 向こう - وضع - 我需要 - ▁grape - ▁курс - скіх - ▁crust - kutu - 态度 - ▁تش - '=' - ▁EBITDA - 悠 - 져 - ▁Officer - ▁ممکن - 元素 - ▁trường - を前に - ▁установ - んだね - ▁enthusiastic - ▁wail - ▁assumption - ▁Minute - ▁które - ▁wütend - 觉得自己 - ▁mwy - istisch - 保険 - ▁plank - ▁floated - ▁نور - ▁jaren - ▁console - ▁null - を求める - ▁expenditure - ▁rosy - ambul - 连续 - boden - singa - ▁daraus - 冷え - 吵 - Ю - ▁Tuesday - ▁Levi - ▁exclusive - ҭа - proof - ▁demonstrated - ▁Tennessee - 菊 - 邪恶 - ющим - жат - ▁casting - foje - 说服 - ▁comic - 牵 - ▁desperately - ということですが - 专辑 - 弟弟 - 斗争 - 喝酒 - ▁interrog - 少数 - ▁database - ▁humility - ▁soort - жение - ▁august - 的第一个 - مرة - ▁carrera - ▁innumerable - ▁Grenzen - 時ごろ - ▁freight - ▁Data - EM - 覚え - かさ - 后果 - クレ - 炒め - пат - ▁பேச - ▁করা - ਤ - ▁тэгээд - ▁verkauft - ▁weltweit - ▁Marine - ▁kerja - ▁Cul - schlaf - 灾 - 謎 - ▁거의 - ▁dispose - ▁minim - taking - ▁склад - ▁unnatural - ▁Sophia - 違って - ▁Unión - 整体 - 溶 - تَ - ▁ikintu - ▁cinquante - ▁passive - ボード - 跟踪 - ▁relating - 戏剧 - 解决方案 - 精彩 - ▁Montag - ▁гэтай - 青年 - ▁Alban - ▁главное - ▁Imagine - 寒さ - ▁Todes - 他现在 - ▁congregation - 等你 - ▁Station - сет - righteous - かわ - ▁amiable - ▁Genau - ▁praying - おなか - ▁Är - ▁dadurch - бед - ですとか - ▁bieten - مُ - ▁Cafodd - 喷 - rijk - ▁tiến - ▁radiant - 回头 - ▁dispar - ▁Serie - ▁xem - ショー - ▁dintre - dokument - ▁samo - gezi - ▁sublime - ▁awfully - tev - ▁transformed - 赖 - 稲 - だとか - ▁gewoon - ▁fotografi - ших - ▁Kenya - ▁vya - ▁annoyed - ▁starb - స్ - ▁hàng - 証明 - 最新の - ацыя - ▁видите - パリ - ▁bucket - が生まれ - 繁 - 读书 - 不相信 - ▁слышал - ▁gasp - ▁vicar - 段階 - 癌症 - tığı - アイス - 咨询 - 带来的 - 之下 - まるで - sorge - ▁Schulter - ▁хочешь - 欣赏 - ▁olur - medic - ▁Larry - ▁Twe - ▁hả - ▁درست - ▁Mayor - ▁Everyone - ം - ຽ - 性别 - ▁Plato - horse - ▁decisive - 先発 - ▁praw - ▁bubble - ▁класс - 尝 - ▁interrupt - ▁système - ▁Marcel - 多数 - ▁einzelnen - イチ - افت - ▁ہو - 衝突 - ▁zeg - 原始 - ▁afge - حب - ww - 漂亮的 - ▁Engineer - koloni - قار - ▁figli - ▁quaint - ▁вчера - 现在我们 - ▁буй - ▁admission - 安倍元総理 - ▁Bobby - вался - ▁Hollywood - pion - 一些东西 - ▁philosophical - ▁specio - American - 心中 - office - wè - движ - 出发 - burger - ▁unworthy - ▁скорее - 발 - ▁같은데 - جنوب - ▁trở - が続いています - 市内 - '38' - ▁Crist - ▁отказ - IP - hurst - ▁برو - 麺 - ▁signature - হি - 獲得 - ▁Männern - buye - 見ていきましょう - ▁nke - фан - ▁ئې - ▁Aaron - ▁ප්‍ර - ▁linked - ▁snapped - ▁Mine - чки - පා - 做一些 - 将会 - ▁coffin - ▁Nya - してしまう - ruff - の前で - ▁வந்து - 街道 - ić - ศ - 骄傲 - ▁eingesetzt - が必要です - وک - ▁такую - geteilt - ▁concentrated - 贝尔 - ▁heutigen - ▁Wid - なければならない - 掩 - ▁vuitanta - gebrochen - ▁nourish - ▁tourist - ▁plein - шан - 轻松 - 札 - ▁Twenty - ▁öffnen - ▁Solomon - ▁ຊິ - esque - ▁слушай - ▁putea - ▁صورت - 落下 - 詐欺 - ▁социальн - 相同的 - 焼け - konna - ▁канал - ▁Personal - ▁hurrying - ▁Sandy - ▁Houston - ▁grandi - ứng - ▁examining - یده - ▁enlarge - 完成了 - ▁punch - に来た - прэ - 柴 - 連携 - 両親 - ▁Wohnung - ▁Ansicht - ▁Kelly - 的房子 - ▁உள்ளது - ▁страх - ▁kicked - ▁monte - ▁Zwischen - ிலிருந்து - ▁сельсовет - amazi - ▁Konzept - ロシアが - ادة - ladi - ▁Mannes - に合わせて - 花园 - 考试 - schwei - 列表 - ▁psych - ▁facile - gali - ோம் - ▁inspection - ▁européen - 她会 - kaka - ▁Gill - 西班牙 - ▁Jay - 哀 - spruch - тээ - ▁faci - wego - origine - を出して - க்கிறது - 而已 - ▁зай - なども - écha - ică - gänge - 惩罚 - ▁bacteria - ṛṛ - ▁Hold - ▁Indiana - ▁grasped - ▁Geräusch - एको - лыш - ▁bored - дзень - ▁бесп - ▁Jenny - tiere - ▁fashionable - Plattform - ▁banquet - ही - ▁educational - ▁venu - ▁kvar - ▁tailor - ▁primero - ▁журналист - ování - ▁startling - 说的话 - giv - 砲 - 申請 - 昌 - 裤 - ▁loko - ▁llibre - 新聞 - を出す - ▁nannte - ▁feminine - ▁yiwen - 看待 - ▁sneer - ▁blog - 土曜日 - ▁conjecture - ▁девятьсот - ▁Señor - ▁balloon - 相处 - ▁aggr - 即使是 - ▁entschieden - رز - bürger - ▁Jonathan - 严格 - ebla - 这么大 - 心理学 - čni - パラ - 一套 - ▁underground - ▁phần - пустил - ıyla - ▁gihugu - тәр - 不是你 - ▁Muse - 定期 - าน - ▁pioneer - ▁thông - ogene - کلی - ▁உட - ▁persisted - 高級 - はっきり - ▁öğren - を取る - 工厂 - 进化 - щения - ▁Hari - champ - ▁mourning - ▁erano - 資金 - ▁военкомат - 帮忙 - ネタ - ▁ignore - 今度 - žu - نے - 面包 - нап - 岸田 - 衝撃 - ▁hejuru - 石川 - hund - wähl - খে - ▁translated - boten - körper - 働 - ▁perdu - ▁frühen - ▁ragged - தொடர் - ▁storage - 万美元 - 僕ら - ▁kuvuga - مند - 使得 - ▁earthquake - ▁внутри - たちに - yorum - альных - ▁ааа - ▁женщина - 拓 - 激しく - ▁Cada - ▁Holmes - ▁readiness - ▁entrar - ▁Finn - 从事 - rän - ▁образова - ▁olive - terri - aquell - ސ - 对不对 - ▁شرکت - stürzt - ▁whatsoever - ▁此外 - ▁Chu - से - ่ว - 轨 - ▁cancel - ▁waiter - キャン - 有事 - 北陸 - ▁manifestation - 的大脑 - ▁Gulf - 爆发 - 今年の - ▁زیر - ▁digest - ▁polar - 岳 - 点儿 - あるんですけど - ▁glittering - வெளி - ▁چون - ической - ▁parlor - 逻辑 - ▁реши - 摆脱 - ▁nearby - ▁Debatte - ▁lucr - 葡萄 - ▁Erklärung - ▁signor - 农民 - miento - ▁poetic - ली - மைய - ▁Fach - ▁clothed - ▁sanit - 不幸 - where - ấp - 识别 - 겠 - ▁dismal - cession - 我才 - 叛 - ▁связи - tiene - 之中 - ▁musket - ▁legitimate - 立法 - yerek - ディング - ▁Bran - を通じて - ようやく - ▁Vietnam - 辩论 - 決まり - 高め - ▁connu - குறி - ▁dachten - 稍 - ▁ingredient - ▁podía - 打破 - ▁Belle - チャレンジ - ▁опера - ▁breadth - ▁kostenlos - 厨房 - 더라 - 玻璃 - ▁Verantwortung - 兽 - ▁cardinal - дох - ▁Newton - わかる - ▁داریم - 邻居 - どのような - 借り - ▁clergyman - ▁tenderly - ▁المت - eeuw - ولد - ▁Jefferson - ▁societies - ▁бөгөөд - ▁ຫັ້ນ - ▁goddess - ▁metrics - лыҡ - ▁Hebrew - ▁pasture - ▁Dé - ▁праца - ▁повтор - 山田 - 雨が降り - 最佳 - ▁pouvait - estructura - ▁paradise - gebung - ▁necesita - ▁renov - 感染対策 - ලි - ẻ - 4% - ▁disturbance - 穆 - 形状 - ండ - étais - るなど - 青春 - 男は - ▁gerecht - 考え方 - ძ - ▁religiöse - 車両 - ▁Bang - ▁maakte - ▁gestern - 意大利 - 侵 - ▁contrived - zellen - ▁sweeping - ታ - 大変な - 人が死亡 - badde - 予定です - 道歉 - ▁kutoka - ▁clasp - annu - と思うんですが - อน - 進化 - Philosoph - ▁seria - zelve - 笑话 - ▁obstinate - 름 - ▁echoed - 支払 - energia - ▁coverage - ▁gag - ▁devote - ▁fuss - ▁Janet - インターネット - コーヒー - ĥ - ▁convenience - うまみ - ▁ŝt - ▁realise - raub - AN - бач - ▁fühlt - 하다 - сни - 北日本 - 態 - 騒 - 貨 - ▁když - ▁satellite - ▁disdain - 演唱的歌 - ▁Insel - ▁attire - ▁jardin - ▁purity - ▁dasselbe - ▁Тэрээр - ▁richtigen - 洲 - ▁hasty - ▁welcomed - ▁manipula - телей - ехать - 这首歌 - ญ - ▁እንደ - 抱怨 - ▁بدون - schreiben - цам - カフェ - 網 - ▁oxygen - కి - جام - information - ▁tiuj - あそこ - ▁upside - ▁Sü - ▁Palmer - ື - ▁Был - ▁solely - ▁retirement - ▁engaging - ▁meilleur - ▁руку - ▁Elsie - 不是一个 - 乐队 - ▁subdued - মো - ▁ئەو - ▁그치 - 手指 - ниц - 必要がある - あなたが - いずれも - 塚 - ▁Verständnis - 불 - ▁everlasting - ▁честно - ▁keer - ▁کړ - に向けた - 给你们 - ▁attachment - ది - ▁Nine - ▁invalid - 毫无 - 成果 - しばらく - ◆ - 燕 - 扮演 - ▁wählen - ▁волн - ▁Regeln - 但我们 - ▁armor - kracht - 演説 - ▁دید - ッキー - ▁cierto - ▁kasuta - ▁Rasse - ▁surpass - ▁Ар - ▁muscle - 仆 - ǧǧ - ▁другое - бав - ▁کمک - خورد - ლა - '95' - 一部の - ▁دنیا - 痛み - 鲍 - 奸 - ▁dictate - ▁tested - ▁jog - 広く - ▁banking - ▁blink - ▁Better - ▁jahre - 小朋友 - ▁nützlich - 床上 - mütig - ▁Ул - ▁tension - ▁jove - を進めて - 五月 - 的态度 - ▁Ward - ▁nhiên - sibil - ▁Tippett - 杂志 - ▁jumping - ▁région - illion - مین - ▁порядк - ▁traveled - ▁appearing - ▁eminent - ▁Folgen - 辱 - particip - 生地 - ▁красн - クル - ▁Они - スカ - certa - 堀 - ெய் - ▁darunter - ▁Kevin - ▁recur - андр - 徐々に - 衆 - 夹 - ▁Sitz - ▁agitated - ショート - kiem - ▁Almost - ální - ▁dreaded - ▁свер - ▁escala - ▁segundo - ▁okwe - бла - ▁realised - fassung - ▁fonction - ▁submission - ▁assess - リオ - 指控 - ▁gradu - 彼女の - 訪れた - koli - 受欢迎 - ▁الأول - ու - 감 - ▁Game - 我非常 - bedien - 随便 - ▁Cour - ▁flowed - したうえで - 多个 - ▁اليوم - ▁khu - ▁frère - ય - かつ - ▁tennis - ▁blijven - abaye - ეს - ▁juice - ▁Lisa - 先頭 - ▁forgetting - ▁nhw - お届け - fydd - uwen - ▁Meilen - 大学生 - 这个地方 - ▁Füße - ▁ситуация - ▁eloquent - ▁видео - сны - ▁trotzdem - 盯着 - ▁mmhm - ▁사람들 - iano - 向前 - 请求 - パパ - бег - ზ - ▁cursed - ▁pathetic - 规划 - ▁Talk - ▁Ausdruck - ▁Kritik - ▁habitat - 整理 - 足球 - পি - ▁wussten - ▁yawe - 인데 - 输入 - を重ね - boek - ▁процент - ▁doute - umunsi - ▁zitten - 明日の - ddin - ▁hayat - ▁Indien - ▁Vergleich - ▁ultra - When - 鬼子 - ▁endured - ▁Geheimnis - ▁خواهد - ▁irgendwo - 旅游 - ▁kalt - 已经被 - ▁Order - 収穫 - undvierzig - ▁medizinische - ▁вульф - ▁suppress - вец - 8% - 嫁给 - ▁personage - ▁невозможно - ึ - ▁fortress - ▁Key - と指摘 - 正面 - ▁attributed - AC - unica - 選んだ - 老鼠 - ▁Kaiser - 判決 - われて - clar - ▁Scar - ▁monstrous - ▁çünkü - ▁интернет - 妥 - kker - ▁Turkey - 火事 - urira - ▁Pul - 的身份 - ▁vacation - ▁imposed - 侵攻 - maktadır - ▁vật - 这段 - ▁tüm - 准备好 - ▁china - oboka - 的消息 - ल् - 紀 - ▁ситуации - 晃 - ▁Stuart - ▁أيضا - ▁Twa - ▁камп - adora - 景色 - īgi - ▁asserted - ▁extinguish - 各种各样的 - ▁proprietor - ▁좋은 - ▁скажите - பின் - ▁respekt - 我们认为 - ▁hinauf - тура - ▁stati - ▁digo - ▁Heath - 上手 - ▁stability - 吻 - ▁другим - ▁Maryland - ▁stamped - 做过 - ▁лиш - ▁evangeli - ▁verließ - ▁அவர்கள் - ▁kadın - ▁oppose - mizi - そうですか - ▁mnie - 快点 - ▁Grün - ▁screamed - ▁önemli - ▁tahu - ▁Wallace - spoonful - ▁bemerkte - ▁durum - ▁associ - きれいに - रो - قطع - 就没 - ヒント - ొ - ▁superstition - 호 - ▁shrub - 暂时 - ▁gewisser - ▁extinct - ▁放一首 - ▁мэдэ - ▁acceptable - ▁wonderfully - ▁يُ - züge - ▁verdi - ▁grid - 打击 - 牧师 - ▁identical - 情报 - ▁verdient - 南方 - 請 - ▁kimwe - 他にも - ▁eleventh - ▁Dora - ▁nicely - 雾 - ▁tốt - ▁حيث - が大きく - ▁Lernen - 形象 - ▁wiederholt - ட்டை - கிறது - ▁Maggie - ▁consumed - ▁sanft - beug - ▁await - ▁içinde - த்தார் - 我们家 - ▁Шу - અ - トラブル - ▁inflation - ▁york - ▁indirect - ▁bietet - ▁máy - ▁depended - ▁Kerl - どうする - sexual - 倾向于 - භ - ▁друж - ▁brillant - のまま - ▁punika - ▁blond - 6% - テスト - ▁nowadays - ▁сначала - リズム - 盲 - ▁horizontal - 停下来 - 的产品 - ▁Online - 不好意思 - фарм - 決めて - ▁Minnesota - ▁faculties - ▁gewonnen - 世紀 - ▁dwarf - ▁Pier - ▁тус - ▁Bella - ▁ăn - ▁chưa - ▁bilo - ▁meisje - ▁Dewi - ического - ▁prosper - 空中 - 女性は - ▁geschaffen - ▁litter - ▁tlie - būt - ▁нічога - ▁kết - ▁پنج - kî - ▁podem - hydr - gespielt - 10% - ▁louder - 小时候 - ▁kilo - ▁trigger - ▁fig - ▁Honor - だと思って - ▁principalment - 談 - ▁realiza - ▁tries - ▁Penny - ないんです - ▁farklı - ▁elimin - 产生了 - ▁tätig - ơi - ▁accusation - ▁pry - gruppe - ▁kuwi - 画像 - 抗议 - ▁افراد - 薪 - ▁parson - prozess - ▁abide - 休み - 在那个 - ▁rive - ▁наконец - ▁giorno - ▁فیلم - ▁সং - 教练 - ▁chambre - işle - ▁automobile - ஷ் - 背中 - ▁plantation - 的母亲 - わずか - ▁wichtigsten - cari - ▁erlaubt - 加拿大 - تحدث - ▁Про - 我叫 - ▁Lager - 这些事情 - ▁siaj - ▁Carter - awen - ▁répond - រ - ▁cambio - ▁strife - ▁consegui - ロック - ▁utterance - 拿到 - 陷 - 下班 - ▁Parker - 縦 - ▁видимо - zukommen - ▁こうした中 - ▁Chap - ▁bastante - ▁eliminate - ▁Compa - серьезн - tsiooni - ▁astonishing - ఎ - ضع - živ - ovitch - 站起来 - تری - 有意义 - 一眼 - ▁вечер - ▁привет - ▁bliss - ▁stepping - ▁raft - ggling - ▁animated - 木さん - 小学生 - نق - einig - န - ▁Christopher - 学到 - ▁tyrant - имся - ▁Circ - third - ▁breach - ▁nostra - ▁ئەم - ▁Heart - 如果说 - wamu - 軒 - ▁grundlegende - ▁прежде - رَ - ▁agricultural - ▁жыв - туры - чныя - やりたい - スタイル - 拆 - 脂肪 - っぽ - 微博 - حدث - ▁weaker - 只会 - 竞选 - ▁Können - drängt - की - ▁geliebt - расп - ▁appealed - 連れて - ▁tercer - 带走 - 提到的 - 직 - 原発 - чив - 坏了 - ▁иногда - ност - ▁ужас - ▁защит - ▁dauert - 在外面 - 全国で - 槽 - ▁ஆனால் - 観光客 - ▁qualified - ▁dismiss - 神奇 - ▁جدید - ▁Edinburgh - ▁internacional - னும் - ▁fwy - ▁lucruri - кажет - リーダー - ▁Theater - смотреть - ▁attempting - ▁затем - һын - ▁stretching - ▁аан - ▁hệ - 偶然 - ▁是啊 - 日本海側 - amategeko - ▁verstehe - Gelach - ▁correctly - ▁занят - ▁whispering - ▁virtuous - ▁ayrı - 強さ - ▁Snow - ▁survived - 朵 - 土砂災害 - ▁별로 - ▁assertion - ▁technische - ▁comercial - 扩大 - ▁erlebt - でもいい - ddie - ▁dump - ▁lemon - ▁Unser - ▁traditionell - 忍受 - できなかった - ▁Blu - 村上 - 一路 - ▁एक - ▁конкрет - ▁resigned - 无论如何 - proc - ähnliche - ▁comfortably - ▁Hof - ▁duration - ▁romp - ственно - プリン - ▁bamu - 親子 - ▁recess - ▁Oscar - 鏡 - ▁находится - 之处 - ▁анти - せん - 犯行 - ▁Hospital - 这将是 - 누 - ▁Maßnahmen - ▁dispatch - lisi - ifuza - ▁physically - ▁حالا - ▁streak - ▁derecho - 这句话 - 両方 - 产业 - 八年 - 付いて - читать - ▁mož - ▁schaute - 楚 - ▁sculpture - 注册 - ▁কৰি - дина - 亿元 - وں - ▁flot - ▁regardless - ▁babe - дова - ▁Atom - 郷 - 窃 - ▁Eleanor - ▁telescope - 叹 - сво - ▁komunumo - களும் - ▁denomina - ێت - ▁nelle - ▁vamos - 物体 - gefangen - 疗 - त्य - ▁집에 - 迁 - φ - krimin - ▁dritten - も含めて - ▁Estados - 做的是 - 方案 - үүлэ - شعر - kintu - 偽 - ▁betrifft - ▁magnitude - 乃 - ▁sequence - ▁العمل - 津波 - تُ - bildung - يَ - ▁wunder - とされる - 这里有 - 是一个非常 - 義務 - ▁gris - ▁нужны - ▁erklärte - ならない - kale - ▁Syn - ▁ambassador - ▁despise - ▁compelling - ▁cooperation - палі - ▁embraced - ▁газар - 找你 - ▁teatr - ្រ - ▁Harrison - koko - つま - ▁труб - koht - 打电话给 - ▁destaca - ോ - ▁refined - 了一会儿 - ▁squire - 你们俩 - lohn - 直前 - ▁lapse - హ - ▁overwhelmed - ▁Album - ▁шестьсот - fashe - bika - ▁hunted - ▁tackle - ▁voraus - ٍ - ూ - ▁outstanding - 发现自己 - 不管是 - ▁thyself - ظهر - пуск - فكر - 長期 - いわれて - ковский - 姨 - ▁collaboration - 思え - 小屋 - 欲しい - ކު - 际 - ประ - お酒 - ▁excite - 札幌 - ▁технологи - ▁tray - ▁weighed - ▁система - ▁awaited - ▁посмотрел - 我们不能 - ▁Director - ヴィ - 従業員 - ▁Dimecres - ▁regelmäßig - ހަ - ▁glacier - 冷凍 - ங்களில் - ▁pleaded - 赚钱 - асць - ▁taun - ▁разные - タイトル - nwen - ▁excursion - ▁scenario - ▁ຫວາ - ▁Sendung - 配置 - 僅か - ▁sealed - ▁propre - ▁grote - ▁врач - islam - ▁Nieder - 羞 - 挨 - 喜马拉雅 - ▁Terri - ウル - рел - yita - 月曜日 - 博物馆 - ▁vấn - abaturage - ▁Osten - ەیە - ▁göz - ▁enthalten - paid - Ɣ - œuvre - 腕を - ▁neuer - ģ - ▁болох - reisen - ▁цар - 沖 - ▁Probably - োৱা - ▁Wachstum - ▁skeleton - ▁Pomp - 消失了 - ▁Consider - ▁Cameron - ▁drunken - දී - ▁podría - eremoni - ▁resentment - ießen - 孕 - ց - ▁Wednesday - 緩 - ▁privacy - ওঁ - ▁harness - ▁இல்லை - 政党 - ▁yep - ハウス - ▁küçük - кле - znam - lux - 出てきて - ▁그걸 - ▁kugeza - 全身 - ▁Europäischen - ▁satisf - witz - 的每一 - ▁subsidi - 種類の - ▁처음 - linda - ガラス - ىسى - ël - ▁کامل - ▁патр - ▁cooked - ativa - trakt - 劫 - ▁geschehen - 袭击 - ▁snatched - ▁série - ▁Universum - бин - 杀人 - ▁знать - ▁exercised - ▁Franz - ▁insane - 没人 - UN - ▁vertraut - ▁bantu - wyl - ▁Lamb - youtube - இன் - ▁Quindi - ▁কোন - ▁Мин - ▁imitation - ▁audio - ▁zyn - щение - ாளர் - 沸 - ▁expertise - もしかしたら - 仗 - 资格 - 短信 - ▁tusschen - 抬 - ▁encouragement - ▁concur - 命名 - 发送 - ▁lucru - プロジェクト - ં - 酬 - avour - ỹ - 莲 - 英尺 - ▁verfolgt - 年生の - わたし - 跟大家 - ▁Entre - わからない - 难过 - フライ - ▁касается - 苍 - ▁Hudson - ▁numa - ▁Gewinn - kanya - ▁obstant - ▁fakt - ෑ - 有助于 - ▁одоо - ште - ▁aircraft - ▁реша - mayı - ático - 栗 - 呵 - ▁убийств - 下がって - ▁войны - 乗せ - 則 - ▁Häuser - ▁இர - ▁Saya - 現金 - ドン - ▁bilden - 締め - という話 - 向他 - ▁Bless - infla - ▁quantities - ▁опыт - ▁мус - ifiziert - ▁Deutsch - もう少し - ▁Deck - 的状态 - 北方 - ナンバー - 陸上 - ▁stesso - ▁warfare - யூ - そうそう - سك - 的速度 - ゼレンスキー大統領 - こだわり - ▁jullie - ▁알아 - ▁wolves - ▁între - ▁boiled - ▁தீ - ▁تكون - 任何东西 - 激动 - லு - 狱 - ▁Apollo - ▁everyday - ▁процентов - になれ - ▁complained - ▁attendance - ▁strengthened - ▁infantry - ▁arriving - 是一件 - ▁bajo - 俱乐部 - ▁мөн - 厅 - の間で - ▁slash - ▁Log - をつか - 争论 - hilfe - ▁эффект - ▁Kreuz - 几次 - ▁Joshua - ▁какую - ▁Sind - ▁Zeitung - :“ - 资产 - 見られる - කු - ▁digging - ▁honesty - ▁Norden - ▁spreche - ▁chwarae - ▁дверь - ▁Horace - ▁никакого - мут - йтесь - ▁ҡара - ▁Certain - শে - ாவது - セント - カ月 - ▁счет - 期限 - ▁sev - 广州 - خنده - ▁пя - ۇن - ▁какого - ▁пару - ▁surgery - ▁Strange - お肉 - ▁بۇ - ицы - mico - ▁подумал - ndung - ▁одну - しまい - 周末 - дө - kibazo - гийг - ▁explaining - 場に - ▁Brust - 媳妇 - 鎌倉 - ұ - ▁Supreme - ▁junior - ▁precisa - алда - Wh - ▁curios - ▁pobl - ▁дене - ▁همان - සු - ▁righteousness - ▁imitate - ▁spray - 这样的人 - ▁Thi - achtet - のようなもの - 看起来像 - මි - 革 - ということなんですね - Risate - 坑 - quisition - ▁Duchess - ▁فرو - ないよ - 你可以看到 - ▁نگاه - 確認された - ▁бя - ▁vary - bär - ▁Media - を行った - 只需要 - ▁writ - きょうも - kuku - あいつ - 你可 - ډ - ヶ - ▁soutenir - ▁effektiv - 我跟你说 - пай - 発売 - ▁corona - чные - ilian - ▁قىل - евка - 貌 - රු - ▁Continu - ▁Zweifel - があるので - ▁resume - tsinda - 他认为 - 刊 - 清晰 - ▁Sultan - âme - eilt - する方針 - ▁quedar - '55' - トイレ - ければ - gène - 鹿児島 - 懒 - ▁නො - ▁única - aĝo - 围绕 - 每周 - ▁moviment - 大胆 - dział - ▁vanish - ▁whereby - もらえる - влек - ▁kız - ▁Studio - スパイ - ▁reap - ▁верн - ▁ভাল - ▁passé - ddling - design - ▁ahí - ▁distrust - 货币 - ▁governed - گل - 超越 - 是对的 - ▁superiority - ▁harmless - lamp - 工作人员 - ▁gewählt - ▁orphan - ▁angenehm - cè - ▁vaguely - 迎え - ▁хотят - 敏感 - арга - ▁فرا - ▁authentic - 娶 - ▁squeeze - ▁plough - 国民の - いくら - 协会 - zusehen - ▁Està - ▁Ethel - ▁کشور - ▁கூற - ロボット - ▁squirrel - ساب - 有问题 - 监管 - 自杀 - ▁Rich - ▁yanjye - 伊藤 - 一些事情 - ▁vardı - 体系 - ▁tudo - ▁oven - ▁Satz - ▁Bedingungen - ▁buchstäblich - ▁izmanto - ▁puerta - 食料 - евой - ▁Keith - ▁Cuba - ▁benefici - ulating - ▁stealing - സ - ドア - wurf - 回报 - 枠 - ▁Verein - بحث - ándose - නය - ▁seemingly - の高い - unterschiedliche - ண்ட் - ▁прекрасно - ところです - ▁outdoor - ▁stride - 新幹線 - ▁mientras - 详细 - ț - ▁ảnh - ▁dresses - ходил - 这时 - 時まで - íme - ▁affecting - クラブ - šā - 儀 - '`' - жым - ார்கள் - 一颗 - 可以通过 - を続ける - ▁stata - ▁واحد - 惨 - 还真 - ▁depressed - ▁সেই - länge - 飛ば - ▁broadcast - ▁kujya - ▁preaching - іла - ▁gezegd - ▁arasında - 结论 - өнгө - ▁conjunt - CD - ▁Alors - 農家 - ▁tradi - 総裁 - 積み - лээ - ▁englische - corro - 颗 - ▁Barry - ▁contempor - ível - ▁repeating - ▁বলে - ▁российской - 前提 - 少しずつ - ▁صدا - ▁voted - ▁سره - ▁путина - ▁untersucht - 有一次 - anomena - ▁providence - schränkt - 不正 - デジタル - 雕 - voja - ▁überrascht - ▁зелен - 駆け - ▁presu - ▁Ку - ▁confidential - 嘘 - ▁Jupiter - ▁dignified - rott - ▁implied - 7% - ▁drifted - ▁обычно - ▁dunkel - ▁crucial - bewegung - leɣ - ▁cruise - 相手に - ▁contrari - 得太 - ▁corte - ▁letzter - vå - ▁umwana - 居然 - 울 - にわたって - 久しぶり - ▁hành - ▁tưởng - ▁کوچک - ณ - ража - 过得 - ▁priorities - 所谓 - ▁işte - க்கை - 考虑到 - 気分 - ▁Lloyd - 登録 - zubringen - ▁ĉefe - ▁recept - ிருக்க - ▁считает - ▁практически - ▁wrinkle - ுகிறார் - 及时 - ▁Fru - ▁Lydia - ▁semble - ▁bann - 祖父 - 女生 - ▁financi - ▁wiped - 犠牲 - ۱ - empi - aquestes - ▁inherent - ▁Bring - ▁pouvez - 最後まで - цяг - 軽く - greif - ▁rumor - ホームページ - അ - كان - ▁görün - ▁sadness - '39' - ▁Faith - усь - を進める - ایت - パワー - ▁colonial - ▁Sister - 访 - 忧 - 孟 - ▁அவள் - ▁Weiter - cipi - ▁silenci - прям - accus - を示す - ▁endeavoured - 노 - 魏 - ▁thường - ▁يجب - ▁полно - пел - あげる - 疲 - 晩 - 須 - ▁обязательно - ▁무슨 - ♫ - 猴 - ▁cotxe - ▁Observ - ▁gorgeous - 外部 - ▁roaring - 阔 - ▁fret - 护理 - 天才 - ▁Nachde - сну - àng - ▁advertise - ▁mournful - ễ - ▁понимаешь - 밥 - ▁Prussia - ▁مشکل - ▁scharf - ▁hopeful - ▁disciplined - 确实是 - 公表 - тали - ரெ - ▁mặt - ▁stooped - 暴露 - basha - 決して - Europe - ▁Там - vira - 表现出 - ▁hause - 三天 - schnitt - ▁foster - ▁plac - 生病 - ▁ownership - ▁woord - ▁Está - '2014' - 認め - дро - ▁potreb - ▁obtaining - ▁ҡал - ▁trình - ▁vegades - 依頼 - を超えて - ▁epidemi - 諦め - 公正 - ▁Angriff - ▁pagan - 応え - できました - ▁eloquence - ▁தனது - 监控 - ▁Spur - thought - ▁спец - итесь - ▁patiently - ▁catastrophe - ▁caracter - က် - 的概念 - ก็ - ▁Venice - 转变 - ▁üzerinde - 溪 - ▁людям - ▁orchard - ▁Mama - ▁زیادی - ▁那我们 - の歴史 - ftig - ▁Kurz - ▁dạ - Risos - ıp - 行う - өт - ▁courteous - indulgence - 但她 - ▁микрорайон - garagaza - ▁بودم - ▁gegenseitig - 了下来 - ▁ingenious - TA - état - 誰が - ▁ikibazo - 消え - 鑑 - େ - ▁москвы - جعل - ▁ожида - 股票 - 标签 - 好事 - ▁przed - 氏が - क्ष - ▁schä - ▁mirth - 救助 - andika - ▁liable - 刃 - ▁liking - madı - ▁historically - ▁presque - ▁представля - ▁Klima - нікаў - ▁executing - 听到了 - ంది - 時代に - Good - 貸 - 苗 - ▁امروز - ▁иә - явлен - 神圣 - ▁amuse - 肚子 - 这真的 - ▁gradual - ▁favourable - джи - スポット - 挡 - ▁considér - ▁нравится - ▁implica - になってくる - ▁creu - ▁făcut - 20% - ▁değiş - ▁Valentin - と呼ばれ - か所 - ▁rubber - ▁கட்ட - อา - 罩 - ▁administrative - жыць - ▁Су - 反而 - ▁truc - 慣れ - ▁waking - 合格 - 引っ張 - ▁постоянно - ▁overflow - 輸入 - 指数 - لەر - ▁Europ - verkehr - kamu - ▁drehte - 新鲜 - ▁Demokratie - eusement - ▁strained - ▁entlang - やってみ - рская - ▁영화 - نظام - ▁tidings - 你可能会 - টো - ▁ребята - ▁unconsciously - ▁والم - ▁goose - ▁serene - ▁extravagant - 问道 - ▁quoted - ▁laboratory - вд - 内閣 - したいと思います - 负担 - ▁камен - ▁lettre - 但他们 - вацца - 阶 - ▁gewöhnlich - 陪伴 - ▁provoke - ენ - 製品 - 组合 - ▁dumm - 老爷 - ამ - 继 - ▁embarrassment - 児童 - ▁expressing - ▁reaches - 撤退 - 夜晚 - 一辆 - ▁crowned - 周年 - 痕 - 拐 - ▁Bäume - ▁Schönheit - ▁thôi - 我认为这是 - ▁новый - 不舒服 - そうした - 価 - ▁exaggerate - ▁다른 - ▁العديد - 这并不 - ▁despised - ▁Milli - плеч - 应对 - ▁vicious - ▁thicket - ливо - 不明 - ▁reminder - ▁Divi - 筆 - gibt - ▁complement - ذر - 小林 - を通して - 町で - щий - ▁vragen - を観測 - 这就是我们 - ▁kontrol - ゥ - հ - スケート - 売り上げ - ▁Verwendung - لِ - ümü - 城镇 - ▁produit - uğu - 合意 - んでしょうね - ケア - ▁wünschte - ▁Exactly - ▁świ - ▁غیر - ▁boca - いろいろな - คน - ▁হয়ে - فَ - するのが - 有一点 - clus - ▁finns - ▁scientist - 疑いが持たれています - ▁Muster - ▁çıkar - слух - ▁ມາ - ▁concentration - プラ - 吹き - ているのが - 帆 - 陶 - ▁kümmern - 荣誉 - ▁праз - ▁Muri - zeigen - 弾道ミサイル - ▁cualquier - ▁Niemand - ▁Stamm - ▁Wirkung - ▁blazing - 拿走 - ▁годы - 焦点 - 我知道你 - ▁quả - ▁oars - 地域で - 選択 - strom - 暗示 - 线索 - ▁ziet - ▁protecting - ▁spit - ització - ▁flashing - பர் - ▁Zusammenhang - ▁Qualität - ▁psycho - ▁prêt - ▁보면 - 复制 - が増えて - ▁Julian - ▁consulted - rinda - 增加了 - 查看 - 织 - 逮捕されました - ক্ত - 戦略 - リアル - 太大 - ▁Roland - ृ - ▁receipt - 篮 - ▁Russell - 绝望 - тоў - полов - ▁нами - ▁жест - ▁pleasantly - ▁வழி - ▁यस - PR - ▁terrorism - 平安 - iyoruz - 引き上げ - ▁Zweiten - ▁overtake - ▁apron - ▁quindi - فور - 狠 - ▁classroom - ▁luncheon - ▁server - ▁иначе - ▁Than - ▁enfin - ▁Line - ▁celebrate - ▁இருந்தது - 还可以 - 同様 - ▁поддержива - 堵 - 体調 - ▁exploration - ▁mã - ▁نحن - ▁Wunder - 天堂 - 大谷選手 - щего - 取消 - 被认为是 - ▁اولین - やっと - 状況です - ランク - 消費 - ▁Literatur - ▁будто - landı - ▁Font - ▁Expert - тыя - ▁decidedly - っていない - ▁feat - 国葬 - เร - ▁Hawaii - ▁Schwierigkeiten - ▁đấy - kiko - ▁instructed - 디 - ▁Letzte - 역 - ▁Dimarts - でしょうね - ▁manual - leib - ائي - 瘦 - ▁contemporary - ▁tõ - ▁passenger - 在哪儿 - キャンプ - borg - ▁personnel - ковская - 尺 - జ - ▁nachzudenken - 証拠 - ▁removal - ▁государства - ▁встреч - ▁Einrichtung - っけ - ▁ocasi - тап - concili - väg - ăng - 殴 - 포 - ▁Ontario - ▁foremost - gegeven - tauchen - されたのは - ▁стоял - 朝から - baum - ▁Absicht - ▁fling - kamera - されてる - بى - ▁discontent - ▁regime - ことになりました - 現れ - ▁Quite - ▁bleef - ▁interruption - ▁кеше - 本周 - ▁heathen - 밖에 - ▁pesca - 危険な - ▁benutzen - ▁arriba - ▁skilful - ▁lingered - 体操 - న్న - ▁zooals - ってほしい - 稀 - ষ্ট - ▁scheen - ▁bibli - عات - шты - ▁praised - ▁açık - писан - ▁Thompson - ▁பயன்படுத்த - 陵 - ▁Baltimore - ▁bölge - ▁জন্য - ▁lavoro - を見ると - ▁malheur - ▁dadi - ඇ - 就开始 - ▁exclusively - ▁financing - ▁spake - нең - ibindi - ▁contradiction - ▁fino - ometer - スイーツ - Aplausos - ▁prosperous - ごとに - ▁utiliza - íamos - ▁печ - ▁Einsatz - ▁Dabei - ▁discount - 商量 - 没有办法 - ▁Bright - ▁mulher - 配合 - schmerz - PA - ▁shifting - ▁ditch - 詞 - ▁cuerpo - كرة - ▁detected - ▁collecting - städt - ▁Given - ▁inherited - マリ - ▁Wang - ▁peter - œ - ટ - ▁Wahrscheinlich - ▁gezien - 祝福 - ждения - larının - 一起去 - ▁versuche - italien - ▁Peggy - géni - あした - 话说 - 此时 - ლ - ▁Streit - ▁preached - 勧 - ▁hakkında - ▁موقع - ▁shrugged - ovna - сцю - ▁அதன் - ▁Manchester - を受ける - ▁ужо - 你喜欢 - ▁distressed - sehn - ▁meteor - 感染者数 - ▁Arizona - ▁そんな中 - 구나 - ▁подожд - 心灵 - 靠近 - ▁Garten - いいのか - ▁shipping - ▁desenvolup - ▁verbreitet - ▁frantic - ▁erfüllt - Institut - ▁plaça - национальн - भा - ▁svět - 它将 - nood - ліз - ▁cultiv - meid - 路线 - ▁piled - 同一个 - 舎 - 趟 - ▁gehabt - ▁называется - 張り - だと思う - ▁portal - ▁müsste - вок - ▁paragraph - ĵ - ▁Toronto - 見てみますと - ▁borrowed - වැ - ▁некоторые - ▁Cardinal - lê - ▁visibility - 尸体 - 还得 - ▁Schatten - К - 機械 - ▁نفر - ▁temporal - ▁stammt - ื - 看过 - 悟 - ▁asylum - 大好きな - すぎて - に近い - ▁iawn - ىلى - 洗濯 - 珍しい - 审查 - ▁hyper - ▁Krise - 友人 - apport - ▁tanta - ▁umbrella - 手紙 - ▁copies - 癌 - runner - 正义 - そうと - somero - ▁stärker - affi - やろう - 更容易 - ▁buli - ▁للم - пресс - ▁Children - vreemd - を紹介 - avons - bogen - ベー - ▁matches - ▁nennt - ▁توانم - 也能 - 肯定会 - ▁applause - 肠 - ނު - ▁Stewart - したところ - 究竟 - ▁Nina - ▁треть - ▁substitu - 任何事情 - escent - ▁üblich - ▁cement - コーチ - märk - ▁공부 - ▁seventeenth - ছো - اشت - ▁UM - ydı - 完美的 - пак - bido - ▁Grab - ▁pirm - ifies - 希腊 - ▁generosity - 部署 - '120' - ▁tipus - 山本 - 基督 - を抱え - 邓 - గా - শ্ - ▁carta - ▁цаг - ▁Tel - ▁الوقت - 冬天 - ছেন - を守る - とかそういう - ▁самого - очку - ▁Sanders - 现在已经 - häuser - 整備 - ▁красив - ▁பண்ண - 打つ - ▁состав - 我只 - ี่ - するのか - څ - 悦 - ▁внутренн - 予測 - いいね - バイク - әҙ - ▁Fleisch - わよ - ▁أَ - 最后一个 - 特定的 - 裸 - 辣 - ▁produziert - 粉丝 - イギリスの - võt - 寿 - 出门 - ▁любой - бот - の方は - еньки - 9% - ▁کنی - ▁Trent - 营销 - ▁klug - ▁necessari - 以便 - entreprise - ▁grub - 有钱 - алды - ひとつ - бә - ▁Mason - world - ▁सम - மோ - ຫຼ - ▁baggage - ▁orchestra - ▁bafite - ▁mooi - 这可能 - ▁diminu - ▁جنگ - 铁路 - jev - せっかく - 涙 - 吞 - 官方 - ▁fortunately - ▁доста - ▁abrupt - печат - ▁macro - 下り - gull - ▁jugador - 诸 - ▁Might - ▁konkur - ▁migration - 巨人 - ▁surf - traction - waż - かしら - ▁Betrieb - строй - を起こし - ично - 銭 - ခ - ▁beteiligt - ▁congratulate - ▁provincial - ▁aktuelle - ▁Signor - قۇ - ▁frisch - ▁schuld - wezi - ▁yapıyor - ▁joven - ▁여기 - ▁dorm - エース - ኝ - verfahren - ▁новая - 提出了 - ぶつ - ▁behaved - ▁Owen - 分裂 - 测量 - 我想知道 - 等我 - 漠 - 煤 - ▁fühle - ▁Turkish - フィー - 会发生什么 - 었는데 - மீ - ▁depois - ▁cél - ▁zegt - ▁Militär - ▁judicial - ▁començar - ▁Hinsicht - って言った - 另一种 - ▁Terry - umugabo - ▁petty - düğü - ▁consultation - ▁всей - льская - 放松 - ಡ - ಹ - ▁getötet - ▁gọi - ▁jurisdiction - ▁vicinity - ▁шоссе - ▁muerte - ▁Thy - グルメ - یەک - 練 - ▁zentral - 岐 - 峡 - ø - 격 - 不怕 - いきなり - havi - 布鲁 - ▁chama - ▁yavuze - 一杯 - 公子 - 福島 - 総合 - 長崎 - cuff - 縫 - 坛 - 笨 - ▁Quart - プログラム - ▁Flugzeug - 恐怕 - ▁Phase - につながる - ▁brandy - 灾难 - ▁treating - pflicht - うれし - 黒い - ▁Как - ▁corri - 하니까 - どこまで - крыл - ▁Yankee - 弄清楚 - ▁clinic - 業務 - ▁Full - ▁steckt - ▁ernsthaft - 選手も - 自動 - ▁rascal - парат - ucci - 録 - ▁disadvantage - 질 - みそ - ▁Stil - сця - ড়ি - ▁Verfahren - 你说的 - ια - ▁savu - ▁шүү - 崇 - ▁어떤 - 見た目 - ▁Schrift - あれは - 可能性があります - 克拉 - iddwa - 時間帯 - 姐妹 - 柯 - セー - ▁basketball - と思うので - بری - сцен - ▁heftig - quant - ▁fringe - ▁strategi - ▁gewissen - ▁handling - 争い - 環 - ▁Đây - ▁flicker - ▁rainbow - ▁intrigue - ▁awareness - ▁mostrar - ▁iom - 到时候 - ▁Crown - ▁Cab - 醒来 - ▁решения - ckling - bikora - قدر - 顺利 - 机场 - gruppen - 慈 - 坎 - ▁господин - ▁cylinder - ▁clinging - работал - ▁simultaneously - sohn - ▁Wright - 'ON' - ▁khá - ▁ascended - ▁advi - 周围的 - ▁proclaimed - ぼく - ▁böyle - lotte - ▁prezent - ▁bezahlt - 봤어 - 万一 - читал - 一个新的 - க்கிற - 那时候 - ▁buzz - ▁accuse - ▁легко - ▁Box - ▁peine - 彭 - ▁rezult - dessus - したのが - ▁Federal - lender - ことになります - ▁strait - ▁initially - свет - очки - 偿 - ▁embarrassed - ▁evaluate - ▁인제 - 成年人 - 这个节目 - ▁або - ▁bijna - ▁Ful - 明治 - ブラジル - 妨 - ▁Konflikt - ▁pyramid - ▁چطور - uɣal - ▁поднял - 无数 - ání - பட்ட - ビジネス - ▁صحبت - ▁njegov - 案例 - ▁visage - 等到 - ▁než - ¡ - ▁Bewusstsein - ▁Kommunikation - ▁друзья - ▁sworn - ▁acceptance - ▁mehreren - ▁我想听 - 审判 - ▁discoveries - 込 - ▁housekeeper - ▁Persi - घ - ▁indispensable - ▁submarine - かき - druž - attend - 基础设施 - ворот - ▁محل - ▁перш - 还记得 - ▁slate - ▁pubblic - ▁elkaar - ことば - 的任务 - ▁complexity - ▁appreciated - üßt - 妃 - ▁société - 宣传 - しているんです - 配信 - ального - ▁rejoiced - ▁закончи - kampf - ▁evolved - 切った - 绳 - zentr - 下がり - ীয় - もち - ▁Step - ▁bland - قف - 批准 - ▁Gerard - كتب - مام - 的主题 - ▁pedra - ▁trá - 会話 - seorang - 坚定 - ▁Swan - バレ - سلام - ▁oyun - ▁Neuro - 洪 - ▁continual - カナダ - ▁tiếp - ப்படுத்த - ▁کتاب - 天皇 - ▁Project - 幻想 - ォ - ់ - ▁Zealand - ▁vergangen - ▁Lauf - を見た - ▁طريق - ஞ - ▁historische - ▁До - kijk - ▁Came - පු - ▁москве - ވެ - ▁Caroline - цией - 听说过 - ▁cœur - ▁الكثير - ▁renown - ҡан - 车站 - ība - お伝えしました - ▁bezahlen - ▁frail - 两者 - ▁ясно - ્ય - 泄 - ޅ - ▁sozusagen - ▁través - ▁elevation - ▁patro - читыва - わけですよね - ▁consistently - ぇ - と述べ - ▁accumulate - ▁describing - ▁beberapa - 农村 - ▁estimated - ゴン - għ - īga - が起きた - 你如何 - ▁Quand - ▁belangrijk - ▁hợp - ▁neighbouring - விய - ▁laquelle - ště - ▁часов - ▁Kara - طلب - 江戸 - рик - ▁accuracy - ▁physics - 构成 - 导演 - сер - ▁самых - گەن - ▁gleaming - funktion - ▁zoals - 普通に - ელ - BM - 纽 - レシピ - ώ - 破壊 - Bahn - ▁owl - 浪漫 - ▁severity - 赔 - ▁disregard - sammlung - 発電 - 图片 - みたいな感じ - নী - ▁regal - mbye - ▁вариант - ▁обнаруж - ▁сожалению - ▁Bạn - ▁saloon - ▁revolu - 三千 - håll - ▁ради - േ - ▁Spencer - fungu - ▁practised - ▁Million - ▁разве - ▁efficiencies - 尿 - 锐 - 広げ - ▁Dž - ▁друга - ▁Sky - حصل - ▁خاص - ▁burada - ▁envoy - ▁referring - ▁prolonged - ▁Bueno - を巡る - ▁monet - 優しい - ▁потеря - ▁Tek - ivamente - сор - ▁Ecke - ▁замечательн - 皆様 - の様子を - ▁savoir - 公里 - ▁අප - 欧米 - ▁customary - ▁melhor - guye - 深く - 把我们 - ુ - ▁hurricane - ▁persecution - ▁subscription - ブランド - ▁начала - ▁одном - 所能 - દ - ▁hauptsächlich - ▁Ghost - ▁Squire - ▁мистер - はこちらです - ▁terminal - شاهد - dodd - 雨雲 - ▁blev - fluss - 絞 - 受害者 - 尴尬 - ▁Wunsch - ▁allerdings - 笔记 - ▁Horn - ància - 明星 - 校园 - 凍 - ▁Prinzip - ▁힘들 - ▁كو - uḍ - Will - ▁будуць - озер - ▁expose - ペア - miseks - ▁суу - ▁melodi - 飛行 - ▁sasa - ஊ - ▁apparatus - ▁아직 - 事务 - ▁Chile - 半島 - ▁너가 - CM - কো - pital - 棚 - ▁eyebrows - 仕掛け - 仪式 - ▁garment - ▁kvin - เส - 理性 - ▁navig - って思って - 県内 - 这一次 - ▁Geschlecht - 崩溃 - 목 - kuti - ▁Small - 午前中 - ▁lumber - ів - 所以我认为 - 严肃 - 过度 - ▁그건 - ▁Мне - архи - ▁forgiveness - 相同 - 长得 - blé - зур - ▁olw - ▁ankoraŭ - ▁Ankaŭ - ▁மிகவும் - 製造 - ▁rouge - 容器 - ந்தார் - ▁damned - ▁lurk - 市内の - 머 - 欣 - ▁Detroit - ▁ridicule - ▁случилось - 偶尔 - 加州 - ▁creeping - ロシア側 - ▁wohin - ▁occhi - 勘 - 遮 - 伐 - 摧毁 - ▁Pflanzen - ▁commodity - ▁conspiracy - 遗憾 - operative - ▁schimb - 暑い - ▁hynny - こんな感じ - 教えてくれ - ▁Tatsächlich - ▁président - ▁nghe - ▁Тэд - 激励 - ▁inheritance - ▁interference - sabye - 验 - angwa - ブルー - ▁podia - ▁заявлени - だったので - アーティスト - 震度 - 出会い - ้น - ▁lavora - していること - larına - ▁ег - 良く - ▁horseback - ▁самым - ▁Atlanta - 怎么做 - ▁Kha - век - ▁undertaken - ▁moist - 二十四 - 经典 - жин - ▁государство - 安倍 - 合适的 - ▁enorm - 男の子 - ▁emphasize - 圣诞 - espècie - ▁inspector - ▁colli - 六月 - ượ - 不是吗 - ▁Statu - 仍然是 - ешься - ▁proudly - ようになりました - 旬 - 榜 - ▁пункт - 準決勝 - ▁Brooklyn - mıştır - овича - 师父 - 皆さんも - RE - ▁Trust - 代わりに - 检测 - ▁optimize - 这可能是 - と見られる - 一根 - ▁keel - ībā - blad - 重症化 - vamo - صنع - を目指して - ยา - ベン - ▁allusion - پە - お手 - ▁senator - 只是一个 - مثل - ▁Fortunately - 瞒 - ▁defiance - ▁metod - ▁maggior - 買い物 - わかった - ▁väl - すり - ▁hereafter - ▁пакуль - ▁avem - してくれた - leitung - ħ - 있 - 保障 - 郁 - 后悔 - Episode - ▁semana - ▁hostess - ▁propag - 発信 - prinz - ىپ - ▁Geoffrey - ▁perquè - ▁vulnerable - ▁Umstände - ▁đồng - ▁krist - 魔法 - 有能力 - 卧 - を防ぐ - ▁دختر - ▁armour - ▁eĉ - はもちろん - ▁psychological - 名为 - เด - ビア - 精力 - ▁вперед - 呐 - 頑張り - ▁hübsch - だけではなく - ▁desirous - kibi - ▁erkannte - درا - ▁retten - mızı - ▁babi - реп - コロナ禍 - 匠 - ▁konzentriert - ▁verursacht - ▁george - leştir - バター - ▁произошло - ▁Krankenhaus - 收拾 - ীর - 業界 - すき - ▁conoce - pür - cible - ивают - 衛 - スケ - ▁четвёртая - ▁솔직히 - 瞧 - ▁Würde - нув - hagi - ▁patriot - を訴え - эргэ - ▁среди - 된 - ▁Veränderungen - 짜 - ▁Manhattan - 추 - ગ - 拯救 - speicher - 我们希望 - zünd - 友好 - ▁wesentlich - ▁systematic - ▁инде - 足以 - 얘 - ▁suppressed - ぱ - ▁manhood - ▁Edgar - ▁herausfinden - ▁Tul - スリー - ▁spoiled - ▁solved - ▁sixteenth - 孫 - ▁simplement - ▁تک - ラム - ▁fireplace - 腐 - ွ - ▁einzigartig - ▁matrimoni - ▁majoria - ▁Often - 就任 - ▁Global - 退休 - 発展 - ▁смотри - ▁llegar - しません - ▁concession - ▁dimension - ▁Pause - 罗斯 - 던데 - しないと - まれて - 崖 - ▁Цяпер - ▁области - 鹰 - ▁unglücklich - ▁трудно - த்தா - ▁remarkably - 提示 - ▁shareholder - pugn - 植え - ע - ▁Oberfläche - ▁Daily - ▁beobachten - ▁chaos - 募 - 美好的 - ▁ghastly - ▁karakter - أصبح - 伙计 - 运营 - 附近的 - 無事 - ▁nchi - 大规模 - われわれ - ▁willkommen - ▁pregunt - 我有一个 - ビデオ - ▁اتفاق - ▁hết - ▁bicycle - ▁Abschluss - ▁deployment - ganya - ▁angefangen - ▁darted - 応 - 素材 - ▁пасля - ▁прибыть - ▁bảo - ▁Signal - ▁publisher - võ - ▁Poland - ει - kunga - ▁blut - ländische - 火山 - เก - 勢力 - raad - 七年 - ▁dusty - 費用 - geladen - ▁выборы - fluent - schneiden - 초 - ▁intimacy - 捷 - 在我看来 - たいと思います - بند - ▁아니면 - lå - ▁Interessen - dimensional - 遂 - ฟ - ▁ведаю - ▁priorit - 約束 - ული - валася - ▁виду - DC - ▁faisait - 描いた - genomen - plau - ▁Watch - アナウンサー - ▁erscheinen - fähigkeit - ▁två - ▁Brasil - まとめて - 这是一个非常 - ṭṭ - einheit - わかって - soever - 토 - ▁гораздо - 耍 - ▁برنامه - خوان - вших - crimi - を含め - ▁Nahrung - ▁Fünf - ▁Dublin - روس - ▁Seattle - ▁cricket - govor - ▁flavor - 一応 - 我不认为 - 预算 - скія - ▁Atem - 改造 - 富有 - ▁удиви - 注文 - კა - ▁nursing - 剣 - باب - ▁максим - ▁зүйл - ▁Quebec - ▁advertisement - ▁Tief - 組み合わせ - どうなって - ▁Ladies - ▁plunge - ▁Après - ▁admiring - пач - ▁Soldaten - ▁Early - ▁ساعت - 取る - 他是一个 - ▁거기서 - ▁Gospel - ▁implemented - ▁dragging - ▁roam - ▁amateur - 野生 - யு - 央 - ▁handled - 黎 - ▁Craig - Ḥ - ▁ເດ - ▁Frankreich - 分布 - ▁diameter - ピーク - ▁integrity - ▁რომ - geordnet - 代价 - ▁kontraŭ - 锻炼 - ▁Obviously - ▁buffalo - ▁carpenter - 所有人都 - ங்களுக்கு - ▁একটি - ந்தது - ▁توانید - ▁scheinen - ējā - 歩く - 无关 - альным - ▁pilgrim - ▁Virgin - ▁chiama - ぁ - リップ - ▁وهو - 劣 - 惹 - 豆腐 - 誕生日 - ▁Police - unterricht - оваться - ▁installed - objekt - イラ - 七月 - ▁shun - ▁sorgen - 민 - อย่าง - 盒子 - ▁адно - मि - ▁tutte - ▁civilized - école - 岩石 - 排除 - Râsete - ▁öffnete - 失礼 - ▁спорт - 派遣 - كَ - تعلم - ▁declara - わら - ▁pitiful - ▁شدن - kārt - 所以这是 - 氧 - ▁Sweden - моў - komeje - ▁Merri - したのです - жә - كۈ - が登場 - 时光 - 教学 - ▁twig - ▁Research - ▁oppressed - 介護 - ▁kugirango - йшоў - ▁slack - ▁Commander - ▁noisy - டெ - 舒服 - ▁quién - ▁собственн - をはじめ - 合理的 - 晨 - ▁Ukraine - 겨 - 祸 - ▁striving - があれば - ▁எனக்கு - ▁речь - يست - 所以我想 - ▁Watson - 那一刻 - ▁Block - держать - က - ▁rejoicing - ▁verheiratet - 分かんない - 书中 - ▁carelessly - ▁Scottish - ▁castell - ▁spoon - かせて - ▁neighboring - ▁встреча - 腺 - ▁обсужда - 饮食 - ▁гэсэн - ంద - ▁roared - 最初は - ▁материал - 亲密 - お昼 - 整天 - ▁cynnwys - ▁существует - 并非 - ▁atrodas - 生长 - 怖 - 数百 - ৰে - ▁تف - 议会 - писать - ৌ - geblieben - 饱 - 레 - 栄養 - ▁perished - ▁Terror - 我当时 - goro - 无聊 - おうち - ▁Fast - 锦 - ▁liệu - linna - 单独 - 려고 - 让人们 - ▁أيضًا - 贺 - 掃除 - ▁சேர் - ового - பிடி - ▁ອື - ▁chị - 第二次 - ходзіць - ẹ - ▁gael - 繰り返し - ▁бала - ▁Saul - Plat - 祈 - 新規感染者 - 担忧 - 分ほど - 過去最多 - ساعد - gaciro - ▁supo - गा - ▁glimmer - 但是如果 - 宜 - 遣 - ባ - ▁얼마 - MA - ▁அவன் - ▁assessment - నే - ▁갔다 - 在网上 - 暮らす - ▁Investor - ▁баб - ზე - ▁сябе - ணை - 等于 - ▁malice - wyth - ▁unusually - 大切に - 지만 - ぽ - 己 - اقتصاد - ▁twelfth - 丧 - 五千 - ▁Interview - gerufen - 인가 - ▁черн - ▁enthält - brä - 減ら - 折り - ▁принял - ificació - اجتماع - ▁adjacent - ▁irresistible - 踢 - 継続 - 咱俩 - ▁бич - ▁futur - ▁Hannah - 发言 - adolesc - ▁Austin - ▁voet - findung - ▁başladı - haye - ▁Vertrag - kuzi - ▁conductor - ▁Lieblings - ▁nobility - 单词 - ないこと - طرف - writer - 松本 - ▁பெண் - 任命 - ▁repress - よろしく - ▁brightness - ▁negativ - опо - ▁irgend - '......' - ▁implore - ▁commonplace - ▁aldı - ▁Meter - ▁ligne - ▁batteries - 援助 - 抗議 - 早餐 - 心态 - ▁wünschen - メール - нена - 冷たい - 切る - 逸 - ▁pequeño - ▁structural - ▁refusal - 乖 - の中でも - 维持 - ▁missionary - ıdır - ▁Alp - キック - に乗り - 所以如果你 - ▁qualche - ologic - ▁Ereignisse - ▁알바 - 岸田総理大臣 - ▁алло - ▁kinh - ▁europäischen - これからも - 挙 - 掲げ - ਲ - ▁맨날 - ▁unfair - 农业 - ▁phụ - 跟我们 - ▁чём - ▁сним - 提交 - identifi - ▁posible - 道具 - ▁tariff - 린 - ▁declaring - ▁augenblick - ▁calor - 話し合 - әү - ырға - ▁krijgen - 躺 - ▁exalted - ▁crise - grafi - етесь - 疫苗 - ▁секрет - ▁wußte - ▁electrical - ▁المس - ۇر - ▁grunt - ▁vapor - 地址 - ▁packet - ▁ўсе - こんにちは - ▁가고 - ▁Бер - ത്ത - ▁costat - ▁beschreiben - RO - ▁разных - ▁Kitty - ▁automatic - 飛行機 - ▁exceeding - half - 涌 - ▁anticipation - 额外的 - こともある - funk - intensive - が見えて - スペース - loqu - 動いて - 论文 - ̀ - お茶 - ▁என்பது - ▁supplier - 怀孕 - 報じ - மர - ▁приходит - 你可以在 - ▁우리가 - Ž - ▁mathematics - ▁preliminary - ▁mindestens - ▁오늘 - بەر - ინ - ▁Тут - 専用 - 鉴 - 색 - ▁Original - ▁fröhlich - ▁электро - ▁режим - истов - 哪怕 - 模糊 - と思うんです - ▁sparkle - ▁jî - исто - ▁sparkling - ▁Mate - ▁mbili - คร - ▁grandeur - 试验 - 专注于 - ▁verantwortlich - 何でも - ▁wambaye - ▁diplomat - ▁президента - bbling - の中には - ▁charter - ▁jünger - 英語 - جلس - 运动员 - ▁piety - ちょっと待って - tatud - 营养 - ▁Prop - ліва - ▁sorrowful - 死去 - 香りが - 马上就 - 衆議院 - ▁stumbled - ▁choosing - ▁blunder - 信頼 - நிற - ▁Homer - 先制 - 仕上げ - ▁entrepreneur - ▁sechzig - ▁utility - альные - ▁Đó - 朋友圈 - ▁трав - レストラン - Á - ▁eindeutig - ▁loneliness - ▁shattered - 宴 - 淘 - ▁Nehmen - ▁Oregon - '".""' - ▁devient - 维尔 - netz - ▁хамгийн - 루 - 安装 - ▁вижу - ▁geschah - ▁Student - グリーン - ▁بأ - ▁precedent - ▁wreath - ▁싶은 - ▁مرا - ▁fué - demokrat - நா - hér - 常常 - ▁yacht - behörde - ▁fami - と比べて - 炊 - 狩 - ▁Connecticut - ▁Evrop - ▁Judith - ছিলেন - વા - 貴重な - ▁Always - maßen - ▁средств - ▁westlich - gratul - ムー - ▁Kö - ▁Zentrum - ▁socialist - ரும் - ▁Modern - ▁влия - 遇见 - 한데 - ▁Pedro - සා - працоў - ແລ້ວ - ▁tecnologia - 來 - ▁праблем - ▁Vogel - ▁Lauren - ▁berühmte - ▁Frenchman - analyse - 号码 - ▁Justin - 机制 - 声称 - foli - 対決 - に入れて - வர்கள் - চে - ▁delicacy - 侯 - ▁Nigeria - නවා - ▁banyak - 舒适 - ▁nyingi - ウム - ▁stimulate - ddau - 扩 - крыты - 几乎没有 - 躍 - ▁imprisonment - スペシャル - ौ - ▁Verstand - ▁shawl - ▁proteg - ▁Zusammenarbeit - 擅长 - лэл - ▁continuar - ▁однако - generation - けがを - wahl - ല - 拝 - 脆弱 - ▁девочк - ▁població - ▁wirtschaftlich - 有名な - ▁жар - ▁мама - 舒 - 挖 - கொள்ள - ваюць - チュ - ▁frente - ▁przez - えば - ▁translate - ▁tylko - ▁sincerity - 可爱 - ▁Percy - ▁Kiel - ▁vorsichtig - ▁verbringen - ▁convertir - assemble - hinda - альной - 倡 - بِ - ▁смысл - 燃え - ҟ - garuka - рэн - ▁już - 棍 - ▁affliction - ▁stubborn - 후 - ▁Junior - ▁ooit - ▁suivant - たりとか - ▁মই - ▁keenly - ▁хэрэг - 懂得 - 屏 - 在某种程度上 - ▁aussieht - ▁possibili - ▁Vorschlag - 少なく - 创造了 - äck - 远离 - こんなこと - 争议 - だからこそ - ▁Kwa - gerät - ▁Drogen - вялік - ▁ministre - 年纪 - ▁Centre - を見る - 熬 - 推出 - ల్ - 見てください - modoka - 汽 - čne - ▁avut - steck - ▁Ryan - ▁Otto - ▁Aussage - 另 - ▁Farmer - রের - 扑 - ▁automatically - 島さん - 初め - 把这些 - ある程度 - strich - ▁принципе - きれいな - 十字 - ▁Alaska - 请问 - 聚会 - ▁ہے - ▁встрети - সব - 言わ - ▁inquiring - ▁outfit - avlja - ▁flog - ▁bildet - ▁consisting - 层面 - ▁Morton - 脇 - わかりました - ▁فإن - ▁نزدیک - ▁специальн - ▁şimdi - 有许多 - درس - ▁Baker - ▁intensely - ▁rejoin - rypt - ▁cama - ▁institute - ▁screaming - サポート - あらゆる - ▁Between - ▁Management - ▁trọng - ▁Geburt - ▁Heimat - ▁countess - 这么多年 - ▁verdict - ائية - そうなんですね - ▁چهار - 听见 - ۇق - 実はこの - おすすめ - ▁Ahora - ▁hiểu - ▁Allerdings - ▁Umugabo - ▁waarin - ▁sniff - ▁gravel - ▁vegetable - ▁dirige - ▁үйл - 美好 - لىرى - 撕 - 趋势 - ▁않아 - звы - wünscht - ▁pathway - ▁voting - zetten - ▁trả - トレーニング - 티 - ▁removing - ▁ritual - ▁Szene - ▁forsake - 二十五 - ▁хамт - ▁коль - 所以它 - ▁Verlust - ▁spectrum - ▁sandwich - ▁healing - ▁вспомни - добав - ▁largo - 名单 - 只是为了 - なければいけない - ည - ▁imprisoned - ▁انتخاب - 宿泊 - 要素 - ▁заяв - ▁سے - ▁fehlt - ▁Antrag - 认识到 - ▁Kleidung - 否认 - 殖 - 伊斯兰 - 碑 - ▁verfügbar - ▁Option - ▁sneak - ના - ▁falsehood - 神経 - غرب - 設定 - ▁frock - 估 - ▁lawful - 列車 - 谎 - 挟 - ై - ಬ - ▁گرفته - ▁Praxis - ▁aṭas - ほかにも - アラ - 之类的 - 感動 - 决心 - ▁reliable - 鼻子 - ҙар - ▁глаз - ތަ - 燃烧 - ▁Arabia - говарива - ▁Direct - ▁regió - ▁natura - パフォーマンス - 厄 - 向かい - 公寓 - ▁працу - ▁psychology - ▁Division - ▁wandte - 生物学 - ▁Agent - 昨年 - ▁Erwachsene - 預 - 玲 - ▁stripped - ▁donkey - уваж - jährigen - าร - ▁حيا - ▁avea - ▁mechanic - ▁surge - 寒気 - ▁History - ▁natuurlijk - ▁restructuring - ▁dainty - ▁probablement - 这些问题 - 币 - قَ - 涼 - ▁Simpson - ▁vẫn - 抑郁 - ▁viņu - ▁Oberst - ▁trench - ▁подход - kontroll - ▁unexpectedly - ▁وقد - ▁ieder - ▁Shaw - 舅 - ▁maior - ▁emphatic - ▁gratify - ▁contemplation - ურ - ▁adviser - ▁blik - shima - 颜 - 驾 - 糟 - 罢 - ▁твое - ビール - ҟа - ▁американск - ▁conosc - ▁geldi - ▁prediction - 嶋 - 巣 - ▁Kapitän - そろそろ - 疏 - 恭 - 感受到 - 国内で - ホント - 技巧 - 部隊 - 综合 - ▁felicit - कु - jord - ▁Walker - bā - ▁없는 - 潘 - яўляецца - ▁картин - 拠点 - ▁запис - ▁الذين - 火星 - ▁dikontrak - ▁inhabited - 蚊 - 锋 - ▁никого - 利润 - 血管 - île - この時期 - ▁پسر - 限り - 静岡県 - 以色列 - ▁mathematical - 的家人 - ▁Ärzte - ▁Marshall - 言ってる - 感动 - 转移 - きましたね - ▁shark - 終わって - ▁última - 幹部 - 目撃 - 黙 - ▁hinzufügen - まいります - ▁Campbell - ▁transit - 난 - culpa - ▁veranda - سوف - 共通 - ▁Sicher - يلة - 進み - 乾燥 - 味わい - が出ています - ▁coil - ▁wickedness - ▁fremde - ▁wistful - ▁домой - ▁شدند - に当たる - ałem - ▁Ansatz - 傾 - 子育て - ▁staatliche - 候选人 - சூ - пет - 困難 - ソー - ▁новые - 少爷 - ▁hiç - ▁خب - 校长 - ▁conseil - 奇妙 - થ - ▁Liverpool - ▁холод - ▁indicator - 真理 - ▁ricevis - ▁deutschen - 申し上げ - 受到了 - енько - ▁rocket - 疑問 - ▁racing - 基督教 - ▁Disney - αν - овского - ▁있지 - ▁prick - 挺好的 - ธ - 初戦 - ▁Wherefore - ▁città - ▁рабіць - église - が発表され - 中村 - ▁rector - ங் - ▁cuanto - ▁pēc - 陌生人 - ▁Aufnahme - گذاری - PC - ▁дежур - 胶 - пэўн - 框架 - ▁министр - 传递 - ▁людзі - ▁loaf - рийн - නම් - klima - ▁lần - 分ごろ - したあと - ▁каш - بيع - ალ - 浏览 - ▁Если - 我很高兴 - губ - ▁мая - 引发 - ▁sigui - ▁Hundred - ▁بها - ▁ändert - かなと思って - '900' - ▁mochte - нис - 标题 - 知識 - 下次 - ▁fliegen - 宮崎 - バンド - ▁opportun - guna - ▁dispara - ているということです - ▁ignored - ▁combina - ▁fifteenth - 尽快 - ▁dialect - ▁maximum - تۇر - gång - хир - ▁роман - ▁renewal - ▁sangat - 当たり前 - 同性恋 - ▁அல்லது - 闺 - ▁Englisch - ▁Bildschirm - 庆祝 - சிய - ▁около - 寒い - ▁bên - ▁lordship - RS - ▁언니 - ▁звонил - 美味 - ▁papel - ▁умер - ▁autour - ▁hurled - ▁Kath - වල - ▁mouvement - ▁Danach - いかに - ▁ewig - ▁westward - ▁Philosophie - ते - AM - にもかかわらず - ▁countless - ▁публи - ▁Send - 我们发现 - ▁отец - ▁последние - 抹 - ▁overthrow - 生きる - 加盟 - 放心吧 - ▁salud - structure - ▁tutor - 扯 - ▁انسان - ▁realization - ▁squadron - 呼んで - 键 - ▁apprehend - рыш - ▁диа - ブロック - ▁fertile - écria - ▁приехал - ▁handeln - ▁choked - прыг - 減って - ▁Вось - 扩展 - このうち - 启动 - ▁soothe - fellow - ỡ - 끼 - 扶 - ▁ilgili - ▁brigade - ▁latitude - お子さん - ▁کور - 进展 - 脑子 - 紅 - æ - ▁Wisconsin - ▁intolerable - ▁Hügel - この先 - ▁mantle - тельный - ▁legislature - ▁거는 - ▁싶어 - ámos - TE - ▁snatch - 久保 - 贈 - 诉讼 - держал - ▁kep - ▁заметил - وست - ▁займа - গু - вшие - ▁militär - ▁pasado - ▁Orange - ▁compliance - ▁Ozean - 紧急 - ▁Handlung - 言いました - 运作 - なんですけども - ▁хор - ▁scatter - spezifisch - ▁Gedicht - に向け - ▁conduc - вернуть - ▁schickte - ▁Hinweis - 償 - ▁столько - ▁фотограф - 游泳 - 守備 - ▁Countess - หา - ▁تأ - ▁видим - ▁kitten - ސް - 极端 - ▁superb - ▁пяти - சோ - 知ってる - ▁Education - ▁kompliziert - ▁thieves - 姜 - ▁bucks - 我们俩 - てくれました - ▁burial - 一体何 - ложить - 格雷 - бежал - मु - にならない - はありませんでした - ▁homeward - ▁probe - ▁hände - ▁першы - 有更多的 - ▁düş - ▁Help - ▁chế - ▁longue - ▁conqueror - ▁бүх - 活発 - ▁kehrte - ▁caravan - ▁sternly - 我以为 - ▁adjoining - ▁töten - ▁ນະ - ▁behauptet - ▁бывает - 采用 - シャツ - ▁deixar - ▁Kauf - 称为 - 葛 - 驻 - ▁thiết - 珀 - ▁preĝejo - 贪 - を広げ - ▁cradle - やろ - ▁thích - 出発 - ▁Gehen - ▁repay - жер - ▁лук - ▁scrub - 頂いて - تې - ▁circus - 如果没有 - ▁resign - ▁спин - 女孩子 - ミー - 役割 - ទ - preservation - ▁музыка - ▁член - ▁Petersburg - ganira - வன் - wachen - 公安 - ▁Bevor - 自行车 - ▁hội - 찍 - ▁Anwalt - ▁genutzt - ▁participants - 观看 - ▁confronted - check - ▁дети - って言う - 叉 - ▁raven - ржав - 徴 - ▁системы - ▁kullanıl - ▁পরি - ಟ - ▁predecessor - ▁многих - 亦 - ▁федеральн - 为您 - 曲げ - ಗಳ - 认知 - ▁sıra - ▁Grim - ێکی - 月亮 - ិ - キャラクター - ք - 斤 - ▁страшно - ▁Mehrheit - になりそうです - ▁Menschheit - 使え - 其他地方 - мәй - ▁آس - acca - 違 - 犹豫 - ▁хотелось - jong - ▁instinctively - 豊かな - ▁blunt - '250' - 留言 - ▁Flora - あるんですね - ▁splendour - ▁explicit - ▁Constant - ▁navy - ▁kumenya - で最も - يص - ▁folgenden - ▁supernatural - ▁thirteenth - ▁Medizin - ницы - ▁Oui - 加工 - ▁Cooper - ▁beforehand - ケン - 運転手 - 这个项目 - keneye - ▁fangen - ▁reasonably - ▁tiên - ▁Burton - existent - ▁دانش - 视觉 - ▁kennt - 议员 - 敦 - 贤 - ラウンド - ebɣa - ▁современн - зван - ▁container - ▁лев - leuchtet - を始めた - ambaye - ▁få - ▁sensibil - ▁scruple - ▁기억 - ▁vairāk - ▁notamment - ▁Gegensatz - 農業 - ▁exclaim - ▁Schlacht - ▁নিয়ে - ▁இசை - мель - 食べ物 - धा - ▁Happy - 趣 - ▁Peace - люд - ▁harass - 开玩笑 - ▁suspense - ▁treason - ▁사실 - ▁För - チン - ▁Marcus - ▁хозяйство - ▁alleged - ▁kettle - 케 - веж - マリウポリ - 気象 - ▁хэсэг - ▁verhindern - losigkeit - 掌声 - яўля - sparen - ▁있을 - grenzen - ▁bombard - ggio - στ - 订阅 - ▁darkened - ▁uplift - ▁diventa - どれだけ - ▁لذا - 谋杀 - 克服 - ക്ക - 防犯カメラ - ▁Westminster - ▁gezwungen - ▁đúng - ▁soothing - ނަ - ▁erneut - ▁واقعا - รา - 与其 - ▁precept - ▁وإ - LA - 一点点 - ▁lượng - ▁черт - ▁settlers - Qué - ▁بچه - ▁momentary - ▁Ні - حاول - ▁Wonder - 文本 - ▁кел - ▁Soviet - ▁হবে - ▁Kleid - ▁bazı - ▁Senat - ▁Moor - ▁proverb - ▁actress - matur - న్ - йым - ▁জান - ▁sentido - 耶稣 - ▁trifling - ▁vertrauen - ▁дмитр - ▁equation - 有足够的 - ▁rotten - ▁tutta - ▁второго - 账户 - ▁überzeugt - ▁daudz - فريق - 房子里 - weisung - ▁biological - ranno - స్త - 但是如果你 - ▁gouvernement - ▁Bühne - ▁tâm - こない - ▁zweihundert - ▁крут - ▁начинает - activitat - гээд - 視聴者 - ▁Actually - ▁điểm - ▁quivering - ▁Biden - ▁principio - ▁Italien - ▁schade - IN - ▁Ça - wash - ▁Child - heli - ▁أول - 凌 - 약 - 主持人 - 铃 - ▁медведев - ▁Benutzer - ▁эксперт - ▁geändert - ▁allocation - なと思いました - 看不到 - 그 - ▁strove - 提议 - ▁Му - 自宅で - 資料 - ɣur - agrada - ▁schlafen - ▁Cran - ▁Shah - 分野 - يقة - ▁البر - ବ - β - ▁erhöht - 宠 - 閉じ - सि - ▁Living - ▁Тр - 自主 - 捜 - ▁hagati - ▁contemptuous - 眼泪 - ▁NPR - ▁oggi - ▁камер - ▁coneix - 針 - ▁вызыва - ▁Foster - せば - ируют - ▁لطفا - ▁experiencing - ը - ▁acuerdo - 况 - 大人気 - 見通しです - きちんと - трымліва - ▁giải - 霸 - ▁merciful - ▁subordinate - 范围内 - ▁حتی - built - 这个东西 - ですもんね - 骚 - ▁analyze - ▁Bunny - ▁Dallas - 正如你 - گران - lož - тверд - 増えている - Tool - 第三个 - コントロール - ັບ - 様々な - ▁survivor - ▁விளையாட - ▁ubwa - ▁Chester - ▁Wann - ▁vermute - 屋根 - ▁있다 - 诚实 - jyanye - ▁klingt - ▁chú - ▁skr - ▁приказ - луб - ▁fiend - ▁Şimdi - 阶级 - 模様 - günstig - 意义上 - に到着 - ▁filings - راض - と思っています - ▁utilize - 挥 - ▁victorious - ১ - ▁staggered - ▁vorhanden - ▁помню - ņēm - ▁警察によりますと - 队伍 - ▁Felix - ▁thức - を決めました - ▁характер - ▁Treffen - 做到了 - 兔子 - ▁depict - 誇 - ▁biraz - 白天 - satisfied - ▁Président - époque - ▁recupera - ることができます - 频 - 市長 - ▁Má - ▁gerçekten - partei - тверж - financ - 完整的 - bilidad - ▁siguiente - 殿下 - вэр - სი - ▁titre - 船长 - ▁Country - ▁Money - manuel - 박 - ▁Ancak - 昨夜 - ▁achtzehn - ować - ▁Babylon - 沃尔 - కా - 不良 - 信念 - と思うんですね - trekken - 絡 - ▁inevitably - منطقة - ▁Beobacht - ▁Veränderung - 乗客 - っていきます - tracht - রো - ▁život - 今日から - 斑 - ▁Beyond - ▁així - ▁말이야 - 後ろに - ▁humid - 遭受 - wirkung - ▁especie - exclusi - ▁classical - 皿 - ▁помочь - 尊敬 - ▁remnant - 旋转 - っぽい - ▁взять - 低下 - 処分 - 什么都不 - Argent - 激情 - ▁Augustus - 彦 - ד - ▁cependant - intéress - ▁pronto - ▁зори - 武装 - ▁ruling - ගෙන - ▁decoration - ▁Kohle - ▁شکل - ັງ - ▁Highness - ▁verlangen - 古い - テロ - されること - 言います - ▁besuchte - 一堆 - らせる - ▁Adrian - ▁patrol - へえ - matik - 剥 - 尻 - 壳 - ▁секунд - ▁awesome - ▁아빠 - ▁Parlement - ▁Maxim - 八月 - ரின் - গ্র - 締 - 物質 - ▁opini - 長年 - ▁ladyship - ▁identi - ▁bribe - ▁ເອົາ - Ɛ - ▁Columbus - 岗 - ▁mauvais - 溜 - ▁Eugene - 房地产 - ▁وهذا - ▁sáng - రి - 計算 - ▁Stanley - spetta - 看一下 - 平方 - லில் - тянул - 入国 - 関心 - ▁Delaware - 庞 - 蔡 - ▁beantworten - 继承 - アイドル - 教団 - 普段 - 周期 - ▁varying - ▁homoj - に基づ - ポーランド - ▁Providence - ▁municipi - ▁äußerst - 浓 - ▁Festival - ▁verbinden - ▁equipped - ▁measurement - と思ったら - 眼前 - ▁držav - გი - 各个 - 聞きました - ▁annoyance - މު - ▁Fit - 小型 - 网上 - වෙන - ▁dinero - 認められ - 兵士 - observa - 隔离 - კი - ▁خوش - ▁Liberal - を繰り返し - 密码 - ▁marché - ống - πο - ▁수도 - 会发生 - وظ - ▁vị - ▁haughty - ▁preocupa - ▁ecosystem - ▁만나 - 可爱的 - ▁unkind - ゆっくりと - ▁bunk - impuls - ▁hardware - 話す - と思うんですよね - ดี - springen - ▁đều - 併 - 拨 - ▁матери - ▁medicina - ▁Marco - ▁Maud - ▁imaginary - ▁racial - ▁avere - köz - ▁бүл - ▁тады - 婦 - つなげ - ▁dynamics - ▁алексей - ต้อง - ▁Jasper - 他妈 - ▁versteckt - რო - 主播 - 見ている - ▁furiously - ▁universities - 问问 - ▁пути - লাম - ▁Guerra - روب - әлә - ▁Luis - 哲 - 多次 - 吉田 - いれば - ▁rwego - āli - 甘い - 深夜 - ▁Chef - ับ - ▁Harvey - ▁lúc - ▁ғой - ▁improvis - ัด - を増や - burgh - が行われました - ゴルフ - ▁headache - ▁quinze - ▁нашу - ▁nötig - ▁своими - ▁apology - ▁empower - ▁resultat - ční - ▁mwana - ▁Basis - ▁HIV - urukundo - ▁neunzehn - ▁உள்ளன - ▁malgranda - 失礼します - ▁turkey - ▁pretence - ▁அதை - ▁unaware - 世界上最 - නු - 言われる - 食べられる - ▁Carol - ▁Londres - ▁безопасности - ▁gewann - 買う - ▁courtyard - 考えると - ▁Einstein - ▁cielo - intuitiv - 秩序 - ▁منطقه - ▁billig - ▁glitter - ▁prodig - ▁Weltkrieg - ▁sincerely - 看不见 - illard - ▁ёй - cyaha - 른 - ▁phantom - ▁furnace - ▁позвони - 踪 - 喜び - ▁selben - குதி - ▁Pru - レーン - 涛 - 谨慎 - ▁hunne - ▁Schauen - ▁könne - 我真的很 - ակ - ▁qualcosa - ▁hermano - ▁parecía - に関わ - тически - 大臣は - ▁фед - 学术 - croft - 恋爱 - kogu - 拠 - вались - ▁обраща - ▁Aktion - ▁чинь - 韩国 - 认可 - ▁Food - 重症者 - عظم - ▁Jennifer - ▁español - ▁zerstört - ▁fuerza - ▁оказыва - ▁pluraj - ▁weariness - ▁Tausende - ▁relació - ▁Katie - ▁revolutionary - 悪化 - 保罗 - дзіць - geschrieben - ▁Malcolm - ▁crooked - ▁puedo - ▁fascinated - 神社 - 假装 - fleur - ▁vijf - ▁ваша - ويل - فعال - dzīvo - 項 - 钥匙 - ▁Innovation - ▁utilisé - ▁festgestellt - 分散 - 調理 - ▁grit - ▁Golf - nerv - ▁adju - ▁узнал - ▁Kubernetes - ▁crític - ▁Dyna - 解散 - ▁exploring - ▁Мар - ▁softened - 龄 - ▁minority - origen - ▁өмнө - ▁surtout - ▁obedient - ▁statistics - ▁মু - ▁boundaries - ▁repetition - ▁svoje - ございます - ▁Jessica - ▁besorgt - ▁acabar - ▁дахь - ▁dagger - ▁Betracht - ことにしています - いませんでした - 建立了 - ような感じ - ▁körperlich - ▁scary - យ - ▁distracted - ▁applaud - ▁bleeding - ▁Accordingly - ▁Tess - 非難 - ▁Hunter - ▁circulation - ▁зараз - ▁ravine - 勝手に - င် - ఁ - 裙 - ▁wondrous - ▁Harriet - ▁utilization - ▁suspend - لىك - 付けて - ્ર - เรา - なんだよ - ▁говорите - 同じような - ▁Palace - ▁upgrade - ီ - ▁controversy - 铜 - ▁refusing - に挑戦 - 苦労 - 番号 - ▁Fälle - ▁Special - ▁Vorstand - ▁Bereit - ▁Yani - ▁நெ - 耐心 - ▁Schnee - ▁Opera - ▁çalışma - まらない - ▁magnetic - ▁Constance - ▁Wörter - ▁behandeln - মন - 推进 - なんだろう - 培 - σε - ▁ammunition - ▁gahunda - ▁sarebbe - ▁kijken - සේ - ▁2010 - ▁Twain - 抑 - 生き物 - ▁Küste - Okay - ▁paddle - 抛 - 筒 - シンプル - 涵 - ▁indefinite - 妆 - ും - ▁luôn - ▁Titan - ▁popularity - ağa - 我父亲 - 目指 - ▁evolve - ▁Bydd - ▁Transport - ▁cyose - ▁Zitat - 这个想法 - ۋە - ▁hängt - できるように - ▁Raven - やってきた - શ - ▁všech - ▁securing - ▁herinner - を巡り - ▁Ariko - тып - entrada - ▁Feli - ▁bệnh - ▁краіны - ▁Gewicht - 询问 - chimp - 质疑 - ▁다시 - いったん - ▁Cup - オス - ▁compel - ▁iPhone - ▁братьев - ご存じ - ▁Ле - เธอ - 抜き - かれて - オフ - ▁ahubwo - fenster - ▁famine - খন - 茨城県 - ▁insignificant - 循环 - ▁Sagte - ▁Ronald - ▁groom - ordinate - ▁oyster - 関係者によりますと - ▁Cyrus - ▁Duncan - ▁وفي - වත් - ▁Deep - ติ - filtr - 火曜日 - ▁venerable - しょうゆ - ▁далеко - 案内 - ▁стане - ▁extremity - アニメ - 脊 - 近づいて - ▁Russland - ▁ukuthi - ▁چو - ▁trẻ - 王国 - ▁swam - ▁rimwe - ▁Emil - ピンチ - ▁robbery - 現代 - 环境中 - ▁элек - ▁দিয়ে - が相次いで - бросил - 驱动 - ▁Burke - ▁testament - betrieb - ▁italian - を持っている - ▁долг - ふた - ▁komplett - 貧 - ▁resurrection - ▁다음 - ▁klicken - ▁minimal - 桐 - 辜 - ▁زۆر - 砸 - rechnung - 早点 - 重视 - ▁Porque - 简直 - ്ര - лект - ▁ibibazo - 首歌 - 싸 - ▁untersuchen - американ - ▁এখন - やったら - ▁plaster - 逃跑 - ▁baptism - 太空 - connect - 好朋友 - tempered - ▁gemein - ▁இப்ப - 皇后 - ▁candid - ъезд - 拾 - 拥抱 - 構造 - 在上面 - 集まり - ▁හැ - قليل - ▁Lyon - ▁Kalifornien - ▁хотели - ▁passionately - kombe - 气候 - ლო - ޑ - ▁fácil - ▁Elektro - ▁машина - dolf - 適用 - 诱 - ▁میتوان - ウォー - 抱歉 - ▁አይ - ▁miglior - 揺 - ▁ekster - 懸 - 耻 - ▁grâce - ▁очевидно - ごめんなさい - ▁Further - ▁whim - ▁đối - ▁нашем - ▁Bloom - ▁Filip - ▁большие - ▁먹어 - ▁stitch - すてきな - 輸出 - ▁похоже - ގ - 星球 - ▁tumor - ▁suced - ▁nadie - 貼 - ▁бюджет - が必要な - 小小的 - ▁dankbar - ▁virtually - 明明 - ▁vernünftig - 漆 - 谦 - 合った - أخذ - ▁پشت - ▁mound - 憧れ - péri - つらい - ▁wholesale - を求めて - ▁liquidity - īju - ▁Alabama - ▁admiral - ▁mañana - جتمع - ▁surround - 一个女人 - 診断 - 激烈 - 結び - 的人来说 - kämpfe - 男生 - yinza - ▁hoàn - 同士 - ეთ - จาก - ▁сложно - ▁pleading - 加强 - 正しい - important - ▁greet - 晕 - ވާ - угл - ▁pierced - ▁illustrious - ▁shrank - ▁giống - 国境 - ▁kombin - مجموعة - 大厅 - écout - 霊 - 稽 - ▁гэтыя - ▁Fä - 获取 - yitibwa - ▁vieux - wacu - historie - ▁ministry - Kampagne - ೇ - ▁bachelor - ▁đường - ▁Pli - ruth - ▁honorable - abhängig - 偉 - ▁inconvenience - 님 - 逛 - ▁몰라 - ▁überzeugen - 検証 - ▁abolish - ▁погиб - скага - Mobil - ▁vegetation - してくる - 歩き - sinzi - hati - 埋め - ▁Kapital - යෙන් - 摄 - ▁بىلەن - حاضر - ▁Viertel - อบ - 从来不 - ▁እን - 角落 - ▁Class - 動く - 领导人 - 诈 - 毫无疑问 - ▁છે - ▁아니라 - 藤井 - provi - 灰色 - ▁metaphor - ▁outbreak - 警備 - ▁millones - 有几个 - 修复 - ▁어제 - 이나 - 郎さん - ▁Knie - ▁listeners - 还有什么 - dığını - ▁Perspektive - ▁apologize - ▁Abendessen - 我一直在 - праў - ▁맛있 - 跪 - ▁massacre - ▁chemistry - ▁mercat - ▁нужен - ▁наук - spitze - ▁lascia - 見つけた - ждение - 不要再 - 雷雨 - 停车 - ▁treachery - 冥 - 舰 - ▁Phillips - ▁uneasiness - ▁Ursache - ▁Что - ▁들어가 - stancia - ическое - wendung - ▁Mediterranean - 罐 - 肿 - ▁vehement - ▁interface - ▁Palm - ▁übrigens - ▁rumour - ▁Verbrechen - 无限 - BS - 惊人的 - 天使 - ▁bridle - かかった - 名称 - を決める - ▁wireless - アピール - ッピング - ▁telèfon - ▁loại - 窝 - ாட்சி - 市にある - 其中一些 - 维护 - ▁triple - ▁Brücke - 記事 - ▁сталин - ▁trebui - ▁tromp - ないよう - プリ - 中午 - ▁Horse - 詹姆斯 - ▁néixer - 忌 - 駄目 - ▁நீங்கள் - 促进 - 统一 - 裁判所 - ▁może - 师傅 - ▁equipo - ▁prophecy - ▁voluntary - ▁څنګه - ▁느낌 - ничтож - ▁یافت - ▁modified - ிடம் - そのとき - ▁trenches - いたんです - kumva - пыт - 兵器 - ▁Einstellung - نامه - 标记 - ▁Termin - ヴ - ▁станция - お弁当 - verhalten - 预期 - 三角 - 留下来 - ków - ▁Doktor - ▁angst - 我刚才 - ▁kneeling - ▁Leslie - 逃走 - 有意思 - ▁مادر - drž - ▁boundary - 甲子園 - prej - 審査 - ▁ceremonies - ▁спокойно - 連覇 - 交換 - 奇迹 - ▁груд - aardig - 年目 - 我希望你 - キャプテン - 畏 - ▁scoundrel - honneur - おかげで - 侵入 - 几十年 - ▁Milan - 歌词 - に関して - 聞かれ - 媒 - ▁сделали - 佐々木 - ▁henceforth - ▁Kredit - ▁athlete - ▁vigor - овское - дорож - ▁Pitt - eficient - ▁sentit - 値段 - 彻 - બ - ▁полностью - 种植 - 确认 - ▁kreativ - ▁Dickens - trained - ့ - ▁गर्न - ▁оружие - 見てみると - 日連続で - ▁crater - ▁communion - 分别 - прашива - 続けた - 本质上 - regierung - を行いました - ▁plug - กับ - 毫不 - パーク - eceğim - esthetic - ▁Brent - されていて - ▁violation - 聘 - င်း - ▁могла - 呼ばれ - ▁gorge - がありまして - ▁Integr - 基本上是 - வரும் - 一瞬 - 地看着 - puesta - してしまった - 新潟県 - こういうこと - 酵 - ▁Später - ▁людзей - ▁Lizzie - たぶん - ▁fancies - ▁cetera - 巨大な - rühm - zentrum - ▁politika - ▁illustr - ▁Ҡа - 음 - ▁самой - ർ - 畅 - ▁reappear - 났 - ހު - lector - 晴れて - ▁stockings - ▁discrimination - ▁nasty - ▁folgt - ▁duchess - 短暂 - سط - 始终 - られていた - ▁deutsche - ▁Stati - ▁earnestness - 自衛隊 - 碳 - ▁accommodation - ନ - ▁restoration - 欲望 - వు - ▁brake - ▁placid - 炒 - ▁сельск - 你有没有 - ▁Basil - präsident - ▁Dilluns - ▁reagieren - ▁mosquito - ים - ▁introducing - 周囲 - ▁Katze - 赋予 - 合理 - onfedera - ▁zamanda - स्त - 涂 - ▁долларов - pharma - 贫困 - ▁impose - が出ている - ▁Roth - ▁đô - ▁têm - 饰 - ▁Leidenschaft - 佑 - ▁எல்லா - ▁Verfassung - 突っ - 对抗 - ▁offenbar - 东北 - ▁voce - っていう感じ - 鶴 - ק - ▁volatility - ▁Пасля - ぴったり - ▁охран - ▁exhort - ▁Meeres - öffentlich - ▁avenge - 昭 - 盒 - ▁luxurious - แล้ว - ▁být - ごみ - 存储 - ▁그래가지고 - ▁variant - 予防 - 设施 - কাল - ▁realizing - 袭 - ▁موضوع - 叠 - ክ - ▁airplane - ▁Vienna - ▁modification - ніз - 主人公 - ▁Treppe - ▁melody - froid - ▁Rw - ▁growled - 星星 - богат - 伯特 - プレッシャー - ▁Archbishop - 勾 - ▁bất - ▁причем - 都不知道 - ▁آنجا - ▁shrine - ▁precision - 只要你 - ▁feverish - ハム - 顶部 - ствовать - ▁دیگری - ▁надеюсь - ▁hóa - intérieur - 要不然 - န် - ות - 剩下的 - ▁сосед - できるよう - ▁grinned - jumu - ハッ - zungu - ▁april - chamber - ಅ - 迷惑 - ▁warehouse - むしろ - लो - ▁construcció - 这一切都 - ▁panting - ▁Davy - autant - あると思います - 酢 - ▁accommodate - ▁covenant - 갖고 - ▁transcend - ▁головой - ▁olabilir - ▁vô - ▁postpone - ▁زمانی - を入れる - 作られた - 按钮 - 赏 - ▁Schauspieler - ▁único - 処理 - ▁молча - ▁petrol - ▁шестой - ▁descendants - ▁желез - ならではの - ▁financier - ▁competent - ▁Olympic - ▁угодно - ▁tyranny - ▁Broadway - ström - ▁antagonist - ▁squat - ▁combust - 飲んで - ▁Want - ▁crec - ▁biscuit - ▁beneficial - йшлі - ▁каманд - ▁nursery - ▁reconstruct - ▁adverse - ▁beendet - ▁длин - ēji - சொல் - 身后 - ▁Conserva - 친 - ultima - 해가지고 - ▁wertvoll - 效率 - 취 - ▁Swift - ▁deceased - 侠 - ▁sicherzustellen - ▁идти - ▁بهتر - ▁bluff - سطح - ▁oppression - そっか - عامل - ▁inizia - ▁допустим - ▁ډېر - 忽视 - 微妙 - 抑制 - ރި - 度目の - EE - ってくれる - ▁colleague - ▁cứ - দ্ধ - выр - 丈 - 艦 - ラジオ - ▁احساس - 微信公众号 - schieben - ▁écrit - établi - 东方 - রণ - ▁northward - ▁нешта - 一分钟 - 捐 - ▁Baptist - 想办法 - ▁Pflicht - ▁aufmerksam - ▁produc - どうでしょうか - デモ - ıyı - 水果 - ▁zaidi - ٌ - ▁cherished - プラン - ▁exerc - 顾问 - ▁علا - sicherheit - myśl - ▁devons - ▁Mitgliedstaaten - ▁survival - 倦 - որ - 犹太 - ▁использова - 職人 - 爵士 - 기도 - ▁wolle - 調子 - ▁ведае - ▁ülke - größe - beruf - 大陆 - ▁sultan - 偏见 - ▁granite - ▁Regul - 出售 - ▁estudio - ▁ziek - ▁junta - 与党 - サラダ - 陥 - ▁disclosure - ▁sequential - ▁революци - ▁nhỏ - ▁Gesamt - ▁Gestalt - ▁отделения - 欢迎来到 - gewinn - 圧倒 - ▁accurately - 流动 - との関係 - 調べに対し - らっしゃる - ▁হয়েছে - ত্ত - 义务 - ▁Вот - රා - 苦しい - conduct - 日ざし - ▁frequency - ▁сёння - ▁Socrates - ▁دقیق - っていうのも - 意味着什么 - ▁উপ - なのかな - ▁offiziell - পুর - brew - ▁границ - ▁download - schmutz - 分間 - 沿岸 - 节奏 - ウクライナ軍 - 埃及 - ▁télé - 谈判 - ▁geliyor - brennen - もらいます - 螺 - schlüssel - سازی - দেশ - ▁முன்ன - ▁fraction - ஃப் - ▁Dazu - ▁lebendig - 去哪儿 - ▁đưa - ▁можешь - 垫 - ばっかり - 欺骗 - 总结 - ▁حس - бель - 呈现 - פ - カウント - ▁Switzerland - ▁ekzistas - ▁römische - ▁proclaim - ▁residential - ອກ - わけですから - 但这是 - ▁پول - إِ - 抬起 - ▁secular - ▁grievous - ▁diligence - ▁முதல் - 这个过程 - ▁exempt - 引导 - 洪水 - irika - ワイン - 俄 - オリジナル - 威廉 - ▁injuries - 更好地 - ▁transmission - ▁பிரி - 遠く - ▁resignation - 受け止め - ергә - ファー - 耗 - ▁splendor - ▁zurückkehren - 専 - 棵 - ▁شامل - archiv - 意志 - getreten - ▁legion - ▁offspring - ▁பின்னர் - 承認 - ウクライナ侵攻 - ▁மிக - kilometer - ▁ivory - bücher - ▁Evans - ▁clatter - ▁klub - 議長 - ▁Beweis - ポーズ - ▁Schließlich - ▁Venezuela - ▁элемент - 爪 - ▁sebuah - 晒 - ▁profesor - ▁fünfundzwanzig - 广场 - 说实话 - という意味 - 行列 - 未知 - ▁Details - ▁mâ - ▁holiness - ▁estudiar - ▁мисс - 济 - product - อยู่ - 委屈 - ▁இருக்கும் - 退出 - 復帰 - ▁gait - ▁costru - 活跃 - んですけれど - ▁employee - 期节目 - ▁kurya - 楽しく - ▁varieties - ▁superficial - もしかして - kundig - ூர் - を持った - ذكر - ▁tratta - ▁southward - ▁Medikamente - ▁Tang - 販 - おそれがあります - ▁بیرون - 护士 - ▁película - ゴー - изова - 換 - 很简单 - ▁Flam - ▁pourrait - 达成 - ቀ - ▁Ferdinand - ▁algún - ▁миллиард - 钢琴 - ▁Either - ▁Раз - ▁وهي - えっと - ▁benötigt - ▁hypothesis - 렸 - ▁Schluss - ▁komplexe - ▁endowed - ▁scenery - 連勝 - miştir - ▁peuple - саҡ - ▁babiri - 速い - 只不过 - ຸ - ▁ermöglichen - ▁english - ▁капитан - 医療機関 - ▁представлен - nął - ▁alteration - 拿起 - ▁repentance - 车辆 - 再現 - 克莱 - ▁newydd - うどん - bearer - سات - ▁людьми - ▁Indonesia - ▁spontaneous - ▁வழங்க - ▁legislative - 即便 - ▁rubbing - ▁wakati - 出来事 - 曇り - 属下的一个 - ▁getrennt - 数据库 - ▁damsel - னே - серд - Video - ▁посмотреть - 该怎么办 - ▁eccentric - ▁완전 - ▁نبود - ▁написал - ▁Eagle - 村庄 - 及其 - habwa - ▁lavish - گذار - 最終的に - ющей - 斯顿 - 栽培 - ▁protože - ▁nightmare - 另一方面 - であること - 临床 - amenye - ▁resultado - ▁Dreh - 漬け - コート - だといいます - 肩膀 - ▁Louisiana - ▁grotesque - ▁पनि - ▁restaur - ▁sitzt - 不思議な - ▁salv - ▁করেন - ▁bố - 年ぶり - ▁Rather - 这样的一个 - 読んで - ▁fung - റ - ĝoj - ▁действия - ▁schreibt - kanye - 解答 - ▁flirt - owski - ▁کردیم - ▁примерно - ▁prolong - 放置 - ▁vexed - 描かれ - 伸出 - ғына - ▁малая - ▁olacak - 甘み - ▁reprit - নো - 菓子 - 客气 - ▁tiel - できるだけ - 教训 - stairs - ား - 時過ぎ - spirited - 局面 - 述べ - 笼 - 幺 - 债务 - ▁Athens - ▁hydro - ▁mówi - 携 - ▁obstruct - 金钱 - 找个 - ▁депутат - ▁이번에 - 冯 - ▁شاید - kresk - ▁dringend - 以一种 - ▁Gaul - ▁gefällt - 约会 - 阁 - ▁возвраща - பெயர் - ▁verletzt - ▁Befehl - 現象 - স্ত - ▁heavier - ▁Darüber - ▁모르겠 - 一本书 - ▁ordain - 明らか - عيد - covid - もらいました - ព - ▁dipun - intérêt - ▁Mitchell - 很长时间 - 段階で - ▁микро - ▁thú - 年级 - 热爱 - ▁военно - 書かれて - の様子です - ▁Einheit - 拘 - おととし - づらい - ▁Gerechtigkeit - ▁verdammt - ▁intuition - ▁angemessen - getrieben - ▁oblige - геҙ - ▁außerdem - 婚礼 - geleitet - 特別な - ▁factories - 就不能 - 福利 - 古老的 - typisch - 終わ - 蛮 - 分かれ - ▁hoặc - ▁миллионов - ▁furchtbar - 瞎 - ▁대학 - ▁foliage - ▁drugi - ▁expressive - 收藏 - ▁furent - ▁applies - 很多事情 - ▁داره - ח - ▁cafe - waarde - エイ - 穆斯林 - ▁waistcoat - ▁memorial - 渡辺 - алтай - ▁który - 海底 - ▁velmi - 一阵 - ▁خودش - 函 - ▁achieving - ▁hanze - یایی - ▁самые - 広い範囲で - ▁crisp - فرد - ならば - となりそうです - 棄 - ▁Almighty - ▁defensive - ▁pouvons - 轰 - زده - ▁Drake - 軍事侵攻 - どうしよう - ▁Nathan - 暴行 - 网友 - ▁Excel - ▁offense - มาก - ▁plateau - こういうふうに - ▁течение - 作业 - 晚安 - ▁আস - ведения - ▁début - やり方 - ▁generating - 树林 - ▁замет - ▁vendor - ▁galley - ouvre - 毫 - ▁федерации - いかがでしょうか - ▁necklace - ▁binding - 阴影 - とみられています - 復活 - ▁cứu - 停電 - ▁khó - わけじゃない - беҙ - 하면은 - ▁არა - gesteld - 疼痛 - ▁haciendo - 晋 - ▁hína - ▁majestic - 改正 - த்திய - temperatur - zuhalten - ▁Norway - ▁Questa - ▁Portuguese - 열 - ソフト - やめて - liselt - ▁кӱ - waardig - ғыҙ - ▁Capital - išk - ▁Additionally - ▁исследовани - ▁நல்ல - ▁complicat - 又一次 - 一个男人 - 十分钟 - 回答说 - េ - セカンド - 枯 - 栋 - 是一样的 - ▁Madison - ▁afecta - ▁اتاق - знания - ▁دهند - замен - ジン - 基金会 - ▁московск - ェ - ▁durchführen - ▁disappearance - звуч - installation - 少しでも - 你会发现 - ▁erforderlich - 舟 - ▁Verteidigung - ▁золот - ▁பெரிய - ▁alumni - ▁lautet - 仙台 - ▁nombreux - ▁pacient - 场所 - ▁gaily - 致力于 - כ - ▁España - ▁humiliation - ▁هنوز - ▁Hitler - ▁dónde - ▁Edmund - 液体 - ▁gebeten - ▁konata - 术语 - ▁slew - ▁прошу - zustand - ▁Native - ▁confounded - んでしょ - 短期 - ▁Wach - ▁voiture - வான - ▁snug - ▁fearless - ▁reflex - チュー - 镜头 - ▁radiation - ▁vardır - ▁байгуул - ▁NASA - ▁perplexed - ▁여자 - ▁вялікі - 異なる - ▁specie - ホン - 但我认为 - ▁flaming - ▁discord - solució - қәа - 興 - 很清楚 - ēju - ぴ - 远远 - ющего - 言える - ▁мужчина - 睇 - ▁Verbraucher - ▁unbekannt - 色んな - génér - 暮 - ▁جوان - ▁correspondent - 獲 - ▁المست - 構え - ણ - ▁există - ▁ethical - になったら - dagi - мовы - өсө - ▁transparency - ▁ஆகும் - ▁Investition - ▁pretext - 肌肉 - 関東地方 - ▁другого - ▁Following - ▁قم - étend - ▁Steel - ignit - 罰 - ី - ▁Gouverneur - ▁Một - 見たこと - 押さえ - ニック - 预防 - 自殺 - ▁fick - ▁Abdul - oubli - ▁partake - பாடு - オレンジ - ▁bemerkenswert - ▁موجود - ▁Swiss - ▁shabby - ▁Jimmie - 圣经 - ▁худ - キム - ▁Program - 宪法 - ▁Confederate - ▁Geschwindigkeit - ▁miraculous - ▁specialist - 給付 - ▁worthless - даецца - 音频 - 你说你 - 僚 - ▁beeinflusst - 取り組んで - ▁tradicional - ▁recipe - 起訴 - ▁никакой - 決断 - ▁protector - ではないか - ▁tác - َلَ - ▁fila - kungu - チャンピオン - おしゃれ - ▁немножко - ▁settling - 風景 - ▁Cecilia - ▁байр - ▁següent - ▁unmittelbar - 行方不明 - ▁problém - ▁دیگه - 建築 - 犹太人 - ▁семьсот - いらっしゃい - īv - ▁gdy - 祭り - '2013' - ▁supra - 빠 - ntara - ▁Windows - ▁анализ - 火災 - 30% - 试试 - 地図 - ▁übernehmen - últim - 厚生労働省 - դ - ▁και - ▁دلیل - ▁knives - ▁প্রতি - писыва - 不同意 - ▁транс - ▁marshal - ຖ - 씩 - ▁imperative - ▁cupboard - 冲动 - ▁Josep - effizient - ▁воздух - ▁ripple - වන - سې - मे - 言え - ▁nommé - ▁comunidad - ▁дней - 尤 - ▁humil - 高齢 - ▁marque - シングル - ▁Argentina - ▁доктор - уулах - ▁voilà - ▁Except - ▁Ла - 하기 - ▁Library - ▁wherefore - げん - ▁Fernando - ▁yardım - ▁prostrate - 春天 - шек - ầu - 辩 - 況 - ▁видеть - ▁Augenblick - という状況 - ▁měl - 一起工作 - ▁pouco - 邸 - ▁todavía - ▁rusange - ▁piercing - 並ぶ - 冠军 - ▁Jesuit - 准备好了 - 離れて - 职位 - 碰到 - حين - ambiente - ▁unreasonable - ▁поговорить - ղ - ▁memorable - ▁acquit - ▁paradox - ▁Raymond - ▁volgende - イラン - Follow - ▁делают - ▁kesk - ▁Manager - క్క - 承受 - エア - أة - ▁때문에 - ▁escaping - ▁ມີ - 観察 - だそうです - 北京オリンピック - ▁моего - ▁Bauern - ▁rapidity - れん - 颤抖 - おかしい - ▁нужна - における - 跳舞 - ▁eksp - 列島 - 안 - クター - ▁가면 - 看板 - ▁лепш - ▁barber - 相当于 - 倾听 - ដ - ▁certificate - 蝶 - 鸣 - 踩 - ▁qualification - ruḥ - 嫂子 - ▁лидер - 感染症 - 割り - 还有其他 - 消除 - ▁மனித - માં - ▁настолько - 둘 - ▁магчыма - ▁hubiera - ▁vermeiden - ▁thống - 大規模 - ▁affront - 尼斯 - 战士 - ▁folosi - ぼう - ▁elementary - ▁чаго - 价值观 - を残し - czę - ▁Lippen - ▁corazón - 雇佣 - ▁Überzeugung - umweru - タイムリー - lendir - 遭遇 - ▁чита - ▁срок - 起床 - 놓 - 페 - ▁afflicted - ▁ĉirkaŭ - ▁negotiate - 記者会見 - 静岡 - 書いた - ▁skupin - ▁procura - ▁implant - ▁سەر - 都道府県 - 餐厅 - 亲戚 - ▁رجل - ▁revived - ۇپ - gebrauch - ▁liefde - männer - ▁rugged - ▁allemaal - 求められ - 어서 - 行星 - ▁بسیاری - 贫 - upakan - 尋 - ▁رنگ - ▁longtemps - ▁pregnant - 缠 - டே - ▁slipping - ▁facilitate - 하면서 - ejoj - ▁starving - 入り口 - というわけで - 爸妈 - マイナス - ▁urging - 诊断 - diğini - میر - ▁perilous - っていって - ▁عق - ▁Guru - 征服 - ▁existiert - ▁precipice - ▁استخدام - ▁dramatically - immagin - の疑いで - கல - アート - ▁delegate - ▁fervent - ▁качестве - 잡 - ▁полковник - 睁 - だと思うんです - மும் - 残念 - ▁completamente - いいと思います - ött - avion - ரீ - 没办法 - 라는 - ▁Maurice - ▁tiền - 萌 - ▁sammeln - деля - ▁Conseil - ▁boughs - ▁уровне - ▁радио - ▁возможности - 現れた - ▁cultivation - хватил - ▁incense - ▁왜냐면 - 脖子 - ▁waarop - 殖民 - объя - 現役 - 諸 - 優先 - ▁compromis - ▁parola - 雀 - ▁convincing - ▁ماشین - ▁bezeichnen - 戴着 - ミニ - ▁alluded - ▁Carlyle - ▁strode - рабатыва - ▁europäische - ước - ▁убива - 말 - ▁而家 - 措 - できること - ▁Identität - ▁zooveel - ▁vierzehn - ▁geleden - ▁irritated - ▁Heilig - 翌日 - 女優 - ▁tedious - 不清楚 - 釈 - ▁безусловно - ▁apprentice - ▁пространств - ▁hesitating - ใจ - ▁beschrieben - winkel - martial - ▁Nikola - ▁Malone - ▁Josh - ▁orthodox - 력 - schloß - 根本不 - treue - ▁Vull - ▁Pflege - studie - ▁tiam - 家具 - 密切 - ▁viên - ▁kritisch - 喫 - ▁mugihe - よろしい - ▁Geschenk - ▁되는데 - ▁secund - 童年 - ▁aasta - 结束时 - 取代 - ▁commencement - ▁Behind - ▁penetration - ▁judging - 繁荣 - ▁rapture - 奈良 - 有哪些 - 我只想 - 選び - ▁иметь - ▁fairies - 歩道 - ▁Depression - ▁ئەوە - 급 - ▁получить - 初日 - 释 - 回复 - işim - affaire - 多くの人が - お話し - 속 - ▁endurance - ▁regretted - ▁Großteil - 打印 - ▁gentil - ▁zacht - 裔 - ▁prescription - ▁Verbesserung - ▁михаил - ▁ہے۔ - GDP - forschung - through - ▁தனி - mutima - 別れ - 酒吧 - 生み出 - ▁comparative - ▁казалось - ▁valiant - ▁automatisch - ▁fanatic - 集合 - ▁одним - 桁 - ▁Pharaoh - ▁ມື້ - ▁snarl - を果たし - ▁rebuke - ющее - ▁louis - fleck - ▁ومن - 病例 - ▁ҡо - ▁Pink - qqim - ▁تۆ - ▁conscientious - ▁комитет - sighted - 成績 - 部落 - ▁věc - layan - ▁Grow - ▁forbear - ▁اصلی - иңә - ▁südlich - ▁Above - 合わせた - ▁Bertram - ▁Broad - ▁seneng - 三个月 - ▁видели - prak - ▁acerca - 资助 - 続けている - ▁கூ - 书籍 - 晴れる - 総理大臣 - 복 - 써 - ▁Kontext - ▁sunrise - ▁звоните - ▁constable - ▁крайней - 起诉 - inspiring - 完整 - ▁Schaden - 留给 - ▁Camb - ▁battalion - ちゃいます - ▁composer - ▁Voll - ▁även - 潜水 - ▁машины - 逗 - ▁vrees - ▁stereo - ▁scroll - ▁ذات - 死刑 - ▁fourteenth - 太平洋側 - ্ম - ▁cicl - nshingano - 前往 - ҙың - ▁وكان - ▁Angebot - ▁دنبال - ▁Chuck - ▁điện - ▁virtu - ▁demokratische - ▁остров - ▁plump - 本番 - 慧 - 軟 - ▁Maschinen - ▁whistling - 愧 - 忽略 - Laugh - ▁хот - 努 - ▁Fakat - łow - եր - ▁bemerkt - 垣 - 嫌疑人 - ▁geeignet - ▁제일 - ▁aucune - 水準 - ▁Abteilung - geschnitten - ▁abnormal - ▁вместо - ▁alguma - 寄付 - ▁kanggo - ▁durfte - ▁civilian - ▁creator - ▁sauber - ▁Entdeckung - ▁întâ - ▁америк - ▁ocurr - ッグ - ▁discrimina - 苏联 - ▁comprar - ▁Krankheiten - ျ - ▁mwiza - ▁două - ▁mold - ▁Europäische - ▁Ли - ▁чуж - ▁bedeckt - ▁பெரு - 綱 - 芯 - 弦 - ▁갑자기 - 鹅 - ▁biology - ▁picnic - ▁мысли - ▁magician - 相机 - 選手権 - ▁тракт - ▁Summer - 見つかり - 契 - 创始人 - 語り - 辅 - ▁ungewöhnlich - ▁plutôt - ▁دغه - 闯 - ▁Family - ▁benannt - ▁Bryan - ▁Brett - 或其他 - ▁mẹ - plast - 阿拉伯 - ঝ - ▁nécessaire - 朕 - 社交媒体 - پوش - 住房 - いらっしゃいます - 感染状況 - ▁типа - 慰 - 謝 - マウンド - メダリスト - ▁discouraged - ▁gelukkig - ▁refinement - ▁ایجاد - ▁зохио - ŝanĝ - 底部 - röst - ▁spinning - 衡 - entendre - нцев - 团结 - コンビニ - ▁Roboter - ▁abilities - 崎さん - 普及 - メイン - ▁Sitzung - ▁Clement - ථ - ▁coroner - 趣味 - ▁пятая - ▁gulf - ルート - ▁особое - 応じ - 分かりません - 将棋 - ▁tolerate - ▁Cuando - ennial - ▁Küche - 今天晚上 - ▁millionaire - 临时 - ọ́ - 厳しく - 金額 - イヤー - ▁Landschaft - ▁전에 - どうでしょう - ӹн - 飛車 - 患有 - ▁quitted - ▁evitar - 飼 - ▁đại - ▁gerçek - 是多么 - クリア - spero - ▁berührt - ▁hoorde - science - ▁Cecil - builder - 世帯 - mysl - ▁hâ - ▁programming - ▁এর - ▁அங்க - весел - 주고 - временно - 褒 - 胳膊 - ▁possono - ▁undertook - گار - endroit - 做饭 - できれば - ▁Führer - ▁nostrils - ▁économique - ▁果啲 - ▁развива - ▁promet - 教徒 - ▁основан - је - ▁gutanga - сроч - ▁clumsy - ▁请播放 - 秒钟 - ▁Rupert - ▁Prakti - 有効 - ▁extern - zulassen - ▁Ре - ▁Voilà - ▁компьютер - umukobwa - ▁unanimous - ▁sovint - ▁forthwith - ▁алма - 階段 - くなっています - ▁hôm - ▁refreshment - 任何其他 - 军官 - ▁Dunkelheit - 颠 - ▁زبان - 农场 - 小説 - ▁brauche - сурс - 学び - ▁Scotch - fighter - ▁Então - 帖 - bitekerezo - ▁Together - ▁spēlē - ▁بنابراین - ▁முக்கிய - ▁Queensland - 嘉宾 - 遗产 - 晚餐 - ▁imposing - ▁хороший - 収入 - ▁prakti - 文書 - っていうところ - 行われている - 趁 - ▁gospod - ▁okazis - 요일 - ▁хэв - 夫妇 - ång - 嫂 - 今のところ - dzē - ▁predominant - 절 - ▁imbaraga - ▁Fond - ▁tavern - ლებ - 芋 - ▁treacherous - 鸭 - 落ち着いて - ുന്ന - 布朗 - ▁noticing - ▁blanche - ▁чисто - ダム - ▁discut - ▁Charley - ▁Alexandr - ランチ - ▁Zucker - 訴 - ▁apoi - ▁wholesome - 청 - 共享 - 違った - 具体的に - 多年来 - '4000' - 줘 - ▁behaupten - ビッグ - gâ - ▁कर - 一生懸命 - ▁además - ▁Violet - ▁вопросов - ▁nziza - 隠れ - 病床 - 甚 - ఫ - ▁cripple - ▁Stärke - ▁așa - ▁temporada - 创伤 - ▁видно - ▁sentinel - ▁думать - リュ - чёт - ▁verlangt - 初期 - tūr - 績 - ▁disponible - ▁detained - Source - ▁která - 一个巨大的 - ▁commentary - ▁девушка - gesp - アスリート - ▁Armstrong - ▁prosecution - 등 - ▁тело - دعو - ▁اوس - ին - 购物 - решен - ▁cố - 汚 - 迹象 - ▁Schicksal - ▁calamity - 贯 - ▁nuovo - ▁bizim - 打扰 - 疫 - ▁simplifi - ▁musí - 釣り - ▁Európ - ▁luggage - যোগ - 芸術 - ▁эфир - ▁Anything - ▁القر - ▁سمت - mètre - ходили - 变得更加 - 展开 - ツイッター - ވެސް - ▁entschlossen - 保安 - ▁reproduce - ächte - 貫 - ତ - ▁정말 - ▁Welsh - ބަ - ▁güzel - ▁چشم - 速報 - ▁întreb - 上涨 - ▁chacun - ロシアによる - 不在乎 - ▁Vì - 诗歌 - 污 - ▁مدرسه - nascuda - ޓ - 薇 - ▁übertragen - ▁일본 - ▁voyez - expliqu - freiheit - 詩 - ▁diferencia - ▁erfordert - 美女 - 引退 - ▁eastward - buckle - 唯 - abilitat - ▁غذا - ▁Alpha - ▁волос - 水曜日 - 墨西哥 - ▁erheblich - գ - այ - ▁declining - 噴火 - ebene - ▁speck - ▁moisture - পার - ▁kommun - أشخاص - ▁Үүний - schleunig - глянул - ▁sociedad - Quest - ▁exalt - 使命 - 糊 - 尸 - ▁скажи - ▁зусім - ▁geholfen - 胀 - ▁солдат - 匆忙 - 沟 - ▁xả - ▁navigation - ▁බව - 去哪里 - 贷 - 拍手 - ▁Beethoven - 帽 - ▁fathom - ▁statute - ▁gufata - غۇ - ▁Blumen - ▁mängi - Unis - ▁sık - ▁psychische - 見つけ - ড়ে - mektedir - ▁pièce - 臨時 - ▁Morning - ▁infernal - нести - 这是关于 - ▁использовать - 励 - おばあちゃん - ▁detachment - 赔偿 - 偷偷 - 优雅 - ▁birkaç - 帰国 - ▁حسنا - 証言 - 喜剧 - ▁Athos - ▁хоча - あえて - ▁stecken - ▁дожд - 会导致 - ピンク - 尽量 - 買った - ▁bilong - ▁Planet - 览 - ▁Katherine - alphabet - خصوص - ▁Blanche - ▁العام - ▁велико - トルコ - 抜いて - ▁Campus - ▁squad - 到目前为止 - ▁elapsed - 앞 - ▁measuring - そっち - ▁Truppen - 亚洲 - 新宿 - ゲット - ▁physisch - 納得 - göt - aunted - ▁glove - ఏ - ▁ponto - ▁inspira - ▁compost - 性质 - 亚马逊 - ▁monotonous - ▁proclamation - ▁دوباره - ▁혼자 - নার - ロンドン - ▁homage - ▁participating - ▁appoint - ▁பற்றி - ނެ - نقل - 澄 - 肢 - ▁davvero - 졌 - 轩 - 伴侣 - 诶 - ▁хөгж - ▁eindelijk - 伊朗 - ▁yakın - ▁smote - ▁consolid - තිය - tumye - ▁доступ - ▁Alexandria - ▁Constantinople - ▁treasury - ▁период - 닌 - ▁espacio - 料金 - ▁duidelijk - ▁tradici - авана - ▁твой - 澤さん - あるんですか - ಾಗ - ميل - 欺 - ▁Vậy - ▁تولید - ▁آمریکا - 做了什么 - ▁absolv - ▁пройд - 処 - luğu - ▁langue - 一点也不 - ▁рух - lucid - ▁condescend - 旺 - ▁Mathematik - ▁Haufen - ▁그럴 - lardı - າມ - ▁словно - ▁inviting - ▁vegada - 录音 - ▁freak - ▁좋아하 - ▁humorous - ദ - ▁первое - ▁nördlich - ▁Stuhl - ҳәа - 吵架 - 実態 - ▁건데 - バッグ - ුරු - ▁doppelt - ▁fascination - 原油 - 丼 - ച - ▁discomfort - ▁persuasion - ▁будешь - traves - ▁burglar - ▁awarded - ▁болсон - ないですか - ▁fuerte - ▁novelty - 室内 - を迎える - ▁fidelity - ▁monastery - ▁мальчик - ▁verwirrt - ▁کودک - ▁cuán - ченко - 嘲笑 - ▁Diamond - が高まっ - 日常生活 - ▁blickte - ំ - ▁anecdote - ▁divinity - ▁finanzielle - ▁ethics - rrh - 理念 - ▁geography - organitza - お客様 - 悼 - ▁Cependant - schrecken - 起源 - ▁notorious - ▁început - ▁хэдэн - gespräch - ▁vibration - 污染 - ▁адрес - づくり - ▁Moscow - ▁Naples - ▁agafar - 几分钟 - symmetri - 教師 - ▁bamwe - ▁animation - ▁aisle - 点钟 - 長官 - ▁camí - ▁carrière - 萝 - 一点儿 - ▁kazan - ிற்கு - ▁борис - 季节 - 投资者 - spoken - 恥ずかし - 猩 - ▁историю - ▁caballo - 困扰 - 銀メダル - ▁Voice - ▁Dakota - ▁хочется - 融合 - ▁Multi - 争取 - ▁Vä - тычна - espér - 延伸 - ▁nghệ - 코 - 귀 - 狐狸 - 职业生涯 - ▁никому - ивается - therapie - ▁moralische - 跃 - าง - ตร - ▁voulu - ជ - આ - ▁economist - ▁여행 - ▁detached - 运输 - ক্র - ▁devait - ▁Locke - ▁sinister - ▁너는 - ▁чист - 厌 - キッチン - ▁secrecy - 典型的 - ▁Takže - ▁داستان - 依靠 - と述べました - 取引 - ▁женщины - ▁далей - 飛んで - 替代 - 集め - 代の女性 - ▁quartier - 芒 - 這 - тельные - ▁Slav - ▁suffi - 新潟 - 艾伦 - ▁humbly - ێر - ாமல் - ▁pudding - seminar - 埃尔 - と主張 - そうなんだ - намі - を越え - ▁Forscher - ঐ - ▁kürzlich - ۲ - 狮 - ▁reassure - ▁Puerto - ▁Senior - 적으로 - ▁pulpit - 原則 - ▁Portland - ▁solange - ▁Sergeant - ▁sidewalk - ▁forlorn - ▁Stress - ▁лица - جنس - 책 - 현 - ▁общество - ▁dripping - ▁chinesische - ывается - ▁Strong - ▁bandage - ▁cầu - ▁риск - 私達 - ありまして - ▁unequal - ▁Democratic - ▁Vögel - 支撑 - 财务 - ▁astronomi - 印刷 - กัน - 电池 - ▁marvelous - 美容 - ීම - 腸 - コンピュータ - ሚ - ▁línea - 挣扎 - ▁psalm - PCR - 100% - ▁Edwin - ▁Mack - を発表しました - عالم - コスト - авлива - ▁crouched - ▁morbid - ▁имею - ▁relish - ggins - ハンド - ▁wipe - ▁chemist - ▁titul - ใ - እ - ▁Montgomery - ▁perceiving - లా - 得很好 - 合計 - mişti - பை - Effekt - пром - ▁hacía - ▁нашим - 砖 - ▁dinosaur - コラボ - ▁coincidence - ▁isolation - کرا - ▁گوش - quê - ▁Unterricht - ▁recognizing - 杠 - 盈 - 砂糖 - ▁slecht - ▁sacrifi - С - mwami - ภา - 误会 - ложения - leuchten - wheel - ▁Sancho - ▁الواقع - ▁boyunca - ▁geöffnet - ▁betroffen - 発表した - ▁CapEx - 暴風 - ▁Curtis - 一个小时 - ▁heutige - ▁Clemens - 楼梯 - ▁usurp - छन् - 指标 - ▁Grav - 設計 - ▁Hamburg - ▁fantastisch - 労 - ▁irgendwann - ▁কিছু - ▁mỗi - ▁смерти - ▁worldwide - ▁shelves - Ҡ - ▁restrictions - 並べ - ▁candidat - いただける - únic - タクシー - 眞 - ▁snuff - ▁سخت - 雨が降って - کۆ - 主演 - ▁chalk - ▁جيد - ▁hobby - loč - ▁사람들이 - 訪 - atrix - ▁gestorben - ▁Digital - ում - ▁게임 - 吉尔 - ▁Imperial - 设计师 - ▁seguro - ▁geistige - ▁போல - を進めています - ▁தெரி - まぁ - ひっ迫 - を踏まえ - 朴 - ▁Oklahoma - ▁Lächeln - ▁possiamo - ▁люблю - 慢慢地 - پتۇ - ▁Họ - ▁grammar - блюд - ▁crab - ポリ - 重症 - 辞め - 疾 - ცი - ໃຫ້ - 誠 - ▁заўсёды - ▁Jeremy - ▁immune - 客厅 - ്യ - 許可 - ▁Eindruck - メント - 最后一次 - ▁compose - 选民 - 困境 - វ - ▁Especially - ▁вещь - ▁Express - ▁decât - ▁bavuga - 凶手 - ▁meditate - ▁그거는 - 採用 - 子さま - ▁Roosevelt - ▁nghiệm - ▁குழந்தை - 深圳 - ▁아예 - 猜测 - んだと思います - ▁elastic - atangiye - ▁Tipp - 瘾 - ▁vorbereitet - 拦 - 눈 - ▁Energy - ▁않았 - 评价 - 勤務 - ডি - ▁Brandon - が必要だ - 反発 - ڕێ - ▁нормальн - 南京 - 难以置信 - ▁kän - ▁recurring - シート - Europa - 跑步 - ▁Mereka - gelaufen - ▁Uyu - ▁успел - ストレス - ▁Jesse - 愛知県 - ▁dazzling - 查询 - ▁interposed - 珊 - ференци - ▁ĉu - ▁durchaus - ▁sedang - ▁принцип - ものすごい - ▁může - 江戸時代 - ▁Armut - ▁Perezida - ▁развития - ▁взаимо - 治愈 - 豪華 - ▁сделан - ▁адказ - ▁вниз - 厨 - 見直し - ▁война - بێت - 递 - ▁другая - ▁científic - 飘 - ▁signifas - ▁unterscheiden - ▁தெரிய - misel - シュー - ▁fiddle - ▁retard - ▁condens - ▁cultivate - 几年前 - 非法 - ▁hinweg - ▁enkele - とみられます - 行きたい - 歓 - ▁நிகழ் - グッズ - ▁Großbritannien - ▁цяжка - 辨 - ▁يوجد - 時半ごろ - 80% - ▁важна - schneid - 就行了 - 战场 - ▁Ula - هدف - 亨利 - 勋 - 虹 - ‍ - 散歩 - ▁Pfund - ▁Cassi - 析 - 代の男性 - ▁அடி - 细菌 - 학년 - ▁deceit - ▁Symbol - εί - ▁konstruaĵo - amatu - ỉ - 你会看到 - ▁Corporation - 喉 - 巾 - ▁cristian - 務大臣 - 全国各地 - ▁бөт - ▁Nacional - ▁Yella - ▁practise - ▁странно - ▁molest - ěl - 残疾 - まいりましょう - 僧 - 纯粹 - czne - ▁geographical - ▁activist - といわれる - கிறார் - ▁empfehle - ▁Tränen - ▁cantonada - ▁apparition - ικ - ジャパン - ▁devout - 到这儿 - ▁olması - ▁pronounce - ▁Dacă - 拡 - mauer - க்கிறார் - ıyordu - 大谷翔平 - ሆ - ▁bijzonder - ▁первого - ▁последний - ▁لطفاً - ▁languid - ▁urmă - ▁impulsive - ▁dennoch - đ - ▁aristocratic - カツ - ▁Saturn - ってきます - ▁때는 - こんなふうに - ቃ - ▁nyinshi - 咒 - ▁похож - вшего - 孤立 - ▁shilling - ▁sichtbar - ▁ඇත - ાર - ▁gaunt - アルバム - ぷ - ሄ - ▁Posteriorment - ▁insgesamt - 종 - 装饰 - 汚れ - ▁Eigenschaften - ▁конфликт - ▁пришлось - ▁Sweet - مدرسة - ▁Ĉe - ってしまった - 牛肉 - おじさん - 건 - 几周 - ▁pequeña - ▁więc - ▁nevoie - ▁elkander - ▁становится - ▁vigour - ▁civilisation - espai - ▁companionship - ▁troublesome - ாலும் - щим - ▁ouvert - ▁politeness - scape - 掲載 - 艇 - ẫ - ▁caballero - ▁trabalho - 伞 - ▁مرکز - ▁معمول - 奥さん - 椅 - ▁üret - ▁осво - technik - 吊 - ▁Nachmittag - ▁shroud - ▁wenigstens - roost - ▁ominous - 占据 - ▁Ausnahme - ▁Manuel - ▁высоко - ▁Unidos - ▁bagay - ▁здравствуйте - ▁embedded - ▁надежд - lògic - ţ - ▁ensued - ▁говорим - ▁supposition - ラブ - 韓 - ▁крым - صاب - ルーム - 布拉 - ボランティア - ▁celestial - ▁securities - ▁του - ▁이케 - 判定 - 决策 - 预计 - ものすごく - ▁называем - ável - 航行 - 可惜 - ▁diagram - ▁tháng - ▁Dennoch - ▁Tiene - を対象に - ёшь - klassi - ▁sexuelle - ▁umutima - нюю - 査 - 脅 - 玄 - ੋ - 哎 - ▁Amsterdam - 繁殖 - ▁குறை - ▁volcano - テレ - 天主教 - ступил - きょう午前 - ハラ - 单身 - ▁тэг - església - ▁temporarily - ጥ - ▁먹었 - ▁beseech - ▁epoch - 何ですか - ლე - ▁distingu - ▁Mohammed - ▁einschließlich - 넘 - ▁birlikte - ▁Analyse - 補助 - 轨道 - 대로 - 거나 - ▁Urteil - 貢献 - ▁verbessert - 凑 - ▁Akademi - ▁destructive - ▁fellowship - ваўся - ▁норм - 極めて - ▁되지 - ō - ▁Gertrude - 召し上が - ▁твои - ▁дүр - を務める - ▁escucha - ▁ஆன் - ▁Explo - ▁rogue - 変わらない - マッチ - ▁invece - democrat - ▁tình - ▁바로 - ▁heiraten - ▁whipped - 具体的な - 世の中 - 电视台 - 信じて - ▁geschafft - ▁clamor - streich - 準 - 加油 - 하잖아 - 册 - 冷蔵庫 - ▁Hampshire - 匪 - まもなく - ▁Stanford - ▁قانون - ▁palju - nguish - 润 - ▁اصل - ▁clenched - ▁سیستم - ▁sprinkle - ▁criança - 開かれた - ▁utawa - चि - قاد - ▁façana - ▁moustache - ▁그것도 - ▁традици - ρα - 防御 - ▁Beatrice - 頂け - 介入 - ▁shameful - ▁bagian - ▁имеют - 匹配 - влак - 挫 - ▁Exchange - ▁Wirklichkeit - 躁 - ▁honom - ▁modifica - ▁chủ - ريخ - ▁Utah - ▁mitigate - されていない - ▁celebration - чыцца - 繊 - ホワイト - ▁хүмүүс - झ - ▁sturdy - ▁Bundesstaat - 确切 - 礼貌 - schieß - 舞蹈 - 벌 - ▁verzweifelt - ▁lấy - ▁हुन - عيش - ▁chơi - 食べたい - ▁governess - ▁серьезно - 権利 - 消費者 - ▁Protest - ▁chuyển - ▁muffled - 疑惑 - ▁Voici - ▁gambling - ▁могуць - ▁хүч - 蛋糕 - ▁nekk - 都内で - ▁извините - ▁кровь - 吾 - ▁Chancellor - ▁Schriftsteller - 庙 - 처럼 - ▁lächeln - 摊 - zügig - ▁Больш - ▁lumière - защищ - ▁descubr - ▁Celia - ▁voters - お待ち - ическим - ▁width - ▁miliard - ▁ecclesiastical - ترك - ▁Motiv - ▁Bemühungen - ▁sanctuary - 字母 - 巢 - ▁tactics - ▁стрем - この辺 - zdrav - 鸡蛋 - ▁appelé - 色々 - ▁llengua - ច - ံ - ▁институт - ▁மூன்று - ▁champagne - ▁Provinz - ▁Training - ▁disseny - 見たい - ▁gitti - ▁சொன்ன - ▁Somebody - فضل - ▁semaine - ▁workshop - 修理 - ▁unreal - ▁eigenlijk - ▁மாற்ற - エビ - 緊 - ▁conserve - ▁Saviour - ▁değiştir - 辩护 - ▁пришел - 拥 - escence - ▁любим - ▁Ashton - ▁Stoff - послуш - населен - ▁Dominic - με - ▁پدر - ▁Miami - ▁utama - 交差点 - ▁réponse - ▁biệt - ▁aslında - ▁করতে - شرق - 描いて - ▁rebuild - ▁Beitrag - ডা - 塑造 - ▁обязан - 财政 - 切り替え - rechnet - mdash - 惊喜 - ▁Archie - ▁yazı - strafe - 著名 - 乗用車 - ڈ - ▁horribly - 是什么样子 - 芭 - ▁beauties - ▁verhaal - フライパン - ▁Saudi - ницу - ▁Medical - kreuz - ▁unsuccessful - 담 - 증 - ▁discreet - ▁happiest - シェフ - 默默 - ▁enabling - ▁elevator - 美洲 - ▁hydrogen - 代替 - に向かう - ▁landlady - ▁arguing - ▁ленин - ▁hateful - ▁Verkauf - 子弹 - ▁kulturelle - ▁остава - 婆婆 - 連れ - 설 - かっこいい - ▁пошли - ▁है - ▁सं - ব্য - ijoro - kontakt - ▁توجه - 经历过 - 順位 - 可怜 - ▁новых - 全国的に - 粮 - ▁collector - 竞 - ళ్ళ - テーブル - パターン - 侍 - spread - ▁помощью - ▁mayoría - ஸ்ட் - ▁мужик - ▁conegut - 募集 - 玩具 - ▁prodigious - ▁instruis - 俺たち - 積極的に - ▁ingenuity - 출 - 欺负 - ▁voglio - 广东 - ▁healthcare - お菓子 - ▁সম্ - ▁hillside - 플 - ▁медицин - ▁сельское - ▁reiterate - 赞助 - ▁gezond - 可能性もある - ▁очеред - தான - 追いかけ - ▁benötigen - ▁überprüfen - ▁fisherman - 五分钟 - ▁Dutzend - 余裕 - ▁forfeit - ▁acontece - それこそ - ālā - ▁Virus - ▁মানে - になりたい - 每当 - ▁protocol - ▁Rebecca - ▁Widerstand - 僵 - ▁begrenzt - ▁vollkommen - بناء - ▁cavall - 生涯 - kirche - トライ - 統一教会 - 幻灯片 - ▁ແຫຼະ - ▁renounce - ▁achtzig - ▁музе - 小さく - psycholog - 臓 - ື່ - わけではない - 群馬県 - ▁Mistress - ▁llavors - ▁involving - ▁mercado - 泳ぎ - ▁возраст - ▁Pretty - ▁wilaya - ▁intact - theorie - ▁capitalize - 你们两个 - ▁сабе - ന്ന - 弓 - accompagn - 萧 - ▁Kapitel - ▁hoofs - ▁Buddha - 防災 - ▁Yagize - atrice - ▁مكان - ▁içeri - 障 - ▁Alzheimer - ▁gratified - ▁француз - แต่ - ▁Prophet - 不幸的是 - 受験 - ▁levanta - ▁corporal - शा - ▁stieß - kompren - ▁arising - ▁площадь - 仰 - さず - ັດ - ▁default - ▁남자 - ▁دانشگاه - ▁дороги - ▁Sherman - ▁sweetheart - 碎片 - ▁repeti - պ - ▁merchandise - ▁кандидат - アンダー - ▁Abschnitt - كەن - ▁humming - möglich - ▁constituent - ушки - 哨 - ള - ▁birçok - ▁Усё - 棉 - ▁Träume - ▁deferred - ▁කළ - 二零零 - ▁anfing - bychom - ▁hareket - weichen - 託 - 扉 - 黛 - پذیر - ▁rubbish - ▁steadfast - 帐 - ▁giúp - ▁worüber - ▁আমার - 砍 - ▁alcuni - なるべく - ▁thereafter - ▁insolent - 升级 - ▁그러면은 - 編集 - ▁Fritz - ▁implies - 尾巴 - ▁lobby - 真诚 - த்திர - 出会った - ▁jõ - ىرى - 拭 - economia - ▁analytics - 運ばれ - ▁Alguns - ▁fiecare - ▁Montreal - ▁Bristol - ▁Geräte - ▁Leiter - 机关 - フルーツ - 幾 - ▁Whereupon - ▁malicious - ▁bizarre - ▁открыл - 权威 - ▁생각을 - 强迫 - ▁russische - ▁nchini - ▁spike - 转移到 - Yeah - hỉ - ▁данном - 木曜日 - 步骤 - ▁восемьсот - ▁trouvé - ▁gäbe - ▁старш - ▁площад - ▁aquellos - ктив - responsibilities - ▁Cleveland - ▁zukünftige - 間違え - ▁Mauer - gebunden - 这段时间 - ▁posibil - 手続き - 随着时间的推移 - ळ - ▁jüngste - ▁kemudian - 嚟 - ▁worauf - ▁defiant - ラップ - μα - ▁trị - авыя - 異常 - ▁umwami - ▁Rhy - ▁auditor - ▁disrupt - علام - 季度 - ▁specifi - ிங் - ▁Fifth - ▁информацию - 耕 - ▁begeistert - Excuse - èxit - ▁Fahrzeug - ▁تواند - ▁모르 - سکو - ▁literal - ▁уголовн - ▁республик - 細かい - ああいう - ▁सु - 失踪 - ▁Hunderte - 終わる - ▁працы - 猿 - ▁Flügel - ▁Você - ▁phòng - ▁Així - blätter - ▁sacrament - 恳求 - ▁Fakten - ▁물어 - ▁sicherstellen - 慈善 - ▁simbol - ități - 軸 - ▁oyna - ▁fragrant - ▁Ahnung - ▁أفضل - ▁이상 - ▁ஒன்ற - hagaze - そのとおり - 弘 - ɗ - ▁bình - ▁september - ▁Köpfe - 決まった - ▁поднима - あるもの - ன்னு - ▁pueda - ▁schrie - রাজ - دیده - くなっている - ▁Ayrıca - ▁степени - ▁Athenian - ▁infidel - ދު - ▁plupart - ▁abyss - ▁categori - ▁milestone - کشید - ▁Abbot - క్ - 该怎么 - 僕たち - ▁inasmuch - ▁удалось - ▁Champion - ▁Funktionen - ▁história - ▁syllable - ▁immigration - 差距 - ▁persoon - ركز - 摄影 - ▁contamina - ▁Umfrage - ▁consecutive - ▁erwiderte - ▁geschieht - ▁герман - ▁ĉefa - 満足 - 协调 - ▁Corps - 缶 - 做些什么 - ▁rhetoric - 人工智能 - ▁clump - diagnose - konsum - ウクライナ側 - 邻 - 福島県 - ▁பழ - 維 - ▁Vernon - ढ - ▁Pēc - ▁ecstasy - ▁erscheint - フォーム - ▁aumenta - ▁Republik - 兔 - ▁모르겠어 - ▁futile - konserv - ِّ - ▁disappoint - いただけます - ▁concentr - 商業 - ヤクルト - ▁Direktor - ▁Mildred - ▁партии - ներ - 那天晚上 - ▁patriotism - people - twitter - 手臂 - ▁đồ - ஹா - счастлив - ▁Priscilla - ▁Wähler - ▁escuela - 鹏 - ދަ - escapa - 收获 - ▁метод - ▁அரச - 搬送 - ▁толк - 拉丁 - もらおう - 時計 - ▁هنر - ஈ - ▁Exclaimed - 灣 - 브 - များ - 监督 - ▁Bewertung - 崇拜 - остр - ▁беларускай - ্ট - ফি - ල්ල - ガソリン - 幽 - ङ - ▁сергей - ▁Joyce - ▁dezelfde - 丁寧に - ▁буквально - 削減 - 城堡 - ▁академи - ▁pacifi - ▁MIT - ▁водо - 回顾 - ონ - 乐观 - ▁bırak - 亭 - 玫瑰 - 捡 - 忠诚 - চার - ▁руках - 反复 - ▁журнал - ▁юҡ - ▁клас - タッチ - ▁исчез - ▁swollen - 绅士 - 40% - 很多时候 - ▁форма - ▁rustle - masını - ▁எழு - ピアノ - ▁displeasure - 汪 - ▁ஓர் - حساب - ▁اين - ▁니가 - elimina - ▁consolidation - ▁цели - 監 - ▁unrhyw - ▁программа - ಂದ - ▁үзэ - ▁theoretical - 交换 - wirtschaftlichen - 风景 - deutsch - ▁camina - ▁diplomatic - 窄 - gegenwärtig - ▁influential - 牛乳 - ▁لدينا - ▁работе - subiza - プライ - 欠かせない - ▁Muhammad - ▁Norfolk - 氛 - ▁ঠিক - 奖励 - ▁амьд - 酸素 - ▁үнэ - ▁İki - ▁superintendent - ▁خۆ - shimira - が止ま - ▁задерж - 畳 - ० - ▁Teufel - ▁hostility - 튼 - ▁seperti - ▁tangible - ▁sèrie - gehoben - 劇場 - ▁verteilt - ▁cambiar - ▁integrate - building - undneunzig - 楠 - ▁medieval - ▁dungeon - ▁stammered - ▁ingkang - entwicklung - ▁dreizehn - シャン - ▁snare - 模仿 - ▁bahwa - ▁glorie - になってしまう - 辖 - ਵ - ▁khoảng - ▁информации - ▁Kommissar - ▁plentiful - ▁Schalt - ▁esperant - ▁Calvin - স্ট - であったり - ▁Belgium - ▁disastrous - ▁rôle - 共和国 - 异常 - 作为一名 - ▁fuori - 勇気 - ▁Hubert - ▁말고 - bgesehen - 完璧 - ਜ - època - ▁merrily - ▁Erwartung - менять - 圧力 - ▁генә - 昂 - ▁gobierno - 병 - お笑い - ▁desolation - ▁هستیم - ▁приходил - ▁occurring - モノ - 艰 - cười - 辰 - ▁husk - ▁Theatre - примен - ▁turtle - ▁trocken - こいつ - 1,000 - 停戦 - 祖先 - ุด - original - ▁نقش - 艳 - 꾸 - ▁penitent - ▁shovel - 祖母 - ▁implicit - ▁Commonwealth - ▁поселение - ىغان - 把它放在 - ύ - ▁glob - ▁televis - 百五十 - ▁Eigentum - 毯 - ▁bardzo - 昆虫 - ▁gelaat - höchste - ▁heritage - ▁되고 - ▁twinkle - ināja - 驶 - ס - ▁communicating - ▁emptied - ▁państw - ▁effectually - 鄙 - ▁많아 - ▁общества - 昨晚 - ▁Finalment - ausschuss - ▁Brüder - ▁deliberation - 不可思议 - นั้น - ▁akzeptieren - ▁Puritan - ▁couleur - 師匠 - ▁kontrollieren - ▁cartoon - ▁واحدة - ▁земле - incendi - パート - ▁இருந்து - ▁fascin - ጠ - 태 - ▁cemetery - ▁proyecto - 碰巧 - verhältnis - ▁außen - ▁eclipse - ▁مدت - gewandt - ыште - 食堂 - ού - プーチン - ▁neugierig - ▁Godfrey - 丫 - 遵守 - ▁genießen - ▁marquis - ▁rustic - muziki - fleisch - ▁knapp - 沖縄県 - ▁legitim - 象征 - 扮 - 枕 - ▁parfois - ▁Overall - ▁comunitat - ▁contributing - طبق - ▁Lucien - ėj - ▁unangenehm - ▁längst - ▁inaugur - ショップ - ▁pivot - 边界 - フード - 取れる - ▁Ня - ▁Cemetery - 點 - equilibri - 反馈 - ގައި - ▁spacious - ▁черно - '1.5' - 判决 - ▁eraill - ▁недавно - 聆 - ▁appalling - farben - 永久 - ▁información - kapital - 品质 - ▁schweigen - いずれ - ▁менш - ▁түгел - ▁Consequently - ▁Engagement - 란 - ▁попыта - ▁cumpl - yongera - ▁아마 - 借口 - ▁riddle - слыша - moedig - ▁নেকি - katholisch - ▁парад - ▁chestnut - ▁vieille - ▁Round - umvise - 摄氏度 - ▁sophisticated - ▁Absolutely - ▁aufgeregt - versió - ▁fragile - 土著 - それとも - ▁membership - 溶け - activité - ុ - 赴 - ▁नेपाल - 轴 - 능 - ▁oportunidad - ▁dictator - hững - 創業 - に入りました - ޯ - ▁familio - 爸爸妈妈 - stęp - ▁selber - ▁Bedürfnisse - ▁عبر - ġ - ▁Además - ▁Already - ▁développement - ▁occidental - ▁виктор - ▁umfasst - ▁barbarous - ▁مانند - 坐下来 - ▁châ - ▁curate - ▁america - päeva - ▁resembling - 琪 - 虚拟 - んだそうです - ▁Author - ▁bedoel - ▁Panama - ▁producció - めちゃめちゃ - 줬 - ▁운동 - ▁Bailey - ▁Sturm - ▁заниматься - поўн - ▁fratern - 机械 - ▁생각해 - ▁maximize - ▁снег - 爱尔兰 - ▁insensible - ▁欢迎收听 - ▁fragrance - 90% - ▁testify - ▁parrot - ▁гости - ▁feststellen - ャ - च्या - ტი - 任せ - 주는 - 麗 - 参議院 - ▁Sprach - ▁laikā - ▁تعداد - ▁excava - 傅 - ▁injure - twenty - ▁акт - 邊 - ▁composure - ▁gedanken - 消耗 - ▁нрав - šanu - ▁Robb - ふるさと - ▁хувь - ▁quattro - ▁утром - 娘娘 - ▁обратно - shusho - どういった - ヴァ - 迈克尔 - 순 - ▁искусств - 贼 - ▁Sabbath - ▁audible - ▁drog - ریک - ▁роль - 壤 - ▁Sonntag - ▁recreation - ▁trabalha - ගන්න - 清洁 - ▁ensuring - ▁үед - 運命 - ▁Universidad - ことはない - ▁medication - ▁médico - ຜ - 卫星 - 蕾 - ỳ - ▁premature - ▁седьмой - ▁نمایش - ▁Trinity - 友谊 - ▁오빠 - 女孩儿 - みたいなもの - schwäch - 违反 - 掛 - 赠 - ▁bourgeois - ▁برخی - ▁Гэты - 嘴唇 - 清醒 - ▁worshipped - ▁musique - ewwi - 恼 - ぺ - ▁industrious - ▁сердце - ▁Product - ▁glaring - 構成 - ▁montra - 昇 - ▁நிலைய - 焼いて - 卸 - ▁announcing - 엔 - ▁tournament - ▁consolidated - ▁появля - 言わない - ▁servicio - 干预 - ацыю - だけじゃなくて - ıydı - ද්ධ - ▁магазин - ▁electoral - ▁iedereen - faransa - 包围 - 衡量 - 만원 - ▁вечером - ▁کنار - spēj - ▁gauche - 遵循 - 顧 - ▁பள்ளி - ▁alrededor - ▁кризис - 喔 - ▁probleem - indirimbo - viennent - ▁Evelyn - ▁abgeschlossen - ▁Dream - 資産 - ▁reptile - 拒 - support - ▁compartment - ▁арганіз - 喽 - ▁responsable - schließt - ▁acquiesce - ẹ̀ - ▁الأشياء - ▁señal - 解雇 - ▁empor - сыҡ - ாகவும் - طرح - ющая - 跟随 - ▁Strand - ファッション - ▁Zunächst - ▁Henderson - ▁unlucky - ▁হাঁ - 大阪府 - ▁Linux - ▁Firma - ▁aviat - 被告知 - ෙක් - ▁Birmingham - 伝わって - 尖叫 - ▁french - potřeb - wiąz - ▁luminous - 修改 - १ - ৎ - ผู้ - รู้ - ▁Gefangene - 太棒了 - ▁eyelids - 監視 - ▁гараж - 弁当 - ファースト - ▁призыв - グラス - 辆 - どういうふうに - 駐車場 - ▁Fifteen - ▁antiquity - ▁만들 - ышты - 巴黎 - ▁gần - نەوە - ▁büyü - オランダ - américain - 严厉 - ื่อ - ▁چقدر - ▁atunci - ▁இணை - ▁hybrid - 봤는데 - んねん - ▁voulez - ኛ - ▁Physik - ▁gratification - chirurg - ▁সময় - ിയ - 飲む - ▁patriotic - 武汉 - 一首歌 - 必要があります - ▁cristal - ▁бір - 解消 - ▁disruption - ▁Comisión - ▁никуда - アンケート - ހި - ▁england - ▁Gustav - സ് - ▁необходим - ▁Anyway - ▁stát - ▁laborious - ▁emigra - ▁chiếc - ектор - gemeinschaft - ▁Wagner - ▁weibliche - ▁clown - ▁Kindheit - ▁включа - ようなこと - Stream - kräftig - 메 - ▁Entfernung - ▁hoffnung - ライバル - ▁следует - ▁muscular - 尊严 - ▁нарко - ▁varios - 悉 - ▁Building - ▁Matilda - ▁aproxima - ▁понимать - 衷 - ▁wizard - ▁aufbauen - ▁armchair - ▁дже - の疑いで逮捕 - ▁McL - 代わり - ېتى - 几个小时 - 必死 - ވ - 参议院 - 脈 - ୁ - 學 - ▁Octa - 损害 - ▁faltered - ▁عدد - 編み - ▁désir - subira - 噴 - 渋 - ▁arkadaş - ▁cavalier - 早晨 - ▁Freddie - 職場 - ▁Channel - 蹴 - 瘤 - Entwickler - ▁Beschreibung - ▁handwriting - ▁quyết - ▁سأ - ▁Elliot - ູ້ - ▁besagt - 包装 - ▁алып - ▁ricev - ▁Anspruch - mujyi - ូ - 론 - 表彰 - 診療 - 挽 - ▁ອັນ - ▁kaldı - ▁quelqu - 雨が降る - 圏 - 膝 - ▁beetle - ▁embroidered - 평 - ▁Standort - lassung - ▁வேலை - お値段 - լ - 酱 - 剖 - 琼 - 글 - ਆ - 惯 - 喘 - ০ - 蛛 - И - ð - 巫 - 亜 - ޫ - ޔ - 커 - 辉 - ಳ - 驳 - 표 - է - ਮ - 诞 - 임 - ੰ - 灌 - ସ - 翠 - ৱ - 撑 - 姥 - 寨 - ኔ - ష - 喻 - 嘱 - 皱 - ሳ - ξ - 篷 - 蜡 - ែ - 燥 - 譲 - 幹 - 钉 - 慣 - 括 - ੱ - 愁 - 潰 - ጣ - 滝 - 茫 - 駐 - ପ - 刮 - 侄 - 毕 - ٽ - 绩 - ঙ - 彫 - 嗨 - 큰 - 군 - 棟 - ű - 淹 - 傲 - 駒 - 듣 - 旭 - 屁 - 沫 - 賄 - 督 - 亀 - 覇 - 塘 - 滥 - 享 - 얼 - 腔 - 棘 - 확 - ါ - 斥 - 茅 - Д - 塑 - 閣 - ҿ - 翁 - 疆 - ြ - 臂 - 們 - 椒 - 戚 - 몰 - 癒 - ዳ - 鋭 - 합 - 愈 - ើ - 활 - 먼 - អ - 蔽 - 伽 - 磅 - ണ - 貯 - 吕 - 卜 - 毅 - 隙 - 框 - ဆ - 姫 - 乏 - 虽 - 삼 - 崔 - 잠 - 圭 - 丛 - គ - 鸿 - 駆 - ਂ - 鸦 - 紋 - ડ - բ - 渔 - 阜 - 坠 - 虐 - ඔ - ኮ - 碧 - 浑 - 懲 - 賢 - 魅 - 堅 - ਪ - Б - ጋ - 娅 - 尉 - 佢 - 溢 - ሎ - 餌 - এ - 敞 - 践 - 谜 - 谐 - 暇 - 哟 - 乙 - 郵 - ମ - 豹 - 贩 - 팔 - 嬉 - 畜 - ° - 狄 - 육 - 伍 - 虑 - 帳 - 饶 - 婷 - ़ - ዋ - 蔑 - 呈 - 匆 - 鍛 - २ - 逝 - 鞭 - 접 - $ - ፍ - 嘿 - 须 - 赎 - ჰ - ২ - ჯ - ፈ - 肤 - ڑ - 堤 - 洒 - ỷ - 떨 - 옷 - 쁘 - ט - ယ - 煎 - 弊 - 손 - 矮 - 茹 - Î - 國 - À - 蒋 - ג - ሉ - 萩 - 紛 - 隅 - 焰 - 浙 - 망 - 塌 - 嗅 - 抄 - 翌 - 怎 - ೂ - 徽 - 럽 - 湧 - 쳐 - 齿 - ޖ - ዛ - 죽 - 纱 - ዚ - 최 - 寞 - 耽 - 冤 - 霜 - 廣 - 맛 - 癖 - 족 - 冈 - ધ - ೊ - 徹 - ሱ - 寡 - 붙 - 姚 - 淋 - ખ - 柿 - 討 - 貝 - Ғ - 龟 - 酪 - 茎 - 循 - 兜 - 闇 - 참 - 纤 - 捧 - 혼 - 搁 - 捉 - 舱 - '[' - П - 滤 - 債 - 별 - 脆 - 蔓 - ฐ - 亨 - – - 瞥 - 媛 - ഷ - 汀 - 澳 - 웃 - 谣 - 씨 - ହ - 掏 - 址 - 炼 - 쉬 - ҷ - ೀ - 憎 - 歧 - 竟 - җ - ڻ - 攀 - ശ - 雌 - 笛 - ؛ - એ - 颈 - 橡 - 抖 - 泼 - 餅 - 钩 - О - 厦 - 충 - 嘞 - 셨 - 搅 - 仑 - ಚ - 斎 - 斐 - 囊 - 테 - ቅ - 蛙 - ိ - 酔 - ɓ - 湘 - 瑜 - 滩 - 虾 - 肘 - 랬 - 滨 - ŷ - 백 - 俳 - 垒 - 哄 - 독 - 霉 - ـ - 변 - 庸 - Ё - 繋 - 瓷 - 训 - 笠 - 栖 - 緒 - 藩 - 십 - 즈 - ಣ - ሪ - ຢ - 逢 - 溺 - 窟 - ထ - 殻 - њ - 慶 - 勉 - ඟ - 岭 - 舗 - 锡 - 웠 - 嵐 - 堕 - 랐 - 怜 - 揮 - 夷 - ። - 릴 - 홍 - 奨 - ຶ - ծ - 挪 - 忽 - ူ - 袁 - ۵ - 휴 - 卿 - 험 - శ - 披 - ચ - ಶ - ൻ - թ - 衔 - ካ - 처 - 吨 - ੈ - յ - 俵 - 쪼 - 妄 - Ŝ - 坪 - շ - ਼ - 습 - 梗 - ̉ - Ṛ - 汰 - 芦 - 투 - 陕 - 碌 - 扛 - ۀ - ੁ - 烛 - എ - ቸ - 넣 - 竖 - 倫 - 패 - 络 - ם - 惧 - 霧 - 숙 - 셔 - 窒 - 懐 - В - 筛 - 粛 - 濱 - 溝 - 肾 - 廉 - 剛 - 颁 - 斌 - 浆 - 匿 - ഗ - 眺 - 뒤 - 언 - 魁 - 蓬 - 顽 - 奄 - 咐 - 译 - አ - 壇 - ቤ - 铭 - 藻 - 狐 - 霞 - 吟 - 肆 - 盾 - ళ - 朽 - 炫 - 답 - 짝 - 빼 - 貿 - 굴 - 囚 - ਗ - ൂ - 軌 - 椎 - 梳 - 硅 - 咸 - 阅 - 愤 - 畀 - 疚 - 판 - ീ - 떡 - 娇 - ィ - 림 - ċ - 宪 - ោ - 憩 - 尹 - 搏 - 哩 - 歪 - 簿 - 哺 - ۍ - 惕 - 슬 - 胞 - 薦 - 礁 - Ú - ァ - 愚 - 绵 - 泛 - 稚 - 銅 - 虔 - 筑 - 돌 - 악 - ಷ - 隻 - 특 - 稻 - 乞 - 绸 - 蟹 - 铅 - 該 - 發 - 蹲 - 贬 - 淳 - 陋 - 쯤 - 陀 - ਬ - 贞 - 對 - 杏 - 淑 - 栃 - ঞ - 縄 - 茄 - 険 - 橙 - ٿ - 烫 - 宰 - 靖 - 羅 - 俘 - 錦 - 咕 - 昧 - 창 - Ы - ٬ - 朋 - 浄 - 渗 - 辐 - 磊 - 冊 - 颖 - 베 - 甩 - 咁 - 奎 - 仿 - 텐 - ਅ - 掠 - 硕 - 庇 - 축 - 哑 - அ - 碍 - 鲸 - 屿 - 료 - ζ - 秃 - 읽 - ռ - 誌 - 灿 - 섯 - 옆 - 택 - 咳 - ¿ - 鉢 - 锤 - 琢 - இ - 粹 - 필 - 嘟 - 斉 - 憲 - 허 - 壶 - 巷 - 竭 - 죠 - 盔 - צ - 帘 - 偵 - 钓 - ధ - 황 - 侣 - 糙 - 赐 - 屎 - 烁 - 꺼 - 獄 - ӷ - 겼 - ʼ - 荘 - ઓ - ൾ - 沒 - 乜 - ಜ - 渣 - 골 - 몸 - 崛 - 蹦 - 經 - 怠 - 渐 - 닐 - ̃ - 끔 - 琐 - 塾 - 희 - ળ - 嫩 - 妊 - எ - 각 - 慕 - 謀 - ആ - 驴 - ۹ - 葵 - 咽 - 獣 - 颊 - 狡 - 痒 - 며 - ଲ - 盼 - 풀 - 랜 - 坤 - ޒ - ੂ - 穂 - 詹 - 驰 - 凰 - 療 - 绣 - 啫 - 膚 - 厘 - 熙 - 甄 - 법 - ჭ - 孩 - 辽 - 阱 - ៅ - 睹 - 曝 - 耸 - 늦 - 責 - 침 - 飽 - 틀 - 薯 - 沾 - 禅 - 昨 - 슷 - 缚 - ሩ - 嘎 - ภ - 碁 - ហ - 栓 - ן - ೋ - ധ - 俯 - 拶 - 켜 - 缴 - 傍 - 향 - 衍 - 환 - ૂ - 蝴 - 痘 - „ - 釜 - 琉 - 膏 - ҽ - 함 - 익 - 憋 - 석 - 薛 - 돼 - 醇 - ሞ - 袍 - 권 - 淀 - 鲨 - 嗰 - 択 - '#' - 때 - ਇ - 록 - 岂 - 迭 - զ - 佬 - 捏 - 捞 - 餃 - 쓸 - ભ - 힘 - 渊 - 刈 - ജ - 북 - 채 - 〈 - 泌 - 김 - 凳 - 腫 - 慨 - 욕 - 卒 - 맨 - 宵 - 訓 - 绒 - ୟ - 迅 - ዲ - 栈 - 광 - 恕 - 腻 - 嗡 - 션 - 犹 - 酋 - 꼬 - 嘻 - Ô - 眨 - 磯 - ਣ - ඹ - 叮 - 醋 - + - 叨 - 瞬 - У - 鐘 - 블 - 粧 - 곳 - 谭 - 糧 - ះ - 캐 - 捆 - 樱 - 簡 - 噩 - 蜘 - 贸 - 혀 - 栅 - 儒 - ۳ - 於 - ሺ - 甸 - 葱 - 품 - 喃 - ួ - 칠 - ဖ - 噌 - 赦 - 閥 - ណ - 粥 - Г - ᱟ - 袜 - ັ - 诅 - ջ - խ - ሀ - ○ - 톡 - ಇ - ञ - 嵩 - 氢 - 隧 - 唇 - 塊 - 韧 - 桩 - 箸 - 愣 - 蚀 - չ - 검 - 혹 - 哉 - 苔 - 喇 - 綿 - 穀 - 祐 - 矛 - 杖 - 厢 - 匈 - 깨 - 硫 - 넌 - 吩 - 唠 - 猾 - 輸 - 漢 - ९ - ৯ - 철 - 奮 - Х - 凸 - 吱 - 땐 - 菱 - 哋 - 镖 - 橘 - 挠 - ങ - 缅 - 暢 - 샀 - 紗 - 탈 - 鞠 - 曽 - 꼭 - 송 - 疹 - 너 - 勺 - Ĉ - 깐 - 못 - 蓉 - 麼 - 茸 - 纲 - 움 - 퍼 - 驼 - 倩 - ബ - 瘫 - 揉 - ஆ - 馅 - 밤 - ਟ - 绪 - 刹 - 齋 - 卦 - 棺 - ಧ - 當 - 莹 - ൽ - 缸 - 躬 - 砕 - ෘ - 鸽 - ዝ - ޝ - ॉ - ෞ - 屯 - 渴 - 鉱 - ฝ - 谍 - 哼 - 瞄 - 鎮 - 째 - 嗓 - 顕 - 嵌 - ષ - ഹ - 鈴 - 더 - 凹 - 忏 - 斧 - 璃 - 阐 - 惰 - 喧 - 茜 - 蝇 - ڤ - 趾 - ҕ - ඕ - 吼 - ז - ঢ - 掃 - 극 - 啪 - 絆 - ញ - 橱 - ሮ - 兑 - 녀 - 拙 - 豁 - 骸 - ఈ - ዬ - 拱 - 綾 - 頻 - 髓 - 높 - 잔 - 拟 - 撰 - 酿 - ዎ - 莓 - 련 - 류 - 枉 - 깔 - џ - 耿 - ぃ - 詠 - 벽 - 颇 - 멀 - 肚 - 應 - 窮 - 潤 - 떠 - 암 - 帜 - ഒ - 뽑 - 菇 - 승 - 详 - 嬢 - 槛 - ફ - 剔 - 폰 - 拌 - 拇 - 婉 - 寧 - 騎 - औ - 婿 - 渦 - 陳 - 큼 - 冗 - ዘ - 衝 - 녁 - 봉 - 亩 - 噜 - 坝 - 嘢 - 與 - Ĝ - ኩ - 쉽 - 屠 - 妾 - 蜀 - ଦ - ሥ - 歹 - 距 - 麟 - 谅 - 汐 - 懈 - 榄 - 鼎 - 粤 - 盯 - 輔 - 浇 - ށ - ణ - 攒 - 篠 - 께 - 딩 - 涅 - আ - 寛 - 饲 - 苛 - 彪 - 婶 - 膳 - 놨 - 衫 - 끄 - ቶ - 姬 - 臀 - ഇ - 瓣 - 悯 - 怡 - 揚 - 澜 - 항 - 착 - ۴ - 說 - 甭 - 籠 - 谨 - 껴 - 噬 - 综 - 择 - 萎 - 皂 - 韵 - 垄 - 屑 - 鳄 - 넷 - 墅 - 헤 - 橄 - 鵬 - ቱ - 毙 - ខ - 妞 - ଣ - ڏ - × - 빨 - ฉ - 俱 - 扁 - ቢ - 條 - 嶽 - 溯 - 酮 - ։ - 钦 - 枢 - 宛 - 婪 - 衬 - 銘 - ຟ - ഞ - 冨 - 删 - 禄 - 旱 - ฮ - 粋 - ഭ - ଏ - 潔 - 埔 - 蒲 - 樣 - ओ - 곤 - 앉 - 戳 - 蕉 - 垮 - Ř - ಆ - 뜨 - 唆 - ៉ - 摂 - ಎ - 싼 - 蹄 - ڊ - 啓 - ဲ - 诵 - 萍 - 磕 - 趴 - 斬 - 팀 - 瑶 - ƙ - ፣ - 皓 - 蹭 - ଥ - 笘 - 寮 - ৩ - 荆 - ሌ - 绊 - 娟 - 蝠 - భ - 懦 - 饺 - 剃 - 虏 - 捣 - 绍 - • - 논 - 舆 - 辟 - 氓 - 辻 - 컴 - ଟ - Τ - 匂 - 贡 - 疎 - 贱 - 濁 - 釣 - ಥ - 蜗 - ሲ - 늘 - 烘 -  - 嗽 - ۽ - 蠢 - ୋ - 鷲 - 莽 - љ - ഉ - 兩 - ৫ - 짐 - 蓋 - 蘭 - 淫 - Ņ - 궁 - ۸ - 썼 - 芙 - 錯 - ಭ - 绞 - 촌 - 걱 - 蓮 - 媚 - 斩 - 衅 - 谓 - 涡 - ӣ - ဘ - ភ - 呪 - 맥 - 클 - 剿 - 嚼 - ଇ - 聪 - ぉ - 栽 - ថ - 畔 - 냈 - 墟 - ৪ - 쁜 - 怯 - 粪 - 糕 - 縛 - 蘇 - Κ - 矩 - 诡 - 찌 - 熄 - ሻ - 蘑 - ඊ - ។ - 匀 - Ш - ‐ - 蕨 - 낫 - 킨 - 瞪 - 鴨 - 廓 - 擁 - 醤 - 贿 - ਉ - 禀 - 嘲 - 寓 - 庶 - 撼 - 锯 - 蝙 - 峻 - 쿠 - 涩 - 盏 - ធ - 捍 - 骆 - 빌 - 牡 - ଗ - 實 - 쌤 - 聋 - 梭 - 찾 - 肇 - 瀑 - 딸 - 疤 - Ș - 轿 - 撇 - 蒜 - ሬ - 딴 - 窥 - 篤 - 饥 - 魄 - 唾 - 诀 - 醸 - 曦 - 潇 - 鹦 - 騰 - 範 - 櫻 - 呻 - 宙 - 腥 - 튜 - 舶 - ਚ - 婴 - 락 - 밀 - ផ - 霖 - 铸 - ਖ - 屡 - 辑 - ޕ - 叱 - 弔 - ћ - 遵 - 關 - 씬 - 奕 - 沦 - 暫 - 煽 - 陡 - 馨 - 鹉 - 噂 - 苑 - ቻ - 恤 - 逻 - ३ - ☎ - 藍 - 掷 - Π - 沛 - 런 - ۶ - 檬 - 센 - 棕 - 厉 - 澡 - ഴ - 寇 - 幡 - 彬 - 徘 - 빵 - 徊 - ഡ - 획 - ໊ - 唬 - 妓 - 肋 - 쩌 - 裹 - 曇 - ۷ - ઈ - 冕 - 纺 - 鞍 - ઉ - 揍 - 暂 - 肃 - 嶺 - 缆 - 淮 - 팅 - 郝 - 嘶 - 缪 - 驯 - 啱 - 억 - 咧 - 범 - 砰 - 赁 - 痩 - 胺 - 聡 - 賭 - 炖 - 瞅 - 飓 - ਈ - 铲 - ५ - ๊ - 攞 - 窖 - 咀 - 酶 - 啸 - 翰 - 榴 - 첫 - 瘟 - 嚷 - 鷹 - 嫉 - ẵ - 忆 - 芹 - ዐ - 핑 - 剰 - 걍 - 巩 - ኧ - Ф - 쩔 - 凿 - 嚣 - 讳 - 丞 - 勿 - 枫 - 奢 - 潭 - 靓 - 氨 - 奠 - 찬 - 搂 - 雞 - 邀 - 짓 - 験 - ४ - 沐 - ଯ - 錠 - 钞 - 왕 - 훨 - ଆ - 搵 - 챙 - 诫 - Ṭ - 揃 - 賠 - ኑ - Ở - 跤 - 峭 - ঃ - 텔 - 钙 - 뉴 - 窍 - Э - 熔 - 립 - Ҳ - 幫 - 抒 - 猶 - 拽 - 응 - 亞 - 掀 - 듯 - 탄 - 侃 - 颂 - 飙 - 歉 - 嗎 - 缉 - 痪 - 娩 - ੍ - ௌ - ጊ - 绽 - 拷 - ၏ - 舔 - 孵 - 惦 - ଅ - 桨 - ৮ - ሊ - 殊 - 趋 - 裡 - 亵 - 菩 - Í - 뛰 - 彰 - 咗 - 祁 - 彤 - 墜 - Ο - 區 - 퇴 - 喚 - ઇ - 聽 - 긍 - 탕 - 渕 - 氣 - 鉛 - 鹤 - 갑 - 婢 - 廖 - 嚎 - 沥 - ձ - ଜ - 拧 - 謡 - 棱 - 骼 - Ý - ৬ - 咎 - 禽 - 媽 - Ǧ - 胰 - 蚕 - Ā - ӯ - ൊ - ፋ - 熏 - 禧 - ๆ - ဟ - 층 - 涯 - 捂 - 凄 - 屉 - 鎌 - 笹 - 搾 - 舵 - 谤 - 叩 - ္ - 淆 - 喀 - 泵 - ८ - ዜ - 焚 - 癫 - 굉 - 丐 - 泻 - 飢 - 阀 - 瞳 - 褪 - 愉 - 蔭 - 扒 - 춰 - 퓨 - 扼 - 렌 - ଶ - ቺ - 糗 - ୀ - 樊 - 箇 - 诽 - ६ - 俭 - 밑 - Ó - 견 - 駄 - 谊 - 瘍 - 渲 - Щ - փ - 叭 - 娥 - 拢 - 柠 - 沧 - 訟 - 钝 - 蔼 - 邱 - ኦ - 颤 - 咖 - 呜 - Α - Μ - ਡ - ቆ - ဝ - ጀ - 俣 - 胚 - 訂 - 體 - ဂ - 낙 - 赋 - 佣 - ಒ - 犀 - 咔 - 锈 - င - 茉 - 빡 - 睦 - ኖ - 漬 - 鋼 - 喪 - Õ - 覺 - İ - 猟 - છ - 媳 - 존 - 妍 - 헬 - 贅 - 筷 - 掐 - З - 賊 - ڙ - 溃 - ጅ - ᱩ - 盎 - 褐 - Đ - ٻ - 揾 - 쇼 - 孽 - 篱 - 筝 - 躯 - 녔 - 濒 - 哒 - உ - 鹊 - 焕 - 賂 - ଡ - 巅 - 척 - 襟 - आ - ৭ - ဒ - 꼈 - 將 - 邵 - 咏 - 兮 - 欄 - 恥 - ψ - 碱 - 溅 - 릭 - 놔 - 呕 - 晏 - 秩 - 腎 - 隘 - 烬 - 葫 - 柵 - 梵 - 扳 - 惩 - Ε - 낮 - థ - 錢 - 昊 - 謙 - ఓ - ኪ - 맘 - 叽 - 갖 - 譜 - 轄 - 岬 - 徙 - 蚁 - 谂 - 링 - Ī - 雯 - 沪 - 량 - 揣 - 恳 - ឹ - 绰 - 劈 - 렵 - 琦 - 讓 - 萬 - 禾 - 깝 - 啃 - 侑 - 蛾 - ᱱ - ڳ - ޙ - 窑 - 暦 - ഥ - 닭 - ଷ - 椰 - 값 - 곱 - 藓 - 棠 - 迦 - 쓴 - ଧ - ೈ - 髄 - 弧 - 徑 - 睫 - 냄 - 츠 - 睿 - 汹 - 渺 - 揽 - 膀 - 胴 - 樋 - 氯 - 檐 - 銚 - 驸 - 渎 - 葩 - 械 - ७ - 턴 - 볶 - 凛 - 諭 - 액 - 킹 - 탁 - 髦 - 壕 - 駿 - 젠 - ጉ - 鳳 - ൈ - 苟 - 铛 - Ա - 滕 - 芥 - ٔ - ޢ - 掲 - ޗ - 參 - 염 - 짧 - 逍 - 拎 - 捅 - 斋 - ഫ - ቦ - 锚 - 懵 - 擅 - 柚 - 겁 - ጨ - 轶 - 貞 - 땜 - 嗦 - 從 - 悖 - ኬ - 疟 - 鳞 - ժ - 瑠 - 덜 - 阪 - 碟 - ዙ - ጂ - 鈍 - 槻 - 梶 - 硝 - 囱 - 琥 - ় - 麵 - ǎ - 齢 - 旷 - 웬 - 豫 - 喵 - ଭ - 乒 - 團 - 橇 - 戮 - 규 - 픈 - 僻 - 섭 - 콘 - 绷 - 裾 - 痫 - 꿈 - 矫 - 哗 - 농 - ฤ - 號 - 饵 - Σ - ቼ - 衙 - 餓 - 彗 - 峠 - 鑫 - 莺 - 쳤 - 澈 - 恍 - « - 稜 - 琶 - 氟 - Ӱ - 곡 - 흥 - 虜 - 竿 - 褶 - Ọ - 효 - 崽 - Ấ - 琅 - 內 - 跋 - 램 - ኋ - ٠ - 灼 - 쎄 - ៃ - 隼 - Å - 掺 - 嚇 - 寅 - Ś - 낼 - 抠 - 聂 - 膛 - 馈 - 勲 - 壌 - 濡 - 绎 - ਧ - 荧 - 널 - 弛 - 逾 - 怂 - 罕 - ॅ - 刁 - 漱 - 煲 - 乓 - 樂 - 꿀 - ฎ - 秉 - अ - 隋 - 簧 - 馒 - 铝 - 磷 - 멋 - Я - ਫ - ቁ - 덕 - ଙ - 齊 - 麓 - 쌓 - 幣 - 挚 - 帶 - 橫 - ಉ - ၊ - ሜ - 혜 - 姗 - 磐 - 悚 - 甥 - 켓 - 逞 - 辙 - 丙 - 酌 - 赂 - 완 - 傑 - ך - 莞 - 꿔 - 낸 - 믿 - 揪 - ᱠ - 憾 - 乍 - ଳ - 牲 - 臼 - 眩 - 쌍 - 瑛 - 萱 - 坟 - 柬 - 缀 - 悴 - 闸 - 氮 - 雁 - ऊ - 憔 - 觅 - 롤 - 엽 - 窦 - 跚 - 娴 - 變 - 阎 - ๋ - 黃 - 灸 - 뜻 - 찰 - 搐 - 涕 - 帧 - ճ - 훈 - 渓 - 龚 - 惚 - 믄 - ጎ - 桦 - ޤ - 榈 - 琵 - 祀 - 睐 - Ż - ඛ - ፊ - 噢 - 蠕 - ̇ - 遏 - 吠 - ӳ - 꽤 - 컨 - 蹒 - 灶 - 渠 - 缕 - 樽 - ጭ - 倭 - 翅 - 닥 - 眯 - 蔚 - 觀 - 殷 - 赃 - 粟 - 잤 - 黏 - 脾 - 諮 - 頓 - 엄 - 烯 - 깜 - 歼 - 閲 - 锥 - 篡 - 롱 - 즐 - 鯛 - 狸 - ሙ - 隶 - 頬 - 踝 - ಈ - ៊ - 煌 - 祠 - 讐 - 총 - ଚ - Ḍ - 荻 - 單 - ឡ - 镯 - 貢 - 呉 - 岔 - ჟ - 倘 - 咄 - 俞 - 皈 - 腱 - 瞭 - 洽 - 骏 - 렇 - 钧 - ໋ - 튀 - 绅 - 岌 - ቡ - 數 - 枣 - 炽 - 臆 - þ - 鳍 - 焉 - 穏 - 嘆 - 圍 - 瑕 - 졸 - Ò - 끌 - 獅 - 겹 - ଛ - 眷 - 噛 - 睾 - 득 - 塀 - 侶 - 鹃 - 鳩 - ᱮ - 薫 - 韶 - 녹 - 빙 - 뿌 - 屌 - 憂 - 炳 - 猬 - 땡 - 暧 - ฏ - 蛍 - 隷 - 茁 - ሷ - 霾 - 钏 - 綻 - ៀ - 牽 - 絹 - 娄 - 玮 - 稔 - 垢 - 惭 - 榆 - 胱 - 鮭 - 鸥 - 嗣 - 镀 - 漩 - 峙 - 냉 - 짤 - 蝉 - 賣 - 墳 - 纬 - 팬 - 陨 - ڌ - 萤 - 막 - 窯 - 椭 - 隈 - 뻐 - 巍 - 倔 - 冶 - 榨 - ఖ - 嘈 - 驭 - ՝ - 惶 - 롯 - ᱤ - 瞰 - 镶 - 뻔 - 셋 - 甫 - 冀 - 掰 - 唧 - 臺 - ឺ - 鯉 - 晾 - 藉 - 儲 - 畠 - 挿 - 璐 - 털 - ଁ - 靡 - 螂 - ଉ - 跷 - 슈 - 탔 - 념 - 靳 - 舘 - 蜒 - ޚ - 匣 - ः - 懊 - 蜿 - 缔 - 쭉 - ઘ - 崭 - 螃 - 氪 - 曖 - 穹 - 闫 - 畴 - 颅 - 鳌 - 蛤 - 詳 - ਭ - ઝ - 厩 - ಖ - 呦 - 빈 - 풍 - 處 - 霆 - 람 - 妬 - օ - 瑙 - 聲 - 뀌 - 疵 - 倶 - 蕊 - 梢 - 엑 - 檀 - 蹑 - 厥 - 袱 - ័ - ඥ - ☆ - 權 - 鄂 - 칼 - 푸 - ဉ - 俾 - 祟 - 窜 - 轉 - 侥 - 瘩 - 赣 - 辫 - ஒ - 紳 - 楓 - 옮 - 阮 - 峥 - 逮 - 砾 - ዕ - Ē - Η - ሸ - 诏 - 霹 - 俐 - 掂 - 滔 - 沌 - 殉 - ٫ - 缎 - 拂 - ђ - ኳ - 疮 - 铐 - 惟 - 纫 - 冉 - 嗜 - ጫ - 匮 - 邢 - 紊 - 裳 - ᱭ - 烷 - Ì - ਥ - ቴ - Ṣ - 佟 - 荫 - 撩 - 靶 - 囤 - 솔 - 잉 - 怼 - 嗑 - 蕴 - ຝ - 辍 - 搓 - 辦 - 啰 - 疙 - 咚 - 냥 - 渉 - 焊 - 苇 - ዱ - 줌 - 淌 - 闽 - 霄 - 蜷 - 酗 - 輩 - 讽 - 腊 - 兒 - 渇 - 쟁 - ቂ - 倪 - 拴 - 敛 - 閑 - 薩 - 朔 - 롭 - 멘 - 翘 - 晤 - 薗 - 湛 - 瑾 - 遛 - ዊ - 坨 - 碾 - 젊 - 坍 - 荃 - 锣 - 蔷 - 擬 - ሠ - 瞑 - 싱 - 흔 - ~ - 憤 - 猥 - 悍 - ዶ - 뿐 - 웨 - 猝 - 槟 - 漓 - 購 - 遡 - 炙 - ٘ - 悄 - 죄 - ෛ - 箔 - 諏 - 羹 - 痊 - 율 - ኘ - 茬 - 賓 - 哮 - 岚 - 樫 - 痹 - 羔 - 眶 - 趕 - ඨ - 庐 - 菠 - 맡 - 爺 - 湊 - ෆ - 淵 - ੜ - 雹 - 麒 - 侨 - 恙 - ቲ - 폭 - È - 蜱 - ඝ - ᱜ - 呱 - 仨 - 蜥 - 臻 - 摯 - 稠 - 雍 - 瞻 - 撲 - 釧 - 즌 - Δ - 畸 - 킬 - ၎ - 瘸 - 雳 - 嚏 - অ - 摧 - 俸 - 夠 - ኸ - 嗒 - ኒ - ጃ - 羚 - 汝 - 鬱 - 抨 - 晦 - 钥 - 秤 - 넓 - 簸 - 닝 - 앤 - 홀 - 骡 - \ - 哔 - 弈 - 續 - 吝 - 遼 - 顎 - 씀 - 兢 - 愕 - 럴 - 玷 - 슨 - 啄 - 邉 - 煞 - 寫 - 娱 - 堺 - 豌 - 撬 - 惮 - 槐 - 蝎 - ஃ - 暁 - 桅 - 梧 - 苯 - 흐 - 嫣 - 踹 - 蹊 - 庚 - ౌ - 笋 - 惺 - 淼 - 缰 - 肴 - 춤 - 殆 - 秽 - 俏 - 똥 - 얻 - 酥 - 繕 - 醍 - 樓 - 烙 - 檜 - 碇 - 痢 - ᱢ - 帥 - 馀 - Γ - 摞 - 젤 - 홉 - ̍ - 允 - 鯨 - 凱 - 焙 - 籽 - Ł - 伶 - 昵 - 컬 - 產 - 啼 - 峯 - 醐 - 籔 - 醫 - 攘 - 걘 - 앙 - 桓 - 诋 - 貂 - 寥 - 腌 - 墩 - 牟 - 茵 - 눠 - 똑 - 慑 - 렀 - 딜 - 떼 - 畫 - 璇 - ฆ - 犁 - 缭 - ಫ - 릿 - 笃 - 럼 - 鞘 - 铀 - 럭 - ќ - 诠 - 倚 - 煩 - 懇 - 딘 - ଖ - 聯 - 炬 - 喳 - 诬 - 迂 - 酎 - 餸 - 뮤 - 징 - ١ - 벤 - 汲 - 胁 - 唄 - 懿 - ቹ - 緻 - 뚫 - 뷰 - 씻 - 訊 - 鐵 - ڼ - ඬ - 콩 - 坷 - 燒 - 셀 - 灘 - ヂ - 钠 - 匙 - 紡 - 숨 - 珂 - 翟 - 廿 - 玻 - 祷 - 窪 - 邑 - ኤ - 웹 - ૃ - 茧 - ഏ - Ê - ݙ - ၀ - 錮 - 紺 - 營 - 簇 - 谚 - 殡 - 雏 - ᱚ - 铮 - 벅 - 콜 - 滿 - 晗 - 幌 - 폐 - 晟 - 朦 - 擒 - 汶 - 縣 - 榊 - 쟤 - 叼 - 哆 - 怦 - 吭 - 捨 - 噪 - Ķ - ጆ - 령 - 몬 - 컵 - Մ - 匕 - 蓓 - 槙 - Հ - ዓ - 暉 - 덟 - 델 - 홈 - 呛 - 绮 - 锄 - 杭 - 庵 - 羨 - ऐ - 吏 - 꽃 - 덴 - 템 - 澎 - 酝 - 裴 - 諾 - 镰 - 绥 - ቷ - 芮 - 샤 - 왠 - 叡 - 泞 - 랄 - ଂ - 腳 - 깊 - 밍 - 켰 - ៍ - 嗱 - 蝦 - ഈ - 낀 - 댄 - 댓 - ឆ - 曳 - 凪 - 翡 - 騙 - ፕ - ऱ - 瀧 - 壺 - ੌ - 압 - 溉 - 犰 - 眸 - 蹬 - 狳 - 赡 - 穗 - 옥 - 蒼 - 蜴 - 颓 - 箕 - 黯 - 寐 - 춥 - 啬 - 遷 - 韬 - 鵜 - 剁 - 樟 - 谟 - ぅ - 낭 - 땅 - 봄 - 짱 - 忡 - 荔 - 姻 - 骇 - Ҫ - ଼ - 둥 - 찜 - 柊 - 颯 - 訣 - 渭 - 瘠 - 羁 - 磋 - ሴ - 痰 - 룸 - 쁠 - 襄 - Ĵ - 辗 - 객 - 혈 - 赘 - 撂 - 腓 - 倣 - 眾 - 讪 - 喱 - 瓢 - 벨 - 傳 - 珑 - 询 - 誕 - 潦 - 쌀 - ݨ - 춘 - 槌 - 婊 - 帚 - 厮 - 嗖 - 鲑 - 佰 - ٺ - ঈ - 猖 - 翩 - 삶 - 캠 - 骤 - 蟑 - ぢ - 邹 - 柑 - 烨 - ዩ - 嘀 - 蚤 - 벗 - 잃 - 抉 - 絮 - 腑 - 專 - 陌 - 勅 - 洼 - ሂ - 亢 - 玟 - 삭 - ޞ - ഖ - 嬛 - 绛 - ஏ - 喙 - 喩 - 曙 - 總 - 눌 - 胧 - 蹈 - 刨 - ᱫ - 侏 - 價 - 謗 - ヅ - 芜 - ඡ - 엘 - ጡ - 珈 - 拣 - 踱 - 鼹 - 誹 - 錬 - 鲤 - 舷 - 률 - 릉 - 梓 - 渍 - 谬 - ऑ - 悸 - 萦 - ᱾ - 槿 - 藝 - 빛 - ቋ - 嗮 - 鍾 - 岖 - 폴 - 钳 - 噺 - 釉 - 炜 - 貓 - 骰 - 윤 - 힌 - 滇 - 鼬 - 淤 - 荐 - 艶 - ൃ - 歡 - 鹭 - ڀ - ඵ - ဗ - ጓ - 诛 - 좌 - 픽 - 忑 - 鄉 - 嬷 - 擔 - 禹 - 蛊 - 詣 - 妳 - 蛎 - 浊 - 邋 - ዴ - 싹 - 헌 - 袒 - 汕 - 忐 - 蜚 - 谛 - 菓 - ၍ - ਯ - ኢ - 炕 - 脐 - 놈 - 섬 - 槍 - ။ - 昴 - 껄 - 랩 - 헐 - 氦 - 疡 - 豬 - 弩 - 椿 - 熨 - 篝 - 楔 - 狈 - 鳗 - 걷 - 涟 - 瞩 - 꽂 - 摁 - ף - 닫 - 呑 - 拗 - 釘 - Ģ - 皐 - ŋ - ٢ - ─ - 攸 - 烏 - 遢 - 딨 - 엇 - 펜 - 尧 - 頸 - 榛 - 榎 - 俑 - 蜓 - 浚 - 漪 - 蜻 - 팩 - Ν - Ն - 毗 - 诧 - 詫 - 掴 - 麹 - 皙 - 犊 - 娯 - ᱧ - 샵 - 譬 - 瞌 - 輕 - 엉 - 砌 - 蟆 - 訃 - 綺 - 邝 - Қ - 멍 - 밴 - ృ - ሐ - ዮ - ୂ - 栩 - 湃 - 矣 - 蹂 - 傀 - 儡 - 肽 - 渥 - 삐 - ڄ - ឋ - 밝 - 漉 - 馁 - 撵 - 憬 - 哧 - 韻 - 璋 - 耙 - Έ - ഓ - 亥 - 蟒 - 엠 - 탑 - 庾 - 钾 - ਘ - ៏ - ఐ - 呸 - 筏 - 틱 - 협 - ᱛ - 戎 - 渝 - 吮 - 햄 - 烹 - 僅 - 巳 - 匡 - 矗 - 顷 - 핀 - 嘣 - 瀚 - ୱ - 窘 - 磡 - 謹 - 卉 - 孜 - 獭 - 蔗 - 跺 - Ļ - 沽 - 례 - 큐 - 툰 - 捎 - 谕 - 枷 - 沁 - 蘸 - 啤 - 杵 - 荚 - ዞ - 燈 - 钵 - 萃 - 羲 - 潟 - 啡 - 漕 - Ս - ਏ - 證 - 꼴 - 娲 - 鑼 - 淇 - 灏 - 饷 - 惣 - 鮎 - 躏 - 곧 - 균 - 낳 - 슴 - 袄 - 鞅 - 岑 - 锵 - 拯 - 砺 - 敖 - ѳ - 缽 - 뒷 - 矜 - 虞 - 뜬 - 姊 - Š - 鎧 - 喆 - 섞 - 힐 - ೃ - 筐 - 錄 - 蝗 - 刽 - 隕 - ឈ - 苣 - 〜 - 몽 - 毋 - 錫 - 憧 - 厕 - 諜 - Β - ଫ - ሶ - 떴 - 첨 - 칭 - 틴 - 흰 - 舉 - 盪 - ၁ - 椛 - 楷 - 퀴 - 壽 - 땠 - 랙 - ઠ - ፎ - 脯 - 뒀 - 肛 - 麸 - 嗷 - 哽 - 礎 - 緯 - 钮 - 卤 - 纂 - 꿨 - 쏘 - ሃ - 锂 - 彝 - 舊 - 驿 - 嚓 - 鬟 - 酰 - 孢 - 锌 - 嗌 - 鞦 - 韆 - 楢 - 렴 - 흘 - 轼 - 馋 - 卯 - ຣ - 疱 - 斷 - 曰 - 讷 - 惋 - 묻 - 쿄 - 噼 - 笙 - 跛 - 驹 - 琊 - ฬ - 憶 - 罠 - 찐 - Җ - ૌ - 낄 - 蟋 - 踮 - 锻 - 涓 - 筱 - 꼐 - 핫 - 蟀 - 骄 - 涤 - 棲 - ヵ - 捻 - 凜 - 铠 - 납 - 덩 - 턱 - 涸 - 堰 - 偎 - 喰 - 阂 - ൺ - ‟ - 괴 - 蹋 - 驗 - 彌 - 勸 - 禍 - 镁 - 蒿 - 腦 - ૈ - 腮 - 痺 - 또 - 箋 - 薰 - 蝌 - 踵 - 醺 - 펴 - '~' - 杞 - 芷 - 끓 - 蚪 - 俪 - 瘀 - 罵 - 阈 - 戯 - 戲 - 躇 - 碘 - 幂 - 썰 - ඈ - 喎 - 妒 - 恬 - 榮 - 둔 - 亂 - 铬 - 褴 - 褛 - 嫡 - ڇ - 뚜 - 촬 - 컸 - ဇ - 偕 - 嫖 - 憑 - 峨 - 賜 - 嗝 - 鱿 - '|' - ኞ - ፖ - 掖 - 盡 - 뚱 - 棣 - 盹 - 禮 - 嘩 - 纶 - ‧ - ヱ - 혔 - ฑ - 鼾 - 樵 - 虻 - 蜕 - 绚 - 쿨 - ଠ - 紐 - 龛 - 측 - Є - 韭 - 蟾 - 壱 - 砥 - 鋳 - ඤ - Ồ - 邯 - ֆ - 蕎 - 騨 - 撅 - 虱 - ঔ - 낚 - 탐 - 휘 - 擂 - 팡 - 芊 - 浣 - 醜 - 戟 - Ă - 潼 - ဏ - 깎 - 멜 - 앨 - 泸 - 磺 - 裝 - 讀 - අ - 擢 - 翱 - 幢 - 칸 - 혁 - 皖 - 钰 - 昕 - 殃 - 蕃 - 잘 - 雉 - 廳 - 毀 - 鳟 - Ե - 薙 - 汞 - 렁 - 얀 - 왼 - 쩍 - 찔 - 隨 - 秧 - 睛 - 劉 - 貪 - 酯 - 馏 - 狒 - 噱 - ‒ - 胤 - 迢 - 꼼 - 칙 - 팠 - ฒ - 闆 - 匾 - 罹 - ឌ - 竺 - 琛 - 덤 - 胭 - 涮 - 舜 - 잖 - 桟 - 冴 - 鉾 - 嚴 - 꾼 - 댔 - 핸 - 黔 - 渤 - ≪ - 湍 - 繼 - 뺐 - 뼈 - 앴 - 컷 - '@' - € - 據 - 碉 - 雙 - 爾 - 桝 - 玺 - 剽 - 垦 - 祺 - Ј - 忒 - 閃 - 갠 - 깃 - 닮 - 릇 - 잇 - ೌ - 囉 - 腋 - 꽁 - 坞 - 楞 - 絢 - 帷 - Կ - ଘ - Ả - 挛 - 빤 - Ẹ - 뷔 - 劃 - 鸯 - 鸳 - 凋 - 攥 - 铎 - 雖 - 鹫 - 睑 - 俚 - 嫦 - 廻 - ጤ - 圖 - 恺 - 绯 - 꽉 - 샘 - 톤 - 蜍 - 蠣 - 咫 - 歷 - 갤 - ǹ - 爛 - 佃 - 屹 - 鳅 - 蔬 - 戛 - 蹩 - 戊 - 蚱 - Ӹ - ፒ - 黝 - 잊 - 짬 - ڃ - ਝ - 漸 - 끗 - 圓 - 喋 - 髋 - 蚌 - 裘 - ጽ - ፉ - 겪 - 뀐 - 얇 - 寶 - 痉 - 绢 - 颐 - 觑 - 咤 - 栾 - 較 - 紬 - ៈ - ٰ - 겸 - Ԑ - 毡 - 酚 - 醬 - 霓 - 蒔 - 飴 - 雛 - ਛ - 彷 - Բ - 갚 - 흑 - ၂ - ፅ - 닦 - 랫 - 묘 - 碼 - 卻 - 撸 - 蓟 - 蛰 - 蔻 - 骁 - ឱ - 蛀 - 伎 - 褂 - ৷ - 끈 - 뚝 - 馍 - 哝 - 桔 - 鸠 - 楊 - 钗 - Θ - ൗ - 벼 - 嗤 - 噔 - 玖 - 拮 - 槲 - 迄 - ଓ - 辿 - 덮 - ሑ - 瘪 - 窿 - 껌 - 띠 - 歲 - 爭 - 粑 - 粕 - ∞ - 闵 - 廠 - 馴 - Ό - 娼 - ፓ - 凧 - 莘 - 곰 - 땄 - Ҙ - 汾 - 鹑 - 藥 - 亘 - 憨 - 捗 - 丕 - 鬃 - 囧 - 菁 - ⁄ - 镳 - 빅 - 썹 - 짠 - ሕ - 【 - 】 - 塵 - 檢 - 矯 - 榕 - 捋 - 芈 - 밖 - 媲 - 渾 - 覗 - 栞 - 獗 - 鞑 - 렛 - 圃 - 뜯 - ሯ - 讹 - 摒 - 麥 - 擎 - 堇 - 轧 - ୃ - 孀 - 椋 - 睽 - 묶 - 뺄 - 흡 - 啩 - 闩 - 겉 - 릎 - 븐 - 뽀 - 쨌 - 埗 - 捶 - ဦ - 恢 - 讃 - 邨 - 阑 - 孚 - 宕 - 靼 - ѓ - ᱞ - 귤 - 땀 - 팝 - 曬 - 稱 - 諗 - 獾 - 熠 - 膿 - 苷 - 羟 - 糾 - 坯 - 諒 - 漾 - 氰 - 蚜 - 톱 - ፀ - 鹌 - 랍 - 噎 - 腆 - 糯 - 邃 - '>' - 炅 - 暄 - 膵 - 佼 - 蜢 - 吖 - 둬 - "\x93" - 씹 - 좁 - 勞 - 嫔 - 悶 - 垛 - 龈 - 畝 - 鳕 - 祇 - 苫 - 藐 - 縞 - 덥 - 띄 - Ĥ - 睬 - 랭 - 묵 - ሏ - 浒 - 鳃 - 夭 - 腩 - 蛭 - 锢 - 麝 - ^ - ٩ - ಏ - 啜 - 犒 - 昱 - 阙 - 咨 - 扪 - 晖 - 豐 - 쫓 - 헷 - Ո - 绫 - 駕 - 뜩 - 뽕 - 댕 - 렉 - 앱 - 웅 - 쩐 - 펙 - 膠 - 闰 - ಘ - 偃 - 脓 - 泷 - 瀕 - 宦 - 攪 - 荼 - Պ - △ - 斟 - ഘ - 铰 - 쫙 - 찢 - 캡 - ፡ - 燎 - 阉 - 啕 - ឃ - 獨 - ץ - 룹 - 뺏 - 쌌 - Ț - 侬 - 榔 - 臊 - 쭈 - 쵸 - 舀 - 竣 - 荤 - ໆ - 赳 - 渚 - 熾 - 즘 - 漳 - 唏 - 舛 - 砧 - 굽 - 촉 - ᱨ - 烽 - 罂 - 닿 - 쩨 - 禺 - 囃 - 繭 - 崗 - 戰 - 茱 - ヲ - 崴 - 閒 - 牒 - 팟 - 慵 - 粽 - 荨 - 藪 - 淨 - 羡 - 腼 - 靜 - 潺 - 詮 - 賑 - 賽 - ഠ - ĺ - ٥ - 鬆 - 늙 - 쇠 - 宓 - 惬 - 笆 - 螨 - 缜 - 吒 - 鮫 - 笞 - 鴻 - Ի - 晌 - 汛 - 絕 - 臃 - 辘 - 铂 - 긁 - 긋 - 껍 - 녕 - 듬 - 썸 - 쥐 - 쪘 - 쫄 - 壑 - 孰 - 夯 - 埼 - 鸞 - 鰹 - 黜 - 暨 - ဓ - 埠 - ဥ - 痣 - 麩 - 屬 - 摹 - 涎 - 畢 - 냅 - 뇌 - 뺀 - 샐 - 숭 - 쭐 - 璨 - 螳 - 驮 - 鹽 - 〇 - 檻 - 沂 - 砦 - 祛 - 讣 - 顆 - Į - Դ - ඌ - 怆 - 捺 - 胥 - 苞 - 辄 - 셜 - 숫 - 엊 - Ά - ঋ - ਓ - 壞 - 翎 - 蟻 - 钊 - 鱗 - 傣 - 挝 - 脖 - 贻 - ጌ - ፃ - 絨 - 샌 - 썩 - ሦ - ኙ - 淞 - 狀 - 蝽 - 谩 - 鬣 - 缇 - 梱 - 詐 - 侮 - 綴 - 皑 - 颚 - ឬ - 怅 - 靈 - 麋 - 략 - 맹 - 힙 - 诲 - 賺 - 險 - 붕 - 劵 - 奧 - 宍 - 狛 - 蛟 - 诃 - 醛 - Գ - ቪ - 榻 - 褥 - 갓 - 겐 - 귄 - 뤄 - 텀 - 馳 - 沓 - 濕 - 鲈 - 祯 - 躊 - Ñ - ጪ - ᱴ - 濟 - 篓 - 脫 - 닷 - 멈 - 앗 - 匐 - 洸 - 搀 - 鸵 - 吆 - 剐 - 晰 - 萼 - 雫 - 綬 - 篆 - 艮 - 擇 - 繍 - 絲 - 逅 - 邂 - 啾 - 璀 - 恃 - 忖 - 糞 - 窩 - 箍 - Λ - ऋ - 忱 - 淺 - 藕 - 룰 - 팍 - › - 焱 - 镣 - 멤 - ‹ - 镍 - 啮 - 紹 - 锹 - 耦 - Þ - Χ - ڍ - ဤ - 넨 - 믹 - 밟 - 쉴 - 욱 - 쪄 - ሟ - 啧 - 瓮 - 돋 - 뱃 - 懷 - 掸 - 癞 - 姪 - 趙 - 榉 - 籁 - 銷 - 뱅 - 윗 - ϊ - ਐ - ఘ - ඓ - ‚ - 拄 - 簪 - 遁 - 髅 - 骷 - 筲 - 粵 - 洱 - 迥 - Џ - 跆 - 跻 - 붓 - 兀 - 匍 - 룩 - 쎈 - Ġ - 怔 - 翊 - 贋 - 冑 - 犸 - 胛 - 褚 - 枭 - 溫 - ឿ - 釋 - ሔ - ጁ - ′ - 溥 - 닉 - 딪 - 맵 - 쫍 - 탓 - 叻 - 徨 - 枸 - 稷 - 蚓 - 裆 - 馄 - 孺 - 耘 - 璧 - 鹈 - 鹕 - 鰻 - 臉 - 陇 - Ӓ - Ӧ - ጦ - 浃 - 钛 - 껀 - 낯 - 쾌 - 흠 - 熵 - 犟 - 瘁 - 醚 - 沱 - 瘙 - ங - 紘 - 鹂 - 巌 - 靭 - Φ - 抿 - 沏 - 듀 - 뿔 - 셉 - 쌩 - 壓 - 擊 - 牦 - 谙 - 眈 - 餡 - 彈 - 遽 - 羌 - 迸 - 郸 - ଞ - 綠 - Ԥ - ٪ - ፌ - ỵ - 惡 - 캔 - 폼 - 噶 - 戾 - 蚯 - 蹿 - 鲱 - 噗 - 壹 - ★ - 廟 - 羯 - 輿 - 俨 - 啟 - 砷 - 缥 - Ụ - 垩 - 箫 - 쌈 - 峪 - 扈 - 濤 - 珞 - 痞 - 遜 - 酣 - 宸 - 燻 - 猕 - 歎 - 螈 - ៗ - 烩 - 缈 - 衩 - 遐 - 헛 - ઢ - 緋 - 诟 - 饨 - 魯 - 痨 - 兎 - 欽 - ւ - 濠 - 獒 - ૅ - 刍 - 콤 - 텍 - 尬 - 斡 - 汴 - 둑 - 볍 - 잼 - 쥬 - 챔 - 囡 - 婧 - 耷 - 讥 - 谑 - 舐 - 钨 - 煦 - 鍼 - 镐 - 彙 - 痤 - ಞ - 纏 - Ţ - ၄ - ቄ - 亟 - 佗 - 囔 - 歸 - 踌 - 饉 - 댁 - 맙 - 윙 - 쩡 - Ρ - 櫃 - 涝 - 袤 - 謂 - 泾 - 馥 - 鳖 - 冽 - 阚 - 葦 - 葭 - 傭 - 桧 - "\x94" - 弼 - 茛 - 鞄 - 넉 - Ι - ٣ - ‽ - 泯 - 빚 - 윈 - 췄 - 쿼 - 퉁 - 펌 - 哂 - 弑 - 搪 - 昙 - 聰 - 楂 - එ - 舫 - 嘔 - Ħ - 瞠 - 耆 - 臥 - ૉ - ၉ - ᱥ - 咝 - 澪 - 瑰 - 꼽 - 맣 - 샷 - 핵 - ޠ - 佯 - 叟 - 瞿 - 烊 - 砚 - 酊 - 骋 - 谏 - 靚 - 罷 - 攫 - 藁 - 颞 - 鯖 - 遙 - ÿ - 桉 - 갱 - 뻤 - 찝 - 탱 - 檎 - 踞 - 顯 - 鬧 - 융 - 佘 - 皎 - 缤 - 缮 - 蚬 - 恻 - 鰺 - 滓 - 錐 - '}' - 嚿 - 痔 - 铆 - 빔 - 빴 - 엎 - 툭 - 푼 - 啷 - 壬 - 戬 - 莆 - 蛹 - 豉 - 轲 - 铿 - 龊 - 龌 - 斓 - 轟 - 劾 - 鰂 - 槃 - 樺 - 胯 - 饯 - 恣 - 曉 - 潍 - 纣 - 豺 - 쇄 - Ū - Վ - Ք - Օ - ਠ - 껏 - 뎅 - 빗 - 뻥 - 썬 - ఛ - 罔 - 蜊 - 諫 - 淄 - 鄭 - 擀 - 汎 - 鹧 - ґ - 岱 - 犷 - 秆 - 辕 - 鸪 - 뢰 - 뭉 - 젝 - 풋 - ٨ - 罄 - 膺 - 薮 - 谄 - 맻 - '{' - 蟲 - 钍 - 馊 - 颌 - І - 鱲 - 젓 - ̈ - ၅ - 匝 - 戶 - 攰 - 깡 - 뜰 - 앵 - 즉 - 恪 - 撐 - 焖 - 簾 - 蛆 - 蠻 - 獎 - 瞓 - 骛 - 苓 - ゑ - 牠 - 黨 - 宥 - 廈 - ޏ - ઊ - ୌ - ᱷ - 憐 - 뀔 - 봇 - 옹 - 匯 - 荟 - 诣 - 轭 - 琰 - 摺 - 瘴 - 谔 - 蝾 - ¥ - 戀 - 斛 - 霁 - Չ - Տ - ٤ - 洩 - 궈 - 숱 - 엥 - 찼 - 혐 - 勵 - 叵 - 掻 - 碜 - 莴 - 蟠 - 炯 - 咣 - 掟 - 讧 - 赝 - Ҷ - 檸 - 疽 - 裟 - 詢 - 뭣 - 쉐 - 쏟 - 倹 - 夙 - 帛 - 恿 - 毂 - 脷 - 鹹 - 깥 - 됩 - 탠 - 殇 - 璞 - 褲 - 嶙 - 髮 - 賦 - 觎 - 喹 - 搡 - 襷 - 葆 - 鲟 - ኃ - 塹 - 拈 - 퀄 - Ő - ጸ - 葺 - 觐 - 굶 - 굿 - 넥 - 뇨 - 뉘 - 띵 - 렬 - 뱀 - 콕 - ౦ - 惆 - 玫 - 珏 - 蚝 - 抡 - 淬 - 貰 - 箴 - 粱 - 榷 - 糠 - 칵 - Ō - ఠ - 撚 - 粼 - 蔦 - 遲 - 귈 - 핏 - 횡 - Լ - ᱵ - 倖 - 咻 - 撥 - 瑚 - 驷 - 痱 - 荠 - 薅 - 惴 - 砒 - 峋 - 肓 - 苹 - 槭 - 狞 - 嚢 - 奘 - 忻 - 饗 - Թ - 咿 - 藜 - 룬 - 칩 - 펑 - 흉 - ® - ጻ - 咦 - 礫 - 糜 - 菏 - 衲 - ៌ - 歆 - 膊 - 赈 - 閪 - 髭 - ゞ - 燭 - 礒 - 椀 - 鋸 - 槇 - 髻 - Ư - 擺 - 楕 - 謳 - 隱 - 삿 - 섹 - 틈 - 幔 - 蕙 - 挎 - 娓 - 箏 - ឥ - ឯ - Ω - 苅 - 蜃 - 淅 - 煉 - 蘿 - 诩 - 骞 - 鸢 - 婕 - 戌 - 锭 - 맺 - 빽 - 슐 - 훔 - Ї - ਊ - 婵 - 孪 - 揄 - 굔 - 꼰 - 렷 - ቨ - 嚥 - 弋 - 悻 - 揶 - 滉 - 駛 - 掳 - 蔫 - 衢 - 綜 - 姦 - 疣 - 嘗 - 胫 - ಃ - 戍 - 珺 - 袈 - 鍮 - Ֆ - ਢ - ଵ - ጄ - 惘 - 蚣 - 鏑 - 뺑 - 쉰 - 햇 - 훅 - 嗲 - 揖 - 潢 - 肱 - 袅 - 覽 - 鬓 - 甬 - 挞 - 觊 - 厭 - 簽 - 廢 - 毘 - 狰 - 蝓 - ٦ - 绌 - 虛 - 錣 - 멸 - 므 - 섰 - 얄 - 얹 - ઞ - ኗ - 桢 - 涣 - 稅 - 纥 - 蹟 - 魷 - 炷 - 蝈 - 嬌 - 穣 - 呤 - 禰 - 镭 - 碴 - 萸 - 贮 - 遴 - ၆ - 恆 - 潛 - 熹 - 荏 - 谥 - 酉 - 錆 - "\x9E" - à - Ə - ಐ - ሹ - 徵 - 愫 - 揸 - 缨 - 깄 - 껑 - 냠 - 쏠 - 틸 - 팁 - 폈 - 횟 - 牆 - 牝 - 膈 - 蛞 - 盧 - 櫓 - 痍 - 橿 - 嘌 - 欖 - 緣 - 茴 - 蜈 - 钚 - 铤 - Շ - 徇 - 燧 - 珐 - 蝕 - 铵 - 밸 - Œ - ŕ - Ռ - ၃ - 埴 - 繞 - 荞 - 贏 - 밭 - 숏 - 츄 - 쿵 - 펀 - 푹 - 훌 - 丟 - 坳 - 塙 - 赊 - 啁 - 鋪 - 龋 - 啵 - 饅 - 畿 - 蛱 - 谵 - 滲 - 燗 - 嶼 - 詔 - 頚 - ঊ - ၌ - 歐 - 蔥 - 鹸 - 갸 - 넛 - 랗 - 륵 - 멕 - 옵 - 젖 - 퀘 - 툴 - £ - Ζ - 妲 - 揀 - 昀 - 桀 - 籌 - 诙 - 镊 - 靛 - 鸚 - 짰 - ޟ - 湮 - 濾 - 鄰 - 貘 - 嘚 - 뭐 - 啶 - 嵜 - 纾 - 诨 - 铉 - 苺 - 锰 - Զ - 嚕 - 涧 - Æ - 琲 - 窈 - 謁 - 阖 - ՛ - ଢ - ዥ - ጩ - ፐ - Ủ - 仄 - 炀 - 轱 - 鮨 - 깼 - 냬 - 륙 - 짖 - 텝 - 펐 - 羿 - ̂ - 撫 - అ - 靉 - 薔 - 珪 - 檔 - 蓑 - 荊 - 菰 - 郦 - 揆 - 檗 - 譯 - 魇 - 噏 - 夥 - 澁 - 癸 - Ӑ - Է - ޘ - ಓ - ቧ - ቬ - Ừ - 嬰 - 嵇 - 濮 - 螢 - 襯 - 鳏 - 솜 - 숍 - 쑥 - 쩜 - 킥 - ̄ - 搖 - 濑 - 鬚 - 廁 - 筧 - 纨 - 盃 - 梠 - 嵯 - 骊 - 櫛 - 饽 - 夾 - 寰 - 鋒 - 鷺 - 冢 - 嬲 - 轢 - ­ - ٧ - ઑ - ᱼ - 嘤 - 墾 - 峽 - 濫 - 霎 - 꺾 - 똘 - 룡 - 맑 - 뻘 - 숲 - 쓱 - 옴 - 잎 - 탭 - 蘋 - 逵 - 颧 - 鵝 - ឧ - 啖 - 燦 - 蹶 - 哐 - 磴 - 鶯 - 妩 - 孬 - 蚂 - 蛯 - 砝 - 鑽 - 湄 - 訛 - Ή - 笏 - 趨 - 邁 - 雜 - ʿ - ᱣ - 煜 - 簌 - 綫 - 芪 - 녜 - 뱉 - 읍 - 첩 - 堑 - 廚 - 爲 - 瑄 - 瓒 - 蜣 - 襖 - 辭 - 锏 - 漑 - 霏 - 鳝 - 佇 - 杳 - 瀞 - 煨 - 鹪 - 榧 - 矾 - 搔 - 焗 - 纜 - 芍 - Ø - ۂ - ၈ - ሼ - ቭ - ጧ - 賤 - 黐 - 곽 - 괌 - 꽈 - 릏 - 벳 - 쁨 - 샴 - 콧 - Խ - Փ - ዌ - 嗬 - 奚 - 淚 - 笈 - 繫 - 蓦 - 餘 - 帼 - 椽 - 犠 - 瓯 - 櫂 - 泗 - 蠅 - 燔 - 沅 - 叢 - 腭 - 慘 - 疝 - 旛 - 鲷 - 鄱 - 鹩 - ڦ - ඞ - 姣 - 泱 - 烃 - 獻 - 镫 - 鹬 - 셰 - 엮 - 캉 - Ь - Ҩ - ሾ - ቫ - ፏ - 咂 - 瀛 - 绉 - 饒 - 굵 - 늫 - 뭇 - 왤 - 쭤 - 쯔 - 팽 - 픔 - 璎 - 硤 - 鹋 - 偌 - 懶 - 禎 - 酩 - 黍 - 谪 - 鮑 - 溧 - 甾 - 祓 - 紆 - 喬 - 唑 - 汩 - 渫 - 滘 - 滷 - 簑 - 缛 - 螯 - 掣 - 鲶 - 찡 - Ċ - ഐ - ⠀ - 佔 - 嘥 - 娑 - 揿 - 暹 - 殼 - 穩 - 鱧 - 鲫 - 늬 - 벙 - 벚 - 뻗 - 왈 - 寵 - 捱 - 掬 - 梏 - 檄 - 涞 - 灞 - 罡 - 펼 - ഔ - 圩 - 蓼 - 飒 - 鲻 - 恫 - 觸 - 跄 - 巽 - 吡 - 础 - 隍 - 惇 - 袴 - 冚 - 剌 - 迴 - Ď - 仃 - 佞 - 咛 - 噉 - 硼 - 碓 - 窕 - 苋 - 讚 - 귓 - 됨 - 및 - 숟 - 씌 - 욜 - 첼 - 칫 - Υ - ዪ - 峦 - 犄 - 膽 - 謄 - 頁 - 鸸 - 敝 - 浔 - 裱 - 诿 - 菀 - 踉 - 鹳 - 瀝 - 掇 - 鴎 - 彿 - 鵤 - 〆 - 睨 - 荀 - 郴 - 礴 - 穢 - 筵 - 箩 - 飕 - 衞 - 韋 - 餵 - Ң - ፔ - 唁 - 嬴 - 搽 - 樸 - 濂 - 疸 - 缄 - 鍊 - 넬 - 닙 - 뻑 - 욘 - 줏 - ୈ - 卅 - 厲 - 嬤 - 篑 - 莊 - 薏 - 鍬 - 겄 - 겟 - 짭 - 찹 - 呲 - 慄 - 桎 - 瞟 - 偻 - 梆 - 楣 - 褓 - 榫 - 痿 - 菖 - 〕 - 儘 - 瑳 - 〔 - 辊 - 悌 - 礙 - 閻 - ଝ - 唢 - 嫲 - 钴 - 옳 - Ѐ - Ջ - ઋ - ዟ - 勁 - 栀 - 毬 - 賴 - 鲭 - 麽 - 괄 - 넜 - 돔 - 뚤 - 슥 - 쌔 - 펠 - 흙 - 唰 - 嵘 - 欅 - 浏 - 爻 - 瑁 - 遨 - 靂 - 靱 - 骶 - 拚 - ဧ - 忪 - 蓿 - 苜 - 锉 - 떤 - 吁 - 圀 - 籐 - 麿 - 燮 - 猁 - 薹 - 瑪 - 蔺 - 儂 - 孖 - 猞 - 珩 - 箒 - 绺 - 阡 - Ҭ - ಔ - ၇ - 垠 - 楯 - 蠔 - 赅 - 跎 - 閱 - 멧 - 윽 - Û - Ɓ - ۖ - ಠ - ሣ - ጮ - ፤ - ᱡ - 硒 - 竈 - 竪 - 贊 - 馗 - 겜 - 깅 - 눕 - 댈 - 돕 - 둡 - 뛸 - 얌 - 왓 - 팸 - 펄 - 훠 - 俥 - 庖 - 绾 - 蹉 - 柩 - 祢 - 芃 - 陂 - 龜 - 佝 - 倏 - 僱 - 盂 - 镑 - ຼ - 瘘 - 殘 - 溏 - 殲 - 毓 - 棗 - 籾 - 辋 - 朧 - 皺 - 閾 - 韮 - 뾰 - Ũ - ഛ - ໌ - ዷ - ጢ - 屐 - 脍 - 镂 - 깠 - 눅 - 럿 - 퀸 - 펭 - 핬 - ԑ - 歙 - 湎 - 绡 - 鈿 - 鑊 - 铢 - 搶 - 潸 - 玳 - 笺 - 襁 - 幄 - 怄 - 蹼 - 骜 - 懑 - 噚 - 諦 - 煬 - 鴫 - 栎 - 柾 - 滾 - 蔔 - 忿 - 鋲 - 붉 - ȋ - ఞ - ಛ - ဌ - Ị - 硌 - 繳 - 莅 - 邈 - 깍 - 렙 - 섀 - 슛 - 싯 - 엣 - 줍 - 짼 - 쩰 - 컹 - 쿡 - 텨 - ʽ - ዣ - 杓 - 筠 - 蛔 - 蝋 - 鄞 - 闊 - 퍽 - 娆 - 翫 - 膣 - 蝼 - 봅 - 捌 - 擞 - 訝 - 谀 - 钒 - 鹜 - 崑 - 氙 - 甕 - 硯 - 袂 - 嗪 - 秸 - 僭 - 暈 - 攣 - 瓊 - 绗 - 膘 - 膦 - 谧 - 锑 - 鱈 - 儺 - 懺 - 殒 - 泙 - 璜 - 痙 - 贄 - Ë - 焯 - 疋 - 痧 - 軋 - 갭 - 깬 - 꽝 - 늑 - 텅 - © - Ź - ᱰ - 儚 - 刎 - 屆 - 岷 - 懋 - 泓 - 갛 - 갯 - 껐 - 뮬 - 뵈 - 삥 - 잌 - 쟀 - Ξ - 唸 - 壯 - 炆 - 蒐 - 鸬 - 鹚 - 밋 - 쭝 - 캄 - 縱 - 啉 - 譚 - 嗚 - 菫 - 漿 - 瑭 - 盅 - 锆 - 嶌 - 皋 - 謠 - 辯 - 阕 - 颉 - 뒹 - Ҽ - 怵 - 撻 - 攝 - 溲 - 蘆 - 贖 - 륨 - 붐 - Ը - ዤ - 峒 - 枱 - 桡 - 毽 - 涿 - 爐 - 箜 - 耋 - 讶 - 谴 - 鉤 - 깁 - 껜 - 꿍 - 덧 - 쑤 - 얜 - 짚 - 튕 - 횐 - ⋯ - 厝 - 尴 - 獐 - 珉 - 篌 - 繩 - 纭 - 缢 - 胼 - 茗 - 訶 - 骥 - 誊 - 腍 - 荥 - 葡 - 祕 - 嘧 - 勖 - 瀾 - 瑩 - 盥 - 笥 - 箪 - 忤 - 洙 - 覃 - 貶 - 锱 - 颱 - 嗟 - 囁 - 柘 - 笪 - 蛄 - 蛳 - 鲮 - ଃ - ኡ - ፆ - 勳 - 唞 - 墉 - 盞 - 繹 - 耄 - 霭 - 鲹 - 갇 - 겔 - 낑 - 딲 - 뜸 - 랠 - 렘 - 롬 - 룻 - 뤘 - 륭 - 몫 - 셈 - 솥 - 앓 - 엌 - 읏 - 즙 - 톨 - 팜 - Ĕ - Ҵ - ቾ - ᱯ - 侗 - 哙 - 嗔 - 沣 - 洵 - 淦 - 珥 - 譽 - 讴 - 괘 - 伢 - 嘭 - 戇 - 栉 - 淙 - 琏 - 綁 - 绶 - 藿 - 揼 - 攬 - 碕 - 饴 - 壷 - 膻 - 裨 - 剱 - 撈 - 舩 - ゐ - 哌 - 鐔 - ♂ - 潞 - 哏 - 鯵 - 梾 - 祿 - 腧 - 赭 - 吽 - 茲 - 詛 - 禿 - 薑 - 噻 - 憫 - 擠 - 稞 - 竇 - 蝸 - 錶 - "\x92" - ɔ - ᱦ - ᱲ - ᱹ - 唷 - 癮 - 譴 - 蹤 - 钹 - 閖 - 돗 - 됬 - 륜 - 얗 - 왁 - 잣 - 젯 - 첸 - 팥 - 펫 - 퐁 - Ў - ኚ - 伫 - 卍 - 悅 - 擴 - 晷 - 癣 - 觥 - 钜 - 铍 - 颦 - 齧 - 삘 - 얽 - 旮 - 檯 - 溴 - 爍 - 葚 - 隽 - 鴇 - 屍 - 獠 - 胝 - 蛐 - 俎 - 卞 - 崂 - 糀 - 腚 - 孑 - 璽 - 鲠 - 氘 - 尕 - 羧 - 讼 - 鰓 - 麂 - 馮 - 蕗 - 矶 - 穎 - 麾 - 桤 - 杢 - 儆 - 擤 - 諧 - 豈 - 鬥 - 龐 - 랴 - 킴 - ǔ - ѐ - Ұ - ጵ - ፁ - 唖 - 憊 - 痂 - 蜇 - 釀 - 鸾 - 낌 - 뎌 - 둣 - 뛴 - 삽 - 쏴 - 앚 - 엿 - 웰 - 쨍 - 챠 - 촛 - 컥 - 퀵 - 텃 - 헨 - Ӗ - 佈 - 兇 - 寬 - 惱 - 慳 - 殓 - 胍 - 끽 - 맸 - 뽁 - Ŭ - ͘ - ඖ - 彎 - 胄 - 襪 - 陉 - 鹄 - 婺 - 谗 - 聿 - 蹙 - 枇 - ဿ - 楡 - 擾 - 陛 - 萢 - 鰯 - 錨 - 諄 - 蒹 - 呷 - 棹 - 疇 - 迩 - 楮 - 僑 - 囫 - 晁 - 楝 - 蚩 - 誼 - 铯 - 禪 - 铋 - 飚 - 鵯 - Յ - ಊ - 涜 - 涪 - 礦 - 繇 - 聳 - 芩 - 蕈 - 钽 - 鴈 - Ɗ - Њ - 燉 - 睜 - 詭 - 銛 - 곁 - 꾹 - 닳 - 렐 - 샜 - 캘 - 훑 -  - "\x9A" - ¬ - Ҿ - ኜ - ዢ - 丶 - 呟 - 姒 - 珅 - 璟 - 瓴 - 癡 - 踎 - 邺 - 阄 - 鰐 - 띡 - 뼛 - 뽈 - 뿜 - 셌 - 앰 - 얕 - 웜 - 쫑 - 찻 - 푠 - 핍 - 伉 - 噓 - 垃 - 滯 - 獵 - 褄 - 뻣 - 棂 - 癬 - 脘 - 鲇 - 姝 - 揩 - 杈 - 睏 - 莢 - 蓖 - 註 - 剋 - 鴉 - 鳶 - 閤 - 圳 - ឲ - 饕 - 萘 - 甑 - 鮪 - 坩 - 镉 - 鞆 - 顛 - 崙 - 铣 - 锗 - 鴦 - Ճ - 灑 - 脹 - 谒 - 鴛 - Ӡ - ඍ - 俶 - 壘 - 廂 - 牺 - 씁 - ѝ - ఢ - ጇ - ጬ - 乸 - 嘍 - 姘 - 嬸 - 柒 - 窰 - 筈 - 蕪 - 鄧 - 锶 - 곶 - 눴 - 맴 - 삑 - 샾 - 쉘 - 썽 - 쏙 - 웩 - 쬐 - 캥 - ഊ - 埂 - 恸 - 掮 - 旯 - 滁 - 鰭 - 곈 - 깰 - 낡 - 릅 - 솟 - 쏭 - 촘 - 킷 - 呔 - 妝 - 苒 - 掙 - 瀋 - 稗 - 羸 - 趸 - 钯 - 祜 - 锒 - 粂 - 綦 - 臾 - 嬬 - 瞞 - 翦 - 亳 - 梼 - 谌 - 鲅 - 倜 - 臟 - 壩 - 敕 - 獰 - 囵 - 埚 - 弐 - 榭 - 篁 - 逓 - 驅 - 兪 - 獏 - 葳 - 駁 - ઍ - ઔ - 捩 - 炔 - 珲 - 糍 - 萊 - 謢 - 頷 - 骅 - 岿 - 氡 - 蒡 - 鄢 - 镌 - 饌 - 밉 - 슉 - "\x84" - Ć - Ơ - ʾ - ۓ - ቿ - ⸻ - 咥 - 圾 - 埜 - 媾 - 徜 - 悭 - 揦 - 敘 - 欸 - 꿇 - 떵 - 뽂 - 뽐 - 샹 - 슝 - 씽 - 죙 - 쥴 - 캣 - 쾅 - 핥 - Ƙ - Ί - ਔ - ဍ - ኅ - 佻 - 噙 - 徉 - 滟 - 猢 - 簗 - 膑 - 舖 - 芡 - 劍 - 囝 - 滦 - 谶 - 囿 - 桠 - 騮 - 娣 - 糅 - 葶 - 诓 - 瓿 - 좀 - 餮 - 慟 - 聾 - ឍ - 躙 - 鰆 - 枡 - 傥 - 廬 - 楸 - 泔 - 陝 - 泮 - 崧 - 撳 - 鉦 - 铱 - 撹 - 輯 - 钇 - 锲 - ሒ - ១ - 剝 - 懼 - 滌 - 藨 - Ŵ - ᱪ - 瘢 - 茯 - 韌 - 펍 - § - Ύ - ΐ - Ց - ዦ - 噃 - 奂 - 抻 - 槤 - 樑 - 翳 - 诘 - 锃 - 頒 - 饋 - 驟 - 龇 - 끕 - 둠 - 뗄 - 셧 - 싣 - 윌 - 쨈 - 쭘 - 폿 - 헉 - Ľ - 侩 - 劑 - 壆 - 虧 - 鲼 - 눔 - 뵙 - 씸 - 헝 - 홋 - 拋 - 擘 - 檳 - 绔 - 肅 - 챕 - 췌 - 큘 - 匁 - 籃 - 邙 - 钎 - 饬 - 杷 - 瑤 - 矬 - 遒 - 铡 - 娠 - 廼 - ┐ - ◎ - 铖 - 莒 - 龅 - 筅 - 洮 - 腟 - 芫 - ÷ - 煕 - 癜 - 荽 - 撓 - 謊 - 齟 - 齬 - 恹 - 蝨 - 鋏 - 兖 - 臧 - 訇 - 賬 - ▪ - 筍 - 谡 - 齡 - 挲 - 捲 - 莪 - 蝮 - 蟬 - 觞 - 鹞 - 繪 - 荛 - 蒺 - 銮 - 鵑 - ʺ - Ώ - 汨 - 淒 - 瘋 - 癲 - 篩 - 虢 - 颔 - 餞 - 髖 - "\x91" - ̊ - Ӳ - Ծ - ؔ - ۃ - ઐ - ൌ - ጳ - ጾ - ፑ - ᱬ - Ố - Ờ - 嘯 - 婀 - 玑 - 砵 - 缱 - 膩 - 莠 - 诳 - 遑 - 钡 - 铨 - 낱 - 뎁 - 멓 - 뮌 - 벡 - 뺨 - 챘 - 츤 - 툼 - 튠 - 헥 - 헹 - < - Ĩ - Ť - Ҟ - Ժ - ڱ - 嗫 - 嘹 - 绻 - 邇 - 霑 - 곗 - 깽 - 똠 - 숑 - 젬 - → - 侘 - 壟 - 憚 - 焘 - 狲 - 蘅 - 踽 - 逑 - 逹 - 闱 - 鱒 - 髒 - 攜 - 晔 - 氚 - 燿 - 荸 - 孱 - 旌 - 曆 - 琬 - 緘 - 螞 - 轍 - 邳 - 錘 - 鮓 - ဩ - 寳 - 诌 - 樞 - 梛 - 軀 - 鉗 - 왜 - ゝ - 舳 - 蕭 - 銜 - 瘡 - 癇 - 碲 - 霰 - 頤 - 搣 - 邬 - 鎹 - 剷 - 儋 - 墮 - 徠 - 謬 - 垓 - 桿 - 沮 - 銑 - Ґ - ዑ - ≡ - 篙 - 舄 - 苈 - 跖 - 跬 - 鎚 - 鲢 - 훤 - ఔ - ඪ - ០ - 氹 - 滙 - 菸 - 薈 - 頌 - 곪 - 뗀 - 룐 - 몹 - 썪 - 앳 - 얏 - 큭 - 탬 - 휠 - ˮ - ಢ - ෲ - ኣ - ኽ - ጹ - 嘜 - 籤 - 纈 - 襦 - 颶 - 鯰 - 끙 - 뎃 - 뎠 - 뒨 - 듭 - 랖 - 몄 - 봬 - 빕 - 숯 - 쉭 - 잭 - 잴 - 쟈 - 짙 - 츰 - 켄 - 켔 - 쿤 - 핼 - 嵴 - 徭 - 擰 - 瀨 - 蠡 - 袢 - 顏 - 鬨 - 鼩 - 섣 - 勻 - 笄 - 簀 - 聒 - 郢 - ♥ - 佶 - 岫 - 漲 - 紮 - 纐 - 苻 - 贰 - 鸩 - 佥 - 諍 - 鋤 - 雎 - ๅ - 煖 - 縷 - 栂 - 鏈 - 撺 - 絃 - 騷 - 亓 - 鸮 - 幇 - 崋 - 鐸 - 癪 - 绦 - 賈 - 铊 - 镧 - ㄟ - 瀉 - 傈 - 僳 - 睢 - 稃 - 紥 - 蜆 - 郃 - 鮒 - 鱸 - ฌ - 氤 - 氲 - 祗 - 羰 - 鑓 - 钼 - 愼 - 栢 - 粳 - 腘 - 輋 - 郓 - 鲡 - Ձ - ϋ - ޛ - 乂 - 玥 - 碛 - 纰 - 艷 - 苎 - 陲 - 鲲 - 꾀 - 삔 - 샬 - 킵 - 혓 - "\x96" - "\x97" - ː - ؑ - ۚ - ॰ - ಋ - ೧ - ዉ - ៖ - Ẩ - ∈ - ⸺ - 俬 - 倌 - 卌 - 哚 - 啐 - 喏 - 孭 - 屓 - 屙 - 戆 - 擱 - 潑 - 筺 - 艱 - 蒨 - 褻 - 谆 - 谳 - 贲 - 鄣 - 铄 - 颍 - 髌 - 鳢 - 麪 - 黢 - 걜 - 겅 - 굘 - 굼 - 긱 - 냔 - 듦 - 딤 - 딧 - 똔 - 맏 - 빰 - 뿅 - 샛 - 쏜 - 옇 - 옐 - 읜 - 잦 - 잰 - 쟨 - 챈 - 챌 - 텁 - 텼 - 햐 - 훗 - 흩 - ӊ - ઃ - ೦ - 噸 - 嚅 - 埒 - 楳 - 泫 - 琮 - 疊 - 疖 - 癱 - 腈 - 苕 - 蕩 - 蛉 - 逡 - 黠 - 꿋 - 뗐 - 슘 - 웁 - 刄 - 慾 - 柺 - 贔 - 髂 - 魟 - 尭 - 弭 - 擲 - 槁 - 煸 - 牯 - 磚 - 顼 - 鸫 - 仝 - 僖 - 滂 - 焔 - 瞼 - 緞 - 蕕 - 輻 - 轎 - 黌 - 睪 - 盱 - 砣 - 閘 - 頗 - 鳐 - 钕 - 疥 - 髀 - 晉 - 萠 - 輛 - 偲 - 絋 - 膾 - 綽 - 诰 - 攤 - 獺 - 刪 - 痼 - 襴 - 鬘 - 扦 - 淖 - 渌 - 犧 - 蕁 - 踐 - 鄒 - 鯊 - 吲 - 囮 - 徂 - 楹 - 窨 - 籮 - 蘊 - 酞 - 飫 - 蹣 - 齒 - 傢 - 勐 - 晝 - 籏 - 缬 - 蝿 - 賁 - 邕 - 黾 - 켈 - ဈ - 倬 - 嚯 - 妪 - 崁 - 搗 - 爰 - 稹 - 纒 - 脛 - 蚶 - 覲 - 詈 - 諌 - 邰 - 鐡 - 铩 - 陟 - 鲣 - ቮ - 峅 - 杼 - 鲾 - 밧 - 윷 - 콥 - Ƴ - ˎ - ̔ - ଈ - ஔ - ሿ - ៦ - ᱝ - ḿ - ṇ - 㶶 - 刿 - 劭 - 屜 - 巒 - 庥 - 恽 - 悒 - 撷 - 澆 - 煊 - 獸 - 砭 - 祚 - 羈 - 脲 - 舂 - 蒴 - 蓁 - 谲 - 豢 - 郜 - 醪 - 鏢 - 阆 - 飄 - 髡 - 鲩 - 鹼 - 갬 - 걀 - 궜 - 궤 - 깻 - 꺄 - 냑 - 놉 - 놋 - 뎀 - 뗘 - 띃 - 띨 - 랏 - 맬 - 멱 - 뵀 - 숴 - 쉼 - 쎌 - 옌 - 웍 - 잿 - 짹 - 쩝 - 촥 - 콰 - 햅 - ః - 侈 - 喐 - 忾 - 抌 - 籲 - 躉 - 輾 - 醴 - 鎏 - 锷 - 隸 - 鹛 - 쫒 - 呎 - 戕 - 芾 - 魑 - ޣ - 凈 - 唳 - 噹 - 嬅 - 澧 - 瀟 - 緬 - 貅 - 貔 - 迨 - 鉈 - 钣 - 倅 - 殚 - 謖 - 遞 - 駝 - 骠 - 凇 - 䁪 - 垌 - 孛 - 橼 - 瑧 - 睥 - 舾 - 誡 - 鈕 - 紓 - 甦 - 鐙 - 臘 - 鉉 - 椚 - 篭 - 靄 - 怏 - 銕 - 罫 - 蟄 - 刳 - 祉 - 蔊 - 邏 - 鈔 - 鲿 - 吳 - 笕 - 諺 - 鬢 - 鸻 - 卟 - 圜 - 柃 - 潋 - 狆 - 縻 - 菡 - 蚺 - 蜉 - 钋 - 颙 - 鸨 - 鹘 - 冧 - 跣 - 鑰 - 넙 - ॲ - 侪 - 冪 - 嚨 - 湟 - 滢 - 甴 - 稟 - 竄 - 苴 - 菴 - 諤 - 讫 - 踟 - 蹰 - 镓 - 闌 - 闾 - 騭 - Ҥ - ೫ - ഃ - 眦 - 覓 - 躓 - 躼 - 陞 - 딕 - 캇 - 呯 - 꿉 - 넒 - 맷 - 쐈 - "\x9D" - Ě - ȇ - ̓ - ̧ - ؓ - ޡ - ဠ - ኻ - ፂ - Ứ - 㩒 - 䀫 - 䁅 - 兌 - 匱 - 吋 - 圄 - 岘 - 徕 - 捯 - 掹 - 揇 - 搲 - 摷 - 撿 - 昺 - 曱 - 櫸 - 沆 - 洄 - 洶 - 涷 - 滄 - 瀣 - 燐 - 狹 - 珙 - 璈 - 瓤 - 畊 - 痾 - 瘿 - 硃 - 窠 - 糰 - 繚 - 纔 - 罘 - 罟 - 脩 - 蓆 - 蘼 - 蚵 - 螟 - 螫 - 豂 - 豇 - 豎 - 镬 - 騾 - 骐 - 鬍 - 鲃 - 鲳 - 겋 - 꺠 - 꼿 - 꽥 - 꿰 - 낵 - 넋 - 뇸 - 뉜 - 늪 - 닛 - 덱 - 딥 - 땔 - 뚸 - 룽 - 멩 - 믈 - 밈 - 밲 - 뼌 - 뼘 - 얍 - 옅 - 윳 - 읊 - 짢 - 쨋 - 쯧 - 첵 - 켠 - 탤 - 톰 - 팎 - 푯 - 훼 - 휜 - 𢱕 - 𥄫 - 僆 - 妁 - 煅 - 莳 - 髙 - 갰 - 늉 - 떳 - 쫘 - 팻 - 홧 - 휩 - 偈 - 嗄 - 囗 - 囹 - 泠 - 溟 - 眀 - 硖 - 篦 - 胳 - 艤 - 菝 - 葜 - 蝰 - 蠱 - 衿 - 誅 - 郅 - 靥 - 飩 - 鮟 - 捭 - 酐 - 闖 - ឪ - 嬗 - 怩 - 攋 - 澍 - 籬 - 舁 - 菔 - 菟 - 葎 - 躝 - 鎂 - 锨 - 鯱 - 鵠 - 柝 - 钶 - ♯ - 姶 - 揞 - 礇 - 軚 - 陜 - 鵡 - 惫 - 厠 - 笤 - 轡 - 樨 - 氩 - 祎 - 醗 - 閂 - 髷 - 苧 - 镒 - 坻 - 蒟 - 釐 - 祂 - 榑 - 筬 - 莟 - 瓏 - 莚 - 蘚 - 潅 - 筥 - 镕 - 澱 - 蛸 - 飑 - √ - 圪 - 夲 - 庹 - 愴 - 椤 - 汙 - 狽 - 璉 - 穫 - 臍 - ゙ - 儍 - 噤 - 弍 - 憺 - 暸 - 浬 - 濺 - 狍 - 猗 - 砀 - 禛 - 縹 - 蒽 - 錙 - 鲛 - Ï - ኼ - 郯 - ± - ヮ - 佚 - 帙 - 摈 - 旻 - 朊 - 枞 - 枥 - 樒 - 氫 - 洇 - 洟 - 漣 - 濯 - 燵 - 盜 - 瞋 - 秣 - 簒 - 糬 - 赓 - 蹌 - 轸 - 鏖 - 铟 - 锇 - 鱔 - 鹱 - ƴ - ѕ - ೩ - 媞 - 沔 - 睚 - 磔 - 蝣 - 襞 - 駈 - 驕 - 鲆 - 뇽 - 덨 - 캬 - 펩 - Ւ - ՞ - ־ - ॠ - ૧ - ఋ - ෟ - ฯ - ዒ - ጺ - ፄ - ឮ - ៨ - ᱶ - ᱸ - ḅ - ṅ - Ế - Ổ - 䆀 - 䲟 - 仞 - 侉 - 剜 - 剡 - 唵 - 嗇 - 嗞 - 嚀 - 堯 - 塢 - 壅 - 廛 - 廪 - 悃 - 悱 - 愠 - 戥 - 掞 - 摮 - 摻 - 擳 - 昉 - 暝 - 榲 - 欒 - 浛 - 燴 - 牀 - 犍 - 猷 - 璵 - 畋 - 瘌 - 硎 - 禊 - 笊 - 笸 - 篾 - 簷 - 縢 - 菉 - 薀 - 藷 - 訕 - 譎 - 譖 - 貲 - 趄 - 趔 - 迍 - 邛 - 邾 - 郫 - 鈷 - 鋆 - 鎅 - 鏊 - 鏟 - 铪 - 镛 - 闢 - 頰 - 顱 - 鰲 - 鱷 - 鲗 - 鹗 - 黧 - 齁 - 궐 - 깟 - 넵 - 늄 - 늠 - 댐 - 띤 - 맜 - 묭 - 뭥 - 봔 - 뵌 - 뵐 - 빳 - 뽜 - 슁 - 쌰 - 욤 - 윰 - 젼 - 쨰 - 춧 - 췻 - 켸 - 콸 - 킁 - 탯 - 텄 - 튿 - 틋 - 펨 - 헴 - 휑 - 휙 - ॊ - ଋ - 넝 - 댑 - 읎 - Ę - 仟 - 唥 - 唪 - 愎 - 攏 - 汜 - 臬 - 鯪 - 鱇 - 튈 - Ė - ẅ - 俟 - 剀 - 厶 - 嗳 - 埕 - 墊 - 崆 - 恁 - 慚 - 慷 - 挈 - 揠 - 斃 - 漯 - 澹 - 猊 - 瓠 - 瘓 - 矍 - 粿 - 苁 - 茏 - 薊 - 誨 - 謇 - 閩 - 馐 - 骘 - 骝 - 尷 - 隗 - 鷯 - 嗶 - 娛 - 嵬 - 忸 - 揈 - 殄 - 粝 - 粲 - 繧 - 腴 - 茔 - 蔀 - 蛏 - 躾 - 陁 - 雰 - 靫 - 鬶 - 魍 - 鷉 - 鹓 - 僂 - 甌 - 俠 - 嚡 - 楫 - 瀏 - 畦 - 畷 - 禳 - 蓠 - 鵐 - 碩 - 絣 - 繝 - 舢 - 莼 - 鍔 - 萄 - 弉 - 颢 - 幟 - 崚 - 쫌 - 袆 - 憍 - 鞏 - 傉 - 唈 - 寃 - 殭 - 碣 - 鏨 - 鵞 - 龠 - 肪 - 伜 - 塬 - 玎 - 裃 - 馔 - 馭 - 㪐 - 傖 - 劔 - 旰 - 桷 - 椴 - 炝 - 獪 - 玘 - 瑆 - 砟 - 祆 - 縝 - 羣 - 聶 - 芗 - 蒻 - 蟎 - 裇 - 謐 - 赉 - 跏 - 蹐 - 轔 - 醮 - 銳 - 鎬 - 鞨 - 淝 - Ѕ - ៥ - 㩿 - 亊 - 俦 - 嗐 - 圻 - 坭 - 垅 - 岀 - 峄 - 忟 - 擷 - 朶 - 沭 - 燙 - 稙 - 箝 - 篳 - 糋 - 耧 - 肼 - 艏 - 艪 - 莩 - 螽 - 賎 - 铒 - 颏 - 駟 - 駱 - 鲵 - 鳫 - 龢 - 욍 - ೯ - ඣ - ṯ - ↓ - 屮 - 愍 - 穐 - 跩 - 닯 - 쁩 - 숀 - ŧ - ῖ - 샅 - 쎘 - "\x8A" - "\x8D" - Ą - ǃ - ǰ - ȃ - Ʉ - ̐ - Ψ - Ђ - Љ - Ќ - Ѳ - ૬ - ૮ - ୪ - ୯ - ஶ - ఝ - ಝ - ೨ - ೭ - ෳ - ဋ - ዃ - ዡ - ጐ - ጿ - ፥ - ២ - ៤ - ៧ - ṉ - Ộ - ‫ - ↔ - ゚ - 㧎 - 䒏 - 䒐 - 亙 - 伧 - 佤 - 倧 - 僊 - 儬 - 冼 - 剾 - 劏 - 吶 - 呋 - 咇 - 哣 - 啞 - 喼 - 嘬 - 噁 - 囌 - 囍 - 圑 - 垪 - 埙 - 埞 - 塱 - 壢 - 夔 - 奀 - 妗 - 妯 - 姹 - 娌 - 寢 - 嶂 - 嶄 - 廾 - 弸 - 悋 - 悗 - 愜 - 慇 - 懃 - 懣 - 戽 - 扻 - 扽 - 抆 - 抾 - 捽 - 掅 - 揳 - 揹 - 摅 - 摙 - 摳 - 攔 - 杮 - 枋 - 柢 - 桕 - 棧 - 椶 - 椹 - 槓 - 樁 - 橈 - 橐 - 橛 - 橢 - 檨 - 歛 - 殂 - 殳 - 氈 - 氖 - 洎 - 淪 - 滏 - 漚 - 濛 - 燊 - 牘 - 猇 - 珎 - 痦 - 眙 - 硞 - 礬 - 笫 - 簓 - 簕 - 糉 - 緲 - 縈 - 繄 - 纖 - 缙 - 缧 - 聩 - 肄 - 臚 - 艙 - 芎 - 芵 - 苘 - 苪 - 茭 - 蓣 - 蕤 - 蕷 - 蘂 - 蛻 - 蟥 - 蟯 - 蠹 - 褸 - 覯 - 訐 - 詆 - 誑 - 譟 - 讎 - 贶 - 趌 - 趷 - 趺 - 跶 - 蹚 - 輊 - 輟 - 辇 - 辎 - 迾 - 逶 - 郾 - 鄄 - 鄯 - 釗 - 鈣 - 錚 - 鍜 - 鎭 - 鏃 - 鑄 - 鑿 - 钫 - 铷 - 锎 - 锕 - 镆 - 闍 - 闳 - 阗 - 雋 - 鞣 - 鞥 - 頴 - 餚 - 馕 - 馩 - 魃 - 魉 - 鲎 - 鼆 - 鼐 - 鼱 - 곯 - 궂 - 귐 - 귿 - 껸 - 꽐 - 꾜 - 끅 - 낏 - 넴 - 넹 - 녈 - 놥 - 덫 - 됑 - 딛 - 땍 - 땟 - 떈 - 떫 - 뜀 - 롄 - 롷 - 뤠 - 뤼 - 륑 - 멎 - 멏 - 뭍 - 밌 - 뵤 - 뻠 - 뽝 - 샥 - 셤 - 쌨 - 쏼 - 쐬 - 쑈 - 씅 - 옫 - 쟝 - 좆 - 죵 - 쥔 - 쩠 - 쫀 - 쭌 - 쯘 - 챗 - 캅 - 캤 - 콴 - 큔 - 탉 - 튄 - 튬 - 팰 - 푤 - 헀 - 횔 - Ů - Ắ - 媠 - 뗬 - 렜 - 롹 - 묽 - 뭡 - 뱄 - 쌋 - 잽 - 줴 - 찟 - 틔 - Ğ - 墀 - 戞 - 枘 - 楦 - 牴 - 痠 - 癔 - 皚 - 硲 - 羱 - 觚 - 觜 - 輳 - 饑 - 긌 - 쎅 - 𥚃 - Ḫ - ● - 䢢 - 侫 - 唛 - 埓 - 岨 - 怍 - 恊 - 斂 - 晞 - 琚 - 畈 - 畐 - 磬 - 箬 - 缦 - 蓍 - 蠓 - 褌 - 貉 - 醅 - 醌 - 鉅 - 铑 - 锺 - 飜 - 黩 - 齣 - 擸 - 訥 - 餼 - ៣ - 啅 - 壙 - 寤 - 嶇 - 摟 - 旖 - 暱 - 枩 - 桮 - 椙 - 樯 - 潲 - 璁 - 筚 - 筰 - 耒 - 蕲 - 蠟 - 袛 - 謨 - 謾 - 鈞 - 鏝 - 镱 - 閔 - 餋 - 鳰 - 鶉 - 墻 - 憮 - 曚 - 榾 - 竦 - 縉 - 縊 - 苄 - 迒 - 挐 - 窣 - 窸 - 羂 - 藳 - 钌 - 頽 - ۾ - 끊 - 咙 - 굳 - 肮 - Ẓ - ఒ - 뭘 - 옛 - ẳ - ఊ - 싫 - ఉ - 괜 - 啻 - 鬪 - 떻 - 埏 - 皷 - 禑 - 胬 - 薴 - 蚋 - 裬 - 觧 - 鍍 - 锴 - 闕 - 韜 - 鷄 - 뭔 - ឳ - ↑ - ⇔ - ㄧ - 伥 - 侓 - 叅 - 噘 - 夘 - 帏 - 廸 - 曄 - 栴 - 榙 - 樗 - 殮 - 疠 - 瘧 - 矽 - 筌 - 糺 - 聟 - 萜 - 葯 - 讒 - 谯 - 谿 - 釼 - 鉋 - 钪 - 铌 - 镠 - 韃 - 韫 - 髯 - 鱠 - 鶫 - ઼ - ≠ - 䴙 - 冏 - 唻 - 圞 - 壥 - 壸 - 岈 - 嵖 - 搦 - 枳 - 榖 - 泅 - 渀 - 溌 - 濞 - 珜 - 瓘 - 畲 - 秭 - 篥 - 籴 - 紂 - 緁 - 緡 - 罨 - 莨 - 菪 - 褧 - 蹕 - 軅 - 軻 - 邴 - 醯 - 鉞 - 镲 - 阇 - 鵺 - 鹀 - ഝ - ኀ - 佮 - 夛 - 峇 - 拑 - 歕 - 窺 - 羶 - 蘓 - 衒 - 詬 - 诤 - 鱟 - ૠ - ṃ - 괏 - 눓 - 뉩 - 땋 - 띌 - 벝 - 봣 - 쐐 - 稣 - "\x90" - ¢ - ¦ - Ð - Ù - ŏ - Ÿ - ƒ - Ɨ - ǐ - ǒ - ǫ - ˊ - ˏ - ̋ - ̟ - ͦ - Ϙ - Ћ - ՚ - ٖ - ٗ - ٴ - ۗ - ۘ - ޥ - ऩ - ॄ - ॐ - ॑ - ॔ - ૢ - ૨ - ૩ - ૪ - ଔ - ୨ - ୫ - ௗ - ೬ - ഋ - ඃ - ෴ - ሓ - ቯ - ኾ - ጼ - ឫ - ៎ - ៩ - ᱽ - ḉ - ḑ - ṟ - ẞ - Ầ - Ự - † - ※ - ₹ - ∆ - ∠ - ∨ - □ - ◦ - ♡ - ♭ - ゎ - ゔ - ヰ - 㘉 - 䀹 - 䁯 - 䂿 - 䖳 - 䟴 - 䳍 - 䴘 - 丨 - 亍 - 仡 - 仫 - 仱 - 伱 - 侂 - 侷 - 侹 - 俅 - 俍 - 傩 - 傱 - 僉 - 僫 - 凩 - 劖 - 勼 - 厍 - 厴 - 呿 - 咆 - 咾 - 啋 - 啍 - 啯 - 喈 - 喑 - 喟 - 嗙 - 嗼 - 嚐 - 嚮 - 囑 - 圇 - 圉 - 垚 - 垭 - 埤 - 埭 - 塭 - 墘 - 奅 - 妠 - 妡 - 妣 - 妧 - 娉 - 娗 - 婁 - 嫐 - 孥 - 岜 - 崃 - 崮 - 嵊 - 嵋 - 嵛 - 嶝 - 巖 - 巿 - 帑 - 庠 - 廆 - 廍 - 弌 - 彥 - 彧 - 徛 - 怙 - 怿 - 悳 - 悾 - 慤 - 扞 - 抔 - 抦 - 拃 - 挭 - 捹 - 掁 - 掗 - 揗 - 揤 - 揷 - 揻 - 擋 - 擏 - 擛 - 擯 - 攑 - 旳 - 昐 - 昰 - 暎 - 朏 - 杣 - 枧 - 枰 - 枹 - 柞 - 柸 - 楙 - 槚 - 樅 - 橞 - 檪 - 櫈 - 欑 - 歁 - 歃 - 殁 - 殯 - 毵 - 氅 - 氾 - 沤 - 涠 - 淸 - 渑 - 湠 - 湳 - 溋 - 溼 - 漭 - 潴 - 潷 - 澗 - 瀰 - 烚 - 煥 - 燹 - 燼 - 爿 - 牍 - 牕 - 狨 - 狯 - 玆 - 珒 - 珮 - 珰 - 琨 - 璘 - 瓩 - 痈 - 瘻 - 癘 - 皌 - 皲 - 眇 - 睄 - 矧 - 矲 - 砜 - 硟 - 磧 - 磽 - 礽 - 秅 - 稈 - 穌 - 竃 - 竽 - 筿 - 箓 - 箟 - 箠 - 簫 - 籟 - 籼 - 粦 - 綣 - 綷 - 綸 - 绀 - 罅 - 翕 - 翥 - 翹 - 聼 - 肫 - 胗 - 胪 - 胿 - 脒 - 艄 - 艉 - 艋 - 艸 - 茼 - 荇 - 荑 - 荜 - 荪 - 莾 - 萡 - 萺 - 蒞 - 蓭 - 蕻 - 薖 - 薜 - 薷 - 蘖 - 螣 - 蟮 - 蠑 - 衮 - 袷 - 裄 - 裥 - 觴 - 誦 - 謔 - 譫 - 诂 - 诮 - 谘 - 豔 - 豸 - 贛 - 赀 - 赇 - 赟 - 赧 - 趵 - 跹 - 踅 - 踫 - 踭 - 踴 - 蹁 - 躋 - 軁 - 軼 - 輘 - 輜 - 輷 - 轳 - 轾 - 辶 - 迤 - 郏 - 郞 - 郤 - 酃 - 釁 - 鈉 - 銹 - 鋐 - 鋺 - 鎔 - 鏰 - 鐃 - 鑚 - 鑠 - 钸 - 钺 - 铈 - 锊 - 锿 - 镎 - 镗 - 镥 - 閏 - 闼 - 阊 - 陖 - 餒 - 餛 - 駭 - 驥 - 驺 - 骯 - 髎 - 魨 - 鯇 - 鰈 - 鰒 - 鰜 - 鰤 - 鰰 - 鲔 - 鲯 - 鳊 - 鳔 - 鵲 - 鸱 - 鹮 - 麭 - 鼈 - 齲 - 걤 - 곌 - 괍 - 괞 - 굣 - 굥 - 귬 - 꺅 - 꺌 - 꺽 - 꼳 - 꾿 - 뀜 - 뀨 - 끍 - 낋 - 놘 - 뇬 - 뉸 - 늗 - 늣 - 늰 - 댇 - 댤 - 뎄 - 됭 - 듶 - 딿 - 땁 - 떄 - 떔 - 떙 - 뗏 - 뚬 - 뛘 - 뜷 - 띈 - 랒 - 롸 - 뢴 - 룃 - 룔 - 룟 - 뤤 - 릈 - 맇 - 맽 - 맿 - 먁 - 먕 - 먙 - 먺 - 뫠 - 뫼 - 뭠 - 뭬 - 믁 - 믕 - 믾 - 밎 - 밨 - 밷 - 뱁 - 뱌 - 뱍 - 벛 - 벰 - 볌 - 볕 - 봥 - 뷴 - 빢 - 빶 - 빻 - 뺌 - 뺸 - 뻬 - 뼉 - 뿍 - 뿟 - 뿡 - 샄 - 샆 - 셍 - 셕 - 셴 - 셸 - 솨 - 쇤 - 쇳 - 쇽 - 쉅 - 쉈 - 쉿 - 슌 - 슦 - 썅 - 썜 - 쎙 - 쎤 - 쎼 - 쏸 - 쑨 - 쒀 - 쒔 - 쒸 - 쒹 - 쓕 - 씰 - 얐 - 얠 - 얬 - 엡 - 엩 - 옂 - 옉 - 옙 - 옜 - 옭 - 옼 - 욧 - 윅 - 읒 - 읖 - 읗 - 잕 - 쟬 - 젰 - 젱 - 졀 - 졍 - 졔 - 좐 - 줜 - 쥑 - 쥼 - 짯 - 쨀 - 쩄 - 쩟 - 쩬 - 쪈 - 쪙 - 쪠 - 쫏 - 쬠 - 쬬 - 쮸 - 쯩 - 찠 - 챂 - 첬 - 쳔 - 촐 - 촤 - 춴 - 걔 - Ŋ - Ǹ - ʙ - Ӷ - ٓ - ḏ - Ậ - ☉ - 쌕 - 쐴 - 윘 - 짊 - Ȋ - Ѓ - ऍ - ଐ - ഢ - ∙ - 嘸 - 妤 - 枦 - 橹 - 櫳 - 淥 - 澩 - 癢 - 蚧 - 蛚 - 蛣 - 褦 - 賒 - 賰 - 趒 - 輓 - 鋁 - 镡 - 鳉 - 鸝 - 띔 - 챤 - ▲ - 倻 - 剉 - 哞 - 坵 - 堝 - 弶 - 扴 - 挾 - 摃 - 敨 - 暅 - 櫚 - 櫥 - 欉 - 洐 - 煳 - 猄 - 瑀 - 硨 - 磲 - 篋 - 篪 - 簋 - 簣 - 緹 - 绐 - 耨 - 臈 - 舸 - 茖 - 莜 - 莵 - 蓥 - 藺 - 蘰 - 蝟 - 衾 - 襠 - 誣 - 豕 - 蹍 - 蹺 - 逖 - 郿 - 鍚 - 鑢 - 铳 - 锝 - 闔 - 隳 - 隴 - 餉 - 饂 - 駘 - 骢 - 骹 - 鯣 - 鲴 - 鲽 - 鳎 - 鵪 - ⻣ - 䴕 - 叺 - 囥 - 壠 - 寀 - 掼 - 昶 - 梘 - 洌 - 瞽 - 稘 - 莦 - 蚨 - 覕 - 輌 - 鄲 - 鈾 - 铫 - 顰 - 饪 - 骈 - 髏 - 髑 - 髣 - 髴 - 魎 - 鳚 - 鼙 - 僢 - ํ - 僣 - 旎 - 糒 - ằ - 儕 - ắ - 絛 - <sos> - <eos> - <sop> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: null zero_infinity: true brctc_risk_strategy: exp brctc_group_strategy: end brctc_risk_factor: 0.0 use_preprocessor: true token_type: bpe bpemodel: data/token_list/bpe_unigram50000/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' short_noise_thres: 0.5 frontend: default frontend_conf: n_fft: 512 win_length: 400 hop_length: 160 fs: 16k specaug: specaug specaug_conf: apply_time_warp: false time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 4 normalize: global_mvn normalize_conf: stats_file: exp/s2t_stats_raw_bpe50000/train/feats_stats.npz model: espnet model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false sym_na: <na> preencoder: null preencoder_conf: {} encoder: e_branchformer encoder_conf: output_size: 768 attention_heads: 12 attention_layer_type: selfattn pos_enc_layer_type: abs_pos rel_pos_type: latest cgmlp_linear_units: 3072 cgmlp_conv_kernel: 31 use_linear_after_conv: false gate_activation: identity num_blocks: 9 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d layer_drop_rate: 0.0 linear_units: 3072 positionwise_layer_type: linear use_ffn: true macaron_ffn: true merge_conv_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 12 linear_units: 3072 num_blocks: 9 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 preprocessor: s2t preprocessor_conf: text_prev_name: text_prev text_ctc_name: text_ctc fs: 16000 na_symbol: <na> speech_length: 30 speech_resolution: 0.02 speech_init_silence: 30 text_prev_apply_prob: 0.5 time_apply_prob: 0.5 notime_symbol: <notimestamps> first_time_symbol: <0.00> last_time_symbol: <30.00> required: - output_dir - token_list version: '202310' distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
[ "TRANSLATION" ]
[ "ANEM", "BEAR", "CAS", "CHIA", "CRAFT", "GAD", "MEDAL", "PCR" ]
Non_BioNLP
nickmuchi/finbert-tone-finetuned-fintwitter-classification
nickmuchi
text-classification
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "financial-tweets-sentiment-analysis", "sentiment-analysis", "financial", "stocks", "sentiment", "dataset:zeroshot/twitter-financial-news-sentiment", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,672
1,679
138
12
--- datasets: - zeroshot/twitter-financial-news-sentiment metrics: - accuracy - f1 - precision - recall tags: - generated_from_trainer - financial-tweets-sentiment-analysis - sentiment-analysis - financial - stocks - sentiment widget: - text: $LOW - Lowe's racks up another positive rating despite recession risk example_title: Bullish Sentiment - text: $HNHAF $HNHPD $AAPL - Trendforce cuts iPhone estimate after Foxconn delay example_title: Bearish Sentiment - text: 'Coin Toss: Morgan Stanley Raises Tesla Bull Case To $500, Keeps Bear Case At $10' example_title: Neutral Sentiment model-index: - name: finbert-tone-finetuned-fintwitter-classification results: - task: type: text-classification name: Text Classification dataset: name: twitter-financial-news-sentiment type: finance metrics: - type: F1 value: 0.8838 name: F1 - type: accuracy value: 0.884 name: accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finbert-tone-finetuned-fintwitter-classification This model is a fine-tuned version of [yiyanghkust/finbert-tone](https://huggingface.co/yiyanghkust/finbert-tone) on [Twitter Financial News](https://huggingface.co/datasets/zeroshot/twitter-financial-news-sentiment) dataset. It achieves the following results on the evaluation set: - Loss: 1.4078 - Accuracy: 0.8840 - F1: 0.8838 - Precision: 0.8838 - Recall: 0.8840 ## Model description Model determines the financial sentiment of given tweets. Given the unbalanced distribution of the class labels, the weights were adjusted to pay attention to the less sampled labels which should increase overall performance.. ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.6385 | 1.0 | 597 | 0.3688 | 0.8668 | 0.8693 | 0.8744 | 0.8668 | | 0.3044 | 2.0 | 1194 | 0.3994 | 0.8744 | 0.8726 | 0.8739 | 0.8744 | | 0.1833 | 3.0 | 1791 | 0.6212 | 0.8781 | 0.8764 | 0.8762 | 0.8781 | | 0.1189 | 4.0 | 2388 | 0.8370 | 0.8740 | 0.8743 | 0.8748 | 0.8740 | | 0.0759 | 5.0 | 2985 | 0.9107 | 0.8807 | 0.8798 | 0.8796 | 0.8807 | | 0.0291 | 6.0 | 3582 | 0.9711 | 0.8836 | 0.8825 | 0.8821 | 0.8836 | | 0.0314 | 7.0 | 4179 | 1.1305 | 0.8819 | 0.8811 | 0.8812 | 0.8819 | | 0.0217 | 8.0 | 4776 | 1.0190 | 0.8811 | 0.8813 | 0.8816 | 0.8811 | | 0.0227 | 9.0 | 5373 | 1.1940 | 0.8844 | 0.8832 | 0.8838 | 0.8844 | | 0.0156 | 10.0 | 5970 | 1.2595 | 0.8752 | 0.8768 | 0.8801 | 0.8752 | | 0.0135 | 11.0 | 6567 | 1.1931 | 0.8760 | 0.8768 | 0.8780 | 0.8760 | | 0.009 | 12.0 | 7164 | 1.2154 | 0.8857 | 0.8852 | 0.8848 | 0.8857 | | 0.0058 | 13.0 | 7761 | 1.3874 | 0.8748 | 0.8759 | 0.8776 | 0.8748 | | 0.009 | 14.0 | 8358 | 1.4193 | 0.8740 | 0.8754 | 0.8780 | 0.8740 | | 0.0042 | 15.0 | 8955 | 1.2999 | 0.8807 | 0.8800 | 0.8796 | 0.8807 | | 0.0028 | 16.0 | 9552 | 1.3428 | 0.8802 | 0.8805 | 0.8817 | 0.8802 | | 0.0029 | 17.0 | 10149 | 1.3959 | 0.8807 | 0.8807 | 0.8810 | 0.8807 | | 0.0022 | 18.0 | 10746 | 1.4149 | 0.8827 | 0.8823 | 0.8824 | 0.8827 | | 0.0037 | 19.0 | 11343 | 1.4078 | 0.8840 | 0.8838 | 0.8838 | 0.8840 | | 0.001 | 20.0 | 11940 | 1.4236 | 0.8823 | 0.8823 | 0.8825 | 0.8823 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
[ "TEXT_CLASSIFICATION" ]
[ "BEAR" ]
Non_BioNLP
sdadas/mmlw-e5-large
sdadas
sentence-similarity
[ "sentence-transformers", "pytorch", "safetensors", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "mteb", "pl", "arxiv:2402.13350", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,700
1,730
103
0
--- language: pl license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - mteb widget: - source_sentence: 'query: Jak dożyć 100 lat?' sentences: - 'passage: Trzeba zdrowo się odżywiać i uprawiać sport.' - 'passage: Trzeba pić alkohol, imprezować i jeździć szybkimi autami.' - 'passage: Gdy trwała kampania politycy zapewniali, że rozprawią się z zakazem niedzielnego handlu.' model-index: - name: mmlw-e5-large results: - task: type: Clustering dataset: name: MTEB 8TagsClustering type: PL-MTEB/8tags-clustering config: default split: test revision: None metrics: - type: v_measure value: 30.623921415441725 - task: type: Classification dataset: name: MTEB AllegroReviews type: PL-MTEB/allegro-reviews config: default split: test revision: None metrics: - type: accuracy value: 37.683896620278325 - type: f1 value: 34.19193027014284 - task: type: Retrieval dataset: name: MTEB ArguAna-PL type: arguana-pl config: default split: test revision: None metrics: - type: map_at_1 value: 38.407000000000004 - type: map_at_10 value: 55.147 - type: map_at_100 value: 55.757 - type: map_at_1000 value: 55.761 - type: map_at_3 value: 51.268 - type: map_at_5 value: 53.696999999999996 - type: mrr_at_1 value: 40.043 - type: mrr_at_10 value: 55.840999999999994 - type: mrr_at_100 value: 56.459 - type: mrr_at_1000 value: 56.462999999999994 - type: mrr_at_3 value: 52.074 - type: mrr_at_5 value: 54.364999999999995 - type: ndcg_at_1 value: 38.407000000000004 - type: ndcg_at_10 value: 63.248000000000005 - type: ndcg_at_100 value: 65.717 - type: ndcg_at_1000 value: 65.79 - type: ndcg_at_3 value: 55.403999999999996 - type: ndcg_at_5 value: 59.760000000000005 - type: precision_at_1 value: 38.407000000000004 - type: precision_at_10 value: 8.862 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.451 - type: precision_at_5 value: 15.576 - type: recall_at_1 value: 38.407000000000004 - type: recall_at_10 value: 88.62 - type: recall_at_100 value: 99.075 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 67.354 - type: recall_at_5 value: 77.881 - task: type: Classification dataset: name: MTEB CBD type: PL-MTEB/cbd config: default split: test revision: None metrics: - type: accuracy value: 66.14999999999999 - type: ap value: 21.69513674684204 - type: f1 value: 56.48142830893528 - task: type: PairClassification dataset: name: MTEB CDSC-E type: PL-MTEB/cdsce-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 89.4 - type: cos_sim_ap value: 76.83228768203222 - type: cos_sim_f1 value: 65.3658536585366 - type: cos_sim_precision value: 60.909090909090914 - type: cos_sim_recall value: 70.52631578947368 - type: dot_accuracy value: 84.1 - type: dot_ap value: 57.26072201751864 - type: dot_f1 value: 62.75395033860045 - type: dot_precision value: 54.9407114624506 - type: dot_recall value: 73.15789473684211 - type: euclidean_accuracy value: 89.4 - type: euclidean_ap value: 76.59095263388942 - type: euclidean_f1 value: 65.21739130434783 - type: euclidean_precision value: 60.26785714285714 - type: euclidean_recall value: 71.05263157894737 - type: manhattan_accuracy value: 89.4 - type: manhattan_ap value: 76.58825999753456 - type: manhattan_f1 value: 64.72019464720195 - type: manhattan_precision value: 60.18099547511312 - type: manhattan_recall value: 70.0 - type: max_accuracy value: 89.4 - type: max_ap value: 76.83228768203222 - type: max_f1 value: 65.3658536585366 - task: type: STS dataset: name: MTEB CDSC-R type: PL-MTEB/cdscr-sts config: default split: test revision: None metrics: - type: cos_sim_pearson value: 93.73949495291659 - type: cos_sim_spearman value: 93.50397366192922 - type: euclidean_pearson value: 92.47498888987636 - type: euclidean_spearman value: 93.39315936230747 - type: manhattan_pearson value: 92.47250250777654 - type: manhattan_spearman value: 93.36739690549109 - task: type: Retrieval dataset: name: MTEB DBPedia-PL type: dbpedia-pl config: default split: test revision: None metrics: - type: map_at_1 value: 8.434 - type: map_at_10 value: 18.424 - type: map_at_100 value: 26.428 - type: map_at_1000 value: 28.002 - type: map_at_3 value: 13.502 - type: map_at_5 value: 15.577 - type: mrr_at_1 value: 63.0 - type: mrr_at_10 value: 72.714 - type: mrr_at_100 value: 73.021 - type: mrr_at_1000 value: 73.028 - type: mrr_at_3 value: 70.75 - type: mrr_at_5 value: 72.3 - type: ndcg_at_1 value: 52.75 - type: ndcg_at_10 value: 39.839999999999996 - type: ndcg_at_100 value: 44.989000000000004 - type: ndcg_at_1000 value: 52.532999999999994 - type: ndcg_at_3 value: 45.198 - type: ndcg_at_5 value: 42.015 - type: precision_at_1 value: 63.0 - type: precision_at_10 value: 31.05 - type: precision_at_100 value: 10.26 - type: precision_at_1000 value: 1.9879999999999998 - type: precision_at_3 value: 48.25 - type: precision_at_5 value: 40.45 - type: recall_at_1 value: 8.434 - type: recall_at_10 value: 24.004 - type: recall_at_100 value: 51.428 - type: recall_at_1000 value: 75.712 - type: recall_at_3 value: 15.015 - type: recall_at_5 value: 18.282999999999998 - task: type: Retrieval dataset: name: MTEB FiQA-PL type: fiqa-pl config: default split: test revision: None metrics: - type: map_at_1 value: 19.088 - type: map_at_10 value: 31.818 - type: map_at_100 value: 33.689 - type: map_at_1000 value: 33.86 - type: map_at_3 value: 27.399 - type: map_at_5 value: 29.945 - type: mrr_at_1 value: 38.117000000000004 - type: mrr_at_10 value: 47.668 - type: mrr_at_100 value: 48.428 - type: mrr_at_1000 value: 48.475 - type: mrr_at_3 value: 45.242 - type: mrr_at_5 value: 46.716 - type: ndcg_at_1 value: 38.272 - type: ndcg_at_10 value: 39.903 - type: ndcg_at_100 value: 46.661 - type: ndcg_at_1000 value: 49.625 - type: ndcg_at_3 value: 35.921 - type: ndcg_at_5 value: 37.558 - type: precision_at_1 value: 38.272 - type: precision_at_10 value: 11.358 - type: precision_at_100 value: 1.8190000000000002 - type: precision_at_1000 value: 0.23500000000000001 - type: precision_at_3 value: 24.434 - type: precision_at_5 value: 18.395 - type: recall_at_1 value: 19.088 - type: recall_at_10 value: 47.355999999999995 - type: recall_at_100 value: 72.451 - type: recall_at_1000 value: 90.257 - type: recall_at_3 value: 32.931 - type: recall_at_5 value: 39.878 - task: type: Retrieval dataset: name: MTEB HotpotQA-PL type: hotpotqa-pl config: default split: test revision: None metrics: - type: map_at_1 value: 39.095 - type: map_at_10 value: 62.529 - type: map_at_100 value: 63.425 - type: map_at_1000 value: 63.483000000000004 - type: map_at_3 value: 58.887 - type: map_at_5 value: 61.18599999999999 - type: mrr_at_1 value: 78.123 - type: mrr_at_10 value: 84.231 - type: mrr_at_100 value: 84.408 - type: mrr_at_1000 value: 84.414 - type: mrr_at_3 value: 83.286 - type: mrr_at_5 value: 83.94 - type: ndcg_at_1 value: 78.19 - type: ndcg_at_10 value: 70.938 - type: ndcg_at_100 value: 73.992 - type: ndcg_at_1000 value: 75.1 - type: ndcg_at_3 value: 65.863 - type: ndcg_at_5 value: 68.755 - type: precision_at_1 value: 78.19 - type: precision_at_10 value: 14.949000000000002 - type: precision_at_100 value: 1.733 - type: precision_at_1000 value: 0.188 - type: precision_at_3 value: 42.381 - type: precision_at_5 value: 27.711000000000002 - type: recall_at_1 value: 39.095 - type: recall_at_10 value: 74.747 - type: recall_at_100 value: 86.631 - type: recall_at_1000 value: 93.923 - type: recall_at_3 value: 63.571999999999996 - type: recall_at_5 value: 69.27799999999999 - task: type: Retrieval dataset: name: MTEB MSMARCO-PL type: msmarco-pl config: default split: validation revision: None metrics: - type: map_at_1 value: 19.439999999999998 - type: map_at_10 value: 30.264000000000003 - type: map_at_100 value: 31.438 - type: map_at_1000 value: 31.495 - type: map_at_3 value: 26.735 - type: map_at_5 value: 28.716 - type: mrr_at_1 value: 19.914 - type: mrr_at_10 value: 30.753999999999998 - type: mrr_at_100 value: 31.877 - type: mrr_at_1000 value: 31.929000000000002 - type: mrr_at_3 value: 27.299 - type: mrr_at_5 value: 29.254 - type: ndcg_at_1 value: 20.014000000000003 - type: ndcg_at_10 value: 36.472 - type: ndcg_at_100 value: 42.231 - type: ndcg_at_1000 value: 43.744 - type: ndcg_at_3 value: 29.268 - type: ndcg_at_5 value: 32.79 - type: precision_at_1 value: 20.014000000000003 - type: precision_at_10 value: 5.814 - type: precision_at_100 value: 0.8710000000000001 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 12.426 - type: precision_at_5 value: 9.238 - type: recall_at_1 value: 19.439999999999998 - type: recall_at_10 value: 55.535000000000004 - type: recall_at_100 value: 82.44399999999999 - type: recall_at_1000 value: 94.217 - type: recall_at_3 value: 35.963 - type: recall_at_5 value: 44.367000000000004 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.01412239408205 - type: f1 value: 70.04544187503352 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.26899798251513 - type: f1 value: 75.55876166863844 - task: type: Retrieval dataset: name: MTEB NFCorpus-PL type: nfcorpus-pl config: default split: test revision: None metrics: - type: map_at_1 value: 5.772 - type: map_at_10 value: 12.708 - type: map_at_100 value: 16.194 - type: map_at_1000 value: 17.630000000000003 - type: map_at_3 value: 9.34 - type: map_at_5 value: 10.741 - type: mrr_at_1 value: 43.344 - type: mrr_at_10 value: 53.429 - type: mrr_at_100 value: 53.88699999999999 - type: mrr_at_1000 value: 53.925 - type: mrr_at_3 value: 51.342 - type: mrr_at_5 value: 52.456 - type: ndcg_at_1 value: 41.641 - type: ndcg_at_10 value: 34.028000000000006 - type: ndcg_at_100 value: 31.613000000000003 - type: ndcg_at_1000 value: 40.428 - type: ndcg_at_3 value: 38.991 - type: ndcg_at_5 value: 36.704 - type: precision_at_1 value: 43.034 - type: precision_at_10 value: 25.324999999999996 - type: precision_at_100 value: 7.889 - type: precision_at_1000 value: 2.069 - type: precision_at_3 value: 36.739 - type: precision_at_5 value: 32.074000000000005 - type: recall_at_1 value: 5.772 - type: recall_at_10 value: 16.827 - type: recall_at_100 value: 32.346000000000004 - type: recall_at_1000 value: 62.739 - type: recall_at_3 value: 10.56 - type: recall_at_5 value: 12.655 - task: type: Retrieval dataset: name: MTEB NQ-PL type: nq-pl config: default split: test revision: None metrics: - type: map_at_1 value: 26.101000000000003 - type: map_at_10 value: 39.912 - type: map_at_100 value: 41.037 - type: map_at_1000 value: 41.077000000000005 - type: map_at_3 value: 35.691 - type: map_at_5 value: 38.155 - type: mrr_at_1 value: 29.403000000000002 - type: mrr_at_10 value: 42.376999999999995 - type: mrr_at_100 value: 43.248999999999995 - type: mrr_at_1000 value: 43.277 - type: mrr_at_3 value: 38.794000000000004 - type: mrr_at_5 value: 40.933 - type: ndcg_at_1 value: 29.519000000000002 - type: ndcg_at_10 value: 47.33 - type: ndcg_at_100 value: 52.171 - type: ndcg_at_1000 value: 53.125 - type: ndcg_at_3 value: 39.316 - type: ndcg_at_5 value: 43.457 - type: precision_at_1 value: 29.519000000000002 - type: precision_at_10 value: 8.03 - type: precision_at_100 value: 1.075 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 18.009 - type: precision_at_5 value: 13.221 - type: recall_at_1 value: 26.101000000000003 - type: recall_at_10 value: 67.50399999999999 - type: recall_at_100 value: 88.64699999999999 - type: recall_at_1000 value: 95.771 - type: recall_at_3 value: 46.669 - type: recall_at_5 value: 56.24 - task: type: Classification dataset: name: MTEB PAC type: laugustyniak/abusive-clauses-pl config: default split: test revision: None metrics: - type: accuracy value: 63.76773819866782 - type: ap value: 74.87896817642536 - type: f1 value: 61.420506092721425 - task: type: PairClassification dataset: name: MTEB PPC type: PL-MTEB/ppc-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 82.1 - type: cos_sim_ap value: 91.09417013497443 - type: cos_sim_f1 value: 84.78437754271766 - type: cos_sim_precision value: 83.36 - type: cos_sim_recall value: 86.25827814569537 - type: dot_accuracy value: 75.9 - type: dot_ap value: 86.82680649789796 - type: dot_f1 value: 80.5379746835443 - type: dot_precision value: 77.12121212121212 - type: dot_recall value: 84.27152317880795 - type: euclidean_accuracy value: 81.6 - type: euclidean_ap value: 90.81248760600693 - type: euclidean_f1 value: 84.35374149659863 - type: euclidean_precision value: 86.7132867132867 - type: euclidean_recall value: 82.11920529801324 - type: manhattan_accuracy value: 81.6 - type: manhattan_ap value: 90.81272803548767 - type: manhattan_f1 value: 84.33530906011855 - type: manhattan_precision value: 86.30849220103987 - type: manhattan_recall value: 82.45033112582782 - type: max_accuracy value: 82.1 - type: max_ap value: 91.09417013497443 - type: max_f1 value: 84.78437754271766 - task: type: PairClassification dataset: name: MTEB PSC type: PL-MTEB/psc-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 98.05194805194806 - type: cos_sim_ap value: 99.52709687103496 - type: cos_sim_f1 value: 96.83257918552036 - type: cos_sim_precision value: 95.82089552238806 - type: cos_sim_recall value: 97.86585365853658 - type: dot_accuracy value: 92.30055658627087 - type: dot_ap value: 94.12759311032353 - type: dot_f1 value: 87.00906344410878 - type: dot_precision value: 86.22754491017965 - type: dot_recall value: 87.8048780487805 - type: euclidean_accuracy value: 98.05194805194806 - type: euclidean_ap value: 99.49402675624125 - type: euclidean_f1 value: 96.8133535660091 - type: euclidean_precision value: 96.37462235649546 - type: euclidean_recall value: 97.2560975609756 - type: manhattan_accuracy value: 98.05194805194806 - type: manhattan_ap value: 99.50120505935962 - type: manhattan_f1 value: 96.8133535660091 - type: manhattan_precision value: 96.37462235649546 - type: manhattan_recall value: 97.2560975609756 - type: max_accuracy value: 98.05194805194806 - type: max_ap value: 99.52709687103496 - type: max_f1 value: 96.83257918552036 - task: type: Classification dataset: name: MTEB PolEmo2.0-IN type: PL-MTEB/polemo2_in config: default split: test revision: None metrics: - type: accuracy value: 69.45983379501385 - type: f1 value: 68.60917948426784 - task: type: Classification dataset: name: MTEB PolEmo2.0-OUT type: PL-MTEB/polemo2_out config: default split: test revision: None metrics: - type: accuracy value: 43.13765182186235 - type: f1 value: 36.15557441785656 - task: type: Retrieval dataset: name: MTEB Quora-PL type: quora-pl config: default split: test revision: None metrics: - type: map_at_1 value: 67.448 - type: map_at_10 value: 81.566 - type: map_at_100 value: 82.284 - type: map_at_1000 value: 82.301 - type: map_at_3 value: 78.425 - type: map_at_5 value: 80.43400000000001 - type: mrr_at_1 value: 77.61 - type: mrr_at_10 value: 84.467 - type: mrr_at_100 value: 84.63199999999999 - type: mrr_at_1000 value: 84.634 - type: mrr_at_3 value: 83.288 - type: mrr_at_5 value: 84.095 - type: ndcg_at_1 value: 77.66 - type: ndcg_at_10 value: 85.63199999999999 - type: ndcg_at_100 value: 87.166 - type: ndcg_at_1000 value: 87.306 - type: ndcg_at_3 value: 82.32300000000001 - type: ndcg_at_5 value: 84.22 - type: precision_at_1 value: 77.66 - type: precision_at_10 value: 13.136000000000001 - type: precision_at_100 value: 1.522 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 36.153 - type: precision_at_5 value: 23.982 - type: recall_at_1 value: 67.448 - type: recall_at_10 value: 93.83200000000001 - type: recall_at_100 value: 99.212 - type: recall_at_1000 value: 99.94 - type: recall_at_3 value: 84.539 - type: recall_at_5 value: 89.71000000000001 - task: type: Retrieval dataset: name: MTEB SCIDOCS-PL type: scidocs-pl config: default split: test revision: None metrics: - type: map_at_1 value: 4.393 - type: map_at_10 value: 11.472 - type: map_at_100 value: 13.584999999999999 - type: map_at_1000 value: 13.918 - type: map_at_3 value: 8.212 - type: map_at_5 value: 9.864 - type: mrr_at_1 value: 21.7 - type: mrr_at_10 value: 32.268 - type: mrr_at_100 value: 33.495000000000005 - type: mrr_at_1000 value: 33.548 - type: mrr_at_3 value: 29.15 - type: mrr_at_5 value: 30.91 - type: ndcg_at_1 value: 21.6 - type: ndcg_at_10 value: 19.126 - type: ndcg_at_100 value: 27.496 - type: ndcg_at_1000 value: 33.274 - type: ndcg_at_3 value: 18.196 - type: ndcg_at_5 value: 15.945 - type: precision_at_1 value: 21.6 - type: precision_at_10 value: 9.94 - type: precision_at_100 value: 2.1999999999999997 - type: precision_at_1000 value: 0.359 - type: precision_at_3 value: 17.2 - type: precision_at_5 value: 14.12 - type: recall_at_1 value: 4.393 - type: recall_at_10 value: 20.166999999999998 - type: recall_at_100 value: 44.678000000000004 - type: recall_at_1000 value: 72.868 - type: recall_at_3 value: 10.473 - type: recall_at_5 value: 14.313 - task: type: PairClassification dataset: name: MTEB SICK-E-PL type: PL-MTEB/sicke-pl-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 82.65389319200979 - type: cos_sim_ap value: 76.13749398520014 - type: cos_sim_f1 value: 66.64355062413314 - type: cos_sim_precision value: 64.93243243243244 - type: cos_sim_recall value: 68.44729344729345 - type: dot_accuracy value: 76.0905014268243 - type: dot_ap value: 58.058968583382494 - type: dot_f1 value: 61.181080324657145 - type: dot_precision value: 50.391885661595204 - type: dot_recall value: 77.84900284900284 - type: euclidean_accuracy value: 82.61312678353036 - type: euclidean_ap value: 76.10290283033221 - type: euclidean_f1 value: 66.50782845473111 - type: euclidean_precision value: 63.6897001303781 - type: euclidean_recall value: 69.58689458689459 - type: manhattan_accuracy value: 82.6742763962495 - type: manhattan_ap value: 76.12712309700966 - type: manhattan_f1 value: 66.59700452803902 - type: manhattan_precision value: 65.16700749829583 - type: manhattan_recall value: 68.09116809116809 - type: max_accuracy value: 82.6742763962495 - type: max_ap value: 76.13749398520014 - type: max_f1 value: 66.64355062413314 - task: type: STS dataset: name: MTEB SICK-R-PL type: PL-MTEB/sickr-pl-sts config: default split: test revision: None metrics: - type: cos_sim_pearson value: 81.23898481255246 - type: cos_sim_spearman value: 76.0416957474899 - type: euclidean_pearson value: 78.96475496102107 - type: euclidean_spearman value: 76.07208683063504 - type: manhattan_pearson value: 78.92666424673251 - type: manhattan_spearman value: 76.04968227583831 - task: type: STS dataset: name: MTEB STS22 (pl) type: mteb/sts22-crosslingual-sts config: pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 39.13987124398541 - type: cos_sim_spearman value: 40.40194528288759 - type: euclidean_pearson value: 29.14566247168167 - type: euclidean_spearman value: 39.97389932591777 - type: manhattan_pearson value: 29.172993134388935 - type: manhattan_spearman value: 39.85681935287037 - task: type: Retrieval dataset: name: MTEB SciFact-PL type: scifact-pl config: default split: test revision: None metrics: - type: map_at_1 value: 57.260999999999996 - type: map_at_10 value: 66.92399999999999 - type: map_at_100 value: 67.443 - type: map_at_1000 value: 67.47800000000001 - type: map_at_3 value: 64.859 - type: map_at_5 value: 65.71900000000001 - type: mrr_at_1 value: 60.333000000000006 - type: mrr_at_10 value: 67.95400000000001 - type: mrr_at_100 value: 68.42 - type: mrr_at_1000 value: 68.45 - type: mrr_at_3 value: 66.444 - type: mrr_at_5 value: 67.128 - type: ndcg_at_1 value: 60.333000000000006 - type: ndcg_at_10 value: 71.209 - type: ndcg_at_100 value: 73.37 - type: ndcg_at_1000 value: 74.287 - type: ndcg_at_3 value: 67.66799999999999 - type: ndcg_at_5 value: 68.644 - type: precision_at_1 value: 60.333000000000006 - type: precision_at_10 value: 9.467 - type: precision_at_100 value: 1.053 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.778000000000002 - type: precision_at_5 value: 16.933 - type: recall_at_1 value: 57.260999999999996 - type: recall_at_10 value: 83.256 - type: recall_at_100 value: 92.767 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 72.933 - type: recall_at_5 value: 75.744 - task: type: Retrieval dataset: name: MTEB TRECCOVID-PL type: trec-covid-pl config: default split: test revision: None metrics: - type: map_at_1 value: 0.22 - type: map_at_10 value: 1.693 - type: map_at_100 value: 9.281 - type: map_at_1000 value: 21.462999999999997 - type: map_at_3 value: 0.609 - type: map_at_5 value: 0.9570000000000001 - type: mrr_at_1 value: 80.0 - type: mrr_at_10 value: 88.73299999999999 - type: mrr_at_100 value: 88.73299999999999 - type: mrr_at_1000 value: 88.73299999999999 - type: mrr_at_3 value: 88.333 - type: mrr_at_5 value: 88.73299999999999 - type: ndcg_at_1 value: 79.0 - type: ndcg_at_10 value: 71.177 - type: ndcg_at_100 value: 52.479 - type: ndcg_at_1000 value: 45.333 - type: ndcg_at_3 value: 77.48 - type: ndcg_at_5 value: 76.137 - type: precision_at_1 value: 82.0 - type: precision_at_10 value: 74.0 - type: precision_at_100 value: 53.68000000000001 - type: precision_at_1000 value: 19.954 - type: precision_at_3 value: 80.667 - type: precision_at_5 value: 80.80000000000001 - type: recall_at_1 value: 0.22 - type: recall_at_10 value: 1.934 - type: recall_at_100 value: 12.728 - type: recall_at_1000 value: 41.869 - type: recall_at_3 value: 0.637 - type: recall_at_5 value: 1.042 --- <h1 align="center">MMLW-e5-large</h1> MMLW (muszę mieć lepszą wiadomość) are neural text encoders for Polish. This is a distilled model that can be used to generate embeddings applicable to many tasks such as semantic similarity, clustering, information retrieval. The model can also serve as a base for further fine-tuning. It transforms texts to 1024 dimensional vectors. The model was initialized with multilingual E5 checkpoint, and then trained with [multilingual knowledge distillation method](https://aclanthology.org/2020.emnlp-main.365/) on a diverse corpus of 60 million Polish-English text pairs. We utilised [English FlagEmbeddings (BGE)](https://huggingface.co/BAAI/bge-base-en) as teacher models for distillation. ## Usage (Sentence-Transformers) ⚠️ Our embedding models require the use of specific prefixes and suffixes when encoding texts. For this model, queries should be prefixed with **"query: "** and passages with **"passage: "** ⚠️ You can use the model like this with [sentence-transformers](https://www.SBERT.net): ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim query_prefix = "query: " answer_prefix = "passage: " queries = [query_prefix + "Jak dożyć 100 lat?"] answers = [ answer_prefix + "Trzeba zdrowo się odżywiać i uprawiać sport.", answer_prefix + "Trzeba pić alkohol, imprezować i jeździć szybkimi autami.", answer_prefix + "Gdy trwała kampania politycy zapewniali, że rozprawią się z zakazem niedzielnego handlu." ] model = SentenceTransformer("sdadas/mmlw-e5-large") queries_emb = model.encode(queries, convert_to_tensor=True, show_progress_bar=False) answers_emb = model.encode(answers, convert_to_tensor=True, show_progress_bar=False) best_answer = cos_sim(queries_emb, answers_emb).argmax().item() print(answers[best_answer]) # Trzeba zdrowo się odżywiać i uprawiać sport. ``` ## Evaluation Results - The model achieves an **Average Score** of **61.17** on the Polish Massive Text Embedding Benchmark (MTEB). See [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) for detailed results. - The model achieves **NDCG@10** of **56.09** on the Polish Information Retrieval Benchmark. See [PIRB Leaderboard](https://huggingface.co/spaces/sdadas/pirb) for detailed results. ## Acknowledgements This model was trained with the A100 GPU cluster support delivered by the Gdansk University of Technology within the TASK center initiative. ## Citation ```bibtex @article{dadas2024pirb, title={{PIRB}: A Comprehensive Benchmark of Polish Dense and Hybrid Text Retrieval Methods}, author={Sławomir Dadas and Michał Perełkiewicz and Rafał Poświata}, year={2024}, eprint={2402.13350}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
[ "SEMANTIC_SIMILARITY" ]
[ "SCIFACT" ]
Non_BioNLP
bijaygurung/stella_en_400M_v5
bijaygurung
sentence-similarity
[ "sentence-transformers", "pytorch", "safetensors", "new", "feature-extraction", "mteb", "transformers", "sentence-similarity", "custom_code", "arxiv:2205.13147", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,727
1,727
3,336
4
--- license: mit tags: - mteb - sentence-transformers - transformers - sentence-similarity model-index: - name: stella_en_400M_v5 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 92.35820895522387 - type: ap value: 70.81322736988783 - type: ap_weighted value: 70.81322736988783 - type: f1 value: 88.9505466159595 - type: f1_weighted value: 92.68630932872613 - type: main_score value: 92.35820895522387 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 97.1945 - type: ap value: 96.08192192244094 - type: ap_weighted value: 96.08192192244094 - type: f1 value: 97.1936887167346 - type: f1_weighted value: 97.1936887167346 - type: main_score value: 97.1945 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 59.528000000000006 - type: f1 value: 59.21016819840188 - type: f1_weighted value: 59.21016819840188 - type: main_score value: 59.528000000000006 - task: type: Retrieval dataset: name: MTEB ArguAna type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: main_score value: 64.24 - type: map_at_1 value: 40.398 - type: map_at_10 value: 56.215 - type: map_at_100 value: 56.833999999999996 - type: map_at_1000 value: 56.835 - type: map_at_20 value: 56.747 - type: map_at_3 value: 52.181 - type: map_at_5 value: 54.628 - type: mrr_at_1 value: 41.25177809388336 - type: mrr_at_10 value: 56.570762491815216 - type: mrr_at_100 value: 57.17548614361504 - type: mrr_at_1000 value: 57.176650626377466 - type: mrr_at_20 value: 57.08916253512566 - type: mrr_at_3 value: 52.47747747747754 - type: mrr_at_5 value: 54.94547178757718 - type: nauc_map_at_1000_diff1 value: 22.408086887100158 - type: nauc_map_at_1000_max value: -8.730419096847543 - type: nauc_map_at_1000_std value: -17.789262741255737 - type: nauc_map_at_100_diff1 value: 22.407371684274025 - type: nauc_map_at_100_max value: -8.732263549026266 - type: nauc_map_at_100_std value: -17.79550515579994 - type: nauc_map_at_10_diff1 value: 21.925005073301246 - type: nauc_map_at_10_max value: -8.990323944492134 - type: nauc_map_at_10_std value: -18.199246301671458 - type: nauc_map_at_1_diff1 value: 26.23276644969203 - type: nauc_map_at_1_max value: -12.376511389571245 - type: nauc_map_at_1_std value: -18.11411715207284 - type: nauc_map_at_20_diff1 value: 22.32455790850922 - type: nauc_map_at_20_max value: -8.664671547236034 - type: nauc_map_at_20_std value: -17.8290016125137 - type: nauc_map_at_3_diff1 value: 22.395462147465064 - type: nauc_map_at_3_max value: -8.206580750918844 - type: nauc_map_at_3_std value: -17.604490446911484 - type: nauc_map_at_5_diff1 value: 21.95307379904799 - type: nauc_map_at_5_max value: -8.03958102978443 - type: nauc_map_at_5_std value: -17.36578866595004 - type: nauc_mrr_at_1000_diff1 value: 20.124236798365587 - type: nauc_mrr_at_1000_max value: -9.587376069575898 - type: nauc_mrr_at_1000_std value: -17.79191612151833 - type: nauc_mrr_at_100_diff1 value: 20.123612603474033 - type: nauc_mrr_at_100_max value: -9.589187218607831 - type: nauc_mrr_at_100_std value: -17.7981617777748 - type: nauc_mrr_at_10_diff1 value: 19.723683875738075 - type: nauc_mrr_at_10_max value: -9.774151729178815 - type: nauc_mrr_at_10_std value: -18.168668675495162 - type: nauc_mrr_at_1_diff1 value: 23.945332059908132 - type: nauc_mrr_at_1_max value: -12.260461466152819 - type: nauc_mrr_at_1_std value: -18.007194922921148 - type: nauc_mrr_at_20_diff1 value: 20.04819461810257 - type: nauc_mrr_at_20_max value: -9.518368283588936 - type: nauc_mrr_at_20_std value: -17.831608149836136 - type: nauc_mrr_at_3_diff1 value: 19.8571785245832 - type: nauc_mrr_at_3_max value: -9.464375021240478 - type: nauc_mrr_at_3_std value: -17.728533927330453 - type: nauc_mrr_at_5_diff1 value: 19.670313652167827 - type: nauc_mrr_at_5_max value: -8.966372585728434 - type: nauc_mrr_at_5_std value: -17.468955834324817 - type: nauc_ndcg_at_1000_diff1 value: 21.863049281767417 - type: nauc_ndcg_at_1000_max value: -8.18698520924057 - type: nauc_ndcg_at_1000_std value: -17.634483364794804 - type: nauc_ndcg_at_100_diff1 value: 21.849924385738586 - type: nauc_ndcg_at_100_max value: -8.226437560889345 - type: nauc_ndcg_at_100_std value: -17.774648478087002 - type: nauc_ndcg_at_10_diff1 value: 19.888395590413573 - type: nauc_ndcg_at_10_max value: -8.968706085632382 - type: nauc_ndcg_at_10_std value: -19.31386964628115 - type: nauc_ndcg_at_1_diff1 value: 26.23276644969203 - type: nauc_ndcg_at_1_max value: -12.376511389571245 - type: nauc_ndcg_at_1_std value: -18.11411715207284 - type: nauc_ndcg_at_20_diff1 value: 21.38413342416933 - type: nauc_ndcg_at_20_max value: -7.636238194084164 - type: nauc_ndcg_at_20_std value: -17.946390844693028 - type: nauc_ndcg_at_3_diff1 value: 21.29169165029195 - type: nauc_ndcg_at_3_max value: -6.793840499730093 - type: nauc_ndcg_at_3_std value: -17.52359001586737 - type: nauc_ndcg_at_5_diff1 value: 20.238297656671364 - type: nauc_ndcg_at_5_max value: -6.424992706950072 - type: nauc_ndcg_at_5_std value: -17.082391132291356 - type: nauc_precision_at_1000_diff1 value: -7.05195108528572 - type: nauc_precision_at_1000_max value: 34.439879624882145 - type: nauc_precision_at_1000_std value: 68.72436351659353 - type: nauc_precision_at_100_diff1 value: -2.769464113932605 - type: nauc_precision_at_100_max value: 9.89562961226698 - type: nauc_precision_at_100_std value: -0.5880967482224028 - type: nauc_precision_at_10_diff1 value: 2.1371544726832323 - type: nauc_precision_at_10_max value: -11.93051325147756 - type: nauc_precision_at_10_std value: -30.83144187392059 - type: nauc_precision_at_1_diff1 value: 26.23276644969203 - type: nauc_precision_at_1_max value: -12.376511389571245 - type: nauc_precision_at_1_std value: -18.11411715207284 - type: nauc_precision_at_20_diff1 value: 3.780146814257504 - type: nauc_precision_at_20_max value: 17.06527540214615 - type: nauc_precision_at_20_std value: -20.36832563035565 - type: nauc_precision_at_3_diff1 value: 17.63894384012077 - type: nauc_precision_at_3_max value: -2.0220490624638887 - type: nauc_precision_at_3_std value: -17.285601413493918 - type: nauc_precision_at_5_diff1 value: 12.557855071944601 - type: nauc_precision_at_5_max value: 0.5840236463956658 - type: nauc_precision_at_5_std value: -15.827224420217846 - type: nauc_recall_at_1000_diff1 value: -7.051951085286463 - type: nauc_recall_at_1000_max value: 34.43987962487738 - type: nauc_recall_at_1000_std value: 68.724363516591 - type: nauc_recall_at_100_diff1 value: -2.769464113930314 - type: nauc_recall_at_100_max value: 9.895629612270017 - type: nauc_recall_at_100_std value: -0.58809674821745 - type: nauc_recall_at_10_diff1 value: 2.1371544726834495 - type: nauc_recall_at_10_max value: -11.930513251477253 - type: nauc_recall_at_10_std value: -30.83144187392047 - type: nauc_recall_at_1_diff1 value: 26.23276644969203 - type: nauc_recall_at_1_max value: -12.376511389571245 - type: nauc_recall_at_1_std value: -18.11411715207284 - type: nauc_recall_at_20_diff1 value: 3.7801468142575922 - type: nauc_recall_at_20_max value: 17.0652754021456 - type: nauc_recall_at_20_std value: -20.36832563035559 - type: nauc_recall_at_3_diff1 value: 17.63894384012074 - type: nauc_recall_at_3_max value: -2.02204906246383 - type: nauc_recall_at_3_std value: -17.28560141349386 - type: nauc_recall_at_5_diff1 value: 12.55785507194463 - type: nauc_recall_at_5_max value: 0.5840236463957296 - type: nauc_recall_at_5_std value: -15.827224420217856 - type: ndcg_at_1 value: 40.398 - type: ndcg_at_10 value: 64.24 - type: ndcg_at_100 value: 66.631 - type: ndcg_at_1000 value: 66.65100000000001 - type: ndcg_at_20 value: 66.086 - type: ndcg_at_3 value: 55.938 - type: ndcg_at_5 value: 60.370000000000005 - type: precision_at_1 value: 40.398 - type: precision_at_10 value: 8.962 - type: precision_at_100 value: 0.9950000000000001 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.836 - type: precision_at_3 value: 22.262 - type: precision_at_5 value: 15.519 - type: recall_at_1 value: 40.398 - type: recall_at_10 value: 89.616 - type: recall_at_100 value: 99.502 - type: recall_at_1000 value: 99.644 - type: recall_at_20 value: 96.72800000000001 - type: recall_at_3 value: 66.78500000000001 - type: recall_at_5 value: 77.596 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: main_score value: 55.1564333205451 - type: v_measure value: 55.1564333205451 - type: v_measure_std value: 14.696883012214512 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: main_score value: 49.823698316694795 - type: v_measure value: 49.823698316694795 - type: v_measure_std value: 14.951660654298186 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: main_score value: 66.15294503553424 - type: map value: 66.15294503553424 - type: mrr value: 78.53438420612935 - type: nAUC_map_diff1 value: 12.569697092717997 - type: nAUC_map_max value: 21.50670312412572 - type: nAUC_map_std value: 16.943786429229064 - type: nAUC_mrr_diff1 value: 15.590272897361238 - type: nAUC_mrr_max value: 34.96072022474653 - type: nAUC_mrr_std value: 21.649217605241045 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cosine_pearson value: 85.7824546319275 - type: cosine_spearman value: 83.29587385660628 - type: euclidean_pearson value: 84.58764190565167 - type: euclidean_spearman value: 83.30069324352772 - type: main_score value: 83.29587385660628 - type: manhattan_pearson value: 84.95996839947179 - type: manhattan_spearman value: 83.87480271054358 - type: pearson value: 85.7824546319275 - type: spearman value: 83.29587385660628 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 89.30194805194806 - type: f1 value: 89.26182507266391 - type: f1_weighted value: 89.26182507266391 - type: main_score value: 89.30194805194806 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: main_score value: 50.67972171889736 - type: v_measure value: 50.67972171889736 - type: v_measure_std value: 0.7687409980036303 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: main_score value: 45.80539715556144 - type: v_measure value: 45.80539715556144 - type: v_measure_std value: 0.9601346216579142 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: mteb/cqadupstack config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: main_score value: 44.361250000000005 - type: map_at_1 value: 28.304499999999997 - type: map_at_10 value: 38.54841666666666 - type: map_at_100 value: 39.83141666666667 - type: map_at_1000 value: 39.944750000000006 - type: map_at_20 value: 39.25341666666667 - type: map_at_3 value: 35.406749999999995 - type: map_at_5 value: 37.15558333333333 - type: mrr_at_1 value: 34.09077232860122 - type: mrr_at_10 value: 43.15445393211421 - type: mrr_at_100 value: 43.98645286848257 - type: mrr_at_1000 value: 44.037631313469404 - type: mrr_at_20 value: 43.64045813249614 - type: mrr_at_3 value: 40.674138648480486 - type: mrr_at_5 value: 42.106251182620255 - type: nauc_map_at_1000_diff1 value: 46.250011739434996 - type: nauc_map_at_1000_max value: 30.13664446260598 - type: nauc_map_at_1000_std value: 5.422301791618935 - type: nauc_map_at_100_diff1 value: 46.253631351999395 - type: nauc_map_at_100_max value: 30.12612918885181 - type: nauc_map_at_100_std value: 5.367077019987172 - type: nauc_map_at_10_diff1 value: 46.328171341741346 - type: nauc_map_at_10_max value: 29.80274612581464 - type: nauc_map_at_10_std value: 4.62996685176396 - type: nauc_map_at_1_diff1 value: 51.56118117729493 - type: nauc_map_at_1_max value: 27.94885243863768 - type: nauc_map_at_1_std value: 1.700366508927356 - type: nauc_map_at_20_diff1 value: 46.286750260299094 - type: nauc_map_at_20_max value: 29.979205290353278 - type: nauc_map_at_20_std value: 5.010588412441873 - type: nauc_map_at_3_diff1 value: 47.10018183619064 - type: nauc_map_at_3_max value: 29.062318206078753 - type: nauc_map_at_3_std value: 3.2235696254694197 - type: nauc_map_at_5_diff1 value: 46.41971733050039 - type: nauc_map_at_5_max value: 29.456798617695657 - type: nauc_map_at_5_std value: 4.0921691023077145 - type: nauc_mrr_at_1000_diff1 value: 45.88888977975723 - type: nauc_mrr_at_1000_max value: 32.162138978089544 - type: nauc_mrr_at_1000_std value: 6.2811943424217915 - type: nauc_mrr_at_100_diff1 value: 45.87480433011124 - type: nauc_mrr_at_100_max value: 32.16011334212834 - type: nauc_mrr_at_100_std value: 6.2865717772421785 - type: nauc_mrr_at_10_diff1 value: 45.849652904658825 - type: nauc_mrr_at_10_max value: 32.13847916232293 - type: nauc_mrr_at_10_std value: 6.105718728141999 - type: nauc_mrr_at_1_diff1 value: 51.013730325062156 - type: nauc_mrr_at_1_max value: 32.77457396492779 - type: nauc_mrr_at_1_std value: 4.415684893471724 - type: nauc_mrr_at_20_diff1 value: 45.86663046255274 - type: nauc_mrr_at_20_max value: 32.15219360697865 - type: nauc_mrr_at_20_std value: 6.19603046412763 - type: nauc_mrr_at_3_diff1 value: 46.522376582423185 - type: nauc_mrr_at_3_max value: 32.18259009733714 - type: nauc_mrr_at_3_std value: 5.288000648220897 - type: nauc_mrr_at_5_diff1 value: 45.86611481369745 - type: nauc_mrr_at_5_max value: 32.14261639054921 - type: nauc_mrr_at_5_std value: 5.8811238177073735 - type: nauc_ndcg_at_1000_diff1 value: 44.5055097547565 - type: nauc_ndcg_at_1000_max value: 31.149682057975458 - type: nauc_ndcg_at_1000_std value: 8.157937194901333 - type: nauc_ndcg_at_100_diff1 value: 44.12398363638596 - type: nauc_ndcg_at_100_max value: 30.878064321409994 - type: nauc_ndcg_at_100_std value: 8.40493441452808 - type: nauc_ndcg_at_10_diff1 value: 44.200093505221474 - type: nauc_ndcg_at_10_max value: 30.15267107733158 - type: nauc_ndcg_at_10_std value: 6.407495361566107 - type: nauc_ndcg_at_1_diff1 value: 51.013730325062156 - type: nauc_ndcg_at_1_max value: 32.77457396492779 - type: nauc_ndcg_at_1_std value: 4.415684893471724 - type: nauc_ndcg_at_20_diff1 value: 44.16988321564116 - type: nauc_ndcg_at_20_max value: 30.333532500651213 - type: nauc_ndcg_at_20_std value: 7.10024701386895 - type: nauc_ndcg_at_3_diff1 value: 45.35982873879988 - type: nauc_ndcg_at_3_max value: 30.288312457948702 - type: nauc_ndcg_at_3_std value: 4.653900898293395 - type: nauc_ndcg_at_5_diff1 value: 44.324558115380185 - type: nauc_ndcg_at_5_max value: 30.048149698941373 - type: nauc_ndcg_at_5_std value: 5.6684459618413205 - type: nauc_precision_at_1000_diff1 value: -7.282175798304458 - type: nauc_precision_at_1000_max value: 7.820142031765352 - type: nauc_precision_at_1000_std value: 11.736131836431172 - type: nauc_precision_at_100_diff1 value: 1.0222940256506976 - type: nauc_precision_at_100_max value: 16.12346497070298 - type: nauc_precision_at_100_std value: 18.202607395247874 - type: nauc_precision_at_10_diff1 value: 18.289439185857837 - type: nauc_precision_at_10_max value: 26.116517399154375 - type: nauc_precision_at_10_std value: 13.921214069982302 - type: nauc_precision_at_1_diff1 value: 51.013730325062156 - type: nauc_precision_at_1_max value: 32.77457396492779 - type: nauc_precision_at_1_std value: 4.415684893471724 - type: nauc_precision_at_20_diff1 value: 12.365165405210886 - type: nauc_precision_at_20_max value: 22.946297258937367 - type: nauc_precision_at_20_std value: 16.13862870358933 - type: nauc_precision_at_3_diff1 value: 32.063423642849685 - type: nauc_precision_at_3_max value: 30.140965811989407 - type: nauc_precision_at_3_std value: 8.501746262550146 - type: nauc_precision_at_5_diff1 value: 24.777203357717948 - type: nauc_precision_at_5_max value: 28.401579566848472 - type: nauc_precision_at_5_std value: 11.643246774390914 - type: nauc_recall_at_1000_diff1 value: 30.04216463401409 - type: nauc_recall_at_1000_max value: 34.98067760563842 - type: nauc_recall_at_1000_std value: 48.01453905250591 - type: nauc_recall_at_100_diff1 value: 31.193415507513972 - type: nauc_recall_at_100_max value: 28.69740149270981 - type: nauc_recall_at_100_std value: 25.20960758920368 - type: nauc_recall_at_10_diff1 value: 36.18870823636506 - type: nauc_recall_at_10_max value: 26.005625231341238 - type: nauc_recall_at_10_std value: 8.891983977041376 - type: nauc_recall_at_1_diff1 value: 51.56118117729493 - type: nauc_recall_at_1_max value: 27.94885243863768 - type: nauc_recall_at_1_std value: 1.700366508927356 - type: nauc_recall_at_20_diff1 value: 34.93996118564803 - type: nauc_recall_at_20_max value: 26.149961715956138 - type: nauc_recall_at_20_std value: 12.0657502367633 - type: nauc_recall_at_3_diff1 value: 40.80743946709512 - type: nauc_recall_at_3_max value: 26.443127773025783 - type: nauc_recall_at_3_std value: 3.7011448604241477 - type: nauc_recall_at_5_diff1 value: 37.608535157055776 - type: nauc_recall_at_5_max value: 26.168016189725822 - type: nauc_recall_at_5_std value: 6.344191564595316 - type: ndcg_at_1 value: 34.09083333333333 - type: ndcg_at_10 value: 44.361250000000005 - type: ndcg_at_100 value: 49.586166666666664 - type: ndcg_at_1000 value: 51.623583333333336 - type: ndcg_at_20 value: 46.40158333333333 - type: ndcg_at_3 value: 39.27733333333333 - type: ndcg_at_5 value: 41.662333333333336 - type: precision_at_1 value: 34.09083333333333 - type: precision_at_10 value: 7.957000000000002 - type: precision_at_100 value: 1.2521666666666669 - type: precision_at_1000 value: 0.16125 - type: precision_at_20 value: 4.6755 - type: precision_at_3 value: 18.402083333333334 - type: precision_at_5 value: 13.104333333333335 - type: recall_at_1 value: 28.304499999999997 - type: recall_at_10 value: 56.80666666666667 - type: recall_at_100 value: 79.66208333333334 - type: recall_at_1000 value: 93.6455 - type: recall_at_20 value: 64.2495 - type: recall_at_3 value: 42.431333333333335 - type: recall_at_5 value: 48.665416666666665 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: main_score value: 43.525999999999996 - type: map_at_1 value: 19.291 - type: map_at_10 value: 33.471000000000004 - type: map_at_100 value: 35.388999999999996 - type: map_at_1000 value: 35.568 - type: map_at_20 value: 34.496 - type: map_at_3 value: 28.713 - type: map_at_5 value: 31.384 - type: mrr_at_1 value: 43.77850162866449 - type: mrr_at_10 value: 56.28576598934912 - type: mrr_at_100 value: 56.8588518168194 - type: mrr_at_1000 value: 56.878236725973544 - type: mrr_at_20 value: 56.6409328120183 - type: mrr_at_3 value: 53.56134636264935 - type: mrr_at_5 value: 55.27795874049956 - type: nauc_map_at_1000_diff1 value: 27.262513153363876 - type: nauc_map_at_1000_max value: 40.099398684385584 - type: nauc_map_at_1000_std value: 18.847812394005512 - type: nauc_map_at_100_diff1 value: 27.238993503030745 - type: nauc_map_at_100_max value: 40.07730434492169 - type: nauc_map_at_100_std value: 18.795349250833684 - type: nauc_map_at_10_diff1 value: 27.70929180366227 - type: nauc_map_at_10_max value: 39.55987024970173 - type: nauc_map_at_10_std value: 17.214881544648996 - type: nauc_map_at_1_diff1 value: 43.34155892182403 - type: nauc_map_at_1_max value: 38.23324890148018 - type: nauc_map_at_1_std value: 6.0781444393516075 - type: nauc_map_at_20_diff1 value: 27.311577477800103 - type: nauc_map_at_20_max value: 39.624414083413456 - type: nauc_map_at_20_std value: 18.149811054163287 - type: nauc_map_at_3_diff1 value: 30.475965062734367 - type: nauc_map_at_3_max value: 38.49324825043695 - type: nauc_map_at_3_std value: 13.357656038648487 - type: nauc_map_at_5_diff1 value: 28.425110095017747 - type: nauc_map_at_5_max value: 39.017894870747796 - type: nauc_map_at_5_std value: 15.543817194122564 - type: nauc_mrr_at_1000_diff1 value: 33.16689354701644 - type: nauc_mrr_at_1000_max value: 41.70755363247148 - type: nauc_mrr_at_1000_std value: 24.61667417463176 - type: nauc_mrr_at_100_diff1 value: 33.147229262917506 - type: nauc_mrr_at_100_max value: 41.712455697170725 - type: nauc_mrr_at_100_std value: 24.6418922043652 - type: nauc_mrr_at_10_diff1 value: 32.94185191112572 - type: nauc_mrr_at_10_max value: 41.64272730141954 - type: nauc_mrr_at_10_std value: 24.663391015702707 - type: nauc_mrr_at_1_diff1 value: 39.571969559016395 - type: nauc_mrr_at_1_max value: 39.396249211263495 - type: nauc_mrr_at_1_std value: 16.984149923258357 - type: nauc_mrr_at_20_diff1 value: 33.10040770334742 - type: nauc_mrr_at_20_max value: 41.807565560083034 - type: nauc_mrr_at_20_std value: 24.8064180365271 - type: nauc_mrr_at_3_diff1 value: 33.065406161485704 - type: nauc_mrr_at_3_max value: 41.049510969934694 - type: nauc_mrr_at_3_std value: 23.18371458928609 - type: nauc_mrr_at_5_diff1 value: 33.2389593543916 - type: nauc_mrr_at_5_max value: 41.629486918949915 - type: nauc_mrr_at_5_std value: 24.5777253036149 - type: nauc_ndcg_at_1000_diff1 value: 25.868840609197637 - type: nauc_ndcg_at_1000_max value: 42.79564910784761 - type: nauc_ndcg_at_1000_std value: 27.035091271680113 - type: nauc_ndcg_at_100_diff1 value: 25.019789319579942 - type: nauc_ndcg_at_100_max value: 42.482345143533735 - type: nauc_ndcg_at_100_std value: 26.76872010731345 - type: nauc_ndcg_at_10_diff1 value: 25.949464660653238 - type: nauc_ndcg_at_10_max value: 40.79769544643906 - type: nauc_ndcg_at_10_std value: 22.486116508973204 - type: nauc_ndcg_at_1_diff1 value: 39.571969559016395 - type: nauc_ndcg_at_1_max value: 39.396249211263495 - type: nauc_ndcg_at_1_std value: 16.984149923258357 - type: nauc_ndcg_at_20_diff1 value: 25.173455685962214 - type: nauc_ndcg_at_20_max value: 40.88873540662413 - type: nauc_ndcg_at_20_std value: 24.4451041955519 - type: nauc_ndcg_at_3_diff1 value: 28.185416070726333 - type: nauc_ndcg_at_3_max value: 39.10600031163912 - type: nauc_ndcg_at_3_std value: 18.42694044215541 - type: nauc_ndcg_at_5_diff1 value: 27.112647584005583 - type: nauc_ndcg_at_5_max value: 40.154045682322526 - type: nauc_ndcg_at_5_std value: 20.26822517176828 - type: nauc_precision_at_1000_diff1 value: -16.42087927044017 - type: nauc_precision_at_1000_max value: 3.5326295053913 - type: nauc_precision_at_1000_std value: 24.406810708493197 - type: nauc_precision_at_100_diff1 value: -12.17648135724982 - type: nauc_precision_at_100_max value: 15.895489260126183 - type: nauc_precision_at_100_std value: 32.48346122610907 - type: nauc_precision_at_10_diff1 value: -1.2493131347748072 - type: nauc_precision_at_10_max value: 26.409459305604376 - type: nauc_precision_at_10_std value: 31.115432019300016 - type: nauc_precision_at_1_diff1 value: 39.571969559016395 - type: nauc_precision_at_1_max value: 39.396249211263495 - type: nauc_precision_at_1_std value: 16.984149923258357 - type: nauc_precision_at_20_diff1 value: -6.597509397240593 - type: nauc_precision_at_20_max value: 21.461984620659695 - type: nauc_precision_at_20_std value: 32.9450259748889 - type: nauc_precision_at_3_diff1 value: 9.46378764865453 - type: nauc_precision_at_3_max value: 32.03650819375425 - type: nauc_precision_at_3_std value: 26.489382638510765 - type: nauc_precision_at_5_diff1 value: 3.5987036728169537 - type: nauc_precision_at_5_max value: 30.633955978579703 - type: nauc_precision_at_5_std value: 30.532430088014443 - type: nauc_recall_at_1000_diff1 value: 10.714633106872254 - type: nauc_recall_at_1000_max value: 43.94958623961 - type: nauc_recall_at_1000_std value: 51.78914468954123 - type: nauc_recall_at_100_diff1 value: 9.63781472255557 - type: nauc_recall_at_100_max value: 38.50917465255336 - type: nauc_recall_at_100_std value: 37.78623984642377 - type: nauc_recall_at_10_diff1 value: 16.480342820841688 - type: nauc_recall_at_10_max value: 35.982566867357406 - type: nauc_recall_at_10_std value: 23.30688188788895 - type: nauc_recall_at_1_diff1 value: 43.34155892182403 - type: nauc_recall_at_1_max value: 38.23324890148018 - type: nauc_recall_at_1_std value: 6.0781444393516075 - type: nauc_recall_at_20_diff1 value: 13.521048985146367 - type: nauc_recall_at_20_max value: 34.62462209239834 - type: nauc_recall_at_20_std value: 27.85924191501618 - type: nauc_recall_at_3_diff1 value: 23.57032748533523 - type: nauc_recall_at_3_max value: 36.32703197635613 - type: nauc_recall_at_3_std value: 15.730238734014337 - type: nauc_recall_at_5_diff1 value: 19.61387036368584 - type: nauc_recall_at_5_max value: 36.22030835529556 - type: nauc_recall_at_5_std value: 19.76310648649897 - type: ndcg_at_1 value: 43.779 - type: ndcg_at_10 value: 43.525999999999996 - type: ndcg_at_100 value: 50.138000000000005 - type: ndcg_at_1000 value: 52.991 - type: ndcg_at_20 value: 46.083 - type: ndcg_at_3 value: 38.002 - type: ndcg_at_5 value: 39.842 - type: precision_at_1 value: 43.779 - type: precision_at_10 value: 13.205 - type: precision_at_100 value: 2.051 - type: precision_at_1000 value: 0.259 - type: precision_at_20 value: 7.722999999999999 - type: precision_at_3 value: 28.903000000000002 - type: precision_at_5 value: 21.368000000000002 - type: recall_at_1 value: 19.291 - type: recall_at_10 value: 48.754 - type: recall_at_100 value: 70.97200000000001 - type: recall_at_1000 value: 86.611 - type: recall_at_20 value: 55.884 - type: recall_at_3 value: 34.101 - type: recall_at_5 value: 40.784 - task: type: Retrieval dataset: name: MTEB DBPedia type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: main_score value: 49.884 - type: map_at_1 value: 9.913 - type: map_at_10 value: 23.186999999999998 - type: map_at_100 value: 34.207 - type: map_at_1000 value: 36.318 - type: map_at_20 value: 27.419 - type: map_at_3 value: 15.656 - type: map_at_5 value: 18.945999999999998 - type: mrr_at_1 value: 75.75 - type: mrr_at_10 value: 82.16279761904761 - type: mrr_at_100 value: 82.48445635330299 - type: mrr_at_1000 value: 82.4870246719901 - type: mrr_at_20 value: 82.36203632968338 - type: mrr_at_3 value: 81.29166666666666 - type: mrr_at_5 value: 82.02916666666667 - type: nauc_map_at_1000_diff1 value: 17.0739966990996 - type: nauc_map_at_1000_max value: 28.440065298437133 - type: nauc_map_at_1000_std value: 20.83498154003865 - type: nauc_map_at_100_diff1 value: 17.75982086107111 - type: nauc_map_at_100_max value: 26.87850835673573 - type: nauc_map_at_100_std value: 18.350282298599275 - type: nauc_map_at_10_diff1 value: 17.15984258564116 - type: nauc_map_at_10_max value: 10.846179132675553 - type: nauc_map_at_10_std value: -6.263534464094614 - type: nauc_map_at_1_diff1 value: 24.014897777973694 - type: nauc_map_at_1_max value: -4.556638938723358 - type: nauc_map_at_1_std value: -22.7844467526989 - type: nauc_map_at_20_diff1 value: 16.3179372493187 - type: nauc_map_at_20_max value: 17.176378915498915 - type: nauc_map_at_20_std value: 1.9378637630340372 - type: nauc_map_at_3_diff1 value: 19.12786794046792 - type: nauc_map_at_3_max value: 0.09063919305677291 - type: nauc_map_at_3_std value: -16.713143158330492 - type: nauc_map_at_5_diff1 value: 18.76504725420023 - type: nauc_map_at_5_max value: 5.040867712207419 - type: nauc_map_at_5_std value: -12.382578318931165 - type: nauc_mrr_at_1000_diff1 value: 54.61266255011247 - type: nauc_mrr_at_1000_max value: 60.83961280977112 - type: nauc_mrr_at_1000_std value: 32.70429260443016 - type: nauc_mrr_at_100_diff1 value: 54.61346236538542 - type: nauc_mrr_at_100_max value: 60.8407974416647 - type: nauc_mrr_at_100_std value: 32.69272843993462 - type: nauc_mrr_at_10_diff1 value: 54.74633685810871 - type: nauc_mrr_at_10_max value: 61.084525933097865 - type: nauc_mrr_at_10_std value: 33.001220210025565 - type: nauc_mrr_at_1_diff1 value: 56.12708423835806 - type: nauc_mrr_at_1_max value: 58.9314540998289 - type: nauc_mrr_at_1_std value: 27.39422607651012 - type: nauc_mrr_at_20_diff1 value: 54.58896150245695 - type: nauc_mrr_at_20_max value: 60.890929983464815 - type: nauc_mrr_at_20_std value: 32.65559641276393 - type: nauc_mrr_at_3_diff1 value: 54.38229071443791 - type: nauc_mrr_at_3_max value: 59.987849044098596 - type: nauc_mrr_at_3_std value: 33.439813880719974 - type: nauc_mrr_at_5_diff1 value: 54.961790262449824 - type: nauc_mrr_at_5_max value: 61.17705173908951 - type: nauc_mrr_at_5_std value: 33.30939850734856 - type: nauc_ndcg_at_1000_diff1 value: 29.27465932507067 - type: nauc_ndcg_at_1000_max value: 47.952543312315214 - type: nauc_ndcg_at_1000_std value: 36.17132236391485 - type: nauc_ndcg_at_100_diff1 value: 28.63072328980134 - type: nauc_ndcg_at_100_max value: 41.460833419186564 - type: nauc_ndcg_at_100_std value: 27.157100358988135 - type: nauc_ndcg_at_10_diff1 value: 23.41488013023301 - type: nauc_ndcg_at_10_max value: 39.27798133072349 - type: nauc_ndcg_at_10_std value: 21.979241438928312 - type: nauc_ndcg_at_1_diff1 value: 46.12120543657642 - type: nauc_ndcg_at_1_max value: 47.28452124039853 - type: nauc_ndcg_at_1_std value: 19.799884708952543 - type: nauc_ndcg_at_20_diff1 value: 23.627669045115574 - type: nauc_ndcg_at_20_max value: 35.88225062457673 - type: nauc_ndcg_at_20_std value: 18.218628030529498 - type: nauc_ndcg_at_3_diff1 value: 25.37309228946118 - type: nauc_ndcg_at_3_max value: 40.64426332992231 - type: nauc_ndcg_at_3_std value: 24.608330645901482 - type: nauc_ndcg_at_5_diff1 value: 24.055798594999654 - type: nauc_ndcg_at_5_max value: 41.16180524175431 - type: nauc_ndcg_at_5_std value: 24.048305528761315 - type: nauc_precision_at_1000_diff1 value: -18.234943251015576 - type: nauc_precision_at_1000_max value: 0.48708502364659184 - type: nauc_precision_at_1000_std value: 2.4473601543134027 - type: nauc_precision_at_100_diff1 value: -3.0077810947381227 - type: nauc_precision_at_100_max value: 25.27249321108913 - type: nauc_precision_at_100_std value: 37.36575792126928 - type: nauc_precision_at_10_diff1 value: -0.2393778190297635 - type: nauc_precision_at_10_max value: 36.40513293547299 - type: nauc_precision_at_10_std value: 37.4827885766009 - type: nauc_precision_at_1_diff1 value: 56.12708423835806 - type: nauc_precision_at_1_max value: 58.9314540998289 - type: nauc_precision_at_1_std value: 27.39422607651012 - type: nauc_precision_at_20_diff1 value: -1.2010133229402933 - type: nauc_precision_at_20_max value: 34.117541814385966 - type: nauc_precision_at_20_std value: 39.13273254177449 - type: nauc_precision_at_3_diff1 value: 11.757378092198486 - type: nauc_precision_at_3_max value: 42.637962482588875 - type: nauc_precision_at_3_std value: 37.42465077352342 - type: nauc_precision_at_5_diff1 value: 7.233177203405101 - type: nauc_precision_at_5_max value: 43.1663582897407 - type: nauc_precision_at_5_std value: 38.848449220750055 - type: nauc_recall_at_1000_diff1 value: 27.33938551969145 - type: nauc_recall_at_1000_max value: 45.5614254479334 - type: nauc_recall_at_1000_std value: 50.58528916250458 - type: nauc_recall_at_100_diff1 value: 23.610383761920097 - type: nauc_recall_at_100_max value: 31.422168485847184 - type: nauc_recall_at_100_std value: 25.58649926458304 - type: nauc_recall_at_10_diff1 value: 14.62495111808408 - type: nauc_recall_at_10_max value: 7.4295041277681095 - type: nauc_recall_at_10_std value: -9.32297089600654 - type: nauc_recall_at_1_diff1 value: 24.014897777973694 - type: nauc_recall_at_1_max value: -4.556638938723358 - type: nauc_recall_at_1_std value: -22.7844467526989 - type: nauc_recall_at_20_diff1 value: 14.027862330014662 - type: nauc_recall_at_20_max value: 12.437478731690844 - type: nauc_recall_at_20_std value: -3.0740743798103676 - type: nauc_recall_at_3_diff1 value: 16.354018356566712 - type: nauc_recall_at_3_max value: -2.9812231240997917 - type: nauc_recall_at_3_std value: -18.27746460743442 - type: nauc_recall_at_5_diff1 value: 16.81486583473587 - type: nauc_recall_at_5_max value: 2.420128513974744 - type: nauc_recall_at_5_std value: -14.441820321214108 - type: ndcg_at_1 value: 63.87500000000001 - type: ndcg_at_10 value: 49.884 - type: ndcg_at_100 value: 54.738 - type: ndcg_at_1000 value: 61.635 - type: ndcg_at_20 value: 48.894999999999996 - type: ndcg_at_3 value: 54.287 - type: ndcg_at_5 value: 52.40899999999999 - type: precision_at_1 value: 75.75 - type: precision_at_10 value: 40.9 - type: precision_at_100 value: 13.139999999999999 - type: precision_at_1000 value: 2.533 - type: precision_at_20 value: 30.8 - type: precision_at_3 value: 57.667 - type: precision_at_5 value: 51.05 - type: recall_at_1 value: 9.913 - type: recall_at_10 value: 28.591 - type: recall_at_100 value: 61.017999999999994 - type: recall_at_1000 value: 83.383 - type: recall_at_20 value: 37.834 - type: recall_at_3 value: 17.049 - type: recall_at_5 value: 21.685 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 78.77499999999999 - type: f1 value: 73.74058240799386 - type: f1_weighted value: 79.78804377638227 - type: main_score value: 78.77499999999999 - task: type: Retrieval dataset: name: MTEB FEVER type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: main_score value: 90.986 - type: map_at_1 value: 81.601 - type: map_at_10 value: 88.242 - type: map_at_100 value: 88.46000000000001 - type: map_at_1000 value: 88.472 - type: map_at_20 value: 88.375 - type: map_at_3 value: 87.237 - type: map_at_5 value: 87.85300000000001 - type: mrr_at_1 value: 87.81878187818782 - type: mrr_at_10 value: 92.20301196786335 - type: mrr_at_100 value: 92.24884236673292 - type: mrr_at_1000 value: 92.2496338899362 - type: mrr_at_20 value: 92.23112073283473 - type: mrr_at_3 value: 91.77417741774165 - type: mrr_at_5 value: 92.03970397039689 - type: nauc_map_at_1000_diff1 value: 56.54670664910505 - type: nauc_map_at_1000_max value: 33.08375749975477 - type: nauc_map_at_1000_std value: 2.7491595418252865 - type: nauc_map_at_100_diff1 value: 56.50887688686924 - type: nauc_map_at_100_max value: 33.075487189958494 - type: nauc_map_at_100_std value: 2.7675869969253375 - type: nauc_map_at_10_diff1 value: 56.08080806610569 - type: nauc_map_at_10_max value: 32.776972098819066 - type: nauc_map_at_10_std value: 2.5904846711290097 - type: nauc_map_at_1_diff1 value: 60.645344065853145 - type: nauc_map_at_1_max value: 31.232776777514797 - type: nauc_map_at_1_std value: -1.1946138176109171 - type: nauc_map_at_20_diff1 value: 56.28378454162355 - type: nauc_map_at_20_max value: 32.98207150385811 - type: nauc_map_at_20_std value: 2.8469814040214025 - type: nauc_map_at_3_diff1 value: 55.81958007095375 - type: nauc_map_at_3_max value: 31.602707711038313 - type: nauc_map_at_3_std value: 0.8117019292273401 - type: nauc_map_at_5_diff1 value: 55.706025752316535 - type: nauc_map_at_5_max value: 32.16032683604737 - type: nauc_map_at_5_std value: 1.8853201503498669 - type: nauc_mrr_at_1000_diff1 value: 75.4997173366251 - type: nauc_mrr_at_1000_max value: 41.49117135484116 - type: nauc_mrr_at_1000_std value: -2.0636172883680852 - type: nauc_mrr_at_100_diff1 value: 75.50118860648519 - type: nauc_mrr_at_100_max value: 41.49490161517194 - type: nauc_mrr_at_100_std value: -2.057024385178682 - type: nauc_mrr_at_10_diff1 value: 75.47295153099428 - type: nauc_mrr_at_10_max value: 41.55003304042536 - type: nauc_mrr_at_10_std value: -2.0353663198929253 - type: nauc_mrr_at_1_diff1 value: 76.632058433229 - type: nauc_mrr_at_1_max value: 39.754483718891656 - type: nauc_mrr_at_1_std value: -2.962241058101701 - type: nauc_mrr_at_20_diff1 value: 75.47221882396194 - type: nauc_mrr_at_20_max value: 41.50779280480839 - type: nauc_mrr_at_20_std value: -1.9620212266426307 - type: nauc_mrr_at_3_diff1 value: 75.5682297897137 - type: nauc_mrr_at_3_max value: 41.53543801506081 - type: nauc_mrr_at_3_std value: -3.391681195945978 - type: nauc_mrr_at_5_diff1 value: 75.37562775183947 - type: nauc_mrr_at_5_max value: 41.42028509006753 - type: nauc_mrr_at_5_std value: -2.418698675622726 - type: nauc_ndcg_at_1000_diff1 value: 59.364557011624 - type: nauc_ndcg_at_1000_max value: 35.4112238125149 - type: nauc_ndcg_at_1000_std value: 3.717516193303376 - type: nauc_ndcg_at_100_diff1 value: 58.55706703023122 - type: nauc_ndcg_at_100_max value: 35.352285999934594 - type: nauc_ndcg_at_100_std value: 4.273437944266781 - type: nauc_ndcg_at_10_diff1 value: 56.77422701267037 - type: nauc_ndcg_at_10_max value: 34.24909893882957 - type: nauc_ndcg_at_10_std value: 4.178151434006727 - type: nauc_ndcg_at_1_diff1 value: 76.632058433229 - type: nauc_ndcg_at_1_max value: 39.754483718891656 - type: nauc_ndcg_at_1_std value: -2.962241058101701 - type: nauc_ndcg_at_20_diff1 value: 57.27343398231262 - type: nauc_ndcg_at_20_max value: 34.7416626740278 - type: nauc_ndcg_at_20_std value: 4.955858766014002 - type: nauc_ndcg_at_3_diff1 value: 57.69267803121093 - type: nauc_ndcg_at_3_max value: 33.13744317023105 - type: nauc_ndcg_at_3_std value: 0.40380284030057023 - type: nauc_ndcg_at_5_diff1 value: 56.57461019113917 - type: nauc_ndcg_at_5_max value: 33.244657840804386 - type: nauc_ndcg_at_5_std value: 2.5121440827702046 - type: nauc_precision_at_1000_diff1 value: -14.54492513449718 - type: nauc_precision_at_1000_max value: -5.94552147573623 - type: nauc_precision_at_1000_std value: 1.2446209816057374 - type: nauc_precision_at_100_diff1 value: -15.452676132568344 - type: nauc_precision_at_100_max value: -3.760241749847617 - type: nauc_precision_at_100_std value: 4.623534605290865 - type: nauc_precision_at_10_diff1 value: -12.712908026086176 - type: nauc_precision_at_10_max value: 0.45241316994816805 - type: nauc_precision_at_10_std value: 7.849478570138391 - type: nauc_precision_at_1_diff1 value: 76.632058433229 - type: nauc_precision_at_1_max value: 39.754483718891656 - type: nauc_precision_at_1_std value: -2.962241058101701 - type: nauc_precision_at_20_diff1 value: -14.514618673172041 - type: nauc_precision_at_20_max value: -1.113635490621818 - type: nauc_precision_at_20_std value: 8.599811730457576 - type: nauc_precision_at_3_diff1 value: 6.1367799850003815 - type: nauc_precision_at_3_max value: 8.466271950897857 - type: nauc_precision_at_3_std value: 1.7458051543195068 - type: nauc_precision_at_5_diff1 value: -5.804548945783379 - type: nauc_precision_at_5_max value: 3.4060251839074818 - type: nauc_precision_at_5_std value: 5.583410511782371 - type: nauc_recall_at_1000_diff1 value: 19.329432953574095 - type: nauc_recall_at_1000_max value: 43.260442595158736 - type: nauc_recall_at_1000_std value: 53.89644660661804 - type: nauc_recall_at_100_diff1 value: 21.265326296051235 - type: nauc_recall_at_100_max value: 38.573000195373695 - type: nauc_recall_at_100_std value: 42.169391082152785 - type: nauc_recall_at_10_diff1 value: 29.785129558987432 - type: nauc_recall_at_10_max value: 28.379657867558034 - type: nauc_recall_at_10_std value: 21.132574624091973 - type: nauc_recall_at_1_diff1 value: 60.645344065853145 - type: nauc_recall_at_1_max value: 31.232776777514797 - type: nauc_recall_at_1_std value: -1.1946138176109171 - type: nauc_recall_at_20_diff1 value: 25.88845612373954 - type: nauc_recall_at_20_max value: 30.24785945821152 - type: nauc_recall_at_20_std value: 31.73911437468067 - type: nauc_recall_at_3_diff1 value: 42.2968464797395 - type: nauc_recall_at_3_max value: 26.494318009870018 - type: nauc_recall_at_3_std value: 2.6045977160467544 - type: nauc_recall_at_5_diff1 value: 35.81340094401374 - type: nauc_recall_at_5_max value: 25.91082947510634 - type: nauc_recall_at_5_std value: 9.759404930864779 - type: ndcg_at_1 value: 87.819 - type: ndcg_at_10 value: 90.986 - type: ndcg_at_100 value: 91.69 - type: ndcg_at_1000 value: 91.863 - type: ndcg_at_20 value: 91.293 - type: ndcg_at_3 value: 89.621 - type: ndcg_at_5 value: 90.333 - type: precision_at_1 value: 87.819 - type: precision_at_10 value: 10.753 - type: precision_at_100 value: 1.138 - type: precision_at_1000 value: 0.117 - type: precision_at_20 value: 5.4879999999999995 - type: precision_at_3 value: 33.703 - type: precision_at_5 value: 20.831 - type: recall_at_1 value: 81.601 - type: recall_at_10 value: 95.44200000000001 - type: recall_at_100 value: 98.14399999999999 - type: recall_at_1000 value: 99.157 - type: recall_at_20 value: 96.43 - type: recall_at_3 value: 91.729 - type: recall_at_5 value: 93.552 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: main_score value: 56.056 - type: map_at_1 value: 28.666000000000004 - type: map_at_10 value: 47.437000000000005 - type: map_at_100 value: 49.537 - type: map_at_1000 value: 49.665 - type: map_at_20 value: 48.618 - type: map_at_3 value: 41.355 - type: map_at_5 value: 44.525 - type: mrr_at_1 value: 55.55555555555556 - type: mrr_at_10 value: 63.705173427395614 - type: mrr_at_100 value: 64.25449940779741 - type: mrr_at_1000 value: 64.27635581092147 - type: mrr_at_20 value: 64.03796029079103 - type: mrr_at_3 value: 61.49691358024688 - type: mrr_at_5 value: 62.73148148148143 - type: nauc_map_at_1000_diff1 value: 43.24282910397747 - type: nauc_map_at_1000_max value: 28.506093180265644 - type: nauc_map_at_1000_std value: -13.040508386155054 - type: nauc_map_at_100_diff1 value: 43.23650442904607 - type: nauc_map_at_100_max value: 28.470565635459156 - type: nauc_map_at_100_std value: -12.988098780714935 - type: nauc_map_at_10_diff1 value: 43.393840733087686 - type: nauc_map_at_10_max value: 26.637302062720153 - type: nauc_map_at_10_std value: -14.47500292113762 - type: nauc_map_at_1_diff1 value: 47.705150227211725 - type: nauc_map_at_1_max value: 15.354189686550129 - type: nauc_map_at_1_std value: -14.559819859039067 - type: nauc_map_at_20_diff1 value: 43.14121075706104 - type: nauc_map_at_20_max value: 27.811170590408395 - type: nauc_map_at_20_std value: -13.459413585283583 - type: nauc_map_at_3_diff1 value: 44.33938667720801 - type: nauc_map_at_3_max value: 21.785619884549398 - type: nauc_map_at_3_std value: -15.569980103071593 - type: nauc_map_at_5_diff1 value: 43.39280905665027 - type: nauc_map_at_5_max value: 25.021492190645017 - type: nauc_map_at_5_std value: -14.48856622187443 - type: nauc_mrr_at_1000_diff1 value: 52.971563939946286 - type: nauc_mrr_at_1000_max value: 38.88019486172324 - type: nauc_mrr_at_1000_std value: -12.412991642381616 - type: nauc_mrr_at_100_diff1 value: 52.978468139876945 - type: nauc_mrr_at_100_max value: 38.89751787948751 - type: nauc_mrr_at_100_std value: -12.3677876252269 - type: nauc_mrr_at_10_diff1 value: 52.78507148048174 - type: nauc_mrr_at_10_max value: 38.55079809310022 - type: nauc_mrr_at_10_std value: -12.944127025078755 - type: nauc_mrr_at_1_diff1 value: 55.52626805861546 - type: nauc_mrr_at_1_max value: 40.49306809164979 - type: nauc_mrr_at_1_std value: -12.886607701317681 - type: nauc_mrr_at_20_diff1 value: 52.9592152665678 - type: nauc_mrr_at_20_max value: 38.88514014589964 - type: nauc_mrr_at_20_std value: -12.434464359819444 - type: nauc_mrr_at_3_diff1 value: 52.73696844091174 - type: nauc_mrr_at_3_max value: 38.61018727252859 - type: nauc_mrr_at_3_std value: -13.123989867364166 - type: nauc_mrr_at_5_diff1 value: 53.037110010188 - type: nauc_mrr_at_5_max value: 38.44770729849151 - type: nauc_mrr_at_5_std value: -13.49318771828972 - type: nauc_ndcg_at_1000_diff1 value: 44.73813840091289 - type: nauc_ndcg_at_1000_max value: 33.70113904685389 - type: nauc_ndcg_at_1000_std value: -10.328687058192742 - type: nauc_ndcg_at_100_diff1 value: 44.595174119928835 - type: nauc_ndcg_at_100_max value: 33.4788285112467 - type: nauc_ndcg_at_100_std value: -8.695355259716946 - type: nauc_ndcg_at_10_diff1 value: 44.39837225263 - type: nauc_ndcg_at_10_max value: 29.188289725593393 - type: nauc_ndcg_at_10_std value: -13.67608323673103 - type: nauc_ndcg_at_1_diff1 value: 55.52626805861546 - type: nauc_ndcg_at_1_max value: 40.49306809164979 - type: nauc_ndcg_at_1_std value: -12.886607701317681 - type: nauc_ndcg_at_20_diff1 value: 44.24661739902305 - type: nauc_ndcg_at_20_max value: 31.667868318249965 - type: nauc_ndcg_at_20_std value: -10.65470780066342 - type: nauc_ndcg_at_3_diff1 value: 43.39857166975522 - type: nauc_ndcg_at_3_max value: 31.764668313577495 - type: nauc_ndcg_at_3_std value: -14.494866954678152 - type: nauc_ndcg_at_5_diff1 value: 43.16976647347281 - type: nauc_ndcg_at_5_max value: 29.878329062643143 - type: nauc_ndcg_at_5_std value: -13.987689089179739 - type: nauc_precision_at_1000_diff1 value: -9.807973252625484 - type: nauc_precision_at_1000_max value: 26.6279603849494 - type: nauc_precision_at_1000_std value: 7.113187103520632 - type: nauc_precision_at_100_diff1 value: -4.777149603323976 - type: nauc_precision_at_100_max value: 31.03410463692187 - type: nauc_precision_at_100_std value: 10.463144150275435 - type: nauc_precision_at_10_diff1 value: 8.691528703215962 - type: nauc_precision_at_10_max value: 33.329579434123374 - type: nauc_precision_at_10_std value: -0.8002015226329403 - type: nauc_precision_at_1_diff1 value: 55.52626805861546 - type: nauc_precision_at_1_max value: 40.49306809164979 - type: nauc_precision_at_1_std value: -12.886607701317681 - type: nauc_precision_at_20_diff1 value: 3.4564653474184284 - type: nauc_precision_at_20_max value: 34.401070158471136 - type: nauc_precision_at_20_std value: 5.813431200164549 - type: nauc_precision_at_3_diff1 value: 22.463219705462187 - type: nauc_precision_at_3_max value: 34.77413976546924 - type: nauc_precision_at_3_std value: -7.083890789741479 - type: nauc_precision_at_5_diff1 value: 14.011006004883154 - type: nauc_precision_at_5_max value: 35.73655466853702 - type: nauc_precision_at_5_std value: -2.8395172077771598 - type: nauc_recall_at_1000_diff1 value: 16.478046357391555 - type: nauc_recall_at_1000_max value: 43.231704288282344 - type: nauc_recall_at_1000_std value: 38.430684937573645 - type: nauc_recall_at_100_diff1 value: 30.764718344602436 - type: nauc_recall_at_100_max value: 31.769050487166655 - type: nauc_recall_at_100_std value: 23.48468311677149 - type: nauc_recall_at_10_diff1 value: 34.47339565324045 - type: nauc_recall_at_10_max value: 19.054212335800454 - type: nauc_recall_at_10_std value: -11.039734015330437 - type: nauc_recall_at_1_diff1 value: 47.705150227211725 - type: nauc_recall_at_1_max value: 15.354189686550129 - type: nauc_recall_at_1_std value: -14.559819859039067 - type: nauc_recall_at_20_diff1 value: 32.1011474016873 - type: nauc_recall_at_20_max value: 25.546372988304423 - type: nauc_recall_at_20_std value: -0.007233471152482897 - type: nauc_recall_at_3_diff1 value: 37.5708138019065 - type: nauc_recall_at_3_max value: 16.66410785756736 - type: nauc_recall_at_3_std value: -15.404817020108966 - type: nauc_recall_at_5_diff1 value: 35.714519648479595 - type: nauc_recall_at_5_max value: 19.02075233009296 - type: nauc_recall_at_5_std value: -13.180963359760725 - type: ndcg_at_1 value: 55.556000000000004 - type: ndcg_at_10 value: 56.056 - type: ndcg_at_100 value: 62.44 - type: ndcg_at_1000 value: 64.263 - type: ndcg_at_20 value: 58.638999999999996 - type: ndcg_at_3 value: 51.722 - type: ndcg_at_5 value: 52.701 - type: precision_at_1 value: 55.556000000000004 - type: precision_at_10 value: 15.679000000000002 - type: precision_at_100 value: 2.252 - type: precision_at_1000 value: 0.257 - type: precision_at_20 value: 9.02 - type: precision_at_3 value: 34.619 - type: precision_at_5 value: 25.093 - type: recall_at_1 value: 28.666000000000004 - type: recall_at_10 value: 63.717999999999996 - type: recall_at_100 value: 86.938 - type: recall_at_1000 value: 97.603 - type: recall_at_20 value: 71.649 - type: recall_at_3 value: 46.663 - type: recall_at_5 value: 53.313 - task: type: Retrieval dataset: name: MTEB HotpotQA type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: main_score value: 71.74199999999999 - type: map_at_1 value: 41.729 - type: map_at_10 value: 63.168 - type: map_at_100 value: 64.132 - type: map_at_1000 value: 64.199 - type: map_at_20 value: 63.736000000000004 - type: map_at_3 value: 59.826 - type: map_at_5 value: 61.882000000000005 - type: mrr_at_1 value: 83.45712356515868 - type: mrr_at_10 value: 87.850342432719 - type: mrr_at_100 value: 88.0016320691113 - type: mrr_at_1000 value: 88.00576596968136 - type: mrr_at_20 value: 87.94463253190389 - type: mrr_at_3 value: 87.13706954760278 - type: mrr_at_5 value: 87.59419311276136 - type: nauc_map_at_1000_diff1 value: 13.635446621095054 - type: nauc_map_at_1000_max value: 18.670632529445633 - type: nauc_map_at_1000_std value: 10.444842636150575 - type: nauc_map_at_100_diff1 value: 13.599262398010783 - type: nauc_map_at_100_max value: 18.636389405484806 - type: nauc_map_at_100_std value: 10.460027483576043 - type: nauc_map_at_10_diff1 value: 13.235053919323942 - type: nauc_map_at_10_max value: 18.252140477080047 - type: nauc_map_at_10_std value: 9.9075337042203 - type: nauc_map_at_1_diff1 value: 76.51940497836482 - type: nauc_map_at_1_max value: 51.251419487235474 - type: nauc_map_at_1_std value: 0.16714896857146574 - type: nauc_map_at_20_diff1 value: 13.4178245722222 - type: nauc_map_at_20_max value: 18.40988771210718 - type: nauc_map_at_20_std value: 10.216685163366282 - type: nauc_map_at_3_diff1 value: 13.38370761663418 - type: nauc_map_at_3_max value: 17.760962555456537 - type: nauc_map_at_3_std value: 7.15741965624388 - type: nauc_map_at_5_diff1 value: 13.138133309724855 - type: nauc_map_at_5_max value: 17.871761295251044 - type: nauc_map_at_5_std value: 8.475147426940074 - type: nauc_mrr_at_1000_diff1 value: 75.82650818891959 - type: nauc_mrr_at_1000_max value: 53.6736100668434 - type: nauc_mrr_at_1000_std value: 1.8025016349213916 - type: nauc_mrr_at_100_diff1 value: 75.82530574210111 - type: nauc_mrr_at_100_max value: 53.68067545829002 - type: nauc_mrr_at_100_std value: 1.8147470536495791 - type: nauc_mrr_at_10_diff1 value: 75.8330135686799 - type: nauc_mrr_at_10_max value: 53.78626885349077 - type: nauc_mrr_at_10_std value: 1.7975782717226636 - type: nauc_mrr_at_1_diff1 value: 76.51940497836482 - type: nauc_mrr_at_1_max value: 51.251419487235474 - type: nauc_mrr_at_1_std value: 0.16714896857146574 - type: nauc_mrr_at_20_diff1 value: 75.82783382464166 - type: nauc_mrr_at_20_max value: 53.68364567043885 - type: nauc_mrr_at_20_std value: 1.742037904463963 - type: nauc_mrr_at_3_diff1 value: 75.6944609768663 - type: nauc_mrr_at_3_max value: 53.803941340341666 - type: nauc_mrr_at_3_std value: 1.1849945458077804 - type: nauc_mrr_at_5_diff1 value: 75.73006960604903 - type: nauc_mrr_at_5_max value: 53.62223096420106 - type: nauc_mrr_at_5_std value: 1.6144067563410909 - type: nauc_ndcg_at_1000_diff1 value: 21.58025241642726 - type: nauc_ndcg_at_1000_max value: 24.675747527001153 - type: nauc_ndcg_at_1000_std value: 13.075943547492718 - type: nauc_ndcg_at_100_diff1 value: 20.30260137544846 - type: nauc_ndcg_at_100_max value: 23.757528813872018 - type: nauc_ndcg_at_100_std value: 13.648994687574062 - type: nauc_ndcg_at_10_diff1 value: 18.995052360997818 - type: nauc_ndcg_at_10_max value: 22.254260808196037 - type: nauc_ndcg_at_10_std value: 11.27212390633054 - type: nauc_ndcg_at_1_diff1 value: 76.51940497836482 - type: nauc_ndcg_at_1_max value: 51.251419487235474 - type: nauc_ndcg_at_1_std value: 0.16714896857146574 - type: nauc_ndcg_at_20_diff1 value: 19.333742380695757 - type: nauc_ndcg_at_20_max value: 22.527779834633364 - type: nauc_ndcg_at_20_std value: 12.161009000707917 - type: nauc_ndcg_at_3_diff1 value: 20.013329040965534 - type: nauc_ndcg_at_3_max value: 21.99692460311921 - type: nauc_ndcg_at_3_std value: 6.8076290638386165 - type: nauc_ndcg_at_5_diff1 value: 19.08226315942471 - type: nauc_ndcg_at_5_max value: 21.71185964294168 - type: nauc_ndcg_at_5_std value: 8.671911269518214 - type: nauc_precision_at_1000_diff1 value: 2.4462475489446764 - type: nauc_precision_at_1000_max value: 29.145662064268578 - type: nauc_precision_at_1000_std value: 49.20704909525856 - type: nauc_precision_at_100_diff1 value: 0.11271196725540299 - type: nauc_precision_at_100_max value: 17.37584606388067 - type: nauc_precision_at_100_std value: 34.66099346244071 - type: nauc_precision_at_10_diff1 value: 2.9923183951227825 - type: nauc_precision_at_10_max value: 14.261884731124264 - type: nauc_precision_at_10_std value: 18.084188795498378 - type: nauc_precision_at_1_diff1 value: 76.51940497836482 - type: nauc_precision_at_1_max value: 51.251419487235474 - type: nauc_precision_at_1_std value: 0.16714896857146574 - type: nauc_precision_at_20_diff1 value: 1.9180293008303761 - type: nauc_precision_at_20_max value: 13.832269193468512 - type: nauc_precision_at_20_std value: 21.65284406055607 - type: nauc_precision_at_3_diff1 value: 7.226609484731811 - type: nauc_precision_at_3_max value: 15.162908526977272 - type: nauc_precision_at_3_std value: 8.451859972962776 - type: nauc_precision_at_5_diff1 value: 4.705236845538159 - type: nauc_precision_at_5_max value: 14.022910843582666 - type: nauc_precision_at_5_std value: 11.777269322821605 - type: nauc_recall_at_1000_diff1 value: 2.446247548945172 - type: nauc_recall_at_1000_max value: 29.14566206426889 - type: nauc_recall_at_1000_std value: 49.20704909525879 - type: nauc_recall_at_100_diff1 value: 0.1127119672553316 - type: nauc_recall_at_100_max value: 17.37584606388062 - type: nauc_recall_at_100_std value: 34.660993462440686 - type: nauc_recall_at_10_diff1 value: 2.9923183951227927 - type: nauc_recall_at_10_max value: 14.261884731124299 - type: nauc_recall_at_10_std value: 18.08418879549837 - type: nauc_recall_at_1_diff1 value: 76.51940497836482 - type: nauc_recall_at_1_max value: 51.251419487235474 - type: nauc_recall_at_1_std value: 0.16714896857146574 - type: nauc_recall_at_20_diff1 value: 1.918029300830432 - type: nauc_recall_at_20_max value: 13.832269193468566 - type: nauc_recall_at_20_std value: 21.65284406055605 - type: nauc_recall_at_3_diff1 value: 7.226609484731802 - type: nauc_recall_at_3_max value: 15.162908526977182 - type: nauc_recall_at_3_std value: 8.451859972962634 - type: nauc_recall_at_5_diff1 value: 4.705236845538197 - type: nauc_recall_at_5_max value: 14.02291084358265 - type: nauc_recall_at_5_std value: 11.777269322821638 - type: ndcg_at_1 value: 83.45700000000001 - type: ndcg_at_10 value: 71.74199999999999 - type: ndcg_at_100 value: 75.008 - type: ndcg_at_1000 value: 76.242 - type: ndcg_at_20 value: 73.114 - type: ndcg_at_3 value: 67.128 - type: ndcg_at_5 value: 69.645 - type: precision_at_1 value: 83.45700000000001 - type: precision_at_10 value: 14.747 - type: precision_at_100 value: 1.73 - type: precision_at_1000 value: 0.189 - type: precision_at_20 value: 7.8149999999999995 - type: precision_at_3 value: 42.323 - type: precision_at_5 value: 27.381 - type: recall_at_1 value: 41.729 - type: recall_at_10 value: 73.734 - type: recall_at_100 value: 86.502 - type: recall_at_1000 value: 94.60499999999999 - type: recall_at_20 value: 78.14999999999999 - type: recall_at_3 value: 63.483999999999995 - type: recall_at_5 value: 68.45400000000001 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 96.4904 - type: ap value: 94.85481918794709 - type: ap_weighted value: 94.85481918794709 - type: f1 value: 96.4898592305707 - type: f1_weighted value: 96.4898592305707 - type: main_score value: 96.4904 - task: type: Retrieval dataset: name: MTEB MSMARCO type: mteb/msmarco config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: main_score value: 43.692 - type: map_at_1 value: 23.751 - type: map_at_10 value: 36.553999999999995 - type: map_at_100 value: 37.721 - type: map_at_1000 value: 37.763999999999996 - type: map_at_20 value: 37.289 - type: map_at_3 value: 32.643 - type: map_at_5 value: 34.851 - type: mrr_at_1 value: 24.455587392550143 - type: mrr_at_10 value: 37.18388706963206 - type: mrr_at_100 value: 38.28330737932916 - type: mrr_at_1000 value: 38.32054399710817 - type: mrr_at_20 value: 37.8818001216278 - type: mrr_at_3 value: 33.35721107927405 - type: mrr_at_5 value: 35.52483285577843 - type: nauc_map_at_1000_diff1 value: 36.3576177260684 - type: nauc_map_at_1000_max value: 7.854511605962703 - type: nauc_map_at_1000_std value: -17.701121059746878 - type: nauc_map_at_100_diff1 value: 36.356075649230505 - type: nauc_map_at_100_max value: 7.862168042999533 - type: nauc_map_at_100_std value: -17.670102459097233 - type: nauc_map_at_10_diff1 value: 36.22122978875574 - type: nauc_map_at_10_max value: 7.80848606967416 - type: nauc_map_at_10_std value: -18.3265151386167 - type: nauc_map_at_1_diff1 value: 39.28605466408357 - type: nauc_map_at_1_max value: 6.20202977590459 - type: nauc_map_at_1_std value: -15.734334090045026 - type: nauc_map_at_20_diff1 value: 36.33637880909657 - type: nauc_map_at_20_max value: 7.843437969476022 - type: nauc_map_at_20_std value: -17.917533363025996 - type: nauc_map_at_3_diff1 value: 36.24864976076741 - type: nauc_map_at_3_max value: 7.420345251835957 - type: nauc_map_at_3_std value: -18.71678497722944 - type: nauc_map_at_5_diff1 value: 36.0789619291824 - type: nauc_map_at_5_max value: 7.7314285669514495 - type: nauc_map_at_5_std value: -18.748688764538706 - type: nauc_mrr_at_1000_diff1 value: 36.23912675623378 - type: nauc_mrr_at_1000_max value: 7.690553436255147 - type: nauc_mrr_at_1000_std value: -17.609526070212304 - type: nauc_mrr_at_100_diff1 value: 36.23782651189002 - type: nauc_mrr_at_100_max value: 7.70075095171647 - type: nauc_mrr_at_100_std value: -17.575714144960184 - type: nauc_mrr_at_10_diff1 value: 36.125229472534215 - type: nauc_mrr_at_10_max value: 7.635472248755658 - type: nauc_mrr_at_10_std value: -18.208166616511086 - type: nauc_mrr_at_1_diff1 value: 39.20986875554532 - type: nauc_mrr_at_1_max value: 6.062668487561363 - type: nauc_mrr_at_1_std value: -16.04130340817602 - type: nauc_mrr_at_20_diff1 value: 36.21207088739667 - type: nauc_mrr_at_20_max value: 7.699610250145951 - type: nauc_mrr_at_20_std value: -17.778245221724028 - type: nauc_mrr_at_3_diff1 value: 36.03957583885305 - type: nauc_mrr_at_3_max value: 7.225515576504581 - type: nauc_mrr_at_3_std value: -18.74478742943741 - type: nauc_mrr_at_5_diff1 value: 35.969152496648974 - type: nauc_mrr_at_5_max value: 7.584059789018233 - type: nauc_mrr_at_5_std value: -18.569374723129332 - type: nauc_ndcg_at_1000_diff1 value: 35.894655529841806 - type: nauc_ndcg_at_1000_max value: 8.579327424366236 - type: nauc_ndcg_at_1000_std value: -16.359677367747896 - type: nauc_ndcg_at_100_diff1 value: 35.89861902483983 - type: nauc_ndcg_at_100_max value: 8.830873623962242 - type: nauc_ndcg_at_100_std value: -15.173125564722978 - type: nauc_ndcg_at_10_diff1 value: 35.36499811105169 - type: nauc_ndcg_at_10_max value: 8.449267180956992 - type: nauc_ndcg_at_10_std value: -18.41978802362402 - type: nauc_ndcg_at_1_diff1 value: 39.15422481210622 - type: nauc_ndcg_at_1_max value: 6.055515791928331 - type: nauc_ndcg_at_1_std value: -16.042779610876252 - type: nauc_ndcg_at_20_diff1 value: 35.73402868264468 - type: nauc_ndcg_at_20_max value: 8.695705518210847 - type: nauc_ndcg_at_20_std value: -16.7735829470466 - type: nauc_ndcg_at_3_diff1 value: 35.31358242856231 - type: nauc_ndcg_at_3_max value: 7.645692789058997 - type: nauc_ndcg_at_3_std value: -19.460003734786874 - type: nauc_ndcg_at_5_diff1 value: 35.05216588927143 - type: nauc_ndcg_at_5_max value: 8.216690520604715 - type: nauc_ndcg_at_5_std value: -19.3982054492159 - type: nauc_precision_at_1000_diff1 value: -4.440002625111349 - type: nauc_precision_at_1000_max value: 7.886988951901723 - type: nauc_precision_at_1000_std value: 9.88111187048247 - type: nauc_precision_at_100_diff1 value: 15.728286119463325 - type: nauc_precision_at_100_max value: 13.218650824470654 - type: nauc_precision_at_100_std value: 16.113245895522553 - type: nauc_precision_at_10_diff1 value: 29.51218489610567 - type: nauc_precision_at_10_max value: 10.197432401942912 - type: nauc_precision_at_10_std value: -16.950603431359493 - type: nauc_precision_at_1_diff1 value: 39.15422481210622 - type: nauc_precision_at_1_max value: 6.055515791928331 - type: nauc_precision_at_1_std value: -16.042779610876252 - type: nauc_precision_at_20_diff1 value: 27.825993070397338 - type: nauc_precision_at_20_max value: 11.437632287846007 - type: nauc_precision_at_20_std value: -7.450353566405601 - type: nauc_precision_at_3_diff1 value: 32.14135556796588 - type: nauc_precision_at_3_max value: 7.989252443574163 - type: nauc_precision_at_3_std value: -21.566254595671055 - type: nauc_precision_at_5_diff1 value: 30.68778685307082 - type: nauc_precision_at_5_max value: 9.332160758499892 - type: nauc_precision_at_5_std value: -20.928554713448914 - type: nauc_recall_at_1000_diff1 value: 25.00810478716878 - type: nauc_recall_at_1000_max value: 46.518165765201644 - type: nauc_recall_at_1000_std value: 61.4734635576085 - type: nauc_recall_at_100_diff1 value: 33.895581318261726 - type: nauc_recall_at_100_max value: 20.10706035872801 - type: nauc_recall_at_100_std value: 24.204226584457047 - type: nauc_recall_at_10_diff1 value: 32.363127359576296 - type: nauc_recall_at_10_max value: 10.729923804989545 - type: nauc_recall_at_10_std value: -18.1335370184202 - type: nauc_recall_at_1_diff1 value: 39.28605466408357 - type: nauc_recall_at_1_max value: 6.20202977590459 - type: nauc_recall_at_1_std value: -15.734334090045026 - type: nauc_recall_at_20_diff1 value: 33.47804003169795 - type: nauc_recall_at_20_max value: 12.781494765263382 - type: nauc_recall_at_20_std value: -9.263970132202658 - type: nauc_recall_at_3_diff1 value: 32.71001429428999 - type: nauc_recall_at_3_max value: 8.353439197382693 - type: nauc_recall_at_3_std value: -21.235097744366954 - type: nauc_recall_at_5_diff1 value: 31.87451464963415 - type: nauc_recall_at_5_max value: 9.635051450907305 - type: nauc_recall_at_5_std value: -21.113235357132794 - type: ndcg_at_1 value: 24.47 - type: ndcg_at_10 value: 43.692 - type: ndcg_at_100 value: 49.211 - type: ndcg_at_1000 value: 50.244 - type: ndcg_at_20 value: 46.278000000000006 - type: ndcg_at_3 value: 35.719 - type: ndcg_at_5 value: 39.652 - type: precision_at_1 value: 24.47 - type: precision_at_10 value: 6.857 - type: precision_at_100 value: 0.9610000000000001 - type: precision_at_1000 value: 0.105 - type: precision_at_20 value: 3.968 - type: precision_at_3 value: 15.181000000000001 - type: precision_at_5 value: 11.117 - type: recall_at_1 value: 23.751 - type: recall_at_10 value: 65.64 - type: recall_at_100 value: 90.967 - type: recall_at_1000 value: 98.738 - type: recall_at_20 value: 75.639 - type: recall_at_3 value: 43.927 - type: recall_at_5 value: 53.366 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 98.82580939352485 - type: f1 value: 98.75201754333801 - type: f1_weighted value: 98.82795205108245 - type: main_score value: 98.82580939352485 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 92.29822161422709 - type: f1 value: 77.75210224871594 - type: f1_weighted value: 93.58661422540348 - type: main_score value: 92.29822161422709 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 85.17484868863484 - type: f1 value: 81.94484244487094 - type: f1_weighted value: 85.21022593423332 - type: main_score value: 85.17484868863484 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 89.61667787491594 - type: f1 value: 89.02701927621264 - type: f1_weighted value: 89.56306982022801 - type: main_score value: 89.61667787491594 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: main_score value: 46.318282423948574 - type: v_measure value: 46.318282423948574 - type: v_measure_std value: 0.9729055662461538 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: main_score value: 44.29033625273981 - type: v_measure value: 44.29033625273981 - type: v_measure_std value: 1.0596383629128594 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7 metrics: - type: main_score value: 33.0526129239962 - type: map value: 33.0526129239962 - type: mrr value: 34.29260046890935 - type: nAUC_map_diff1 value: 12.579738077238032 - type: nAUC_map_max value: -20.936629344962 - type: nAUC_map_std value: -1.6096805784945216 - type: nAUC_mrr_diff1 value: 11.597584463580807 - type: nAUC_mrr_max value: -15.723702838537504 - type: nAUC_mrr_std value: 0.2719172965777737 - task: type: Retrieval dataset: name: MTEB NFCorpus type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: main_score value: 41.486000000000004 - type: map_at_1 value: 6.866 - type: map_at_10 value: 15.895999999999999 - type: map_at_100 value: 21.093 - type: map_at_1000 value: 23.067 - type: map_at_20 value: 18.125 - type: map_at_3 value: 11.421000000000001 - type: map_at_5 value: 13.415 - type: mrr_at_1 value: 52.63157894736842 - type: mrr_at_10 value: 61.486805248415166 - type: mrr_at_100 value: 62.08211009182091 - type: mrr_at_1000 value: 62.10828701365016 - type: mrr_at_20 value: 61.904411187915784 - type: mrr_at_3 value: 59.90712074303407 - type: mrr_at_5 value: 60.91331269349847 - type: nauc_map_at_1000_diff1 value: 25.484625278529403 - type: nauc_map_at_1000_max value: 31.206600396418853 - type: nauc_map_at_1000_std value: 15.569448072357156 - type: nauc_map_at_100_diff1 value: 27.636750226316764 - type: nauc_map_at_100_max value: 29.66992681250722 - type: nauc_map_at_100_std value: 10.570600484002671 - type: nauc_map_at_10_diff1 value: 32.76642525548697 - type: nauc_map_at_10_max value: 21.459225397237663 - type: nauc_map_at_10_std value: -3.546494734209264 - type: nauc_map_at_1_diff1 value: 48.8002894871328 - type: nauc_map_at_1_max value: 5.7236722609868815 - type: nauc_map_at_1_std value: -13.283554044471352 - type: nauc_map_at_20_diff1 value: 30.57169701502308 - type: nauc_map_at_20_max value: 25.79666139518404 - type: nauc_map_at_20_std value: 1.781732492989651 - type: nauc_map_at_3_diff1 value: 40.076315947201095 - type: nauc_map_at_3_max value: 12.862524429140054 - type: nauc_map_at_3_std value: -9.188349777126817 - type: nauc_map_at_5_diff1 value: 36.9918718052938 - type: nauc_map_at_5_max value: 16.74234374361876 - type: nauc_map_at_5_std value: -7.818523349307494 - type: nauc_mrr_at_1000_diff1 value: 26.88183002609805 - type: nauc_mrr_at_1000_max value: 47.10209348428658 - type: nauc_mrr_at_1000_std value: 32.067825924992924 - type: nauc_mrr_at_100_diff1 value: 26.871482491566745 - type: nauc_mrr_at_100_max value: 47.11303868498556 - type: nauc_mrr_at_100_std value: 32.08961428818868 - type: nauc_mrr_at_10_diff1 value: 26.6356914977722 - type: nauc_mrr_at_10_max value: 47.091624558810366 - type: nauc_mrr_at_10_std value: 31.942424120660164 - type: nauc_mrr_at_1_diff1 value: 28.19774198483673 - type: nauc_mrr_at_1_max value: 41.44380927834253 - type: nauc_mrr_at_1_std value: 25.18222691885917 - type: nauc_mrr_at_20_diff1 value: 26.86487347109452 - type: nauc_mrr_at_20_max value: 47.1987778214726 - type: nauc_mrr_at_20_std value: 32.143517921610034 - type: nauc_mrr_at_3_diff1 value: 27.34340373236422 - type: nauc_mrr_at_3_max value: 46.358726506276646 - type: nauc_mrr_at_3_std value: 31.74924155572593 - type: nauc_mrr_at_5_diff1 value: 27.209667205060672 - type: nauc_mrr_at_5_max value: 46.79883369072009 - type: nauc_mrr_at_5_std value: 31.655605306670758 - type: nauc_ndcg_at_1000_diff1 value: 18.940195769769687 - type: nauc_ndcg_at_1000_max value: 46.48551313937331 - type: nauc_ndcg_at_1000_std value: 33.64819502089232 - type: nauc_ndcg_at_100_diff1 value: 19.50885253809146 - type: nauc_ndcg_at_100_max value: 40.53174462354878 - type: nauc_ndcg_at_100_std value: 28.516152877751118 - type: nauc_ndcg_at_10_diff1 value: 16.01699218096564 - type: nauc_ndcg_at_10_max value: 41.17322878314514 - type: nauc_ndcg_at_10_std value: 29.002233224832196 - type: nauc_ndcg_at_1_diff1 value: 27.443547710102205 - type: nauc_ndcg_at_1_max value: 40.66529763309582 - type: nauc_ndcg_at_1_std value: 24.15016766225869 - type: nauc_ndcg_at_20_diff1 value: 17.541197675685062 - type: nauc_ndcg_at_20_max value: 40.53231266973844 - type: nauc_ndcg_at_20_std value: 29.54096347876548 - type: nauc_ndcg_at_3_diff1 value: 18.649628357473716 - type: nauc_ndcg_at_3_max value: 41.18603570171764 - type: nauc_ndcg_at_3_std value: 27.125524188420396 - type: nauc_ndcg_at_5_diff1 value: 17.519593751448483 - type: nauc_ndcg_at_5_max value: 42.715997890377345 - type: nauc_ndcg_at_5_std value: 27.902627839899868 - type: nauc_precision_at_1000_diff1 value: -15.528797630565155 - type: nauc_precision_at_1000_max value: 13.741640921778671 - type: nauc_precision_at_1000_std value: 44.50896053788372 - type: nauc_precision_at_100_diff1 value: -14.491464489721887 - type: nauc_precision_at_100_max value: 23.136434418999457 - type: nauc_precision_at_100_std value: 49.73145147863128 - type: nauc_precision_at_10_diff1 value: -4.829188942994277 - type: nauc_precision_at_10_max value: 40.327612559528866 - type: nauc_precision_at_10_std value: 39.34919529635044 - type: nauc_precision_at_1_diff1 value: 28.19774198483673 - type: nauc_precision_at_1_max value: 41.44380927834253 - type: nauc_precision_at_1_std value: 25.18222691885917 - type: nauc_precision_at_20_diff1 value: -7.210726293112847 - type: nauc_precision_at_20_max value: 37.195679576636984 - type: nauc_precision_at_20_std value: 45.4597096418357 - type: nauc_precision_at_3_diff1 value: 7.578219537774854 - type: nauc_precision_at_3_max value: 41.59775233475654 - type: nauc_precision_at_3_std value: 30.764584790895118 - type: nauc_precision_at_5_diff1 value: 1.655451789039598 - type: nauc_precision_at_5_max value: 43.435739407610455 - type: nauc_precision_at_5_std value: 33.42552263325999 - type: nauc_recall_at_1000_diff1 value: 5.030705700690516 - type: nauc_recall_at_1000_max value: 19.108072570815583 - type: nauc_recall_at_1000_std value: 14.697734974217308 - type: nauc_recall_at_100_diff1 value: 14.746540318132407 - type: nauc_recall_at_100_max value: 21.798705033854795 - type: nauc_recall_at_100_std value: 11.416195108842587 - type: nauc_recall_at_10_diff1 value: 25.548642427860486 - type: nauc_recall_at_10_max value: 18.711677681987474 - type: nauc_recall_at_10_std value: -5.988904818971677 - type: nauc_recall_at_1_diff1 value: 48.8002894871328 - type: nauc_recall_at_1_max value: 5.7236722609868815 - type: nauc_recall_at_1_std value: -13.283554044471352 - type: nauc_recall_at_20_diff1 value: 23.39140739154809 - type: nauc_recall_at_20_max value: 19.351150636155474 - type: nauc_recall_at_20_std value: -2.757280266915132 - type: nauc_recall_at_3_diff1 value: 38.17453576012812 - type: nauc_recall_at_3_max value: 13.47003839643972 - type: nauc_recall_at_3_std value: -8.75780163862688 - type: nauc_recall_at_5_diff1 value: 33.02812855226899 - type: nauc_recall_at_5_max value: 15.477626408978477 - type: nauc_recall_at_5_std value: -9.072206441070708 - type: ndcg_at_1 value: 50.773999999999994 - type: ndcg_at_10 value: 41.486000000000004 - type: ndcg_at_100 value: 39.051 - type: ndcg_at_1000 value: 48.106 - type: ndcg_at_20 value: 39.432 - type: ndcg_at_3 value: 47.428 - type: ndcg_at_5 value: 45.227000000000004 - type: precision_at_1 value: 52.632 - type: precision_at_10 value: 31.146 - type: precision_at_100 value: 10.328 - type: precision_at_1000 value: 2.432 - type: precision_at_20 value: 23.793 - type: precision_at_3 value: 45.201 - type: precision_at_5 value: 39.876 - type: recall_at_1 value: 6.866 - type: recall_at_10 value: 20.447000000000003 - type: recall_at_100 value: 40.607 - type: recall_at_1000 value: 73.411 - type: recall_at_20 value: 26.082 - type: recall_at_3 value: 12.484 - type: recall_at_5 value: 15.847 - task: type: Retrieval dataset: name: MTEB NQ type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: main_score value: 69.072 - type: map_at_1 value: 45.483000000000004 - type: map_at_10 value: 62.050000000000004 - type: map_at_100 value: 62.693 - type: map_at_1000 value: 62.702999999999996 - type: map_at_20 value: 62.498 - type: map_at_3 value: 58.285 - type: map_at_5 value: 60.711000000000006 - type: mrr_at_1 value: 50.840092699884124 - type: mrr_at_10 value: 64.54635224116673 - type: mrr_at_100 value: 64.9526548702289 - type: mrr_at_1000 value: 64.95908460752281 - type: mrr_at_20 value: 64.82949565799959 - type: mrr_at_3 value: 61.89165701042856 - type: mrr_at_5 value: 63.632676709154026 - type: nauc_map_at_1000_diff1 value: 43.187285304185224 - type: nauc_map_at_1000_max value: 32.39921659632756 - type: nauc_map_at_1000_std value: -5.780901333066553 - type: nauc_map_at_100_diff1 value: 43.184487221204456 - type: nauc_map_at_100_max value: 32.41176116347982 - type: nauc_map_at_100_std value: -5.76422606662383 - type: nauc_map_at_10_diff1 value: 42.967066814031746 - type: nauc_map_at_10_max value: 32.489617364418514 - type: nauc_map_at_10_std value: -6.029045531102664 - type: nauc_map_at_1_diff1 value: 46.16376563218624 - type: nauc_map_at_1_max value: 26.342624776802232 - type: nauc_map_at_1_std value: -7.142171388751972 - type: nauc_map_at_20_diff1 value: 43.15894358608328 - type: nauc_map_at_20_max value: 32.46492198956245 - type: nauc_map_at_20_std value: -5.788373305449195 - type: nauc_map_at_3_diff1 value: 43.231752344608545 - type: nauc_map_at_3_max value: 31.68003009949564 - type: nauc_map_at_3_std value: -8.015235132765458 - type: nauc_map_at_5_diff1 value: 42.86197608819917 - type: nauc_map_at_5_max value: 32.363857571094485 - type: nauc_map_at_5_std value: -6.780487416387977 - type: nauc_mrr_at_1000_diff1 value: 43.40542912045782 - type: nauc_mrr_at_1000_max value: 32.8461770324533 - type: nauc_mrr_at_1000_std value: -3.6505425530008204 - type: nauc_mrr_at_100_diff1 value: 43.40233508014468 - type: nauc_mrr_at_100_max value: 32.85598538385942 - type: nauc_mrr_at_100_std value: -3.637477352635459 - type: nauc_mrr_at_10_diff1 value: 43.260179162806054 - type: nauc_mrr_at_10_max value: 32.942643527040474 - type: nauc_mrr_at_10_std value: -3.712052825320437 - type: nauc_mrr_at_1_diff1 value: 46.354919460881206 - type: nauc_mrr_at_1_max value: 29.1760258591106 - type: nauc_mrr_at_1_std value: -4.107225031227406 - type: nauc_mrr_at_20_diff1 value: 43.37092385434311 - type: nauc_mrr_at_20_max value: 32.93390254712846 - type: nauc_mrr_at_20_std value: -3.5719056112132006 - type: nauc_mrr_at_3_diff1 value: 43.1744474040527 - type: nauc_mrr_at_3_max value: 32.741290559777994 - type: nauc_mrr_at_3_std value: -4.72677925120697 - type: nauc_mrr_at_5_diff1 value: 43.108396819975674 - type: nauc_mrr_at_5_max value: 32.970519514893084 - type: nauc_mrr_at_5_std value: -4.090906158975974 - type: nauc_ndcg_at_1000_diff1 value: 42.786664193638714 - type: nauc_ndcg_at_1000_max value: 33.65554095609296 - type: nauc_ndcg_at_1000_std value: -4.024030130584482 - type: nauc_ndcg_at_100_diff1 value: 42.691246775210814 - type: nauc_ndcg_at_100_max value: 34.063232335110875 - type: nauc_ndcg_at_100_std value: -3.477813807415248 - type: nauc_ndcg_at_10_diff1 value: 41.90988990571757 - type: nauc_ndcg_at_10_max value: 34.58934812881633 - type: nauc_ndcg_at_10_std value: -4.3295110195497655 - type: nauc_ndcg_at_1_diff1 value: 46.354919460881206 - type: nauc_ndcg_at_1_max value: 29.1760258591106 - type: nauc_ndcg_at_1_std value: -4.107225031227406 - type: nauc_ndcg_at_20_diff1 value: 42.493206675867114 - type: nauc_ndcg_at_20_max value: 34.562441307459544 - type: nauc_ndcg_at_20_std value: -3.4456116866749107 - type: nauc_ndcg_at_3_diff1 value: 42.24180336502808 - type: nauc_ndcg_at_3_max value: 33.064267018100594 - type: nauc_ndcg_at_3_std value: -7.786248093572142 - type: nauc_ndcg_at_5_diff1 value: 41.692714787779565 - type: nauc_ndcg_at_5_max value: 34.20502498949156 - type: nauc_ndcg_at_5_std value: -5.979557859282785 - type: nauc_precision_at_1000_diff1 value: -13.779832506640702 - type: nauc_precision_at_1000_max value: 1.243001688631421 - type: nauc_precision_at_1000_std value: 17.351623398622323 - type: nauc_precision_at_100_diff1 value: -11.310526816290297 - type: nauc_precision_at_100_max value: 5.771669506192959 - type: nauc_precision_at_100_std value: 19.917795079540113 - type: nauc_precision_at_10_diff1 value: 2.163699384635286 - type: nauc_precision_at_10_max value: 19.66440698458386 - type: nauc_precision_at_10_std value: 13.689876348315726 - type: nauc_precision_at_1_diff1 value: 46.354919460881206 - type: nauc_precision_at_1_max value: 29.1760258591106 - type: nauc_precision_at_1_std value: -4.107225031227406 - type: nauc_precision_at_20_diff1 value: -3.038735879584471 - type: nauc_precision_at_20_max value: 14.132968299701695 - type: nauc_precision_at_20_std value: 17.78069734664346 - type: nauc_precision_at_3_diff1 value: 21.783760758070095 - type: nauc_precision_at_3_max value: 30.244127986404497 - type: nauc_precision_at_3_std value: -0.12411163467738723 - type: nauc_precision_at_5_diff1 value: 10.980635723302418 - type: nauc_precision_at_5_max value: 25.302293738975575 - type: nauc_precision_at_5_std value: 6.4740817488722024 - type: nauc_recall_at_1000_diff1 value: 34.10343772356593 - type: nauc_recall_at_1000_max value: 80.72497340357538 - type: nauc_recall_at_1000_std value: 69.54564103264093 - type: nauc_recall_at_100_diff1 value: 33.427719956774126 - type: nauc_recall_at_100_max value: 71.54086768335449 - type: nauc_recall_at_100_std value: 49.66157377654885 - type: nauc_recall_at_10_diff1 value: 33.70139560054039 - type: nauc_recall_at_10_max value: 45.47878072860151 - type: nauc_recall_at_10_std value: 1.4188516615716378 - type: nauc_recall_at_1_diff1 value: 46.16376563218624 - type: nauc_recall_at_1_max value: 26.342624776802232 - type: nauc_recall_at_1_std value: -7.142171388751972 - type: nauc_recall_at_20_diff1 value: 35.805379874970086 - type: nauc_recall_at_20_max value: 51.80479822253392 - type: nauc_recall_at_20_std value: 13.531467576460143 - type: nauc_recall_at_3_diff1 value: 37.288500141631616 - type: nauc_recall_at_3_max value: 35.07078243516728 - type: nauc_recall_at_3_std value: -10.452926441410405 - type: nauc_recall_at_5_diff1 value: 34.83186104526897 - type: nauc_recall_at_5_max value: 39.58488976496973 - type: nauc_recall_at_5_std value: -6.3049292065708835 - type: ndcg_at_1 value: 50.839999999999996 - type: ndcg_at_10 value: 69.072 - type: ndcg_at_100 value: 71.538 - type: ndcg_at_1000 value: 71.77799999999999 - type: ndcg_at_20 value: 70.41 - type: ndcg_at_3 value: 62.544999999999995 - type: ndcg_at_5 value: 66.33099999999999 - type: precision_at_1 value: 50.839999999999996 - type: precision_at_10 value: 10.495000000000001 - type: precision_at_100 value: 1.1900000000000002 - type: precision_at_1000 value: 0.121 - type: precision_at_20 value: 5.5809999999999995 - type: precision_at_3 value: 27.636 - type: precision_at_5 value: 18.864 - type: recall_at_1 value: 45.483000000000004 - type: recall_at_10 value: 87.483 - type: recall_at_100 value: 97.844 - type: recall_at_1000 value: 99.66199999999999 - type: recall_at_20 value: 92.294 - type: recall_at_3 value: 71.2 - type: recall_at_5 value: 79.753 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: mteb/quora config: default split: test revision: e4e08e0b7dbe3c8700f0daef558ff32256715259 metrics: - type: main_score value: 89.58 - type: map_at_1 value: 71.819 - type: map_at_10 value: 86.04899999999999 - type: map_at_100 value: 86.648 - type: map_at_1000 value: 86.66199999999999 - type: map_at_20 value: 86.441 - type: map_at_3 value: 83.114 - type: map_at_5 value: 84.981 - type: mrr_at_1 value: 82.62 - type: mrr_at_10 value: 88.62899999999979 - type: mrr_at_100 value: 88.70918591324215 - type: mrr_at_1000 value: 88.70973091492397 - type: mrr_at_20 value: 88.68914765317221 - type: mrr_at_3 value: 87.74999999999979 - type: mrr_at_5 value: 88.36799999999974 - type: nauc_map_at_1000_diff1 value: 77.89207709760448 - type: nauc_map_at_1000_max value: 29.63371361495422 - type: nauc_map_at_1000_std value: -48.628180385874344 - type: nauc_map_at_100_diff1 value: 77.89592179104915 - type: nauc_map_at_100_max value: 29.617171506130756 - type: nauc_map_at_100_std value: -48.66057170774648 - type: nauc_map_at_10_diff1 value: 78.0618161228185 - type: nauc_map_at_10_max value: 29.178490609366737 - type: nauc_map_at_10_std value: -50.74755004592002 - type: nauc_map_at_1_diff1 value: 81.64335579973574 - type: nauc_map_at_1_max value: 21.813832226652174 - type: nauc_map_at_1_std value: -42.57570978190876 - type: nauc_map_at_20_diff1 value: 77.9299081005938 - type: nauc_map_at_20_max value: 29.458718470003888 - type: nauc_map_at_20_std value: -49.63337236763102 - type: nauc_map_at_3_diff1 value: 78.72941448509229 - type: nauc_map_at_3_max value: 26.600997896960056 - type: nauc_map_at_3_std value: -51.889002227479885 - type: nauc_map_at_5_diff1 value: 78.31466610917171 - type: nauc_map_at_5_max value: 28.09863984582896 - type: nauc_map_at_5_std value: -52.14058096096497 - type: nauc_mrr_at_1000_diff1 value: 78.42667263739992 - type: nauc_mrr_at_1000_max value: 31.98996235127974 - type: nauc_mrr_at_1000_std value: -44.380439148429296 - type: nauc_mrr_at_100_diff1 value: 78.42661032698115 - type: nauc_mrr_at_100_max value: 31.991652631740102 - type: nauc_mrr_at_100_std value: -44.37854108460535 - type: nauc_mrr_at_10_diff1 value: 78.39126022544136 - type: nauc_mrr_at_10_max value: 32.02023484451197 - type: nauc_mrr_at_10_std value: -44.561252349176954 - type: nauc_mrr_at_1_diff1 value: 79.21630894647448 - type: nauc_mrr_at_1_max value: 31.526303156060177 - type: nauc_mrr_at_1_std value: -41.887504422443136 - type: nauc_mrr_at_20_diff1 value: 78.42548039170424 - type: nauc_mrr_at_20_max value: 31.99588275070137 - type: nauc_mrr_at_20_std value: -44.44957722627042 - type: nauc_mrr_at_3_diff1 value: 78.26165151833735 - type: nauc_mrr_at_3_max value: 32.18028826126801 - type: nauc_mrr_at_3_std value: -44.6998237213182 - type: nauc_mrr_at_5_diff1 value: 78.34786430903962 - type: nauc_mrr_at_5_max value: 32.168476272879566 - type: nauc_mrr_at_5_std value: -44.7915919956712 - type: nauc_ndcg_at_1000_diff1 value: 77.79198355957816 - type: nauc_ndcg_at_1000_max value: 31.14363511518406 - type: nauc_ndcg_at_1000_std value: -46.69335151274275 - type: nauc_ndcg_at_100_diff1 value: 77.79898090286419 - type: nauc_ndcg_at_100_max value: 31.115103811629215 - type: nauc_ndcg_at_100_std value: -46.73078913421965 - type: nauc_ndcg_at_10_diff1 value: 77.74856635461343 - type: nauc_ndcg_at_10_max value: 30.279584686212747 - type: nauc_ndcg_at_10_std value: -50.23514662356807 - type: nauc_ndcg_at_1_diff1 value: 79.17833000040999 - type: nauc_ndcg_at_1_max value: 31.703788144510746 - type: nauc_ndcg_at_1_std value: -41.854817402870715 - type: nauc_ndcg_at_20_diff1 value: 77.7380353804671 - type: nauc_ndcg_at_20_max value: 30.622294129001553 - type: nauc_ndcg_at_20_std value: -49.035794761065254 - type: nauc_ndcg_at_3_diff1 value: 77.41476880573593 - type: nauc_ndcg_at_3_max value: 29.015949978243032 - type: nauc_ndcg_at_3_std value: -49.78627087622648 - type: nauc_ndcg_at_5_diff1 value: 77.64439137502896 - type: nauc_ndcg_at_5_max value: 29.444684897492206 - type: nauc_ndcg_at_5_std value: -51.21908400252501 - type: nauc_precision_at_1000_diff1 value: -44.92396459446822 - type: nauc_precision_at_1000_max value: -3.674153720989045 - type: nauc_precision_at_1000_std value: 39.56552468277785 - type: nauc_precision_at_100_diff1 value: -44.75143023259094 - type: nauc_precision_at_100_max value: -3.705280025140011 - type: nauc_precision_at_100_std value: 39.433619999113326 - type: nauc_precision_at_10_diff1 value: -41.0651074726579 - type: nauc_precision_at_10_max value: -0.21097985601783667 - type: nauc_precision_at_10_std value: 26.24652824589493 - type: nauc_precision_at_1_diff1 value: 79.17833000040999 - type: nauc_precision_at_1_max value: 31.703788144510746 - type: nauc_precision_at_1_std value: -41.854817402870715 - type: nauc_precision_at_20_diff1 value: -43.368001340920294 - type: nauc_precision_at_20_max value: -2.036990010399129 - type: nauc_precision_at_20_std value: 32.37747041406297 - type: nauc_precision_at_3_diff1 value: -22.089307548346877 - type: nauc_precision_at_3_max value: 6.2280973175296 - type: nauc_precision_at_3_std value: 5.323992514036145 - type: nauc_precision_at_5_diff1 value: -34.07115055244003 - type: nauc_precision_at_5_max value: 2.5955315789198834 - type: nauc_precision_at_5_std value: 16.26096689407332 - type: nauc_recall_at_1000_diff1 value: 58.27703860947467 - type: nauc_recall_at_1000_max value: 68.59835835315768 - type: nauc_recall_at_1000_std value: 77.96687006056064 - type: nauc_recall_at_100_diff1 value: 73.24371223081737 - type: nauc_recall_at_100_max value: 39.55925344664591 - type: nauc_recall_at_100_std value: -32.25605030215798 - type: nauc_recall_at_10_diff1 value: 73.41261201339202 - type: nauc_recall_at_10_max value: 26.822979434062926 - type: nauc_recall_at_10_std value: -74.2909332592806 - type: nauc_recall_at_1_diff1 value: 81.64335579973574 - type: nauc_recall_at_1_max value: 21.813832226652174 - type: nauc_recall_at_1_std value: -42.57570978190876 - type: nauc_recall_at_20_diff1 value: 72.7621297920656 - type: nauc_recall_at_20_max value: 26.02492304096079 - type: nauc_recall_at_20_std value: -77.8724532438279 - type: nauc_recall_at_3_diff1 value: 75.25149312810714 - type: nauc_recall_at_3_max value: 23.20545662481487 - type: nauc_recall_at_3_std value: -59.69689982140521 - type: nauc_recall_at_5_diff1 value: 73.69807273001406 - type: nauc_recall_at_5_max value: 24.073666798066057 - type: nauc_recall_at_5_std value: -67.91121268130719 - type: ndcg_at_1 value: 82.64 - type: ndcg_at_10 value: 89.58 - type: ndcg_at_100 value: 90.606 - type: ndcg_at_1000 value: 90.676 - type: ndcg_at_20 value: 90.132 - type: ndcg_at_3 value: 86.88 - type: ndcg_at_5 value: 88.40299999999999 - type: precision_at_1 value: 82.64 - type: precision_at_10 value: 13.604 - type: precision_at_100 value: 1.539 - type: precision_at_1000 value: 0.157 - type: precision_at_20 value: 7.188 - type: precision_at_3 value: 38.083 - type: precision_at_5 value: 25.018 - type: recall_at_1 value: 71.819 - type: recall_at_10 value: 96.34700000000001 - type: recall_at_100 value: 99.715 - type: recall_at_1000 value: 99.995 - type: recall_at_20 value: 98.073 - type: recall_at_3 value: 88.57300000000001 - type: recall_at_5 value: 92.908 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: main_score value: 71.18966762070158 - type: v_measure value: 71.18966762070158 - type: v_measure_std value: 2.7498969054457048 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 385e3cb46b4cfa89021f56c4380204149d0efe33 metrics: - type: main_score value: 74.42014716862516 - type: v_measure value: 74.42014716862516 - type: v_measure_std value: 9.909739891410648 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: mteb/scidocs config: default split: test revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88 metrics: - type: main_score value: 25.041999999999998 - type: map_at_1 value: 5.893000000000001 - type: map_at_10 value: 15.260000000000002 - type: map_at_100 value: 18.084 - type: map_at_1000 value: 18.467 - type: map_at_20 value: 16.675 - type: map_at_3 value: 10.526 - type: map_at_5 value: 12.775 - type: mrr_at_1 value: 28.999999999999996 - type: mrr_at_10 value: 41.03575396825395 - type: mrr_at_100 value: 42.136771862785835 - type: mrr_at_1000 value: 42.16698555415099 - type: mrr_at_20 value: 41.707493696104315 - type: mrr_at_3 value: 37.34999999999998 - type: mrr_at_5 value: 39.59999999999995 - type: nauc_map_at_1000_diff1 value: 12.080002654911883 - type: nauc_map_at_1000_max value: 29.813563682286276 - type: nauc_map_at_1000_std value: 20.36659817908673 - type: nauc_map_at_100_diff1 value: 12.108735517749706 - type: nauc_map_at_100_max value: 29.76830671710955 - type: nauc_map_at_100_std value: 20.3433621032846 - type: nauc_map_at_10_diff1 value: 12.91575031185637 - type: nauc_map_at_10_max value: 29.427600958386318 - type: nauc_map_at_10_std value: 16.89867275177153 - type: nauc_map_at_1_diff1 value: 19.353069488987916 - type: nauc_map_at_1_max value: 17.093914951159693 - type: nauc_map_at_1_std value: 8.19886078055046 - type: nauc_map_at_20_diff1 value: 11.977233457943113 - type: nauc_map_at_20_max value: 29.171812822948805 - type: nauc_map_at_20_std value: 18.780517506173965 - type: nauc_map_at_3_diff1 value: 14.453129464176092 - type: nauc_map_at_3_max value: 25.801958649112077 - type: nauc_map_at_3_std value: 11.572823684429643 - type: nauc_map_at_5_diff1 value: 13.167155808104997 - type: nauc_map_at_5_max value: 27.355626948365792 - type: nauc_map_at_5_std value: 14.414151839192183 - type: nauc_mrr_at_1000_diff1 value: 17.262104643988636 - type: nauc_mrr_at_1000_max value: 23.991373837217058 - type: nauc_mrr_at_1000_std value: 12.44755488671623 - type: nauc_mrr_at_100_diff1 value: 17.267280132318703 - type: nauc_mrr_at_100_max value: 24.022189287889294 - type: nauc_mrr_at_100_std value: 12.480695500214788 - type: nauc_mrr_at_10_diff1 value: 17.012383998246268 - type: nauc_mrr_at_10_max value: 24.192637911171722 - type: nauc_mrr_at_10_std value: 12.524608847408917 - type: nauc_mrr_at_1_diff1 value: 19.43518811038007 - type: nauc_mrr_at_1_max value: 17.747482933395602 - type: nauc_mrr_at_1_std value: 8.410779775558684 - type: nauc_mrr_at_20_diff1 value: 17.202663281407446 - type: nauc_mrr_at_20_max value: 24.091991130543118 - type: nauc_mrr_at_20_std value: 12.503814263019908 - type: nauc_mrr_at_3_diff1 value: 17.52733013432995 - type: nauc_mrr_at_3_max value: 23.569459518780214 - type: nauc_mrr_at_3_std value: 11.770846827520726 - type: nauc_mrr_at_5_diff1 value: 17.10817561975543 - type: nauc_mrr_at_5_max value: 23.945141435234678 - type: nauc_mrr_at_5_std value: 12.034468615317719 - type: nauc_ndcg_at_1000_diff1 value: 12.317811393346936 - type: nauc_ndcg_at_1000_max value: 30.809991350156103 - type: nauc_ndcg_at_1000_std value: 24.517501065205067 - type: nauc_ndcg_at_100_diff1 value: 12.824804203182936 - type: nauc_ndcg_at_100_max value: 30.895499817010748 - type: nauc_ndcg_at_100_std value: 25.424376279745402 - type: nauc_ndcg_at_10_diff1 value: 13.32724552457439 - type: nauc_ndcg_at_10_max value: 30.409088666807456 - type: nauc_ndcg_at_10_std value: 18.216330475714113 - type: nauc_ndcg_at_1_diff1 value: 19.43518811038007 - type: nauc_ndcg_at_1_max value: 17.747482933395602 - type: nauc_ndcg_at_1_std value: 8.410779775558684 - type: nauc_ndcg_at_20_diff1 value: 12.224399111852902 - type: nauc_ndcg_at_20_max value: 29.86352330445272 - type: nauc_ndcg_at_20_std value: 21.196937851331807 - type: nauc_ndcg_at_3_diff1 value: 15.367489533734027 - type: nauc_ndcg_at_3_max value: 26.76486390741532 - type: nauc_ndcg_at_3_std value: 12.606077508789923 - type: nauc_ndcg_at_5_diff1 value: 13.831157482390935 - type: nauc_ndcg_at_5_max value: 28.070226983968904 - type: nauc_ndcg_at_5_std value: 15.236787943125435 - type: nauc_precision_at_1000_diff1 value: 0.016122957101357048 - type: nauc_precision_at_1000_max value: 24.380929903557334 - type: nauc_precision_at_1000_std value: 34.54045112720052 - type: nauc_precision_at_100_diff1 value: 7.255224788507301 - type: nauc_precision_at_100_max value: 27.98453788447542 - type: nauc_precision_at_100_std value: 35.38999555441665 - type: nauc_precision_at_10_diff1 value: 9.69185099834181 - type: nauc_precision_at_10_max value: 32.532315522580454 - type: nauc_precision_at_10_std value: 21.48948348473612 - type: nauc_precision_at_1_diff1 value: 19.43518811038007 - type: nauc_precision_at_1_max value: 17.747482933395602 - type: nauc_precision_at_1_std value: 8.410779775558684 - type: nauc_precision_at_20_diff1 value: 6.964076536695672 - type: nauc_precision_at_20_max value: 29.30087236410044 - type: nauc_precision_at_20_std value: 26.413625895571986 - type: nauc_precision_at_3_diff1 value: 14.145134359925155 - type: nauc_precision_at_3_max value: 29.915650960808303 - type: nauc_precision_at_3_std value: 14.095370019867797 - type: nauc_precision_at_5_diff1 value: 11.043933558522692 - type: nauc_precision_at_5_max value: 30.93016505807111 - type: nauc_precision_at_5_std value: 17.749256196062603 - type: nauc_recall_at_1000_diff1 value: -0.7776817772090345 - type: nauc_recall_at_1000_max value: 23.094717340324518 - type: nauc_recall_at_1000_std value: 37.189908681396425 - type: nauc_recall_at_100_diff1 value: 6.887748742013364 - type: nauc_recall_at_100_max value: 27.00798435230277 - type: nauc_recall_at_100_std value: 35.908147807345344 - type: nauc_recall_at_10_diff1 value: 9.605632017480751 - type: nauc_recall_at_10_max value: 31.845202901168655 - type: nauc_recall_at_10_std value: 21.497414586634683 - type: nauc_recall_at_1_diff1 value: 19.353069488987916 - type: nauc_recall_at_1_max value: 17.093914951159693 - type: nauc_recall_at_1_std value: 8.19886078055046 - type: nauc_recall_at_20_diff1 value: 6.927503731844782 - type: nauc_recall_at_20_max value: 28.611698183338202 - type: nauc_recall_at_20_std value: 26.69018660149911 - type: nauc_recall_at_3_diff1 value: 14.043724087062268 - type: nauc_recall_at_3_max value: 29.269835821380465 - type: nauc_recall_at_3_std value: 14.104419605998094 - type: nauc_recall_at_5_diff1 value: 11.017319452873336 - type: nauc_recall_at_5_max value: 30.295720628306228 - type: nauc_recall_at_5_std value: 17.758048545573825 - type: ndcg_at_1 value: 28.999999999999996 - type: ndcg_at_10 value: 25.041999999999998 - type: ndcg_at_100 value: 35.045 - type: ndcg_at_1000 value: 40.803 - type: ndcg_at_20 value: 28.584 - type: ndcg_at_3 value: 23.249 - type: ndcg_at_5 value: 20.533 - type: precision_at_1 value: 28.999999999999996 - type: precision_at_10 value: 13.120000000000001 - type: precision_at_100 value: 2.7470000000000003 - type: precision_at_1000 value: 0.41200000000000003 - type: precision_at_20 value: 8.584999999999999 - type: precision_at_3 value: 21.633 - type: precision_at_5 value: 18.099999999999998 - type: recall_at_1 value: 5.893000000000001 - type: recall_at_10 value: 26.567 - type: recall_at_100 value: 55.800000000000004 - type: recall_at_1000 value: 83.608 - type: recall_at_20 value: 34.86 - type: recall_at_3 value: 13.153 - type: recall_at_5 value: 18.323 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: cosine_pearson value: 86.57284584320382 - type: cosine_spearman value: 82.20531642680812 - type: euclidean_pearson value: 83.94261758556554 - type: euclidean_spearman value: 82.20721497738559 - type: main_score value: 82.20531642680812 - type: manhattan_pearson value: 84.15902154703083 - type: manhattan_spearman value: 82.19506027155957 - type: pearson value: 86.57284584320382 - type: spearman value: 82.20531642680812 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cosine_pearson value: 86.28047602146931 - type: cosine_spearman value: 79.51504881448884 - type: euclidean_pearson value: 83.10545189967856 - type: euclidean_spearman value: 79.50586960492797 - type: main_score value: 79.51504881448884 - type: manhattan_pearson value: 83.44244457500889 - type: manhattan_spearman value: 79.730303339846 - type: pearson value: 86.28047602146931 - type: spearman value: 79.51504881448884 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cosine_pearson value: 88.74723553048702 - type: cosine_spearman value: 89.18936052329725 - type: euclidean_pearson value: 88.90400878928668 - type: euclidean_spearman value: 89.19174821431281 - type: main_score value: 89.18936052329725 - type: manhattan_pearson value: 88.81504628424054 - type: manhattan_spearman value: 89.18063294142597 - type: pearson value: 88.74723553048702 - type: spearman value: 89.18936052329725 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cosine_pearson value: 86.45403437836023 - type: cosine_spearman value: 85.14654611519086 - type: euclidean_pearson value: 85.87509624462743 - type: euclidean_spearman value: 85.1391108856681 - type: main_score value: 85.14654611519086 - type: manhattan_pearson value: 85.96635794953866 - type: manhattan_spearman value: 85.3271371527667 - type: pearson value: 86.45403437836023 - type: spearman value: 85.14654611519086 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cosine_pearson value: 87.84742260009705 - type: cosine_spearman value: 89.10215217191254 - type: euclidean_pearson value: 88.97393286325477 - type: euclidean_spearman value: 89.1014105509662 - type: main_score value: 89.10215217191254 - type: manhattan_pearson value: 89.31698781090151 - type: manhattan_spearman value: 89.53000001764433 - type: pearson value: 87.84742260009705 - type: spearman value: 89.10215217191254 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cosine_pearson value: 85.22397535461835 - type: cosine_spearman value: 87.14066355879785 - type: euclidean_pearson value: 86.31393364087295 - type: euclidean_spearman value: 87.14018892702765 - type: main_score value: 87.14066355879785 - type: manhattan_pearson value: 86.36366855248434 - type: manhattan_spearman value: 87.20858630423012 - type: pearson value: 85.22397535461835 - type: spearman value: 87.14066355879785 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 90.66131612061355 - type: cosine_spearman value: 90.97082650129164 - type: euclidean_pearson value: 90.98181906744969 - type: euclidean_spearman value: 90.99008476850047 - type: main_score value: 90.97082650129164 - type: manhattan_pearson value: 90.75245040709021 - type: manhattan_spearman value: 90.6199877691265 - type: pearson value: 90.66131612061355 - type: spearman value: 90.97082650129164 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 67.270656447085 - type: cosine_spearman value: 67.82870469746828 - type: euclidean_pearson value: 69.03857775285664 - type: euclidean_spearman value: 67.74455108773341 - type: main_score value: 67.82870469746828 - type: manhattan_pearson value: 69.25304172245812 - type: manhattan_spearman value: 68.00987097916055 - type: pearson value: 67.270656447085 - type: spearman value: 67.82870469746828 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cosine_pearson value: 87.17245205384889 - type: cosine_spearman value: 87.7360146030987 - type: euclidean_pearson value: 87.48919412794656 - type: euclidean_spearman value: 87.7312047878383 - type: main_score value: 87.7360146030987 - type: manhattan_pearson value: 87.61476224354806 - type: manhattan_spearman value: 87.95220889254693 - type: pearson value: 87.17245205384889 - type: spearman value: 87.7360146030987 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: main_score value: 88.43547871921146 - type: map value: 88.43547871921146 - type: mrr value: 96.5564473652709 - type: nAUC_map_diff1 value: -13.66029392579231 - type: nAUC_map_max value: 50.325613574053506 - type: nAUC_map_std value: 60.02986231275796 - type: nAUC_mrr_diff1 value: 23.83821476411125 - type: nAUC_mrr_max value: 86.72643311769906 - type: nAUC_mrr_std value: 72.12741063469213 - task: type: Retrieval dataset: name: MTEB SciFact type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: main_score value: 78.233 - type: map_at_1 value: 61.49400000000001 - type: map_at_10 value: 73.30600000000001 - type: map_at_100 value: 73.719 - type: map_at_1000 value: 73.724 - type: map_at_20 value: 73.611 - type: map_at_3 value: 70.626 - type: map_at_5 value: 72.417 - type: mrr_at_1 value: 64.66666666666666 - type: mrr_at_10 value: 74.30357142857143 - type: mrr_at_100 value: 74.56950898079988 - type: mrr_at_1000 value: 74.57295833098681 - type: mrr_at_20 value: 74.46165223665226 - type: mrr_at_3 value: 72.3888888888889 - type: mrr_at_5 value: 73.60555555555557 - type: nauc_map_at_1000_diff1 value: 76.51524604780636 - type: nauc_map_at_1000_max value: 53.48521938401881 - type: nauc_map_at_1000_std value: -7.347799382158861 - type: nauc_map_at_100_diff1 value: 76.5122888096236 - type: nauc_map_at_100_max value: 53.49221847471618 - type: nauc_map_at_100_std value: -7.329683735681086 - type: nauc_map_at_10_diff1 value: 76.30928630674504 - type: nauc_map_at_10_max value: 53.00102977185941 - type: nauc_map_at_10_std value: -7.7467740085108705 - type: nauc_map_at_1_diff1 value: 79.54189281784247 - type: nauc_map_at_1_max value: 46.630071622109526 - type: nauc_map_at_1_std value: -14.395943134644112 - type: nauc_map_at_20_diff1 value: 76.41604361947962 - type: nauc_map_at_20_max value: 53.578883876146875 - type: nauc_map_at_20_std value: -7.403103451288041 - type: nauc_map_at_3_diff1 value: 76.25911617571941 - type: nauc_map_at_3_max value: 49.140287380513605 - type: nauc_map_at_3_std value: -11.35992449218983 - type: nauc_map_at_5_diff1 value: 76.35122077770336 - type: nauc_map_at_5_max value: 52.1744367901208 - type: nauc_map_at_5_std value: -7.85753955055384 - type: nauc_mrr_at_1000_diff1 value: 76.97223309515867 - type: nauc_mrr_at_1000_max value: 57.263787498613326 - type: nauc_mrr_at_1000_std value: -4.884090708840035 - type: nauc_mrr_at_100_diff1 value: 76.97312970894603 - type: nauc_mrr_at_100_max value: 57.26850730446478 - type: nauc_mrr_at_100_std value: -4.875200894216617 - type: nauc_mrr_at_10_diff1 value: 76.65927674223613 - type: nauc_mrr_at_10_max value: 57.30979763941454 - type: nauc_mrr_at_10_std value: -4.863331094022142 - type: nauc_mrr_at_1_diff1 value: 80.0454932568644 - type: nauc_mrr_at_1_max value: 56.76038421319305 - type: nauc_mrr_at_1_std value: -4.101939392632653 - type: nauc_mrr_at_20_diff1 value: 76.87237970440503 - type: nauc_mrr_at_20_max value: 57.33843605225869 - type: nauc_mrr_at_20_std value: -4.96248984417978 - type: nauc_mrr_at_3_diff1 value: 76.74130186666727 - type: nauc_mrr_at_3_max value: 56.19313244846155 - type: nauc_mrr_at_3_std value: -5.684365934009136 - type: nauc_mrr_at_5_diff1 value: 76.66406918799962 - type: nauc_mrr_at_5_max value: 57.56110093228628 - type: nauc_mrr_at_5_std value: -3.7464413085588073 - type: nauc_ndcg_at_1000_diff1 value: 76.19194173971773 - type: nauc_ndcg_at_1000_max value: 55.57464600170693 - type: nauc_ndcg_at_1000_std value: -6.0761689532372625 - type: nauc_ndcg_at_100_diff1 value: 76.14631273843654 - type: nauc_ndcg_at_100_max value: 55.72246565373382 - type: nauc_ndcg_at_100_std value: -5.595160698860595 - type: nauc_ndcg_at_10_diff1 value: 75.0108223611192 - type: nauc_ndcg_at_10_max value: 55.27894212877493 - type: nauc_ndcg_at_10_std value: -6.968331740214591 - type: nauc_ndcg_at_1_diff1 value: 80.0454932568644 - type: nauc_ndcg_at_1_max value: 56.76038421319305 - type: nauc_ndcg_at_1_std value: -4.101939392632653 - type: nauc_ndcg_at_20_diff1 value: 75.54887755702472 - type: nauc_ndcg_at_20_max value: 56.406879417251496 - type: nauc_ndcg_at_20_std value: -6.495231061329629 - type: nauc_ndcg_at_3_diff1 value: 75.03620356688509 - type: nauc_ndcg_at_3_max value: 52.147381077773424 - type: nauc_ndcg_at_3_std value: -8.448005688956199 - type: nauc_ndcg_at_5_diff1 value: 75.1195898074229 - type: nauc_ndcg_at_5_max value: 54.2321033861173 - type: nauc_ndcg_at_5_std value: -5.882690780895338 - type: nauc_precision_at_1000_diff1 value: -28.081979732100532 - type: nauc_precision_at_1000_max value: 35.055348014832916 - type: nauc_precision_at_1000_std value: 59.61280468927384 - type: nauc_precision_at_100_diff1 value: -25.112740730587458 - type: nauc_precision_at_100_max value: 38.26331300116496 - type: nauc_precision_at_100_std value: 62.46316222328831 - type: nauc_precision_at_10_diff1 value: -2.6766206473658833 - type: nauc_precision_at_10_max value: 45.95321867204845 - type: nauc_precision_at_10_std value: 45.07212468670564 - type: nauc_precision_at_1_diff1 value: 80.0454932568644 - type: nauc_precision_at_1_max value: 56.76038421319305 - type: nauc_precision_at_1_std value: -4.101939392632653 - type: nauc_precision_at_20_diff1 value: -10.698911116738385 - type: nauc_precision_at_20_max value: 43.467275950182994 - type: nauc_precision_at_20_std value: 48.00467321991766 - type: nauc_precision_at_3_diff1 value: 33.6344708541193 - type: nauc_precision_at_3_max value: 49.309242331670504 - type: nauc_precision_at_3_std value: 21.02940391379915 - type: nauc_precision_at_5_diff1 value: 13.560415600596318 - type: nauc_precision_at_5_max value: 48.918726500100085 - type: nauc_precision_at_5_std value: 39.940930429172184 - type: nauc_recall_at_1000_diff1 value: .nan - type: nauc_recall_at_1000_max value: .nan - type: nauc_recall_at_1000_std value: .nan - type: nauc_recall_at_100_diff1 value: 70.82166199813196 - type: nauc_recall_at_100_max value: 76.6106442577042 - type: nauc_recall_at_100_std value: 66.47992530345513 - type: nauc_recall_at_10_diff1 value: 62.68908885556092 - type: nauc_recall_at_10_max value: 58.14262437741839 - type: nauc_recall_at_10_std value: -12.946717875063369 - type: nauc_recall_at_1_diff1 value: 79.54189281784247 - type: nauc_recall_at_1_max value: 46.630071622109526 - type: nauc_recall_at_1_std value: -14.395943134644112 - type: nauc_recall_at_20_diff1 value: 65.79470497876567 - type: nauc_recall_at_20_max value: 71.68308183488456 - type: nauc_recall_at_20_std value: -12.556850697268453 - type: nauc_recall_at_3_diff1 value: 68.3240211318129 - type: nauc_recall_at_3_max value: 45.05998217275036 - type: nauc_recall_at_3_std value: -14.23179772593869 - type: nauc_recall_at_5_diff1 value: 67.53366869904056 - type: nauc_recall_at_5_max value: 53.57935627081027 - type: nauc_recall_at_5_std value: -3.3271112904853393 - type: ndcg_at_1 value: 64.667 - type: ndcg_at_10 value: 78.233 - type: ndcg_at_100 value: 79.806 - type: ndcg_at_1000 value: 79.92099999999999 - type: ndcg_at_20 value: 79.006 - type: ndcg_at_3 value: 74.018 - type: ndcg_at_5 value: 76.334 - type: precision_at_1 value: 64.667 - type: precision_at_10 value: 10.4 - type: precision_at_100 value: 1.1199999999999999 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_20 value: 5.383 - type: precision_at_3 value: 29.444 - type: precision_at_5 value: 19.467000000000002 - type: recall_at_1 value: 61.49400000000001 - type: recall_at_10 value: 92.156 - type: recall_at_100 value: 99.167 - type: recall_at_1000 value: 100.0 - type: recall_at_20 value: 94.833 - type: recall_at_3 value: 80.833 - type: recall_at_5 value: 86.6 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cosine_accuracy value: 99.8039603960396 - type: cosine_accuracy_threshold value: 84.54211950302124 - type: cosine_ap value: 95.59056372734358 - type: cosine_f1 value: 90.1394422310757 - type: cosine_f1_threshold value: 84.54211950302124 - type: cosine_precision value: 89.78174603174604 - type: cosine_recall value: 90.5 - type: dot_accuracy value: 99.80594059405941 - type: dot_accuracy_threshold value: 85.57180166244507 - type: dot_ap value: 95.53453431914399 - type: dot_f1 value: 90.10442565887618 - type: dot_f1_threshold value: 84.59715843200684 - type: dot_precision value: 89.61424332344214 - type: dot_recall value: 90.60000000000001 - type: euclidean_accuracy value: 99.8039603960396 - type: euclidean_accuracy_threshold value: 53.253382444381714 - type: euclidean_ap value: 95.5850992402159 - type: euclidean_f1 value: 90.09457441513192 - type: euclidean_f1_threshold value: 55.725520849227905 - type: euclidean_precision value: 89.69276511397423 - type: euclidean_recall value: 90.5 - type: main_score value: 95.7485189884476 - type: manhattan_accuracy value: 99.81485148514851 - type: manhattan_accuracy_threshold value: 3491.29638671875 - type: manhattan_ap value: 95.7485189884476 - type: manhattan_f1 value: 90.464048954615 - type: manhattan_f1_threshold value: 3491.29638671875 - type: manhattan_precision value: 92.2996878251821 - type: manhattan_recall value: 88.7 - type: max_ap value: 95.7485189884476 - type: max_f1 value: 90.464048954615 - type: max_precision value: 92.2996878251821 - type: max_recall value: 90.60000000000001 - type: similarity_accuracy value: 99.8039603960396 - type: similarity_accuracy_threshold value: 84.54211950302124 - type: similarity_ap value: 95.59056372734358 - type: similarity_f1 value: 90.1394422310757 - type: similarity_f1_threshold value: 84.54211950302124 - type: similarity_precision value: 89.78174603174604 - type: similarity_recall value: 90.5 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: main_score value: 78.49205191950675 - type: v_measure value: 78.49205191950675 - type: v_measure_std value: 2.84869550699959 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: main_score value: 48.90421736513028 - type: v_measure value: 48.90421736513028 - type: v_measure_std value: 1.6875865714471023 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: main_score value: 52.9874730481696 - type: map value: 52.9874730481696 - type: mrr value: 53.85867604617604 - type: nAUC_map_diff1 value: 39.633429293407616 - type: nAUC_map_max value: 10.236807988858546 - type: nAUC_map_std value: 10.276522217929674 - type: nAUC_mrr_diff1 value: 40.0543079218377 - type: nAUC_mrr_max value: 10.96209807382042 - type: nAUC_mrr_std value: 10.524400196109918 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cosine_pearson value: 30.727801109114232 - type: cosine_spearman value: 31.66058223980157 - type: dot_pearson value: 30.78818248622866 - type: dot_spearman value: 31.525158776890265 - type: main_score value: 31.66058223980157 - type: pearson value: 30.727801109114232 - type: spearman value: 31.66058223980157 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: mteb/trec-covid config: default split: test revision: bb9466bac8153a0349341eb1b22e06409e78ef4e metrics: - type: main_score value: 85.206 - type: map_at_1 value: 0.246 - type: map_at_10 value: 2.1950000000000003 - type: map_at_100 value: 14.179 - type: map_at_1000 value: 35.037 - type: map_at_20 value: 4.143 - type: map_at_3 value: 0.7100000000000001 - type: map_at_5 value: 1.135 - type: mrr_at_1 value: 94.0 - type: mrr_at_10 value: 96.66666666666666 - type: mrr_at_100 value: 96.66666666666666 - type: mrr_at_1000 value: 96.66666666666666 - type: mrr_at_20 value: 96.66666666666666 - type: mrr_at_3 value: 96.66666666666666 - type: mrr_at_5 value: 96.66666666666666 - type: nauc_map_at_1000_diff1 value: -4.6264497624527525 - type: nauc_map_at_1000_max value: 44.594457564749355 - type: nauc_map_at_1000_std value: 73.17642341400133 - type: nauc_map_at_100_diff1 value: 23.451335157405726 - type: nauc_map_at_100_max value: 25.426398857299525 - type: nauc_map_at_100_std value: 64.07416694472633 - type: nauc_map_at_10_diff1 value: 46.57568738568346 - type: nauc_map_at_10_max value: 9.693233249079238 - type: nauc_map_at_10_std value: 28.549530265164357 - type: nauc_map_at_1_diff1 value: 53.48238396620123 - type: nauc_map_at_1_max value: 0.33476619393733076 - type: nauc_map_at_1_std value: 8.906362219128463 - type: nauc_map_at_20_diff1 value: 39.40719602207749 - type: nauc_map_at_20_max value: 9.635915072074045 - type: nauc_map_at_20_std value: 35.15634791346394 - type: nauc_map_at_3_diff1 value: 53.11784737840137 - type: nauc_map_at_3_max value: 3.059682761072153 - type: nauc_map_at_3_std value: 21.310633086556617 - type: nauc_map_at_5_diff1 value: 49.91570701185436 - type: nauc_map_at_5_max value: 8.045082896244576 - type: nauc_map_at_5_std value: 20.597686235051647 - type: nauc_mrr_at_1000_diff1 value: 41.98412698412726 - type: nauc_mrr_at_1000_max value: 78.24463118580779 - type: nauc_mrr_at_1000_std value: 0.30812324930028195 - type: nauc_mrr_at_100_diff1 value: 41.98412698412726 - type: nauc_mrr_at_100_max value: 78.24463118580779 - type: nauc_mrr_at_100_std value: 0.30812324930028195 - type: nauc_mrr_at_10_diff1 value: 41.98412698412726 - type: nauc_mrr_at_10_max value: 78.24463118580779 - type: nauc_mrr_at_10_std value: 0.30812324930028195 - type: nauc_mrr_at_1_diff1 value: 38.62433862433873 - type: nauc_mrr_at_1_max value: 80.78120136943666 - type: nauc_mrr_at_1_std value: -10.768751945222197 - type: nauc_mrr_at_20_diff1 value: 41.98412698412726 - type: nauc_mrr_at_20_max value: 78.24463118580779 - type: nauc_mrr_at_20_std value: 0.30812324930028195 - type: nauc_mrr_at_3_diff1 value: 41.98412698412726 - type: nauc_mrr_at_3_max value: 78.24463118580779 - type: nauc_mrr_at_3_std value: 0.30812324930028195 - type: nauc_mrr_at_5_diff1 value: 41.98412698412726 - type: nauc_mrr_at_5_max value: 78.24463118580779 - type: nauc_mrr_at_5_std value: 0.30812324930028195 - type: nauc_ndcg_at_1000_diff1 value: 0.5174948602880207 - type: nauc_ndcg_at_1000_max value: 48.60686602077053 - type: nauc_ndcg_at_1000_std value: 75.72456343175277 - type: nauc_ndcg_at_100_diff1 value: -20.747252137999254 - type: nauc_ndcg_at_100_max value: 49.985132618254994 - type: nauc_ndcg_at_100_std value: 61.096383293836574 - type: nauc_ndcg_at_10_diff1 value: 6.791377920463332 - type: nauc_ndcg_at_10_max value: 57.50019332833286 - type: nauc_ndcg_at_10_std value: 49.201028841219426 - type: nauc_ndcg_at_1_diff1 value: 54.92683440362145 - type: nauc_ndcg_at_1_max value: 83.8667228129276 - type: nauc_ndcg_at_1_std value: 1.6738604063586122 - type: nauc_ndcg_at_20_diff1 value: -5.1948699196314925 - type: nauc_ndcg_at_20_max value: 54.483087684806556 - type: nauc_ndcg_at_20_std value: 50.54823818118781 - type: nauc_ndcg_at_3_diff1 value: 26.267246500164372 - type: nauc_ndcg_at_3_max value: 63.0173212926611 - type: nauc_ndcg_at_3_std value: 41.025597406368256 - type: nauc_ndcg_at_5_diff1 value: 16.910185454343036 - type: nauc_ndcg_at_5_max value: 60.9328683868778 - type: nauc_ndcg_at_5_std value: 36.70169905857712 - type: nauc_precision_at_1000_diff1 value: -46.374447765983525 - type: nauc_precision_at_1000_max value: 35.36052337813863 - type: nauc_precision_at_1000_std value: 14.219220668161018 - type: nauc_precision_at_100_diff1 value: -29.7838083657744 - type: nauc_precision_at_100_max value: 43.93589400385112 - type: nauc_precision_at_100_std value: 55.425045718579945 - type: nauc_precision_at_10_diff1 value: -12.016613405227687 - type: nauc_precision_at_10_max value: 57.79924427743131 - type: nauc_precision_at_10_std value: 49.022036703550675 - type: nauc_precision_at_1_diff1 value: 38.62433862433873 - type: nauc_precision_at_1_max value: 80.78120136943666 - type: nauc_precision_at_1_std value: -10.768751945222197 - type: nauc_precision_at_20_diff1 value: -23.95633847880195 - type: nauc_precision_at_20_max value: 48.34715917258276 - type: nauc_precision_at_20_std value: 48.82198285255887 - type: nauc_precision_at_3_diff1 value: 6.871296905858807 - type: nauc_precision_at_3_max value: 70.54805793285054 - type: nauc_precision_at_3_std value: 44.65108624094803 - type: nauc_precision_at_5_diff1 value: -9.074932448759695 - type: nauc_precision_at_5_max value: 67.41284242437573 - type: nauc_precision_at_5_std value: 23.876891983919577 - type: nauc_recall_at_1000_diff1 value: 8.142288830293255 - type: nauc_recall_at_1000_max value: 38.85182826835104 - type: nauc_recall_at_1000_std value: 68.60783819217335 - type: nauc_recall_at_100_diff1 value: 34.262914076287466 - type: nauc_recall_at_100_max value: 12.87009658528838 - type: nauc_recall_at_100_std value: 56.21330603762995 - type: nauc_recall_at_10_diff1 value: 49.33830945338758 - type: nauc_recall_at_10_max value: 0.3539875530671406 - type: nauc_recall_at_10_std value: 26.85864465557644 - type: nauc_recall_at_1_diff1 value: 53.48238396620123 - type: nauc_recall_at_1_max value: 0.33476619393733076 - type: nauc_recall_at_1_std value: 8.906362219128463 - type: nauc_recall_at_20_diff1 value: 44.21928181266254 - type: nauc_recall_at_20_max value: -0.9198356057088594 - type: nauc_recall_at_20_std value: 31.484376992896784 - type: nauc_recall_at_3_diff1 value: 53.038093080990876 - type: nauc_recall_at_3_max value: -1.4170895916973003 - type: nauc_recall_at_3_std value: 21.890202855574497 - type: nauc_recall_at_5_diff1 value: 49.39742214825278 - type: nauc_recall_at_5_max value: 2.8412267611894517 - type: nauc_recall_at_5_std value: 18.01598921859512 - type: ndcg_at_1 value: 91.0 - type: ndcg_at_10 value: 85.206 - type: ndcg_at_100 value: 67.29 - type: ndcg_at_1000 value: 60.584 - type: ndcg_at_20 value: 82.321 - type: ndcg_at_3 value: 88.642 - type: ndcg_at_5 value: 87.063 - type: precision_at_1 value: 94.0 - type: precision_at_10 value: 89.8 - type: precision_at_100 value: 69.78 - type: precision_at_1000 value: 26.738 - type: precision_at_20 value: 87.2 - type: precision_at_3 value: 92.0 - type: precision_at_5 value: 90.8 - type: recall_at_1 value: 0.246 - type: recall_at_10 value: 2.344 - type: recall_at_100 value: 16.962 - type: recall_at_1000 value: 57.325 - type: recall_at_20 value: 4.517 - type: recall_at_3 value: 0.731 - type: recall_at_5 value: 1.1780000000000002 - task: type: Retrieval dataset: name: MTEB Touche2020 type: mteb/touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: main_score value: 31.455 - type: map_at_1 value: 2.9739999999999998 - type: map_at_10 value: 12.183 - type: map_at_100 value: 18.772 - type: map_at_1000 value: 20.415 - type: map_at_20 value: 14.451 - type: map_at_3 value: 6.507000000000001 - type: map_at_5 value: 8.66 - type: mrr_at_1 value: 40.816326530612244 - type: mrr_at_10 value: 57.70975056689341 - type: mrr_at_100 value: 58.18379126542391 - type: mrr_at_1000 value: 58.18379126542391 - type: mrr_at_20 value: 57.85552316164561 - type: mrr_at_3 value: 54.08163265306123 - type: mrr_at_5 value: 56.42857142857143 - type: nauc_map_at_1000_diff1 value: 3.1567471051481437 - type: nauc_map_at_1000_max value: -1.5882060729791523 - type: nauc_map_at_1000_std value: 18.69622198722074 - type: nauc_map_at_100_diff1 value: 3.3449677678147536 - type: nauc_map_at_100_max value: -2.8928606866168405 - type: nauc_map_at_100_std value: 15.789984947653412 - type: nauc_map_at_10_diff1 value: 2.9696743570444264 - type: nauc_map_at_10_max value: -9.096749212011876 - type: nauc_map_at_10_std value: -5.38545817258353 - type: nauc_map_at_1_diff1 value: 20.680780404542546 - type: nauc_map_at_1_max value: -7.04722927447817 - type: nauc_map_at_1_std value: -7.062494733973898 - type: nauc_map_at_20_diff1 value: 4.070437790119271 - type: nauc_map_at_20_max value: -4.84491434686032 - type: nauc_map_at_20_std value: 0.5846341109021014 - type: nauc_map_at_3_diff1 value: 11.9634978045925 - type: nauc_map_at_3_max value: -8.27834591046608 - type: nauc_map_at_3_std value: -8.687615453381065 - type: nauc_map_at_5_diff1 value: 0.9195191526009436 - type: nauc_map_at_5_max value: -1.673813362719489 - type: nauc_map_at_5_std value: -6.67549753473631 - type: nauc_mrr_at_1000_diff1 value: 19.877993208719573 - type: nauc_mrr_at_1000_max value: -10.37776706406218 - type: nauc_mrr_at_1000_std value: 7.132169578056367 - type: nauc_mrr_at_100_diff1 value: 19.877993208719573 - type: nauc_mrr_at_100_max value: -10.37776706406218 - type: nauc_mrr_at_100_std value: 7.132169578056367 - type: nauc_mrr_at_10_diff1 value: 20.414285568401457 - type: nauc_mrr_at_10_max value: -9.677800295687861 - type: nauc_mrr_at_10_std value: 8.001103690180859 - type: nauc_mrr_at_1_diff1 value: 22.393284073955723 - type: nauc_mrr_at_1_max value: -5.889370191243167 - type: nauc_mrr_at_1_std value: -1.5183536173658247 - type: nauc_mrr_at_20_diff1 value: 20.455564720604055 - type: nauc_mrr_at_20_max value: -10.230642830103074 - type: nauc_mrr_at_20_std value: 7.863582453266621 - type: nauc_mrr_at_3_diff1 value: 17.554895390732618 - type: nauc_mrr_at_3_max value: -15.618463505555052 - type: nauc_mrr_at_3_std value: 5.913231577966864 - type: nauc_mrr_at_5_diff1 value: 18.393678507779914 - type: nauc_mrr_at_5_max value: -11.903593353147762 - type: nauc_mrr_at_5_std value: 7.580745996262831 - type: nauc_ndcg_at_1000_diff1 value: 13.746937095530473 - type: nauc_ndcg_at_1000_max value: -0.9319249687895838 - type: nauc_ndcg_at_1000_std value: 38.56328031451904 - type: nauc_ndcg_at_100_diff1 value: 13.854865944415895 - type: nauc_ndcg_at_100_max value: -7.142142012591404 - type: nauc_ndcg_at_100_std value: 35.61341954818848 - type: nauc_ndcg_at_10_diff1 value: 9.010144273248759 - type: nauc_ndcg_at_10_max value: -15.320014897424574 - type: nauc_ndcg_at_10_std value: 2.84883880489144 - type: nauc_ndcg_at_1_diff1 value: 20.939533945592967 - type: nauc_ndcg_at_1_max value: -6.387319972188946 - type: nauc_ndcg_at_1_std value: -0.5258673122126726 - type: nauc_ndcg_at_20_diff1 value: 14.660827309009496 - type: nauc_ndcg_at_20_max value: -13.476196120145994 - type: nauc_ndcg_at_20_std value: 8.22391881710838 - type: nauc_ndcg_at_3_diff1 value: 13.429985227235935 - type: nauc_ndcg_at_3_max value: -14.904544592570247 - type: nauc_ndcg_at_3_std value: 1.599779998183342 - type: nauc_ndcg_at_5_diff1 value: 8.085466231900622 - type: nauc_ndcg_at_5_max value: -9.09591969526831 - type: nauc_ndcg_at_5_std value: 3.5794092637248505 - type: nauc_precision_at_1000_diff1 value: -9.31941215946743 - type: nauc_precision_at_1000_max value: 31.52913520470716 - type: nauc_precision_at_1000_std value: 22.720784312185856 - type: nauc_precision_at_100_diff1 value: 8.958548406995279 - type: nauc_precision_at_100_max value: 15.100597910674104 - type: nauc_precision_at_100_std value: 71.04548238175113 - type: nauc_precision_at_10_diff1 value: 12.4698194690008 - type: nauc_precision_at_10_max value: -15.84870544871496 - type: nauc_precision_at_10_std value: 7.575297622501928 - type: nauc_precision_at_1_diff1 value: 22.393284073955723 - type: nauc_precision_at_1_max value: -5.889370191243167 - type: nauc_precision_at_1_std value: -1.5183536173658247 - type: nauc_precision_at_20_diff1 value: 15.393505718138758 - type: nauc_precision_at_20_max value: -3.70684298539384 - type: nauc_precision_at_20_std value: 29.426137824970304 - type: nauc_precision_at_3_diff1 value: 9.997768085465394 - type: nauc_precision_at_3_max value: -17.12224314347674 - type: nauc_precision_at_3_std value: -1.343018166772313 - type: nauc_precision_at_5_diff1 value: 3.8936997437913554 - type: nauc_precision_at_5_max value: -5.689104289687632 - type: nauc_precision_at_5_std value: 3.181098051304285 - type: nauc_recall_at_1000_diff1 value: 9.908303508158387 - type: nauc_recall_at_1000_max value: 6.174506592699848 - type: nauc_recall_at_1000_std value: 77.41931114780012 - type: nauc_recall_at_100_diff1 value: 10.286839241876192 - type: nauc_recall_at_100_max value: -6.6138697026666815 - type: nauc_recall_at_100_std value: 49.608313692633224 - type: nauc_recall_at_10_diff1 value: 2.215545846659851 - type: nauc_recall_at_10_max value: -17.83025802478445 - type: nauc_recall_at_10_std value: -3.3784768673705465 - type: nauc_recall_at_1_diff1 value: 20.680780404542546 - type: nauc_recall_at_1_max value: -7.04722927447817 - type: nauc_recall_at_1_std value: -7.062494733973898 - type: nauc_recall_at_20_diff1 value: 6.974410239251615 - type: nauc_recall_at_20_max value: -14.161147924731646 - type: nauc_recall_at_20_std value: 9.328412057721454 - type: nauc_recall_at_3_diff1 value: 7.904589805754212 - type: nauc_recall_at_3_max value: -12.1912388648593 - type: nauc_recall_at_3_std value: -9.221542013385555 - type: nauc_recall_at_5_diff1 value: -3.2604132752706914 - type: nauc_recall_at_5_max value: -6.886351441658915 - type: nauc_recall_at_5_std value: -7.014252851712789 - type: ndcg_at_1 value: 39.796 - type: ndcg_at_10 value: 31.455 - type: ndcg_at_100 value: 42.388999999999996 - type: ndcg_at_1000 value: 53.556000000000004 - type: ndcg_at_20 value: 30.808000000000003 - type: ndcg_at_3 value: 35.831 - type: ndcg_at_5 value: 32.845 - type: precision_at_1 value: 40.816 - type: precision_at_10 value: 27.143 - type: precision_at_100 value: 8.449 - type: precision_at_1000 value: 1.6179999999999999 - type: precision_at_20 value: 19.387999999999998 - type: precision_at_3 value: 35.374 - type: precision_at_5 value: 31.019999999999996 - type: recall_at_1 value: 2.9739999999999998 - type: recall_at_10 value: 19.39 - type: recall_at_100 value: 51.636 - type: recall_at_1000 value: 86.99900000000001 - type: recall_at_20 value: 26.478 - type: recall_at_3 value: 7.703 - type: recall_at_5 value: 11.42 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de metrics: - type: accuracy value: 86.9384765625 - type: ap value: 31.737513704141552 - type: ap_weighted value: 31.737513704141552 - type: f1 value: 71.5490757306975 - type: f1_weighted value: 89.14632533489856 - type: main_score value: 86.9384765625 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 73.57668364459535 - type: f1 value: 73.90467103648074 - type: f1_weighted value: 73.42158415034704 - type: main_score value: 73.57668364459535 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: main_score value: 58.574148097494685 - type: v_measure value: 58.574148097494685 - type: v_measure_std value: 0.9443161637490822 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cosine_accuracy value: 88.1385229778864 - type: cosine_accuracy_threshold value: 83.86307954788208 - type: cosine_ap value: 80.17965893449055 - type: cosine_f1 value: 73.0614300100705 - type: cosine_f1_threshold value: 80.7942807674408 - type: cosine_precision value: 69.8603755416466 - type: cosine_recall value: 76.56992084432717 - type: dot_accuracy value: 88.2100494724921 - type: dot_accuracy_threshold value: 83.84793996810913 - type: dot_ap value: 80.18603932881858 - type: dot_f1 value: 73.07643714466204 - type: dot_f1_threshold value: 80.87586164474487 - type: dot_precision value: 70.10909090909091 - type: dot_recall value: 76.3060686015831 - type: euclidean_accuracy value: 88.1385229778864 - type: euclidean_accuracy_threshold value: 56.77661895751953 - type: euclidean_ap value: 80.1784070881624 - type: euclidean_f1 value: 73.04830369529574 - type: euclidean_f1_threshold value: 61.91838979721069 - type: euclidean_precision value: 69.96859144720948 - type: euclidean_recall value: 76.41160949868075 - type: main_score value: 80.18603932881858 - type: manhattan_accuracy value: 88.0431543184121 - type: manhattan_accuracy_threshold value: 3755.6137084960938 - type: manhattan_ap value: 79.98270453664578 - type: manhattan_f1 value: 72.68242015061023 - type: manhattan_f1_threshold value: 3892.494583129883 - type: manhattan_precision value: 71.54907975460122 - type: manhattan_recall value: 73.85224274406332 - type: max_ap value: 80.18603932881858 - type: max_f1 value: 73.07643714466204 - type: max_precision value: 71.54907975460122 - type: max_recall value: 76.56992084432717 - type: similarity_accuracy value: 88.1385229778864 - type: similarity_accuracy_threshold value: 83.86307954788208 - type: similarity_ap value: 80.17965893449055 - type: similarity_f1 value: 73.0614300100705 - type: similarity_f1_threshold value: 80.7942807674408 - type: similarity_precision value: 69.8603755416466 - type: similarity_recall value: 76.56992084432717 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cosine_accuracy value: 89.7892653393876 - type: cosine_accuracy_threshold value: 79.69566583633423 - type: cosine_ap value: 87.4579867302024 - type: cosine_f1 value: 79.91620843152658 - type: cosine_f1_threshold value: 78.53609323501587 - type: cosine_precision value: 77.7155329210622 - type: cosine_recall value: 82.24514936864799 - type: dot_accuracy value: 89.78732487289945 - type: dot_accuracy_threshold value: 80.05315661430359 - type: dot_ap value: 87.44916182456272 - type: dot_f1 value: 79.90419878751591 - type: dot_f1_threshold value: 78.57890725135803 - type: dot_precision value: 77.73409057812728 - type: dot_recall value: 82.19895287958116 - type: euclidean_accuracy value: 89.78538440641131 - type: euclidean_accuracy_threshold value: 62.29925751686096 - type: euclidean_ap value: 87.45904868911386 - type: euclidean_f1 value: 79.93127404474657 - type: euclidean_f1_threshold value: 65.61101078987122 - type: euclidean_precision value: 77.62060210373595 - type: euclidean_recall value: 82.38373883584848 - type: main_score value: 87.46554314325058 - type: manhattan_accuracy value: 89.76597974152986 - type: manhattan_accuracy_threshold value: 3988.5299682617188 - type: manhattan_ap value: 87.46554314325058 - type: manhattan_f1 value: 79.97181740645973 - type: manhattan_f1_threshold value: 4235.905838012695 - type: manhattan_precision value: 77.13713427283783 - type: manhattan_recall value: 83.02279026793964 - type: max_ap value: 87.46554314325058 - type: max_f1 value: 79.97181740645973 - type: max_precision value: 77.73409057812728 - type: max_recall value: 83.02279026793964 - type: similarity_accuracy value: 89.7892653393876 - type: similarity_accuracy_threshold value: 79.69566583633423 - type: similarity_ap value: 87.4579867302024 - type: similarity_f1 value: 79.91620843152658 - type: similarity_f1_threshold value: 78.53609323501587 - type: similarity_precision value: 77.7155329210622 - type: similarity_recall value: 82.24514936864799 --- # Updates New open-source models and ToDoList will be listed on https://github.com/DunZhang/Stella/blob/main/news_and_todo.md. You can also find these models on my [homepage](https://huggingface.co/infgrad). # Introduction The models are trained based on `Alibaba-NLP/gte-large-en-v1.5` and `Alibaba-NLP/gte-Qwen2-1.5B-instruct`. Thanks for their contributions! **We simplify usage of prompts, providing two prompts for most general tasks, one is for s2p, another one is for s2s.** Prompt of s2p task(e.g. retrieve task): ```text Instruct: Given a web search query, retrieve relevant passages that answer the query.\nQuery: {query} ``` Prompt of s2s task(e.g. semantic textual similarity task): ```text Instruct: Retrieve semantically similar text.\nQuery: {query} ``` The models are finally trained by [MRL](https://arxiv.org/abs/2205.13147), so they have multiple dimensions: 512, 768, 1024, 2048, 4096, 6144 and 8192. The higher the dimension, the better the performance. **Generally speaking, 1024d is good enough.** The MTEB score of 1024d is only 0.001 lower than 8192d. # Model directory structure The model directory structure is very simple, it is a standard SentenceTransformer directory **with a series of `2_Dense_{dims}` folders**, where `dims` represents the final vector dimension. For example, the `2_Dense_256` folder stores Linear weights that convert vector dimensions to 256 dimensions. Please refer to the following chapters for specific instructions on how to use them. # Usage You can use `SentenceTransformers` or `transformers` library to encode text. ## Sentence Transformers ```python from sentence_transformers import SentenceTransformer # This model supports two prompts: "s2p_query" and "s2s_query" for sentence-to-passage and sentence-to-sentence tasks, respectively. # They are defined in `config_sentence_transformers.json` query_prompt_name = "s2p_query" queries = [ "What are some ways to reduce stress?", "What are the benefits of drinking green tea?", ] # docs do not need any prompts docs = [ "There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent stress from building up.", "Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties.", ] # !The default dimension is 1024, if you need other dimensions, please clone the model and modify `modules.json` to replace `2_Dense_1024` with another dimension, e.g. `2_Dense_256` or `2_Dense_8192` ! # on gpu model = SentenceTransformer("dunzhang/stella_en_400M_v5", trust_remote_code=True).cuda() # you can also use this model without the features of `use_memory_efficient_attention` and `unpad_inputs`. It can be worked in CPU. # model = SentenceTransformer( # "dunzhang/stella_en_400M_v5", # trust_remote_code=True, # device="cpu", # config_kwargs={"use_memory_efficient_attention": False, "unpad_inputs": False} # ) query_embeddings = model.encode(queries, prompt_name=query_prompt_name) doc_embeddings = model.encode(docs) print(query_embeddings.shape, doc_embeddings.shape) # (2, 1024) (2, 1024) similarities = model.similarity(query_embeddings, doc_embeddings) print(similarities) # tensor([[0.8398, 0.2990], # [0.3282, 0.8095]]) ``` ## Transformers ```python import os import torch from transformers import AutoModel, AutoTokenizer from sklearn.preprocessing import normalize query_prompt = "Instruct: Given a web search query, retrieve relevant passages that answer the query.\nQuery: " queries = [ "What are some ways to reduce stress?", "What are the benefits of drinking green tea?", ] queries = [query_prompt + query for query in queries] # docs do not need any prompts docs = [ "There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent stress from building up.", "Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties.", ] # The path of your model after cloning it model_dir = "{Your MODEL_PATH}" vector_dim = 1024 vector_linear_directory = f"2_Dense_{vector_dim}" model = AutoModel.from_pretrained(model_dir, trust_remote_code=True).cuda().eval() # you can also use this model without the features of `use_memory_efficient_attention` and `unpad_inputs`. It can be worked in CPU. # model = AutoModel.from_pretrained(model_dir, trust_remote_code=True,use_memory_efficient_attention=False,unpad_inputs=False).cuda().eval() tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True) vector_linear = torch.nn.Linear(in_features=model.config.hidden_size, out_features=vector_dim) vector_linear_dict = { k.replace("linear.", ""): v for k, v in torch.load(os.path.join(model_dir, f"{vector_linear_directory}/pytorch_model.bin")).items() } vector_linear.load_state_dict(vector_linear_dict) vector_linear.cuda() # Embed the queries with torch.no_grad(): input_data = tokenizer(queries, padding="longest", truncation=True, max_length=512, return_tensors="pt") input_data = {k: v.cuda() for k, v in input_data.items()} attention_mask = input_data["attention_mask"] last_hidden_state = model(**input_data)[0] last_hidden = last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0) query_vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] query_vectors = normalize(vector_linear(query_vectors).cpu().numpy()) # Embed the documents with torch.no_grad(): input_data = tokenizer(docs, padding="longest", truncation=True, max_length=512, return_tensors="pt") input_data = {k: v.cuda() for k, v in input_data.items()} attention_mask = input_data["attention_mask"] last_hidden_state = model(**input_data)[0] last_hidden = last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0) docs_vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] docs_vectors = normalize(vector_linear(docs_vectors).cpu().numpy()) print(query_vectors.shape, docs_vectors.shape) # (2, 1024) (2, 1024) similarities = query_vectors @ docs_vectors.T print(similarities) # [[0.8397531 0.29900077] # [0.32818374 0.80954516]] ``` # FAQ Q: The details of training? A: The training method and datasets will be released in the future. (specific time unknown, may be provided in a paper) Q: How to choose a suitable prompt for my own task? A: In most cases, please use the s2p and s2s prompts. These two prompts account for the vast majority of the training data. Q: How to reproduce MTEB results? A: Please use evaluation scripts in `Alibaba-NLP/gte-Qwen2-1.5B-instruct` or `intfloat/e5-mistral-7b-instruct` Q: Why each dimension has a linear weight? A: MRL has multiple training methods, we choose this method which has the best performance. Q: What is the sequence length of models? A: 512 is recommended, in our experiments, almost all models perform poorly on specialized long text retrieval datasets. Besides, the model is trained on datasets of 512 length. This may be an optimization term. If you have any questions, please start a discussion on community.
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
moussaKam/barthez-orangesum-title
moussaKam
summarization
[ "transformers", "pytorch", "safetensors", "mbart", "text2text-generation", "summarization", "fr", "arxiv:2010.12321", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,646
1,734
171
3
--- language: - fr license: apache-2.0 tags: - summarization widget: - text: Citant les préoccupations de ses clients dénonçant des cas de censure après la suppression du compte de Trump, un fournisseur d'accès Internet de l'État de l'Idaho a décidé de bloquer Facebook et Twitter. La mesure ne concernera cependant que les clients mécontents de la politique de ces réseaux sociaux. --- ### Barthez model finetuned on orangeSum (title generation) finetuning: examples/seq2seq/ (as of Nov 06, 2020) Metrics: ROUGE-2 > 23 paper: https://arxiv.org/abs/2010.12321 \ github: https://github.com/moussaKam/BARThez ``` @article{eddine2020barthez, title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model}, author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis}, journal={arXiv preprint arXiv:2010.12321}, year={2020} } ```
[ "SUMMARIZATION" ]
[ "CAS" ]
Non_BioNLP
radames/e5-large
radames
feature-extraction
[ "generic", "pytorch", "bert", "mteb", "feature-extraction", "en", "arxiv:2212.03533", "arxiv:2104.08663", "arxiv:2210.07316", "model-index", "region:us" ]
1,681
1,681
11
1
--- language: - en library_name: generic tags: - mteb - feature-extraction widget: - text: 'query: how much protein should a female eat' duplicated_from: intfloat/e5-large model-index: - name: e5-large results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 77.68656716417911 - type: ap value: 41.336896075573584 - type: f1 value: 71.788561468075 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 90.04965 - type: ap value: 86.24637009569418 - type: f1 value: 90.03896671762645 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 43.016000000000005 - type: f1 value: 42.1942431880186 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 25.107000000000003 - type: map_at_10 value: 40.464 - type: map_at_100 value: 41.577999999999996 - type: map_at_1000 value: 41.588 - type: map_at_3 value: 35.301 - type: map_at_5 value: 38.263000000000005 - type: mrr_at_1 value: 25.605 - type: mrr_at_10 value: 40.64 - type: mrr_at_100 value: 41.760000000000005 - type: mrr_at_1000 value: 41.77 - type: mrr_at_3 value: 35.443000000000005 - type: mrr_at_5 value: 38.448 - type: ndcg_at_1 value: 25.107000000000003 - type: ndcg_at_10 value: 49.352000000000004 - type: ndcg_at_100 value: 53.98500000000001 - type: ndcg_at_1000 value: 54.208 - type: ndcg_at_3 value: 38.671 - type: ndcg_at_5 value: 43.991 - type: precision_at_1 value: 25.107000000000003 - type: precision_at_10 value: 7.795000000000001 - type: precision_at_100 value: 0.979 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 16.145 - type: precision_at_5 value: 12.262 - type: recall_at_1 value: 25.107000000000003 - type: recall_at_10 value: 77.952 - type: recall_at_100 value: 97.866 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 48.435 - type: recall_at_5 value: 61.309000000000005 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 46.19278045044154 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 41.37976387757665 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 60.07433334608074 - type: mrr value: 73.44347711383723 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 86.4298072183543 - type: cos_sim_spearman value: 84.73144873582848 - type: euclidean_pearson value: 85.15885058870728 - type: euclidean_spearman value: 85.42062106559356 - type: manhattan_pearson value: 84.89409921792054 - type: manhattan_spearman value: 85.31941394024344 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 84.14285714285714 - type: f1 value: 84.11674412565644 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 37.600076342340785 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 35.08861812135148 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 32.684000000000005 - type: map_at_10 value: 41.675000000000004 - type: map_at_100 value: 42.963 - type: map_at_1000 value: 43.078 - type: map_at_3 value: 38.708999999999996 - type: map_at_5 value: 40.316 - type: mrr_at_1 value: 39.485 - type: mrr_at_10 value: 47.152 - type: mrr_at_100 value: 47.96 - type: mrr_at_1000 value: 48.010000000000005 - type: mrr_at_3 value: 44.754 - type: mrr_at_5 value: 46.285 - type: ndcg_at_1 value: 39.485 - type: ndcg_at_10 value: 46.849000000000004 - type: ndcg_at_100 value: 52.059 - type: ndcg_at_1000 value: 54.358 - type: ndcg_at_3 value: 42.705 - type: ndcg_at_5 value: 44.663000000000004 - type: precision_at_1 value: 39.485 - type: precision_at_10 value: 8.455 - type: precision_at_100 value: 1.3379999999999999 - type: precision_at_1000 value: 0.178 - type: precision_at_3 value: 19.695 - type: precision_at_5 value: 13.905999999999999 - type: recall_at_1 value: 32.684000000000005 - type: recall_at_10 value: 56.227000000000004 - type: recall_at_100 value: 78.499 - type: recall_at_1000 value: 94.021 - type: recall_at_3 value: 44.157999999999994 - type: recall_at_5 value: 49.694 - type: map_at_1 value: 31.875999999999998 - type: map_at_10 value: 41.603 - type: map_at_100 value: 42.825 - type: map_at_1000 value: 42.961 - type: map_at_3 value: 38.655 - type: map_at_5 value: 40.294999999999995 - type: mrr_at_1 value: 40.127 - type: mrr_at_10 value: 47.959 - type: mrr_at_100 value: 48.59 - type: mrr_at_1000 value: 48.634 - type: mrr_at_3 value: 45.786 - type: mrr_at_5 value: 46.964 - type: ndcg_at_1 value: 40.127 - type: ndcg_at_10 value: 47.176 - type: ndcg_at_100 value: 51.346000000000004 - type: ndcg_at_1000 value: 53.502 - type: ndcg_at_3 value: 43.139 - type: ndcg_at_5 value: 44.883 - type: precision_at_1 value: 40.127 - type: precision_at_10 value: 8.72 - type: precision_at_100 value: 1.387 - type: precision_at_1000 value: 0.188 - type: precision_at_3 value: 20.637 - type: precision_at_5 value: 14.446 - type: recall_at_1 value: 31.875999999999998 - type: recall_at_10 value: 56.54900000000001 - type: recall_at_100 value: 73.939 - type: recall_at_1000 value: 87.732 - type: recall_at_3 value: 44.326 - type: recall_at_5 value: 49.445 - type: map_at_1 value: 41.677 - type: map_at_10 value: 52.222 - type: map_at_100 value: 53.229000000000006 - type: map_at_1000 value: 53.288000000000004 - type: map_at_3 value: 49.201 - type: map_at_5 value: 51.00599999999999 - type: mrr_at_1 value: 47.524 - type: mrr_at_10 value: 55.745999999999995 - type: mrr_at_100 value: 56.433 - type: mrr_at_1000 value: 56.464999999999996 - type: mrr_at_3 value: 53.37499999999999 - type: mrr_at_5 value: 54.858 - type: ndcg_at_1 value: 47.524 - type: ndcg_at_10 value: 57.406 - type: ndcg_at_100 value: 61.403 - type: ndcg_at_1000 value: 62.7 - type: ndcg_at_3 value: 52.298 - type: ndcg_at_5 value: 55.02 - type: precision_at_1 value: 47.524 - type: precision_at_10 value: 8.865 - type: precision_at_100 value: 1.179 - type: precision_at_1000 value: 0.134 - type: precision_at_3 value: 22.612 - type: precision_at_5 value: 15.461 - type: recall_at_1 value: 41.677 - type: recall_at_10 value: 69.346 - type: recall_at_100 value: 86.344 - type: recall_at_1000 value: 95.703 - type: recall_at_3 value: 55.789 - type: recall_at_5 value: 62.488 - type: map_at_1 value: 25.991999999999997 - type: map_at_10 value: 32.804 - type: map_at_100 value: 33.812999999999995 - type: map_at_1000 value: 33.897 - type: map_at_3 value: 30.567 - type: map_at_5 value: 31.599 - type: mrr_at_1 value: 27.797 - type: mrr_at_10 value: 34.768 - type: mrr_at_100 value: 35.702 - type: mrr_at_1000 value: 35.766 - type: mrr_at_3 value: 32.637 - type: mrr_at_5 value: 33.614 - type: ndcg_at_1 value: 27.797 - type: ndcg_at_10 value: 36.966 - type: ndcg_at_100 value: 41.972 - type: ndcg_at_1000 value: 44.139 - type: ndcg_at_3 value: 32.547 - type: ndcg_at_5 value: 34.258 - type: precision_at_1 value: 27.797 - type: precision_at_10 value: 5.514 - type: precision_at_100 value: 0.8340000000000001 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 13.333 - type: precision_at_5 value: 9.04 - type: recall_at_1 value: 25.991999999999997 - type: recall_at_10 value: 47.941 - type: recall_at_100 value: 71.039 - type: recall_at_1000 value: 87.32799999999999 - type: recall_at_3 value: 36.01 - type: recall_at_5 value: 40.056000000000004 - type: map_at_1 value: 17.533 - type: map_at_10 value: 24.336 - type: map_at_100 value: 25.445 - type: map_at_1000 value: 25.561 - type: map_at_3 value: 22.116 - type: map_at_5 value: 23.347 - type: mrr_at_1 value: 21.642 - type: mrr_at_10 value: 28.910999999999998 - type: mrr_at_100 value: 29.836000000000002 - type: mrr_at_1000 value: 29.907 - type: mrr_at_3 value: 26.638 - type: mrr_at_5 value: 27.857 - type: ndcg_at_1 value: 21.642 - type: ndcg_at_10 value: 28.949 - type: ndcg_at_100 value: 34.211000000000006 - type: ndcg_at_1000 value: 37.031 - type: ndcg_at_3 value: 24.788 - type: ndcg_at_5 value: 26.685 - type: precision_at_1 value: 21.642 - type: precision_at_10 value: 5.137 - type: precision_at_100 value: 0.893 - type: precision_at_1000 value: 0.127 - type: precision_at_3 value: 11.733 - type: precision_at_5 value: 8.383000000000001 - type: recall_at_1 value: 17.533 - type: recall_at_10 value: 38.839 - type: recall_at_100 value: 61.458999999999996 - type: recall_at_1000 value: 81.58 - type: recall_at_3 value: 27.328999999999997 - type: recall_at_5 value: 32.168 - type: map_at_1 value: 28.126 - type: map_at_10 value: 37.872 - type: map_at_100 value: 39.229 - type: map_at_1000 value: 39.353 - type: map_at_3 value: 34.93 - type: map_at_5 value: 36.59 - type: mrr_at_1 value: 34.071 - type: mrr_at_10 value: 43.056 - type: mrr_at_100 value: 43.944 - type: mrr_at_1000 value: 43.999 - type: mrr_at_3 value: 40.536 - type: mrr_at_5 value: 42.065999999999995 - type: ndcg_at_1 value: 34.071 - type: ndcg_at_10 value: 43.503 - type: ndcg_at_100 value: 49.120000000000005 - type: ndcg_at_1000 value: 51.410999999999994 - type: ndcg_at_3 value: 38.767 - type: ndcg_at_5 value: 41.075 - type: precision_at_1 value: 34.071 - type: precision_at_10 value: 7.843999999999999 - type: precision_at_100 value: 1.2489999999999999 - type: precision_at_1000 value: 0.163 - type: precision_at_3 value: 18.223 - type: precision_at_5 value: 13.050999999999998 - type: recall_at_1 value: 28.126 - type: recall_at_10 value: 54.952 - type: recall_at_100 value: 78.375 - type: recall_at_1000 value: 93.29899999999999 - type: recall_at_3 value: 41.714 - type: recall_at_5 value: 47.635 - type: map_at_1 value: 25.957 - type: map_at_10 value: 34.749 - type: map_at_100 value: 35.929 - type: map_at_1000 value: 36.043 - type: map_at_3 value: 31.947 - type: map_at_5 value: 33.575 - type: mrr_at_1 value: 32.078 - type: mrr_at_10 value: 39.844 - type: mrr_at_100 value: 40.71 - type: mrr_at_1000 value: 40.77 - type: mrr_at_3 value: 37.386 - type: mrr_at_5 value: 38.83 - type: ndcg_at_1 value: 32.078 - type: ndcg_at_10 value: 39.97 - type: ndcg_at_100 value: 45.254 - type: ndcg_at_1000 value: 47.818 - type: ndcg_at_3 value: 35.453 - type: ndcg_at_5 value: 37.631 - type: precision_at_1 value: 32.078 - type: precision_at_10 value: 7.158 - type: precision_at_100 value: 1.126 - type: precision_at_1000 value: 0.153 - type: precision_at_3 value: 16.743 - type: precision_at_5 value: 11.872 - type: recall_at_1 value: 25.957 - type: recall_at_10 value: 50.583 - type: recall_at_100 value: 73.593 - type: recall_at_1000 value: 91.23599999999999 - type: recall_at_3 value: 37.651 - type: recall_at_5 value: 43.626 - type: map_at_1 value: 27.1505 - type: map_at_10 value: 34.844833333333334 - type: map_at_100 value: 35.95216666666667 - type: map_at_1000 value: 36.06675 - type: map_at_3 value: 32.41975 - type: map_at_5 value: 33.74233333333333 - type: mrr_at_1 value: 31.923666666666662 - type: mrr_at_10 value: 38.87983333333334 - type: mrr_at_100 value: 39.706250000000004 - type: mrr_at_1000 value: 39.76708333333333 - type: mrr_at_3 value: 36.72008333333333 - type: mrr_at_5 value: 37.96933333333334 - type: ndcg_at_1 value: 31.923666666666662 - type: ndcg_at_10 value: 39.44258333333334 - type: ndcg_at_100 value: 44.31475 - type: ndcg_at_1000 value: 46.75 - type: ndcg_at_3 value: 35.36299999999999 - type: ndcg_at_5 value: 37.242333333333335 - type: precision_at_1 value: 31.923666666666662 - type: precision_at_10 value: 6.643333333333333 - type: precision_at_100 value: 1.0612499999999998 - type: precision_at_1000 value: 0.14575 - type: precision_at_3 value: 15.875250000000001 - type: precision_at_5 value: 11.088916666666664 - type: recall_at_1 value: 27.1505 - type: recall_at_10 value: 49.06349999999999 - type: recall_at_100 value: 70.60841666666666 - type: recall_at_1000 value: 87.72049999999999 - type: recall_at_3 value: 37.60575000000001 - type: recall_at_5 value: 42.511166666666675 - type: map_at_1 value: 25.101000000000003 - type: map_at_10 value: 30.147000000000002 - type: map_at_100 value: 30.98 - type: map_at_1000 value: 31.080000000000002 - type: map_at_3 value: 28.571 - type: map_at_5 value: 29.319 - type: mrr_at_1 value: 27.761000000000003 - type: mrr_at_10 value: 32.716 - type: mrr_at_100 value: 33.504 - type: mrr_at_1000 value: 33.574 - type: mrr_at_3 value: 31.135 - type: mrr_at_5 value: 32.032 - type: ndcg_at_1 value: 27.761000000000003 - type: ndcg_at_10 value: 33.358 - type: ndcg_at_100 value: 37.569 - type: ndcg_at_1000 value: 40.189 - type: ndcg_at_3 value: 30.291 - type: ndcg_at_5 value: 31.558000000000003 - type: precision_at_1 value: 27.761000000000003 - type: precision_at_10 value: 4.939 - type: precision_at_100 value: 0.759 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 12.577 - type: precision_at_5 value: 8.497 - type: recall_at_1 value: 25.101000000000003 - type: recall_at_10 value: 40.739 - type: recall_at_100 value: 60.089999999999996 - type: recall_at_1000 value: 79.768 - type: recall_at_3 value: 32.16 - type: recall_at_5 value: 35.131 - type: map_at_1 value: 20.112 - type: map_at_10 value: 26.119999999999997 - type: map_at_100 value: 27.031 - type: map_at_1000 value: 27.150000000000002 - type: map_at_3 value: 24.230999999999998 - type: map_at_5 value: 25.15 - type: mrr_at_1 value: 24.535 - type: mrr_at_10 value: 30.198000000000004 - type: mrr_at_100 value: 30.975 - type: mrr_at_1000 value: 31.051000000000002 - type: mrr_at_3 value: 28.338 - type: mrr_at_5 value: 29.269000000000002 - type: ndcg_at_1 value: 24.535 - type: ndcg_at_10 value: 30.147000000000002 - type: ndcg_at_100 value: 34.544000000000004 - type: ndcg_at_1000 value: 37.512 - type: ndcg_at_3 value: 26.726 - type: ndcg_at_5 value: 28.046 - type: precision_at_1 value: 24.535 - type: precision_at_10 value: 5.179 - type: precision_at_100 value: 0.859 - type: precision_at_1000 value: 0.128 - type: precision_at_3 value: 12.159 - type: precision_at_5 value: 8.424 - type: recall_at_1 value: 20.112 - type: recall_at_10 value: 38.312000000000005 - type: recall_at_100 value: 58.406000000000006 - type: recall_at_1000 value: 79.863 - type: recall_at_3 value: 28.358 - type: recall_at_5 value: 31.973000000000003 - type: map_at_1 value: 27.111 - type: map_at_10 value: 34.096 - type: map_at_100 value: 35.181000000000004 - type: map_at_1000 value: 35.276 - type: map_at_3 value: 31.745 - type: map_at_5 value: 33.045 - type: mrr_at_1 value: 31.343 - type: mrr_at_10 value: 37.994 - type: mrr_at_100 value: 38.873000000000005 - type: mrr_at_1000 value: 38.934999999999995 - type: mrr_at_3 value: 35.743 - type: mrr_at_5 value: 37.077 - type: ndcg_at_1 value: 31.343 - type: ndcg_at_10 value: 38.572 - type: ndcg_at_100 value: 43.854 - type: ndcg_at_1000 value: 46.190999999999995 - type: ndcg_at_3 value: 34.247 - type: ndcg_at_5 value: 36.28 - type: precision_at_1 value: 31.343 - type: precision_at_10 value: 6.166 - type: precision_at_100 value: 1 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 15.081 - type: precision_at_5 value: 10.428999999999998 - type: recall_at_1 value: 27.111 - type: recall_at_10 value: 48.422 - type: recall_at_100 value: 71.846 - type: recall_at_1000 value: 88.57000000000001 - type: recall_at_3 value: 36.435 - type: recall_at_5 value: 41.765 - type: map_at_1 value: 26.264 - type: map_at_10 value: 33.522 - type: map_at_100 value: 34.963 - type: map_at_1000 value: 35.175 - type: map_at_3 value: 31.366 - type: map_at_5 value: 32.621 - type: mrr_at_1 value: 31.028 - type: mrr_at_10 value: 37.230000000000004 - type: mrr_at_100 value: 38.149 - type: mrr_at_1000 value: 38.218 - type: mrr_at_3 value: 35.046 - type: mrr_at_5 value: 36.617 - type: ndcg_at_1 value: 31.028 - type: ndcg_at_10 value: 37.964999999999996 - type: ndcg_at_100 value: 43.342000000000006 - type: ndcg_at_1000 value: 46.471000000000004 - type: ndcg_at_3 value: 34.67 - type: ndcg_at_5 value: 36.458 - type: precision_at_1 value: 31.028 - type: precision_at_10 value: 6.937 - type: precision_at_100 value: 1.346 - type: precision_at_1000 value: 0.22799999999999998 - type: precision_at_3 value: 15.942 - type: precision_at_5 value: 11.462 - type: recall_at_1 value: 26.264 - type: recall_at_10 value: 45.571 - type: recall_at_100 value: 70.246 - type: recall_at_1000 value: 90.971 - type: recall_at_3 value: 36.276 - type: recall_at_5 value: 41.162 - type: map_at_1 value: 23.372999999999998 - type: map_at_10 value: 28.992 - type: map_at_100 value: 29.837999999999997 - type: map_at_1000 value: 29.939 - type: map_at_3 value: 26.999000000000002 - type: map_at_5 value: 28.044999999999998 - type: mrr_at_1 value: 25.692999999999998 - type: mrr_at_10 value: 30.984 - type: mrr_at_100 value: 31.799 - type: mrr_at_1000 value: 31.875999999999998 - type: mrr_at_3 value: 29.267 - type: mrr_at_5 value: 30.163 - type: ndcg_at_1 value: 25.692999999999998 - type: ndcg_at_10 value: 32.45 - type: ndcg_at_100 value: 37.103 - type: ndcg_at_1000 value: 39.678000000000004 - type: ndcg_at_3 value: 28.725 - type: ndcg_at_5 value: 30.351 - type: precision_at_1 value: 25.692999999999998 - type: precision_at_10 value: 4.806 - type: precision_at_100 value: 0.765 - type: precision_at_1000 value: 0.108 - type: precision_at_3 value: 11.768 - type: precision_at_5 value: 8.096 - type: recall_at_1 value: 23.372999999999998 - type: recall_at_10 value: 41.281 - type: recall_at_100 value: 63.465 - type: recall_at_1000 value: 82.575 - type: recall_at_3 value: 31.063000000000002 - type: recall_at_5 value: 34.991 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 8.821 - type: map_at_10 value: 15.383 - type: map_at_100 value: 17.244999999999997 - type: map_at_1000 value: 17.445 - type: map_at_3 value: 12.64 - type: map_at_5 value: 13.941999999999998 - type: mrr_at_1 value: 19.544 - type: mrr_at_10 value: 29.738999999999997 - type: mrr_at_100 value: 30.923000000000002 - type: mrr_at_1000 value: 30.969 - type: mrr_at_3 value: 26.384 - type: mrr_at_5 value: 28.199 - type: ndcg_at_1 value: 19.544 - type: ndcg_at_10 value: 22.398 - type: ndcg_at_100 value: 30.253999999999998 - type: ndcg_at_1000 value: 33.876 - type: ndcg_at_3 value: 17.473 - type: ndcg_at_5 value: 19.154 - type: precision_at_1 value: 19.544 - type: precision_at_10 value: 7.217999999999999 - type: precision_at_100 value: 1.564 - type: precision_at_1000 value: 0.22300000000000003 - type: precision_at_3 value: 13.225000000000001 - type: precision_at_5 value: 10.319 - type: recall_at_1 value: 8.821 - type: recall_at_10 value: 28.110000000000003 - type: recall_at_100 value: 55.64 - type: recall_at_1000 value: 75.964 - type: recall_at_3 value: 16.195 - type: recall_at_5 value: 20.678 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 9.344 - type: map_at_10 value: 20.301 - type: map_at_100 value: 28.709 - type: map_at_1000 value: 30.470999999999997 - type: map_at_3 value: 14.584 - type: map_at_5 value: 16.930999999999997 - type: mrr_at_1 value: 67.25 - type: mrr_at_10 value: 75.393 - type: mrr_at_100 value: 75.742 - type: mrr_at_1000 value: 75.75 - type: mrr_at_3 value: 73.958 - type: mrr_at_5 value: 74.883 - type: ndcg_at_1 value: 56.00000000000001 - type: ndcg_at_10 value: 42.394 - type: ndcg_at_100 value: 47.091 - type: ndcg_at_1000 value: 54.215 - type: ndcg_at_3 value: 46.995 - type: ndcg_at_5 value: 44.214999999999996 - type: precision_at_1 value: 67.25 - type: precision_at_10 value: 33.525 - type: precision_at_100 value: 10.67 - type: precision_at_1000 value: 2.221 - type: precision_at_3 value: 49.417 - type: precision_at_5 value: 42.15 - type: recall_at_1 value: 9.344 - type: recall_at_10 value: 25.209 - type: recall_at_100 value: 52.329 - type: recall_at_1000 value: 74.2 - type: recall_at_3 value: 15.699 - type: recall_at_5 value: 19.24 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 48.05 - type: f1 value: 43.06718139212933 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 46.452 - type: map_at_10 value: 58.825 - type: map_at_100 value: 59.372 - type: map_at_1000 value: 59.399 - type: map_at_3 value: 56.264 - type: map_at_5 value: 57.879999999999995 - type: mrr_at_1 value: 49.82 - type: mrr_at_10 value: 62.178999999999995 - type: mrr_at_100 value: 62.641999999999996 - type: mrr_at_1000 value: 62.658 - type: mrr_at_3 value: 59.706 - type: mrr_at_5 value: 61.283 - type: ndcg_at_1 value: 49.82 - type: ndcg_at_10 value: 65.031 - type: ndcg_at_100 value: 67.413 - type: ndcg_at_1000 value: 68.014 - type: ndcg_at_3 value: 60.084 - type: ndcg_at_5 value: 62.858000000000004 - type: precision_at_1 value: 49.82 - type: precision_at_10 value: 8.876000000000001 - type: precision_at_100 value: 1.018 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 24.477 - type: precision_at_5 value: 16.208 - type: recall_at_1 value: 46.452 - type: recall_at_10 value: 80.808 - type: recall_at_100 value: 91.215 - type: recall_at_1000 value: 95.52000000000001 - type: recall_at_3 value: 67.62899999999999 - type: recall_at_5 value: 74.32900000000001 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 18.351 - type: map_at_10 value: 30.796 - type: map_at_100 value: 32.621 - type: map_at_1000 value: 32.799 - type: map_at_3 value: 26.491 - type: map_at_5 value: 28.933999999999997 - type: mrr_at_1 value: 36.265 - type: mrr_at_10 value: 45.556999999999995 - type: mrr_at_100 value: 46.323 - type: mrr_at_1000 value: 46.359 - type: mrr_at_3 value: 42.695 - type: mrr_at_5 value: 44.324000000000005 - type: ndcg_at_1 value: 36.265 - type: ndcg_at_10 value: 38.558 - type: ndcg_at_100 value: 45.18 - type: ndcg_at_1000 value: 48.292 - type: ndcg_at_3 value: 34.204 - type: ndcg_at_5 value: 35.735 - type: precision_at_1 value: 36.265 - type: precision_at_10 value: 10.879999999999999 - type: precision_at_100 value: 1.77 - type: precision_at_1000 value: 0.234 - type: precision_at_3 value: 23.044999999999998 - type: precision_at_5 value: 17.253 - type: recall_at_1 value: 18.351 - type: recall_at_10 value: 46.116 - type: recall_at_100 value: 70.786 - type: recall_at_1000 value: 89.46300000000001 - type: recall_at_3 value: 31.404 - type: recall_at_5 value: 37.678 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 36.847 - type: map_at_10 value: 54.269999999999996 - type: map_at_100 value: 55.152 - type: map_at_1000 value: 55.223 - type: map_at_3 value: 51.166 - type: map_at_5 value: 53.055 - type: mrr_at_1 value: 73.693 - type: mrr_at_10 value: 79.975 - type: mrr_at_100 value: 80.202 - type: mrr_at_1000 value: 80.214 - type: mrr_at_3 value: 78.938 - type: mrr_at_5 value: 79.595 - type: ndcg_at_1 value: 73.693 - type: ndcg_at_10 value: 63.334999999999994 - type: ndcg_at_100 value: 66.452 - type: ndcg_at_1000 value: 67.869 - type: ndcg_at_3 value: 58.829 - type: ndcg_at_5 value: 61.266 - type: precision_at_1 value: 73.693 - type: precision_at_10 value: 13.122 - type: precision_at_100 value: 1.5559999999999998 - type: precision_at_1000 value: 0.174 - type: precision_at_3 value: 37.083 - type: precision_at_5 value: 24.169999999999998 - type: recall_at_1 value: 36.847 - type: recall_at_10 value: 65.61099999999999 - type: recall_at_100 value: 77.792 - type: recall_at_1000 value: 87.17099999999999 - type: recall_at_3 value: 55.625 - type: recall_at_5 value: 60.425 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 82.1096 - type: ap value: 76.67089212843918 - type: f1 value: 82.03535056754939 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 24.465 - type: map_at_10 value: 37.072 - type: map_at_100 value: 38.188 - type: map_at_1000 value: 38.232 - type: map_at_3 value: 33.134 - type: map_at_5 value: 35.453 - type: mrr_at_1 value: 25.142999999999997 - type: mrr_at_10 value: 37.669999999999995 - type: mrr_at_100 value: 38.725 - type: mrr_at_1000 value: 38.765 - type: mrr_at_3 value: 33.82 - type: mrr_at_5 value: 36.111 - type: ndcg_at_1 value: 25.142999999999997 - type: ndcg_at_10 value: 44.054 - type: ndcg_at_100 value: 49.364000000000004 - type: ndcg_at_1000 value: 50.456 - type: ndcg_at_3 value: 36.095 - type: ndcg_at_5 value: 40.23 - type: precision_at_1 value: 25.142999999999997 - type: precision_at_10 value: 6.845 - type: precision_at_100 value: 0.95 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 15.204999999999998 - type: precision_at_5 value: 11.221 - type: recall_at_1 value: 24.465 - type: recall_at_10 value: 65.495 - type: recall_at_100 value: 89.888 - type: recall_at_1000 value: 98.165 - type: recall_at_3 value: 43.964 - type: recall_at_5 value: 53.891 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.86228910168718 - type: f1 value: 93.69177113259104 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 76.3999088007296 - type: f1 value: 58.96668664333438 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.21788836583727 - type: f1 value: 71.4545936552952 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.39071956960323 - type: f1 value: 77.12398952847603 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 32.255379528166955 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 29.66423362872814 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.782211620375964 - type: mrr value: 31.773479703044956 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.863 - type: map_at_10 value: 13.831 - type: map_at_100 value: 17.534 - type: map_at_1000 value: 19.012 - type: map_at_3 value: 10.143 - type: map_at_5 value: 12.034 - type: mrr_at_1 value: 46.749 - type: mrr_at_10 value: 55.376999999999995 - type: mrr_at_100 value: 56.009 - type: mrr_at_1000 value: 56.042 - type: mrr_at_3 value: 53.30200000000001 - type: mrr_at_5 value: 54.85 - type: ndcg_at_1 value: 44.582 - type: ndcg_at_10 value: 36.07 - type: ndcg_at_100 value: 33.39 - type: ndcg_at_1000 value: 41.884 - type: ndcg_at_3 value: 41.441 - type: ndcg_at_5 value: 39.861000000000004 - type: precision_at_1 value: 46.129999999999995 - type: precision_at_10 value: 26.594 - type: precision_at_100 value: 8.365 - type: precision_at_1000 value: 2.1260000000000003 - type: precision_at_3 value: 39.009 - type: precision_at_5 value: 34.861 - type: recall_at_1 value: 5.863 - type: recall_at_10 value: 17.961 - type: recall_at_100 value: 34.026 - type: recall_at_1000 value: 64.46499999999999 - type: recall_at_3 value: 11.242 - type: recall_at_5 value: 14.493 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 38.601 - type: map_at_10 value: 55.293000000000006 - type: map_at_100 value: 56.092 - type: map_at_1000 value: 56.111999999999995 - type: map_at_3 value: 51.269 - type: map_at_5 value: 53.787 - type: mrr_at_1 value: 43.221 - type: mrr_at_10 value: 57.882999999999996 - type: mrr_at_100 value: 58.408 - type: mrr_at_1000 value: 58.421 - type: mrr_at_3 value: 54.765 - type: mrr_at_5 value: 56.809 - type: ndcg_at_1 value: 43.221 - type: ndcg_at_10 value: 62.858999999999995 - type: ndcg_at_100 value: 65.987 - type: ndcg_at_1000 value: 66.404 - type: ndcg_at_3 value: 55.605000000000004 - type: ndcg_at_5 value: 59.723000000000006 - type: precision_at_1 value: 43.221 - type: precision_at_10 value: 9.907 - type: precision_at_100 value: 1.169 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 25.019000000000002 - type: precision_at_5 value: 17.474 - type: recall_at_1 value: 38.601 - type: recall_at_10 value: 82.966 - type: recall_at_100 value: 96.154 - type: recall_at_1000 value: 99.223 - type: recall_at_3 value: 64.603 - type: recall_at_5 value: 73.97200000000001 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.77 - type: map_at_10 value: 84.429 - type: map_at_100 value: 85.04599999999999 - type: map_at_1000 value: 85.065 - type: map_at_3 value: 81.461 - type: map_at_5 value: 83.316 - type: mrr_at_1 value: 81.51 - type: mrr_at_10 value: 87.52799999999999 - type: mrr_at_100 value: 87.631 - type: mrr_at_1000 value: 87.632 - type: mrr_at_3 value: 86.533 - type: mrr_at_5 value: 87.214 - type: ndcg_at_1 value: 81.47999999999999 - type: ndcg_at_10 value: 88.181 - type: ndcg_at_100 value: 89.39200000000001 - type: ndcg_at_1000 value: 89.52 - type: ndcg_at_3 value: 85.29299999999999 - type: ndcg_at_5 value: 86.88 - type: precision_at_1 value: 81.47999999999999 - type: precision_at_10 value: 13.367 - type: precision_at_100 value: 1.5230000000000001 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.227 - type: precision_at_5 value: 24.494 - type: recall_at_1 value: 70.77 - type: recall_at_10 value: 95.199 - type: recall_at_100 value: 99.37700000000001 - type: recall_at_1000 value: 99.973 - type: recall_at_3 value: 86.895 - type: recall_at_5 value: 91.396 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 50.686353396858344 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 61.3664675312921 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.7379999999999995 - type: map_at_10 value: 12.01 - type: map_at_100 value: 14.02 - type: map_at_1000 value: 14.310999999999998 - type: map_at_3 value: 8.459 - type: map_at_5 value: 10.281 - type: mrr_at_1 value: 23.3 - type: mrr_at_10 value: 34.108 - type: mrr_at_100 value: 35.217 - type: mrr_at_1000 value: 35.272 - type: mrr_at_3 value: 30.833 - type: mrr_at_5 value: 32.768 - type: ndcg_at_1 value: 23.3 - type: ndcg_at_10 value: 20.116999999999997 - type: ndcg_at_100 value: 27.961000000000002 - type: ndcg_at_1000 value: 33.149 - type: ndcg_at_3 value: 18.902 - type: ndcg_at_5 value: 16.742 - type: precision_at_1 value: 23.3 - type: precision_at_10 value: 10.47 - type: precision_at_100 value: 2.177 - type: precision_at_1000 value: 0.34299999999999997 - type: precision_at_3 value: 17.567 - type: precision_at_5 value: 14.78 - type: recall_at_1 value: 4.7379999999999995 - type: recall_at_10 value: 21.221999999999998 - type: recall_at_100 value: 44.242 - type: recall_at_1000 value: 69.652 - type: recall_at_3 value: 10.688 - type: recall_at_5 value: 14.982999999999999 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 84.84572946827069 - type: cos_sim_spearman value: 80.48508130408966 - type: euclidean_pearson value: 82.0481530027767 - type: euclidean_spearman value: 80.45902876782752 - type: manhattan_pearson value: 82.03728222483326 - type: manhattan_spearman value: 80.45684282911755 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 84.33476464677516 - type: cos_sim_spearman value: 75.93057758003266 - type: euclidean_pearson value: 80.89685744015691 - type: euclidean_spearman value: 76.29929953441706 - type: manhattan_pearson value: 80.91391345459995 - type: manhattan_spearman value: 76.31985463110914 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 84.63686106359005 - type: cos_sim_spearman value: 85.22240034668202 - type: euclidean_pearson value: 84.6074814189106 - type: euclidean_spearman value: 85.17169644755828 - type: manhattan_pearson value: 84.48329306239368 - type: manhattan_spearman value: 85.0086508544768 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 82.95455774064745 - type: cos_sim_spearman value: 80.54074646118492 - type: euclidean_pearson value: 81.79598955554704 - type: euclidean_spearman value: 80.55837617606814 - type: manhattan_pearson value: 81.78213797905386 - type: manhattan_spearman value: 80.5666746878273 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.92813309124739 - type: cos_sim_spearman value: 88.81459873052108 - type: euclidean_pearson value: 88.21193118930564 - type: euclidean_spearman value: 88.87072745043731 - type: manhattan_pearson value: 88.22576929706727 - type: manhattan_spearman value: 88.8867671095791 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 83.6881529671839 - type: cos_sim_spearman value: 85.2807092969554 - type: euclidean_pearson value: 84.62334178652704 - type: euclidean_spearman value: 85.2116373296784 - type: manhattan_pearson value: 84.54948211541777 - type: manhattan_spearman value: 85.10737722637882 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.55963694458408 - type: cos_sim_spearman value: 89.36731628848683 - type: euclidean_pearson value: 89.64975952985465 - type: euclidean_spearman value: 89.29689484033007 - type: manhattan_pearson value: 89.61234491713135 - type: manhattan_spearman value: 89.20302520255782 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.411800961903886 - type: cos_sim_spearman value: 62.99105515749963 - type: euclidean_pearson value: 65.29826669549443 - type: euclidean_spearman value: 63.29880964105775 - type: manhattan_pearson value: 65.00126190601183 - type: manhattan_spearman value: 63.32011025899179 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.83498531837608 - type: cos_sim_spearman value: 87.21366640615442 - type: euclidean_pearson value: 86.74764288798261 - type: euclidean_spearman value: 87.06060470780834 - type: manhattan_pearson value: 86.65971223951476 - type: manhattan_spearman value: 86.99814399831457 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 83.94448463485881 - type: mrr value: 95.36291867174221 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 59.928000000000004 - type: map_at_10 value: 68.577 - type: map_at_100 value: 69.35900000000001 - type: map_at_1000 value: 69.37299999999999 - type: map_at_3 value: 66.217 - type: map_at_5 value: 67.581 - type: mrr_at_1 value: 63 - type: mrr_at_10 value: 69.994 - type: mrr_at_100 value: 70.553 - type: mrr_at_1000 value: 70.56700000000001 - type: mrr_at_3 value: 68.167 - type: mrr_at_5 value: 69.11699999999999 - type: ndcg_at_1 value: 63 - type: ndcg_at_10 value: 72.58 - type: ndcg_at_100 value: 75.529 - type: ndcg_at_1000 value: 76.009 - type: ndcg_at_3 value: 68.523 - type: ndcg_at_5 value: 70.301 - type: precision_at_1 value: 63 - type: precision_at_10 value: 9.333 - type: precision_at_100 value: 1.09 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.444000000000003 - type: precision_at_5 value: 17.067 - type: recall_at_1 value: 59.928000000000004 - type: recall_at_10 value: 83.544 - type: recall_at_100 value: 96 - type: recall_at_1000 value: 100 - type: recall_at_3 value: 72.072 - type: recall_at_5 value: 76.683 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.82178217821782 - type: cos_sim_ap value: 95.41507679819003 - type: cos_sim_f1 value: 90.9456740442656 - type: cos_sim_precision value: 91.49797570850203 - type: cos_sim_recall value: 90.4 - type: dot_accuracy value: 99.77227722772277 - type: dot_ap value: 92.50123869445967 - type: dot_f1 value: 88.18414322250638 - type: dot_precision value: 90.26178010471205 - type: dot_recall value: 86.2 - type: euclidean_accuracy value: 99.81782178217821 - type: euclidean_ap value: 95.3935066749006 - type: euclidean_f1 value: 90.66128218071681 - type: euclidean_precision value: 91.53924566768603 - type: euclidean_recall value: 89.8 - type: manhattan_accuracy value: 99.81881188118813 - type: manhattan_ap value: 95.39767454613512 - type: manhattan_f1 value: 90.62019477191186 - type: manhattan_precision value: 92.95478443743428 - type: manhattan_recall value: 88.4 - type: max_accuracy value: 99.82178217821782 - type: max_ap value: 95.41507679819003 - type: max_f1 value: 90.9456740442656 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 64.96313921233748 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 33.602625720956745 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 51.32659230651731 - type: mrr value: 52.33861726508785 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.01587644214203 - type: cos_sim_spearman value: 30.974306908731013 - type: dot_pearson value: 29.83339853838187 - type: dot_spearman value: 30.07761671934048 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.22 - type: map_at_10 value: 1.9539999999999997 - type: map_at_100 value: 11.437 - type: map_at_1000 value: 27.861000000000004 - type: map_at_3 value: 0.6479999999999999 - type: map_at_5 value: 1.0410000000000001 - type: mrr_at_1 value: 84 - type: mrr_at_10 value: 90.333 - type: mrr_at_100 value: 90.333 - type: mrr_at_1000 value: 90.333 - type: mrr_at_3 value: 90.333 - type: mrr_at_5 value: 90.333 - type: ndcg_at_1 value: 80 - type: ndcg_at_10 value: 78.31700000000001 - type: ndcg_at_100 value: 59.396 - type: ndcg_at_1000 value: 52.733 - type: ndcg_at_3 value: 81.46900000000001 - type: ndcg_at_5 value: 80.74 - type: precision_at_1 value: 84 - type: precision_at_10 value: 84 - type: precision_at_100 value: 60.980000000000004 - type: precision_at_1000 value: 23.432 - type: precision_at_3 value: 87.333 - type: precision_at_5 value: 86.8 - type: recall_at_1 value: 0.22 - type: recall_at_10 value: 2.156 - type: recall_at_100 value: 14.557999999999998 - type: recall_at_1000 value: 49.553999999999995 - type: recall_at_3 value: 0.685 - type: recall_at_5 value: 1.121 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 3.373 - type: map_at_10 value: 11.701 - type: map_at_100 value: 17.144000000000002 - type: map_at_1000 value: 18.624 - type: map_at_3 value: 6.552 - type: map_at_5 value: 9.372 - type: mrr_at_1 value: 38.775999999999996 - type: mrr_at_10 value: 51.975 - type: mrr_at_100 value: 52.873999999999995 - type: mrr_at_1000 value: 52.873999999999995 - type: mrr_at_3 value: 47.619 - type: mrr_at_5 value: 50.578 - type: ndcg_at_1 value: 36.735 - type: ndcg_at_10 value: 27.212999999999997 - type: ndcg_at_100 value: 37.245 - type: ndcg_at_1000 value: 48.602000000000004 - type: ndcg_at_3 value: 30.916 - type: ndcg_at_5 value: 30.799 - type: precision_at_1 value: 38.775999999999996 - type: precision_at_10 value: 23.469 - type: precision_at_100 value: 7.327 - type: precision_at_1000 value: 1.486 - type: precision_at_3 value: 31.973000000000003 - type: precision_at_5 value: 32.245000000000005 - type: recall_at_1 value: 3.373 - type: recall_at_10 value: 17.404 - type: recall_at_100 value: 46.105000000000004 - type: recall_at_1000 value: 80.35 - type: recall_at_3 value: 7.4399999999999995 - type: recall_at_5 value: 12.183 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.5592 - type: ap value: 14.330910591410134 - type: f1 value: 54.45745186286521 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.20543293718167 - type: f1 value: 61.45365480309872 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 43.81162998944145 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.69011146212075 - type: cos_sim_ap value: 76.09792353652536 - type: cos_sim_f1 value: 70.10202763786646 - type: cos_sim_precision value: 68.65671641791045 - type: cos_sim_recall value: 71.60949868073878 - type: dot_accuracy value: 85.33110806461227 - type: dot_ap value: 70.19304383327554 - type: dot_f1 value: 67.22494202525122 - type: dot_precision value: 65.6847935548842 - type: dot_recall value: 68.83905013192611 - type: euclidean_accuracy value: 86.5410979316922 - type: euclidean_ap value: 75.91906915651882 - type: euclidean_f1 value: 69.6798975672215 - type: euclidean_precision value: 67.6865671641791 - type: euclidean_recall value: 71.79419525065963 - type: manhattan_accuracy value: 86.60070334386363 - type: manhattan_ap value: 75.94617413885031 - type: manhattan_f1 value: 69.52689565780946 - type: manhattan_precision value: 68.3312101910828 - type: manhattan_recall value: 70.76517150395777 - type: max_accuracy value: 86.69011146212075 - type: max_ap value: 76.09792353652536 - type: max_f1 value: 70.10202763786646 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.25951798812434 - type: cos_sim_ap value: 86.31476416599727 - type: cos_sim_f1 value: 78.52709971038477 - type: cos_sim_precision value: 76.7629972792117 - type: cos_sim_recall value: 80.37419156144134 - type: dot_accuracy value: 88.03896456708192 - type: dot_ap value: 83.26963599196237 - type: dot_f1 value: 76.72696459492317 - type: dot_precision value: 73.56411162133521 - type: dot_recall value: 80.17400677548507 - type: euclidean_accuracy value: 89.21682772538519 - type: euclidean_ap value: 86.29306071289969 - type: euclidean_f1 value: 78.40827030519554 - type: euclidean_precision value: 77.42250243939053 - type: euclidean_recall value: 79.41946412072683 - type: manhattan_accuracy value: 89.22458959133776 - type: manhattan_ap value: 86.2901934710645 - type: manhattan_f1 value: 78.54211378440453 - type: manhattan_precision value: 76.85505858079729 - type: manhattan_recall value: 80.30489682784109 - type: max_accuracy value: 89.25951798812434 - type: max_ap value: 86.31476416599727 - type: max_f1 value: 78.54211378440453 --- ## E5-large [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022 This model has 24 layers and the embedding size is 1024. ## Usage Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ". # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = ['query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."] tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-large') model = AutoModel.from_pretrained('intfloat/e5-large') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # (Optionally) normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Training Details Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf). ## Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## Citation If you find our paper or models helpful, please consider cite as follows: ``` @article{wang2022text, title={Text Embeddings by Weakly-Supervised Contrastive Pre-training}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2212.03533}, year={2022} } ``` ## Limitations This model only works for English texts. Long texts will be truncated to at most 512 tokens.
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
RichardErkhov/EleutherAI_-_pythia-70m-deduped-8bits
RichardErkhov
text-generation
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:2304.01373", "arxiv:2101.00027", "arxiv:2201.07311", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
1,713
1,713
5
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) pythia-70m-deduped - bnb 8bits - Model creator: https://huggingface.co/EleutherAI/ - Original model: https://huggingface.co/EleutherAI/pythia-70m-deduped/ Original model description: --- language: - en tags: - pytorch - causal-lm - pythia license: apache-2.0 datasets: - EleutherAI/the_pile_deduplicated --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf). It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. We also provide 154 intermediate checkpoints per model, hosted on Hugging Face as branches. The Pythia model suite was designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. <details> <summary style="font-weight:600">Details on previous early release and naming convention.</summary> Previously, we released an early version of the Pythia suite to the public. However, we decided to retrain the model suite to address a few hyperparameter discrepancies. This model card <a href="#changelog">lists the changes</a>; see appendix B in the Pythia paper for further discussion. We found no difference in benchmark performance between the two Pythia versions. The old models are [still available](https://huggingface.co/models?other=pythia_v0), but we suggest the retrained suite if you are just starting to use Pythia.<br> **This is the current release.** Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. </details> <br> # Pythia-70M-deduped ## Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. [See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation details. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther. ai](mailto:[email protected]). <figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ## Uses and Limitations ### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. We also provide 154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints `step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to `step143000`. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-70M-deduped for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please conduct your own risk and bias assessment. ### Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not a in itself a product and cannot be used for human-facing interactions. For example, the model may generate harmful or offensive text. Please evaluate the risks associated with your particular use case. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-70M-deduped has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-70M-deduped will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “follow” human instructions. ### Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token used by the model need not produce the most “accurate” text. Never rely on Pythia-70M-deduped to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-70M-deduped may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-70M-deduped. ### Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br> For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). ## Training ### Training data Pythia-70M-deduped was trained on the Pile **after the dataset has been globally deduplicated**.<br> [The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/). ### Training procedure All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training, from `step1000` to `step143000` (which is the same as `main`). In addition, we also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models trained for 143000 steps at a batch size of 2M (2,097,152 tokens).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX- 20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ## Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. <details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge—Easy Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/> </details> ## Changelog This section compares differences between previously released [Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current models. See Appendix B of the Pythia paper for further discussion of these changes and the motivation behind them. We found that retraining Pythia had no impact on benchmark performance. - All model sizes are now trained with uniform batch size of 2M tokens. Previously, the models of size 160M, 410M, and 1.4B parameters were trained with batch sizes of 4M tokens. - We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64, 128,256,512} in addition to every 1000 training steps. - Flash Attention was used in the new retrained suite. - We remedied a minor inconsistency that existed in the original suite: all models of size 2.8B parameters or smaller had a learning rate (LR) schedule which decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and 12B models all used an LR schedule which decayed to a minimum LR of 0. In the redone training runs, we rectified this inconsistency: all models now were trained with LR decaying to a minimum of 0.1× their maximum LR. ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. <figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
[ "QUESTION_ANSWERING", "TRANSLATION" ]
[ "SCIQ" ]
Non_BioNLP
yoeven/multilingual-e5-large-instruct-Q5_0-GGUF
yoeven
null
[ "sentence-transformers", "gguf", "mteb", "transformers", "llama-cpp", "gguf-my-repo", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "base_model:intfloat/multilingual-e5-large-instruct", "base_model:quantized:intfloat/multilingual-e5-large-instruct", "license:mit", "model-index", "endpoints_compatible", "region:us", "feature-extraction" ]
1,736
1,736
42
2
--- base_model: intfloat/multilingual-e5-large-instruct language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: mit tags: - mteb - sentence-transformers - transformers - llama-cpp - gguf-my-repo model-index: - name: multilingual-e5-large-instruct results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.23880597014924 - type: ap value: 39.07351965022687 - type: f1 value: 70.04836733862683 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (de) type: mteb/amazon_counterfactual config: de split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 66.71306209850107 - type: ap value: 79.01499914759529 - type: f1 value: 64.81951817560703 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en-ext) type: mteb/amazon_counterfactual config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 73.85307346326837 - type: ap value: 22.447519885878737 - type: f1 value: 61.0162730745633 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (ja) type: mteb/amazon_counterfactual config: ja split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.04925053533191 - type: ap value: 23.44983217128922 - type: f1 value: 62.5723230907759 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 96.28742500000001 - type: ap value: 94.8449918887462 - type: f1 value: 96.28680923610432 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 56.716 - type: f1 value: 55.76510398266401 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (de) type: mteb/amazon_reviews_multi config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 52.99999999999999 - type: f1 value: 52.00829994765178 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (es) type: mteb/amazon_reviews_multi config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.806000000000004 - type: f1 value: 48.082345914983634 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.507999999999996 - type: f1 value: 47.68752844642045 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (ja) type: mteb/amazon_reviews_multi config: ja split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.709999999999994 - type: f1 value: 47.05870376637181 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 44.662000000000006 - type: f1 value: 43.42371965372771 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 31.721 - type: map_at_10 value: 49.221 - type: map_at_100 value: 49.884 - type: map_at_1000 value: 49.888 - type: map_at_3 value: 44.31 - type: map_at_5 value: 47.276 - type: mrr_at_1 value: 32.432 - type: mrr_at_10 value: 49.5 - type: mrr_at_100 value: 50.163000000000004 - type: mrr_at_1000 value: 50.166 - type: mrr_at_3 value: 44.618 - type: mrr_at_5 value: 47.541 - type: ndcg_at_1 value: 31.721 - type: ndcg_at_10 value: 58.384 - type: ndcg_at_100 value: 61.111000000000004 - type: ndcg_at_1000 value: 61.187999999999995 - type: ndcg_at_3 value: 48.386 - type: ndcg_at_5 value: 53.708999999999996 - type: precision_at_1 value: 31.721 - type: precision_at_10 value: 8.741 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 20.057 - type: precision_at_5 value: 14.609 - type: recall_at_1 value: 31.721 - type: recall_at_10 value: 87.411 - type: recall_at_100 value: 99.075 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 60.171 - type: recall_at_5 value: 73.044 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 46.40419580759799 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 40.48593255007969 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 63.889179122289995 - type: mrr value: 77.61146286769556 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 88.15075203727929 - type: cos_sim_spearman value: 86.9622224570873 - type: euclidean_pearson value: 86.70473853624121 - type: euclidean_spearman value: 86.9622224570873 - type: manhattan_pearson value: 86.21089380980065 - type: manhattan_spearman value: 86.75318154937008 - task: type: BitextMining dataset: name: MTEB BUCC (de-en) type: mteb/bucc-bitext-mining config: de-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.65553235908142 - type: f1 value: 99.60681976339595 - type: precision value: 99.58246346555325 - type: recall value: 99.65553235908142 - task: type: BitextMining dataset: name: MTEB BUCC (fr-en) type: mteb/bucc-bitext-mining config: fr-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.26260180497468 - type: f1 value: 99.14520507740848 - type: precision value: 99.08650671362535 - type: recall value: 99.26260180497468 - task: type: BitextMining dataset: name: MTEB BUCC (ru-en) type: mteb/bucc-bitext-mining config: ru-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 98.07412538967787 - type: f1 value: 97.86629719431936 - type: precision value: 97.76238309664012 - type: recall value: 98.07412538967787 - task: type: BitextMining dataset: name: MTEB BUCC (zh-en) type: mteb/bucc-bitext-mining config: zh-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.42074776197998 - type: f1 value: 99.38564156573635 - type: precision value: 99.36808846761454 - type: recall value: 99.42074776197998 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 85.73376623376623 - type: f1 value: 85.68480707214599 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 40.935218072113855 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 36.276389017675264 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 27.764166666666668 - type: map_at_10 value: 37.298166666666674 - type: map_at_100 value: 38.530166666666666 - type: map_at_1000 value: 38.64416666666667 - type: map_at_3 value: 34.484833333333334 - type: map_at_5 value: 36.0385 - type: mrr_at_1 value: 32.93558333333333 - type: mrr_at_10 value: 41.589749999999995 - type: mrr_at_100 value: 42.425333333333334 - type: mrr_at_1000 value: 42.476333333333336 - type: mrr_at_3 value: 39.26825 - type: mrr_at_5 value: 40.567083333333336 - type: ndcg_at_1 value: 32.93558333333333 - type: ndcg_at_10 value: 42.706583333333334 - type: ndcg_at_100 value: 47.82483333333333 - type: ndcg_at_1000 value: 49.95733333333334 - type: ndcg_at_3 value: 38.064750000000004 - type: ndcg_at_5 value: 40.18158333333333 - type: precision_at_1 value: 32.93558333333333 - type: precision_at_10 value: 7.459833333333334 - type: precision_at_100 value: 1.1830833333333335 - type: precision_at_1000 value: 0.15608333333333332 - type: precision_at_3 value: 17.5235 - type: precision_at_5 value: 12.349833333333333 - type: recall_at_1 value: 27.764166666666668 - type: recall_at_10 value: 54.31775 - type: recall_at_100 value: 76.74350000000001 - type: recall_at_1000 value: 91.45208333333332 - type: recall_at_3 value: 41.23425 - type: recall_at_5 value: 46.73983333333334 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 12.969 - type: map_at_10 value: 21.584999999999997 - type: map_at_100 value: 23.3 - type: map_at_1000 value: 23.5 - type: map_at_3 value: 18.218999999999998 - type: map_at_5 value: 19.983 - type: mrr_at_1 value: 29.316 - type: mrr_at_10 value: 40.033 - type: mrr_at_100 value: 40.96 - type: mrr_at_1000 value: 41.001 - type: mrr_at_3 value: 37.123 - type: mrr_at_5 value: 38.757999999999996 - type: ndcg_at_1 value: 29.316 - type: ndcg_at_10 value: 29.858 - type: ndcg_at_100 value: 36.756 - type: ndcg_at_1000 value: 40.245999999999995 - type: ndcg_at_3 value: 24.822 - type: ndcg_at_5 value: 26.565 - type: precision_at_1 value: 29.316 - type: precision_at_10 value: 9.186 - type: precision_at_100 value: 1.6549999999999998 - type: precision_at_1000 value: 0.22999999999999998 - type: precision_at_3 value: 18.436 - type: precision_at_5 value: 13.876 - type: recall_at_1 value: 12.969 - type: recall_at_10 value: 35.142 - type: recall_at_100 value: 59.143 - type: recall_at_1000 value: 78.594 - type: recall_at_3 value: 22.604 - type: recall_at_5 value: 27.883000000000003 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.527999999999999 - type: map_at_10 value: 17.974999999999998 - type: map_at_100 value: 25.665 - type: map_at_1000 value: 27.406000000000002 - type: map_at_3 value: 13.017999999999999 - type: map_at_5 value: 15.137 - type: mrr_at_1 value: 62.5 - type: mrr_at_10 value: 71.891 - type: mrr_at_100 value: 72.294 - type: mrr_at_1000 value: 72.296 - type: mrr_at_3 value: 69.958 - type: mrr_at_5 value: 71.121 - type: ndcg_at_1 value: 50.875 - type: ndcg_at_10 value: 38.36 - type: ndcg_at_100 value: 44.235 - type: ndcg_at_1000 value: 52.154 - type: ndcg_at_3 value: 43.008 - type: ndcg_at_5 value: 40.083999999999996 - type: precision_at_1 value: 62.5 - type: precision_at_10 value: 30.0 - type: precision_at_100 value: 10.038 - type: precision_at_1000 value: 2.0869999999999997 - type: precision_at_3 value: 46.833000000000006 - type: precision_at_5 value: 38.800000000000004 - type: recall_at_1 value: 8.527999999999999 - type: recall_at_10 value: 23.828 - type: recall_at_100 value: 52.322 - type: recall_at_1000 value: 77.143 - type: recall_at_3 value: 14.136000000000001 - type: recall_at_5 value: 17.761 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 51.51 - type: f1 value: 47.632159862049896 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 60.734 - type: map_at_10 value: 72.442 - type: map_at_100 value: 72.735 - type: map_at_1000 value: 72.75 - type: map_at_3 value: 70.41199999999999 - type: map_at_5 value: 71.80499999999999 - type: mrr_at_1 value: 65.212 - type: mrr_at_10 value: 76.613 - type: mrr_at_100 value: 76.79899999999999 - type: mrr_at_1000 value: 76.801 - type: mrr_at_3 value: 74.8 - type: mrr_at_5 value: 76.12400000000001 - type: ndcg_at_1 value: 65.212 - type: ndcg_at_10 value: 77.988 - type: ndcg_at_100 value: 79.167 - type: ndcg_at_1000 value: 79.452 - type: ndcg_at_3 value: 74.362 - type: ndcg_at_5 value: 76.666 - type: precision_at_1 value: 65.212 - type: precision_at_10 value: 10.003 - type: precision_at_100 value: 1.077 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 29.518 - type: precision_at_5 value: 19.016 - type: recall_at_1 value: 60.734 - type: recall_at_10 value: 90.824 - type: recall_at_100 value: 95.71600000000001 - type: recall_at_1000 value: 97.577 - type: recall_at_3 value: 81.243 - type: recall_at_5 value: 86.90299999999999 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 23.845 - type: map_at_10 value: 39.281 - type: map_at_100 value: 41.422 - type: map_at_1000 value: 41.593 - type: map_at_3 value: 34.467 - type: map_at_5 value: 37.017 - type: mrr_at_1 value: 47.531 - type: mrr_at_10 value: 56.204 - type: mrr_at_100 value: 56.928999999999995 - type: mrr_at_1000 value: 56.962999999999994 - type: mrr_at_3 value: 54.115 - type: mrr_at_5 value: 55.373000000000005 - type: ndcg_at_1 value: 47.531 - type: ndcg_at_10 value: 47.711999999999996 - type: ndcg_at_100 value: 54.510999999999996 - type: ndcg_at_1000 value: 57.103 - type: ndcg_at_3 value: 44.145 - type: ndcg_at_5 value: 45.032 - type: precision_at_1 value: 47.531 - type: precision_at_10 value: 13.194 - type: precision_at_100 value: 2.045 - type: precision_at_1000 value: 0.249 - type: precision_at_3 value: 29.424 - type: precision_at_5 value: 21.451 - type: recall_at_1 value: 23.845 - type: recall_at_10 value: 54.967 - type: recall_at_100 value: 79.11399999999999 - type: recall_at_1000 value: 94.56700000000001 - type: recall_at_3 value: 40.256 - type: recall_at_5 value: 46.215 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 37.819 - type: map_at_10 value: 60.889 - type: map_at_100 value: 61.717999999999996 - type: map_at_1000 value: 61.778 - type: map_at_3 value: 57.254000000000005 - type: map_at_5 value: 59.541 - type: mrr_at_1 value: 75.638 - type: mrr_at_10 value: 82.173 - type: mrr_at_100 value: 82.362 - type: mrr_at_1000 value: 82.37 - type: mrr_at_3 value: 81.089 - type: mrr_at_5 value: 81.827 - type: ndcg_at_1 value: 75.638 - type: ndcg_at_10 value: 69.317 - type: ndcg_at_100 value: 72.221 - type: ndcg_at_1000 value: 73.382 - type: ndcg_at_3 value: 64.14 - type: ndcg_at_5 value: 67.07600000000001 - type: precision_at_1 value: 75.638 - type: precision_at_10 value: 14.704999999999998 - type: precision_at_100 value: 1.698 - type: precision_at_1000 value: 0.185 - type: precision_at_3 value: 41.394999999999996 - type: precision_at_5 value: 27.162999999999997 - type: recall_at_1 value: 37.819 - type: recall_at_10 value: 73.52499999999999 - type: recall_at_100 value: 84.875 - type: recall_at_1000 value: 92.559 - type: recall_at_3 value: 62.092999999999996 - type: recall_at_5 value: 67.907 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 94.60079999999999 - type: ap value: 92.67396345347356 - type: f1 value: 94.5988098167121 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 21.285 - type: map_at_10 value: 33.436 - type: map_at_100 value: 34.63 - type: map_at_1000 value: 34.681 - type: map_at_3 value: 29.412 - type: map_at_5 value: 31.715 - type: mrr_at_1 value: 21.848 - type: mrr_at_10 value: 33.979 - type: mrr_at_100 value: 35.118 - type: mrr_at_1000 value: 35.162 - type: mrr_at_3 value: 30.036 - type: mrr_at_5 value: 32.298 - type: ndcg_at_1 value: 21.862000000000002 - type: ndcg_at_10 value: 40.43 - type: ndcg_at_100 value: 46.17 - type: ndcg_at_1000 value: 47.412 - type: ndcg_at_3 value: 32.221 - type: ndcg_at_5 value: 36.332 - type: precision_at_1 value: 21.862000000000002 - type: precision_at_10 value: 6.491 - type: precision_at_100 value: 0.935 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 13.744 - type: precision_at_5 value: 10.331999999999999 - type: recall_at_1 value: 21.285 - type: recall_at_10 value: 62.083 - type: recall_at_100 value: 88.576 - type: recall_at_1000 value: 98.006 - type: recall_at_3 value: 39.729 - type: recall_at_5 value: 49.608000000000004 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.92612859097127 - type: f1 value: 93.82370333372853 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (de) type: mteb/mtop_domain config: de split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.67681036911807 - type: f1 value: 92.14191382411472 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (es) type: mteb/mtop_domain config: es split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.26817878585723 - type: f1 value: 91.92824250337878 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (fr) type: mteb/mtop_domain config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.96554963983714 - type: f1 value: 90.02859329630792 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (hi) type: mteb/mtop_domain config: hi split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 90.02509860164935 - type: f1 value: 89.30665159182062 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (th) type: mteb/mtop_domain config: th split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 87.55515370705244 - type: f1 value: 87.94449232331907 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 82.4623803009576 - type: f1 value: 66.06738378772725 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (de) type: mteb/mtop_intent config: de split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 79.3716539870386 - type: f1 value: 60.37614033396853 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (es) type: mteb/mtop_intent config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 80.34022681787857 - type: f1 value: 58.302008026952 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (fr) type: mteb/mtop_intent config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 76.72095208268087 - type: f1 value: 59.64524724009049 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (hi) type: mteb/mtop_intent config: hi split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 77.87020437432773 - type: f1 value: 57.80202694670567 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (th) type: mteb/mtop_intent config: th split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 77.73598553345387 - type: f1 value: 58.19628250675031 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (af) type: mteb/amazon_massive_intent config: af split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.6630800268998 - type: f1 value: 65.00996668051691 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (am) type: mteb/amazon_massive_intent config: am split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.7128446536651 - type: f1 value: 57.95860594874963 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ar) type: mteb/amazon_massive_intent config: ar split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.61129791526563 - type: f1 value: 59.75328290206483 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (az) type: mteb/amazon_massive_intent config: az split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.00134498991257 - type: f1 value: 67.0230483991802 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (bn) type: mteb/amazon_massive_intent config: bn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.54068594485541 - type: f1 value: 65.54604628946976 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (cy) type: mteb/amazon_massive_intent config: cy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.032952252858095 - type: f1 value: 58.715741857057104 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (da) type: mteb/amazon_massive_intent config: da split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.80901143241427 - type: f1 value: 68.33963989243877 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (de) type: mteb/amazon_massive_intent config: de split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.47141896435777 - type: f1 value: 69.56765020308262 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (el) type: mteb/amazon_massive_intent config: el split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.2373907195696 - type: f1 value: 69.04529836036467 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 77.05783456624076 - type: f1 value: 74.69430584708174 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (es) type: mteb/amazon_massive_intent config: es split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.82111634162744 - type: f1 value: 70.77228952803762 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fa) type: mteb/amazon_massive_intent config: fa split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.25353059852051 - type: f1 value: 71.05310103416411 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fi) type: mteb/amazon_massive_intent config: fi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.28648285137861 - type: f1 value: 69.08020473732226 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fr) type: mteb/amazon_massive_intent config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.31540013449899 - type: f1 value: 70.9426355465791 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (he) type: mteb/amazon_massive_intent config: he split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.2151983860121 - type: f1 value: 67.52541755908858 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hi) type: mteb/amazon_massive_intent config: hi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.58372562205784 - type: f1 value: 69.49769064229827 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hu) type: mteb/amazon_massive_intent config: hu split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.9233355749832 - type: f1 value: 69.36311548259593 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hy) type: mteb/amazon_massive_intent config: hy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.07330195023538 - type: f1 value: 64.99882022345572 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (id) type: mteb/amazon_massive_intent config: id split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.62273032952253 - type: f1 value: 70.6394885471001 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (is) type: mteb/amazon_massive_intent config: is split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.77000672494957 - type: f1 value: 62.9368944815065 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (it) type: mteb/amazon_massive_intent config: it split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.453261600538 - type: f1 value: 70.85069934666681 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ja) type: mteb/amazon_massive_intent config: ja split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.6906523201076 - type: f1 value: 72.03249740074217 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (jv) type: mteb/amazon_massive_intent config: jv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.03631472763953 - type: f1 value: 59.3165215571852 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ka) type: mteb/amazon_massive_intent config: ka split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.913920645595155 - type: f1 value: 57.367337711611285 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (km) type: mteb/amazon_massive_intent config: km split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.42837928715535 - type: f1 value: 52.60527294970906 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (kn) type: mteb/amazon_massive_intent config: kn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.33490248823135 - type: f1 value: 63.213340969404065 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ko) type: mteb/amazon_massive_intent config: ko split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.58507061197041 - type: f1 value: 68.40256628040486 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (lv) type: mteb/amazon_massive_intent config: lv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.11230665770006 - type: f1 value: 66.44863577842305 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ml) type: mteb/amazon_massive_intent config: ml split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.70073974445192 - type: f1 value: 67.21291337273702 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (mn) type: mteb/amazon_massive_intent config: mn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.43913920645595 - type: f1 value: 64.09838087422806 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ms) type: mteb/amazon_massive_intent config: ms split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.80026899798251 - type: f1 value: 68.76986742962444 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (my) type: mteb/amazon_massive_intent config: my split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.78816408876934 - type: f1 value: 62.18781873428972 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nb) type: mteb/amazon_massive_intent config: nb split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.6577000672495 - type: f1 value: 68.75171511133003 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nl) type: mteb/amazon_massive_intent config: nl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.42501681237391 - type: f1 value: 71.18434963451544 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.64828513786146 - type: f1 value: 70.67741914007422 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pt) type: mteb/amazon_massive_intent config: pt split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.62811028917284 - type: f1 value: 71.36402039740959 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ro) type: mteb/amazon_massive_intent config: ro split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.88634835238736 - type: f1 value: 69.23701923480677 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ru) type: mteb/amazon_massive_intent config: ru split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.15938130464022 - type: f1 value: 71.87792218993388 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sl) type: mteb/amazon_massive_intent config: sl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.96301277740416 - type: f1 value: 67.29584200202983 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sq) type: mteb/amazon_massive_intent config: sq split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.49562878278412 - type: f1 value: 66.91716685679431 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sv) type: mteb/amazon_massive_intent config: sv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.6805648957633 - type: f1 value: 72.02723592594374 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sw) type: mteb/amazon_massive_intent config: sw split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.00605245460659 - type: f1 value: 60.16716669482932 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ta) type: mteb/amazon_massive_intent config: ta split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.90988567585742 - type: f1 value: 63.99405488777784 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (te) type: mteb/amazon_massive_intent config: te split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.62273032952253 - type: f1 value: 65.17213906909481 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (th) type: mteb/amazon_massive_intent config: th split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.50907868190988 - type: f1 value: 69.15165697194853 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tl) type: mteb/amazon_massive_intent config: tl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.30733019502352 - type: f1 value: 66.69024007380474 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tr) type: mteb/amazon_massive_intent config: tr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.24277067921989 - type: f1 value: 68.80515408492947 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ur) type: mteb/amazon_massive_intent config: ur split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.49831876260929 - type: f1 value: 64.83778567111116 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (vi) type: mteb/amazon_massive_intent config: vi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.28782784129119 - type: f1 value: 69.3294186700733 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.315400134499 - type: f1 value: 71.22674385243207 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-TW) type: mteb/amazon_massive_intent config: zh-TW split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.37794216543377 - type: f1 value: 68.96962492838232 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (af) type: mteb/amazon_massive_scenario config: af split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.33557498318764 - type: f1 value: 72.28949738478356 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (am) type: mteb/amazon_massive_scenario config: am split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 65.84398117014123 - type: f1 value: 64.71026362091463 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ar) type: mteb/amazon_massive_scenario config: ar split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.76462676529925 - type: f1 value: 69.8229667407667 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (az) type: mteb/amazon_massive_scenario config: az split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.02420981842636 - type: f1 value: 71.76576384895898 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (bn) type: mteb/amazon_massive_scenario config: bn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.7572293207801 - type: f1 value: 72.76840765295256 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (cy) type: mteb/amazon_massive_scenario config: cy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.02286482851379 - type: f1 value: 66.17237947327872 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (da) type: mteb/amazon_massive_scenario config: da split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.60928043039678 - type: f1 value: 77.27094731234773 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (de) type: mteb/amazon_massive_scenario config: de split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.68325487558843 - type: f1 value: 77.97530399082261 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (el) type: mteb/amazon_massive_scenario config: el split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.13315400134498 - type: f1 value: 75.97558584796424 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 80.47410894418292 - type: f1 value: 80.52244841473792 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (es) type: mteb/amazon_massive_scenario config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.9670477471419 - type: f1 value: 77.37318805793146 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fa) type: mteb/amazon_massive_scenario config: fa split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 78.09683927370544 - type: f1 value: 77.69773737430847 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fi) type: mteb/amazon_massive_scenario config: fi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.20847343644922 - type: f1 value: 75.17071738727348 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fr) type: mteb/amazon_massive_scenario config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.07464694014796 - type: f1 value: 77.16136207698571 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (he) type: mteb/amazon_massive_scenario config: he split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.53396099529255 - type: f1 value: 73.58296404484122 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hi) type: mteb/amazon_massive_scenario config: hi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.75319435104237 - type: f1 value: 75.24674707850833 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hu) type: mteb/amazon_massive_scenario config: hu split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.0948217888366 - type: f1 value: 76.47559490205028 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hy) type: mteb/amazon_massive_scenario config: hy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.07599193006052 - type: f1 value: 70.76028043093511 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (id) type: mteb/amazon_massive_scenario config: id split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.10490921318089 - type: f1 value: 77.01215275283272 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (is) type: mteb/amazon_massive_scenario config: is split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.25756556825824 - type: f1 value: 70.20605314648762 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (it) type: mteb/amazon_massive_scenario config: it split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.08137188971082 - type: f1 value: 77.3899269057439 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ja) type: mteb/amazon_massive_scenario config: ja split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 79.35440484196369 - type: f1 value: 79.58964690002772 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (jv) type: mteb/amazon_massive_scenario config: jv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.42299932750504 - type: f1 value: 68.07844356925413 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ka) type: mteb/amazon_massive_scenario config: ka split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.15669132481507 - type: f1 value: 65.89383352608513 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (km) type: mteb/amazon_massive_scenario config: km split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.11432414256894 - type: f1 value: 57.69910594559806 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (kn) type: mteb/amazon_massive_scenario config: kn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.24747814391392 - type: f1 value: 70.42455553830918 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ko) type: mteb/amazon_massive_scenario config: ko split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.46267652992603 - type: f1 value: 76.8854559308316 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (lv) type: mteb/amazon_massive_scenario config: lv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.24815063887021 - type: f1 value: 72.77805034658074 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ml) type: mteb/amazon_massive_scenario config: ml split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.11566913248151 - type: f1 value: 73.86147988001356 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (mn) type: mteb/amazon_massive_scenario config: mn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.0168123739072 - type: f1 value: 69.38515920054571 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ms) type: mteb/amazon_massive_scenario config: ms split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.41156691324814 - type: f1 value: 73.43474953408237 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (my) type: mteb/amazon_massive_scenario config: my split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.39609952925353 - type: f1 value: 67.29731681109291 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nb) type: mteb/amazon_massive_scenario config: nb split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.20914593140552 - type: f1 value: 77.07066497935367 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nl) type: mteb/amazon_massive_scenario config: nl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 78.52387357094821 - type: f1 value: 78.5259569473291 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.6913248150639 - type: f1 value: 76.91201656350455 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pt) type: mteb/amazon_massive_scenario config: pt split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.1217215870881 - type: f1 value: 77.41179937912504 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ro) type: mteb/amazon_massive_scenario config: ro split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.25891055817083 - type: f1 value: 75.8089244542887 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ru) type: mteb/amazon_massive_scenario config: ru split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.70679219905851 - type: f1 value: 78.21459594517711 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sl) type: mteb/amazon_massive_scenario config: sl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.83523873570948 - type: f1 value: 74.86847028401978 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sq) type: mteb/amazon_massive_scenario config: sq split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.71755211835911 - type: f1 value: 74.0214326485662 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sv) type: mteb/amazon_massive_scenario config: sv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 79.06523201075991 - type: f1 value: 79.10545620325138 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sw) type: mteb/amazon_massive_scenario config: sw split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.91862811028918 - type: f1 value: 66.50386121217983 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ta) type: mteb/amazon_massive_scenario config: ta split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.93140551445865 - type: f1 value: 70.755435928495 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (te) type: mteb/amazon_massive_scenario config: te split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.40753194351042 - type: f1 value: 71.61816115782923 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (th) type: mteb/amazon_massive_scenario config: th split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.1815736381977 - type: f1 value: 75.08016717887205 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tl) type: mteb/amazon_massive_scenario config: tl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.86482851378614 - type: f1 value: 72.39521180006291 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tr) type: mteb/amazon_massive_scenario config: tr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.46940147948891 - type: f1 value: 76.70044085362349 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ur) type: mteb/amazon_massive_scenario config: ur split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.89307330195024 - type: f1 value: 71.5721825332298 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (vi) type: mteb/amazon_massive_scenario config: vi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.7511768661735 - type: f1 value: 75.17918654541515 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 78.69535978480162 - type: f1 value: 78.90019070153316 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-TW) type: mteb/amazon_massive_scenario config: zh-TW split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.45729657027572 - type: f1 value: 76.19578371794672 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 36.92715354123554 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 35.53536244162518 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 33.08507884504006 - type: mrr value: 34.32436977159129 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.935 - type: map_at_10 value: 13.297 - type: map_at_100 value: 16.907 - type: map_at_1000 value: 18.391 - type: map_at_3 value: 9.626999999999999 - type: map_at_5 value: 11.190999999999999 - type: mrr_at_1 value: 46.129999999999995 - type: mrr_at_10 value: 54.346000000000004 - type: mrr_at_100 value: 55.067 - type: mrr_at_1000 value: 55.1 - type: mrr_at_3 value: 51.961 - type: mrr_at_5 value: 53.246 - type: ndcg_at_1 value: 44.118 - type: ndcg_at_10 value: 35.534 - type: ndcg_at_100 value: 32.946999999999996 - type: ndcg_at_1000 value: 41.599000000000004 - type: ndcg_at_3 value: 40.25 - type: ndcg_at_5 value: 37.978 - type: precision_at_1 value: 46.129999999999995 - type: precision_at_10 value: 26.842 - type: precision_at_100 value: 8.427 - type: precision_at_1000 value: 2.128 - type: precision_at_3 value: 37.977 - type: precision_at_5 value: 32.879000000000005 - type: recall_at_1 value: 5.935 - type: recall_at_10 value: 17.211000000000002 - type: recall_at_100 value: 34.33 - type: recall_at_1000 value: 65.551 - type: recall_at_3 value: 10.483 - type: recall_at_5 value: 13.078999999999999 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 35.231 - type: map_at_10 value: 50.202000000000005 - type: map_at_100 value: 51.154999999999994 - type: map_at_1000 value: 51.181 - type: map_at_3 value: 45.774 - type: map_at_5 value: 48.522 - type: mrr_at_1 value: 39.687 - type: mrr_at_10 value: 52.88 - type: mrr_at_100 value: 53.569 - type: mrr_at_1000 value: 53.58500000000001 - type: mrr_at_3 value: 49.228 - type: mrr_at_5 value: 51.525 - type: ndcg_at_1 value: 39.687 - type: ndcg_at_10 value: 57.754000000000005 - type: ndcg_at_100 value: 61.597 - type: ndcg_at_1000 value: 62.18900000000001 - type: ndcg_at_3 value: 49.55 - type: ndcg_at_5 value: 54.11899999999999 - type: precision_at_1 value: 39.687 - type: precision_at_10 value: 9.313 - type: precision_at_100 value: 1.146 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 22.229 - type: precision_at_5 value: 15.939 - type: recall_at_1 value: 35.231 - type: recall_at_10 value: 78.083 - type: recall_at_100 value: 94.42099999999999 - type: recall_at_1000 value: 98.81 - type: recall_at_3 value: 57.047000000000004 - type: recall_at_5 value: 67.637 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 71.241 - type: map_at_10 value: 85.462 - type: map_at_100 value: 86.083 - type: map_at_1000 value: 86.09700000000001 - type: map_at_3 value: 82.49499999999999 - type: map_at_5 value: 84.392 - type: mrr_at_1 value: 82.09 - type: mrr_at_10 value: 88.301 - type: mrr_at_100 value: 88.383 - type: mrr_at_1000 value: 88.384 - type: mrr_at_3 value: 87.37 - type: mrr_at_5 value: 88.035 - type: ndcg_at_1 value: 82.12 - type: ndcg_at_10 value: 89.149 - type: ndcg_at_100 value: 90.235 - type: ndcg_at_1000 value: 90.307 - type: ndcg_at_3 value: 86.37599999999999 - type: ndcg_at_5 value: 87.964 - type: precision_at_1 value: 82.12 - type: precision_at_10 value: 13.56 - type: precision_at_100 value: 1.539 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.88 - type: precision_at_5 value: 24.92 - type: recall_at_1 value: 71.241 - type: recall_at_10 value: 96.128 - type: recall_at_100 value: 99.696 - type: recall_at_1000 value: 99.994 - type: recall_at_3 value: 88.181 - type: recall_at_5 value: 92.694 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 56.59757799655151 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 64.27391998854624 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.243 - type: map_at_10 value: 10.965 - type: map_at_100 value: 12.934999999999999 - type: map_at_1000 value: 13.256 - type: map_at_3 value: 7.907 - type: map_at_5 value: 9.435 - type: mrr_at_1 value: 20.9 - type: mrr_at_10 value: 31.849 - type: mrr_at_100 value: 32.964 - type: mrr_at_1000 value: 33.024 - type: mrr_at_3 value: 28.517 - type: mrr_at_5 value: 30.381999999999998 - type: ndcg_at_1 value: 20.9 - type: ndcg_at_10 value: 18.723 - type: ndcg_at_100 value: 26.384999999999998 - type: ndcg_at_1000 value: 32.114 - type: ndcg_at_3 value: 17.753 - type: ndcg_at_5 value: 15.558 - type: precision_at_1 value: 20.9 - type: precision_at_10 value: 9.8 - type: precision_at_100 value: 2.078 - type: precision_at_1000 value: 0.345 - type: precision_at_3 value: 16.900000000000002 - type: precision_at_5 value: 13.88 - type: recall_at_1 value: 4.243 - type: recall_at_10 value: 19.885 - type: recall_at_100 value: 42.17 - type: recall_at_1000 value: 70.12 - type: recall_at_3 value: 10.288 - type: recall_at_5 value: 14.072000000000001 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 85.84209174935282 - type: cos_sim_spearman value: 81.73248048438833 - type: euclidean_pearson value: 83.02810070308149 - type: euclidean_spearman value: 81.73248295679514 - type: manhattan_pearson value: 82.95368060376002 - type: manhattan_spearman value: 81.60277910998718 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 88.52628804556943 - type: cos_sim_spearman value: 82.5713913555672 - type: euclidean_pearson value: 85.8796774746988 - type: euclidean_spearman value: 82.57137506803424 - type: manhattan_pearson value: 85.79671002960058 - type: manhattan_spearman value: 82.49445981618027 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 86.23682503505542 - type: cos_sim_spearman value: 87.15008956711806 - type: euclidean_pearson value: 86.79805401524959 - type: euclidean_spearman value: 87.15008956711806 - type: manhattan_pearson value: 86.65298502699244 - type: manhattan_spearman value: 86.97677821948562 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 85.63370304677802 - type: cos_sim_spearman value: 84.97105553540318 - type: euclidean_pearson value: 85.28896108687721 - type: euclidean_spearman value: 84.97105553540318 - type: manhattan_pearson value: 85.09663190337331 - type: manhattan_spearman value: 84.79126831644619 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 90.2614838800733 - type: cos_sim_spearman value: 91.0509162991835 - type: euclidean_pearson value: 90.33098317533373 - type: euclidean_spearman value: 91.05091625871644 - type: manhattan_pearson value: 90.26250435151107 - type: manhattan_spearman value: 90.97999594417519 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 85.80480973335091 - type: cos_sim_spearman value: 87.313695492969 - type: euclidean_pearson value: 86.49267251576939 - type: euclidean_spearman value: 87.313695492969 - type: manhattan_pearson value: 86.44019901831935 - type: manhattan_spearman value: 87.24205395460392 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 90.05662789380672 - type: cos_sim_spearman value: 90.02759424426651 - type: euclidean_pearson value: 90.4042483422981 - type: euclidean_spearman value: 90.02759424426651 - type: manhattan_pearson value: 90.51446975000226 - type: manhattan_spearman value: 90.08832889933616 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 67.5975528273532 - type: cos_sim_spearman value: 67.62969861411354 - type: euclidean_pearson value: 69.224275734323 - type: euclidean_spearman value: 67.62969861411354 - type: manhattan_pearson value: 69.3761447059927 - type: manhattan_spearman value: 67.90921005611467 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 87.11244327231684 - type: cos_sim_spearman value: 88.37902438979035 - type: euclidean_pearson value: 87.86054279847336 - type: euclidean_spearman value: 88.37902438979035 - type: manhattan_pearson value: 87.77257757320378 - type: manhattan_spearman value: 88.25208966098123 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 85.87174608143563 - type: mrr value: 96.12836872640794 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 57.760999999999996 - type: map_at_10 value: 67.258 - type: map_at_100 value: 67.757 - type: map_at_1000 value: 67.78800000000001 - type: map_at_3 value: 64.602 - type: map_at_5 value: 65.64 - type: mrr_at_1 value: 60.667 - type: mrr_at_10 value: 68.441 - type: mrr_at_100 value: 68.825 - type: mrr_at_1000 value: 68.853 - type: mrr_at_3 value: 66.444 - type: mrr_at_5 value: 67.26100000000001 - type: ndcg_at_1 value: 60.667 - type: ndcg_at_10 value: 71.852 - type: ndcg_at_100 value: 73.9 - type: ndcg_at_1000 value: 74.628 - type: ndcg_at_3 value: 67.093 - type: ndcg_at_5 value: 68.58 - type: precision_at_1 value: 60.667 - type: precision_at_10 value: 9.6 - type: precision_at_100 value: 1.0670000000000002 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 26.111 - type: precision_at_5 value: 16.733 - type: recall_at_1 value: 57.760999999999996 - type: recall_at_10 value: 84.967 - type: recall_at_100 value: 93.833 - type: recall_at_1000 value: 99.333 - type: recall_at_3 value: 71.589 - type: recall_at_5 value: 75.483 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.66633663366336 - type: cos_sim_ap value: 91.17685358899108 - type: cos_sim_f1 value: 82.16818642350559 - type: cos_sim_precision value: 83.26488706365504 - type: cos_sim_recall value: 81.10000000000001 - type: dot_accuracy value: 99.66633663366336 - type: dot_ap value: 91.17663411119032 - type: dot_f1 value: 82.16818642350559 - type: dot_precision value: 83.26488706365504 - type: dot_recall value: 81.10000000000001 - type: euclidean_accuracy value: 99.66633663366336 - type: euclidean_ap value: 91.17685189882275 - type: euclidean_f1 value: 82.16818642350559 - type: euclidean_precision value: 83.26488706365504 - type: euclidean_recall value: 81.10000000000001 - type: manhattan_accuracy value: 99.66633663366336 - type: manhattan_ap value: 91.2241619496737 - type: manhattan_f1 value: 82.20472440944883 - type: manhattan_precision value: 86.51933701657458 - type: manhattan_recall value: 78.3 - type: max_accuracy value: 99.66633663366336 - type: max_ap value: 91.2241619496737 - type: max_f1 value: 82.20472440944883 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 66.85101268897951 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 42.461184054706905 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 51.44542568873886 - type: mrr value: 52.33656151854681 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.75982974997539 - type: cos_sim_spearman value: 30.385405026539914 - type: dot_pearson value: 30.75982433546523 - type: dot_spearman value: 30.385405026539914 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.22799999999999998 - type: map_at_10 value: 2.064 - type: map_at_100 value: 13.056000000000001 - type: map_at_1000 value: 31.747999999999998 - type: map_at_3 value: 0.67 - type: map_at_5 value: 1.097 - type: mrr_at_1 value: 90.0 - type: mrr_at_10 value: 94.667 - type: mrr_at_100 value: 94.667 - type: mrr_at_1000 value: 94.667 - type: mrr_at_3 value: 94.667 - type: mrr_at_5 value: 94.667 - type: ndcg_at_1 value: 86.0 - type: ndcg_at_10 value: 82.0 - type: ndcg_at_100 value: 64.307 - type: ndcg_at_1000 value: 57.023999999999994 - type: ndcg_at_3 value: 85.816 - type: ndcg_at_5 value: 84.904 - type: precision_at_1 value: 90.0 - type: precision_at_10 value: 85.8 - type: precision_at_100 value: 66.46 - type: precision_at_1000 value: 25.202 - type: precision_at_3 value: 90.0 - type: precision_at_5 value: 89.2 - type: recall_at_1 value: 0.22799999999999998 - type: recall_at_10 value: 2.235 - type: recall_at_100 value: 16.185 - type: recall_at_1000 value: 53.620999999999995 - type: recall_at_3 value: 0.7040000000000001 - type: recall_at_5 value: 1.172 - task: type: BitextMining dataset: name: MTEB Tatoeba (sqi-eng) type: mteb/tatoeba-bitext-mining config: sqi-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.75 - type: precision value: 96.45 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (fry-eng) type: mteb/tatoeba-bitext-mining config: fry-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.54913294797689 - type: f1 value: 82.46628131021194 - type: precision value: 81.1175337186898 - type: recall value: 85.54913294797689 - task: type: BitextMining dataset: name: MTEB Tatoeba (kur-eng) type: mteb/tatoeba-bitext-mining config: kur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81.21951219512195 - type: f1 value: 77.33333333333334 - type: precision value: 75.54878048780488 - type: recall value: 81.21951219512195 - task: type: BitextMining dataset: name: MTEB Tatoeba (tur-eng) type: mteb/tatoeba-bitext-mining config: tur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.6 - type: f1 value: 98.26666666666665 - type: precision value: 98.1 - type: recall value: 98.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (deu-eng) type: mteb/tatoeba-bitext-mining config: deu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 99.5 - type: f1 value: 99.33333333333333 - type: precision value: 99.25 - type: recall value: 99.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (nld-eng) type: mteb/tatoeba-bitext-mining config: nld-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.2 - type: precision value: 96.89999999999999 - type: recall value: 97.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (ron-eng) type: mteb/tatoeba-bitext-mining config: ron-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.18333333333334 - type: precision value: 96.88333333333333 - type: recall value: 97.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (ang-eng) type: mteb/tatoeba-bitext-mining config: ang-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.61194029850746 - type: f1 value: 72.81094527363183 - type: precision value: 70.83333333333333 - type: recall value: 77.61194029850746 - task: type: BitextMining dataset: name: MTEB Tatoeba (ido-eng) type: mteb/tatoeba-bitext-mining config: ido-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.91666666666667 - type: precision value: 91.08333333333334 - type: recall value: 93.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (jav-eng) type: mteb/tatoeba-bitext-mining config: jav-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.29268292682927 - type: f1 value: 85.27642276422765 - type: precision value: 84.01277584204414 - type: recall value: 88.29268292682927 - task: type: BitextMining dataset: name: MTEB Tatoeba (isl-eng) type: mteb/tatoeba-bitext-mining config: isl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 95.0 - type: precision value: 94.46666666666668 - type: recall value: 96.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (slv-eng) type: mteb/tatoeba-bitext-mining config: slv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.681652490887 - type: f1 value: 91.90765492102065 - type: precision value: 91.05913325232888 - type: recall value: 93.681652490887 - task: type: BitextMining dataset: name: MTEB Tatoeba (cym-eng) type: mteb/tatoeba-bitext-mining config: cym-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.17391304347827 - type: f1 value: 89.97101449275361 - type: precision value: 88.96811594202899 - type: recall value: 92.17391304347827 - task: type: BitextMining dataset: name: MTEB Tatoeba (kaz-eng) type: mteb/tatoeba-bitext-mining config: kaz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.43478260869566 - type: f1 value: 87.72173913043478 - type: precision value: 86.42028985507245 - type: recall value: 90.43478260869566 - task: type: BitextMining dataset: name: MTEB Tatoeba (est-eng) type: mteb/tatoeba-bitext-mining config: est-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.4 - type: f1 value: 88.03 - type: precision value: 86.95 - type: recall value: 90.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (heb-eng) type: mteb/tatoeba-bitext-mining config: heb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.4 - type: f1 value: 91.45666666666666 - type: precision value: 90.525 - type: recall value: 93.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (gla-eng) type: mteb/tatoeba-bitext-mining config: gla-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81.9059107358263 - type: f1 value: 78.32557872364869 - type: precision value: 76.78260286824823 - type: recall value: 81.9059107358263 - task: type: BitextMining dataset: name: MTEB Tatoeba (mar-eng) type: mteb/tatoeba-bitext-mining config: mar-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.3 - type: f1 value: 92.58333333333333 - type: precision value: 91.73333333333332 - type: recall value: 94.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (lat-eng) type: mteb/tatoeba-bitext-mining config: lat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 79.10000000000001 - type: f1 value: 74.50500000000001 - type: precision value: 72.58928571428571 - type: recall value: 79.10000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (bel-eng) type: mteb/tatoeba-bitext-mining config: bel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.6 - type: f1 value: 95.55 - type: precision value: 95.05 - type: recall value: 96.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (pms-eng) type: mteb/tatoeba-bitext-mining config: pms-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.0952380952381 - type: f1 value: 77.98458049886621 - type: precision value: 76.1968253968254 - type: recall value: 82.0952380952381 - task: type: BitextMining dataset: name: MTEB Tatoeba (gle-eng) type: mteb/tatoeba-bitext-mining config: gle-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.9 - type: f1 value: 84.99190476190476 - type: precision value: 83.65 - type: recall value: 87.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (pes-eng) type: mteb/tatoeba-bitext-mining config: pes-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.7 - type: f1 value: 94.56666666666666 - type: precision value: 94.01666666666667 - type: recall value: 95.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (nob-eng) type: mteb/tatoeba-bitext-mining config: nob-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.6 - type: f1 value: 98.2 - type: precision value: 98.0 - type: recall value: 98.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (bul-eng) type: mteb/tatoeba-bitext-mining config: bul-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.6 - type: f1 value: 94.38333333333334 - type: precision value: 93.78333333333335 - type: recall value: 95.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cbk-eng) type: mteb/tatoeba-bitext-mining config: cbk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.4 - type: f1 value: 84.10380952380952 - type: precision value: 82.67 - type: recall value: 87.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (hun-eng) type: mteb/tatoeba-bitext-mining config: hun-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.5 - type: f1 value: 94.33333333333334 - type: precision value: 93.78333333333333 - type: recall value: 95.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (uig-eng) type: mteb/tatoeba-bitext-mining config: uig-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.4 - type: f1 value: 86.82000000000001 - type: precision value: 85.64500000000001 - type: recall value: 89.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (rus-eng) type: mteb/tatoeba-bitext-mining config: rus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.1 - type: f1 value: 93.56666666666668 - type: precision value: 92.81666666666666 - type: recall value: 95.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (spa-eng) type: mteb/tatoeba-bitext-mining config: spa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.9 - type: f1 value: 98.6 - type: precision value: 98.45 - type: recall value: 98.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (hye-eng) type: mteb/tatoeba-bitext-mining config: hye-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.01347708894879 - type: f1 value: 93.51752021563343 - type: precision value: 92.82794249775381 - type: recall value: 95.01347708894879 - task: type: BitextMining dataset: name: MTEB Tatoeba (tel-eng) type: mteb/tatoeba-bitext-mining config: tel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.00854700854701 - type: f1 value: 96.08262108262107 - type: precision value: 95.65527065527067 - type: recall value: 97.00854700854701 - task: type: BitextMining dataset: name: MTEB Tatoeba (afr-eng) type: mteb/tatoeba-bitext-mining config: afr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.5 - type: f1 value: 95.39999999999999 - type: precision value: 94.88333333333333 - type: recall value: 96.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (mon-eng) type: mteb/tatoeba-bitext-mining config: mon-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.5909090909091 - type: f1 value: 95.49242424242425 - type: precision value: 94.9621212121212 - type: recall value: 96.5909090909091 - task: type: BitextMining dataset: name: MTEB Tatoeba (arz-eng) type: mteb/tatoeba-bitext-mining config: arz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.90566037735849 - type: f1 value: 81.85883997204752 - type: precision value: 80.54507337526205 - type: recall value: 84.90566037735849 - task: type: BitextMining dataset: name: MTEB Tatoeba (hrv-eng) type: mteb/tatoeba-bitext-mining config: hrv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.5 - type: f1 value: 96.75 - type: precision value: 96.38333333333333 - type: recall value: 97.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (nov-eng) type: mteb/tatoeba-bitext-mining config: nov-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.7704280155642 - type: f1 value: 82.99610894941635 - type: precision value: 81.32295719844358 - type: recall value: 86.7704280155642 - task: type: BitextMining dataset: name: MTEB Tatoeba (gsw-eng) type: mteb/tatoeba-bitext-mining config: gsw-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.52136752136752 - type: f1 value: 61.89662189662191 - type: precision value: 59.68660968660969 - type: recall value: 67.52136752136752 - task: type: BitextMining dataset: name: MTEB Tatoeba (nds-eng) type: mteb/tatoeba-bitext-mining config: nds-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.2 - type: f1 value: 86.32 - type: precision value: 85.015 - type: recall value: 89.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (ukr-eng) type: mteb/tatoeba-bitext-mining config: ukr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.0 - type: f1 value: 94.78333333333333 - type: precision value: 94.18333333333334 - type: recall value: 96.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (uzb-eng) type: mteb/tatoeba-bitext-mining config: uzb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.8785046728972 - type: f1 value: 80.54517133956385 - type: precision value: 79.154984423676 - type: recall value: 83.8785046728972 - task: type: BitextMining dataset: name: MTEB Tatoeba (lit-eng) type: mteb/tatoeba-bitext-mining config: lit-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.60000000000001 - type: f1 value: 92.01333333333334 - type: precision value: 91.28333333333333 - type: recall value: 93.60000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (ina-eng) type: mteb/tatoeba-bitext-mining config: ina-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.1 - type: f1 value: 96.26666666666667 - type: precision value: 95.85000000000001 - type: recall value: 97.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (lfn-eng) type: mteb/tatoeba-bitext-mining config: lfn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.3 - type: f1 value: 80.67833333333333 - type: precision value: 79.03928571428571 - type: recall value: 84.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (zsm-eng) type: mteb/tatoeba-bitext-mining config: zsm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.3 - type: f1 value: 96.48333333333332 - type: precision value: 96.08333333333331 - type: recall value: 97.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (ita-eng) type: mteb/tatoeba-bitext-mining config: ita-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.7 - type: f1 value: 94.66666666666667 - type: precision value: 94.16666666666667 - type: recall value: 95.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (cmn-eng) type: mteb/tatoeba-bitext-mining config: cmn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.2 - type: f1 value: 96.36666666666667 - type: precision value: 95.96666666666668 - type: recall value: 97.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (lvs-eng) type: mteb/tatoeba-bitext-mining config: lvs-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.3 - type: f1 value: 92.80666666666667 - type: precision value: 92.12833333333333 - type: recall value: 94.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (glg-eng) type: mteb/tatoeba-bitext-mining config: glg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.0 - type: f1 value: 96.22333333333334 - type: precision value: 95.875 - type: recall value: 97.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (ceb-eng) type: mteb/tatoeba-bitext-mining config: ceb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.33333333333333 - type: f1 value: 70.78174603174602 - type: precision value: 69.28333333333332 - type: recall value: 74.33333333333333 - task: type: BitextMining dataset: name: MTEB Tatoeba (bre-eng) type: mteb/tatoeba-bitext-mining config: bre-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 37.6 - type: f1 value: 32.938348952090365 - type: precision value: 31.2811038961039 - type: recall value: 37.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (ben-eng) type: mteb/tatoeba-bitext-mining config: ben-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.5 - type: f1 value: 89.13333333333333 - type: precision value: 88.03333333333333 - type: recall value: 91.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (swg-eng) type: mteb/tatoeba-bitext-mining config: swg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.14285714285714 - type: f1 value: 77.67857142857143 - type: precision value: 75.59523809523809 - type: recall value: 82.14285714285714 - task: type: BitextMining dataset: name: MTEB Tatoeba (arq-eng) type: mteb/tatoeba-bitext-mining config: arq-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 69.0450054884742 - type: f1 value: 63.070409283362075 - type: precision value: 60.58992781824835 - type: recall value: 69.0450054884742 - task: type: BitextMining dataset: name: MTEB Tatoeba (kab-eng) type: mteb/tatoeba-bitext-mining config: kab-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.1 - type: f1 value: 57.848333333333336 - type: precision value: 55.69500000000001 - type: recall value: 63.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (fra-eng) type: mteb/tatoeba-bitext-mining config: fra-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 95.01666666666667 - type: precision value: 94.5 - type: recall value: 96.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (por-eng) type: mteb/tatoeba-bitext-mining config: por-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.89999999999999 - type: f1 value: 94.90666666666667 - type: precision value: 94.425 - type: recall value: 95.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tat-eng) type: mteb/tatoeba-bitext-mining config: tat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.6 - type: f1 value: 84.61333333333333 - type: precision value: 83.27 - type: recall value: 87.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (oci-eng) type: mteb/tatoeba-bitext-mining config: oci-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.4 - type: f1 value: 71.90746031746032 - type: precision value: 70.07027777777778 - type: recall value: 76.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (pol-eng) type: mteb/tatoeba-bitext-mining config: pol-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.89999999999999 - type: f1 value: 97.26666666666667 - type: precision value: 96.95 - type: recall value: 97.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (war-eng) type: mteb/tatoeba-bitext-mining config: war-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 78.8 - type: f1 value: 74.39555555555555 - type: precision value: 72.59416666666667 - type: recall value: 78.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (aze-eng) type: mteb/tatoeba-bitext-mining config: aze-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.19999999999999 - type: f1 value: 93.78999999999999 - type: precision value: 93.125 - type: recall value: 95.19999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (vie-eng) type: mteb/tatoeba-bitext-mining config: vie-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.1 - type: precision value: 96.75 - type: recall value: 97.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (nno-eng) type: mteb/tatoeba-bitext-mining config: nno-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.6 - type: f1 value: 94.25666666666666 - type: precision value: 93.64166666666668 - type: recall value: 95.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cha-eng) type: mteb/tatoeba-bitext-mining config: cha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 56.934306569343065 - type: f1 value: 51.461591936044485 - type: precision value: 49.37434827945776 - type: recall value: 56.934306569343065 - task: type: BitextMining dataset: name: MTEB Tatoeba (mhr-eng) type: mteb/tatoeba-bitext-mining config: mhr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 20.200000000000003 - type: f1 value: 16.91799284049284 - type: precision value: 15.791855158730158 - type: recall value: 20.200000000000003 - task: type: BitextMining dataset: name: MTEB Tatoeba (dan-eng) type: mteb/tatoeba-bitext-mining config: dan-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.2 - type: f1 value: 95.3 - type: precision value: 94.85 - type: recall value: 96.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (ell-eng) type: mteb/tatoeba-bitext-mining config: ell-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.3 - type: f1 value: 95.11666666666667 - type: precision value: 94.53333333333333 - type: recall value: 96.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (amh-eng) type: mteb/tatoeba-bitext-mining config: amh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.88095238095238 - type: f1 value: 87.14285714285714 - type: precision value: 85.96230158730161 - type: recall value: 89.88095238095238 - task: type: BitextMining dataset: name: MTEB Tatoeba (pam-eng) type: mteb/tatoeba-bitext-mining config: pam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 24.099999999999998 - type: f1 value: 19.630969083349783 - type: precision value: 18.275094905094907 - type: recall value: 24.099999999999998 - task: type: BitextMining dataset: name: MTEB Tatoeba (hsb-eng) type: mteb/tatoeba-bitext-mining config: hsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.4368530020704 - type: f1 value: 79.45183870649709 - type: precision value: 77.7432712215321 - type: recall value: 83.4368530020704 - task: type: BitextMining dataset: name: MTEB Tatoeba (srp-eng) type: mteb/tatoeba-bitext-mining config: srp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.8 - type: f1 value: 94.53333333333333 - type: precision value: 93.91666666666666 - type: recall value: 95.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (epo-eng) type: mteb/tatoeba-bitext-mining config: epo-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.8 - type: f1 value: 98.48333333333332 - type: precision value: 98.33333333333334 - type: recall value: 98.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (kzj-eng) type: mteb/tatoeba-bitext-mining config: kzj-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 17.5 - type: f1 value: 14.979285714285714 - type: precision value: 14.23235060690943 - type: recall value: 17.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (awa-eng) type: mteb/tatoeba-bitext-mining config: awa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.93939393939394 - type: f1 value: 91.991341991342 - type: precision value: 91.05339105339105 - type: recall value: 93.93939393939394 - task: type: BitextMining dataset: name: MTEB Tatoeba (fao-eng) type: mteb/tatoeba-bitext-mining config: fao-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.31297709923665 - type: f1 value: 86.76844783715012 - type: precision value: 85.63613231552164 - type: recall value: 89.31297709923665 - task: type: BitextMining dataset: name: MTEB Tatoeba (mal-eng) type: mteb/tatoeba-bitext-mining config: mal-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 99.12663755458514 - type: f1 value: 98.93255701115964 - type: precision value: 98.83551673944687 - type: recall value: 99.12663755458514 - task: type: BitextMining dataset: name: MTEB Tatoeba (ile-eng) type: mteb/tatoeba-bitext-mining config: ile-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.0 - type: f1 value: 89.77999999999999 - type: precision value: 88.78333333333333 - type: recall value: 92.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (bos-eng) type: mteb/tatoeba-bitext-mining config: bos-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.89265536723164 - type: f1 value: 95.85687382297553 - type: precision value: 95.33898305084746 - type: recall value: 96.89265536723164 - task: type: BitextMining dataset: name: MTEB Tatoeba (cor-eng) type: mteb/tatoeba-bitext-mining config: cor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 14.6 - type: f1 value: 11.820611790170615 - type: precision value: 11.022616224355355 - type: recall value: 14.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cat-eng) type: mteb/tatoeba-bitext-mining config: cat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.89999999999999 - type: f1 value: 94.93333333333334 - type: precision value: 94.48666666666666 - type: recall value: 95.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (eus-eng) type: mteb/tatoeba-bitext-mining config: eus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.6 - type: f1 value: 84.72333333333334 - type: precision value: 83.44166666666666 - type: recall value: 87.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (yue-eng) type: mteb/tatoeba-bitext-mining config: yue-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.8 - type: f1 value: 93.47333333333333 - type: precision value: 92.875 - type: recall value: 94.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (swe-eng) type: mteb/tatoeba-bitext-mining config: swe-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.6 - type: f1 value: 95.71666666666665 - type: precision value: 95.28333333333335 - type: recall value: 96.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (dtp-eng) type: mteb/tatoeba-bitext-mining config: dtp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 17.8 - type: f1 value: 14.511074040901628 - type: precision value: 13.503791000666002 - type: recall value: 17.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (kat-eng) type: mteb/tatoeba-bitext-mining config: kat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.10187667560321 - type: f1 value: 92.46648793565683 - type: precision value: 91.71134941912423 - type: recall value: 94.10187667560321 - task: type: BitextMining dataset: name: MTEB Tatoeba (jpn-eng) type: mteb/tatoeba-bitext-mining config: jpn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.0 - type: f1 value: 96.11666666666666 - type: precision value: 95.68333333333334 - type: recall value: 97.0 - task: type: BitextMining dataset: name: MTEB Tatoeba (csb-eng) type: mteb/tatoeba-bitext-mining config: csb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 72.72727272727273 - type: f1 value: 66.58949745906267 - type: precision value: 63.86693017127799 - type: recall value: 72.72727272727273 - task: type: BitextMining dataset: name: MTEB Tatoeba (xho-eng) type: mteb/tatoeba-bitext-mining config: xho-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.14084507042254 - type: f1 value: 88.26291079812206 - type: precision value: 87.32394366197182 - type: recall value: 90.14084507042254 - task: type: BitextMining dataset: name: MTEB Tatoeba (orv-eng) type: mteb/tatoeba-bitext-mining config: orv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 64.67065868263472 - type: f1 value: 58.2876627696987 - type: precision value: 55.79255774165953 - type: recall value: 64.67065868263472 - task: type: BitextMining dataset: name: MTEB Tatoeba (ind-eng) type: mteb/tatoeba-bitext-mining config: ind-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.6 - type: f1 value: 94.41666666666667 - type: precision value: 93.85 - type: recall value: 95.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (tuk-eng) type: mteb/tatoeba-bitext-mining config: tuk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 55.172413793103445 - type: f1 value: 49.63992493549144 - type: precision value: 47.71405113769646 - type: recall value: 55.172413793103445 - task: type: BitextMining dataset: name: MTEB Tatoeba (max-eng) type: mteb/tatoeba-bitext-mining config: max-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.46478873239437 - type: f1 value: 73.4417616811983 - type: precision value: 71.91607981220658 - type: recall value: 77.46478873239437 - task: type: BitextMining dataset: name: MTEB Tatoeba (swh-eng) type: mteb/tatoeba-bitext-mining config: swh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.61538461538461 - type: f1 value: 80.91452991452994 - type: precision value: 79.33760683760683 - type: recall value: 84.61538461538461 - task: type: BitextMining dataset: name: MTEB Tatoeba (hin-eng) type: mteb/tatoeba-bitext-mining config: hin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.2 - type: f1 value: 97.6 - type: precision value: 97.3 - type: recall value: 98.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (dsb-eng) type: mteb/tatoeba-bitext-mining config: dsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 75.5741127348643 - type: f1 value: 72.00417536534445 - type: precision value: 70.53467872883321 - type: recall value: 75.5741127348643 - task: type: BitextMining dataset: name: MTEB Tatoeba (ber-eng) type: mteb/tatoeba-bitext-mining config: ber-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 62.2 - type: f1 value: 55.577460317460314 - type: precision value: 52.98583333333333 - type: recall value: 62.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (tam-eng) type: mteb/tatoeba-bitext-mining config: tam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.18241042345277 - type: f1 value: 90.6468124709167 - type: precision value: 89.95656894679696 - type: recall value: 92.18241042345277 - task: type: BitextMining dataset: name: MTEB Tatoeba (slk-eng) type: mteb/tatoeba-bitext-mining config: slk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 95.13333333333333 - type: precision value: 94.66666666666667 - type: recall value: 96.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (tgl-eng) type: mteb/tatoeba-bitext-mining config: tgl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.8 - type: f1 value: 95.85000000000001 - type: precision value: 95.39999999999999 - type: recall value: 96.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (ast-eng) type: mteb/tatoeba-bitext-mining config: ast-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.1259842519685 - type: f1 value: 89.76377952755905 - type: precision value: 88.71391076115485 - type: recall value: 92.1259842519685 - task: type: BitextMining dataset: name: MTEB Tatoeba (mkd-eng) type: mteb/tatoeba-bitext-mining config: mkd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.1 - type: f1 value: 92.49 - type: precision value: 91.725 - type: recall value: 94.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (khm-eng) type: mteb/tatoeba-bitext-mining config: khm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.5623268698061 - type: f1 value: 73.27364463791058 - type: precision value: 71.51947852086357 - type: recall value: 77.5623268698061 - task: type: BitextMining dataset: name: MTEB Tatoeba (ces-eng) type: mteb/tatoeba-bitext-mining config: ces-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.56666666666666 - type: precision value: 96.16666666666667 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tzl-eng) type: mteb/tatoeba-bitext-mining config: tzl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 66.34615384615384 - type: f1 value: 61.092032967032964 - type: precision value: 59.27197802197802 - type: recall value: 66.34615384615384 - task: type: BitextMining dataset: name: MTEB Tatoeba (urd-eng) type: mteb/tatoeba-bitext-mining config: urd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.41190476190476 - type: precision value: 92.7 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ara-eng) type: mteb/tatoeba-bitext-mining config: ara-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.10000000000001 - type: f1 value: 91.10000000000001 - type: precision value: 90.13333333333333 - type: recall value: 93.10000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (kor-eng) type: mteb/tatoeba-bitext-mining config: kor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.97333333333334 - type: precision value: 91.14166666666667 - type: recall value: 93.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (yid-eng) type: mteb/tatoeba-bitext-mining config: yid-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.21698113207547 - type: f1 value: 90.3796046720575 - type: precision value: 89.56367924528303 - type: recall value: 92.21698113207547 - task: type: BitextMining dataset: name: MTEB Tatoeba (fin-eng) type: mteb/tatoeba-bitext-mining config: fin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.6 - type: f1 value: 96.91666666666667 - type: precision value: 96.6 - type: recall value: 97.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (tha-eng) type: mteb/tatoeba-bitext-mining config: tha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.44525547445255 - type: f1 value: 96.71532846715328 - type: precision value: 96.35036496350365 - type: recall value: 97.44525547445255 - task: type: BitextMining dataset: name: MTEB Tatoeba (wuu-eng) type: mteb/tatoeba-bitext-mining config: wuu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.1 - type: f1 value: 92.34000000000002 - type: precision value: 91.49166666666667 - type: recall value: 94.1 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 3.2910000000000004 - type: map_at_10 value: 10.373000000000001 - type: map_at_100 value: 15.612 - type: map_at_1000 value: 17.06 - type: map_at_3 value: 6.119 - type: map_at_5 value: 7.917000000000001 - type: mrr_at_1 value: 44.897999999999996 - type: mrr_at_10 value: 56.054 - type: mrr_at_100 value: 56.82000000000001 - type: mrr_at_1000 value: 56.82000000000001 - type: mrr_at_3 value: 52.381 - type: mrr_at_5 value: 53.81 - type: ndcg_at_1 value: 42.857 - type: ndcg_at_10 value: 27.249000000000002 - type: ndcg_at_100 value: 36.529 - type: ndcg_at_1000 value: 48.136 - type: ndcg_at_3 value: 33.938 - type: ndcg_at_5 value: 29.951 - type: precision_at_1 value: 44.897999999999996 - type: precision_at_10 value: 22.653000000000002 - type: precision_at_100 value: 7.000000000000001 - type: precision_at_1000 value: 1.48 - type: precision_at_3 value: 32.653 - type: precision_at_5 value: 27.755000000000003 - type: recall_at_1 value: 3.2910000000000004 - type: recall_at_10 value: 16.16 - type: recall_at_100 value: 43.908 - type: recall_at_1000 value: 79.823 - type: recall_at_3 value: 7.156 - type: recall_at_5 value: 10.204 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.05879999999999 - type: ap value: 14.609748142799111 - type: f1 value: 54.878956295843096 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 64.61799660441426 - type: f1 value: 64.8698191961434 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.32860036611885 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 88.34714192048638 - type: cos_sim_ap value: 80.26732975975634 - type: cos_sim_f1 value: 73.53415148134374 - type: cos_sim_precision value: 69.34767360299276 - type: cos_sim_recall value: 78.25857519788919 - type: dot_accuracy value: 88.34714192048638 - type: dot_ap value: 80.26733698491206 - type: dot_f1 value: 73.53415148134374 - type: dot_precision value: 69.34767360299276 - type: dot_recall value: 78.25857519788919 - type: euclidean_accuracy value: 88.34714192048638 - type: euclidean_ap value: 80.26734337771738 - type: euclidean_f1 value: 73.53415148134374 - type: euclidean_precision value: 69.34767360299276 - type: euclidean_recall value: 78.25857519788919 - type: manhattan_accuracy value: 88.30541813196639 - type: manhattan_ap value: 80.19415808104145 - type: manhattan_f1 value: 73.55143870713441 - type: manhattan_precision value: 73.25307511122743 - type: manhattan_recall value: 73.85224274406332 - type: max_accuracy value: 88.34714192048638 - type: max_ap value: 80.26734337771738 - type: max_f1 value: 73.55143870713441 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.81061047075717 - type: cos_sim_ap value: 87.11747055081017 - type: cos_sim_f1 value: 80.04355498817256 - type: cos_sim_precision value: 78.1165262000733 - type: cos_sim_recall value: 82.06806282722513 - type: dot_accuracy value: 89.81061047075717 - type: dot_ap value: 87.11746902745236 - type: dot_f1 value: 80.04355498817256 - type: dot_precision value: 78.1165262000733 - type: dot_recall value: 82.06806282722513 - type: euclidean_accuracy value: 89.81061047075717 - type: euclidean_ap value: 87.11746919324248 - type: euclidean_f1 value: 80.04355498817256 - type: euclidean_precision value: 78.1165262000733 - type: euclidean_recall value: 82.06806282722513 - type: manhattan_accuracy value: 89.79508673885202 - type: manhattan_ap value: 87.11074390832218 - type: manhattan_f1 value: 80.13002540726349 - type: manhattan_precision value: 77.83826945412311 - type: manhattan_recall value: 82.56082537727133 - type: max_accuracy value: 89.81061047075717 - type: max_ap value: 87.11747055081017 - type: max_f1 value: 80.13002540726349 --- # yoeven/multilingual-e5-large-instruct-Q5_0-GGUF This model was converted to GGUF format from [`intfloat/multilingual-e5-large-instruct`](https://huggingface.co/intfloat/multilingual-e5-large-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/intfloat/multilingual-e5-large-instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo yoeven/multilingual-e5-large-instruct-Q5_0-GGUF --hf-file multilingual-e5-large-instruct-q5_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo yoeven/multilingual-e5-large-instruct-Q5_0-GGUF --hf-file multilingual-e5-large-instruct-q5_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo yoeven/multilingual-e5-large-instruct-Q5_0-GGUF --hf-file multilingual-e5-large-instruct-q5_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo yoeven/multilingual-e5-large-instruct-Q5_0-GGUF --hf-file multilingual-e5-large-instruct-q5_0.gguf -c 2048 ```
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
LXC1999/gte-Qwen2-7B-instruct-Q6_K-GGUF
LXC1999
sentence-similarity
[ "sentence-transformers", "gguf", "mteb", "transformers", "Qwen2", "sentence-similarity", "llama-cpp", "gguf-my-repo", "base_model:Alibaba-NLP/gte-Qwen2-7B-instruct", "base_model:quantized:Alibaba-NLP/gte-Qwen2-7B-instruct", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
1,739
1,739
9
0
--- base_model: Alibaba-NLP/gte-Qwen2-7B-instruct license: apache-2.0 tags: - mteb - sentence-transformers - transformers - Qwen2 - sentence-similarity - llama-cpp - gguf-my-repo model-index: - name: gte-qwen2-7B-instruct results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 91.31343283582089 - type: ap value: 67.64251402604096 - type: f1 value: 87.53372530755692 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 97.497825 - type: ap value: 96.30329547047529 - type: f1 value: 97.49769793778039 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 62.564 - type: f1 value: 60.975777935041066 - task: type: Retrieval dataset: name: MTEB ArguAna type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 36.486000000000004 - type: map_at_10 value: 54.842 - type: map_at_100 value: 55.206999999999994 - type: map_at_1000 value: 55.206999999999994 - type: map_at_3 value: 49.893 - type: map_at_5 value: 53.105000000000004 - type: mrr_at_1 value: 37.34 - type: mrr_at_10 value: 55.143 - type: mrr_at_100 value: 55.509 - type: mrr_at_1000 value: 55.509 - type: mrr_at_3 value: 50.212999999999994 - type: mrr_at_5 value: 53.432 - type: ndcg_at_1 value: 36.486000000000004 - type: ndcg_at_10 value: 64.273 - type: ndcg_at_100 value: 65.66199999999999 - type: ndcg_at_1000 value: 65.66199999999999 - type: ndcg_at_3 value: 54.352999999999994 - type: ndcg_at_5 value: 60.131 - type: precision_at_1 value: 36.486000000000004 - type: precision_at_10 value: 9.395000000000001 - type: precision_at_100 value: 0.996 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.428 - type: precision_at_5 value: 16.259 - type: recall_at_1 value: 36.486000000000004 - type: recall_at_10 value: 93.95400000000001 - type: recall_at_100 value: 99.644 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 67.283 - type: recall_at_5 value: 81.294 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 56.461169803700564 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 51.73600434466286 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 67.57827065898053 - type: mrr value: 79.08136569493911 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 83.53324575999243 - type: cos_sim_spearman value: 81.37173362822374 - type: euclidean_pearson value: 82.19243335103444 - type: euclidean_spearman value: 81.33679307304334 - type: manhattan_pearson value: 82.38752665975699 - type: manhattan_spearman value: 81.31510583189689 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 87.56818181818181 - type: f1 value: 87.25826722019875 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 50.09239610327673 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 46.64733054606282 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: map_at_1 value: 33.997 - type: map_at_10 value: 48.176 - type: map_at_100 value: 49.82 - type: map_at_1000 value: 49.924 - type: map_at_3 value: 43.626 - type: map_at_5 value: 46.275 - type: mrr_at_1 value: 42.059999999999995 - type: mrr_at_10 value: 53.726 - type: mrr_at_100 value: 54.398 - type: mrr_at_1000 value: 54.416 - type: mrr_at_3 value: 50.714999999999996 - type: mrr_at_5 value: 52.639 - type: ndcg_at_1 value: 42.059999999999995 - type: ndcg_at_10 value: 55.574999999999996 - type: ndcg_at_100 value: 60.744 - type: ndcg_at_1000 value: 61.85699999999999 - type: ndcg_at_3 value: 49.363 - type: ndcg_at_5 value: 52.44 - type: precision_at_1 value: 42.059999999999995 - type: precision_at_10 value: 11.101999999999999 - type: precision_at_100 value: 1.73 - type: precision_at_1000 value: 0.218 - type: precision_at_3 value: 24.464 - type: precision_at_5 value: 18.026 - type: recall_at_1 value: 33.997 - type: recall_at_10 value: 70.35900000000001 - type: recall_at_100 value: 91.642 - type: recall_at_1000 value: 97.977 - type: recall_at_3 value: 52.76 - type: recall_at_5 value: 61.148 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval type: BeIR/cqadupstack config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: map_at_1 value: 35.884 - type: map_at_10 value: 48.14 - type: map_at_100 value: 49.5 - type: map_at_1000 value: 49.63 - type: map_at_3 value: 44.646 - type: map_at_5 value: 46.617999999999995 - type: mrr_at_1 value: 44.458999999999996 - type: mrr_at_10 value: 53.751000000000005 - type: mrr_at_100 value: 54.37800000000001 - type: mrr_at_1000 value: 54.415 - type: mrr_at_3 value: 51.815 - type: mrr_at_5 value: 52.882 - type: ndcg_at_1 value: 44.458999999999996 - type: ndcg_at_10 value: 54.157 - type: ndcg_at_100 value: 58.362 - type: ndcg_at_1000 value: 60.178 - type: ndcg_at_3 value: 49.661 - type: ndcg_at_5 value: 51.74999999999999 - type: precision_at_1 value: 44.458999999999996 - type: precision_at_10 value: 10.248 - type: precision_at_100 value: 1.5890000000000002 - type: precision_at_1000 value: 0.207 - type: precision_at_3 value: 23.928 - type: precision_at_5 value: 16.878999999999998 - type: recall_at_1 value: 35.884 - type: recall_at_10 value: 64.798 - type: recall_at_100 value: 82.345 - type: recall_at_1000 value: 93.267 - type: recall_at_3 value: 51.847 - type: recall_at_5 value: 57.601 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval type: BeIR/cqadupstack config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 39.383 - type: map_at_10 value: 53.714 - type: map_at_100 value: 54.838 - type: map_at_1000 value: 54.87800000000001 - type: map_at_3 value: 50.114999999999995 - type: map_at_5 value: 52.153000000000006 - type: mrr_at_1 value: 45.016 - type: mrr_at_10 value: 56.732000000000006 - type: mrr_at_100 value: 57.411 - type: mrr_at_1000 value: 57.431 - type: mrr_at_3 value: 54.044000000000004 - type: mrr_at_5 value: 55.639 - type: ndcg_at_1 value: 45.016 - type: ndcg_at_10 value: 60.228 - type: ndcg_at_100 value: 64.277 - type: ndcg_at_1000 value: 65.07 - type: ndcg_at_3 value: 54.124 - type: ndcg_at_5 value: 57.147000000000006 - type: precision_at_1 value: 45.016 - type: precision_at_10 value: 9.937 - type: precision_at_100 value: 1.288 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 24.471999999999998 - type: precision_at_5 value: 16.991 - type: recall_at_1 value: 39.383 - type: recall_at_10 value: 76.175 - type: recall_at_100 value: 93.02 - type: recall_at_1000 value: 98.60900000000001 - type: recall_at_3 value: 60.265 - type: recall_at_5 value: 67.46600000000001 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval type: BeIR/cqadupstack config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: map_at_1 value: 27.426000000000002 - type: map_at_10 value: 37.397000000000006 - type: map_at_100 value: 38.61 - type: map_at_1000 value: 38.678000000000004 - type: map_at_3 value: 34.150999999999996 - type: map_at_5 value: 36.137 - type: mrr_at_1 value: 29.944 - type: mrr_at_10 value: 39.654 - type: mrr_at_100 value: 40.638000000000005 - type: mrr_at_1000 value: 40.691 - type: mrr_at_3 value: 36.817 - type: mrr_at_5 value: 38.524 - type: ndcg_at_1 value: 29.944 - type: ndcg_at_10 value: 43.094 - type: ndcg_at_100 value: 48.789 - type: ndcg_at_1000 value: 50.339999999999996 - type: ndcg_at_3 value: 36.984 - type: ndcg_at_5 value: 40.248 - type: precision_at_1 value: 29.944 - type: precision_at_10 value: 6.78 - type: precision_at_100 value: 1.024 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 15.895000000000001 - type: precision_at_5 value: 11.39 - type: recall_at_1 value: 27.426000000000002 - type: recall_at_10 value: 58.464000000000006 - type: recall_at_100 value: 84.193 - type: recall_at_1000 value: 95.52000000000001 - type: recall_at_3 value: 42.172 - type: recall_at_5 value: 50.101 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval type: BeIR/cqadupstack config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: map_at_1 value: 19.721 - type: map_at_10 value: 31.604 - type: map_at_100 value: 32.972 - type: map_at_1000 value: 33.077 - type: map_at_3 value: 27.218999999999998 - type: map_at_5 value: 29.53 - type: mrr_at_1 value: 25.0 - type: mrr_at_10 value: 35.843 - type: mrr_at_100 value: 36.785000000000004 - type: mrr_at_1000 value: 36.842000000000006 - type: mrr_at_3 value: 32.193 - type: mrr_at_5 value: 34.264 - type: ndcg_at_1 value: 25.0 - type: ndcg_at_10 value: 38.606 - type: ndcg_at_100 value: 44.272 - type: ndcg_at_1000 value: 46.527 - type: ndcg_at_3 value: 30.985000000000003 - type: ndcg_at_5 value: 34.43 - type: precision_at_1 value: 25.0 - type: precision_at_10 value: 7.811 - type: precision_at_100 value: 1.203 - type: precision_at_1000 value: 0.15 - type: precision_at_3 value: 15.423 - type: precision_at_5 value: 11.791 - type: recall_at_1 value: 19.721 - type: recall_at_10 value: 55.625 - type: recall_at_100 value: 79.34400000000001 - type: recall_at_1000 value: 95.208 - type: recall_at_3 value: 35.19 - type: recall_at_5 value: 43.626 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval type: BeIR/cqadupstack config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: map_at_1 value: 33.784 - type: map_at_10 value: 47.522 - type: map_at_100 value: 48.949999999999996 - type: map_at_1000 value: 49.038 - type: map_at_3 value: 43.284 - type: map_at_5 value: 45.629 - type: mrr_at_1 value: 41.482 - type: mrr_at_10 value: 52.830999999999996 - type: mrr_at_100 value: 53.559999999999995 - type: mrr_at_1000 value: 53.588 - type: mrr_at_3 value: 50.016000000000005 - type: mrr_at_5 value: 51.614000000000004 - type: ndcg_at_1 value: 41.482 - type: ndcg_at_10 value: 54.569 - type: ndcg_at_100 value: 59.675999999999995 - type: ndcg_at_1000 value: 60.989000000000004 - type: ndcg_at_3 value: 48.187000000000005 - type: ndcg_at_5 value: 51.183 - type: precision_at_1 value: 41.482 - type: precision_at_10 value: 10.221 - type: precision_at_100 value: 1.486 - type: precision_at_1000 value: 0.17500000000000002 - type: precision_at_3 value: 23.548 - type: precision_at_5 value: 16.805 - type: recall_at_1 value: 33.784 - type: recall_at_10 value: 69.798 - type: recall_at_100 value: 90.098 - type: recall_at_1000 value: 98.176 - type: recall_at_3 value: 52.127 - type: recall_at_5 value: 59.861 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval type: BeIR/cqadupstack config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: map_at_1 value: 28.038999999999998 - type: map_at_10 value: 41.904 - type: map_at_100 value: 43.36 - type: map_at_1000 value: 43.453 - type: map_at_3 value: 37.785999999999994 - type: map_at_5 value: 40.105000000000004 - type: mrr_at_1 value: 35.046 - type: mrr_at_10 value: 46.926 - type: mrr_at_100 value: 47.815000000000005 - type: mrr_at_1000 value: 47.849000000000004 - type: mrr_at_3 value: 44.273 - type: mrr_at_5 value: 45.774 - type: ndcg_at_1 value: 35.046 - type: ndcg_at_10 value: 48.937000000000005 - type: ndcg_at_100 value: 54.544000000000004 - type: ndcg_at_1000 value: 56.069 - type: ndcg_at_3 value: 42.858000000000004 - type: ndcg_at_5 value: 45.644 - type: precision_at_1 value: 35.046 - type: precision_at_10 value: 9.452 - type: precision_at_100 value: 1.429 - type: precision_at_1000 value: 0.173 - type: precision_at_3 value: 21.346999999999998 - type: precision_at_5 value: 15.342 - type: recall_at_1 value: 28.038999999999998 - type: recall_at_10 value: 64.59700000000001 - type: recall_at_100 value: 87.735 - type: recall_at_1000 value: 97.41300000000001 - type: recall_at_3 value: 47.368 - type: recall_at_5 value: 54.93900000000001 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: BeIR/cqadupstack config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 28.17291666666667 - type: map_at_10 value: 40.025749999999995 - type: map_at_100 value: 41.39208333333333 - type: map_at_1000 value: 41.499249999999996 - type: map_at_3 value: 36.347 - type: map_at_5 value: 38.41391666666667 - type: mrr_at_1 value: 33.65925 - type: mrr_at_10 value: 44.085499999999996 - type: mrr_at_100 value: 44.94116666666667 - type: mrr_at_1000 value: 44.9855 - type: mrr_at_3 value: 41.2815 - type: mrr_at_5 value: 42.91491666666666 - type: ndcg_at_1 value: 33.65925 - type: ndcg_at_10 value: 46.430833333333325 - type: ndcg_at_100 value: 51.761 - type: ndcg_at_1000 value: 53.50899999999999 - type: ndcg_at_3 value: 40.45133333333333 - type: ndcg_at_5 value: 43.31483333333334 - type: precision_at_1 value: 33.65925 - type: precision_at_10 value: 8.4995 - type: precision_at_100 value: 1.3210000000000004 - type: precision_at_1000 value: 0.16591666666666666 - type: precision_at_3 value: 19.165083333333335 - type: precision_at_5 value: 13.81816666666667 - type: recall_at_1 value: 28.17291666666667 - type: recall_at_10 value: 61.12624999999999 - type: recall_at_100 value: 83.97266666666667 - type: recall_at_1000 value: 95.66550000000001 - type: recall_at_3 value: 44.661249999999995 - type: recall_at_5 value: 51.983333333333334 - type: map_at_1 value: 17.936 - type: map_at_10 value: 27.399 - type: map_at_100 value: 28.632 - type: map_at_1000 value: 28.738000000000003 - type: map_at_3 value: 24.456 - type: map_at_5 value: 26.06 - type: mrr_at_1 value: 19.224 - type: mrr_at_10 value: 28.998 - type: mrr_at_100 value: 30.11 - type: mrr_at_1000 value: 30.177 - type: mrr_at_3 value: 26.247999999999998 - type: mrr_at_5 value: 27.708 - type: ndcg_at_1 value: 19.224 - type: ndcg_at_10 value: 32.911 - type: ndcg_at_100 value: 38.873999999999995 - type: ndcg_at_1000 value: 41.277 - type: ndcg_at_3 value: 27.142 - type: ndcg_at_5 value: 29.755 - type: precision_at_1 value: 19.224 - type: precision_at_10 value: 5.6930000000000005 - type: precision_at_100 value: 0.9259999999999999 - type: precision_at_1000 value: 0.126 - type: precision_at_3 value: 12.138 - type: precision_at_5 value: 8.909 - type: recall_at_1 value: 17.936 - type: recall_at_10 value: 48.096 - type: recall_at_100 value: 75.389 - type: recall_at_1000 value: 92.803 - type: recall_at_3 value: 32.812999999999995 - type: recall_at_5 value: 38.851 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval type: BeIR/cqadupstack config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: map_at_1 value: 24.681 - type: map_at_10 value: 34.892 - type: map_at_100 value: 35.996 - type: map_at_1000 value: 36.083 - type: map_at_3 value: 31.491999999999997 - type: map_at_5 value: 33.632 - type: mrr_at_1 value: 28.528 - type: mrr_at_10 value: 37.694 - type: mrr_at_100 value: 38.613 - type: mrr_at_1000 value: 38.668 - type: mrr_at_3 value: 34.714 - type: mrr_at_5 value: 36.616 - type: ndcg_at_1 value: 28.528 - type: ndcg_at_10 value: 40.703 - type: ndcg_at_100 value: 45.993 - type: ndcg_at_1000 value: 47.847 - type: ndcg_at_3 value: 34.622 - type: ndcg_at_5 value: 38.035999999999994 - type: precision_at_1 value: 28.528 - type: precision_at_10 value: 6.902 - type: precision_at_100 value: 1.0370000000000001 - type: precision_at_1000 value: 0.126 - type: precision_at_3 value: 15.798000000000002 - type: precision_at_5 value: 11.655999999999999 - type: recall_at_1 value: 24.681 - type: recall_at_10 value: 55.81 - type: recall_at_100 value: 79.785 - type: recall_at_1000 value: 92.959 - type: recall_at_3 value: 39.074 - type: recall_at_5 value: 47.568 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval type: BeIR/cqadupstack config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 18.627 - type: map_at_10 value: 27.872000000000003 - type: map_at_100 value: 29.237999999999996 - type: map_at_1000 value: 29.363 - type: map_at_3 value: 24.751 - type: map_at_5 value: 26.521 - type: mrr_at_1 value: 23.021 - type: mrr_at_10 value: 31.924000000000003 - type: mrr_at_100 value: 32.922000000000004 - type: mrr_at_1000 value: 32.988 - type: mrr_at_3 value: 29.192 - type: mrr_at_5 value: 30.798 - type: ndcg_at_1 value: 23.021 - type: ndcg_at_10 value: 33.535 - type: ndcg_at_100 value: 39.732 - type: ndcg_at_1000 value: 42.201 - type: ndcg_at_3 value: 28.153 - type: ndcg_at_5 value: 30.746000000000002 - type: precision_at_1 value: 23.021 - type: precision_at_10 value: 6.459 - type: precision_at_100 value: 1.1320000000000001 - type: precision_at_1000 value: 0.153 - type: precision_at_3 value: 13.719000000000001 - type: precision_at_5 value: 10.193000000000001 - type: recall_at_1 value: 18.627 - type: recall_at_10 value: 46.463 - type: recall_at_100 value: 74.226 - type: recall_at_1000 value: 91.28500000000001 - type: recall_at_3 value: 31.357000000000003 - type: recall_at_5 value: 38.067 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval type: BeIR/cqadupstack config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: map_at_1 value: 31.457 - type: map_at_10 value: 42.888 - type: map_at_100 value: 44.24 - type: map_at_1000 value: 44.327 - type: map_at_3 value: 39.588 - type: map_at_5 value: 41.423 - type: mrr_at_1 value: 37.126999999999995 - type: mrr_at_10 value: 47.083000000000006 - type: mrr_at_100 value: 47.997 - type: mrr_at_1000 value: 48.044 - type: mrr_at_3 value: 44.574000000000005 - type: mrr_at_5 value: 46.202 - type: ndcg_at_1 value: 37.126999999999995 - type: ndcg_at_10 value: 48.833 - type: ndcg_at_100 value: 54.327000000000005 - type: ndcg_at_1000 value: 56.011 - type: ndcg_at_3 value: 43.541999999999994 - type: ndcg_at_5 value: 46.127 - type: precision_at_1 value: 37.126999999999995 - type: precision_at_10 value: 8.376999999999999 - type: precision_at_100 value: 1.2309999999999999 - type: precision_at_1000 value: 0.146 - type: precision_at_3 value: 20.211000000000002 - type: precision_at_5 value: 14.16 - type: recall_at_1 value: 31.457 - type: recall_at_10 value: 62.369 - type: recall_at_100 value: 85.444 - type: recall_at_1000 value: 96.65599999999999 - type: recall_at_3 value: 47.961 - type: recall_at_5 value: 54.676 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval type: BeIR/cqadupstack config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: map_at_1 value: 27.139999999999997 - type: map_at_10 value: 38.801 - type: map_at_100 value: 40.549 - type: map_at_1000 value: 40.802 - type: map_at_3 value: 35.05 - type: map_at_5 value: 36.884 - type: mrr_at_1 value: 33.004 - type: mrr_at_10 value: 43.864 - type: mrr_at_100 value: 44.667 - type: mrr_at_1000 value: 44.717 - type: mrr_at_3 value: 40.777 - type: mrr_at_5 value: 42.319 - type: ndcg_at_1 value: 33.004 - type: ndcg_at_10 value: 46.022 - type: ndcg_at_100 value: 51.542 - type: ndcg_at_1000 value: 53.742000000000004 - type: ndcg_at_3 value: 39.795 - type: ndcg_at_5 value: 42.272 - type: precision_at_1 value: 33.004 - type: precision_at_10 value: 9.012 - type: precision_at_100 value: 1.7770000000000001 - type: precision_at_1000 value: 0.26 - type: precision_at_3 value: 19.038 - type: precision_at_5 value: 13.675999999999998 - type: recall_at_1 value: 27.139999999999997 - type: recall_at_10 value: 60.961 - type: recall_at_100 value: 84.451 - type: recall_at_1000 value: 98.113 - type: recall_at_3 value: 43.001 - type: recall_at_5 value: 49.896 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 22.076999999999998 - type: map_at_10 value: 35.44 - type: map_at_100 value: 37.651 - type: map_at_1000 value: 37.824999999999996 - type: map_at_3 value: 30.764999999999997 - type: map_at_5 value: 33.26 - type: mrr_at_1 value: 50.163000000000004 - type: mrr_at_10 value: 61.207 - type: mrr_at_100 value: 61.675000000000004 - type: mrr_at_1000 value: 61.692 - type: mrr_at_3 value: 58.60999999999999 - type: mrr_at_5 value: 60.307 - type: ndcg_at_1 value: 50.163000000000004 - type: ndcg_at_10 value: 45.882 - type: ndcg_at_100 value: 53.239999999999995 - type: ndcg_at_1000 value: 55.852000000000004 - type: ndcg_at_3 value: 40.514 - type: ndcg_at_5 value: 42.038 - type: precision_at_1 value: 50.163000000000004 - type: precision_at_10 value: 13.466000000000001 - type: precision_at_100 value: 2.164 - type: precision_at_1000 value: 0.266 - type: precision_at_3 value: 29.707 - type: precision_at_5 value: 21.694 - type: recall_at_1 value: 22.076999999999998 - type: recall_at_10 value: 50.193 - type: recall_at_100 value: 74.993 - type: recall_at_1000 value: 89.131 - type: recall_at_3 value: 35.472 - type: recall_at_5 value: 41.814 - task: type: Retrieval dataset: name: MTEB DBPedia type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 9.953 - type: map_at_10 value: 24.515 - type: map_at_100 value: 36.173 - type: map_at_1000 value: 38.351 - type: map_at_3 value: 16.592000000000002 - type: map_at_5 value: 20.036 - type: mrr_at_1 value: 74.25 - type: mrr_at_10 value: 81.813 - type: mrr_at_100 value: 82.006 - type: mrr_at_1000 value: 82.011 - type: mrr_at_3 value: 80.875 - type: mrr_at_5 value: 81.362 - type: ndcg_at_1 value: 62.5 - type: ndcg_at_10 value: 52.42 - type: ndcg_at_100 value: 56.808 - type: ndcg_at_1000 value: 63.532999999999994 - type: ndcg_at_3 value: 56.654 - type: ndcg_at_5 value: 54.18300000000001 - type: precision_at_1 value: 74.25 - type: precision_at_10 value: 42.699999999999996 - type: precision_at_100 value: 13.675 - type: precision_at_1000 value: 2.664 - type: precision_at_3 value: 60.5 - type: precision_at_5 value: 52.800000000000004 - type: recall_at_1 value: 9.953 - type: recall_at_10 value: 30.253999999999998 - type: recall_at_100 value: 62.516000000000005 - type: recall_at_1000 value: 84.163 - type: recall_at_3 value: 18.13 - type: recall_at_5 value: 22.771 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 79.455 - type: f1 value: 74.16798697647569 - task: type: Retrieval dataset: name: MTEB FEVER type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 87.531 - type: map_at_10 value: 93.16799999999999 - type: map_at_100 value: 93.341 - type: map_at_1000 value: 93.349 - type: map_at_3 value: 92.444 - type: map_at_5 value: 92.865 - type: mrr_at_1 value: 94.014 - type: mrr_at_10 value: 96.761 - type: mrr_at_100 value: 96.762 - type: mrr_at_1000 value: 96.762 - type: mrr_at_3 value: 96.672 - type: mrr_at_5 value: 96.736 - type: ndcg_at_1 value: 94.014 - type: ndcg_at_10 value: 95.112 - type: ndcg_at_100 value: 95.578 - type: ndcg_at_1000 value: 95.68900000000001 - type: ndcg_at_3 value: 94.392 - type: ndcg_at_5 value: 94.72500000000001 - type: precision_at_1 value: 94.014 - type: precision_at_10 value: 11.065 - type: precision_at_100 value: 1.157 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 35.259 - type: precision_at_5 value: 21.599 - type: recall_at_1 value: 87.531 - type: recall_at_10 value: 97.356 - type: recall_at_100 value: 98.965 - type: recall_at_1000 value: 99.607 - type: recall_at_3 value: 95.312 - type: recall_at_5 value: 96.295 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 32.055 - type: map_at_10 value: 53.114 - type: map_at_100 value: 55.235 - type: map_at_1000 value: 55.345 - type: map_at_3 value: 45.854 - type: map_at_5 value: 50.025 - type: mrr_at_1 value: 60.34 - type: mrr_at_10 value: 68.804 - type: mrr_at_100 value: 69.309 - type: mrr_at_1000 value: 69.32199999999999 - type: mrr_at_3 value: 66.40899999999999 - type: mrr_at_5 value: 67.976 - type: ndcg_at_1 value: 60.34 - type: ndcg_at_10 value: 62.031000000000006 - type: ndcg_at_100 value: 68.00500000000001 - type: ndcg_at_1000 value: 69.286 - type: ndcg_at_3 value: 56.355999999999995 - type: ndcg_at_5 value: 58.687 - type: precision_at_1 value: 60.34 - type: precision_at_10 value: 17.176 - type: precision_at_100 value: 2.36 - type: precision_at_1000 value: 0.259 - type: precision_at_3 value: 37.14 - type: precision_at_5 value: 27.809 - type: recall_at_1 value: 32.055 - type: recall_at_10 value: 70.91 - type: recall_at_100 value: 91.83 - type: recall_at_1000 value: 98.871 - type: recall_at_3 value: 51.202999999999996 - type: recall_at_5 value: 60.563 - task: type: Retrieval dataset: name: MTEB HotpotQA type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 43.68 - type: map_at_10 value: 64.389 - type: map_at_100 value: 65.24 - type: map_at_1000 value: 65.303 - type: map_at_3 value: 61.309000000000005 - type: map_at_5 value: 63.275999999999996 - type: mrr_at_1 value: 87.36 - type: mrr_at_10 value: 91.12 - type: mrr_at_100 value: 91.227 - type: mrr_at_1000 value: 91.229 - type: mrr_at_3 value: 90.57600000000001 - type: mrr_at_5 value: 90.912 - type: ndcg_at_1 value: 87.36 - type: ndcg_at_10 value: 73.076 - type: ndcg_at_100 value: 75.895 - type: ndcg_at_1000 value: 77.049 - type: ndcg_at_3 value: 68.929 - type: ndcg_at_5 value: 71.28 - type: precision_at_1 value: 87.36 - type: precision_at_10 value: 14.741000000000001 - type: precision_at_100 value: 1.694 - type: precision_at_1000 value: 0.185 - type: precision_at_3 value: 43.043 - type: precision_at_5 value: 27.681 - type: recall_at_1 value: 43.68 - type: recall_at_10 value: 73.707 - type: recall_at_100 value: 84.7 - type: recall_at_1000 value: 92.309 - type: recall_at_3 value: 64.564 - type: recall_at_5 value: 69.203 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 96.75399999999999 - type: ap value: 95.29389839242187 - type: f1 value: 96.75348377433475 - task: type: Retrieval dataset: name: MTEB MSMARCO type: mteb/msmarco config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 25.176 - type: map_at_10 value: 38.598 - type: map_at_100 value: 39.707 - type: map_at_1000 value: 39.744 - type: map_at_3 value: 34.566 - type: map_at_5 value: 36.863 - type: mrr_at_1 value: 25.874000000000002 - type: mrr_at_10 value: 39.214 - type: mrr_at_100 value: 40.251 - type: mrr_at_1000 value: 40.281 - type: mrr_at_3 value: 35.291 - type: mrr_at_5 value: 37.545 - type: ndcg_at_1 value: 25.874000000000002 - type: ndcg_at_10 value: 45.98 - type: ndcg_at_100 value: 51.197 - type: ndcg_at_1000 value: 52.073 - type: ndcg_at_3 value: 37.785999999999994 - type: ndcg_at_5 value: 41.870000000000005 - type: precision_at_1 value: 25.874000000000002 - type: precision_at_10 value: 7.181 - type: precision_at_100 value: 0.979 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 16.051000000000002 - type: precision_at_5 value: 11.713 - type: recall_at_1 value: 25.176 - type: recall_at_10 value: 68.67699999999999 - type: recall_at_100 value: 92.55 - type: recall_at_1000 value: 99.164 - type: recall_at_3 value: 46.372 - type: recall_at_5 value: 56.16 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 99.03784769721841 - type: f1 value: 98.97791641821495 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 91.88326493388054 - type: f1 value: 73.74809928034335 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 85.41358439811701 - type: f1 value: 83.503679460639 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 89.77135171486215 - type: f1 value: 88.89843747468366 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 46.22695362087359 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 44.132372165849425 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 33.35680810650402 - type: mrr value: 34.72625715637218 - task: type: Retrieval dataset: name: MTEB NFCorpus type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 7.165000000000001 - type: map_at_10 value: 15.424 - type: map_at_100 value: 20.28 - type: map_at_1000 value: 22.065 - type: map_at_3 value: 11.236 - type: map_at_5 value: 13.025999999999998 - type: mrr_at_1 value: 51.702999999999996 - type: mrr_at_10 value: 59.965 - type: mrr_at_100 value: 60.667 - type: mrr_at_1000 value: 60.702999999999996 - type: mrr_at_3 value: 58.772000000000006 - type: mrr_at_5 value: 59.267 - type: ndcg_at_1 value: 49.536 - type: ndcg_at_10 value: 40.6 - type: ndcg_at_100 value: 37.848 - type: ndcg_at_1000 value: 46.657 - type: ndcg_at_3 value: 46.117999999999995 - type: ndcg_at_5 value: 43.619 - type: precision_at_1 value: 51.393 - type: precision_at_10 value: 30.31 - type: precision_at_100 value: 9.972 - type: precision_at_1000 value: 2.329 - type: precision_at_3 value: 43.137 - type: precision_at_5 value: 37.585 - type: recall_at_1 value: 7.165000000000001 - type: recall_at_10 value: 19.689999999999998 - type: recall_at_100 value: 39.237 - type: recall_at_1000 value: 71.417 - type: recall_at_3 value: 12.247 - type: recall_at_5 value: 14.902999999999999 - task: type: Retrieval dataset: name: MTEB NQ type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 42.653999999999996 - type: map_at_10 value: 59.611999999999995 - type: map_at_100 value: 60.32300000000001 - type: map_at_1000 value: 60.336 - type: map_at_3 value: 55.584999999999994 - type: map_at_5 value: 58.19 - type: mrr_at_1 value: 47.683 - type: mrr_at_10 value: 62.06700000000001 - type: mrr_at_100 value: 62.537 - type: mrr_at_1000 value: 62.544999999999995 - type: mrr_at_3 value: 59.178 - type: mrr_at_5 value: 61.034 - type: ndcg_at_1 value: 47.654 - type: ndcg_at_10 value: 67.001 - type: ndcg_at_100 value: 69.73899999999999 - type: ndcg_at_1000 value: 69.986 - type: ndcg_at_3 value: 59.95700000000001 - type: ndcg_at_5 value: 64.025 - type: precision_at_1 value: 47.654 - type: precision_at_10 value: 10.367999999999999 - type: precision_at_100 value: 1.192 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 26.651000000000003 - type: precision_at_5 value: 18.459 - type: recall_at_1 value: 42.653999999999996 - type: recall_at_10 value: 86.619 - type: recall_at_100 value: 98.04899999999999 - type: recall_at_1000 value: 99.812 - type: recall_at_3 value: 68.987 - type: recall_at_5 value: 78.158 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: mteb/quora config: default split: test revision: None metrics: - type: map_at_1 value: 72.538 - type: map_at_10 value: 86.702 - type: map_at_100 value: 87.31 - type: map_at_1000 value: 87.323 - type: map_at_3 value: 83.87 - type: map_at_5 value: 85.682 - type: mrr_at_1 value: 83.31 - type: mrr_at_10 value: 89.225 - type: mrr_at_100 value: 89.30399999999999 - type: mrr_at_1000 value: 89.30399999999999 - type: mrr_at_3 value: 88.44300000000001 - type: mrr_at_5 value: 89.005 - type: ndcg_at_1 value: 83.32000000000001 - type: ndcg_at_10 value: 90.095 - type: ndcg_at_100 value: 91.12 - type: ndcg_at_1000 value: 91.179 - type: ndcg_at_3 value: 87.606 - type: ndcg_at_5 value: 89.031 - type: precision_at_1 value: 83.32000000000001 - type: precision_at_10 value: 13.641 - type: precision_at_100 value: 1.541 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 38.377 - type: precision_at_5 value: 25.162000000000003 - type: recall_at_1 value: 72.538 - type: recall_at_10 value: 96.47200000000001 - type: recall_at_100 value: 99.785 - type: recall_at_1000 value: 99.99900000000001 - type: recall_at_3 value: 89.278 - type: recall_at_5 value: 93.367 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 73.55219145406065 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 74.13437105242755 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: mteb/scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 6.873 - type: map_at_10 value: 17.944 - type: map_at_100 value: 21.171 - type: map_at_1000 value: 21.528 - type: map_at_3 value: 12.415 - type: map_at_5 value: 15.187999999999999 - type: mrr_at_1 value: 33.800000000000004 - type: mrr_at_10 value: 46.455 - type: mrr_at_100 value: 47.378 - type: mrr_at_1000 value: 47.394999999999996 - type: mrr_at_3 value: 42.367 - type: mrr_at_5 value: 44.972 - type: ndcg_at_1 value: 33.800000000000004 - type: ndcg_at_10 value: 28.907 - type: ndcg_at_100 value: 39.695 - type: ndcg_at_1000 value: 44.582 - type: ndcg_at_3 value: 26.949 - type: ndcg_at_5 value: 23.988 - type: precision_at_1 value: 33.800000000000004 - type: precision_at_10 value: 15.079999999999998 - type: precision_at_100 value: 3.056 - type: precision_at_1000 value: 0.42100000000000004 - type: precision_at_3 value: 25.167 - type: precision_at_5 value: 21.26 - type: recall_at_1 value: 6.873 - type: recall_at_10 value: 30.568 - type: recall_at_100 value: 62.062 - type: recall_at_1000 value: 85.37700000000001 - type: recall_at_3 value: 15.312999999999999 - type: recall_at_5 value: 21.575 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 82.37009118256057 - type: cos_sim_spearman value: 79.27986395671529 - type: euclidean_pearson value: 79.18037715442115 - type: euclidean_spearman value: 79.28004791561621 - type: manhattan_pearson value: 79.34062972800541 - type: manhattan_spearman value: 79.43106695543402 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 87.48474767383833 - type: cos_sim_spearman value: 79.54505388752513 - type: euclidean_pearson value: 83.43282704179565 - type: euclidean_spearman value: 79.54579919925405 - type: manhattan_pearson value: 83.77564492427952 - type: manhattan_spearman value: 79.84558396989286 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 88.803698035802 - type: cos_sim_spearman value: 88.83451367754881 - type: euclidean_pearson value: 88.28939285711628 - type: euclidean_spearman value: 88.83528996073112 - type: manhattan_pearson value: 88.28017412671795 - type: manhattan_spearman value: 88.9228828016344 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 85.27469288153428 - type: cos_sim_spearman value: 83.87477064876288 - type: euclidean_pearson value: 84.2601737035379 - type: euclidean_spearman value: 83.87431082479074 - type: manhattan_pearson value: 84.3621547772745 - type: manhattan_spearman value: 84.12094375000423 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.12749863201587 - type: cos_sim_spearman value: 88.54287568368565 - type: euclidean_pearson value: 87.90429700607999 - type: euclidean_spearman value: 88.5437689576261 - type: manhattan_pearson value: 88.19276653356833 - type: manhattan_spearman value: 88.99995393814679 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 85.68398747560902 - type: cos_sim_spearman value: 86.48815303460574 - type: euclidean_pearson value: 85.52356631237954 - type: euclidean_spearman value: 86.486391949551 - type: manhattan_pearson value: 85.67267981761788 - type: manhattan_spearman value: 86.7073696332485 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.9057107443124 - type: cos_sim_spearman value: 88.7312168757697 - type: euclidean_pearson value: 88.72810439714794 - type: euclidean_spearman value: 88.71976185854771 - type: manhattan_pearson value: 88.50433745949111 - type: manhattan_spearman value: 88.51726175544195 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 67.59391795109886 - type: cos_sim_spearman value: 66.87613008631367 - type: euclidean_pearson value: 69.23198488262217 - type: euclidean_spearman value: 66.85427723013692 - type: manhattan_pearson value: 69.50730124841084 - type: manhattan_spearman value: 67.10404669820792 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 87.0820605344619 - type: cos_sim_spearman value: 86.8518089863434 - type: euclidean_pearson value: 86.31087134689284 - type: euclidean_spearman value: 86.8518520517941 - type: manhattan_pearson value: 86.47203796160612 - type: manhattan_spearman value: 87.1080149734421 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 89.09255369305481 - type: mrr value: 97.10323445617563 - task: type: Retrieval dataset: name: MTEB SciFact type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 61.260999999999996 - type: map_at_10 value: 74.043 - type: map_at_100 value: 74.37700000000001 - type: map_at_1000 value: 74.384 - type: map_at_3 value: 71.222 - type: map_at_5 value: 72.875 - type: mrr_at_1 value: 64.333 - type: mrr_at_10 value: 74.984 - type: mrr_at_100 value: 75.247 - type: mrr_at_1000 value: 75.25500000000001 - type: mrr_at_3 value: 73.167 - type: mrr_at_5 value: 74.35000000000001 - type: ndcg_at_1 value: 64.333 - type: ndcg_at_10 value: 79.06 - type: ndcg_at_100 value: 80.416 - type: ndcg_at_1000 value: 80.55600000000001 - type: ndcg_at_3 value: 74.753 - type: ndcg_at_5 value: 76.97500000000001 - type: precision_at_1 value: 64.333 - type: precision_at_10 value: 10.567 - type: precision_at_100 value: 1.1199999999999999 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 29.889 - type: precision_at_5 value: 19.533 - type: recall_at_1 value: 61.260999999999996 - type: recall_at_10 value: 93.167 - type: recall_at_100 value: 99.0 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 81.667 - type: recall_at_5 value: 87.394 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.71980198019801 - type: cos_sim_ap value: 92.81616007802704 - type: cos_sim_f1 value: 85.17548454688318 - type: cos_sim_precision value: 89.43894389438944 - type: cos_sim_recall value: 81.3 - type: dot_accuracy value: 99.71980198019801 - type: dot_ap value: 92.81398760591358 - type: dot_f1 value: 85.17548454688318 - type: dot_precision value: 89.43894389438944 - type: dot_recall value: 81.3 - type: euclidean_accuracy value: 99.71980198019801 - type: euclidean_ap value: 92.81560637245072 - type: euclidean_f1 value: 85.17548454688318 - type: euclidean_precision value: 89.43894389438944 - type: euclidean_recall value: 81.3 - type: manhattan_accuracy value: 99.73069306930694 - type: manhattan_ap value: 93.14005487480794 - type: manhattan_f1 value: 85.56263269639068 - type: manhattan_precision value: 91.17647058823529 - type: manhattan_recall value: 80.60000000000001 - type: max_accuracy value: 99.73069306930694 - type: max_ap value: 93.14005487480794 - type: max_f1 value: 85.56263269639068 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 79.86443362395185 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 49.40897096662564 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 55.66040806627947 - type: mrr value: 56.58670475766064 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.51015090598575 - type: cos_sim_spearman value: 31.35016454939226 - type: dot_pearson value: 31.5150068731 - type: dot_spearman value: 31.34790869023487 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: mteb/trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.254 - type: map_at_10 value: 2.064 - type: map_at_100 value: 12.909 - type: map_at_1000 value: 31.761 - type: map_at_3 value: 0.738 - type: map_at_5 value: 1.155 - type: mrr_at_1 value: 96.0 - type: mrr_at_10 value: 98.0 - type: mrr_at_100 value: 98.0 - type: mrr_at_1000 value: 98.0 - type: mrr_at_3 value: 98.0 - type: mrr_at_5 value: 98.0 - type: ndcg_at_1 value: 93.0 - type: ndcg_at_10 value: 82.258 - type: ndcg_at_100 value: 64.34 - type: ndcg_at_1000 value: 57.912 - type: ndcg_at_3 value: 90.827 - type: ndcg_at_5 value: 86.79 - type: precision_at_1 value: 96.0 - type: precision_at_10 value: 84.8 - type: precision_at_100 value: 66.0 - type: precision_at_1000 value: 25.356 - type: precision_at_3 value: 94.667 - type: precision_at_5 value: 90.4 - type: recall_at_1 value: 0.254 - type: recall_at_10 value: 2.1950000000000003 - type: recall_at_100 value: 16.088 - type: recall_at_1000 value: 54.559000000000005 - type: recall_at_3 value: 0.75 - type: recall_at_5 value: 1.191 - task: type: Retrieval dataset: name: MTEB Touche2020 type: mteb/touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 2.976 - type: map_at_10 value: 11.389000000000001 - type: map_at_100 value: 18.429000000000002 - type: map_at_1000 value: 20.113 - type: map_at_3 value: 6.483 - type: map_at_5 value: 8.770999999999999 - type: mrr_at_1 value: 40.816 - type: mrr_at_10 value: 58.118 - type: mrr_at_100 value: 58.489999999999995 - type: mrr_at_1000 value: 58.489999999999995 - type: mrr_at_3 value: 53.061 - type: mrr_at_5 value: 57.041 - type: ndcg_at_1 value: 40.816 - type: ndcg_at_10 value: 30.567 - type: ndcg_at_100 value: 42.44 - type: ndcg_at_1000 value: 53.480000000000004 - type: ndcg_at_3 value: 36.016 - type: ndcg_at_5 value: 34.257 - type: precision_at_1 value: 42.857 - type: precision_at_10 value: 25.714 - type: precision_at_100 value: 8.429 - type: precision_at_1000 value: 1.5939999999999999 - type: precision_at_3 value: 36.735 - type: precision_at_5 value: 33.878 - type: recall_at_1 value: 2.976 - type: recall_at_10 value: 17.854999999999997 - type: recall_at_100 value: 51.833 - type: recall_at_1000 value: 86.223 - type: recall_at_3 value: 7.887 - type: recall_at_5 value: 12.026 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 85.1174 - type: ap value: 30.169441069345748 - type: f1 value: 69.79254701873245 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 72.58347481607245 - type: f1 value: 72.74877295564937 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 53.90586138221305 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.35769207844072 - type: cos_sim_ap value: 77.9645072410354 - type: cos_sim_f1 value: 71.32352941176471 - type: cos_sim_precision value: 66.5903890160183 - type: cos_sim_recall value: 76.78100263852242 - type: dot_accuracy value: 87.37557370209214 - type: dot_ap value: 77.96250046429908 - type: dot_f1 value: 71.28932757557064 - type: dot_precision value: 66.95249130938586 - type: dot_recall value: 76.22691292875989 - type: euclidean_accuracy value: 87.35173153722357 - type: euclidean_ap value: 77.96520460741593 - type: euclidean_f1 value: 71.32470733210104 - type: euclidean_precision value: 66.91329479768785 - type: euclidean_recall value: 76.35883905013192 - type: manhattan_accuracy value: 87.25636287774931 - type: manhattan_ap value: 77.77752485611796 - type: manhattan_f1 value: 71.18148599269183 - type: manhattan_precision value: 66.10859728506787 - type: manhattan_recall value: 77.0976253298153 - type: max_accuracy value: 87.37557370209214 - type: max_ap value: 77.96520460741593 - type: max_f1 value: 71.32470733210104 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.38176737687739 - type: cos_sim_ap value: 86.58811861657401 - type: cos_sim_f1 value: 79.09430644097604 - type: cos_sim_precision value: 75.45085977911366 - type: cos_sim_recall value: 83.10748383122882 - type: dot_accuracy value: 89.38370784336554 - type: dot_ap value: 86.58840606004333 - type: dot_f1 value: 79.10179860068133 - type: dot_precision value: 75.44546153308643 - type: dot_recall value: 83.13058207576223 - type: euclidean_accuracy value: 89.38564830985369 - type: euclidean_ap value: 86.58820721061164 - type: euclidean_f1 value: 79.09070942235888 - type: euclidean_precision value: 75.38729937194697 - type: euclidean_recall value: 83.17677856482906 - type: manhattan_accuracy value: 89.40699344122326 - type: manhattan_ap value: 86.60631843011362 - type: manhattan_f1 value: 79.14949970570925 - type: manhattan_precision value: 75.78191039729502 - type: manhattan_recall value: 82.83030489682784 - type: max_accuracy value: 89.40699344122326 - type: max_ap value: 86.60631843011362 - type: max_f1 value: 79.14949970570925 - task: type: STS dataset: name: MTEB AFQMC type: C-MTEB/AFQMC config: default split: validation revision: b44c3b011063adb25877c13823db83bb193913c4 metrics: - type: cos_sim_pearson value: 65.58442135663871 - type: cos_sim_spearman value: 72.2538631361313 - type: euclidean_pearson value: 70.97255486607429 - type: euclidean_spearman value: 72.25374250228647 - type: manhattan_pearson value: 70.83250199989911 - type: manhattan_spearman value: 72.14819496536272 - task: type: STS dataset: name: MTEB ATEC type: C-MTEB/ATEC config: default split: test revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865 metrics: - type: cos_sim_pearson value: 59.99478404929932 - type: cos_sim_spearman value: 62.61836216999812 - type: euclidean_pearson value: 66.86429811933593 - type: euclidean_spearman value: 62.6183520374191 - type: manhattan_pearson value: 66.8063778911633 - type: manhattan_spearman value: 62.569607573241115 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 53.98400000000001 - type: f1 value: 51.21447361350723 - task: type: STS dataset: name: MTEB BQ type: C-MTEB/BQ config: default split: test revision: e3dda5e115e487b39ec7e618c0c6a29137052a55 metrics: - type: cos_sim_pearson value: 79.11941660686553 - type: cos_sim_spearman value: 81.25029594540435 - type: euclidean_pearson value: 82.06973504238826 - type: euclidean_spearman value: 81.2501989488524 - type: manhattan_pearson value: 82.10094630392753 - type: manhattan_spearman value: 81.27987244392389 - task: type: Clustering dataset: name: MTEB CLSClusteringP2P type: C-MTEB/CLSClusteringP2P config: default split: test revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476 metrics: - type: v_measure value: 47.07270168705156 - task: type: Clustering dataset: name: MTEB CLSClusteringS2S type: C-MTEB/CLSClusteringS2S config: default split: test revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f metrics: - type: v_measure value: 45.98511703185043 - task: type: Reranking dataset: name: MTEB CMedQAv1 type: C-MTEB/CMedQAv1-reranking config: default split: test revision: 8d7f1e942507dac42dc58017c1a001c3717da7df metrics: - type: map value: 88.19895157194931 - type: mrr value: 90.21424603174603 - task: type: Reranking dataset: name: MTEB CMedQAv2 type: C-MTEB/CMedQAv2-reranking config: default split: test revision: 23d186750531a14a0357ca22cd92d712fd512ea0 metrics: - type: map value: 88.03317320980119 - type: mrr value: 89.9461507936508 - task: type: Retrieval dataset: name: MTEB CmedqaRetrieval type: C-MTEB/CmedqaRetrieval config: default split: dev revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301 metrics: - type: map_at_1 value: 29.037000000000003 - type: map_at_10 value: 42.001 - type: map_at_100 value: 43.773 - type: map_at_1000 value: 43.878 - type: map_at_3 value: 37.637 - type: map_at_5 value: 40.034 - type: mrr_at_1 value: 43.136 - type: mrr_at_10 value: 51.158 - type: mrr_at_100 value: 52.083 - type: mrr_at_1000 value: 52.12 - type: mrr_at_3 value: 48.733 - type: mrr_at_5 value: 50.025 - type: ndcg_at_1 value: 43.136 - type: ndcg_at_10 value: 48.685 - type: ndcg_at_100 value: 55.513 - type: ndcg_at_1000 value: 57.242000000000004 - type: ndcg_at_3 value: 43.329 - type: ndcg_at_5 value: 45.438 - type: precision_at_1 value: 43.136 - type: precision_at_10 value: 10.56 - type: precision_at_100 value: 1.6129999999999998 - type: precision_at_1000 value: 0.184 - type: precision_at_3 value: 24.064 - type: precision_at_5 value: 17.269000000000002 - type: recall_at_1 value: 29.037000000000003 - type: recall_at_10 value: 59.245000000000005 - type: recall_at_100 value: 87.355 - type: recall_at_1000 value: 98.74000000000001 - type: recall_at_3 value: 42.99 - type: recall_at_5 value: 49.681999999999995 - task: type: PairClassification dataset: name: MTEB Cmnli type: C-MTEB/CMNLI config: default split: validation revision: 41bc36f332156f7adc9e38f53777c959b2ae9766 metrics: - type: cos_sim_accuracy value: 82.68190018039687 - type: cos_sim_ap value: 90.18017125327886 - type: cos_sim_f1 value: 83.64080906868193 - type: cos_sim_precision value: 79.7076890489303 - type: cos_sim_recall value: 87.98223053542202 - type: dot_accuracy value: 82.68190018039687 - type: dot_ap value: 90.18782350103646 - type: dot_f1 value: 83.64242087729039 - type: dot_precision value: 79.65313028764805 - type: dot_recall value: 88.05237315875614 - type: euclidean_accuracy value: 82.68190018039687 - type: euclidean_ap value: 90.1801957900632 - type: euclidean_f1 value: 83.63636363636364 - type: euclidean_precision value: 79.52772506852203 - type: euclidean_recall value: 88.19265840542437 - type: manhattan_accuracy value: 82.14070956103427 - type: manhattan_ap value: 89.96178420101427 - type: manhattan_f1 value: 83.21087838578791 - type: manhattan_precision value: 78.35605121850475 - type: manhattan_recall value: 88.70703764320785 - type: max_accuracy value: 82.68190018039687 - type: max_ap value: 90.18782350103646 - type: max_f1 value: 83.64242087729039 - task: type: Retrieval dataset: name: MTEB CovidRetrieval type: C-MTEB/CovidRetrieval config: default split: dev revision: 1271c7809071a13532e05f25fb53511ffce77117 metrics: - type: map_at_1 value: 72.234 - type: map_at_10 value: 80.10000000000001 - type: map_at_100 value: 80.36 - type: map_at_1000 value: 80.363 - type: map_at_3 value: 78.315 - type: map_at_5 value: 79.607 - type: mrr_at_1 value: 72.392 - type: mrr_at_10 value: 80.117 - type: mrr_at_100 value: 80.36999999999999 - type: mrr_at_1000 value: 80.373 - type: mrr_at_3 value: 78.469 - type: mrr_at_5 value: 79.633 - type: ndcg_at_1 value: 72.392 - type: ndcg_at_10 value: 83.651 - type: ndcg_at_100 value: 84.749 - type: ndcg_at_1000 value: 84.83000000000001 - type: ndcg_at_3 value: 80.253 - type: ndcg_at_5 value: 82.485 - type: precision_at_1 value: 72.392 - type: precision_at_10 value: 9.557 - type: precision_at_100 value: 1.004 - type: precision_at_1000 value: 0.101 - type: precision_at_3 value: 28.732000000000003 - type: precision_at_5 value: 18.377 - type: recall_at_1 value: 72.234 - type: recall_at_10 value: 94.573 - type: recall_at_100 value: 99.368 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 85.669 - type: recall_at_5 value: 91.01700000000001 - task: type: Retrieval dataset: name: MTEB DuRetrieval type: C-MTEB/DuRetrieval config: default split: dev revision: a1a333e290fe30b10f3f56498e3a0d911a693ced metrics: - type: map_at_1 value: 26.173999999999996 - type: map_at_10 value: 80.04 - type: map_at_100 value: 82.94500000000001 - type: map_at_1000 value: 82.98100000000001 - type: map_at_3 value: 55.562999999999995 - type: map_at_5 value: 69.89800000000001 - type: mrr_at_1 value: 89.5 - type: mrr_at_10 value: 92.996 - type: mrr_at_100 value: 93.06400000000001 - type: mrr_at_1000 value: 93.065 - type: mrr_at_3 value: 92.658 - type: mrr_at_5 value: 92.84599999999999 - type: ndcg_at_1 value: 89.5 - type: ndcg_at_10 value: 87.443 - type: ndcg_at_100 value: 90.253 - type: ndcg_at_1000 value: 90.549 - type: ndcg_at_3 value: 85.874 - type: ndcg_at_5 value: 84.842 - type: precision_at_1 value: 89.5 - type: precision_at_10 value: 41.805 - type: precision_at_100 value: 4.827 - type: precision_at_1000 value: 0.49 - type: precision_at_3 value: 76.85 - type: precision_at_5 value: 64.8 - type: recall_at_1 value: 26.173999999999996 - type: recall_at_10 value: 89.101 - type: recall_at_100 value: 98.08099999999999 - type: recall_at_1000 value: 99.529 - type: recall_at_3 value: 57.902 - type: recall_at_5 value: 74.602 - task: type: Retrieval dataset: name: MTEB EcomRetrieval type: C-MTEB/EcomRetrieval config: default split: dev revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9 metrics: - type: map_at_1 value: 56.10000000000001 - type: map_at_10 value: 66.15299999999999 - type: map_at_100 value: 66.625 - type: map_at_1000 value: 66.636 - type: map_at_3 value: 63.632999999999996 - type: map_at_5 value: 65.293 - type: mrr_at_1 value: 56.10000000000001 - type: mrr_at_10 value: 66.15299999999999 - type: mrr_at_100 value: 66.625 - type: mrr_at_1000 value: 66.636 - type: mrr_at_3 value: 63.632999999999996 - type: mrr_at_5 value: 65.293 - type: ndcg_at_1 value: 56.10000000000001 - type: ndcg_at_10 value: 71.146 - type: ndcg_at_100 value: 73.27799999999999 - type: ndcg_at_1000 value: 73.529 - type: ndcg_at_3 value: 66.09 - type: ndcg_at_5 value: 69.08999999999999 - type: precision_at_1 value: 56.10000000000001 - type: precision_at_10 value: 8.68 - type: precision_at_100 value: 0.964 - type: precision_at_1000 value: 0.098 - type: precision_at_3 value: 24.4 - type: precision_at_5 value: 16.1 - type: recall_at_1 value: 56.10000000000001 - type: recall_at_10 value: 86.8 - type: recall_at_100 value: 96.39999999999999 - type: recall_at_1000 value: 98.3 - type: recall_at_3 value: 73.2 - type: recall_at_5 value: 80.5 - task: type: Classification dataset: name: MTEB IFlyTek type: C-MTEB/IFlyTek-classification config: default split: validation revision: 421605374b29664c5fc098418fe20ada9bd55f8a metrics: - type: accuracy value: 54.52096960369373 - type: f1 value: 40.930845295808695 - task: type: Classification dataset: name: MTEB JDReview type: C-MTEB/JDReview-classification config: default split: test revision: b7c64bd89eb87f8ded463478346f76731f07bf8b metrics: - type: accuracy value: 86.51031894934334 - type: ap value: 55.9516014323483 - type: f1 value: 81.54813679326381 - task: type: STS dataset: name: MTEB LCQMC type: C-MTEB/LCQMC config: default split: test revision: 17f9b096f80380fce5ed12a9be8be7784b337daf metrics: - type: cos_sim_pearson value: 69.67437838574276 - type: cos_sim_spearman value: 73.81314174653045 - type: euclidean_pearson value: 72.63430276680275 - type: euclidean_spearman value: 73.81358736777001 - type: manhattan_pearson value: 72.58743833842829 - type: manhattan_spearman value: 73.7590419009179 - task: type: Reranking dataset: name: MTEB MMarcoReranking type: C-MTEB/Mmarco-reranking config: default split: dev revision: None metrics: - type: map value: 31.648613483640254 - type: mrr value: 30.37420634920635 - task: type: Retrieval dataset: name: MTEB MMarcoRetrieval type: C-MTEB/MMarcoRetrieval config: default split: dev revision: 539bbde593d947e2a124ba72651aafc09eb33fc2 metrics: - type: map_at_1 value: 73.28099999999999 - type: map_at_10 value: 81.977 - type: map_at_100 value: 82.222 - type: map_at_1000 value: 82.22699999999999 - type: map_at_3 value: 80.441 - type: map_at_5 value: 81.46600000000001 - type: mrr_at_1 value: 75.673 - type: mrr_at_10 value: 82.41000000000001 - type: mrr_at_100 value: 82.616 - type: mrr_at_1000 value: 82.621 - type: mrr_at_3 value: 81.094 - type: mrr_at_5 value: 81.962 - type: ndcg_at_1 value: 75.673 - type: ndcg_at_10 value: 85.15599999999999 - type: ndcg_at_100 value: 86.151 - type: ndcg_at_1000 value: 86.26899999999999 - type: ndcg_at_3 value: 82.304 - type: ndcg_at_5 value: 84.009 - type: precision_at_1 value: 75.673 - type: precision_at_10 value: 10.042 - type: precision_at_100 value: 1.052 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 30.673000000000002 - type: precision_at_5 value: 19.326999999999998 - type: recall_at_1 value: 73.28099999999999 - type: recall_at_10 value: 94.446 - type: recall_at_100 value: 98.737 - type: recall_at_1000 value: 99.649 - type: recall_at_3 value: 86.984 - type: recall_at_5 value: 91.024 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 81.08607935440484 - type: f1 value: 78.24879986066307 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 86.05917955615332 - type: f1 value: 85.05279279434997 - task: type: Retrieval dataset: name: MTEB MedicalRetrieval type: C-MTEB/MedicalRetrieval config: default split: dev revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6 metrics: - type: map_at_1 value: 56.2 - type: map_at_10 value: 62.57899999999999 - type: map_at_100 value: 63.154999999999994 - type: map_at_1000 value: 63.193 - type: map_at_3 value: 61.217 - type: map_at_5 value: 62.012 - type: mrr_at_1 value: 56.3 - type: mrr_at_10 value: 62.629000000000005 - type: mrr_at_100 value: 63.205999999999996 - type: mrr_at_1000 value: 63.244 - type: mrr_at_3 value: 61.267 - type: mrr_at_5 value: 62.062 - type: ndcg_at_1 value: 56.2 - type: ndcg_at_10 value: 65.592 - type: ndcg_at_100 value: 68.657 - type: ndcg_at_1000 value: 69.671 - type: ndcg_at_3 value: 62.808 - type: ndcg_at_5 value: 64.24499999999999 - type: precision_at_1 value: 56.2 - type: precision_at_10 value: 7.5 - type: precision_at_100 value: 0.899 - type: precision_at_1000 value: 0.098 - type: precision_at_3 value: 22.467000000000002 - type: precision_at_5 value: 14.180000000000001 - type: recall_at_1 value: 56.2 - type: recall_at_10 value: 75.0 - type: recall_at_100 value: 89.9 - type: recall_at_1000 value: 97.89999999999999 - type: recall_at_3 value: 67.4 - type: recall_at_5 value: 70.89999999999999 - task: type: Classification dataset: name: MTEB MultilingualSentiment type: C-MTEB/MultilingualSentiment-classification config: default split: validation revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a metrics: - type: accuracy value: 76.87666666666667 - type: f1 value: 76.7317686219665 - task: type: PairClassification dataset: name: MTEB Ocnli type: C-MTEB/OCNLI config: default split: validation revision: 66e76a618a34d6d565d5538088562851e6daa7ec metrics: - type: cos_sim_accuracy value: 79.64266377910124 - type: cos_sim_ap value: 84.78274442344829 - type: cos_sim_f1 value: 81.16947472745292 - type: cos_sim_precision value: 76.47058823529412 - type: cos_sim_recall value: 86.48363252375924 - type: dot_accuracy value: 79.64266377910124 - type: dot_ap value: 84.7851404063692 - type: dot_f1 value: 81.16947472745292 - type: dot_precision value: 76.47058823529412 - type: dot_recall value: 86.48363252375924 - type: euclidean_accuracy value: 79.64266377910124 - type: euclidean_ap value: 84.78068373762378 - type: euclidean_f1 value: 81.14794656110837 - type: euclidean_precision value: 76.35009310986965 - type: euclidean_recall value: 86.58922914466737 - type: manhattan_accuracy value: 79.48023822414727 - type: manhattan_ap value: 84.72928897427576 - type: manhattan_f1 value: 81.32084770823064 - type: manhattan_precision value: 76.24768946395564 - type: manhattan_recall value: 87.11721224920802 - type: max_accuracy value: 79.64266377910124 - type: max_ap value: 84.7851404063692 - type: max_f1 value: 81.32084770823064 - task: type: Classification dataset: name: MTEB OnlineShopping type: C-MTEB/OnlineShopping-classification config: default split: test revision: e610f2ebd179a8fda30ae534c3878750a96db120 metrics: - type: accuracy value: 94.3 - type: ap value: 92.8664032274438 - type: f1 value: 94.29311102997727 - task: type: STS dataset: name: MTEB PAWSX type: C-MTEB/PAWSX config: default split: test revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1 metrics: - type: cos_sim_pearson value: 48.51392279882909 - type: cos_sim_spearman value: 54.06338895994974 - type: euclidean_pearson value: 52.58480559573412 - type: euclidean_spearman value: 54.06417276612201 - type: manhattan_pearson value: 52.69525121721343 - type: manhattan_spearman value: 54.048147455389675 - task: type: STS dataset: name: MTEB QBQTC type: C-MTEB/QBQTC config: default split: test revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7 metrics: - type: cos_sim_pearson value: 29.728387290757325 - type: cos_sim_spearman value: 31.366121633635284 - type: euclidean_pearson value: 29.14588368552961 - type: euclidean_spearman value: 31.36764411112844 - type: manhattan_pearson value: 29.63517350523121 - type: manhattan_spearman value: 31.94157020583762 - task: type: STS dataset: name: MTEB STS22 (zh) type: mteb/sts22-crosslingual-sts config: zh split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 63.64868296271406 - type: cos_sim_spearman value: 66.12800618164744 - type: euclidean_pearson value: 63.21405767340238 - type: euclidean_spearman value: 66.12786567790748 - type: manhattan_pearson value: 64.04300276525848 - type: manhattan_spearman value: 66.5066857145652 - task: type: STS dataset: name: MTEB STSB type: C-MTEB/STSB config: default split: test revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0 metrics: - type: cos_sim_pearson value: 81.2302623912794 - type: cos_sim_spearman value: 81.16833673266562 - type: euclidean_pearson value: 79.47647843876024 - type: euclidean_spearman value: 81.16944349524972 - type: manhattan_pearson value: 79.84947238492208 - type: manhattan_spearman value: 81.64626599410026 - task: type: Reranking dataset: name: MTEB T2Reranking type: C-MTEB/T2Reranking config: default split: dev revision: 76631901a18387f85eaa53e5450019b87ad58ef9 metrics: - type: map value: 67.80129586475687 - type: mrr value: 77.77402311635554 - task: type: Retrieval dataset: name: MTEB T2Retrieval type: C-MTEB/T2Retrieval config: default split: dev revision: 8731a845f1bf500a4f111cf1070785c793d10e64 metrics: - type: map_at_1 value: 28.666999999999998 - type: map_at_10 value: 81.063 - type: map_at_100 value: 84.504 - type: map_at_1000 value: 84.552 - type: map_at_3 value: 56.897 - type: map_at_5 value: 70.073 - type: mrr_at_1 value: 92.087 - type: mrr_at_10 value: 94.132 - type: mrr_at_100 value: 94.19800000000001 - type: mrr_at_1000 value: 94.19999999999999 - type: mrr_at_3 value: 93.78999999999999 - type: mrr_at_5 value: 94.002 - type: ndcg_at_1 value: 92.087 - type: ndcg_at_10 value: 87.734 - type: ndcg_at_100 value: 90.736 - type: ndcg_at_1000 value: 91.184 - type: ndcg_at_3 value: 88.78 - type: ndcg_at_5 value: 87.676 - type: precision_at_1 value: 92.087 - type: precision_at_10 value: 43.46 - type: precision_at_100 value: 5.07 - type: precision_at_1000 value: 0.518 - type: precision_at_3 value: 77.49000000000001 - type: precision_at_5 value: 65.194 - type: recall_at_1 value: 28.666999999999998 - type: recall_at_10 value: 86.632 - type: recall_at_100 value: 96.646 - type: recall_at_1000 value: 98.917 - type: recall_at_3 value: 58.333999999999996 - type: recall_at_5 value: 72.974 - task: type: Classification dataset: name: MTEB TNews type: C-MTEB/TNews-classification config: default split: validation revision: 317f262bf1e6126357bbe89e875451e4b0938fe4 metrics: - type: accuracy value: 52.971999999999994 - type: f1 value: 50.2898280984929 - task: type: Clustering dataset: name: MTEB ThuNewsClusteringP2P type: C-MTEB/ThuNewsClusteringP2P config: default split: test revision: 5798586b105c0434e4f0fe5e767abe619442cf93 metrics: - type: v_measure value: 86.0797948663824 - task: type: Clustering dataset: name: MTEB ThuNewsClusteringS2S type: C-MTEB/ThuNewsClusteringS2S config: default split: test revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d metrics: - type: v_measure value: 85.10759092255017 - task: type: Retrieval dataset: name: MTEB VideoRetrieval type: C-MTEB/VideoRetrieval config: default split: dev revision: 58c2597a5943a2ba48f4668c3b90d796283c5639 metrics: - type: map_at_1 value: 65.60000000000001 - type: map_at_10 value: 74.773 - type: map_at_100 value: 75.128 - type: map_at_1000 value: 75.136 - type: map_at_3 value: 73.05 - type: map_at_5 value: 74.13499999999999 - type: mrr_at_1 value: 65.60000000000001 - type: mrr_at_10 value: 74.773 - type: mrr_at_100 value: 75.128 - type: mrr_at_1000 value: 75.136 - type: mrr_at_3 value: 73.05 - type: mrr_at_5 value: 74.13499999999999 - type: ndcg_at_1 value: 65.60000000000001 - type: ndcg_at_10 value: 78.84299999999999 - type: ndcg_at_100 value: 80.40899999999999 - type: ndcg_at_1000 value: 80.57 - type: ndcg_at_3 value: 75.40599999999999 - type: ndcg_at_5 value: 77.351 - type: precision_at_1 value: 65.60000000000001 - type: precision_at_10 value: 9.139999999999999 - type: precision_at_100 value: 0.984 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 27.400000000000002 - type: precision_at_5 value: 17.380000000000003 - type: recall_at_1 value: 65.60000000000001 - type: recall_at_10 value: 91.4 - type: recall_at_100 value: 98.4 - type: recall_at_1000 value: 99.6 - type: recall_at_3 value: 82.19999999999999 - type: recall_at_5 value: 86.9 - task: type: Classification dataset: name: MTEB Waimai type: C-MTEB/waimai-classification config: default split: test revision: 339287def212450dcaa9df8c22bf93e9980c7023 metrics: - type: accuracy value: 89.47 - type: ap value: 75.59561751845389 - type: f1 value: 87.95207751382563 - task: type: Clustering dataset: name: MTEB AlloProfClusteringP2P type: lyon-nlp/alloprof config: default split: test revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b metrics: - type: v_measure value: 76.05592323841036 - type: v_measure value: 64.51718058866508 - task: type: Reranking dataset: name: MTEB AlloprofReranking type: lyon-nlp/mteb-fr-reranking-alloprof-s2p config: default split: test revision: 666fdacebe0291776e86f29345663dfaf80a0db9 metrics: - type: map value: 73.08278490943373 - type: mrr value: 74.66561454570449 - task: type: Retrieval dataset: name: MTEB AlloprofRetrieval type: lyon-nlp/alloprof config: default split: test revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b metrics: - type: map_at_1 value: 38.912 - type: map_at_10 value: 52.437999999999995 - type: map_at_100 value: 53.38 - type: map_at_1000 value: 53.427 - type: map_at_3 value: 48.879 - type: map_at_5 value: 50.934000000000005 - type: mrr_at_1 value: 44.085 - type: mrr_at_10 value: 55.337 - type: mrr_at_100 value: 56.016999999999996 - type: mrr_at_1000 value: 56.043 - type: mrr_at_3 value: 52.55499999999999 - type: mrr_at_5 value: 54.20399999999999 - type: ndcg_at_1 value: 44.085 - type: ndcg_at_10 value: 58.876 - type: ndcg_at_100 value: 62.714000000000006 - type: ndcg_at_1000 value: 63.721000000000004 - type: ndcg_at_3 value: 52.444 - type: ndcg_at_5 value: 55.692 - type: precision_at_1 value: 44.085 - type: precision_at_10 value: 9.21 - type: precision_at_100 value: 1.164 - type: precision_at_1000 value: 0.128 - type: precision_at_3 value: 23.043 - type: precision_at_5 value: 15.898000000000001 - type: recall_at_1 value: 38.912 - type: recall_at_10 value: 75.577 - type: recall_at_100 value: 92.038 - type: recall_at_1000 value: 99.325 - type: recall_at_3 value: 58.592 - type: recall_at_5 value: 66.235 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 55.532000000000004 - type: f1 value: 52.5783943471605 - task: type: Retrieval dataset: name: MTEB BSARDRetrieval type: maastrichtlawtech/bsard config: default split: test revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59 metrics: - type: map_at_1 value: 8.108 - type: map_at_10 value: 14.710999999999999 - type: map_at_100 value: 15.891 - type: map_at_1000 value: 15.983 - type: map_at_3 value: 12.237 - type: map_at_5 value: 13.679 - type: mrr_at_1 value: 8.108 - type: mrr_at_10 value: 14.710999999999999 - type: mrr_at_100 value: 15.891 - type: mrr_at_1000 value: 15.983 - type: mrr_at_3 value: 12.237 - type: mrr_at_5 value: 13.679 - type: ndcg_at_1 value: 8.108 - type: ndcg_at_10 value: 18.796 - type: ndcg_at_100 value: 25.098 - type: ndcg_at_1000 value: 27.951999999999998 - type: ndcg_at_3 value: 13.712 - type: ndcg_at_5 value: 16.309 - type: precision_at_1 value: 8.108 - type: precision_at_10 value: 3.198 - type: precision_at_100 value: 0.626 - type: precision_at_1000 value: 0.086 - type: precision_at_3 value: 6.006 - type: precision_at_5 value: 4.865 - type: recall_at_1 value: 8.108 - type: recall_at_10 value: 31.982 - type: recall_at_100 value: 62.613 - type: recall_at_1000 value: 86.036 - type: recall_at_3 value: 18.018 - type: recall_at_5 value: 24.324 - task: type: Clustering dataset: name: MTEB HALClusteringS2S type: lyon-nlp/clustering-hal-s2s config: default split: test revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915 metrics: - type: v_measure value: 30.833269778867116 - task: type: Clustering dataset: name: MTEB MLSUMClusteringP2P type: mlsum config: default split: test revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7 metrics: - type: v_measure value: 50.0281928004713 - type: v_measure value: 43.699961510636534 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (fr) type: mteb/mtop_domain config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 96.68963357344191 - type: f1 value: 96.45175170820961 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (fr) type: mteb/mtop_intent config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 87.46946445349202 - type: f1 value: 65.79860440988624 - task: type: Classification dataset: name: MTEB MasakhaNEWSClassification (fra) type: masakhane/masakhanews config: fra split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: accuracy value: 82.60663507109005 - type: f1 value: 77.20462646604777 - task: type: Clustering dataset: name: MTEB MasakhaNEWSClusteringP2P (fra) type: masakhane/masakhanews config: fra split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: v_measure value: 60.19311264967803 - type: v_measure value: 63.6235764409785 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fr) type: mteb/amazon_massive_intent config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 81.65097511768661 - type: f1 value: 78.77796091490924 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fr) type: mteb/amazon_massive_scenario config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 86.64425016812373 - type: f1 value: 85.4912728670017 - task: type: Retrieval dataset: name: MTEB MintakaRetrieval (fr) type: jinaai/mintakaqa config: fr split: test revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e metrics: - type: map_at_1 value: 35.913000000000004 - type: map_at_10 value: 48.147 - type: map_at_100 value: 48.91 - type: map_at_1000 value: 48.949 - type: map_at_3 value: 45.269999999999996 - type: map_at_5 value: 47.115 - type: mrr_at_1 value: 35.913000000000004 - type: mrr_at_10 value: 48.147 - type: mrr_at_100 value: 48.91 - type: mrr_at_1000 value: 48.949 - type: mrr_at_3 value: 45.269999999999996 - type: mrr_at_5 value: 47.115 - type: ndcg_at_1 value: 35.913000000000004 - type: ndcg_at_10 value: 54.03 - type: ndcg_at_100 value: 57.839 - type: ndcg_at_1000 value: 58.925000000000004 - type: ndcg_at_3 value: 48.217999999999996 - type: ndcg_at_5 value: 51.56699999999999 - type: precision_at_1 value: 35.913000000000004 - type: precision_at_10 value: 7.244000000000001 - type: precision_at_100 value: 0.9039999999999999 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 18.905 - type: precision_at_5 value: 12.981000000000002 - type: recall_at_1 value: 35.913000000000004 - type: recall_at_10 value: 72.441 - type: recall_at_100 value: 90.41799999999999 - type: recall_at_1000 value: 99.099 - type: recall_at_3 value: 56.716 - type: recall_at_5 value: 64.90599999999999 - task: type: PairClassification dataset: name: MTEB OpusparcusPC (fr) type: GEM/opusparcus config: fr split: test revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a metrics: - type: cos_sim_accuracy value: 99.90069513406156 - type: cos_sim_ap value: 100.0 - type: cos_sim_f1 value: 99.95032290114257 - type: cos_sim_precision value: 100.0 - type: cos_sim_recall value: 99.90069513406156 - type: dot_accuracy value: 99.90069513406156 - type: dot_ap value: 100.0 - type: dot_f1 value: 99.95032290114257 - type: dot_precision value: 100.0 - type: dot_recall value: 99.90069513406156 - type: euclidean_accuracy value: 99.90069513406156 - type: euclidean_ap value: 100.0 - type: euclidean_f1 value: 99.95032290114257 - type: euclidean_precision value: 100.0 - type: euclidean_recall value: 99.90069513406156 - type: manhattan_accuracy value: 99.90069513406156 - type: manhattan_ap value: 100.0 - type: manhattan_f1 value: 99.95032290114257 - type: manhattan_precision value: 100.0 - type: manhattan_recall value: 99.90069513406156 - type: max_accuracy value: 99.90069513406156 - type: max_ap value: 100.0 - type: max_f1 value: 99.95032290114257 - task: type: PairClassification dataset: name: MTEB PawsX (fr) type: paws-x config: fr split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_accuracy value: 75.25 - type: cos_sim_ap value: 80.86376001270014 - type: cos_sim_f1 value: 73.65945437441204 - type: cos_sim_precision value: 64.02289452166802 - type: cos_sim_recall value: 86.71096345514951 - type: dot_accuracy value: 75.25 - type: dot_ap value: 80.93686107633002 - type: dot_f1 value: 73.65945437441204 - type: dot_precision value: 64.02289452166802 - type: dot_recall value: 86.71096345514951 - type: euclidean_accuracy value: 75.25 - type: euclidean_ap value: 80.86379136218862 - type: euclidean_f1 value: 73.65945437441204 - type: euclidean_precision value: 64.02289452166802 - type: euclidean_recall value: 86.71096345514951 - type: manhattan_accuracy value: 75.3 - type: manhattan_ap value: 80.87826606097734 - type: manhattan_f1 value: 73.68421052631581 - type: manhattan_precision value: 64.0 - type: manhattan_recall value: 86.82170542635659 - type: max_accuracy value: 75.3 - type: max_ap value: 80.93686107633002 - type: max_f1 value: 73.68421052631581 - task: type: STS dataset: name: MTEB SICKFr type: Lajavaness/SICK-fr config: default split: test revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a metrics: - type: cos_sim_pearson value: 81.42349425981143 - type: cos_sim_spearman value: 78.90454327031226 - type: euclidean_pearson value: 78.39086497435166 - type: euclidean_spearman value: 78.9046133980509 - type: manhattan_pearson value: 78.63743094286502 - type: manhattan_spearman value: 79.12136348449269 - task: type: STS dataset: name: MTEB STS22 (fr) type: mteb/sts22-crosslingual-sts config: fr split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 81.452697919749 - type: cos_sim_spearman value: 82.58116836039301 - type: euclidean_pearson value: 81.04038478932786 - type: euclidean_spearman value: 82.58116836039301 - type: manhattan_pearson value: 81.37075396187771 - type: manhattan_spearman value: 82.73678231355368 - task: type: STS dataset: name: MTEB STSBenchmarkMultilingualSTS (fr) type: stsb_multi_mt config: fr split: test revision: 93d57ef91790589e3ce9c365164337a8a78b7632 metrics: - type: cos_sim_pearson value: 85.7419764013806 - type: cos_sim_spearman value: 85.46085808849622 - type: euclidean_pearson value: 83.70449639870063 - type: euclidean_spearman value: 85.46159013076233 - type: manhattan_pearson value: 83.95259510313929 - type: manhattan_spearman value: 85.8029724659458 - task: type: Summarization dataset: name: MTEB SummEvalFr type: lyon-nlp/summarization-summeval-fr-p2p config: default split: test revision: b385812de6a9577b6f4d0f88c6a6e35395a94054 metrics: - type: cos_sim_pearson value: 32.61063271753325 - type: cos_sim_spearman value: 31.454589417353603 - type: dot_pearson value: 32.6106288643431 - type: dot_spearman value: 31.454589417353603 - task: type: Reranking dataset: name: MTEB SyntecReranking type: lyon-nlp/mteb-fr-reranking-syntec-s2p config: default split: test revision: b205c5084a0934ce8af14338bf03feb19499c84d metrics: - type: map value: 84.31666666666666 - type: mrr value: 84.31666666666666 - task: type: Retrieval dataset: name: MTEB SyntecRetrieval type: lyon-nlp/mteb-fr-retrieval-syntec-s2p config: default split: test revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff metrics: - type: map_at_1 value: 63.0 - type: map_at_10 value: 73.471 - type: map_at_100 value: 73.87 - type: map_at_1000 value: 73.87 - type: map_at_3 value: 70.5 - type: map_at_5 value: 73.05 - type: mrr_at_1 value: 63.0 - type: mrr_at_10 value: 73.471 - type: mrr_at_100 value: 73.87 - type: mrr_at_1000 value: 73.87 - type: mrr_at_3 value: 70.5 - type: mrr_at_5 value: 73.05 - type: ndcg_at_1 value: 63.0 - type: ndcg_at_10 value: 78.255 - type: ndcg_at_100 value: 79.88 - type: ndcg_at_1000 value: 79.88 - type: ndcg_at_3 value: 72.702 - type: ndcg_at_5 value: 77.264 - type: precision_at_1 value: 63.0 - type: precision_at_10 value: 9.3 - type: precision_at_100 value: 1.0 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 26.333000000000002 - type: precision_at_5 value: 18.0 - type: recall_at_1 value: 63.0 - type: recall_at_10 value: 93.0 - type: recall_at_100 value: 100.0 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 79.0 - type: recall_at_5 value: 90.0 - task: type: Retrieval dataset: name: MTEB XPQARetrieval (fr) type: jinaai/xpqa config: fr split: test revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f metrics: - type: map_at_1 value: 40.338 - type: map_at_10 value: 61.927 - type: map_at_100 value: 63.361999999999995 - type: map_at_1000 value: 63.405 - type: map_at_3 value: 55.479 - type: map_at_5 value: 59.732 - type: mrr_at_1 value: 63.551 - type: mrr_at_10 value: 71.006 - type: mrr_at_100 value: 71.501 - type: mrr_at_1000 value: 71.509 - type: mrr_at_3 value: 69.07 - type: mrr_at_5 value: 70.165 - type: ndcg_at_1 value: 63.551 - type: ndcg_at_10 value: 68.297 - type: ndcg_at_100 value: 73.13199999999999 - type: ndcg_at_1000 value: 73.751 - type: ndcg_at_3 value: 62.999 - type: ndcg_at_5 value: 64.89 - type: precision_at_1 value: 63.551 - type: precision_at_10 value: 15.661 - type: precision_at_100 value: 1.9789999999999999 - type: precision_at_1000 value: 0.207 - type: precision_at_3 value: 38.273 - type: precision_at_5 value: 27.61 - type: recall_at_1 value: 40.338 - type: recall_at_10 value: 77.267 - type: recall_at_100 value: 95.892 - type: recall_at_1000 value: 99.75500000000001 - type: recall_at_3 value: 60.36 - type: recall_at_5 value: 68.825 - task: type: Clustering dataset: name: MTEB 8TagsClustering type: PL-MTEB/8tags-clustering config: default split: test revision: None metrics: - type: v_measure value: 51.36126303874126 - task: type: Classification dataset: name: MTEB AllegroReviews type: PL-MTEB/allegro-reviews config: default split: test revision: None metrics: - type: accuracy value: 67.13717693836979 - type: f1 value: 57.27609848003782 - task: type: Retrieval dataset: name: MTEB ArguAna-PL type: clarin-knext/arguana-pl config: default split: test revision: 63fc86750af76253e8c760fc9e534bbf24d260a2 metrics: - type: map_at_1 value: 35.276999999999994 - type: map_at_10 value: 51.086 - type: map_at_100 value: 51.788000000000004 - type: map_at_1000 value: 51.791 - type: map_at_3 value: 46.147 - type: map_at_5 value: 49.078 - type: mrr_at_1 value: 35.917 - type: mrr_at_10 value: 51.315999999999995 - type: mrr_at_100 value: 52.018 - type: mrr_at_1000 value: 52.022 - type: mrr_at_3 value: 46.349000000000004 - type: mrr_at_5 value: 49.297000000000004 - type: ndcg_at_1 value: 35.276999999999994 - type: ndcg_at_10 value: 59.870999999999995 - type: ndcg_at_100 value: 62.590999999999994 - type: ndcg_at_1000 value: 62.661 - type: ndcg_at_3 value: 49.745 - type: ndcg_at_5 value: 55.067 - type: precision_at_1 value: 35.276999999999994 - type: precision_at_10 value: 8.791 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 20.057 - type: precision_at_5 value: 14.637 - type: recall_at_1 value: 35.276999999999994 - type: recall_at_10 value: 87.909 - type: recall_at_100 value: 99.14699999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 60.171 - type: recall_at_5 value: 73.18599999999999 - task: type: Classification dataset: name: MTEB CBD type: PL-MTEB/cbd config: default split: test revision: None metrics: - type: accuracy value: 78.03000000000002 - type: ap value: 29.12548553897622 - type: f1 value: 66.54857118886073 - task: type: PairClassification dataset: name: MTEB CDSC-E type: PL-MTEB/cdsce-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 89.0 - type: cos_sim_ap value: 76.75437826834582 - type: cos_sim_f1 value: 66.4850136239782 - type: cos_sim_precision value: 68.92655367231639 - type: cos_sim_recall value: 64.21052631578948 - type: dot_accuracy value: 89.0 - type: dot_ap value: 76.75437826834582 - type: dot_f1 value: 66.4850136239782 - type: dot_precision value: 68.92655367231639 - type: dot_recall value: 64.21052631578948 - type: euclidean_accuracy value: 89.0 - type: euclidean_ap value: 76.75437826834582 - type: euclidean_f1 value: 66.4850136239782 - type: euclidean_precision value: 68.92655367231639 - type: euclidean_recall value: 64.21052631578948 - type: manhattan_accuracy value: 89.0 - type: manhattan_ap value: 76.66074220647083 - type: manhattan_f1 value: 66.47058823529412 - type: manhattan_precision value: 75.33333333333333 - type: manhattan_recall value: 59.473684210526315 - type: max_accuracy value: 89.0 - type: max_ap value: 76.75437826834582 - type: max_f1 value: 66.4850136239782 - task: type: STS dataset: name: MTEB CDSC-R type: PL-MTEB/cdscr-sts config: default split: test revision: None metrics: - type: cos_sim_pearson value: 93.12903172428328 - type: cos_sim_spearman value: 92.66381487060741 - type: euclidean_pearson value: 90.37278396708922 - type: euclidean_spearman value: 92.66381487060741 - type: manhattan_pearson value: 90.32503296540962 - type: manhattan_spearman value: 92.6902938354313 - task: type: Retrieval dataset: name: MTEB DBPedia-PL type: clarin-knext/dbpedia-pl config: default split: test revision: 76afe41d9af165cc40999fcaa92312b8b012064a metrics: - type: map_at_1 value: 8.83 - type: map_at_10 value: 18.326 - type: map_at_100 value: 26.496 - type: map_at_1000 value: 28.455000000000002 - type: map_at_3 value: 12.933 - type: map_at_5 value: 15.168000000000001 - type: mrr_at_1 value: 66.0 - type: mrr_at_10 value: 72.76700000000001 - type: mrr_at_100 value: 73.203 - type: mrr_at_1000 value: 73.219 - type: mrr_at_3 value: 71.458 - type: mrr_at_5 value: 72.246 - type: ndcg_at_1 value: 55.375 - type: ndcg_at_10 value: 41.3 - type: ndcg_at_100 value: 45.891 - type: ndcg_at_1000 value: 52.905 - type: ndcg_at_3 value: 46.472 - type: ndcg_at_5 value: 43.734 - type: precision_at_1 value: 66.0 - type: precision_at_10 value: 33.074999999999996 - type: precision_at_100 value: 11.094999999999999 - type: precision_at_1000 value: 2.374 - type: precision_at_3 value: 48.583 - type: precision_at_5 value: 42.0 - type: recall_at_1 value: 8.83 - type: recall_at_10 value: 22.587 - type: recall_at_100 value: 50.61600000000001 - type: recall_at_1000 value: 73.559 - type: recall_at_3 value: 13.688 - type: recall_at_5 value: 16.855 - task: type: Retrieval dataset: name: MTEB FiQA-PL type: clarin-knext/fiqa-pl config: default split: test revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e metrics: - type: map_at_1 value: 20.587 - type: map_at_10 value: 33.095 - type: map_at_100 value: 35.24 - type: map_at_1000 value: 35.429 - type: map_at_3 value: 28.626 - type: map_at_5 value: 31.136999999999997 - type: mrr_at_1 value: 40.586 - type: mrr_at_10 value: 49.033 - type: mrr_at_100 value: 49.952999999999996 - type: mrr_at_1000 value: 49.992 - type: mrr_at_3 value: 46.553 - type: mrr_at_5 value: 48.035 - type: ndcg_at_1 value: 40.586 - type: ndcg_at_10 value: 41.046 - type: ndcg_at_100 value: 48.586 - type: ndcg_at_1000 value: 51.634 - type: ndcg_at_3 value: 36.773 - type: ndcg_at_5 value: 38.389 - type: precision_at_1 value: 40.586 - type: precision_at_10 value: 11.466 - type: precision_at_100 value: 1.909 - type: precision_at_1000 value: 0.245 - type: precision_at_3 value: 24.434 - type: precision_at_5 value: 18.426000000000002 - type: recall_at_1 value: 20.587 - type: recall_at_10 value: 47.986000000000004 - type: recall_at_100 value: 75.761 - type: recall_at_1000 value: 94.065 - type: recall_at_3 value: 33.339 - type: recall_at_5 value: 39.765 - task: type: Retrieval dataset: name: MTEB HotpotQA-PL type: clarin-knext/hotpotqa-pl config: default split: test revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907 metrics: - type: map_at_1 value: 40.878 - type: map_at_10 value: 58.775999999999996 - type: map_at_100 value: 59.632 - type: map_at_1000 value: 59.707 - type: map_at_3 value: 56.074 - type: map_at_5 value: 57.629 - type: mrr_at_1 value: 81.756 - type: mrr_at_10 value: 86.117 - type: mrr_at_100 value: 86.299 - type: mrr_at_1000 value: 86.30600000000001 - type: mrr_at_3 value: 85.345 - type: mrr_at_5 value: 85.832 - type: ndcg_at_1 value: 81.756 - type: ndcg_at_10 value: 67.608 - type: ndcg_at_100 value: 70.575 - type: ndcg_at_1000 value: 71.99600000000001 - type: ndcg_at_3 value: 63.723 - type: ndcg_at_5 value: 65.70700000000001 - type: precision_at_1 value: 81.756 - type: precision_at_10 value: 13.619 - type: precision_at_100 value: 1.5939999999999999 - type: precision_at_1000 value: 0.178 - type: precision_at_3 value: 39.604 - type: precision_at_5 value: 25.332 - type: recall_at_1 value: 40.878 - type: recall_at_10 value: 68.096 - type: recall_at_100 value: 79.696 - type: recall_at_1000 value: 89.082 - type: recall_at_3 value: 59.406000000000006 - type: recall_at_5 value: 63.329 - task: type: Retrieval dataset: name: MTEB MSMARCO-PL type: clarin-knext/msmarco-pl config: default split: test revision: 8634c07806d5cce3a6138e260e59b81760a0a640 metrics: - type: map_at_1 value: 2.1839999999999997 - type: map_at_10 value: 11.346 - type: map_at_100 value: 30.325000000000003 - type: map_at_1000 value: 37.806 - type: map_at_3 value: 4.842 - type: map_at_5 value: 6.891 - type: mrr_at_1 value: 86.047 - type: mrr_at_10 value: 89.14699999999999 - type: mrr_at_100 value: 89.46600000000001 - type: mrr_at_1000 value: 89.46600000000001 - type: mrr_at_3 value: 89.14699999999999 - type: mrr_at_5 value: 89.14699999999999 - type: ndcg_at_1 value: 67.829 - type: ndcg_at_10 value: 62.222 - type: ndcg_at_100 value: 55.337 - type: ndcg_at_1000 value: 64.076 - type: ndcg_at_3 value: 68.12700000000001 - type: ndcg_at_5 value: 64.987 - type: precision_at_1 value: 86.047 - type: precision_at_10 value: 69.535 - type: precision_at_100 value: 32.93 - type: precision_at_1000 value: 6.6049999999999995 - type: precision_at_3 value: 79.845 - type: precision_at_5 value: 75.349 - type: recall_at_1 value: 2.1839999999999997 - type: recall_at_10 value: 12.866 - type: recall_at_100 value: 43.505 - type: recall_at_1000 value: 72.366 - type: recall_at_3 value: 4.947 - type: recall_at_5 value: 7.192 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 80.75319435104238 - type: f1 value: 77.58961444860606 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 85.54472091459313 - type: f1 value: 84.29498563572106 - task: type: Retrieval dataset: name: MTEB NFCorpus-PL type: clarin-knext/nfcorpus-pl config: default split: test revision: 9a6f9567fda928260afed2de480d79c98bf0bec0 metrics: - type: map_at_1 value: 4.367 - type: map_at_10 value: 10.38 - type: map_at_100 value: 13.516 - type: map_at_1000 value: 14.982000000000001 - type: map_at_3 value: 7.367 - type: map_at_5 value: 8.59 - type: mrr_at_1 value: 41.486000000000004 - type: mrr_at_10 value: 48.886 - type: mrr_at_100 value: 49.657000000000004 - type: mrr_at_1000 value: 49.713 - type: mrr_at_3 value: 46.904 - type: mrr_at_5 value: 48.065000000000005 - type: ndcg_at_1 value: 40.402 - type: ndcg_at_10 value: 30.885 - type: ndcg_at_100 value: 28.393 - type: ndcg_at_1000 value: 37.428 - type: ndcg_at_3 value: 35.394999999999996 - type: ndcg_at_5 value: 33.391999999999996 - type: precision_at_1 value: 41.486000000000004 - type: precision_at_10 value: 23.437 - type: precision_at_100 value: 7.638 - type: precision_at_1000 value: 2.0389999999999997 - type: precision_at_3 value: 32.817 - type: precision_at_5 value: 28.915999999999997 - type: recall_at_1 value: 4.367 - type: recall_at_10 value: 14.655000000000001 - type: recall_at_100 value: 29.665999999999997 - type: recall_at_1000 value: 62.073 - type: recall_at_3 value: 8.51 - type: recall_at_5 value: 10.689 - task: type: Retrieval dataset: name: MTEB NQ-PL type: clarin-knext/nq-pl config: default split: test revision: f171245712cf85dd4700b06bef18001578d0ca8d metrics: - type: map_at_1 value: 28.616000000000003 - type: map_at_10 value: 41.626000000000005 - type: map_at_100 value: 42.689 - type: map_at_1000 value: 42.733 - type: map_at_3 value: 37.729 - type: map_at_5 value: 39.879999999999995 - type: mrr_at_1 value: 32.068000000000005 - type: mrr_at_10 value: 44.029 - type: mrr_at_100 value: 44.87 - type: mrr_at_1000 value: 44.901 - type: mrr_at_3 value: 40.687 - type: mrr_at_5 value: 42.625 - type: ndcg_at_1 value: 32.068000000000005 - type: ndcg_at_10 value: 48.449999999999996 - type: ndcg_at_100 value: 53.13 - type: ndcg_at_1000 value: 54.186 - type: ndcg_at_3 value: 40.983999999999995 - type: ndcg_at_5 value: 44.628 - type: precision_at_1 value: 32.068000000000005 - type: precision_at_10 value: 7.9750000000000005 - type: precision_at_100 value: 1.061 - type: precision_at_1000 value: 0.116 - type: precision_at_3 value: 18.404999999999998 - type: precision_at_5 value: 13.111 - type: recall_at_1 value: 28.616000000000003 - type: recall_at_10 value: 66.956 - type: recall_at_100 value: 87.657 - type: recall_at_1000 value: 95.548 - type: recall_at_3 value: 47.453 - type: recall_at_5 value: 55.87800000000001 - task: type: Classification dataset: name: MTEB PAC type: laugustyniak/abusive-clauses-pl config: default split: test revision: None metrics: - type: accuracy value: 69.04141326382856 - type: ap value: 77.47589122111044 - type: f1 value: 66.6332277374775 - task: type: PairClassification dataset: name: MTEB PPC type: PL-MTEB/ppc-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 86.4 - type: cos_sim_ap value: 94.1044939667201 - type: cos_sim_f1 value: 88.78048780487805 - type: cos_sim_precision value: 87.22044728434504 - type: cos_sim_recall value: 90.39735099337747 - type: dot_accuracy value: 86.4 - type: dot_ap value: 94.1044939667201 - type: dot_f1 value: 88.78048780487805 - type: dot_precision value: 87.22044728434504 - type: dot_recall value: 90.39735099337747 - type: euclidean_accuracy value: 86.4 - type: euclidean_ap value: 94.1044939667201 - type: euclidean_f1 value: 88.78048780487805 - type: euclidean_precision value: 87.22044728434504 - type: euclidean_recall value: 90.39735099337747 - type: manhattan_accuracy value: 86.4 - type: manhattan_ap value: 94.11438365697387 - type: manhattan_f1 value: 88.77968877968877 - type: manhattan_precision value: 87.84440842787681 - type: manhattan_recall value: 89.73509933774835 - type: max_accuracy value: 86.4 - type: max_ap value: 94.11438365697387 - type: max_f1 value: 88.78048780487805 - task: type: PairClassification dataset: name: MTEB PSC type: PL-MTEB/psc-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 97.86641929499072 - type: cos_sim_ap value: 99.36904211868182 - type: cos_sim_f1 value: 96.56203288490283 - type: cos_sim_precision value: 94.72140762463343 - type: cos_sim_recall value: 98.47560975609755 - type: dot_accuracy value: 97.86641929499072 - type: dot_ap value: 99.36904211868183 - type: dot_f1 value: 96.56203288490283 - type: dot_precision value: 94.72140762463343 - type: dot_recall value: 98.47560975609755 - type: euclidean_accuracy value: 97.86641929499072 - type: euclidean_ap value: 99.36904211868183 - type: euclidean_f1 value: 96.56203288490283 - type: euclidean_precision value: 94.72140762463343 - type: euclidean_recall value: 98.47560975609755 - type: manhattan_accuracy value: 98.14471243042672 - type: manhattan_ap value: 99.43359540492416 - type: manhattan_f1 value: 96.98795180722892 - type: manhattan_precision value: 95.83333333333334 - type: manhattan_recall value: 98.17073170731707 - type: max_accuracy value: 98.14471243042672 - type: max_ap value: 99.43359540492416 - type: max_f1 value: 96.98795180722892 - task: type: Classification dataset: name: MTEB PolEmo2.0-IN type: PL-MTEB/polemo2_in config: default split: test revision: None metrics: - type: accuracy value: 89.39058171745152 - type: f1 value: 86.8552093529568 - task: type: Classification dataset: name: MTEB PolEmo2.0-OUT type: PL-MTEB/polemo2_out config: default split: test revision: None metrics: - type: accuracy value: 74.97975708502024 - type: f1 value: 58.73081628832407 - task: type: Retrieval dataset: name: MTEB Quora-PL type: clarin-knext/quora-pl config: default split: test revision: 0be27e93455051e531182b85e85e425aba12e9d4 metrics: - type: map_at_1 value: 64.917 - type: map_at_10 value: 78.74600000000001 - type: map_at_100 value: 79.501 - type: map_at_1000 value: 79.524 - type: map_at_3 value: 75.549 - type: map_at_5 value: 77.495 - type: mrr_at_1 value: 74.9 - type: mrr_at_10 value: 82.112 - type: mrr_at_100 value: 82.314 - type: mrr_at_1000 value: 82.317 - type: mrr_at_3 value: 80.745 - type: mrr_at_5 value: 81.607 - type: ndcg_at_1 value: 74.83999999999999 - type: ndcg_at_10 value: 83.214 - type: ndcg_at_100 value: 84.997 - type: ndcg_at_1000 value: 85.207 - type: ndcg_at_3 value: 79.547 - type: ndcg_at_5 value: 81.46600000000001 - type: precision_at_1 value: 74.83999999999999 - type: precision_at_10 value: 12.822 - type: precision_at_100 value: 1.506 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 34.903 - type: precision_at_5 value: 23.16 - type: recall_at_1 value: 64.917 - type: recall_at_10 value: 92.27199999999999 - type: recall_at_100 value: 98.715 - type: recall_at_1000 value: 99.854 - type: recall_at_3 value: 82.04599999999999 - type: recall_at_5 value: 87.2 - task: type: Retrieval dataset: name: MTEB SCIDOCS-PL type: clarin-knext/scidocs-pl config: default split: test revision: 45452b03f05560207ef19149545f168e596c9337 metrics: - type: map_at_1 value: 3.51 - type: map_at_10 value: 9.046999999999999 - type: map_at_100 value: 10.823 - type: map_at_1000 value: 11.144 - type: map_at_3 value: 6.257 - type: map_at_5 value: 7.648000000000001 - type: mrr_at_1 value: 17.299999999999997 - type: mrr_at_10 value: 27.419 - type: mrr_at_100 value: 28.618 - type: mrr_at_1000 value: 28.685 - type: mrr_at_3 value: 23.817 - type: mrr_at_5 value: 25.927 - type: ndcg_at_1 value: 17.299999999999997 - type: ndcg_at_10 value: 16.084 - type: ndcg_at_100 value: 23.729 - type: ndcg_at_1000 value: 29.476999999999997 - type: ndcg_at_3 value: 14.327000000000002 - type: ndcg_at_5 value: 13.017999999999999 - type: precision_at_1 value: 17.299999999999997 - type: precision_at_10 value: 8.63 - type: precision_at_100 value: 1.981 - type: precision_at_1000 value: 0.336 - type: precision_at_3 value: 13.4 - type: precision_at_5 value: 11.700000000000001 - type: recall_at_1 value: 3.51 - type: recall_at_10 value: 17.518 - type: recall_at_100 value: 40.275 - type: recall_at_1000 value: 68.203 - type: recall_at_3 value: 8.155 - type: recall_at_5 value: 11.875 - task: type: PairClassification dataset: name: MTEB SICK-E-PL type: PL-MTEB/sicke-pl-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 86.30248675091724 - type: cos_sim_ap value: 83.6756734006714 - type: cos_sim_f1 value: 74.97367497367497 - type: cos_sim_precision value: 73.91003460207612 - type: cos_sim_recall value: 76.06837606837607 - type: dot_accuracy value: 86.30248675091724 - type: dot_ap value: 83.6756734006714 - type: dot_f1 value: 74.97367497367497 - type: dot_precision value: 73.91003460207612 - type: dot_recall value: 76.06837606837607 - type: euclidean_accuracy value: 86.30248675091724 - type: euclidean_ap value: 83.67566984333091 - type: euclidean_f1 value: 74.97367497367497 - type: euclidean_precision value: 73.91003460207612 - type: euclidean_recall value: 76.06837606837607 - type: manhattan_accuracy value: 86.28210354667753 - type: manhattan_ap value: 83.64216119130171 - type: manhattan_f1 value: 74.92152075340078 - type: manhattan_precision value: 73.4107997265892 - type: manhattan_recall value: 76.49572649572649 - type: max_accuracy value: 86.30248675091724 - type: max_ap value: 83.6756734006714 - type: max_f1 value: 74.97367497367497 - task: type: STS dataset: name: MTEB SICK-R-PL type: PL-MTEB/sickr-pl-sts config: default split: test revision: None metrics: - type: cos_sim_pearson value: 82.23295940859121 - type: cos_sim_spearman value: 78.89329160768719 - type: euclidean_pearson value: 79.56019107076818 - type: euclidean_spearman value: 78.89330209904084 - type: manhattan_pearson value: 79.76098513973719 - type: manhattan_spearman value: 79.05490162570123 - task: type: STS dataset: name: MTEB STS22 (pl) type: mteb/sts22-crosslingual-sts config: pl split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 37.732606308062486 - type: cos_sim_spearman value: 41.01645667030284 - type: euclidean_pearson value: 26.61722556367085 - type: euclidean_spearman value: 41.01645667030284 - type: manhattan_pearson value: 26.60917378970807 - type: manhattan_spearman value: 41.51335727617614 - task: type: Retrieval dataset: name: MTEB SciFact-PL type: clarin-knext/scifact-pl config: default split: test revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e metrics: - type: map_at_1 value: 54.31700000000001 - type: map_at_10 value: 65.564 - type: map_at_100 value: 66.062 - type: map_at_1000 value: 66.08699999999999 - type: map_at_3 value: 62.592999999999996 - type: map_at_5 value: 63.888 - type: mrr_at_1 value: 56.99999999999999 - type: mrr_at_10 value: 66.412 - type: mrr_at_100 value: 66.85900000000001 - type: mrr_at_1000 value: 66.88 - type: mrr_at_3 value: 64.22200000000001 - type: mrr_at_5 value: 65.206 - type: ndcg_at_1 value: 56.99999999999999 - type: ndcg_at_10 value: 70.577 - type: ndcg_at_100 value: 72.879 - type: ndcg_at_1000 value: 73.45 - type: ndcg_at_3 value: 65.5 - type: ndcg_at_5 value: 67.278 - type: precision_at_1 value: 56.99999999999999 - type: precision_at_10 value: 9.667 - type: precision_at_100 value: 1.083 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.0 - type: precision_at_5 value: 16.933 - type: recall_at_1 value: 54.31700000000001 - type: recall_at_10 value: 85.056 - type: recall_at_100 value: 95.667 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 71.0 - type: recall_at_5 value: 75.672 - task: type: Retrieval dataset: name: MTEB TRECCOVID-PL type: clarin-knext/trec-covid-pl config: default split: test revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd metrics: - type: map_at_1 value: 0.245 - type: map_at_10 value: 2.051 - type: map_at_100 value: 12.009 - type: map_at_1000 value: 27.448 - type: map_at_3 value: 0.721 - type: map_at_5 value: 1.13 - type: mrr_at_1 value: 88.0 - type: mrr_at_10 value: 93.0 - type: mrr_at_100 value: 93.0 - type: mrr_at_1000 value: 93.0 - type: mrr_at_3 value: 93.0 - type: mrr_at_5 value: 93.0 - type: ndcg_at_1 value: 85.0 - type: ndcg_at_10 value: 80.303 - type: ndcg_at_100 value: 61.23499999999999 - type: ndcg_at_1000 value: 52.978 - type: ndcg_at_3 value: 84.419 - type: ndcg_at_5 value: 82.976 - type: precision_at_1 value: 88.0 - type: precision_at_10 value: 83.39999999999999 - type: precision_at_100 value: 61.96 - type: precision_at_1000 value: 22.648 - type: precision_at_3 value: 89.333 - type: precision_at_5 value: 87.2 - type: recall_at_1 value: 0.245 - type: recall_at_10 value: 2.193 - type: recall_at_100 value: 14.938 - type: recall_at_1000 value: 48.563 - type: recall_at_3 value: 0.738 - type: recall_at_5 value: 1.173 --- # LXC1999/gte-Qwen2-7B-instruct-Q6_K-GGUF This model was converted to GGUF format from [`Alibaba-NLP/gte-Qwen2-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo LXC1999/gte-Qwen2-7B-instruct-Q6_K-GGUF --hf-file gte-qwen2-7b-instruct-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo LXC1999/gte-Qwen2-7B-instruct-Q6_K-GGUF --hf-file gte-qwen2-7b-instruct-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo LXC1999/gte-Qwen2-7B-instruct-Q6_K-GGUF --hf-file gte-qwen2-7b-instruct-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo LXC1999/gte-Qwen2-7B-instruct-Q6_K-GGUF --hf-file gte-qwen2-7b-instruct-q6_k.gguf -c 2048 ```
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
Teradata/jina-embeddings-v2-small-en
Teradata
feature-extraction
[ "onnx", "bert", "feature-extraction", "sentence-similarity", "mteb", "teradata", "custom_code", "en", "dataset:jinaai/negation-dataset", "license:apache-2.0", "model-index", "region:us" ]
1,739
1,741
22
0
--- datasets: - jinaai/negation-dataset language: en license: apache-2.0 tags: - feature-extraction - sentence-similarity - mteb - onnx - teradata inference: false model-index: - name: jina-embedding-s-en-v2 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 71.35820895522387 - type: ap value: 33.99931933598115 - type: f1 value: 65.3853685535555 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 82.90140000000001 - type: ap value: 78.01434597815617 - type: f1 value: 82.83357802722676 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 40.88999999999999 - type: f1 value: 39.209432767163456 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 23.257 - type: map_at_10 value: 37.946000000000005 - type: map_at_100 value: 39.17 - type: map_at_1000 value: 39.181 - type: map_at_3 value: 32.99 - type: map_at_5 value: 35.467999999999996 - type: mrr_at_1 value: 23.541999999999998 - type: mrr_at_10 value: 38.057 - type: mrr_at_100 value: 39.289 - type: mrr_at_1000 value: 39.299 - type: mrr_at_3 value: 33.096 - type: mrr_at_5 value: 35.628 - type: ndcg_at_1 value: 23.257 - type: ndcg_at_10 value: 46.729 - type: ndcg_at_100 value: 51.900999999999996 - type: ndcg_at_1000 value: 52.16 - type: ndcg_at_3 value: 36.323 - type: ndcg_at_5 value: 40.766999999999996 - type: precision_at_1 value: 23.257 - type: precision_at_10 value: 7.510999999999999 - type: precision_at_100 value: 0.976 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 15.339 - type: precision_at_5 value: 11.350999999999999 - type: recall_at_1 value: 23.257 - type: recall_at_10 value: 75.107 - type: recall_at_100 value: 97.58200000000001 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 46.017 - type: recall_at_5 value: 56.757000000000005 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 44.02420878391967 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 35.16136856000258 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 59.61809790513646 - type: mrr value: 73.07215406938397 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 82.0167350090749 - type: cos_sim_spearman value: 80.51569002630401 - type: euclidean_pearson value: 81.46820525099726 - type: euclidean_spearman value: 80.51569002630401 - type: manhattan_pearson value: 81.35596555056757 - type: manhattan_spearman value: 80.12592210903303 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 78.25 - type: f1 value: 77.34950913540605 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.57238596005698 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 29.066444306196683 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 31.891000000000002 - type: map_at_10 value: 42.772 - type: map_at_100 value: 44.108999999999995 - type: map_at_1000 value: 44.236 - type: map_at_3 value: 39.289 - type: map_at_5 value: 41.113 - type: mrr_at_1 value: 39.342 - type: mrr_at_10 value: 48.852000000000004 - type: mrr_at_100 value: 49.534 - type: mrr_at_1000 value: 49.582 - type: mrr_at_3 value: 46.089999999999996 - type: mrr_at_5 value: 47.685 - type: ndcg_at_1 value: 39.342 - type: ndcg_at_10 value: 48.988 - type: ndcg_at_100 value: 53.854 - type: ndcg_at_1000 value: 55.955 - type: ndcg_at_3 value: 43.877 - type: ndcg_at_5 value: 46.027 - type: precision_at_1 value: 39.342 - type: precision_at_10 value: 9.285 - type: precision_at_100 value: 1.488 - type: precision_at_1000 value: 0.194 - type: precision_at_3 value: 20.696 - type: precision_at_5 value: 14.878 - type: recall_at_1 value: 31.891000000000002 - type: recall_at_10 value: 60.608 - type: recall_at_100 value: 81.025 - type: recall_at_1000 value: 94.883 - type: recall_at_3 value: 45.694 - type: recall_at_5 value: 51.684 - type: map_at_1 value: 28.778 - type: map_at_10 value: 37.632 - type: map_at_100 value: 38.800000000000004 - type: map_at_1000 value: 38.934999999999995 - type: map_at_3 value: 35.293 - type: map_at_5 value: 36.547000000000004 - type: mrr_at_1 value: 35.35 - type: mrr_at_10 value: 42.936 - type: mrr_at_100 value: 43.69 - type: mrr_at_1000 value: 43.739 - type: mrr_at_3 value: 41.062 - type: mrr_at_5 value: 42.097 - type: ndcg_at_1 value: 35.35 - type: ndcg_at_10 value: 42.528 - type: ndcg_at_100 value: 46.983000000000004 - type: ndcg_at_1000 value: 49.187999999999995 - type: ndcg_at_3 value: 39.271 - type: ndcg_at_5 value: 40.654 - type: precision_at_1 value: 35.35 - type: precision_at_10 value: 7.828 - type: precision_at_100 value: 1.3010000000000002 - type: precision_at_1000 value: 0.17700000000000002 - type: precision_at_3 value: 18.96 - type: precision_at_5 value: 13.120999999999999 - type: recall_at_1 value: 28.778 - type: recall_at_10 value: 50.775000000000006 - type: recall_at_100 value: 69.66799999999999 - type: recall_at_1000 value: 83.638 - type: recall_at_3 value: 40.757 - type: recall_at_5 value: 44.86 - type: map_at_1 value: 37.584 - type: map_at_10 value: 49.69 - type: map_at_100 value: 50.639 - type: map_at_1000 value: 50.702999999999996 - type: map_at_3 value: 46.61 - type: map_at_5 value: 48.486000000000004 - type: mrr_at_1 value: 43.009 - type: mrr_at_10 value: 52.949999999999996 - type: mrr_at_100 value: 53.618 - type: mrr_at_1000 value: 53.65299999999999 - type: mrr_at_3 value: 50.605999999999995 - type: mrr_at_5 value: 52.095 - type: ndcg_at_1 value: 43.009 - type: ndcg_at_10 value: 55.278000000000006 - type: ndcg_at_100 value: 59.134 - type: ndcg_at_1000 value: 60.528999999999996 - type: ndcg_at_3 value: 50.184 - type: ndcg_at_5 value: 52.919000000000004 - type: precision_at_1 value: 43.009 - type: precision_at_10 value: 8.821 - type: precision_at_100 value: 1.161 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 22.424 - type: precision_at_5 value: 15.436 - type: recall_at_1 value: 37.584 - type: recall_at_10 value: 68.514 - type: recall_at_100 value: 85.099 - type: recall_at_1000 value: 95.123 - type: recall_at_3 value: 55.007 - type: recall_at_5 value: 61.714999999999996 - type: map_at_1 value: 24.7 - type: map_at_10 value: 32.804 - type: map_at_100 value: 33.738 - type: map_at_1000 value: 33.825 - type: map_at_3 value: 30.639 - type: map_at_5 value: 31.781 - type: mrr_at_1 value: 26.328000000000003 - type: mrr_at_10 value: 34.679 - type: mrr_at_100 value: 35.510000000000005 - type: mrr_at_1000 value: 35.577999999999996 - type: mrr_at_3 value: 32.58 - type: mrr_at_5 value: 33.687 - type: ndcg_at_1 value: 26.328000000000003 - type: ndcg_at_10 value: 37.313 - type: ndcg_at_100 value: 42.004000000000005 - type: ndcg_at_1000 value: 44.232 - type: ndcg_at_3 value: 33.076 - type: ndcg_at_5 value: 34.966 - type: precision_at_1 value: 26.328000000000003 - type: precision_at_10 value: 5.627 - type: precision_at_100 value: 0.8410000000000001 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 14.011000000000001 - type: precision_at_5 value: 9.582 - type: recall_at_1 value: 24.7 - type: recall_at_10 value: 49.324 - type: recall_at_100 value: 71.018 - type: recall_at_1000 value: 87.905 - type: recall_at_3 value: 37.7 - type: recall_at_5 value: 42.281 - type: map_at_1 value: 14.350999999999999 - type: map_at_10 value: 21.745 - type: map_at_100 value: 22.731 - type: map_at_1000 value: 22.852 - type: map_at_3 value: 19.245 - type: map_at_5 value: 20.788 - type: mrr_at_1 value: 18.159 - type: mrr_at_10 value: 25.833000000000002 - type: mrr_at_100 value: 26.728 - type: mrr_at_1000 value: 26.802 - type: mrr_at_3 value: 23.383000000000003 - type: mrr_at_5 value: 24.887999999999998 - type: ndcg_at_1 value: 18.159 - type: ndcg_at_10 value: 26.518000000000004 - type: ndcg_at_100 value: 31.473000000000003 - type: ndcg_at_1000 value: 34.576 - type: ndcg_at_3 value: 21.907 - type: ndcg_at_5 value: 24.39 - type: precision_at_1 value: 18.159 - type: precision_at_10 value: 4.938 - type: precision_at_100 value: 0.853 - type: precision_at_1000 value: 0.125 - type: precision_at_3 value: 10.655000000000001 - type: precision_at_5 value: 7.985 - type: recall_at_1 value: 14.350999999999999 - type: recall_at_10 value: 37.284 - type: recall_at_100 value: 59.11300000000001 - type: recall_at_1000 value: 81.634 - type: recall_at_3 value: 24.753 - type: recall_at_5 value: 30.979 - type: map_at_1 value: 26.978 - type: map_at_10 value: 36.276 - type: map_at_100 value: 37.547000000000004 - type: map_at_1000 value: 37.678 - type: map_at_3 value: 33.674 - type: map_at_5 value: 35.119 - type: mrr_at_1 value: 32.916000000000004 - type: mrr_at_10 value: 41.798 - type: mrr_at_100 value: 42.72 - type: mrr_at_1000 value: 42.778 - type: mrr_at_3 value: 39.493 - type: mrr_at_5 value: 40.927 - type: ndcg_at_1 value: 32.916000000000004 - type: ndcg_at_10 value: 41.81 - type: ndcg_at_100 value: 47.284 - type: ndcg_at_1000 value: 49.702 - type: ndcg_at_3 value: 37.486999999999995 - type: ndcg_at_5 value: 39.597 - type: precision_at_1 value: 32.916000000000004 - type: precision_at_10 value: 7.411 - type: precision_at_100 value: 1.189 - type: precision_at_1000 value: 0.158 - type: precision_at_3 value: 17.581 - type: precision_at_5 value: 12.397 - type: recall_at_1 value: 26.978 - type: recall_at_10 value: 52.869 - type: recall_at_100 value: 75.78399999999999 - type: recall_at_1000 value: 91.545 - type: recall_at_3 value: 40.717 - type: recall_at_5 value: 46.168 - type: map_at_1 value: 24.641 - type: map_at_10 value: 32.916000000000004 - type: map_at_100 value: 34.165 - type: map_at_1000 value: 34.286 - type: map_at_3 value: 30.335 - type: map_at_5 value: 31.569000000000003 - type: mrr_at_1 value: 30.593999999999998 - type: mrr_at_10 value: 38.448 - type: mrr_at_100 value: 39.299 - type: mrr_at_1000 value: 39.362 - type: mrr_at_3 value: 36.244 - type: mrr_at_5 value: 37.232 - type: ndcg_at_1 value: 30.593999999999998 - type: ndcg_at_10 value: 38.2 - type: ndcg_at_100 value: 43.742 - type: ndcg_at_1000 value: 46.217000000000006 - type: ndcg_at_3 value: 33.925 - type: ndcg_at_5 value: 35.394 - type: precision_at_1 value: 30.593999999999998 - type: precision_at_10 value: 6.895 - type: precision_at_100 value: 1.1320000000000001 - type: precision_at_1000 value: 0.153 - type: precision_at_3 value: 16.096 - type: precision_at_5 value: 11.05 - type: recall_at_1 value: 24.641 - type: recall_at_10 value: 48.588 - type: recall_at_100 value: 72.841 - type: recall_at_1000 value: 89.535 - type: recall_at_3 value: 36.087 - type: recall_at_5 value: 40.346 - type: map_at_1 value: 24.79425 - type: map_at_10 value: 33.12033333333333 - type: map_at_100 value: 34.221333333333334 - type: map_at_1000 value: 34.3435 - type: map_at_3 value: 30.636583333333338 - type: map_at_5 value: 31.974083333333326 - type: mrr_at_1 value: 29.242416666666664 - type: mrr_at_10 value: 37.11675 - type: mrr_at_100 value: 37.93783333333334 - type: mrr_at_1000 value: 38.003083333333336 - type: mrr_at_3 value: 34.904666666666664 - type: mrr_at_5 value: 36.12916666666667 - type: ndcg_at_1 value: 29.242416666666664 - type: ndcg_at_10 value: 38.03416666666667 - type: ndcg_at_100 value: 42.86674999999999 - type: ndcg_at_1000 value: 45.34550000000001 - type: ndcg_at_3 value: 33.76466666666666 - type: ndcg_at_5 value: 35.668666666666674 - type: precision_at_1 value: 29.242416666666664 - type: precision_at_10 value: 6.589833333333334 - type: precision_at_100 value: 1.0693333333333332 - type: precision_at_1000 value: 0.14641666666666667 - type: precision_at_3 value: 15.430749999999998 - type: precision_at_5 value: 10.833833333333333 - type: recall_at_1 value: 24.79425 - type: recall_at_10 value: 48.582916666666655 - type: recall_at_100 value: 69.88499999999999 - type: recall_at_1000 value: 87.211 - type: recall_at_3 value: 36.625499999999995 - type: recall_at_5 value: 41.553999999999995 - type: map_at_1 value: 22.767 - type: map_at_10 value: 28.450999999999997 - type: map_at_100 value: 29.332 - type: map_at_1000 value: 29.426000000000002 - type: map_at_3 value: 26.379 - type: map_at_5 value: 27.584999999999997 - type: mrr_at_1 value: 25.46 - type: mrr_at_10 value: 30.974 - type: mrr_at_100 value: 31.784000000000002 - type: mrr_at_1000 value: 31.857999999999997 - type: mrr_at_3 value: 28.962 - type: mrr_at_5 value: 30.066 - type: ndcg_at_1 value: 25.46 - type: ndcg_at_10 value: 32.041 - type: ndcg_at_100 value: 36.522 - type: ndcg_at_1000 value: 39.101 - type: ndcg_at_3 value: 28.152 - type: ndcg_at_5 value: 30.03 - type: precision_at_1 value: 25.46 - type: precision_at_10 value: 4.893 - type: precision_at_100 value: 0.77 - type: precision_at_1000 value: 0.107 - type: precision_at_3 value: 11.605 - type: precision_at_5 value: 8.19 - type: recall_at_1 value: 22.767 - type: recall_at_10 value: 40.71 - type: recall_at_100 value: 61.334999999999994 - type: recall_at_1000 value: 80.567 - type: recall_at_3 value: 30.198000000000004 - type: recall_at_5 value: 34.803 - type: map_at_1 value: 16.722 - type: map_at_10 value: 22.794 - type: map_at_100 value: 23.7 - type: map_at_1000 value: 23.822 - type: map_at_3 value: 20.781 - type: map_at_5 value: 22.024 - type: mrr_at_1 value: 20.061999999999998 - type: mrr_at_10 value: 26.346999999999998 - type: mrr_at_100 value: 27.153 - type: mrr_at_1000 value: 27.233 - type: mrr_at_3 value: 24.375 - type: mrr_at_5 value: 25.593 - type: ndcg_at_1 value: 20.061999999999998 - type: ndcg_at_10 value: 26.785999999999998 - type: ndcg_at_100 value: 31.319999999999997 - type: ndcg_at_1000 value: 34.346 - type: ndcg_at_3 value: 23.219 - type: ndcg_at_5 value: 25.107000000000003 - type: precision_at_1 value: 20.061999999999998 - type: precision_at_10 value: 4.78 - type: precision_at_100 value: 0.83 - type: precision_at_1000 value: 0.125 - type: precision_at_3 value: 10.874 - type: precision_at_5 value: 7.956 - type: recall_at_1 value: 16.722 - type: recall_at_10 value: 35.204 - type: recall_at_100 value: 55.797 - type: recall_at_1000 value: 77.689 - type: recall_at_3 value: 25.245 - type: recall_at_5 value: 30.115 - type: map_at_1 value: 24.842 - type: map_at_10 value: 32.917 - type: map_at_100 value: 33.961000000000006 - type: map_at_1000 value: 34.069 - type: map_at_3 value: 30.595 - type: map_at_5 value: 31.837 - type: mrr_at_1 value: 29.011 - type: mrr_at_10 value: 36.977 - type: mrr_at_100 value: 37.814 - type: mrr_at_1000 value: 37.885999999999996 - type: mrr_at_3 value: 34.966 - type: mrr_at_5 value: 36.043 - type: ndcg_at_1 value: 29.011 - type: ndcg_at_10 value: 37.735 - type: ndcg_at_100 value: 42.683 - type: ndcg_at_1000 value: 45.198 - type: ndcg_at_3 value: 33.650000000000006 - type: ndcg_at_5 value: 35.386 - type: precision_at_1 value: 29.011 - type: precision_at_10 value: 6.259 - type: precision_at_100 value: 0.984 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 15.329999999999998 - type: precision_at_5 value: 10.541 - type: recall_at_1 value: 24.842 - type: recall_at_10 value: 48.304 - type: recall_at_100 value: 70.04899999999999 - type: recall_at_1000 value: 87.82600000000001 - type: recall_at_3 value: 36.922 - type: recall_at_5 value: 41.449999999999996 - type: map_at_1 value: 24.252000000000002 - type: map_at_10 value: 32.293 - type: map_at_100 value: 33.816 - type: map_at_1000 value: 34.053 - type: map_at_3 value: 29.781999999999996 - type: map_at_5 value: 31.008000000000003 - type: mrr_at_1 value: 29.051 - type: mrr_at_10 value: 36.722 - type: mrr_at_100 value: 37.663000000000004 - type: mrr_at_1000 value: 37.734 - type: mrr_at_3 value: 34.354 - type: mrr_at_5 value: 35.609 - type: ndcg_at_1 value: 29.051 - type: ndcg_at_10 value: 37.775999999999996 - type: ndcg_at_100 value: 43.221 - type: ndcg_at_1000 value: 46.116 - type: ndcg_at_3 value: 33.403 - type: ndcg_at_5 value: 35.118 - type: precision_at_1 value: 29.051 - type: precision_at_10 value: 7.332 - type: precision_at_100 value: 1.49 - type: precision_at_1000 value: 0.23600000000000002 - type: precision_at_3 value: 15.415000000000001 - type: precision_at_5 value: 11.107 - type: recall_at_1 value: 24.252000000000002 - type: recall_at_10 value: 47.861 - type: recall_at_100 value: 72.21600000000001 - type: recall_at_1000 value: 90.886 - type: recall_at_3 value: 35.533 - type: recall_at_5 value: 39.959 - type: map_at_1 value: 20.025000000000002 - type: map_at_10 value: 27.154 - type: map_at_100 value: 28.118 - type: map_at_1000 value: 28.237000000000002 - type: map_at_3 value: 25.017 - type: map_at_5 value: 25.832 - type: mrr_at_1 value: 21.627 - type: mrr_at_10 value: 28.884999999999998 - type: mrr_at_100 value: 29.741 - type: mrr_at_1000 value: 29.831999999999997 - type: mrr_at_3 value: 26.741 - type: mrr_at_5 value: 27.628000000000004 - type: ndcg_at_1 value: 21.627 - type: ndcg_at_10 value: 31.436999999999998 - type: ndcg_at_100 value: 36.181000000000004 - type: ndcg_at_1000 value: 38.986 - type: ndcg_at_3 value: 27.025 - type: ndcg_at_5 value: 28.436 - type: precision_at_1 value: 21.627 - type: precision_at_10 value: 5.009 - type: precision_at_100 value: 0.7929999999999999 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 11.522 - type: precision_at_5 value: 7.763000000000001 - type: recall_at_1 value: 20.025000000000002 - type: recall_at_10 value: 42.954 - type: recall_at_100 value: 64.67500000000001 - type: recall_at_1000 value: 85.301 - type: recall_at_3 value: 30.892999999999997 - type: recall_at_5 value: 34.288000000000004 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 10.079 - type: map_at_10 value: 16.930999999999997 - type: map_at_100 value: 18.398999999999997 - type: map_at_1000 value: 18.561 - type: map_at_3 value: 14.294 - type: map_at_5 value: 15.579 - type: mrr_at_1 value: 22.606 - type: mrr_at_10 value: 32.513 - type: mrr_at_100 value: 33.463 - type: mrr_at_1000 value: 33.513999999999996 - type: mrr_at_3 value: 29.479 - type: mrr_at_5 value: 31.3 - type: ndcg_at_1 value: 22.606 - type: ndcg_at_10 value: 24.053 - type: ndcg_at_100 value: 30.258000000000003 - type: ndcg_at_1000 value: 33.516 - type: ndcg_at_3 value: 19.721 - type: ndcg_at_5 value: 21.144 - type: precision_at_1 value: 22.606 - type: precision_at_10 value: 7.55 - type: precision_at_100 value: 1.399 - type: precision_at_1000 value: 0.2 - type: precision_at_3 value: 14.701 - type: precision_at_5 value: 11.192 - type: recall_at_1 value: 10.079 - type: recall_at_10 value: 28.970000000000002 - type: recall_at_100 value: 50.805 - type: recall_at_1000 value: 69.378 - type: recall_at_3 value: 18.199 - type: recall_at_5 value: 22.442 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 7.794 - type: map_at_10 value: 15.165999999999999 - type: map_at_100 value: 20.508000000000003 - type: map_at_1000 value: 21.809 - type: map_at_3 value: 11.568000000000001 - type: map_at_5 value: 13.059000000000001 - type: mrr_at_1 value: 56.49999999999999 - type: mrr_at_10 value: 65.90899999999999 - type: mrr_at_100 value: 66.352 - type: mrr_at_1000 value: 66.369 - type: mrr_at_3 value: 64 - type: mrr_at_5 value: 65.10000000000001 - type: ndcg_at_1 value: 44.25 - type: ndcg_at_10 value: 32.649 - type: ndcg_at_100 value: 36.668 - type: ndcg_at_1000 value: 43.918 - type: ndcg_at_3 value: 37.096000000000004 - type: ndcg_at_5 value: 34.048 - type: precision_at_1 value: 56.49999999999999 - type: precision_at_10 value: 25.45 - type: precision_at_100 value: 8.055 - type: precision_at_1000 value: 1.7489999999999999 - type: precision_at_3 value: 41 - type: precision_at_5 value: 32.85 - type: recall_at_1 value: 7.794 - type: recall_at_10 value: 20.101 - type: recall_at_100 value: 42.448 - type: recall_at_1000 value: 65.88000000000001 - type: recall_at_3 value: 12.753 - type: recall_at_5 value: 15.307 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 44.01 - type: f1 value: 38.659680951114964 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 49.713 - type: map_at_10 value: 61.79 - type: map_at_100 value: 62.28 - type: map_at_1000 value: 62.297000000000004 - type: map_at_3 value: 59.361 - type: map_at_5 value: 60.92100000000001 - type: mrr_at_1 value: 53.405 - type: mrr_at_10 value: 65.79899999999999 - type: mrr_at_100 value: 66.219 - type: mrr_at_1000 value: 66.227 - type: mrr_at_3 value: 63.431000000000004 - type: mrr_at_5 value: 64.98 - type: ndcg_at_1 value: 53.405 - type: ndcg_at_10 value: 68.01899999999999 - type: ndcg_at_100 value: 70.197 - type: ndcg_at_1000 value: 70.571 - type: ndcg_at_3 value: 63.352 - type: ndcg_at_5 value: 66.018 - type: precision_at_1 value: 53.405 - type: precision_at_10 value: 9.119 - type: precision_at_100 value: 1.03 - type: precision_at_1000 value: 0.107 - type: precision_at_3 value: 25.602999999999998 - type: precision_at_5 value: 16.835 - type: recall_at_1 value: 49.713 - type: recall_at_10 value: 83.306 - type: recall_at_100 value: 92.92 - type: recall_at_1000 value: 95.577 - type: recall_at_3 value: 70.798 - type: recall_at_5 value: 77.254 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 15.310000000000002 - type: map_at_10 value: 26.204 - type: map_at_100 value: 27.932000000000002 - type: map_at_1000 value: 28.121000000000002 - type: map_at_3 value: 22.481 - type: map_at_5 value: 24.678 - type: mrr_at_1 value: 29.784 - type: mrr_at_10 value: 39.582 - type: mrr_at_100 value: 40.52 - type: mrr_at_1000 value: 40.568 - type: mrr_at_3 value: 37.114000000000004 - type: mrr_at_5 value: 38.596000000000004 - type: ndcg_at_1 value: 29.784 - type: ndcg_at_10 value: 33.432 - type: ndcg_at_100 value: 40.281 - type: ndcg_at_1000 value: 43.653999999999996 - type: ndcg_at_3 value: 29.612 - type: ndcg_at_5 value: 31.223 - type: precision_at_1 value: 29.784 - type: precision_at_10 value: 9.645 - type: precision_at_100 value: 1.645 - type: precision_at_1000 value: 0.22499999999999998 - type: precision_at_3 value: 20.165 - type: precision_at_5 value: 15.401000000000002 - type: recall_at_1 value: 15.310000000000002 - type: recall_at_10 value: 40.499 - type: recall_at_100 value: 66.643 - type: recall_at_1000 value: 87.059 - type: recall_at_3 value: 27.492 - type: recall_at_5 value: 33.748 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 33.599000000000004 - type: map_at_10 value: 47.347 - type: map_at_100 value: 48.191 - type: map_at_1000 value: 48.263 - type: map_at_3 value: 44.698 - type: map_at_5 value: 46.278999999999996 - type: mrr_at_1 value: 67.19800000000001 - type: mrr_at_10 value: 74.054 - type: mrr_at_100 value: 74.376 - type: mrr_at_1000 value: 74.392 - type: mrr_at_3 value: 72.849 - type: mrr_at_5 value: 73.643 - type: ndcg_at_1 value: 67.19800000000001 - type: ndcg_at_10 value: 56.482 - type: ndcg_at_100 value: 59.694 - type: ndcg_at_1000 value: 61.204 - type: ndcg_at_3 value: 52.43299999999999 - type: ndcg_at_5 value: 54.608000000000004 - type: precision_at_1 value: 67.19800000000001 - type: precision_at_10 value: 11.613999999999999 - type: precision_at_100 value: 1.415 - type: precision_at_1000 value: 0.16199999999999998 - type: precision_at_3 value: 32.726 - type: precision_at_5 value: 21.349999999999998 - type: recall_at_1 value: 33.599000000000004 - type: recall_at_10 value: 58.069 - type: recall_at_100 value: 70.736 - type: recall_at_1000 value: 80.804 - type: recall_at_3 value: 49.088 - type: recall_at_5 value: 53.376000000000005 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 73.64359999999999 - type: ap value: 67.54685976014599 - type: f1 value: 73.55148707559482 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 19.502 - type: map_at_10 value: 30.816 - type: map_at_100 value: 32.007999999999996 - type: map_at_1000 value: 32.067 - type: map_at_3 value: 27.215 - type: map_at_5 value: 29.304000000000002 - type: mrr_at_1 value: 20.072000000000003 - type: mrr_at_10 value: 31.406 - type: mrr_at_100 value: 32.549 - type: mrr_at_1000 value: 32.602 - type: mrr_at_3 value: 27.839000000000002 - type: mrr_at_5 value: 29.926000000000002 - type: ndcg_at_1 value: 20.086000000000002 - type: ndcg_at_10 value: 37.282 - type: ndcg_at_100 value: 43.206 - type: ndcg_at_1000 value: 44.690000000000005 - type: ndcg_at_3 value: 29.932 - type: ndcg_at_5 value: 33.668 - type: precision_at_1 value: 20.086000000000002 - type: precision_at_10 value: 5.961 - type: precision_at_100 value: 0.898 - type: precision_at_1000 value: 0.10200000000000001 - type: precision_at_3 value: 12.856000000000002 - type: precision_at_5 value: 9.596 - type: recall_at_1 value: 19.502 - type: recall_at_10 value: 57.182 - type: recall_at_100 value: 84.952 - type: recall_at_1000 value: 96.34700000000001 - type: recall_at_3 value: 37.193 - type: recall_at_5 value: 46.157 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.96488828089375 - type: f1 value: 93.32119260543482 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 72.4965800273598 - type: f1 value: 49.34896217536082 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.60928043039678 - type: f1 value: 64.34244712074538 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.75453934095493 - type: f1 value: 68.39224867489249 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.862573504920082 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 27.511123551196803 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.99145104942086 - type: mrr value: 32.03606480418627 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.015 - type: map_at_10 value: 11.054 - type: map_at_100 value: 13.773 - type: map_at_1000 value: 15.082999999999998 - type: map_at_3 value: 8.253 - type: map_at_5 value: 9.508999999999999 - type: mrr_at_1 value: 42.105 - type: mrr_at_10 value: 50.44499999999999 - type: mrr_at_100 value: 51.080000000000005 - type: mrr_at_1000 value: 51.129999999999995 - type: mrr_at_3 value: 48.555 - type: mrr_at_5 value: 49.84 - type: ndcg_at_1 value: 40.402 - type: ndcg_at_10 value: 30.403000000000002 - type: ndcg_at_100 value: 28.216 - type: ndcg_at_1000 value: 37.021 - type: ndcg_at_3 value: 35.53 - type: ndcg_at_5 value: 33.202999999999996 - type: precision_at_1 value: 42.105 - type: precision_at_10 value: 22.353 - type: precision_at_100 value: 7.266 - type: precision_at_1000 value: 2.011 - type: precision_at_3 value: 32.921 - type: precision_at_5 value: 28.297 - type: recall_at_1 value: 5.015 - type: recall_at_10 value: 14.393 - type: recall_at_100 value: 28.893 - type: recall_at_1000 value: 60.18 - type: recall_at_3 value: 9.184000000000001 - type: recall_at_5 value: 11.39 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 29.524 - type: map_at_10 value: 44.182 - type: map_at_100 value: 45.228 - type: map_at_1000 value: 45.265 - type: map_at_3 value: 39.978 - type: map_at_5 value: 42.482 - type: mrr_at_1 value: 33.256 - type: mrr_at_10 value: 46.661 - type: mrr_at_100 value: 47.47 - type: mrr_at_1000 value: 47.496 - type: mrr_at_3 value: 43.187999999999995 - type: mrr_at_5 value: 45.330999999999996 - type: ndcg_at_1 value: 33.227000000000004 - type: ndcg_at_10 value: 51.589 - type: ndcg_at_100 value: 56.043 - type: ndcg_at_1000 value: 56.937000000000005 - type: ndcg_at_3 value: 43.751 - type: ndcg_at_5 value: 47.937000000000005 - type: precision_at_1 value: 33.227000000000004 - type: precision_at_10 value: 8.556999999999999 - type: precision_at_100 value: 1.103 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 19.921 - type: precision_at_5 value: 14.396999999999998 - type: recall_at_1 value: 29.524 - type: recall_at_10 value: 71.615 - type: recall_at_100 value: 91.056 - type: recall_at_1000 value: 97.72800000000001 - type: recall_at_3 value: 51.451 - type: recall_at_5 value: 61.119 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 69.596 - type: map_at_10 value: 83.281 - type: map_at_100 value: 83.952 - type: map_at_1000 value: 83.97200000000001 - type: map_at_3 value: 80.315 - type: map_at_5 value: 82.223 - type: mrr_at_1 value: 80.17 - type: mrr_at_10 value: 86.522 - type: mrr_at_100 value: 86.644 - type: mrr_at_1000 value: 86.64500000000001 - type: mrr_at_3 value: 85.438 - type: mrr_at_5 value: 86.21799999999999 - type: ndcg_at_1 value: 80.19 - type: ndcg_at_10 value: 87.19 - type: ndcg_at_100 value: 88.567 - type: ndcg_at_1000 value: 88.70400000000001 - type: ndcg_at_3 value: 84.17999999999999 - type: ndcg_at_5 value: 85.931 - type: precision_at_1 value: 80.19 - type: precision_at_10 value: 13.209000000000001 - type: precision_at_100 value: 1.518 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 36.717 - type: precision_at_5 value: 24.248 - type: recall_at_1 value: 69.596 - type: recall_at_10 value: 94.533 - type: recall_at_100 value: 99.322 - type: recall_at_1000 value: 99.965 - type: recall_at_3 value: 85.911 - type: recall_at_5 value: 90.809 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 49.27650627571912 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 57.08550946534183 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.568 - type: map_at_10 value: 10.862 - type: map_at_100 value: 12.757 - type: map_at_1000 value: 13.031 - type: map_at_3 value: 7.960000000000001 - type: map_at_5 value: 9.337 - type: mrr_at_1 value: 22.5 - type: mrr_at_10 value: 32.6 - type: mrr_at_100 value: 33.603 - type: mrr_at_1000 value: 33.672000000000004 - type: mrr_at_3 value: 29.299999999999997 - type: mrr_at_5 value: 31.25 - type: ndcg_at_1 value: 22.5 - type: ndcg_at_10 value: 18.605 - type: ndcg_at_100 value: 26.029999999999998 - type: ndcg_at_1000 value: 31.256 - type: ndcg_at_3 value: 17.873 - type: ndcg_at_5 value: 15.511 - type: precision_at_1 value: 22.5 - type: precision_at_10 value: 9.58 - type: precision_at_100 value: 2.033 - type: precision_at_1000 value: 0.33 - type: precision_at_3 value: 16.633 - type: precision_at_5 value: 13.54 - type: recall_at_1 value: 4.568 - type: recall_at_10 value: 19.402 - type: recall_at_100 value: 41.277 - type: recall_at_1000 value: 66.963 - type: recall_at_3 value: 10.112 - type: recall_at_5 value: 13.712 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.31992291680787 - type: cos_sim_spearman value: 76.7212346922664 - type: euclidean_pearson value: 80.42189271706478 - type: euclidean_spearman value: 76.7212342532493 - type: manhattan_pearson value: 80.33171093031578 - type: manhattan_spearman value: 76.63192883074694 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 83.16654278886763 - type: cos_sim_spearman value: 73.66390263429565 - type: euclidean_pearson value: 79.7485360086639 - type: euclidean_spearman value: 73.66389870373436 - type: manhattan_pearson value: 79.73652237443706 - type: manhattan_spearman value: 73.65296117151647 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 82.40389689929246 - type: cos_sim_spearman value: 83.29727595993955 - type: euclidean_pearson value: 82.23970587854079 - type: euclidean_spearman value: 83.29727595993955 - type: manhattan_pearson value: 82.18823600831897 - type: manhattan_spearman value: 83.20746192209594 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 81.73505246913413 - type: cos_sim_spearman value: 79.1686548248754 - type: euclidean_pearson value: 80.48889135993412 - type: euclidean_spearman value: 79.16864112930354 - type: manhattan_pearson value: 80.40720651057302 - type: manhattan_spearman value: 79.0640155089286 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.3953512879065 - type: cos_sim_spearman value: 87.29947322714338 - type: euclidean_pearson value: 86.59759438529645 - type: euclidean_spearman value: 87.29947511092824 - type: manhattan_pearson value: 86.52097806169155 - type: manhattan_spearman value: 87.22987242146534 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 82.48565753792056 - type: cos_sim_spearman value: 83.6049720319893 - type: euclidean_pearson value: 82.56452023172913 - type: euclidean_spearman value: 83.60490168191697 - type: manhattan_pearson value: 82.58079941137872 - type: manhattan_spearman value: 83.60975807374051 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.18239976618212 - type: cos_sim_spearman value: 88.23061724730616 - type: euclidean_pearson value: 87.78482472776658 - type: euclidean_spearman value: 88.23061724730616 - type: manhattan_pearson value: 87.75059641730239 - type: manhattan_spearman value: 88.22527413524622 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 63.42816418706765 - type: cos_sim_spearman value: 63.4569864520124 - type: euclidean_pearson value: 64.35405409953853 - type: euclidean_spearman value: 63.4569864520124 - type: manhattan_pearson value: 63.96649236073056 - type: manhattan_spearman value: 63.01448583722708 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 83.41659638047614 - type: cos_sim_spearman value: 84.03893866106175 - type: euclidean_pearson value: 84.2251203953798 - type: euclidean_spearman value: 84.03893866106175 - type: manhattan_pearson value: 84.22733643205514 - type: manhattan_spearman value: 84.06504411263612 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 79.75608022582414 - type: mrr value: 94.0947732369301 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 50.161 - type: map_at_10 value: 59.458999999999996 - type: map_at_100 value: 60.156 - type: map_at_1000 value: 60.194 - type: map_at_3 value: 56.45400000000001 - type: map_at_5 value: 58.165 - type: mrr_at_1 value: 53.333 - type: mrr_at_10 value: 61.050000000000004 - type: mrr_at_100 value: 61.586 - type: mrr_at_1000 value: 61.624 - type: mrr_at_3 value: 58.889 - type: mrr_at_5 value: 60.122 - type: ndcg_at_1 value: 53.333 - type: ndcg_at_10 value: 63.888999999999996 - type: ndcg_at_100 value: 66.963 - type: ndcg_at_1000 value: 68.062 - type: ndcg_at_3 value: 59.01 - type: ndcg_at_5 value: 61.373999999999995 - type: precision_at_1 value: 53.333 - type: precision_at_10 value: 8.633000000000001 - type: precision_at_100 value: 1.027 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 23.111 - type: precision_at_5 value: 15.467 - type: recall_at_1 value: 50.161 - type: recall_at_10 value: 75.922 - type: recall_at_100 value: 90 - type: recall_at_1000 value: 98.667 - type: recall_at_3 value: 62.90599999999999 - type: recall_at_5 value: 68.828 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.81188118811882 - type: cos_sim_ap value: 95.11619225962413 - type: cos_sim_f1 value: 90.35840484603736 - type: cos_sim_precision value: 91.23343527013252 - type: cos_sim_recall value: 89.5 - type: dot_accuracy value: 99.81188118811882 - type: dot_ap value: 95.11619225962413 - type: dot_f1 value: 90.35840484603736 - type: dot_precision value: 91.23343527013252 - type: dot_recall value: 89.5 - type: euclidean_accuracy value: 99.81188118811882 - type: euclidean_ap value: 95.11619225962413 - type: euclidean_f1 value: 90.35840484603736 - type: euclidean_precision value: 91.23343527013252 - type: euclidean_recall value: 89.5 - type: manhattan_accuracy value: 99.80891089108911 - type: manhattan_ap value: 95.07294266220966 - type: manhattan_f1 value: 90.21794221996959 - type: manhattan_precision value: 91.46968139773895 - type: manhattan_recall value: 89 - type: max_accuracy value: 99.81188118811882 - type: max_ap value: 95.11619225962413 - type: max_f1 value: 90.35840484603736 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 55.3481874105239 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 34.421291695525 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.98746633276634 - type: mrr value: 50.63143249724133 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.009961979844036 - type: cos_sim_spearman value: 30.558416108881044 - type: dot_pearson value: 31.009964941134253 - type: dot_spearman value: 30.545760761761393 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.207 - type: map_at_10 value: 1.6 - type: map_at_100 value: 8.594 - type: map_at_1000 value: 20.213 - type: map_at_3 value: 0.585 - type: map_at_5 value: 0.9039999999999999 - type: mrr_at_1 value: 78 - type: mrr_at_10 value: 87.4 - type: mrr_at_100 value: 87.4 - type: mrr_at_1000 value: 87.4 - type: mrr_at_3 value: 86.667 - type: mrr_at_5 value: 87.06700000000001 - type: ndcg_at_1 value: 73 - type: ndcg_at_10 value: 65.18 - type: ndcg_at_100 value: 49.631 - type: ndcg_at_1000 value: 43.498999999999995 - type: ndcg_at_3 value: 71.83800000000001 - type: ndcg_at_5 value: 69.271 - type: precision_at_1 value: 78 - type: precision_at_10 value: 69.19999999999999 - type: precision_at_100 value: 50.980000000000004 - type: precision_at_1000 value: 19.426 - type: precision_at_3 value: 77.333 - type: precision_at_5 value: 74 - type: recall_at_1 value: 0.207 - type: recall_at_10 value: 1.822 - type: recall_at_100 value: 11.849 - type: recall_at_1000 value: 40.492 - type: recall_at_3 value: 0.622 - type: recall_at_5 value: 0.9809999999999999 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.001 - type: map_at_10 value: 10.376000000000001 - type: map_at_100 value: 16.936999999999998 - type: map_at_1000 value: 18.615000000000002 - type: map_at_3 value: 5.335999999999999 - type: map_at_5 value: 7.374 - type: mrr_at_1 value: 20.408 - type: mrr_at_10 value: 38.29 - type: mrr_at_100 value: 39.33 - type: mrr_at_1000 value: 39.347 - type: mrr_at_3 value: 32.993 - type: mrr_at_5 value: 36.973 - type: ndcg_at_1 value: 17.347 - type: ndcg_at_10 value: 23.515 - type: ndcg_at_100 value: 37.457 - type: ndcg_at_1000 value: 49.439 - type: ndcg_at_3 value: 22.762999999999998 - type: ndcg_at_5 value: 22.622 - type: precision_at_1 value: 20.408 - type: precision_at_10 value: 22.448999999999998 - type: precision_at_100 value: 8.184 - type: precision_at_1000 value: 1.608 - type: precision_at_3 value: 25.85 - type: precision_at_5 value: 25.306 - type: recall_at_1 value: 2.001 - type: recall_at_10 value: 17.422 - type: recall_at_100 value: 51.532999999999994 - type: recall_at_1000 value: 87.466 - type: recall_at_3 value: 6.861000000000001 - type: recall_at_5 value: 10.502 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.54419999999999 - type: ap value: 14.372170450843907 - type: f1 value: 54.94420257390529 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 59.402942840973395 - type: f1 value: 59.4166538875571 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 41.569064336457906 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 85.31322644096085 - type: cos_sim_ap value: 72.14518894837381 - type: cos_sim_f1 value: 66.67489813557229 - type: cos_sim_precision value: 62.65954977953121 - type: cos_sim_recall value: 71.2401055408971 - type: dot_accuracy value: 85.31322644096085 - type: dot_ap value: 72.14521480685293 - type: dot_f1 value: 66.67489813557229 - type: dot_precision value: 62.65954977953121 - type: dot_recall value: 71.2401055408971 - type: euclidean_accuracy value: 85.31322644096085 - type: euclidean_ap value: 72.14520820485349 - type: euclidean_f1 value: 66.67489813557229 - type: euclidean_precision value: 62.65954977953121 - type: euclidean_recall value: 71.2401055408971 - type: manhattan_accuracy value: 85.21785778148656 - type: manhattan_ap value: 72.01177147657364 - type: manhattan_f1 value: 66.62594673833374 - type: manhattan_precision value: 62.0336669699727 - type: manhattan_recall value: 71.95250659630607 - type: max_accuracy value: 85.31322644096085 - type: max_ap value: 72.14521480685293 - type: max_f1 value: 66.67489813557229 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.12756626693057 - type: cos_sim_ap value: 86.05430786440826 - type: cos_sim_f1 value: 78.27759692216631 - type: cos_sim_precision value: 75.33466248931929 - type: cos_sim_recall value: 81.45980905451185 - type: dot_accuracy value: 89.12950673341872 - type: dot_ap value: 86.05431161145492 - type: dot_f1 value: 78.27759692216631 - type: dot_precision value: 75.33466248931929 - type: dot_recall value: 81.45980905451185 - type: euclidean_accuracy value: 89.12756626693057 - type: euclidean_ap value: 86.05431303247397 - type: euclidean_f1 value: 78.27759692216631 - type: euclidean_precision value: 75.33466248931929 - type: euclidean_recall value: 81.45980905451185 - type: manhattan_accuracy value: 89.04994760740482 - type: manhattan_ap value: 86.00860610892074 - type: manhattan_f1 value: 78.1846776005392 - type: manhattan_precision value: 76.10438839480975 - type: manhattan_recall value: 80.3818909762858 - type: max_accuracy value: 89.12950673341872 - type: max_ap value: 86.05431303247397 - type: max_f1 value: 78.27759692216631 --- ***See Disclaimer below*** ---- # A Teradata Vantage compatible Embeddings Model # jinaai/jina-embeddings-v2-small-en ## Overview of this Model An Embedding Model which maps text (sentence/ paragraphs) into a vector. The [jinaai/jina-embeddings-v2-small-en](https://huggingface.co/jinaai/jina-embeddings-v2-small-en) model well known for its effectiveness in capturing semantic meanings in text data. It's a state-of-the-art model trained on a large corpus, capable of generating high-quality text embeddings. - 32.69M params (Sizes in ONNX format - "fp32": 123.8MB, "int8": 31.14MB, "uint8": 31.14MB) - 8192 maximum input tokens - 512 dimensions of output vector - Licence: apache-2.0. The released models can be used for commercial purposes free of charge. - Reference to Original Model: https://huggingface.co/jinaai/jina-embeddings-v2-small-en ## Quickstart: Deploying this Model in Teradata Vantage We have pre-converted the model into the ONNX format compatible with BYOM 6.0, eliminating the need for manual conversion. **Note:** Ensure you have access to a Teradata Database with BYOM 6.0 installed. To get started, clone the pre-converted model directly from the Teradata HuggingFace repository. ```python import teradataml as tdml import getpass from huggingface_hub import hf_hub_download model_name = "jina-embeddings-v2-small-en" number_dimensions_output = 512 model_file_name = "model.onnx" # Step 1: Download Model from Teradata HuggingFace Page hf_hub_download(repo_id=f"Teradata/{model_name}", filename=f"onnx/{model_file_name}", local_dir="./") hf_hub_download(repo_id=f"Teradata/{model_name}", filename=f"tokenizer.json", local_dir="./") # Step 2: Create Connection to Vantage tdml.create_context(host = input('enter your hostname'), username=input('enter your username'), password = getpass.getpass("enter your password")) # Step 3: Load Models into Vantage # a) Embedding model tdml.save_byom(model_id = model_name, # must be unique in the models table model_file = f"onnx/{model_file_name}", table_name = 'embeddings_models' ) # b) Tokenizer tdml.save_byom(model_id = model_name, # must be unique in the models table model_file = 'tokenizer.json', table_name = 'embeddings_tokenizers') # Step 4: Test ONNXEmbeddings Function # Note that ONNXEmbeddings expects the 'payload' column to be 'txt'. # If it has got a different name, just rename it in a subquery/CTE. input_table = "emails.emails" embeddings_query = f""" SELECT * from mldb.ONNXEmbeddings( on {input_table} as InputTable on (select * from embeddings_models where model_id = '{model_name}') as ModelTable DIMENSION on (select model as tokenizer from embeddings_tokenizers where model_id = '{model_name}') as TokenizerTable DIMENSION using Accumulate('id', 'txt') ModelOutputTensor('sentence_embedding') EnableMemoryCheck('false') OutputFormat('FLOAT32({number_dimensions_output})') OverwriteCachedModel('true') ) a """ DF_embeddings = tdml.DataFrame.from_query(embeddings_query) DF_embeddings ``` ## What Can I Do with the Embeddings? Teradata Vantage includes pre-built in-database functions to process embeddings further. Explore the following examples: - **Semantic Clustering with TD_KMeans:** [Semantic Clustering Python Notebook](https://github.com/Teradata/jupyter-demos/blob/main/UseCases/Language_Models_InVantage/Semantic_Clustering_Python.ipynb) - **Semantic Distance with TD_VectorDistance:** [Semantic Similarity Python Notebook](https://github.com/Teradata/jupyter-demos/blob/main/UseCases/Language_Models_InVantage/Semantic_Similarity_Python.ipynb) - **RAG-Based Application with TD_VectorDistance:** [RAG and Bedrock Query PDF Notebook](https://github.com/Teradata/jupyter-demos/blob/main/UseCases/Language_Models_InVantage/RAG_and_Bedrock_QueryPDF.ipynb) ## Deep Dive into Model Conversion to ONNX **The steps below outline how we converted the open-source Hugging Face model into an ONNX file compatible with the in-database ONNXEmbeddings function.** You do not need to perform these steps—they are provided solely for documentation and transparency. However, they may be helpful if you wish to convert another model to the required format. ### Part 1. Importing and Converting Model using optimum We start by importing the pre-trained [jinaai/jina-embeddings-v2-small-en](https://huggingface.co/jinaai/jina-embeddings-v2-small-en) model from Hugging Face. We are downloading the ONNX files from the repository prepared by the model authors. After downloading, we are fixing the opset in the ONNX file for compatibility with ONNX runtime used in Teradata Vantage Also we adding the man pooling and normalization layers to the ONNX file We are generating ONNX files for multiple different precisions: fp32, int8, uint8 You can find the detailed conversion steps in the file [convert.py](./convert.py) ### Part 2. Running the model in Python with onnxruntime & compare results Once the fixes are applied, we proceed to test the correctness of the ONNX model by calculating cosine similarity between two texts using native SentenceTransformers and ONNX runtime, comparing the results. If the results are identical, it confirms that the ONNX model gives the same result as the native models, validating its correctness and suitability for further use in the database. ```python import onnxruntime as rt from sentence_transformers.util import cos_sim from sentence_transformers import SentenceTransformer import transformers sentences_1 = 'How is the weather today?' sentences_2 = 'What is the current weather like today?' # Calculate ONNX result tokenizer = transformers.AutoTokenizer.from_pretrained("jinaai/jina-embeddings-v2-small-en") predef_sess = rt.InferenceSession("onnx/model.onnx") enc1 = tokenizer(sentences_1) embeddings_1_onnx = predef_sess.run(None, {"input_ids": [enc1.input_ids], "attention_mask": [enc1.attention_mask]}) enc2 = tokenizer(sentences_2) embeddings_2_onnx = predef_sess.run(None, {"input_ids": [enc2.input_ids], "attention_mask": [enc2.attention_mask]}) # Calculate embeddings with SentenceTransformer model = SentenceTransformer(model_id, trust_remote_code=True) embeddings_1_sentence_transformer = model.encode(sentences_1, normalize_embeddings=True, trust_remote_code=True) embeddings_2_sentence_transformer = model.encode(sentences_2, normalize_embeddings=True, trust_remote_code=True) # Compare results print("Cosine similiarity for embeddings calculated with ONNX:" + str(cos_sim(embeddings_1_onnx[1][0], embeddings_2_onnx[1][0]))) print("Cosine similiarity for embeddings calculated with SentenceTransformer:" + str(cos_sim(embeddings_1_sentence_transformer, embeddings_2_sentence_transformer))) ``` You can find the detailed ONNX vs. SentenceTransformer result comparison steps in the file [test_local.py](./test_local.py) ----- DISCLAIMER: The content herein (“Content”) is provided “AS IS” and is not covered by any Teradata Operations, Inc. and its affiliates (“Teradata”) agreements. Its listing here does not constitute certification or endorsement by Teradata. To the extent any of the Content contains or is related to any artificial intelligence (“AI”) or other language learning models (“Models”) that interoperate with the products and services of Teradata, by accessing, bringing, deploying or using such Models, you acknowledge and agree that you are solely responsible for ensuring compliance with all applicable laws, regulations, and restrictions governing the use, deployment, and distribution of AI technologies. This includes, but is not limited to, AI Diffusion Rules, European Union AI Act, AI-related laws and regulations, privacy laws, export controls, and financial or sector-specific regulations. While Teradata may provide support, guidance, or assistance in the deployment or implementation of Models to interoperate with Teradata’s products and/or services, you remain fully responsible for ensuring that your Models, data, and applications comply with all relevant legal and regulatory obligations. Our assistance does not constitute legal or regulatory approval, and Teradata disclaims any liability arising from non-compliance with applicable laws. You must determine the suitability of the Models for any purpose. Given the probabilistic nature of machine learning and modeling, the use of the Models may in some situations result in incorrect output that does not accurately reflect the action generated. You should evaluate the accuracy of any output as appropriate for your use case, including by using human review of the output.
[ "SEMANTIC_SIMILARITY", "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
HIT-TMG/KaLM-embedding-multilingual-max-instruct-v1
HIT-TMG
null
[ "mteb", "model-index", "region:us" ]
1,732
1,736
0
9
--- tags: - mteb model-index: - name: KaLM-Embedding results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en-ext) type: mteb/amazon_counterfactual config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 94.89505247376313 - type: ap value: 64.78774888517734 - type: ap_weighted value: 64.78774888517734 - type: f1 value: 88.11460157320857 - type: f1_weighted value: 95.22074397272716 - type: main_score value: 94.89505247376313 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 93.71641791044777 - type: ap value: 75.08750683510948 - type: ap_weighted value: 75.08750683510948 - type: f1 value: 90.83321356354264 - type: f1_weighted value: 93.96359461200854 - type: main_score value: 93.71641791044777 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 97.2393 - type: ap value: 95.64635258594004 - type: ap_weighted value: 95.64635258594004 - type: f1 value: 97.23897196428621 - type: f1_weighted value: 97.23897196428621 - type: main_score value: 97.2393 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 63.242 - type: f1 value: 61.61382228477497 - type: f1_weighted value: 61.61382228477497 - type: main_score value: 63.242 - task: type: Retrieval dataset: name: MTEB ArguAna type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: main_score value: 57.001000000000005 - type: map_at_1 value: 31.579 - type: map_at_10 value: 47.608 - type: map_at_100 value: 48.355 - type: map_at_1000 value: 48.358000000000004 - type: map_at_20 value: 48.251 - type: map_at_3 value: 42.141 - type: map_at_5 value: 45.107 - type: mrr_at_1 value: 32.005689900426745 - type: mrr_at_10 value: 47.78630133893301 - type: mrr_at_100 value: 48.526996572187365 - type: mrr_at_1000 value: 48.52998293331554 - type: mrr_at_20 value: 48.42307540999908 - type: mrr_at_3 value: 42.33048838311999 - type: mrr_at_5 value: 45.29279279279284 - type: nauc_map_at_1000_diff1 value: 3.975707828444012 - type: nauc_map_at_1000_max value: -8.508536488810098 - type: nauc_map_at_1000_std value: -12.394531033411965 - type: nauc_map_at_100_diff1 value: 3.980642580317845 - type: nauc_map_at_100_max value: -8.50321527222367 - type: nauc_map_at_100_std value: -12.394985565396297 - type: nauc_map_at_10_diff1 value: 3.893567849541356 - type: nauc_map_at_10_max value: -8.271980181442737 - type: nauc_map_at_10_std value: -12.312905395474713 - type: nauc_map_at_1_diff1 value: 5.01085575286572 - type: nauc_map_at_1_max value: -11.363672050604157 - type: nauc_map_at_1_std value: -11.995919412735057 - type: nauc_map_at_20_diff1 value: 3.9346708746144134 - type: nauc_map_at_20_max value: -8.439896611546802 - type: nauc_map_at_20_std value: -12.361203788668389 - type: nauc_map_at_3_diff1 value: 3.743269266512459 - type: nauc_map_at_3_max value: -8.22680712736569 - type: nauc_map_at_3_std value: -12.7911586403021 - type: nauc_map_at_5_diff1 value: 4.210565900704311 - type: nauc_map_at_5_max value: -8.300679250967558 - type: nauc_map_at_5_std value: -12.297010083783297 - type: nauc_mrr_at_1000_diff1 value: 2.6890165178859644 - type: nauc_mrr_at_1000_max value: -8.908073671643209 - type: nauc_mrr_at_1000_std value: -12.28522362969723 - type: nauc_mrr_at_100_diff1 value: 2.694071328592611 - type: nauc_mrr_at_100_max value: -8.90272031925046 - type: nauc_mrr_at_100_std value: -12.285688313011413 - type: nauc_mrr_at_10_diff1 value: 2.6202946436162757 - type: nauc_mrr_at_10_max value: -8.661484408173118 - type: nauc_mrr_at_10_std value: -12.21587817885135 - type: nauc_mrr_at_1_diff1 value: 3.7776170605790957 - type: nauc_mrr_at_1_max value: -11.09253154366557 - type: nauc_mrr_at_1_std value: -11.785521817968217 - type: nauc_mrr_at_20_diff1 value: 2.6521648248215843 - type: nauc_mrr_at_20_max value: -8.838157718530379 - type: nauc_mrr_at_20_std value: -12.252360108609953 - type: nauc_mrr_at_3_diff1 value: 2.4731200282807007 - type: nauc_mrr_at_3_max value: -8.666767296113468 - type: nauc_mrr_at_3_std value: -12.677898342896492 - type: nauc_mrr_at_5_diff1 value: 3.014858760125055 - type: nauc_mrr_at_5_max value: -8.681386979182577 - type: nauc_mrr_at_5_std value: -12.152778387690352 - type: nauc_ndcg_at_1000_diff1 value: 4.02700634185317 - type: nauc_ndcg_at_1000_max value: -7.90622869574075 - type: nauc_ndcg_at_1000_std value: -12.052240010016689 - type: nauc_ndcg_at_100_diff1 value: 4.1586699446096365 - type: nauc_ndcg_at_100_max value: -7.770944362546775 - type: nauc_ndcg_at_100_std value: -12.029866779235611 - type: nauc_ndcg_at_10_diff1 value: 3.6889869038334426 - type: nauc_ndcg_at_10_max value: -6.663423609407744 - type: nauc_ndcg_at_10_std value: -11.709445993744765 - type: nauc_ndcg_at_1_diff1 value: 5.01085575286572 - type: nauc_ndcg_at_1_max value: -11.363672050604157 - type: nauc_ndcg_at_1_std value: -11.995919412735057 - type: nauc_ndcg_at_20_diff1 value: 3.906327916792264 - type: nauc_ndcg_at_20_max value: -7.2453986746925825 - type: nauc_ndcg_at_20_std value: -11.850535874582807 - type: nauc_ndcg_at_3_diff1 value: 3.438181037582238 - type: nauc_ndcg_at_3_max value: -7.114704612909642 - type: nauc_ndcg_at_3_std value: -12.81020014782788 - type: nauc_ndcg_at_5_diff1 value: 4.347648183437709 - type: nauc_ndcg_at_5_max value: -7.059450574661502 - type: nauc_ndcg_at_5_std value: -11.841954074407058 - type: nauc_precision_at_1000_diff1 value: 30.752404058283062 - type: nauc_precision_at_1000_max value: 25.067465235993 - type: nauc_precision_at_1000_std value: 73.43547834923922 - type: nauc_precision_at_100_diff1 value: 46.20282993499485 - type: nauc_precision_at_100_max value: 37.42658285150555 - type: nauc_precision_at_100_std value: 34.45050238262001 - type: nauc_precision_at_10_diff1 value: 2.4893089137078603 - type: nauc_precision_at_10_max value: 5.875977804770932 - type: nauc_precision_at_10_std value: -6.353442736911137 - type: nauc_precision_at_1_diff1 value: 5.01085575286572 - type: nauc_precision_at_1_max value: -11.363672050604157 - type: nauc_precision_at_1_std value: -11.995919412735057 - type: nauc_precision_at_20_diff1 value: 5.913356944621958 - type: nauc_precision_at_20_max value: 17.59293075220789 - type: nauc_precision_at_20_std value: 1.3017849029612656 - type: nauc_precision_at_3_diff1 value: 2.5562621001691115 - type: nauc_precision_at_3_max value: -3.743851981046284 - type: nauc_precision_at_3_std value: -12.789902696420377 - type: nauc_precision_at_5_diff1 value: 5.091135127202832 - type: nauc_precision_at_5_max value: -2.4978007000117155 - type: nauc_precision_at_5_std value: -9.912417664884615 - type: nauc_recall_at_1000_diff1 value: 30.75240405828252 - type: nauc_recall_at_1000_max value: 25.067465235989367 - type: nauc_recall_at_1000_std value: 73.43547834923713 - type: nauc_recall_at_100_diff1 value: 46.20282993499401 - type: nauc_recall_at_100_max value: 37.426582851507746 - type: nauc_recall_at_100_std value: 34.45050238261915 - type: nauc_recall_at_10_diff1 value: 2.4893089137075974 - type: nauc_recall_at_10_max value: 5.875977804770882 - type: nauc_recall_at_10_std value: -6.35344273691122 - type: nauc_recall_at_1_diff1 value: 5.01085575286572 - type: nauc_recall_at_1_max value: -11.363672050604157 - type: nauc_recall_at_1_std value: -11.995919412735057 - type: nauc_recall_at_20_diff1 value: 5.9133569446220005 - type: nauc_recall_at_20_max value: 17.592930752208254 - type: nauc_recall_at_20_std value: 1.3017849029618116 - type: nauc_recall_at_3_diff1 value: 2.5562621001691705 - type: nauc_recall_at_3_max value: -3.7438519810462445 - type: nauc_recall_at_3_std value: -12.789902696420356 - type: nauc_recall_at_5_diff1 value: 5.09113512720278 - type: nauc_recall_at_5_max value: -2.4978007000117453 - type: nauc_recall_at_5_std value: -9.91241766488461 - type: ndcg_at_1 value: 31.579 - type: ndcg_at_10 value: 57.001000000000005 - type: ndcg_at_100 value: 59.891000000000005 - type: ndcg_at_1000 value: 59.95 - type: ndcg_at_20 value: 59.23500000000001 - type: ndcg_at_3 value: 45.635 - type: ndcg_at_5 value: 50.988 - type: precision_at_1 value: 31.579 - type: precision_at_10 value: 8.727 - type: precision_at_100 value: 0.992 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.7940000000000005 - type: precision_at_3 value: 18.587 - type: precision_at_5 value: 13.755 - type: recall_at_1 value: 31.579 - type: recall_at_10 value: 87.26899999999999 - type: recall_at_100 value: 99.21799999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_20 value: 95.875 - type: recall_at_3 value: 55.761 - type: recall_at_5 value: 68.777 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: main_score value: 52.97653625462488 - type: v_measure value: 52.97653625462488 - type: v_measure_std value: 14.008984673132934 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: main_score value: 48.48583330645067 - type: v_measure value: 48.48583330645067 - type: v_measure_std value: 14.267964156859984 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: main_score value: 62.24253504089556 - type: map value: 62.24253504089556 - type: mrr value: 76.33128874818625 - type: nAUC_map_diff1 value: 4.176880288432435 - type: nAUC_map_max value: 11.819450923749487 - type: nAUC_map_std value: 17.4613469587158 - type: nAUC_mrr_diff1 value: 12.732722534858695 - type: nAUC_mrr_max value: 18.43070743488316 - type: nAUC_mrr_std value: 20.000499971038455 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cosine_pearson value: 90.04909665335794 - type: cosine_spearman value: 87.42307633144942 - type: euclidean_pearson value: 89.2025951864775 - type: euclidean_spearman value: 87.42307633144942 - type: main_score value: 87.42307633144942 - type: manhattan_pearson value: 89.21547857295786 - type: manhattan_spearman value: 87.42548602491014 - type: pearson value: 90.04909665335794 - type: spearman value: 87.42307633144942 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 90.0422077922078 - type: f1 value: 89.74361913624858 - type: f1_weighted value: 89.74361913624858 - type: main_score value: 90.0422077922078 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: main_score value: 46.14362661035026 - type: v_measure value: 46.14362661035026 - type: v_measure_std value: 0.7301809991373645 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: main_score value: 43.15187727673325 - type: v_measure value: 43.15187727673325 - type: v_measure_std value: 0.903171615579515 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: mteb/cqadupstack-android config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: main_score value: 50.55 - type: map_at_1 value: 30.524 - type: map_at_10 value: 43.303000000000004 - type: map_at_100 value: 44.884 - type: map_at_1000 value: 44.988 - type: map_at_20 value: 44.25 - type: map_at_3 value: 39.223 - type: map_at_5 value: 41.526 - type: mrr_at_1 value: 37.33905579399141 - type: mrr_at_10 value: 49.18097281831182 - type: mrr_at_100 value: 49.875998410207785 - type: mrr_at_1000 value: 49.90193241244399 - type: mrr_at_20 value: 49.63961997147391 - type: mrr_at_3 value: 45.85121602288983 - type: mrr_at_5 value: 47.91845493562227 - type: nauc_map_at_1000_diff1 value: 50.87835448616023 - type: nauc_map_at_1000_max value: 22.967590148656136 - type: nauc_map_at_1000_std value: -11.074964378865882 - type: nauc_map_at_100_diff1 value: 50.836191470106094 - type: nauc_map_at_100_max value: 23.004317727993794 - type: nauc_map_at_100_std value: -11.03683263151284 - type: nauc_map_at_10_diff1 value: 51.029979478127 - type: nauc_map_at_10_max value: 22.403181482973423 - type: nauc_map_at_10_std value: -12.08842665832672 - type: nauc_map_at_1_diff1 value: 56.685765978079104 - type: nauc_map_at_1_max value: 20.18890217442875 - type: nauc_map_at_1_std value: -16.18134719380821 - type: nauc_map_at_20_diff1 value: 50.93444469847115 - type: nauc_map_at_20_max value: 22.75581584688478 - type: nauc_map_at_20_std value: -11.388960684671968 - type: nauc_map_at_3_diff1 value: 52.51826923816856 - type: nauc_map_at_3_max value: 22.01783468494555 - type: nauc_map_at_3_std value: -12.60190918367979 - type: nauc_map_at_5_diff1 value: 51.59135231216494 - type: nauc_map_at_5_max value: 22.20977273899385 - type: nauc_map_at_5_std value: -12.118393799591253 - type: nauc_mrr_at_1000_diff1 value: 49.62171163269558 - type: nauc_mrr_at_1000_max value: 21.923043748600087 - type: nauc_mrr_at_1000_std value: -9.680926976914957 - type: nauc_mrr_at_100_diff1 value: 49.59811053509576 - type: nauc_mrr_at_100_max value: 21.913454668559275 - type: nauc_mrr_at_100_std value: -9.672048537630879 - type: nauc_mrr_at_10_diff1 value: 49.58359769106626 - type: nauc_mrr_at_10_max value: 21.73110258783102 - type: nauc_mrr_at_10_std value: -10.130675415222257 - type: nauc_mrr_at_1_diff1 value: 54.309459671423774 - type: nauc_mrr_at_1_max value: 21.563830920135707 - type: nauc_mrr_at_1_std value: -13.421862266004053 - type: nauc_mrr_at_20_diff1 value: 49.59553148953588 - type: nauc_mrr_at_20_max value: 21.801105215804707 - type: nauc_mrr_at_20_std value: -9.719343039469418 - type: nauc_mrr_at_3_diff1 value: 50.22906571400681 - type: nauc_mrr_at_3_max value: 22.414076676024557 - type: nauc_mrr_at_3_std value: -10.592386840515493 - type: nauc_mrr_at_5_diff1 value: 49.94769223296349 - type: nauc_mrr_at_5_max value: 22.156012648090666 - type: nauc_mrr_at_5_std value: -9.967460566741972 - type: nauc_ndcg_at_1000_diff1 value: 48.44084970097002 - type: nauc_ndcg_at_1000_max value: 23.27939730927873 - type: nauc_ndcg_at_1000_std value: -8.3169007128838 - type: nauc_ndcg_at_100_diff1 value: 47.61783008151571 - type: nauc_ndcg_at_100_max value: 23.363658909885288 - type: nauc_ndcg_at_100_std value: -7.542314697294894 - type: nauc_ndcg_at_10_diff1 value: 48.40177354418614 - type: nauc_ndcg_at_10_max value: 22.04032784529905 - type: nauc_ndcg_at_10_std value: -10.521057264382854 - type: nauc_ndcg_at_1_diff1 value: 54.309459671423774 - type: nauc_ndcg_at_1_max value: 21.563830920135707 - type: nauc_ndcg_at_1_std value: -13.421862266004053 - type: nauc_ndcg_at_20_diff1 value: 48.05509104678278 - type: nauc_ndcg_at_20_max value: 22.23403669779171 - type: nauc_ndcg_at_20_std value: -8.884501785863877 - type: nauc_ndcg_at_3_diff1 value: 50.63893148031915 - type: nauc_ndcg_at_3_max value: 22.985487564912173 - type: nauc_ndcg_at_3_std value: -9.961969212617811 - type: nauc_ndcg_at_5_diff1 value: 49.840718181887205 - type: nauc_ndcg_at_5_max value: 22.62266067823627 - type: nauc_ndcg_at_5_std value: -9.753465469827125 - type: nauc_precision_at_1000_diff1 value: -9.642177815038657 - type: nauc_precision_at_1000_max value: -7.011848867218704 - type: nauc_precision_at_1000_std value: -1.0554008965229682 - type: nauc_precision_at_100_diff1 value: -10.82958078076645 - type: nauc_precision_at_100_max value: 3.7360468761626735 - type: nauc_precision_at_100_std value: 10.213629797601754 - type: nauc_precision_at_10_diff1 value: 7.228619585448093 - type: nauc_precision_at_10_max value: 15.505787188436223 - type: nauc_precision_at_10_std value: 3.453167330810958 - type: nauc_precision_at_1_diff1 value: 54.309459671423774 - type: nauc_precision_at_1_max value: 21.563830920135707 - type: nauc_precision_at_1_std value: -13.421862266004053 - type: nauc_precision_at_20_diff1 value: -1.7520984766129104 - type: nauc_precision_at_20_max value: 11.062834415673692 - type: nauc_precision_at_20_std value: 9.110362167451381 - type: nauc_precision_at_3_diff1 value: 30.810349807525146 - type: nauc_precision_at_3_max value: 21.454275520082614 - type: nauc_precision_at_3_std value: -3.361922508754609 - type: nauc_precision_at_5_diff1 value: 20.565139612074127 - type: nauc_precision_at_5_max value: 20.018698697322932 - type: nauc_precision_at_5_std value: 0.6090753970348068 - type: nauc_recall_at_1000_diff1 value: 23.305610956929122 - type: nauc_recall_at_1000_max value: 47.93957263377874 - type: nauc_recall_at_1000_std value: 56.51460933889256 - type: nauc_recall_at_100_diff1 value: 21.055069984278102 - type: nauc_recall_at_100_max value: 27.28361854728133 - type: nauc_recall_at_100_std value: 22.95330046751934 - type: nauc_recall_at_10_diff1 value: 36.824297431318534 - type: nauc_recall_at_10_max value: 16.78391962481304 - type: nauc_recall_at_10_std value: -7.654512690478331 - type: nauc_recall_at_1_diff1 value: 56.685765978079104 - type: nauc_recall_at_1_max value: 20.18890217442875 - type: nauc_recall_at_1_std value: -16.18134719380821 - type: nauc_recall_at_20_diff1 value: 33.73117180369375 - type: nauc_recall_at_20_max value: 17.378862974482637 - type: nauc_recall_at_20_std value: 0.9094643417016374 - type: nauc_recall_at_3_diff1 value: 46.622283696180126 - type: nauc_recall_at_3_max value: 21.479984733453943 - type: nauc_recall_at_3_std value: -8.824613646181701 - type: nauc_recall_at_5_diff1 value: 42.94803414271339 - type: nauc_recall_at_5_max value: 19.71393422033561 - type: nauc_recall_at_5_std value: -6.7509927712272795 - type: ndcg_at_1 value: 37.339 - type: ndcg_at_10 value: 50.55 - type: ndcg_at_100 value: 55.982 - type: ndcg_at_1000 value: 57.408 - type: ndcg_at_20 value: 52.934000000000005 - type: ndcg_at_3 value: 44.457 - type: ndcg_at_5 value: 47.436 - type: precision_at_1 value: 37.339 - type: precision_at_10 value: 9.886000000000001 - type: precision_at_100 value: 1.5789999999999997 - type: precision_at_1000 value: 0.198 - type: precision_at_20 value: 5.937 - type: precision_at_3 value: 21.65 - type: precision_at_5 value: 15.937000000000001 - type: recall_at_1 value: 30.524 - type: recall_at_10 value: 65.352 - type: recall_at_100 value: 87.637 - type: recall_at_1000 value: 96.196 - type: recall_at_20 value: 73.76 - type: recall_at_3 value: 47.969 - type: recall_at_5 value: 56.04599999999999 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval type: mteb/cqadupstack-english config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: main_score value: 49.755 - type: map_at_1 value: 32.126 - type: map_at_10 value: 43.763999999999996 - type: map_at_100 value: 45.046 - type: map_at_1000 value: 45.175 - type: map_at_20 value: 44.452999999999996 - type: map_at_3 value: 40.68 - type: map_at_5 value: 42.472 - type: mrr_at_1 value: 39.80891719745223 - type: mrr_at_10 value: 49.715347285411035 - type: mrr_at_100 value: 50.3067137739916 - type: mrr_at_1000 value: 50.351755886641016 - type: mrr_at_20 value: 50.068925634771844 - type: mrr_at_3 value: 47.52653927813167 - type: mrr_at_5 value: 48.841825902335536 - type: nauc_map_at_1000_diff1 value: 48.59094553355779 - type: nauc_map_at_1000_max value: 30.455633813140746 - type: nauc_map_at_1000_std value: -7.217406073058152 - type: nauc_map_at_100_diff1 value: 48.60124648489388 - type: nauc_map_at_100_max value: 30.428573520920466 - type: nauc_map_at_100_std value: -7.272726299567417 - type: nauc_map_at_10_diff1 value: 48.947272730266825 - type: nauc_map_at_10_max value: 29.84188561695202 - type: nauc_map_at_10_std value: -8.462571966727642 - type: nauc_map_at_1_diff1 value: 55.74406817125299 - type: nauc_map_at_1_max value: 24.430384754170788 - type: nauc_map_at_1_std value: -12.790982292634853 - type: nauc_map_at_20_diff1 value: 48.67656984970764 - type: nauc_map_at_20_max value: 30.052835051394112 - type: nauc_map_at_20_std value: -7.952122633861652 - type: nauc_map_at_3_diff1 value: 49.88369074916337 - type: nauc_map_at_3_max value: 27.92376094690595 - type: nauc_map_at_3_std value: -10.39159662660692 - type: nauc_map_at_5_diff1 value: 49.33479052279142 - type: nauc_map_at_5_max value: 28.89736772207175 - type: nauc_map_at_5_std value: -9.247446286716967 - type: nauc_mrr_at_1000_diff1 value: 48.02495353590695 - type: nauc_mrr_at_1000_max value: 33.01510639655688 - type: nauc_mrr_at_1000_std value: -2.608804747034229 - type: nauc_mrr_at_100_diff1 value: 48.00159674365556 - type: nauc_mrr_at_100_max value: 33.012146981367046 - type: nauc_mrr_at_100_std value: -2.596093152602284 - type: nauc_mrr_at_10_diff1 value: 47.92254673059952 - type: nauc_mrr_at_10_max value: 33.033698258656116 - type: nauc_mrr_at_10_std value: -2.6561699541152817 - type: nauc_mrr_at_1_diff1 value: 52.806910277190724 - type: nauc_mrr_at_1_max value: 31.341718131270625 - type: nauc_mrr_at_1_std value: -5.32066722747698 - type: nauc_mrr_at_20_diff1 value: 47.955516111940696 - type: nauc_mrr_at_20_max value: 32.99583133632421 - type: nauc_mrr_at_20_std value: -2.6479648540774274 - type: nauc_mrr_at_3_diff1 value: 48.56227782101869 - type: nauc_mrr_at_3_max value: 32.91019308993462 - type: nauc_mrr_at_3_std value: -3.2801391720497146 - type: nauc_mrr_at_5_diff1 value: 48.14323048037672 - type: nauc_mrr_at_5_max value: 33.051501845550824 - type: nauc_mrr_at_5_std value: -2.7131542652545235 - type: nauc_ndcg_at_1000_diff1 value: 46.57549884078857 - type: nauc_ndcg_at_1000_max value: 32.73441537313754 - type: nauc_ndcg_at_1000_std value: -2.9505871012666405 - type: nauc_ndcg_at_100_diff1 value: 46.25945736855511 - type: nauc_ndcg_at_100_max value: 32.731968732338615 - type: nauc_ndcg_at_100_std value: -2.860884900888649 - type: nauc_ndcg_at_10_diff1 value: 46.64064953658979 - type: nauc_ndcg_at_10_max value: 32.19083804142894 - type: nauc_ndcg_at_10_std value: -5.114718930051209 - type: nauc_ndcg_at_1_diff1 value: 52.806910277190724 - type: nauc_ndcg_at_1_max value: 31.341718131270625 - type: nauc_ndcg_at_1_std value: -5.32066722747698 - type: nauc_ndcg_at_20_diff1 value: 46.10776120787693 - type: nauc_ndcg_at_20_max value: 31.98180440767045 - type: nauc_ndcg_at_20_std value: -4.675498030188404 - type: nauc_ndcg_at_3_diff1 value: 47.54938256917904 - type: nauc_ndcg_at_3_max value: 31.381011523121472 - type: nauc_ndcg_at_3_std value: -6.200346745025213 - type: nauc_ndcg_at_5_diff1 value: 47.16330401930461 - type: nauc_ndcg_at_5_max value: 31.74089919030278 - type: nauc_ndcg_at_5_std value: -5.585078134051873 - type: nauc_precision_at_1000_diff1 value: -18.354995847082844 - type: nauc_precision_at_1000_max value: 7.536381798998833 - type: nauc_precision_at_1000_std value: 24.16855904999215 - type: nauc_precision_at_100_diff1 value: -13.330847169761142 - type: nauc_precision_at_100_max value: 17.34454376124087 - type: nauc_precision_at_100_std value: 28.940981276008166 - type: nauc_precision_at_10_diff1 value: 6.230361767352096 - type: nauc_precision_at_10_max value: 31.227148549129446 - type: nauc_precision_at_10_std value: 18.706855139007033 - type: nauc_precision_at_1_diff1 value: 52.806910277190724 - type: nauc_precision_at_1_max value: 31.341718131270625 - type: nauc_precision_at_1_std value: -5.32066722747698 - type: nauc_precision_at_20_diff1 value: -2.729763591398866 - type: nauc_precision_at_20_max value: 25.47075162004421 - type: nauc_precision_at_20_std value: 22.28998066735407 - type: nauc_precision_at_3_diff1 value: 25.377250814586304 - type: nauc_precision_at_3_max value: 32.92513118389388 - type: nauc_precision_at_3_std value: 6.309867600396586 - type: nauc_precision_at_5_diff1 value: 16.054142936713312 - type: nauc_precision_at_5_max value: 32.4817644691642 - type: nauc_precision_at_5_std value: 12.729986221747236 - type: nauc_recall_at_1000_diff1 value: 37.86057983442115 - type: nauc_recall_at_1000_max value: 42.61982722440853 - type: nauc_recall_at_1000_std value: 19.51530679930037 - type: nauc_recall_at_100_diff1 value: 35.14403552681517 - type: nauc_recall_at_100_max value: 35.93548871128352 - type: nauc_recall_at_100_std value: 10.900510103543851 - type: nauc_recall_at_10_diff1 value: 39.28172711059838 - type: nauc_recall_at_10_max value: 31.400042136145668 - type: nauc_recall_at_10_std value: -3.8018308194028667 - type: nauc_recall_at_1_diff1 value: 55.74406817125299 - type: nauc_recall_at_1_max value: 24.430384754170788 - type: nauc_recall_at_1_std value: -12.790982292634853 - type: nauc_recall_at_20_diff1 value: 35.986564737355316 - type: nauc_recall_at_20_max value: 30.944252645108172 - type: nauc_recall_at_20_std value: -1.4278934558452683 - type: nauc_recall_at_3_diff1 value: 43.602270947605085 - type: nauc_recall_at_3_max value: 28.47701279091494 - type: nauc_recall_at_3_std value: -8.430742781917855 - type: nauc_recall_at_5_diff1 value: 41.509698674149625 - type: nauc_recall_at_5_max value: 29.565124244183714 - type: nauc_recall_at_5_std value: -6.024428635685282 - type: ndcg_at_1 value: 39.809 - type: ndcg_at_10 value: 49.755 - type: ndcg_at_100 value: 54.083999999999996 - type: ndcg_at_1000 value: 56.006 - type: ndcg_at_20 value: 51.458000000000006 - type: ndcg_at_3 value: 45.466 - type: ndcg_at_5 value: 47.579 - type: precision_at_1 value: 39.809 - type: precision_at_10 value: 9.325 - type: precision_at_100 value: 1.496 - type: precision_at_1000 value: 0.19499999999999998 - type: precision_at_20 value: 5.481 - type: precision_at_3 value: 22.144 - type: precision_at_5 value: 15.656 - type: recall_at_1 value: 32.126 - type: recall_at_10 value: 60.479000000000006 - type: recall_at_100 value: 78.20700000000001 - type: recall_at_1000 value: 90.104 - type: recall_at_20 value: 66.5 - type: recall_at_3 value: 48.013 - type: recall_at_5 value: 53.76800000000001 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval type: mteb/cqadupstack-gaming config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: main_score value: 60.809000000000005 - type: map_at_1 value: 41.749 - type: map_at_10 value: 54.837 - type: map_at_100 value: 55.901999999999994 - type: map_at_1000 value: 55.949000000000005 - type: map_at_20 value: 55.53 - type: map_at_3 value: 51.615 - type: map_at_5 value: 53.574 - type: mrr_at_1 value: 47.58620689655172 - type: mrr_at_10 value: 58.19089913917508 - type: mrr_at_100 value: 58.84936132267041 - type: mrr_at_1000 value: 58.871035397713634 - type: mrr_at_20 value: 58.647503342955076 - type: mrr_at_3 value: 55.82027168234072 - type: mrr_at_5 value: 57.368861024033556 - type: nauc_map_at_1000_diff1 value: 52.402557197657586 - type: nauc_map_at_1000_max value: 27.576477868328396 - type: nauc_map_at_1000_std value: -8.99329187458237 - type: nauc_map_at_100_diff1 value: 52.381911979062004 - type: nauc_map_at_100_max value: 27.567441542826515 - type: nauc_map_at_100_std value: -8.974384513269332 - type: nauc_map_at_10_diff1 value: 52.43568738965503 - type: nauc_map_at_10_max value: 27.202397861188498 - type: nauc_map_at_10_std value: -9.658790772870079 - type: nauc_map_at_1_diff1 value: 55.44817856281672 - type: nauc_map_at_1_max value: 21.74063132275011 - type: nauc_map_at_1_std value: -12.491279758397635 - type: nauc_map_at_20_diff1 value: 52.36014101832447 - type: nauc_map_at_20_max value: 27.391225204112978 - type: nauc_map_at_20_std value: -9.247769516787553 - type: nauc_map_at_3_diff1 value: 52.99053106630418 - type: nauc_map_at_3_max value: 25.217871247817225 - type: nauc_map_at_3_std value: -11.832159341852192 - type: nauc_map_at_5_diff1 value: 52.892553369125714 - type: nauc_map_at_5_max value: 26.138698198481773 - type: nauc_map_at_5_std value: -11.142006671374872 - type: nauc_mrr_at_1000_diff1 value: 52.282852278286676 - type: nauc_mrr_at_1000_max value: 29.035101022588993 - type: nauc_mrr_at_1000_std value: -7.533923187353252 - type: nauc_mrr_at_100_diff1 value: 52.27658025254698 - type: nauc_mrr_at_100_max value: 29.046272472216167 - type: nauc_mrr_at_100_std value: -7.5193280598760275 - type: nauc_mrr_at_10_diff1 value: 52.0973984077142 - type: nauc_mrr_at_10_max value: 29.034639694702445 - type: nauc_mrr_at_10_std value: -7.688997296006921 - type: nauc_mrr_at_1_diff1 value: 55.35362841092645 - type: nauc_mrr_at_1_max value: 26.3544412906144 - type: nauc_mrr_at_1_std value: -10.271693671623822 - type: nauc_mrr_at_20_diff1 value: 52.17826228222121 - type: nauc_mrr_at_20_max value: 29.07700992148465 - type: nauc_mrr_at_20_std value: -7.575227708091961 - type: nauc_mrr_at_3_diff1 value: 52.47042589581697 - type: nauc_mrr_at_3_max value: 27.86908046170552 - type: nauc_mrr_at_3_std value: -8.877207171875764 - type: nauc_mrr_at_5_diff1 value: 52.44080737508035 - type: nauc_mrr_at_5_max value: 28.653161999073866 - type: nauc_mrr_at_5_std value: -8.137979343768452 - type: nauc_ndcg_at_1000_diff1 value: 51.63844182148695 - type: nauc_ndcg_at_1000_max value: 30.146221863674764 - type: nauc_ndcg_at_1000_std value: -5.890960422356722 - type: nauc_ndcg_at_100_diff1 value: 51.377361900247934 - type: nauc_ndcg_at_100_max value: 30.37104796007538 - type: nauc_ndcg_at_100_std value: -5.3155581070589255 - type: nauc_ndcg_at_10_diff1 value: 50.79025674027366 - type: nauc_ndcg_at_10_max value: 29.850333158184107 - type: nauc_ndcg_at_10_std value: -6.935159993029581 - type: nauc_ndcg_at_1_diff1 value: 55.35362841092645 - type: nauc_ndcg_at_1_max value: 26.3544412906144 - type: nauc_ndcg_at_1_std value: -10.271693671623822 - type: nauc_ndcg_at_20_diff1 value: 50.828114114114534 - type: nauc_ndcg_at_20_max value: 29.9983233605573 - type: nauc_ndcg_at_20_std value: -6.27157620880109 - type: nauc_ndcg_at_3_diff1 value: 51.74439976321089 - type: nauc_ndcg_at_3_max value: 26.35748659893694 - type: nauc_ndcg_at_3_std value: -10.502758740626387 - type: nauc_ndcg_at_5_diff1 value: 51.691906428113654 - type: nauc_ndcg_at_5_max value: 28.07037282482589 - type: nauc_ndcg_at_5_std value: -9.26498713131674 - type: nauc_precision_at_1000_diff1 value: -13.045862942872343 - type: nauc_precision_at_1000_max value: 18.71100102940669 - type: nauc_precision_at_1000_std value: 20.185301094052814 - type: nauc_precision_at_100_diff1 value: -10.519069240740276 - type: nauc_precision_at_100_max value: 21.592332795236054 - type: nauc_precision_at_100_std value: 24.40820689604234 - type: nauc_precision_at_10_diff1 value: 9.83612702521244 - type: nauc_precision_at_10_max value: 27.78464829064637 - type: nauc_precision_at_10_std value: 12.77575216627001 - type: nauc_precision_at_1_diff1 value: 55.35362841092645 - type: nauc_precision_at_1_max value: 26.3544412906144 - type: nauc_precision_at_1_std value: -10.271693671623822 - type: nauc_precision_at_20_diff1 value: -0.3012586758362439 - type: nauc_precision_at_20_max value: 25.49024158891868 - type: nauc_precision_at_20_std value: 19.54602887922898 - type: nauc_precision_at_3_diff1 value: 30.881428997961386 - type: nauc_precision_at_3_max value: 27.317400062905563 - type: nauc_precision_at_3_std value: -2.5767669869177166 - type: nauc_precision_at_5_diff1 value: 21.526439269416084 - type: nauc_precision_at_5_max value: 26.985523814770033 - type: nauc_precision_at_5_std value: 3.1676703484387407 - type: nauc_recall_at_1000_diff1 value: 46.02303492714767 - type: nauc_recall_at_1000_max value: 65.70236210629923 - type: nauc_recall_at_1000_std value: 68.66861203066527 - type: nauc_recall_at_100_diff1 value: 42.575155686556656 - type: nauc_recall_at_100_max value: 46.072807106917715 - type: nauc_recall_at_100_std value: 28.576545146471123 - type: nauc_recall_at_10_diff1 value: 42.579622990720075 - type: nauc_recall_at_10_max value: 35.123988767729784 - type: nauc_recall_at_10_std value: 0.15607034121893276 - type: nauc_recall_at_1_diff1 value: 55.44817856281672 - type: nauc_recall_at_1_max value: 21.74063132275011 - type: nauc_recall_at_1_std value: -12.491279758397635 - type: nauc_recall_at_20_diff1 value: 40.358928980023364 - type: nauc_recall_at_20_max value: 37.07231354969976 - type: nauc_recall_at_20_std value: 5.934903575091139 - type: nauc_recall_at_3_diff1 value: 48.043483569633075 - type: nauc_recall_at_3_max value: 25.019763887646075 - type: nauc_recall_at_3_std value: -11.296304351496861 - type: nauc_recall_at_5_diff1 value: 47.07832336832927 - type: nauc_recall_at_5_max value: 28.774525559808477 - type: nauc_recall_at_5_std value: -8.325974216587271 - type: ndcg_at_1 value: 47.586 - type: ndcg_at_10 value: 60.809000000000005 - type: ndcg_at_100 value: 64.777 - type: ndcg_at_1000 value: 65.65299999999999 - type: ndcg_at_20 value: 62.77700000000001 - type: ndcg_at_3 value: 55.542 - type: ndcg_at_5 value: 58.45 - type: precision_at_1 value: 47.586 - type: precision_at_10 value: 9.699 - type: precision_at_100 value: 1.2630000000000001 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_20 value: 5.47 - type: precision_at_3 value: 24.744 - type: precision_at_5 value: 17.052999999999997 - type: recall_at_1 value: 41.749 - type: recall_at_10 value: 74.925 - type: recall_at_100 value: 91.63799999999999 - type: recall_at_1000 value: 97.707 - type: recall_at_20 value: 82.113 - type: recall_at_3 value: 61.013 - type: recall_at_5 value: 68.024 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval type: mteb/cqadupstack-gis config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: main_score value: 41.010999999999996 - type: map_at_1 value: 26.512 - type: map_at_10 value: 35.582 - type: map_at_100 value: 36.725 - type: map_at_1000 value: 36.792 - type: map_at_20 value: 36.189 - type: map_at_3 value: 32.698 - type: map_at_5 value: 34.196 - type: mrr_at_1 value: 28.8135593220339 - type: mrr_at_10 value: 37.858353510895874 - type: mrr_at_100 value: 38.84150249404619 - type: mrr_at_1000 value: 38.88720782084809 - type: mrr_at_20 value: 38.40545808724167 - type: mrr_at_3 value: 35.16007532956685 - type: mrr_at_5 value: 36.60075329566853 - type: nauc_map_at_1000_diff1 value: 44.14341045007619 - type: nauc_map_at_1000_max value: 21.83090230361459 - type: nauc_map_at_1000_std value: -2.347667496652236 - type: nauc_map_at_100_diff1 value: 44.134415678663686 - type: nauc_map_at_100_max value: 21.79251087024944 - type: nauc_map_at_100_std value: -2.3306180903580227 - type: nauc_map_at_10_diff1 value: 44.23805076968619 - type: nauc_map_at_10_max value: 21.455438591479883 - type: nauc_map_at_10_std value: -2.493722512501044 - type: nauc_map_at_1_diff1 value: 50.75219800970029 - type: nauc_map_at_1_max value: 20.095603365172607 - type: nauc_map_at_1_std value: -4.985153146291869 - type: nauc_map_at_20_diff1 value: 44.166388805585115 - type: nauc_map_at_20_max value: 21.693543933661257 - type: nauc_map_at_20_std value: -2.4508276071275974 - type: nauc_map_at_3_diff1 value: 45.779507531455415 - type: nauc_map_at_3_max value: 20.59562953790094 - type: nauc_map_at_3_std value: -3.993844219399372 - type: nauc_map_at_5_diff1 value: 44.45066189078404 - type: nauc_map_at_5_max value: 21.707994147387325 - type: nauc_map_at_5_std value: -2.9790983285318395 - type: nauc_mrr_at_1000_diff1 value: 42.91797208986425 - type: nauc_mrr_at_1000_max value: 23.664531768019025 - type: nauc_mrr_at_1000_std value: -1.6115041452205332 - type: nauc_mrr_at_100_diff1 value: 42.896545810702506 - type: nauc_mrr_at_100_max value: 23.65897919747262 - type: nauc_mrr_at_100_std value: -1.5949118957726913 - type: nauc_mrr_at_10_diff1 value: 42.913381675422166 - type: nauc_mrr_at_10_max value: 23.428617152734823 - type: nauc_mrr_at_10_std value: -1.6718636026362976 - type: nauc_mrr_at_1_diff1 value: 48.663850340907125 - type: nauc_mrr_at_1_max value: 22.184582175432073 - type: nauc_mrr_at_1_std value: -4.230141768769419 - type: nauc_mrr_at_20_diff1 value: 42.865247040053525 - type: nauc_mrr_at_20_max value: 23.591991138674793 - type: nauc_mrr_at_20_std value: -1.750585998397851 - type: nauc_mrr_at_3_diff1 value: 44.419529298412215 - type: nauc_mrr_at_3_max value: 22.926968973330816 - type: nauc_mrr_at_3_std value: -2.931485628192958 - type: nauc_mrr_at_5_diff1 value: 43.176659989311794 - type: nauc_mrr_at_5_max value: 23.7215633400734 - type: nauc_mrr_at_5_std value: -1.7935219288720698 - type: nauc_ndcg_at_1000_diff1 value: 41.43232327601143 - type: nauc_ndcg_at_1000_max value: 23.869403930448875 - type: nauc_ndcg_at_1000_std value: 0.4696487244354181 - type: nauc_ndcg_at_100_diff1 value: 41.11770422295755 - type: nauc_ndcg_at_100_max value: 23.405734969894752 - type: nauc_ndcg_at_100_std value: 0.9501158369966024 - type: nauc_ndcg_at_10_diff1 value: 41.39919262908605 - type: nauc_ndcg_at_10_max value: 22.078683245248705 - type: nauc_ndcg_at_10_std value: -0.48471046612071483 - type: nauc_ndcg_at_1_diff1 value: 48.663850340907125 - type: nauc_ndcg_at_1_max value: 22.184582175432073 - type: nauc_ndcg_at_1_std value: -4.230141768769419 - type: nauc_ndcg_at_20_diff1 value: 41.057153028930955 - type: nauc_ndcg_at_20_max value: 22.75075414646254 - type: nauc_ndcg_at_20_std value: -0.5009809403847804 - type: nauc_ndcg_at_3_diff1 value: 44.12808162157037 - type: nauc_ndcg_at_3_max value: 21.513304011669216 - type: nauc_ndcg_at_3_std value: -3.476314502254043 - type: nauc_ndcg_at_5_diff1 value: 42.02477993539081 - type: nauc_ndcg_at_5_max value: 22.993280155485113 - type: nauc_ndcg_at_5_std value: -1.4348485052196784 - type: nauc_precision_at_1000_diff1 value: -12.703282521999684 - type: nauc_precision_at_1000_max value: 21.261559147365443 - type: nauc_precision_at_1000_std value: 11.987510813010104 - type: nauc_precision_at_100_diff1 value: 1.3193540120181582 - type: nauc_precision_at_100_max value: 23.586465484483046 - type: nauc_precision_at_100_std value: 15.377037583037842 - type: nauc_precision_at_10_diff1 value: 23.92411685901801 - type: nauc_precision_at_10_max value: 25.769592185336972 - type: nauc_precision_at_10_std value: 5.55297241086051 - type: nauc_precision_at_1_diff1 value: 48.663850340907125 - type: nauc_precision_at_1_max value: 22.184582175432073 - type: nauc_precision_at_1_std value: -4.230141768769419 - type: nauc_precision_at_20_diff1 value: 17.41583018509334 - type: nauc_precision_at_20_max value: 27.16806805449341 - type: nauc_precision_at_20_std value: 6.169046574472412 - type: nauc_precision_at_3_diff1 value: 35.934668249365224 - type: nauc_precision_at_3_max value: 25.18809399580456 - type: nauc_precision_at_3_std value: -1.1408993044710884 - type: nauc_precision_at_5_diff1 value: 29.084737319157888 - type: nauc_precision_at_5_max value: 29.00904198291267 - type: nauc_precision_at_5_std value: 2.5605025859025385 - type: nauc_recall_at_1000_diff1 value: 18.33857511421712 - type: nauc_recall_at_1000_max value: 44.629511499405275 - type: nauc_recall_at_1000_std value: 40.7761741514711 - type: nauc_recall_at_100_diff1 value: 25.434730951251172 - type: nauc_recall_at_100_max value: 27.236232434597017 - type: nauc_recall_at_100_std value: 23.17061685077859 - type: nauc_recall_at_10_diff1 value: 32.24292251195904 - type: nauc_recall_at_10_max value: 20.2187522298695 - type: nauc_recall_at_10_std value: 5.308768538226124 - type: nauc_recall_at_1_diff1 value: 50.75219800970029 - type: nauc_recall_at_1_max value: 20.095603365172607 - type: nauc_recall_at_1_std value: -4.985153146291869 - type: nauc_recall_at_20_diff1 value: 29.83705638335845 - type: nauc_recall_at_20_max value: 22.27631501260551 - type: nauc_recall_at_20_std value: 5.622813321851248 - type: nauc_recall_at_3_diff1 value: 40.19464882091112 - type: nauc_recall_at_3_max value: 20.560679014064025 - type: nauc_recall_at_3_std value: -2.660817664035202 - type: nauc_recall_at_5_diff1 value: 35.17294819092021 - type: nauc_recall_at_5_max value: 23.781966725747765 - type: nauc_recall_at_5_std value: 2.158710218858196 - type: ndcg_at_1 value: 28.814 - type: ndcg_at_10 value: 41.010999999999996 - type: ndcg_at_100 value: 46.625 - type: ndcg_at_1000 value: 48.166 - type: ndcg_at_20 value: 43.084 - type: ndcg_at_3 value: 35.3 - type: ndcg_at_5 value: 37.828 - type: precision_at_1 value: 28.814 - type: precision_at_10 value: 6.372999999999999 - type: precision_at_100 value: 0.9769999999999999 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_20 value: 3.6839999999999997 - type: precision_at_3 value: 14.991 - type: precision_at_5 value: 10.441 - type: recall_at_1 value: 26.512 - type: recall_at_10 value: 55.772 - type: recall_at_100 value: 81.39800000000001 - type: recall_at_1000 value: 92.85900000000001 - type: recall_at_20 value: 63.482000000000006 - type: recall_at_3 value: 40.11 - type: recall_at_5 value: 46.235 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval type: mteb/cqadupstack-mathematica config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: main_score value: 33.495000000000005 - type: map_at_1 value: 18.509999999999998 - type: map_at_10 value: 27.588 - type: map_at_100 value: 28.937 - type: map_at_1000 value: 29.038000000000004 - type: map_at_20 value: 28.349000000000004 - type: map_at_3 value: 24.567 - type: map_at_5 value: 26.222 - type: mrr_at_1 value: 22.63681592039801 - type: mrr_at_10 value: 32.362345020927116 - type: mrr_at_100 value: 33.34094306800873 - type: mrr_at_1000 value: 33.39517463630533 - type: mrr_at_20 value: 32.960201164030146 - type: mrr_at_3 value: 29.519071310116107 - type: mrr_at_5 value: 31.036484245439482 - type: nauc_map_at_1000_diff1 value: 24.154026257799263 - type: nauc_map_at_1000_max value: 21.311219137444258 - type: nauc_map_at_1000_std value: -2.5239081904127816 - type: nauc_map_at_100_diff1 value: 24.17157132613522 - type: nauc_map_at_100_max value: 21.31518640920159 - type: nauc_map_at_100_std value: -2.5579521484914975 - type: nauc_map_at_10_diff1 value: 23.86937479997375 - type: nauc_map_at_10_max value: 20.730553841216402 - type: nauc_map_at_10_std value: -2.984377872023596 - type: nauc_map_at_1_diff1 value: 28.20399969350878 - type: nauc_map_at_1_max value: 21.092411173110623 - type: nauc_map_at_1_std value: -2.355133156530113 - type: nauc_map_at_20_diff1 value: 23.989940502938843 - type: nauc_map_at_20_max value: 21.129447732785415 - type: nauc_map_at_20_std value: -2.6856108820515754 - type: nauc_map_at_3_diff1 value: 24.5338503568311 - type: nauc_map_at_3_max value: 20.3140323631877 - type: nauc_map_at_3_std value: -3.5893266173494176 - type: nauc_map_at_5_diff1 value: 23.828834650934596 - type: nauc_map_at_5_max value: 20.668700456540407 - type: nauc_map_at_5_std value: -3.4248771374423663 - type: nauc_mrr_at_1000_diff1 value: 23.431872574500176 - type: nauc_mrr_at_1000_max value: 21.898677562008924 - type: nauc_mrr_at_1000_std value: -2.214356190453914 - type: nauc_mrr_at_100_diff1 value: 23.454791986270962 - type: nauc_mrr_at_100_max value: 21.892376527575756 - type: nauc_mrr_at_100_std value: -2.2250470787614876 - type: nauc_mrr_at_10_diff1 value: 23.21857221649048 - type: nauc_mrr_at_10_max value: 21.80133864592139 - type: nauc_mrr_at_10_std value: -2.366980583648149 - type: nauc_mrr_at_1_diff1 value: 27.157881198158783 - type: nauc_mrr_at_1_max value: 21.601786829936433 - type: nauc_mrr_at_1_std value: -2.831383077547147 - type: nauc_mrr_at_20_diff1 value: 23.36592063714778 - type: nauc_mrr_at_20_max value: 21.943784707367183 - type: nauc_mrr_at_20_std value: -2.275301184484456 - type: nauc_mrr_at_3_diff1 value: 23.42493357741843 - type: nauc_mrr_at_3_max value: 21.51794229469302 - type: nauc_mrr_at_3_std value: -2.8403025245692053 - type: nauc_mrr_at_5_diff1 value: 23.09361104232496 - type: nauc_mrr_at_5_max value: 21.633041369993762 - type: nauc_mrr_at_5_std value: -2.4786874807071735 - type: nauc_ndcg_at_1000_diff1 value: 23.022273404374424 - type: nauc_ndcg_at_1000_max value: 22.991361978075954 - type: nauc_ndcg_at_1000_std value: 0.18153114824679512 - type: nauc_ndcg_at_100_diff1 value: 23.559298750876117 - type: nauc_ndcg_at_100_max value: 22.867138599638423 - type: nauc_ndcg_at_100_std value: -0.33841026524213386 - type: nauc_ndcg_at_10_diff1 value: 22.476239602270873 - type: nauc_ndcg_at_10_max value: 21.504002872557006 - type: nauc_ndcg_at_10_std value: -1.8676510759488962 - type: nauc_ndcg_at_1_diff1 value: 27.157881198158783 - type: nauc_ndcg_at_1_max value: 21.601786829936433 - type: nauc_ndcg_at_1_std value: -2.831383077547147 - type: nauc_ndcg_at_20_diff1 value: 22.850419852466032 - type: nauc_ndcg_at_20_max value: 22.543556058554582 - type: nauc_ndcg_at_20_std value: -1.1223300955195037 - type: nauc_ndcg_at_3_diff1 value: 23.576709980109943 - type: nauc_ndcg_at_3_max value: 20.98005022537365 - type: nauc_ndcg_at_3_std value: -3.4150814729224632 - type: nauc_ndcg_at_5_diff1 value: 22.418819576039574 - type: nauc_ndcg_at_5_max value: 21.157104875464984 - type: nauc_ndcg_at_5_std value: -2.727281992701386 - type: nauc_precision_at_1000_diff1 value: -0.2418803168229846 - type: nauc_precision_at_1000_max value: 4.7509057963503345 - type: nauc_precision_at_1000_std value: 4.862124108075474 - type: nauc_precision_at_100_diff1 value: 9.414277026375698 - type: nauc_precision_at_100_max value: 15.611397966739327 - type: nauc_precision_at_100_std value: 6.131008472677945 - type: nauc_precision_at_10_diff1 value: 13.500248662521026 - type: nauc_precision_at_10_max value: 20.27159793813296 - type: nauc_precision_at_10_std value: 0.36295387414869346 - type: nauc_precision_at_1_diff1 value: 27.157881198158783 - type: nauc_precision_at_1_max value: 21.601786829936433 - type: nauc_precision_at_1_std value: -2.831383077547147 - type: nauc_precision_at_20_diff1 value: 12.41644887272953 - type: nauc_precision_at_20_max value: 20.04934603426798 - type: nauc_precision_at_20_std value: 2.5263812441981117 - type: nauc_precision_at_3_diff1 value: 17.769700788858774 - type: nauc_precision_at_3_max value: 20.145180776013085 - type: nauc_precision_at_3_std value: -4.64889997854223 - type: nauc_precision_at_5_diff1 value: 14.437820424464798 - type: nauc_precision_at_5_max value: 21.086799398849397 - type: nauc_precision_at_5_std value: -3.542726145322661 - type: nauc_recall_at_1000_diff1 value: 5.340484078912298 - type: nauc_recall_at_1000_max value: 38.819059569745434 - type: nauc_recall_at_1000_std value: 35.261295626072965 - type: nauc_recall_at_100_diff1 value: 19.859262378217075 - type: nauc_recall_at_100_max value: 24.843220411163898 - type: nauc_recall_at_100_std value: 9.02424296030646 - type: nauc_recall_at_10_diff1 value: 18.128753186700266 - type: nauc_recall_at_10_max value: 20.873864236953324 - type: nauc_recall_at_10_std value: 0.7180942369537235 - type: nauc_recall_at_1_diff1 value: 28.20399969350878 - type: nauc_recall_at_1_max value: 21.092411173110623 - type: nauc_recall_at_1_std value: -2.355133156530113 - type: nauc_recall_at_20_diff1 value: 18.16983053968982 - type: nauc_recall_at_20_max value: 23.78295921592487 - type: nauc_recall_at_20_std value: 3.445605920721629 - type: nauc_recall_at_3_diff1 value: 20.52573155136365 - type: nauc_recall_at_3_max value: 19.725261691653092 - type: nauc_recall_at_3_std value: -2.985002529881709 - type: nauc_recall_at_5_diff1 value: 18.268276062359906 - type: nauc_recall_at_5_max value: 19.83117733925397 - type: nauc_recall_at_5_std value: -1.5709756011931044 - type: ndcg_at_1 value: 22.637 - type: ndcg_at_10 value: 33.495000000000005 - type: ndcg_at_100 value: 39.571 - type: ndcg_at_1000 value: 42.056 - type: ndcg_at_20 value: 35.987 - type: ndcg_at_3 value: 27.938000000000002 - type: ndcg_at_5 value: 30.426 - type: precision_at_1 value: 22.637 - type: precision_at_10 value: 6.381 - type: precision_at_100 value: 1.075 - type: precision_at_1000 value: 0.14200000000000002 - type: precision_at_20 value: 3.887 - type: precision_at_3 value: 13.806 - type: precision_at_5 value: 10.025 - type: recall_at_1 value: 18.509999999999998 - type: recall_at_10 value: 46.848 - type: recall_at_100 value: 73.08200000000001 - type: recall_at_1000 value: 90.82000000000001 - type: recall_at_20 value: 55.752 - type: recall_at_3 value: 31.461 - type: recall_at_5 value: 37.82 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval type: mteb/cqadupstack-physics config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: main_score value: 45.824 - type: map_at_1 value: 28.612 - type: map_at_10 value: 39.765 - type: map_at_100 value: 41.119 - type: map_at_1000 value: 41.227000000000004 - type: map_at_20 value: 40.541 - type: map_at_3 value: 36.506 - type: map_at_5 value: 38.261 - type: mrr_at_1 value: 34.55245428296439 - type: mrr_at_10 value: 44.875758131292294 - type: mrr_at_100 value: 45.75120721078986 - type: mrr_at_1000 value: 45.79381377334475 - type: mrr_at_20 value: 45.405431948897736 - type: mrr_at_3 value: 42.07571382739811 - type: mrr_at_5 value: 43.687840872633885 - type: nauc_map_at_1000_diff1 value: 45.54603816601572 - type: nauc_map_at_1000_max value: 24.204301607914598 - type: nauc_map_at_1000_std value: -3.385400241809544 - type: nauc_map_at_100_diff1 value: 45.54021209259281 - type: nauc_map_at_100_max value: 24.139240543977365 - type: nauc_map_at_100_std value: -3.4416424537503274 - type: nauc_map_at_10_diff1 value: 45.84733590786064 - type: nauc_map_at_10_max value: 23.87997713953964 - type: nauc_map_at_10_std value: -3.9977454684108364 - type: nauc_map_at_1_diff1 value: 51.77762922778957 - type: nauc_map_at_1_max value: 21.548940119767266 - type: nauc_map_at_1_std value: -6.774027308069757 - type: nauc_map_at_20_diff1 value: 45.6305134929685 - type: nauc_map_at_20_max value: 23.949898891211983 - type: nauc_map_at_20_std value: -3.8117633658105916 - type: nauc_map_at_3_diff1 value: 45.66231736851152 - type: nauc_map_at_3_max value: 23.292236552904384 - type: nauc_map_at_3_std value: -4.860026375260737 - type: nauc_map_at_5_diff1 value: 46.395069418251616 - type: nauc_map_at_5_max value: 23.500643171747225 - type: nauc_map_at_5_std value: -4.639116765481091 - type: nauc_mrr_at_1000_diff1 value: 43.59466223019413 - type: nauc_mrr_at_1000_max value: 25.500101182603146 - type: nauc_mrr_at_1000_std value: -2.8252405398970026 - type: nauc_mrr_at_100_diff1 value: 43.58521178366279 - type: nauc_mrr_at_100_max value: 25.499541544730093 - type: nauc_mrr_at_100_std value: -2.8171198325250226 - type: nauc_mrr_at_10_diff1 value: 43.62497401903436 - type: nauc_mrr_at_10_max value: 25.528257757563583 - type: nauc_mrr_at_10_std value: -3.033700543344133 - type: nauc_mrr_at_1_diff1 value: 46.962041492938845 - type: nauc_mrr_at_1_max value: 24.033390474152572 - type: nauc_mrr_at_1_std value: -4.371468850014676 - type: nauc_mrr_at_20_diff1 value: 43.57458456860973 - type: nauc_mrr_at_20_max value: 25.542827142027825 - type: nauc_mrr_at_20_std value: -2.942977643863032 - type: nauc_mrr_at_3_diff1 value: 43.490236992416406 - type: nauc_mrr_at_3_max value: 25.501895862532454 - type: nauc_mrr_at_3_std value: -3.195016044753707 - type: nauc_mrr_at_5_diff1 value: 44.05762384636381 - type: nauc_mrr_at_5_max value: 25.637192189122654 - type: nauc_mrr_at_5_std value: -3.108485562228445 - type: nauc_ndcg_at_1000_diff1 value: 43.750329376899145 - type: nauc_ndcg_at_1000_max value: 25.988769629465047 - type: nauc_ndcg_at_1000_std value: -0.779579989595003 - type: nauc_ndcg_at_100_diff1 value: 43.46369631570989 - type: nauc_ndcg_at_100_max value: 25.277438910530144 - type: nauc_ndcg_at_100_std value: -0.7982583332900034 - type: nauc_ndcg_at_10_diff1 value: 44.219854561129 - type: nauc_ndcg_at_10_max value: 24.488811135713366 - type: nauc_ndcg_at_10_std value: -3.0634463911544074 - type: nauc_ndcg_at_1_diff1 value: 46.962041492938845 - type: nauc_ndcg_at_1_max value: 24.033390474152572 - type: nauc_ndcg_at_1_std value: -4.371468850014676 - type: nauc_ndcg_at_20_diff1 value: 43.65750993509317 - type: nauc_ndcg_at_20_max value: 24.716204288403954 - type: nauc_ndcg_at_20_std value: -2.4571990559048693 - type: nauc_ndcg_at_3_diff1 value: 43.52084908897581 - type: nauc_ndcg_at_3_max value: 24.196196258265594 - type: nauc_ndcg_at_3_std value: -3.7543715034197094 - type: nauc_ndcg_at_5_diff1 value: 45.136234842051294 - type: nauc_ndcg_at_5_max value: 24.265515874537016 - type: nauc_ndcg_at_5_std value: -3.677818346298181 - type: nauc_precision_at_1000_diff1 value: -17.37107028658623 - type: nauc_precision_at_1000_max value: 11.852925239469377 - type: nauc_precision_at_1000_std value: 17.267039287022246 - type: nauc_precision_at_100_diff1 value: -8.83034667931023 - type: nauc_precision_at_100_max value: 15.674062413762499 - type: nauc_precision_at_100_std value: 17.443055501165748 - type: nauc_precision_at_10_diff1 value: 13.97225627982781 - type: nauc_precision_at_10_max value: 22.903732145381213 - type: nauc_precision_at_10_std value: 10.438944427071494 - type: nauc_precision_at_1_diff1 value: 46.962041492938845 - type: nauc_precision_at_1_max value: 24.033390474152572 - type: nauc_precision_at_1_std value: -4.371468850014676 - type: nauc_precision_at_20_diff1 value: 5.1840650860759006 - type: nauc_precision_at_20_max value: 20.65674986095816 - type: nauc_precision_at_20_std value: 13.43829791560826 - type: nauc_precision_at_3_diff1 value: 26.863442923738162 - type: nauc_precision_at_3_max value: 24.89992943990019 - type: nauc_precision_at_3_std value: 2.705507445737673 - type: nauc_precision_at_5_diff1 value: 25.047410532713528 - type: nauc_precision_at_5_max value: 24.792105468863745 - type: nauc_precision_at_5_std value: 6.064895256436395 - type: nauc_recall_at_1000_diff1 value: 38.55225790237392 - type: nauc_recall_at_1000_max value: 38.66004655379001 - type: nauc_recall_at_1000_std value: 30.074119645781032 - type: nauc_recall_at_100_diff1 value: 33.70955627870792 - type: nauc_recall_at_100_max value: 22.94584483255064 - type: nauc_recall_at_100_std value: 13.383196050226015 - type: nauc_recall_at_10_diff1 value: 39.19271153993607 - type: nauc_recall_at_10_max value: 21.949914437712632 - type: nauc_recall_at_10_std value: -2.333073190222427 - type: nauc_recall_at_1_diff1 value: 51.77762922778957 - type: nauc_recall_at_1_max value: 21.548940119767266 - type: nauc_recall_at_1_std value: -6.774027308069757 - type: nauc_recall_at_20_diff1 value: 36.128976477817226 - type: nauc_recall_at_20_max value: 21.758803887678624 - type: nauc_recall_at_20_std value: 0.24345057487832894 - type: nauc_recall_at_3_diff1 value: 40.36085174972692 - type: nauc_recall_at_3_max value: 23.008684064089795 - type: nauc_recall_at_3_std value: -4.009673059808576 - type: nauc_recall_at_5_diff1 value: 42.47055957862573 - type: nauc_recall_at_5_max value: 22.49445757462206 - type: nauc_recall_at_5_std value: -4.200704887512875 - type: ndcg_at_1 value: 34.552 - type: ndcg_at_10 value: 45.824 - type: ndcg_at_100 value: 51.398999999999994 - type: ndcg_at_1000 value: 53.418 - type: ndcg_at_20 value: 48.181000000000004 - type: ndcg_at_3 value: 40.369 - type: ndcg_at_5 value: 42.936 - type: precision_at_1 value: 34.552 - type: precision_at_10 value: 8.334999999999999 - type: precision_at_100 value: 1.28 - type: precision_at_1000 value: 0.163 - type: precision_at_20 value: 4.904 - type: precision_at_3 value: 19.185 - type: precision_at_5 value: 13.550999999999998 - type: recall_at_1 value: 28.612 - type: recall_at_10 value: 58.542 - type: recall_at_100 value: 81.765 - type: recall_at_1000 value: 94.91000000000001 - type: recall_at_20 value: 66.923 - type: recall_at_3 value: 43.844 - type: recall_at_5 value: 50.353 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval type: mteb/cqadupstack-programmers config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: main_score value: 43.86 - type: map_at_1 value: 25.764 - type: map_at_10 value: 37.19 - type: map_at_100 value: 38.657000000000004 - type: map_at_1000 value: 38.766 - type: map_at_20 value: 38.065 - type: map_at_3 value: 33.506 - type: map_at_5 value: 35.687000000000005 - type: mrr_at_1 value: 31.963470319634702 - type: mrr_at_10 value: 42.584348046676794 - type: mrr_at_100 value: 43.468986492322905 - type: mrr_at_1000 value: 43.51733725024093 - type: mrr_at_20 value: 43.123013383096485 - type: mrr_at_3 value: 39.49771689497716 - type: mrr_at_5 value: 41.3184931506849 - type: nauc_map_at_1000_diff1 value: 41.02254426402428 - type: nauc_map_at_1000_max value: 27.212188389036356 - type: nauc_map_at_1000_std value: -0.09284436172372049 - type: nauc_map_at_100_diff1 value: 41.024063360704524 - type: nauc_map_at_100_max value: 27.231415107652566 - type: nauc_map_at_100_std value: -0.11503072867335035 - type: nauc_map_at_10_diff1 value: 40.762484471371295 - type: nauc_map_at_10_max value: 26.552095220263983 - type: nauc_map_at_10_std value: -0.8961691473645707 - type: nauc_map_at_1_diff1 value: 46.0351777590459 - type: nauc_map_at_1_max value: 22.407309997018807 - type: nauc_map_at_1_std value: -5.8196986966998026 - type: nauc_map_at_20_diff1 value: 40.977183873424 - type: nauc_map_at_20_max value: 27.102353759407233 - type: nauc_map_at_20_std value: -0.2634342101271879 - type: nauc_map_at_3_diff1 value: 42.14689988383428 - type: nauc_map_at_3_max value: 25.03437706538189 - type: nauc_map_at_3_std value: -3.919393358878002 - type: nauc_map_at_5_diff1 value: 41.65119564986465 - type: nauc_map_at_5_max value: 26.285135372203783 - type: nauc_map_at_5_std value: -1.6186083609321076 - type: nauc_mrr_at_1000_diff1 value: 38.79875623439539 - type: nauc_mrr_at_1000_max value: 27.761719103654077 - type: nauc_mrr_at_1000_std value: 1.7009545757110647 - type: nauc_mrr_at_100_diff1 value: 38.791338829094414 - type: nauc_mrr_at_100_max value: 27.773943681897943 - type: nauc_mrr_at_100_std value: 1.7278801972398536 - type: nauc_mrr_at_10_diff1 value: 38.49632022153806 - type: nauc_mrr_at_10_max value: 27.77096700597113 - type: nauc_mrr_at_10_std value: 1.7302610962780125 - type: nauc_mrr_at_1_diff1 value: 42.48391167224108 - type: nauc_mrr_at_1_max value: 24.059631099761877 - type: nauc_mrr_at_1_std value: -3.3826521142445998 - type: nauc_mrr_at_20_diff1 value: 38.78568729552403 - type: nauc_mrr_at_20_max value: 27.830624573272438 - type: nauc_mrr_at_20_std value: 1.8702442428163355 - type: nauc_mrr_at_3_diff1 value: 39.14108666396381 - type: nauc_mrr_at_3_max value: 27.126136544524147 - type: nauc_mrr_at_3_std value: -0.6328064994794298 - type: nauc_mrr_at_5_diff1 value: 38.684789795150884 - type: nauc_mrr_at_5_max value: 27.524102240409142 - type: nauc_mrr_at_5_std value: 0.9039722426754292 - type: nauc_ndcg_at_1000_diff1 value: 39.151840725737735 - type: nauc_ndcg_at_1000_max value: 29.02571712184575 - type: nauc_ndcg_at_1000_std value: 4.000158107473303 - type: nauc_ndcg_at_100_diff1 value: 38.87706908494562 - type: nauc_ndcg_at_100_max value: 29.639606130771863 - type: nauc_ndcg_at_100_std value: 4.682439878287167 - type: nauc_ndcg_at_10_diff1 value: 37.841809143608586 - type: nauc_ndcg_at_10_max value: 28.232681174485542 - type: nauc_ndcg_at_10_std value: 2.6534878126703156 - type: nauc_ndcg_at_1_diff1 value: 42.48391167224108 - type: nauc_ndcg_at_1_max value: 24.059631099761877 - type: nauc_ndcg_at_1_std value: -3.3826521142445998 - type: nauc_ndcg_at_20_diff1 value: 38.78794350531766 - type: nauc_ndcg_at_20_max value: 29.391888718250126 - type: nauc_ndcg_at_20_std value: 4.246096416844256 - type: nauc_ndcg_at_3_diff1 value: 39.94959683105012 - type: nauc_ndcg_at_3_max value: 26.44461394195945 - type: nauc_ndcg_at_3_std value: -2.057142075379544 - type: nauc_ndcg_at_5_diff1 value: 39.228212224837854 - type: nauc_ndcg_at_5_max value: 27.63367669291804 - type: nauc_ndcg_at_5_std value: 0.8515177431823633 - type: nauc_precision_at_1000_diff1 value: -10.224587930955357 - type: nauc_precision_at_1000_max value: 2.0367826445781665 - type: nauc_precision_at_1000_std value: 10.157637353732063 - type: nauc_precision_at_100_diff1 value: -2.94132245415521 - type: nauc_precision_at_100_max value: 14.497654423038803 - type: nauc_precision_at_100_std value: 17.719614669918094 - type: nauc_precision_at_10_diff1 value: 11.348279066652248 - type: nauc_precision_at_10_max value: 24.801591961312027 - type: nauc_precision_at_10_std value: 15.999695471134517 - type: nauc_precision_at_1_diff1 value: 42.48391167224108 - type: nauc_precision_at_1_max value: 24.059631099761877 - type: nauc_precision_at_1_std value: -3.3826521142445998 - type: nauc_precision_at_20_diff1 value: 7.358574224272124 - type: nauc_precision_at_20_max value: 24.541749557197846 - type: nauc_precision_at_20_std value: 20.029723114376434 - type: nauc_precision_at_3_diff1 value: 27.31140928134787 - type: nauc_precision_at_3_max value: 27.266527909477595 - type: nauc_precision_at_3_std value: 4.293966422589966 - type: nauc_precision_at_5_diff1 value: 21.318237989903597 - type: nauc_precision_at_5_max value: 27.05790559252359 - type: nauc_precision_at_5_std value: 11.540331816577428 - type: nauc_recall_at_1000_diff1 value: 29.270735599789248 - type: nauc_recall_at_1000_max value: 42.74905404229601 - type: nauc_recall_at_1000_std value: 54.29872297065133 - type: nauc_recall_at_100_diff1 value: 29.423638581914137 - type: nauc_recall_at_100_max value: 38.27370611473139 - type: nauc_recall_at_100_std value: 26.86946286594378 - type: nauc_recall_at_10_diff1 value: 28.333642493802024 - type: nauc_recall_at_10_max value: 29.41983784943617 - type: nauc_recall_at_10_std value: 10.567148468398461 - type: nauc_recall_at_1_diff1 value: 46.0351777590459 - type: nauc_recall_at_1_max value: 22.407309997018807 - type: nauc_recall_at_1_std value: -5.8196986966998026 - type: nauc_recall_at_20_diff1 value: 31.42419633746832 - type: nauc_recall_at_20_max value: 33.84795718348709 - type: nauc_recall_at_20_std value: 17.206446408377992 - type: nauc_recall_at_3_diff1 value: 36.12978905683338 - type: nauc_recall_at_3_max value: 24.50013074408603 - type: nauc_recall_at_3_std value: -2.4884799065183474 - type: nauc_recall_at_5_diff1 value: 33.21734540694272 - type: nauc_recall_at_5_max value: 27.34082104368914 - type: nauc_recall_at_5_std value: 4.47285014662224 - type: ndcg_at_1 value: 31.963 - type: ndcg_at_10 value: 43.86 - type: ndcg_at_100 value: 49.522 - type: ndcg_at_1000 value: 51.635 - type: ndcg_at_20 value: 46.372 - type: ndcg_at_3 value: 37.742 - type: ndcg_at_5 value: 40.744 - type: precision_at_1 value: 31.963 - type: precision_at_10 value: 8.322000000000001 - type: precision_at_100 value: 1.311 - type: precision_at_1000 value: 0.167 - type: precision_at_20 value: 4.989 - type: precision_at_3 value: 18.151 - type: precision_at_5 value: 13.447000000000001 - type: recall_at_1 value: 25.764 - type: recall_at_10 value: 58.157000000000004 - type: recall_at_100 value: 81.631 - type: recall_at_1000 value: 95.863 - type: recall_at_20 value: 67.048 - type: recall_at_3 value: 41.465999999999994 - type: recall_at_5 value: 49.075 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: CQADupstackRetrieval_is_a_combined_dataset config: default split: test revision: CQADupstackRetrieval_is_a_combined_dataset metrics: - type: main_score value: 42.7395 - type: ndcg_at_10 value: 42.7395 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval type: mteb/cqadupstack-stats config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: main_score value: 38.382 - type: map_at_1 value: 24.956 - type: map_at_10 value: 33.364 - type: map_at_100 value: 34.346 - type: map_at_1000 value: 34.441 - type: map_at_20 value: 33.928000000000004 - type: map_at_3 value: 30.779 - type: map_at_5 value: 32.131 - type: mrr_at_1 value: 27.607361963190186 - type: mrr_at_10 value: 36.07538465283862 - type: mrr_at_100 value: 36.89078004113302 - type: mrr_at_1000 value: 36.960908522547534 - type: mrr_at_20 value: 36.57365123432853 - type: mrr_at_3 value: 33.64008179959101 - type: mrr_at_5 value: 34.91308793456033 - type: nauc_map_at_1000_diff1 value: 50.99876246634483 - type: nauc_map_at_1000_max value: 27.11375348316956 - type: nauc_map_at_1000_std value: -0.8223687586531171 - type: nauc_map_at_100_diff1 value: 50.96744235857796 - type: nauc_map_at_100_max value: 27.101746621071577 - type: nauc_map_at_100_std value: -0.8290435699503054 - type: nauc_map_at_10_diff1 value: 51.40563037369904 - type: nauc_map_at_10_max value: 26.715210946916446 - type: nauc_map_at_10_std value: -1.3301013304956433 - type: nauc_map_at_1_diff1 value: 59.10851769256635 - type: nauc_map_at_1_max value: 26.942179588390854 - type: nauc_map_at_1_std value: -5.088146259999974 - type: nauc_map_at_20_diff1 value: 51.00500762209007 - type: nauc_map_at_20_max value: 26.903545483709436 - type: nauc_map_at_20_std value: -0.8879313638699978 - type: nauc_map_at_3_diff1 value: 52.558964129136776 - type: nauc_map_at_3_max value: 26.54574953680247 - type: nauc_map_at_3_std value: -2.85116716794946 - type: nauc_map_at_5_diff1 value: 51.91166685128522 - type: nauc_map_at_5_max value: 26.97684718237022 - type: nauc_map_at_5_std value: -2.0545584607744303 - type: nauc_mrr_at_1000_diff1 value: 49.187292707593365 - type: nauc_mrr_at_1000_max value: 26.917886822963645 - type: nauc_mrr_at_1000_std value: 0.09314372039813201 - type: nauc_mrr_at_100_diff1 value: 49.15674865433787 - type: nauc_mrr_at_100_max value: 26.926539281262464 - type: nauc_mrr_at_100_std value: 0.09488690166949496 - type: nauc_mrr_at_10_diff1 value: 49.5191167745581 - type: nauc_mrr_at_10_max value: 26.578191574020853 - type: nauc_mrr_at_10_std value: -0.4332010149168712 - type: nauc_mrr_at_1_diff1 value: 56.83136962805171 - type: nauc_mrr_at_1_max value: 27.232682843362134 - type: nauc_mrr_at_1_std value: -3.4753930473122594 - type: nauc_mrr_at_20_diff1 value: 49.15134939617399 - type: nauc_mrr_at_20_max value: 26.87344888664184 - type: nauc_mrr_at_20_std value: 0.13910244198874352 - type: nauc_mrr_at_3_diff1 value: 49.893769596880894 - type: nauc_mrr_at_3_max value: 26.19959284832838 - type: nauc_mrr_at_3_std value: -1.4056523149404336 - type: nauc_mrr_at_5_diff1 value: 49.68766816395909 - type: nauc_mrr_at_5_max value: 26.826463837331012 - type: nauc_mrr_at_5_std value: -0.8964336795779043 - type: nauc_ndcg_at_1000_diff1 value: 47.30330423873059 - type: nauc_ndcg_at_1000_max value: 28.301104564231483 - type: nauc_ndcg_at_1000_std value: 2.683338095267426 - type: nauc_ndcg_at_100_diff1 value: 46.66867291937423 - type: nauc_ndcg_at_100_max value: 28.078461708764458 - type: nauc_ndcg_at_100_std value: 2.5295465311428695 - type: nauc_ndcg_at_10_diff1 value: 48.07351804799436 - type: nauc_ndcg_at_10_max value: 26.25185116704038 - type: nauc_ndcg_at_10_std value: 0.31947530103221494 - type: nauc_ndcg_at_1_diff1 value: 56.83136962805171 - type: nauc_ndcg_at_1_max value: 27.232682843362134 - type: nauc_ndcg_at_1_std value: -3.4753930473122594 - type: nauc_ndcg_at_20_diff1 value: 46.72863113281496 - type: nauc_ndcg_at_20_max value: 27.0829019438828 - type: nauc_ndcg_at_20_std value: 2.1819721644725316 - type: nauc_ndcg_at_3_diff1 value: 49.38507546500055 - type: nauc_ndcg_at_3_max value: 26.02547349067848 - type: nauc_ndcg_at_3_std value: -2.062107710534561 - type: nauc_ndcg_at_5_diff1 value: 48.702028938234946 - type: nauc_ndcg_at_5_max value: 26.631557342797297 - type: nauc_ndcg_at_5_std value: -1.13458716673632 - type: nauc_precision_at_1000_diff1 value: -7.616362974733644 - type: nauc_precision_at_1000_max value: 12.704716068960298 - type: nauc_precision_at_1000_std value: 10.693420265647761 - type: nauc_precision_at_100_diff1 value: 4.245047434408532 - type: nauc_precision_at_100_max value: 20.138149934295146 - type: nauc_precision_at_100_std value: 13.988324018580354 - type: nauc_precision_at_10_diff1 value: 24.713962748803503 - type: nauc_precision_at_10_max value: 21.912272587921095 - type: nauc_precision_at_10_std value: 9.923685641756377 - type: nauc_precision_at_1_diff1 value: 56.83136962805171 - type: nauc_precision_at_1_max value: 27.232682843362134 - type: nauc_precision_at_1_std value: -3.4753930473122594 - type: nauc_precision_at_20_diff1 value: 15.160997381408379 - type: nauc_precision_at_20_max value: 23.14475206210582 - type: nauc_precision_at_20_std value: 16.324297281253212 - type: nauc_precision_at_3_diff1 value: 37.310592783673044 - type: nauc_precision_at_3_max value: 25.183575695472932 - type: nauc_precision_at_3_std value: 2.3270248619137135 - type: nauc_precision_at_5_diff1 value: 31.548441277121807 - type: nauc_precision_at_5_max value: 25.36772873604284 - type: nauc_precision_at_5_std value: 5.676988862734406 - type: nauc_recall_at_1000_diff1 value: 20.43691655991097 - type: nauc_recall_at_1000_max value: 40.23701936874751 - type: nauc_recall_at_1000_std value: 35.76336885517243 - type: nauc_recall_at_100_diff1 value: 27.835043122315188 - type: nauc_recall_at_100_max value: 31.805810699439853 - type: nauc_recall_at_100_std value: 16.658546206916487 - type: nauc_recall_at_10_diff1 value: 39.044956198775424 - type: nauc_recall_at_10_max value: 24.230121801610007 - type: nauc_recall_at_10_std value: 4.204867352942831 - type: nauc_recall_at_1_diff1 value: 59.10851769256635 - type: nauc_recall_at_1_max value: 26.942179588390854 - type: nauc_recall_at_1_std value: -5.088146259999974 - type: nauc_recall_at_20_diff1 value: 32.48770164983945 - type: nauc_recall_at_20_max value: 26.349180533221002 - type: nauc_recall_at_20_std value: 11.188589531295396 - type: nauc_recall_at_3_diff1 value: 44.33638562753659 - type: nauc_recall_at_3_max value: 23.88918858892684 - type: nauc_recall_at_3_std value: -2.135430126322962 - type: nauc_recall_at_5_diff1 value: 41.72838294550878 - type: nauc_recall_at_5_max value: 25.134424065202815 - type: nauc_recall_at_5_std value: 0.4272804347838425 - type: ndcg_at_1 value: 27.607 - type: ndcg_at_10 value: 38.382 - type: ndcg_at_100 value: 43.003 - type: ndcg_at_1000 value: 45.299 - type: ndcg_at_20 value: 40.251 - type: ndcg_at_3 value: 33.451 - type: ndcg_at_5 value: 35.659 - type: precision_at_1 value: 27.607 - type: precision_at_10 value: 6.227 - type: precision_at_100 value: 0.928 - type: precision_at_1000 value: 0.12 - type: precision_at_20 value: 3.612 - type: precision_at_3 value: 14.468 - type: precision_at_5 value: 10.215 - type: recall_at_1 value: 24.956 - type: recall_at_10 value: 51.117000000000004 - type: recall_at_100 value: 71.80499999999999 - type: recall_at_1000 value: 88.494 - type: recall_at_20 value: 57.989999999999995 - type: recall_at_3 value: 37.387 - type: recall_at_5 value: 42.884 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval type: mteb/cqadupstack-tex config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: main_score value: 31.025999999999996 - type: map_at_1 value: 18.17 - type: map_at_10 value: 26.032 - type: map_at_100 value: 27.218999999999998 - type: map_at_1000 value: 27.345999999999997 - type: map_at_20 value: 26.666 - type: map_at_3 value: 23.463 - type: map_at_5 value: 24.806 - type: mrr_at_1 value: 21.472814865794906 - type: mrr_at_10 value: 29.512844252176716 - type: mrr_at_100 value: 30.490177986229945 - type: mrr_at_1000 value: 30.56207014886559 - type: mrr_at_20 value: 30.0506487851583 - type: mrr_at_3 value: 27.058958476714874 - type: mrr_at_5 value: 28.3752007341134 - type: nauc_map_at_1000_diff1 value: 33.44724427669794 - type: nauc_map_at_1000_max value: 22.395823506833914 - type: nauc_map_at_1000_std value: -0.4150981274633923 - type: nauc_map_at_100_diff1 value: 33.42890753994663 - type: nauc_map_at_100_max value: 22.376239097847126 - type: nauc_map_at_100_std value: -0.4401702678219545 - type: nauc_map_at_10_diff1 value: 33.666850629964244 - type: nauc_map_at_10_max value: 22.114097714173088 - type: nauc_map_at_10_std value: -1.0852366926427355 - type: nauc_map_at_1_diff1 value: 38.77085709277668 - type: nauc_map_at_1_max value: 19.957196529260628 - type: nauc_map_at_1_std value: -2.784123448084558 - type: nauc_map_at_20_diff1 value: 33.472381054071036 - type: nauc_map_at_20_max value: 22.308855255579438 - type: nauc_map_at_20_std value: -0.6486904865466587 - type: nauc_map_at_3_diff1 value: 34.42189817938094 - type: nauc_map_at_3_max value: 21.590305717180428 - type: nauc_map_at_3_std value: -1.7333267848855112 - type: nauc_map_at_5_diff1 value: 33.695706357298796 - type: nauc_map_at_5_max value: 21.987149458167956 - type: nauc_map_at_5_std value: -1.6701271473188217 - type: nauc_mrr_at_1000_diff1 value: 33.136413250064436 - type: nauc_mrr_at_1000_max value: 23.471325782764698 - type: nauc_mrr_at_1000_std value: -0.3980564209208141 - type: nauc_mrr_at_100_diff1 value: 33.13122882871807 - type: nauc_mrr_at_100_max value: 23.469147640101035 - type: nauc_mrr_at_100_std value: -0.39790519096729465 - type: nauc_mrr_at_10_diff1 value: 33.25655785925077 - type: nauc_mrr_at_10_max value: 23.35946047835974 - type: nauc_mrr_at_10_std value: -0.8500490754980572 - type: nauc_mrr_at_1_diff1 value: 38.34949791334492 - type: nauc_mrr_at_1_max value: 21.926534783990416 - type: nauc_mrr_at_1_std value: -2.7773636603036542 - type: nauc_mrr_at_20_diff1 value: 33.056629387468575 - type: nauc_mrr_at_20_max value: 23.482437558868856 - type: nauc_mrr_at_20_std value: -0.5598040986560595 - type: nauc_mrr_at_3_diff1 value: 33.889365112764544 - type: nauc_mrr_at_3_max value: 23.20061693129839 - type: nauc_mrr_at_3_std value: -1.25616825144634 - type: nauc_mrr_at_5_diff1 value: 33.44787691745913 - type: nauc_mrr_at_5_max value: 23.34712279165282 - type: nauc_mrr_at_5_std value: -1.3806302517881062 - type: nauc_ndcg_at_1000_diff1 value: 31.318327402226604 - type: nauc_ndcg_at_1000_max value: 23.71300269763234 - type: nauc_ndcg_at_1000_std value: 2.916517607448075 - type: nauc_ndcg_at_100_diff1 value: 31.040708439004266 - type: nauc_ndcg_at_100_max value: 23.467949695024597 - type: nauc_ndcg_at_100_std value: 2.7972274387802716 - type: nauc_ndcg_at_10_diff1 value: 31.816826867584318 - type: nauc_ndcg_at_10_max value: 22.924178018704605 - type: nauc_ndcg_at_10_std value: 0.11423808529946625 - type: nauc_ndcg_at_1_diff1 value: 38.34949791334492 - type: nauc_ndcg_at_1_max value: 21.926534783990416 - type: nauc_ndcg_at_1_std value: -2.7773636603036542 - type: nauc_ndcg_at_20_diff1 value: 31.129932166551626 - type: nauc_ndcg_at_20_max value: 23.35498887744279 - type: nauc_ndcg_at_20_std value: 1.491332034695489 - type: nauc_ndcg_at_3_diff1 value: 32.77551220179279 - type: nauc_ndcg_at_3_max value: 22.496210905750942 - type: nauc_ndcg_at_3_std value: -1.2280899372748 - type: nauc_ndcg_at_5_diff1 value: 31.924061220406134 - type: nauc_ndcg_at_5_max value: 22.91828327955767 - type: nauc_ndcg_at_5_std value: -1.2161178799994699 - type: nauc_precision_at_1000_diff1 value: 2.0558810641108645 - type: nauc_precision_at_1000_max value: 12.261412181056347 - type: nauc_precision_at_1000_std value: 6.082384169997254 - type: nauc_precision_at_100_diff1 value: 8.320082813012062 - type: nauc_precision_at_100_max value: 19.430325566521223 - type: nauc_precision_at_100_std value: 10.538646165339417 - type: nauc_precision_at_10_diff1 value: 21.082431908664486 - type: nauc_precision_at_10_max value: 24.52535332353091 - type: nauc_precision_at_10_std value: 4.566893805885459 - type: nauc_precision_at_1_diff1 value: 38.34949791334492 - type: nauc_precision_at_1_max value: 21.926534783990416 - type: nauc_precision_at_1_std value: -2.7773636603036542 - type: nauc_precision_at_20_diff1 value: 16.776282883791417 - type: nauc_precision_at_20_max value: 23.571814338924387 - type: nauc_precision_at_20_std value: 7.957033137318803 - type: nauc_precision_at_3_diff1 value: 26.92608583979234 - type: nauc_precision_at_3_max value: 24.697979517743974 - type: nauc_precision_at_3_std value: 0.9245173696347126 - type: nauc_precision_at_5_diff1 value: 23.379067251418306 - type: nauc_precision_at_5_max value: 25.064384143107183 - type: nauc_precision_at_5_std value: 1.2352265382532668 - type: nauc_recall_at_1000_diff1 value: 13.384576623547348 - type: nauc_recall_at_1000_max value: 26.174069812711664 - type: nauc_recall_at_1000_std value: 32.3995862628019 - type: nauc_recall_at_100_diff1 value: 20.21494084876213 - type: nauc_recall_at_100_max value: 22.83711613883119 - type: nauc_recall_at_100_std value: 16.218904596086052 - type: nauc_recall_at_10_diff1 value: 25.819867218299624 - type: nauc_recall_at_10_max value: 21.846054076621346 - type: nauc_recall_at_10_std value: 2.750587027235345 - type: nauc_recall_at_1_diff1 value: 38.77085709277668 - type: nauc_recall_at_1_max value: 19.957196529260628 - type: nauc_recall_at_1_std value: -2.784123448084558 - type: nauc_recall_at_20_diff1 value: 22.795734700198647 - type: nauc_recall_at_20_max value: 22.97792980515984 - type: nauc_recall_at_20_std value: 7.656686479045141 - type: nauc_recall_at_3_diff1 value: 29.469751389287712 - type: nauc_recall_at_3_max value: 21.743909396914702 - type: nauc_recall_at_3_std value: -0.23744939226524805 - type: nauc_recall_at_5_diff1 value: 27.12041567978547 - type: nauc_recall_at_5_max value: 22.615965251684102 - type: nauc_recall_at_5_std value: -0.44175896354919747 - type: ndcg_at_1 value: 21.473 - type: ndcg_at_10 value: 31.025999999999996 - type: ndcg_at_100 value: 36.678 - type: ndcg_at_1000 value: 39.437 - type: ndcg_at_20 value: 33.073 - type: ndcg_at_3 value: 26.302999999999997 - type: ndcg_at_5 value: 28.323999999999998 - type: precision_at_1 value: 21.473 - type: precision_at_10 value: 5.712 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.13999999999999999 - type: precision_at_20 value: 3.4450000000000003 - type: precision_at_3 value: 12.446 - type: precision_at_5 value: 9.002 - type: recall_at_1 value: 18.17 - type: recall_at_10 value: 42.545 - type: recall_at_100 value: 67.975 - type: recall_at_1000 value: 87.28200000000001 - type: recall_at_20 value: 50.099000000000004 - type: recall_at_3 value: 29.384 - type: recall_at_5 value: 34.574 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval type: mteb/cqadupstack-unix config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: main_score value: 44.277 - type: map_at_1 value: 29.042 - type: map_at_10 value: 38.666 - type: map_at_100 value: 39.794000000000004 - type: map_at_1000 value: 39.899 - type: map_at_20 value: 39.318 - type: map_at_3 value: 35.849 - type: map_at_5 value: 37.15 - type: mrr_at_1 value: 33.76865671641791 - type: mrr_at_10 value: 42.855180940535384 - type: mrr_at_100 value: 43.65565296806929 - type: mrr_at_1000 value: 43.71842180129229 - type: mrr_at_20 value: 43.3625132565656 - type: mrr_at_3 value: 40.391791044776085 - type: mrr_at_5 value: 41.618470149253675 - type: nauc_map_at_1000_diff1 value: 44.76754658362534 - type: nauc_map_at_1000_max value: 28.57841117124753 - type: nauc_map_at_1000_std value: -3.4660129167762643 - type: nauc_map_at_100_diff1 value: 44.75063288012039 - type: nauc_map_at_100_max value: 28.567968761586204 - type: nauc_map_at_100_std value: -3.477752632088259 - type: nauc_map_at_10_diff1 value: 44.73573302540937 - type: nauc_map_at_10_max value: 28.282280517161297 - type: nauc_map_at_10_std value: -3.7765442881437945 - type: nauc_map_at_1_diff1 value: 48.91202430221943 - type: nauc_map_at_1_max value: 27.2906556539887 - type: nauc_map_at_1_std value: -6.424630898785891 - type: nauc_map_at_20_diff1 value: 44.73630640872106 - type: nauc_map_at_20_max value: 28.526827874611932 - type: nauc_map_at_20_std value: -3.5923658150006554 - type: nauc_map_at_3_diff1 value: 44.93936041725324 - type: nauc_map_at_3_max value: 28.405229039045217 - type: nauc_map_at_3_std value: -4.281801393459546 - type: nauc_map_at_5_diff1 value: 45.12444472593884 - type: nauc_map_at_5_max value: 27.942749088135542 - type: nauc_map_at_5_std value: -4.31660794202584 - type: nauc_mrr_at_1000_diff1 value: 44.910772765630306 - type: nauc_mrr_at_1000_max value: 28.416973496166264 - type: nauc_mrr_at_1000_std value: -3.833035086221904 - type: nauc_mrr_at_100_diff1 value: 44.892619575891594 - type: nauc_mrr_at_100_max value: 28.404215527925608 - type: nauc_mrr_at_100_std value: -3.8217556133591444 - type: nauc_mrr_at_10_diff1 value: 44.85481976563178 - type: nauc_mrr_at_10_max value: 28.260751080109873 - type: nauc_mrr_at_10_std value: -4.045215043850954 - type: nauc_mrr_at_1_diff1 value: 48.40337895219412 - type: nauc_mrr_at_1_max value: 26.79679664862529 - type: nauc_mrr_at_1_std value: -7.487638965886408 - type: nauc_mrr_at_20_diff1 value: 44.87333343738951 - type: nauc_mrr_at_20_max value: 28.482139448224014 - type: nauc_mrr_at_20_std value: -3.8752286014067696 - type: nauc_mrr_at_3_diff1 value: 44.99758703191339 - type: nauc_mrr_at_3_max value: 28.452096144117228 - type: nauc_mrr_at_3_std value: -4.140282085210403 - type: nauc_mrr_at_5_diff1 value: 45.10512842837422 - type: nauc_mrr_at_5_max value: 28.05725516011186 - type: nauc_mrr_at_5_std value: -4.384647594071191 - type: nauc_ndcg_at_1000_diff1 value: 43.74564180532601 - type: nauc_ndcg_at_1000_max value: 29.551312025286137 - type: nauc_ndcg_at_1000_std value: -1.1083160515730703 - type: nauc_ndcg_at_100_diff1 value: 43.348496487866434 - type: nauc_ndcg_at_100_max value: 29.39942330551924 - type: nauc_ndcg_at_100_std value: -0.6705398040193502 - type: nauc_ndcg_at_10_diff1 value: 43.45725992484101 - type: nauc_ndcg_at_10_max value: 28.695126687511458 - type: nauc_ndcg_at_10_std value: -2.3740899316066018 - type: nauc_ndcg_at_1_diff1 value: 48.40337895219412 - type: nauc_ndcg_at_1_max value: 26.79679664862529 - type: nauc_ndcg_at_1_std value: -7.487638965886408 - type: nauc_ndcg_at_20_diff1 value: 43.42127264221856 - type: nauc_ndcg_at_20_max value: 29.610554267953955 - type: nauc_ndcg_at_20_std value: -1.5160151729087175 - type: nauc_ndcg_at_3_diff1 value: 43.971896193021074 - type: nauc_ndcg_at_3_max value: 28.837730342585 - type: nauc_ndcg_at_3_std value: -3.4378603384782007 - type: nauc_ndcg_at_5_diff1 value: 44.15567566140498 - type: nauc_ndcg_at_5_max value: 27.930607400156386 - type: nauc_ndcg_at_5_std value: -3.585093099817761 - type: nauc_precision_at_1000_diff1 value: -9.956611160892146 - type: nauc_precision_at_1000_max value: -0.8063171225425729 - type: nauc_precision_at_1000_std value: 3.2066057786084965 - type: nauc_precision_at_100_diff1 value: 3.146382306675135 - type: nauc_precision_at_100_max value: 11.124524772709485 - type: nauc_precision_at_100_std value: 8.246530036118072 - type: nauc_precision_at_10_diff1 value: 22.21083744539443 - type: nauc_precision_at_10_max value: 20.9279510282379 - type: nauc_precision_at_10_std value: 0.8735630455251976 - type: nauc_precision_at_1_diff1 value: 48.40337895219412 - type: nauc_precision_at_1_max value: 26.79679664862529 - type: nauc_precision_at_1_std value: -7.487638965886408 - type: nauc_precision_at_20_diff1 value: 16.234465676348854 - type: nauc_precision_at_20_max value: 20.16948133183925 - type: nauc_precision_at_20_std value: 3.8327418329672596 - type: nauc_precision_at_3_diff1 value: 33.96408049466874 - type: nauc_precision_at_3_max value: 26.54959675402931 - type: nauc_precision_at_3_std value: -1.5057033459640596 - type: nauc_precision_at_5_diff1 value: 31.730951214863268 - type: nauc_precision_at_5_max value: 22.928409396183813 - type: nauc_precision_at_5_std value: -1.2932144850491032 - type: nauc_recall_at_1000_diff1 value: 31.911131601344994 - type: nauc_recall_at_1000_max value: 41.29845876948943 - type: nauc_recall_at_1000_std value: 32.86114928598439 - type: nauc_recall_at_100_diff1 value: 34.190518909175935 - type: nauc_recall_at_100_max value: 30.435498463683913 - type: nauc_recall_at_100_std value: 15.306245199286572 - type: nauc_recall_at_10_diff1 value: 37.76931016052013 - type: nauc_recall_at_10_max value: 28.354258415554945 - type: nauc_recall_at_10_std value: 2.1382726105961383 - type: nauc_recall_at_1_diff1 value: 48.91202430221943 - type: nauc_recall_at_1_max value: 27.2906556539887 - type: nauc_recall_at_1_std value: -6.424630898785891 - type: nauc_recall_at_20_diff1 value: 36.64127277665404 - type: nauc_recall_at_20_max value: 31.88064756844142 - type: nauc_recall_at_20_std value: 6.330041609155803 - type: nauc_recall_at_3_diff1 value: 39.928915008535874 - type: nauc_recall_at_3_max value: 28.886859365998934 - type: nauc_recall_at_3_std value: -0.5614077232648746 - type: nauc_recall_at_5_diff1 value: 40.11510908090141 - type: nauc_recall_at_5_max value: 26.598685733701206 - type: nauc_recall_at_5_std value: -1.3682209196972956 - type: ndcg_at_1 value: 33.769 - type: ndcg_at_10 value: 44.277 - type: ndcg_at_100 value: 49.228 - type: ndcg_at_1000 value: 51.49700000000001 - type: ndcg_at_20 value: 46.327 - type: ndcg_at_3 value: 39.21 - type: ndcg_at_5 value: 41.079 - type: precision_at_1 value: 33.769 - type: precision_at_10 value: 7.444000000000001 - type: precision_at_100 value: 1.1119999999999999 - type: precision_at_1000 value: 0.14100000000000001 - type: precision_at_20 value: 4.295999999999999 - type: precision_at_3 value: 17.662 - type: precision_at_5 value: 12.071 - type: recall_at_1 value: 29.042 - type: recall_at_10 value: 57.111999999999995 - type: recall_at_100 value: 78.3 - type: recall_at_1000 value: 93.953 - type: recall_at_20 value: 64.497 - type: recall_at_3 value: 43.203 - type: recall_at_5 value: 47.977 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval type: mteb/cqadupstack-webmasters config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: main_score value: 38.599 - type: map_at_1 value: 21.217 - type: map_at_10 value: 31.759999999999998 - type: map_at_100 value: 33.46 - type: map_at_1000 value: 33.693 - type: map_at_20 value: 32.538 - type: map_at_3 value: 28.12 - type: map_at_5 value: 30.278 - type: mrr_at_1 value: 25.296442687747035 - type: mrr_at_10 value: 35.919442875964606 - type: mrr_at_100 value: 36.8779303312437 - type: mrr_at_1000 value: 36.922570739309975 - type: mrr_at_20 value: 36.363236891801236 - type: mrr_at_3 value: 32.27931488801054 - type: mrr_at_5 value: 34.67061923583662 - type: nauc_map_at_1000_diff1 value: 43.0148058737978 - type: nauc_map_at_1000_max value: 17.553889912339514 - type: nauc_map_at_1000_std value: 0.9601873788177007 - type: nauc_map_at_100_diff1 value: 42.969459838326216 - type: nauc_map_at_100_max value: 17.76919717409622 - type: nauc_map_at_100_std value: 0.6632625815223269 - type: nauc_map_at_10_diff1 value: 42.98826908360665 - type: nauc_map_at_10_max value: 17.76896358622925 - type: nauc_map_at_10_std value: -0.45566206208185633 - type: nauc_map_at_1_diff1 value: 50.590335982075594 - type: nauc_map_at_1_max value: 16.97180935006884 - type: nauc_map_at_1_std value: -2.940015286306796 - type: nauc_map_at_20_diff1 value: 42.72422678706294 - type: nauc_map_at_20_max value: 17.641464587206702 - type: nauc_map_at_20_std value: -0.26097954117038147 - type: nauc_map_at_3_diff1 value: 44.71021406842305 - type: nauc_map_at_3_max value: 16.57587387432079 - type: nauc_map_at_3_std value: -2.230661818926213 - type: nauc_map_at_5_diff1 value: 43.802805094104194 - type: nauc_map_at_5_max value: 17.640518843508353 - type: nauc_map_at_5_std value: -1.6190220314007822 - type: nauc_mrr_at_1000_diff1 value: 42.68310467353682 - type: nauc_mrr_at_1000_max value: 16.809230915513226 - type: nauc_mrr_at_1000_std value: 2.530781715420812 - type: nauc_mrr_at_100_diff1 value: 42.65536982994195 - type: nauc_mrr_at_100_max value: 16.81836928558118 - type: nauc_mrr_at_100_std value: 2.555278034267055 - type: nauc_mrr_at_10_diff1 value: 42.577786166678486 - type: nauc_mrr_at_10_max value: 16.645561294057938 - type: nauc_mrr_at_10_std value: 2.502600191358007 - type: nauc_mrr_at_1_diff1 value: 48.34939409795325 - type: nauc_mrr_at_1_max value: 14.841478345109453 - type: nauc_mrr_at_1_std value: 1.0717766686776664 - type: nauc_mrr_at_20_diff1 value: 42.427753141104304 - type: nauc_mrr_at_20_max value: 16.664781724264145 - type: nauc_mrr_at_20_std value: 2.395840190443403 - type: nauc_mrr_at_3_diff1 value: 43.66899063945567 - type: nauc_mrr_at_3_max value: 15.543248241002669 - type: nauc_mrr_at_3_std value: 1.2576977387893074 - type: nauc_mrr_at_5_diff1 value: 42.99892264765085 - type: nauc_mrr_at_5_max value: 16.81932916641511 - type: nauc_mrr_at_5_std value: 1.8643647111878687 - type: nauc_ndcg_at_1000_diff1 value: 41.42201980313824 - type: nauc_ndcg_at_1000_max value: 19.07092299908919 - type: nauc_ndcg_at_1000_std value: 4.295482250528043 - type: nauc_ndcg_at_100_diff1 value: 40.412303224666836 - type: nauc_ndcg_at_100_max value: 19.150525298676474 - type: nauc_ndcg_at_100_std value: 4.346757305462373 - type: nauc_ndcg_at_10_diff1 value: 39.690541084634866 - type: nauc_ndcg_at_10_max value: 17.42767047724514 - type: nauc_ndcg_at_10_std value: 2.6951617967923736 - type: nauc_ndcg_at_1_diff1 value: 48.34939409795325 - type: nauc_ndcg_at_1_max value: 14.841478345109453 - type: nauc_ndcg_at_1_std value: 1.0717766686776664 - type: nauc_ndcg_at_20_diff1 value: 39.015028370760206 - type: nauc_ndcg_at_20_max value: 17.464821460656594 - type: nauc_ndcg_at_20_std value: 2.1306007919526593 - type: nauc_ndcg_at_3_diff1 value: 42.5545826087251 - type: nauc_ndcg_at_3_max value: 15.501551627104107 - type: nauc_ndcg_at_3_std value: 0.43815674039665264 - type: nauc_ndcg_at_5_diff1 value: 41.16176253155491 - type: nauc_ndcg_at_5_max value: 17.21541866304199 - type: nauc_ndcg_at_5_std value: 1.2475713954135355 - type: nauc_precision_at_1000_diff1 value: -5.018264434243664 - type: nauc_precision_at_1000_max value: -19.051595112230373 - type: nauc_precision_at_1000_std value: 27.49016185285424 - type: nauc_precision_at_100_diff1 value: 0.6676020228422808 - type: nauc_precision_at_100_max value: -8.065467335328094 - type: nauc_precision_at_100_std value: 28.34507443509331 - type: nauc_precision_at_10_diff1 value: 14.960481851261331 - type: nauc_precision_at_10_max value: 8.180908417423614 - type: nauc_precision_at_10_std value: 15.782334983720018 - type: nauc_precision_at_1_diff1 value: 48.34939409795325 - type: nauc_precision_at_1_max value: 14.841478345109453 - type: nauc_precision_at_1_std value: 1.0717766686776664 - type: nauc_precision_at_20_diff1 value: 6.799749571010497 - type: nauc_precision_at_20_max value: 2.7700077220190544 - type: nauc_precision_at_20_std value: 18.063969796619165 - type: nauc_precision_at_3_diff1 value: 32.81890592828406 - type: nauc_precision_at_3_max value: 12.805769393300215 - type: nauc_precision_at_3_std value: 4.401586696810425 - type: nauc_precision_at_5_diff1 value: 23.921161576360568 - type: nauc_precision_at_5_max value: 13.031428928244152 - type: nauc_precision_at_5_std value: 9.699568722955304 - type: nauc_recall_at_1000_diff1 value: 22.236575533894708 - type: nauc_recall_at_1000_max value: 54.436097597300005 - type: nauc_recall_at_1000_std value: 43.621140974086295 - type: nauc_recall_at_100_diff1 value: 24.005061022725336 - type: nauc_recall_at_100_max value: 27.767764791874622 - type: nauc_recall_at_100_std value: 22.80866673645538 - type: nauc_recall_at_10_diff1 value: 28.097551153230526 - type: nauc_recall_at_10_max value: 17.57728377350311 - type: nauc_recall_at_10_std value: 5.256733501506101 - type: nauc_recall_at_1_diff1 value: 50.590335982075594 - type: nauc_recall_at_1_max value: 16.97180935006884 - type: nauc_recall_at_1_std value: -2.940015286306796 - type: nauc_recall_at_20_diff1 value: 24.73878192984989 - type: nauc_recall_at_20_max value: 16.729004763940104 - type: nauc_recall_at_20_std value: 4.444628995374048 - type: nauc_recall_at_3_diff1 value: 37.735425023845295 - type: nauc_recall_at_3_max value: 14.499939981335283 - type: nauc_recall_at_3_std value: -1.8203061094896973 - type: nauc_recall_at_5_diff1 value: 33.55839532086379 - type: nauc_recall_at_5_max value: 17.75773937538373 - type: nauc_recall_at_5_std value: -0.07451143688637211 - type: ndcg_at_1 value: 25.296000000000003 - type: ndcg_at_10 value: 38.599 - type: ndcg_at_100 value: 45.025 - type: ndcg_at_1000 value: 47.176 - type: ndcg_at_20 value: 40.509 - type: ndcg_at_3 value: 31.996000000000002 - type: ndcg_at_5 value: 35.548 - type: precision_at_1 value: 25.296000000000003 - type: precision_at_10 value: 7.767 - type: precision_at_100 value: 1.6129999999999998 - type: precision_at_1000 value: 0.244 - type: precision_at_20 value: 4.872 - type: precision_at_3 value: 15.152 - type: precision_at_5 value: 11.937000000000001 - type: recall_at_1 value: 21.217 - type: recall_at_10 value: 53.437999999999995 - type: recall_at_100 value: 81.96799999999999 - type: recall_at_1000 value: 94.855 - type: recall_at_20 value: 60.363 - type: recall_at_3 value: 35.416 - type: recall_at_5 value: 44.107 - task: type: Retrieval dataset: name: MTEB CQADupstackWordpressRetrieval type: mteb/cqadupstack-wordpress config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: main_score value: 35.286 - type: map_at_1 value: 22.192999999999998 - type: map_at_10 value: 30.124000000000002 - type: map_at_100 value: 31.146 - type: map_at_1000 value: 31.247000000000003 - type: map_at_20 value: 30.724 - type: map_at_3 value: 27.119 - type: map_at_5 value: 28.71 - type: mrr_at_1 value: 24.029574861367838 - type: mrr_at_10 value: 32.05931109350702 - type: mrr_at_100 value: 32.96551163821783 - type: mrr_at_1000 value: 33.043459542699296 - type: mrr_at_20 value: 32.59428858130953 - type: mrr_at_3 value: 29.390018484288348 - type: mrr_at_5 value: 30.850277264325314 - type: nauc_map_at_1000_diff1 value: 35.16694213270797 - type: nauc_map_at_1000_max value: 22.797276834623574 - type: nauc_map_at_1000_std value: -3.0634948181060198 - type: nauc_map_at_100_diff1 value: 35.20491950561727 - type: nauc_map_at_100_max value: 22.796629844096863 - type: nauc_map_at_100_std value: -3.077355076917843 - type: nauc_map_at_10_diff1 value: 35.06820967831128 - type: nauc_map_at_10_max value: 22.5678322137875 - type: nauc_map_at_10_std value: -3.7558046413513013 - type: nauc_map_at_1_diff1 value: 40.50413598561168 - type: nauc_map_at_1_max value: 21.966442375416722 - type: nauc_map_at_1_std value: -6.712726678544362 - type: nauc_map_at_20_diff1 value: 35.14124459228563 - type: nauc_map_at_20_max value: 22.718751682757006 - type: nauc_map_at_20_std value: -3.357819045837222 - type: nauc_map_at_3_diff1 value: 35.93212210391374 - type: nauc_map_at_3_max value: 22.40784360829625 - type: nauc_map_at_3_std value: -4.7336084886833305 - type: nauc_map_at_5_diff1 value: 35.63892032488633 - type: nauc_map_at_5_max value: 22.55532746912718 - type: nauc_map_at_5_std value: -4.285573151638768 - type: nauc_mrr_at_1000_diff1 value: 36.30571855088176 - type: nauc_mrr_at_1000_max value: 26.51213781024889 - type: nauc_mrr_at_1000_std value: -2.027245271968698 - type: nauc_mrr_at_100_diff1 value: 36.29250233792124 - type: nauc_mrr_at_100_max value: 26.48681755038979 - type: nauc_mrr_at_100_std value: -2.0403918178592417 - type: nauc_mrr_at_10_diff1 value: 36.16188746318805 - type: nauc_mrr_at_10_max value: 26.301480069835275 - type: nauc_mrr_at_10_std value: -2.551636416429969 - type: nauc_mrr_at_1_diff1 value: 43.02454864876149 - type: nauc_mrr_at_1_max value: 26.567214425393164 - type: nauc_mrr_at_1_std value: -5.346998954028162 - type: nauc_mrr_at_20_diff1 value: 36.16765735531818 - type: nauc_mrr_at_20_max value: 26.463701839238247 - type: nauc_mrr_at_20_std value: -2.1593929836262515 - type: nauc_mrr_at_3_diff1 value: 37.91898816620748 - type: nauc_mrr_at_3_max value: 27.408664226667177 - type: nauc_mrr_at_3_std value: -3.2055902944778696 - type: nauc_mrr_at_5_diff1 value: 37.07752653343404 - type: nauc_mrr_at_5_max value: 27.04838616584666 - type: nauc_mrr_at_5_std value: -2.6905535922490627 - type: nauc_ndcg_at_1000_diff1 value: 32.77919521624436 - type: nauc_ndcg_at_1000_max value: 24.53344626291134 - type: nauc_ndcg_at_1000_std value: 0.7250839739175355 - type: nauc_ndcg_at_100_diff1 value: 33.09789248177774 - type: nauc_ndcg_at_100_max value: 24.39220995450699 - type: nauc_ndcg_at_100_std value: 0.42021432060057934 - type: nauc_ndcg_at_10_diff1 value: 32.475687182953514 - type: nauc_ndcg_at_10_max value: 23.289940547957013 - type: nauc_ndcg_at_10_std value: -2.2372954440736628 - type: nauc_ndcg_at_1_diff1 value: 43.02454864876149 - type: nauc_ndcg_at_1_max value: 26.567214425393164 - type: nauc_ndcg_at_1_std value: -5.346998954028162 - type: nauc_ndcg_at_20_diff1 value: 32.615162301314406 - type: nauc_ndcg_at_20_max value: 23.592283255223638 - type: nauc_ndcg_at_20_std value: -0.9666029411482454 - type: nauc_ndcg_at_3_diff1 value: 34.91305857074282 - type: nauc_ndcg_at_3_max value: 24.114483830046765 - type: nauc_ndcg_at_3_std value: -3.6221786676260725 - type: nauc_ndcg_at_5_diff1 value: 34.15722647039212 - type: nauc_ndcg_at_5_max value: 23.885765133899003 - type: nauc_ndcg_at_5_std value: -3.0464520124354526 - type: nauc_precision_at_1000_diff1 value: -16.774877787196818 - type: nauc_precision_at_1000_max value: 3.434920920797534 - type: nauc_precision_at_1000_std value: 12.693242189177525 - type: nauc_precision_at_100_diff1 value: 10.971079821338591 - type: nauc_precision_at_100_max value: 23.952967311935705 - type: nauc_precision_at_100_std value: 18.53730146884614 - type: nauc_precision_at_10_diff1 value: 23.009747058852795 - type: nauc_precision_at_10_max value: 26.01626635498701 - type: nauc_precision_at_10_std value: 5.512964190005387 - type: nauc_precision_at_1_diff1 value: 43.02454864876149 - type: nauc_precision_at_1_max value: 26.567214425393164 - type: nauc_precision_at_1_std value: -5.346998954028162 - type: nauc_precision_at_20_diff1 value: 21.0449838413959 - type: nauc_precision_at_20_max value: 27.1788421077057 - type: nauc_precision_at_20_std value: 12.1005925779907 - type: nauc_precision_at_3_diff1 value: 31.620555316251835 - type: nauc_precision_at_3_max value: 27.48904359714662 - type: nauc_precision_at_3_std value: -0.37680032200429714 - type: nauc_precision_at_5_diff1 value: 29.175984831220962 - type: nauc_precision_at_5_max value: 27.20617375505473 - type: nauc_precision_at_5_std value: 0.7980896555153171 - type: nauc_recall_at_1000_diff1 value: 6.904715425517141 - type: nauc_recall_at_1000_max value: 33.38883092681765 - type: nauc_recall_at_1000_std value: 42.78326229564927 - type: nauc_recall_at_100_diff1 value: 23.6743337390406 - type: nauc_recall_at_100_max value: 26.455577684379545 - type: nauc_recall_at_100_std value: 14.88555128462821 - type: nauc_recall_at_10_diff1 value: 23.276637895958206 - type: nauc_recall_at_10_max value: 21.09618028934919 - type: nauc_recall_at_10_std value: 0.6332438472540376 - type: nauc_recall_at_1_diff1 value: 40.50413598561168 - type: nauc_recall_at_1_max value: 21.966442375416722 - type: nauc_recall_at_1_std value: -6.712726678544362 - type: nauc_recall_at_20_diff1 value: 23.02548969713959 - type: nauc_recall_at_20_max value: 21.118464808683886 - type: nauc_recall_at_20_std value: 4.9048810729410555 - type: nauc_recall_at_3_diff1 value: 30.708484078575864 - type: nauc_recall_at_3_max value: 23.798172366006927 - type: nauc_recall_at_3_std value: -2.8762947540136694 - type: nauc_recall_at_5_diff1 value: 28.644452067839847 - type: nauc_recall_at_5_max value: 23.413735887800126 - type: nauc_recall_at_5_std value: -0.9731245183499054 - type: ndcg_at_1 value: 24.03 - type: ndcg_at_10 value: 35.286 - type: ndcg_at_100 value: 40.318 - type: ndcg_at_1000 value: 42.799 - type: ndcg_at_20 value: 37.363 - type: ndcg_at_3 value: 29.486 - type: ndcg_at_5 value: 32.147 - type: precision_at_1 value: 24.03 - type: precision_at_10 value: 5.7299999999999995 - type: precision_at_100 value: 0.882 - type: precision_at_1000 value: 0.121 - type: precision_at_20 value: 3.383 - type: precision_at_3 value: 12.508 - type: precision_at_5 value: 9.094 - type: recall_at_1 value: 22.192999999999998 - type: recall_at_10 value: 49.461 - type: recall_at_100 value: 72.563 - type: recall_at_1000 value: 90.81 - type: recall_at_20 value: 57.375 - type: recall_at_3 value: 33.717999999999996 - type: recall_at_5 value: 40.176 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: main_score value: 25.968999999999998 - type: map_at_1 value: 10.465 - type: map_at_10 value: 18.169 - type: map_at_100 value: 20.092 - type: map_at_1000 value: 20.284 - type: map_at_20 value: 19.171 - type: map_at_3 value: 14.881 - type: map_at_5 value: 16.514 - type: mrr_at_1 value: 23.257328990228014 - type: mrr_at_10 value: 33.97151129724417 - type: mrr_at_100 value: 35.08279716446454 - type: mrr_at_1000 value: 35.12955360595455 - type: mrr_at_20 value: 34.67890892662469 - type: mrr_at_3 value: 30.31487513572199 - type: mrr_at_5 value: 32.47774158523336 - type: nauc_map_at_1000_diff1 value: 22.508790878195743 - type: nauc_map_at_1000_max value: 42.86110108324203 - type: nauc_map_at_1000_std value: 17.460592768800236 - type: nauc_map_at_100_diff1 value: 22.47424380453617 - type: nauc_map_at_100_max value: 42.817537062906894 - type: nauc_map_at_100_std value: 17.39646930826512 - type: nauc_map_at_10_diff1 value: 22.895817017299535 - type: nauc_map_at_10_max value: 42.57389260104658 - type: nauc_map_at_10_std value: 15.663345782505141 - type: nauc_map_at_1_diff1 value: 29.782434770483434 - type: nauc_map_at_1_max value: 40.30779478123252 - type: nauc_map_at_1_std value: 9.169072850474336 - type: nauc_map_at_20_diff1 value: 22.52636007972728 - type: nauc_map_at_20_max value: 42.72977175598548 - type: nauc_map_at_20_std value: 16.826430314819532 - type: nauc_map_at_3_diff1 value: 25.430362223440216 - type: nauc_map_at_3_max value: 41.23018690763574 - type: nauc_map_at_3_std value: 11.237131213228738 - type: nauc_map_at_5_diff1 value: 24.65629039714421 - type: nauc_map_at_5_max value: 41.988672858397216 - type: nauc_map_at_5_std value: 13.242478093862001 - type: nauc_mrr_at_1000_diff1 value: 20.034648607271407 - type: nauc_mrr_at_1000_max value: 40.23257048114861 - type: nauc_mrr_at_1000_std value: 18.69886218513857 - type: nauc_mrr_at_100_diff1 value: 20.019074148137793 - type: nauc_mrr_at_100_max value: 40.24098309862084 - type: nauc_mrr_at_100_std value: 18.725617772901064 - type: nauc_mrr_at_10_diff1 value: 19.78613711521588 - type: nauc_mrr_at_10_max value: 40.30354678203266 - type: nauc_mrr_at_10_std value: 18.641502254160113 - type: nauc_mrr_at_1_diff1 value: 26.212266274024305 - type: nauc_mrr_at_1_max value: 38.39690305559415 - type: nauc_mrr_at_1_std value: 13.686935637449835 - type: nauc_mrr_at_20_diff1 value: 19.738943766823667 - type: nauc_mrr_at_20_max value: 40.15994046799137 - type: nauc_mrr_at_20_std value: 18.7675323725771 - type: nauc_mrr_at_3_diff1 value: 21.535633620495098 - type: nauc_mrr_at_3_max value: 39.814964124435555 - type: nauc_mrr_at_3_std value: 16.867563481348512 - type: nauc_mrr_at_5_diff1 value: 20.554028607806643 - type: nauc_mrr_at_5_max value: 39.95273649684803 - type: nauc_mrr_at_5_std value: 18.018606564393508 - type: nauc_ndcg_at_1000_diff1 value: 19.231694930865835 - type: nauc_ndcg_at_1000_max value: 43.786342262504384 - type: nauc_ndcg_at_1000_std value: 24.58938106769215 - type: nauc_ndcg_at_100_diff1 value: 18.449692303446884 - type: nauc_ndcg_at_100_max value: 43.09332257111741 - type: nauc_ndcg_at_100_std value: 24.076148997875972 - type: nauc_ndcg_at_10_diff1 value: 18.794870643983113 - type: nauc_ndcg_at_10_max value: 42.68489686537609 - type: nauc_ndcg_at_10_std value: 19.824273193830138 - type: nauc_ndcg_at_1_diff1 value: 26.212266274024305 - type: nauc_ndcg_at_1_max value: 38.39690305559415 - type: nauc_ndcg_at_1_std value: 13.686935637449835 - type: nauc_ndcg_at_20_diff1 value: 17.888976798208986 - type: nauc_ndcg_at_20_max value: 42.68344681480489 - type: nauc_ndcg_at_20_std value: 22.024920930367635 - type: nauc_ndcg_at_3_diff1 value: 22.66758693649605 - type: nauc_ndcg_at_3_max value: 40.73129464185028 - type: nauc_ndcg_at_3_std value: 14.296972427434584 - type: nauc_ndcg_at_5_diff1 value: 21.679144576532003 - type: nauc_ndcg_at_5_max value: 41.71672238804214 - type: nauc_ndcg_at_5_std value: 16.555959072290143 - type: nauc_precision_at_1000_diff1 value: -2.081929796569419 - type: nauc_precision_at_1000_max value: 13.924323936713579 - type: nauc_precision_at_1000_std value: 25.683621437744993 - type: nauc_precision_at_100_diff1 value: 1.1123551017787119 - type: nauc_precision_at_100_max value: 22.8934883876795 - type: nauc_precision_at_100_std value: 30.75370813152731 - type: nauc_precision_at_10_diff1 value: 6.134591528037698 - type: nauc_precision_at_10_max value: 36.378203893137886 - type: nauc_precision_at_10_std value: 28.203991309470194 - type: nauc_precision_at_1_diff1 value: 26.212266274024305 - type: nauc_precision_at_1_max value: 38.39690305559415 - type: nauc_precision_at_1_std value: 13.686935637449835 - type: nauc_precision_at_20_diff1 value: 2.8948491365237414 - type: nauc_precision_at_20_max value: 31.995440435175894 - type: nauc_precision_at_20_std value: 30.907948426803074 - type: nauc_precision_at_3_diff1 value: 17.803374402107583 - type: nauc_precision_at_3_max value: 39.71382363935541 - type: nauc_precision_at_3_std value: 19.05607379919279 - type: nauc_precision_at_5_diff1 value: 13.183647572995858 - type: nauc_precision_at_5_max value: 37.372958758333176 - type: nauc_precision_at_5_std value: 22.684954201772694 - type: nauc_recall_at_1000_diff1 value: 10.687446445630595 - type: nauc_recall_at_1000_max value: 38.014958891448764 - type: nauc_recall_at_1000_std value: 39.687398878563165 - type: nauc_recall_at_100_diff1 value: 7.86242321071247 - type: nauc_recall_at_100_max value: 34.59799033668923 - type: nauc_recall_at_100_std value: 31.18604585594386 - type: nauc_recall_at_10_diff1 value: 10.058818014539304 - type: nauc_recall_at_10_max value: 37.71306951133003 - type: nauc_recall_at_10_std value: 21.639511051775173 - type: nauc_recall_at_1_diff1 value: 29.782434770483434 - type: nauc_recall_at_1_max value: 40.30779478123252 - type: nauc_recall_at_1_std value: 9.169072850474336 - type: nauc_recall_at_20_diff1 value: 7.5900422288024485 - type: nauc_recall_at_20_max value: 36.58324344566059 - type: nauc_recall_at_20_std value: 26.074077704752884 - type: nauc_recall_at_3_diff1 value: 20.442039600313066 - type: nauc_recall_at_3_max value: 39.03868055580001 - type: nauc_recall_at_3_std value: 13.406827316257171 - type: nauc_recall_at_5_diff1 value: 16.88422991349309 - type: nauc_recall_at_5_max value: 38.546255662975426 - type: nauc_recall_at_5_std value: 17.080233955497103 - type: ndcg_at_1 value: 23.257 - type: ndcg_at_10 value: 25.968999999999998 - type: ndcg_at_100 value: 33.657 - type: ndcg_at_1000 value: 37.181 - type: ndcg_at_20 value: 28.87 - type: ndcg_at_3 value: 20.36 - type: ndcg_at_5 value: 22.424 - type: precision_at_1 value: 23.257 - type: precision_at_10 value: 8.248 - type: precision_at_100 value: 1.644 - type: precision_at_1000 value: 0.22999999999999998 - type: precision_at_20 value: 5.3420000000000005 - type: precision_at_3 value: 15.071000000000002 - type: precision_at_5 value: 12.039 - type: recall_at_1 value: 10.465 - type: recall_at_10 value: 32.365 - type: recall_at_100 value: 58.835 - type: recall_at_1000 value: 78.545 - type: recall_at_20 value: 40.572 - type: recall_at_3 value: 18.831999999999997 - type: recall_at_5 value: 24.215999999999998 - task: type: Retrieval dataset: name: MTEB DBPedia type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: main_score value: 42.795 - type: map_at_1 value: 9.628 - type: map_at_10 value: 21.549 - type: map_at_100 value: 30.675 - type: map_at_1000 value: 32.617000000000004 - type: map_at_20 value: 25.008000000000003 - type: map_at_3 value: 15.126000000000001 - type: map_at_5 value: 17.754 - type: mrr_at_1 value: 68.0 - type: mrr_at_10 value: 76.7046626984127 - type: mrr_at_100 value: 76.97906268203754 - type: mrr_at_1000 value: 76.98661470807507 - type: mrr_at_20 value: 76.88214895806809 - type: mrr_at_3 value: 75.08333333333334 - type: mrr_at_5 value: 76.25833333333333 - type: nauc_map_at_1000_diff1 value: 15.06294563042792 - type: nauc_map_at_1000_max value: 7.307605123960991 - type: nauc_map_at_1000_std value: 19.892780673298173 - type: nauc_map_at_100_diff1 value: 16.48781539305217 - type: nauc_map_at_100_max value: 4.163036696915528 - type: nauc_map_at_100_std value: 16.9862711285641 - type: nauc_map_at_10_diff1 value: 25.351685305752092 - type: nauc_map_at_10_max value: -6.328798521224069 - type: nauc_map_at_10_std value: -7.316100107756104 - type: nauc_map_at_1_diff1 value: 38.3295470731631 - type: nauc_map_at_1_max value: -13.880427961544012 - type: nauc_map_at_1_std value: -22.099143608658252 - type: nauc_map_at_20_diff1 value: 22.2373690636284 - type: nauc_map_at_20_max value: -2.6229531452834056 - type: nauc_map_at_20_std value: 1.6286682207920802 - type: nauc_map_at_3_diff1 value: 31.582526735533577 - type: nauc_map_at_3_max value: -10.152686655405788 - type: nauc_map_at_3_std value: -17.997301336774466 - type: nauc_map_at_5_diff1 value: 28.501849128731603 - type: nauc_map_at_5_max value: -9.420764154585926 - type: nauc_map_at_5_std value: -14.952855186121209 - type: nauc_mrr_at_1000_diff1 value: 30.838732279173076 - type: nauc_mrr_at_1000_max value: 32.742075195276975 - type: nauc_mrr_at_1000_std value: 38.42496648842422 - type: nauc_mrr_at_100_diff1 value: 30.8338161596336 - type: nauc_mrr_at_100_max value: 32.747953204575985 - type: nauc_mrr_at_100_std value: 38.40776137669358 - type: nauc_mrr_at_10_diff1 value: 30.83472234877192 - type: nauc_mrr_at_10_max value: 32.75116109344571 - type: nauc_mrr_at_10_std value: 38.561736475692676 - type: nauc_mrr_at_1_diff1 value: 32.014028056112174 - type: nauc_mrr_at_1_max value: 31.408074770230073 - type: nauc_mrr_at_1_std value: 36.184005942920294 - type: nauc_mrr_at_20_diff1 value: 30.885805642613395 - type: nauc_mrr_at_20_max value: 32.905137024099005 - type: nauc_mrr_at_20_std value: 38.3565815722824 - type: nauc_mrr_at_3_diff1 value: 30.51305186466627 - type: nauc_mrr_at_3_max value: 30.693328691723114 - type: nauc_mrr_at_3_std value: 39.38622400861065 - type: nauc_mrr_at_5_diff1 value: 30.317279990938452 - type: nauc_mrr_at_5_max value: 31.926594839342393 - type: nauc_mrr_at_5_std value: 38.46685947469381 - type: nauc_ndcg_at_1000_diff1 value: 16.28596536744406 - type: nauc_ndcg_at_1000_max value: 19.413497051062997 - type: nauc_ndcg_at_1000_std value: 32.19501591498477 - type: nauc_ndcg_at_100_diff1 value: 18.55686261799664 - type: nauc_ndcg_at_100_max value: 9.725126148607572 - type: nauc_ndcg_at_100_std value: 24.696663921228648 - type: nauc_ndcg_at_10_diff1 value: 21.669435236871433 - type: nauc_ndcg_at_10_max value: 12.731873441825986 - type: nauc_ndcg_at_10_std value: 20.482348650861326 - type: nauc_ndcg_at_1_diff1 value: 31.538883148722135 - type: nauc_ndcg_at_1_max value: 20.576090539094245 - type: nauc_ndcg_at_1_std value: 20.233717369003852 - type: nauc_ndcg_at_20_diff1 value: 21.115728605763355 - type: nauc_ndcg_at_20_max value: 8.575022320641088 - type: nauc_ndcg_at_20_std value: 17.16237882479797 - type: nauc_ndcg_at_3_diff1 value: 21.96172812117672 - type: nauc_ndcg_at_3_max value: 19.298402519375337 - type: nauc_ndcg_at_3_std value: 23.923843562473767 - type: nauc_ndcg_at_5_diff1 value: 22.03436389555251 - type: nauc_ndcg_at_5_max value: 16.258866065882057 - type: nauc_ndcg_at_5_std value: 21.68792802793435 - type: nauc_precision_at_1000_diff1 value: -18.062026408537747 - type: nauc_precision_at_1000_max value: 32.37834793726209 - type: nauc_precision_at_1000_std value: 15.562855786223656 - type: nauc_precision_at_100_diff1 value: -17.50364274426908 - type: nauc_precision_at_100_max value: 32.384814142255365 - type: nauc_precision_at_100_std value: 46.98395338876178 - type: nauc_precision_at_10_diff1 value: -6.291350100959028 - type: nauc_precision_at_10_max value: 31.138748065701144 - type: nauc_precision_at_10_std value: 48.694830834125774 - type: nauc_precision_at_1_diff1 value: 32.014028056112174 - type: nauc_precision_at_1_max value: 31.408074770230073 - type: nauc_precision_at_1_std value: 36.184005942920294 - type: nauc_precision_at_20_diff1 value: -9.594405455110774 - type: nauc_precision_at_20_max value: 31.65229115461723 - type: nauc_precision_at_20_std value: 49.63751798560031 - type: nauc_precision_at_3_diff1 value: 2.907362402129637 - type: nauc_precision_at_3_max value: 29.336784202857842 - type: nauc_precision_at_3_std value: 41.233594127921656 - type: nauc_precision_at_5_diff1 value: -2.002808834315085 - type: nauc_precision_at_5_max value: 29.391646001138856 - type: nauc_precision_at_5_std value: 41.27215777932478 - type: nauc_recall_at_1000_diff1 value: 3.6669593750924676 - type: nauc_recall_at_1000_max value: 12.118453863666423 - type: nauc_recall_at_1000_std value: 37.7404411804956 - type: nauc_recall_at_100_diff1 value: 11.551328204082889 - type: nauc_recall_at_100_max value: -0.4201950194636047 - type: nauc_recall_at_100_std value: 20.603663886349473 - type: nauc_recall_at_10_diff1 value: 22.004325506597784 - type: nauc_recall_at_10_max value: -10.916078771299482 - type: nauc_recall_at_10_std value: -11.651436530615781 - type: nauc_recall_at_1_diff1 value: 38.3295470731631 - type: nauc_recall_at_1_max value: -13.880427961544012 - type: nauc_recall_at_1_std value: -22.099143608658252 - type: nauc_recall_at_20_diff1 value: 17.28828514515132 - type: nauc_recall_at_20_max value: -8.963532309390933 - type: nauc_recall_at_20_std value: -3.0611415145174004 - type: nauc_recall_at_3_diff1 value: 28.096648289440473 - type: nauc_recall_at_3_max value: -13.329899258907819 - type: nauc_recall_at_3_std value: -20.197327039357774 - type: nauc_recall_at_5_diff1 value: 25.390692207787435 - type: nauc_recall_at_5_max value: -13.790276409207744 - type: nauc_recall_at_5_std value: -18.5418320355191 - type: ndcg_at_1 value: 53.25 - type: ndcg_at_10 value: 42.795 - type: ndcg_at_100 value: 49.099 - type: ndcg_at_1000 value: 56.603 - type: ndcg_at_20 value: 42.626 - type: ndcg_at_3 value: 46.288000000000004 - type: ndcg_at_5 value: 44.131 - type: precision_at_1 value: 68.0 - type: precision_at_10 value: 34.425 - type: precision_at_100 value: 11.437999999999999 - type: precision_at_1000 value: 2.419 - type: precision_at_20 value: 26.150000000000002 - type: precision_at_3 value: 50.833 - type: precision_at_5 value: 43.35 - type: recall_at_1 value: 9.628 - type: recall_at_10 value: 27.354 - type: recall_at_100 value: 57.792 - type: recall_at_1000 value: 80.312 - type: recall_at_20 value: 35.022999999999996 - type: recall_at_3 value: 16.408 - type: recall_at_5 value: 20.415 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 91.95500000000001 - type: f1 value: 88.27225607128511 - type: f1_weighted value: 92.14368925298409 - type: main_score value: 91.95500000000001 - task: type: Retrieval dataset: name: MTEB FEVER type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: main_score value: 90.275 - type: map_at_1 value: 80.721 - type: map_at_10 value: 87.238 - type: map_at_100 value: 87.41799999999999 - type: map_at_1000 value: 87.434 - type: map_at_20 value: 87.348 - type: map_at_3 value: 86.374 - type: map_at_5 value: 86.953 - type: mrr_at_1 value: 86.94869486948696 - type: mrr_at_10 value: 92.0686354349719 - type: mrr_at_100 value: 92.09215030076146 - type: mrr_at_1000 value: 92.09344861007547 - type: mrr_at_20 value: 92.0846280116474 - type: mrr_at_3 value: 91.6666666666665 - type: mrr_at_5 value: 91.9771977197718 - type: nauc_map_at_1000_diff1 value: 44.446874776362726 - type: nauc_map_at_1000_max value: 16.85832681494148 - type: nauc_map_at_1000_std value: -17.455166829409652 - type: nauc_map_at_100_diff1 value: 44.39421076627414 - type: nauc_map_at_100_max value: 16.859206709004894 - type: nauc_map_at_100_std value: -17.437449601907005 - type: nauc_map_at_10_diff1 value: 44.047289123932885 - type: nauc_map_at_10_max value: 16.610920553416857 - type: nauc_map_at_10_std value: -17.44761661034698 - type: nauc_map_at_1_diff1 value: 50.44398867726515 - type: nauc_map_at_1_max value: 13.24078069953482 - type: nauc_map_at_1_std value: -19.210048245062886 - type: nauc_map_at_20_diff1 value: 44.288412502291386 - type: nauc_map_at_20_max value: 16.7417203527263 - type: nauc_map_at_20_std value: -17.442736788911155 - type: nauc_map_at_3_diff1 value: 43.58931349226214 - type: nauc_map_at_3_max value: 15.834164485867314 - type: nauc_map_at_3_std value: -18.0736541220278 - type: nauc_map_at_5_diff1 value: 43.6482956305525 - type: nauc_map_at_5_max value: 16.42704963322462 - type: nauc_map_at_5_std value: -17.174816205253062 - type: nauc_mrr_at_1000_diff1 value: 67.77435408307646 - type: nauc_mrr_at_1000_max value: 20.14236166181914 - type: nauc_mrr_at_1000_std value: -32.692278423000545 - type: nauc_mrr_at_100_diff1 value: 67.7672396928755 - type: nauc_mrr_at_100_max value: 20.150626964982006 - type: nauc_mrr_at_100_std value: -32.689293692169954 - type: nauc_mrr_at_10_diff1 value: 67.74578233808445 - type: nauc_mrr_at_10_max value: 20.298254431905445 - type: nauc_mrr_at_10_std value: -32.82329737843994 - type: nauc_mrr_at_1_diff1 value: 70.33588000237124 - type: nauc_mrr_at_1_max value: 16.9043692756742 - type: nauc_mrr_at_1_std value: -31.522857872957193 - type: nauc_mrr_at_20_diff1 value: 67.76428206610476 - type: nauc_mrr_at_20_max value: 20.202375397586163 - type: nauc_mrr_at_20_std value: -32.67136675382927 - type: nauc_mrr_at_3_diff1 value: 67.417613259637 - type: nauc_mrr_at_3_max value: 20.11275620894492 - type: nauc_mrr_at_3_std value: -34.342859783441604 - type: nauc_mrr_at_5_diff1 value: 67.48335609889256 - type: nauc_mrr_at_5_max value: 20.48428961056853 - type: nauc_mrr_at_5_std value: -32.78396456892126 - type: nauc_ndcg_at_1000_diff1 value: 46.54506830965683 - type: nauc_ndcg_at_1000_max value: 18.660778615757575 - type: nauc_ndcg_at_1000_std value: -18.19132930651756 - type: nauc_ndcg_at_100_diff1 value: 45.28000017258881 - type: nauc_ndcg_at_100_max value: 18.834681603109253 - type: nauc_ndcg_at_100_std value: -17.649395026761326 - type: nauc_ndcg_at_10_diff1 value: 44.11229706633734 - type: nauc_ndcg_at_10_max value: 18.29305093798137 - type: nauc_ndcg_at_10_std value: -17.90162239517308 - type: nauc_ndcg_at_1_diff1 value: 70.33588000237124 - type: nauc_ndcg_at_1_max value: 16.9043692756742 - type: nauc_ndcg_at_1_std value: -31.522857872957193 - type: nauc_ndcg_at_20_diff1 value: 44.77175886871614 - type: nauc_ndcg_at_20_max value: 18.518760585752798 - type: nauc_ndcg_at_20_std value: -17.65466327111102 - type: nauc_ndcg_at_3_diff1 value: 44.58868211138186 - type: nauc_ndcg_at_3_max value: 17.45341631980873 - type: nauc_ndcg_at_3_std value: -20.229056146112327 - type: nauc_ndcg_at_5_diff1 value: 43.676377139846316 - type: nauc_ndcg_at_5_max value: 18.25586241672028 - type: nauc_ndcg_at_5_std value: -17.534188919457723 - type: nauc_precision_at_1000_diff1 value: -4.980071248368879 - type: nauc_precision_at_1000_max value: 2.4206787157448364 - type: nauc_precision_at_1000_std value: 0.5319803632816764 - type: nauc_precision_at_100_diff1 value: -7.200777182380218 - type: nauc_precision_at_100_max value: 6.740083180557893 - type: nauc_precision_at_100_std value: 0.9536087616853052 - type: nauc_precision_at_10_diff1 value: -4.289467533938273 - type: nauc_precision_at_10_max value: 10.211741066763434 - type: nauc_precision_at_10_std value: -4.662371526181242 - type: nauc_precision_at_1_diff1 value: 70.33588000237124 - type: nauc_precision_at_1_max value: 16.9043692756742 - type: nauc_precision_at_1_std value: -31.522857872957193 - type: nauc_precision_at_20_diff1 value: -5.030769074291992 - type: nauc_precision_at_20_max value: 8.797987398213115 - type: nauc_precision_at_20_std value: -1.9605490727783594 - type: nauc_precision_at_3_diff1 value: 13.216958149697206 - type: nauc_precision_at_3_max value: 15.772686705906544 - type: nauc_precision_at_3_std value: -18.61391770138856 - type: nauc_precision_at_5_diff1 value: 1.228153404561234 - type: nauc_precision_at_5_max value: 13.481906339974865 - type: nauc_precision_at_5_std value: -6.9325392271540345 - type: nauc_recall_at_1000_diff1 value: 0.8786936512021057 - type: nauc_recall_at_1000_max value: 26.20876021169295 - type: nauc_recall_at_1000_std value: 30.71758149247617 - type: nauc_recall_at_100_diff1 value: 1.2080920456198185 - type: nauc_recall_at_100_max value: 26.3879323418646 - type: nauc_recall_at_100_std value: 16.80701517543882 - type: nauc_recall_at_10_diff1 value: 11.013642598590218 - type: nauc_recall_at_10_max value: 21.180208536877522 - type: nauc_recall_at_10_std value: 1.2484916503530337 - type: nauc_recall_at_1_diff1 value: 50.44398867726515 - type: nauc_recall_at_1_max value: 13.24078069953482 - type: nauc_recall_at_1_std value: -19.210048245062886 - type: nauc_recall_at_20_diff1 value: 8.886530767181634 - type: nauc_recall_at_20_max value: 22.450424455938666 - type: nauc_recall_at_20_std value: 7.716990193814734 - type: nauc_recall_at_3_diff1 value: 22.673393155116656 - type: nauc_recall_at_3_max value: 17.303096270043312 - type: nauc_recall_at_3_std value: -11.388552495036427 - type: nauc_recall_at_5_diff1 value: 14.941514606639128 - type: nauc_recall_at_5_max value: 20.543868518133763 - type: nauc_recall_at_5_std value: -0.6872973655386441 - type: ndcg_at_1 value: 86.949 - type: ndcg_at_10 value: 90.275 - type: ndcg_at_100 value: 90.82900000000001 - type: ndcg_at_1000 value: 91.078 - type: ndcg_at_20 value: 90.537 - type: ndcg_at_3 value: 89.11 - type: ndcg_at_5 value: 89.812 - type: precision_at_1 value: 86.949 - type: precision_at_10 value: 10.539 - type: precision_at_100 value: 1.105 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_20 value: 5.367999999999999 - type: precision_at_3 value: 33.418 - type: precision_at_5 value: 20.594 - type: recall_at_1 value: 80.721 - type: recall_at_10 value: 94.918 - type: recall_at_100 value: 96.935 - type: recall_at_1000 value: 98.436 - type: recall_at_20 value: 95.747 - type: recall_at_3 value: 91.718 - type: recall_at_5 value: 93.56400000000001 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: main_score value: 56.330000000000005 - type: map_at_1 value: 28.515 - type: map_at_10 value: 48.025 - type: map_at_100 value: 50.12799999999999 - type: map_at_1000 value: 50.24999999999999 - type: map_at_20 value: 49.18 - type: map_at_3 value: 42.034 - type: map_at_5 value: 45.6 - type: mrr_at_1 value: 55.24691358024691 - type: mrr_at_10 value: 63.68980256711737 - type: mrr_at_100 value: 64.24731331137605 - type: mrr_at_1000 value: 64.26573357425247 - type: mrr_at_20 value: 63.99195099496453 - type: mrr_at_3 value: 61.59979423868312 - type: mrr_at_5 value: 62.926954732510254 - type: nauc_map_at_1000_diff1 value: 49.040013260249495 - type: nauc_map_at_1000_max value: 24.802876038045156 - type: nauc_map_at_1000_std value: -9.376537757150528 - type: nauc_map_at_100_diff1 value: 49.029024479160654 - type: nauc_map_at_100_max value: 24.731164716292035 - type: nauc_map_at_100_std value: -9.387893635217946 - type: nauc_map_at_10_diff1 value: 48.91153420990774 - type: nauc_map_at_10_max value: 23.27580721149473 - type: nauc_map_at_10_std value: -10.564031051673071 - type: nauc_map_at_1_diff1 value: 53.587940279417666 - type: nauc_map_at_1_max value: 15.611943280419307 - type: nauc_map_at_1_std value: -12.267333881231597 - type: nauc_map_at_20_diff1 value: 48.85162598138413 - type: nauc_map_at_20_max value: 24.17245534349571 - type: nauc_map_at_20_std value: -9.697612075654563 - type: nauc_map_at_3_diff1 value: 49.497063075399254 - type: nauc_map_at_3_max value: 19.347836327990144 - type: nauc_map_at_3_std value: -11.754013093455304 - type: nauc_map_at_5_diff1 value: 49.56885152050869 - type: nauc_map_at_5_max value: 21.683090195847054 - type: nauc_map_at_5_std value: -12.359651486493577 - type: nauc_mrr_at_1000_diff1 value: 58.22478165676272 - type: nauc_mrr_at_1000_max value: 34.219439813531146 - type: nauc_mrr_at_1000_std value: -6.440674450457267 - type: nauc_mrr_at_100_diff1 value: 58.22250007958456 - type: nauc_mrr_at_100_max value: 34.22499443483518 - type: nauc_mrr_at_100_std value: -6.421819585924411 - type: nauc_mrr_at_10_diff1 value: 58.19624157440326 - type: nauc_mrr_at_10_max value: 34.30656859300808 - type: nauc_mrr_at_10_std value: -6.617472991048374 - type: nauc_mrr_at_1_diff1 value: 61.52279187661879 - type: nauc_mrr_at_1_max value: 33.384132430839486 - type: nauc_mrr_at_1_std value: -9.438207319997437 - type: nauc_mrr_at_20_diff1 value: 58.143667799506325 - type: nauc_mrr_at_20_max value: 34.28807288879312 - type: nauc_mrr_at_20_std value: -6.324570006615823 - type: nauc_mrr_at_3_diff1 value: 58.00157320010862 - type: nauc_mrr_at_3_max value: 34.43694937057336 - type: nauc_mrr_at_3_std value: -6.972347709999522 - type: nauc_mrr_at_5_diff1 value: 58.10855260603556 - type: nauc_mrr_at_5_max value: 34.18965630065092 - type: nauc_mrr_at_5_std value: -7.399200039009866 - type: nauc_ndcg_at_1000_diff1 value: 50.75464173351383 - type: nauc_ndcg_at_1000_max value: 29.362088935935933 - type: nauc_ndcg_at_1000_std value: -5.366292073342733 - type: nauc_ndcg_at_100_diff1 value: 50.605586281429595 - type: nauc_ndcg_at_100_max value: 28.699532558361295 - type: nauc_ndcg_at_100_std value: -5.169194404036768 - type: nauc_ndcg_at_10_diff1 value: 49.98728438134757 - type: nauc_ndcg_at_10_max value: 26.646204536505568 - type: nauc_ndcg_at_10_std value: -8.109618785582915 - type: nauc_ndcg_at_1_diff1 value: 61.52279187661879 - type: nauc_ndcg_at_1_max value: 33.384132430839486 - type: nauc_ndcg_at_1_std value: -9.438207319997437 - type: nauc_ndcg_at_20_diff1 value: 49.8141337873794 - type: nauc_ndcg_at_20_max value: 27.842850955625376 - type: nauc_ndcg_at_20_std value: -5.976165863414487 - type: nauc_ndcg_at_3_diff1 value: 49.11396652814998 - type: nauc_ndcg_at_3_max value: 27.967139302963663 - type: nauc_ndcg_at_3_std value: -8.89915627933036 - type: nauc_ndcg_at_5_diff1 value: 50.093484046883404 - type: nauc_ndcg_at_5_max value: 26.156066061187524 - type: nauc_ndcg_at_5_std value: -11.095816956480336 - type: nauc_precision_at_1000_diff1 value: -9.270311947050661 - type: nauc_precision_at_1000_max value: 23.04482327672264 - type: nauc_precision_at_1000_std value: 14.972627298920138 - type: nauc_precision_at_100_diff1 value: -4.676390958277394 - type: nauc_precision_at_100_max value: 25.0896059423525 - type: nauc_precision_at_100_std value: 15.33272070938812 - type: nauc_precision_at_10_diff1 value: 8.905479014103273 - type: nauc_precision_at_10_max value: 28.56627287604059 - type: nauc_precision_at_10_std value: 8.253485713332454 - type: nauc_precision_at_1_diff1 value: 61.52279187661879 - type: nauc_precision_at_1_max value: 33.384132430839486 - type: nauc_precision_at_1_std value: -9.438207319997437 - type: nauc_precision_at_20_diff1 value: 3.3003066155025165 - type: nauc_precision_at_20_max value: 27.93594493211361 - type: nauc_precision_at_20_std value: 11.783159709527421 - type: nauc_precision_at_3_diff1 value: 25.149615983427537 - type: nauc_precision_at_3_max value: 28.04821486791947 - type: nauc_precision_at_3_std value: -1.2324117013013483 - type: nauc_precision_at_5_diff1 value: 17.374857830665306 - type: nauc_precision_at_5_max value: 27.82986152171657 - type: nauc_precision_at_5_std value: -0.7158790594400065 - type: nauc_recall_at_1000_diff1 value: 28.955731733389488 - type: nauc_recall_at_1000_max value: 15.571928427636642 - type: nauc_recall_at_1000_std value: 32.40735854507484 - type: nauc_recall_at_100_diff1 value: 36.90294611236181 - type: nauc_recall_at_100_max value: 18.018834042261187 - type: nauc_recall_at_100_std value: 11.710132835876887 - type: nauc_recall_at_10_diff1 value: 39.88756672042866 - type: nauc_recall_at_10_max value: 20.589897373897337 - type: nauc_recall_at_10_std value: -5.7047632365411385 - type: nauc_recall_at_1_diff1 value: 53.587940279417666 - type: nauc_recall_at_1_max value: 15.611943280419307 - type: nauc_recall_at_1_std value: -12.267333881231597 - type: nauc_recall_at_20_diff1 value: 37.70258582556698 - type: nauc_recall_at_20_max value: 22.673384060544873 - type: nauc_recall_at_20_std value: 2.6968199642576827 - type: nauc_recall_at_3_diff1 value: 42.72692103833461 - type: nauc_recall_at_3_max value: 16.556949642584353 - type: nauc_recall_at_3_std value: -10.532484120188565 - type: nauc_recall_at_5_diff1 value: 42.37410230009641 - type: nauc_recall_at_5_max value: 17.965335809420804 - type: nauc_recall_at_5_std value: -13.061585820037388 - type: ndcg_at_1 value: 55.247 - type: ndcg_at_10 value: 56.330000000000005 - type: ndcg_at_100 value: 62.709 - type: ndcg_at_1000 value: 64.39099999999999 - type: ndcg_at_20 value: 58.713 - type: ndcg_at_3 value: 52.139 - type: ndcg_at_5 value: 53.81 - type: precision_at_1 value: 55.247 - type: precision_at_10 value: 15.525 - type: precision_at_100 value: 2.242 - type: precision_at_1000 value: 0.254 - type: precision_at_20 value: 8.896999999999998 - type: precision_at_3 value: 34.928 - type: precision_at_5 value: 25.772000000000002 - type: recall_at_1 value: 28.515 - type: recall_at_10 value: 63.539 - type: recall_at_100 value: 86.69200000000001 - type: recall_at_1000 value: 96.52 - type: recall_at_20 value: 70.56 - type: recall_at_3 value: 47.56 - type: recall_at_5 value: 55.337 - task: type: Retrieval dataset: name: MTEB HotpotQA type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: main_score value: 75.685 - type: map_at_1 value: 42.539 - type: map_at_10 value: 67.308 - type: map_at_100 value: 68.13900000000001 - type: map_at_1000 value: 68.188 - type: map_at_20 value: 67.831 - type: map_at_3 value: 63.49 - type: map_at_5 value: 66.011 - type: mrr_at_1 value: 85.0776502363268 - type: mrr_at_10 value: 89.88030395592838 - type: mrr_at_100 value: 89.96760055420727 - type: mrr_at_1000 value: 89.97012453666811 - type: mrr_at_20 value: 89.93827441471682 - type: mrr_at_3 value: 89.2437542201214 - type: mrr_at_5 value: 89.69412559081678 - type: nauc_map_at_1000_diff1 value: 6.9961771195427 - type: nauc_map_at_1000_max value: 22.9570925064648 - type: nauc_map_at_1000_std value: -1.5458624569883217 - type: nauc_map_at_100_diff1 value: 6.949902900492461 - type: nauc_map_at_100_max value: 22.934603899256967 - type: nauc_map_at_100_std value: -1.5269182504793082 - type: nauc_map_at_10_diff1 value: 6.838435260011044 - type: nauc_map_at_10_max value: 22.538659032323334 - type: nauc_map_at_10_std value: -2.311077884054465 - type: nauc_map_at_1_diff1 value: 75.21290774225054 - type: nauc_map_at_1_max value: 45.48693992183137 - type: nauc_map_at_1_std value: -19.2367213399715 - type: nauc_map_at_20_diff1 value: 6.805356249993981 - type: nauc_map_at_20_max value: 22.83058596653147 - type: nauc_map_at_20_std value: -1.6384957800821724 - type: nauc_map_at_3_diff1 value: 8.286114142646966 - type: nauc_map_at_3_max value: 21.62078046451462 - type: nauc_map_at_3_std value: -4.9588683683869785 - type: nauc_map_at_5_diff1 value: 6.991105807878547 - type: nauc_map_at_5_max value: 22.151155325913845 - type: nauc_map_at_5_std value: -3.5482395810019414 - type: nauc_mrr_at_1000_diff1 value: 75.13590328157673 - type: nauc_mrr_at_1000_max value: 48.89884493733756 - type: nauc_mrr_at_1000_std value: -17.474849088345444 - type: nauc_mrr_at_100_diff1 value: 75.13672870634296 - type: nauc_mrr_at_100_max value: 48.90749248480418 - type: nauc_mrr_at_100_std value: -17.468196322533082 - type: nauc_mrr_at_10_diff1 value: 75.14728903978204 - type: nauc_mrr_at_10_max value: 48.97606572231439 - type: nauc_mrr_at_10_std value: -17.457311806563517 - type: nauc_mrr_at_1_diff1 value: 75.21290774225054 - type: nauc_mrr_at_1_max value: 45.48693992183137 - type: nauc_mrr_at_1_std value: -19.2367213399715 - type: nauc_mrr_at_20_diff1 value: 75.14959983420921 - type: nauc_mrr_at_20_max value: 48.91223464788833 - type: nauc_mrr_at_20_std value: -17.509615177596416 - type: nauc_mrr_at_3_diff1 value: 74.93986877753086 - type: nauc_mrr_at_3_max value: 49.164844648240376 - type: nauc_mrr_at_3_std value: -18.059088139032735 - type: nauc_mrr_at_5_diff1 value: 75.19916252823927 - type: nauc_mrr_at_5_max value: 49.29976353391373 - type: nauc_mrr_at_5_std value: -17.333971153969397 - type: nauc_ndcg_at_1000_diff1 value: 13.40538465528209 - type: nauc_ndcg_at_1000_max value: 27.881635601892285 - type: nauc_ndcg_at_1000_std value: 0.8178258561298533 - type: nauc_ndcg_at_100_diff1 value: 11.966858211252239 - type: nauc_ndcg_at_100_max value: 27.377574833076135 - type: nauc_ndcg_at_100_std value: 1.6515554599007498 - type: nauc_ndcg_at_10_diff1 value: 11.150845083280156 - type: nauc_ndcg_at_10_max value: 25.803280700323576 - type: nauc_ndcg_at_10_std value: -1.0065871529103982 - type: nauc_ndcg_at_1_diff1 value: 75.21290774225054 - type: nauc_ndcg_at_1_max value: 45.48693992183137 - type: nauc_ndcg_at_1_std value: -19.2367213399715 - type: nauc_ndcg_at_20_diff1 value: 11.018576645234422 - type: nauc_ndcg_at_20_max value: 26.62929478495291 - type: nauc_ndcg_at_20_std value: 0.9257748198539285 - type: nauc_ndcg_at_3_diff1 value: 13.834887466881474 - type: nauc_ndcg_at_3_max value: 24.708855813470244 - type: nauc_ndcg_at_3_std value: -5.574281265029321 - type: nauc_ndcg_at_5_diff1 value: 11.645117370406775 - type: nauc_ndcg_at_5_max value: 25.251378089856114 - type: nauc_ndcg_at_5_std value: -3.3235793710523542 - type: nauc_precision_at_1000_diff1 value: -15.278565171879166 - type: nauc_precision_at_1000_max value: 35.36959533318545 - type: nauc_precision_at_1000_std value: 39.53956860779969 - type: nauc_precision_at_100_diff1 value: -15.866024292154146 - type: nauc_precision_at_100_max value: 26.0540809940565 - type: nauc_precision_at_100_std value: 29.458932963531588 - type: nauc_precision_at_10_diff1 value: -8.887744228134736 - type: nauc_precision_at_10_max value: 20.281508980645114 - type: nauc_precision_at_10_std value: 9.44054788498503 - type: nauc_precision_at_1_diff1 value: 75.21290774225054 - type: nauc_precision_at_1_max value: 45.48693992183137 - type: nauc_precision_at_1_std value: -19.2367213399715 - type: nauc_precision_at_20_diff1 value: -12.365550499602511 - type: nauc_precision_at_20_max value: 22.51674191316991 - type: nauc_precision_at_20_std value: 17.85545013302992 - type: nauc_precision_at_3_diff1 value: 0.4610061075289727 - type: nauc_precision_at_3_max value: 20.355559229508017 - type: nauc_precision_at_3_std value: -1.8950368204664811 - type: nauc_precision_at_5_diff1 value: -5.203212766693584 - type: nauc_precision_at_5_max value: 20.197292283256754 - type: nauc_precision_at_5_std value: 2.7834110269807733 - type: nauc_recall_at_1000_diff1 value: -15.27856517187918 - type: nauc_recall_at_1000_max value: 35.36959533318564 - type: nauc_recall_at_1000_std value: 39.53956860779948 - type: nauc_recall_at_100_diff1 value: -15.86602429215417 - type: nauc_recall_at_100_max value: 26.054080994056495 - type: nauc_recall_at_100_std value: 29.45893296353165 - type: nauc_recall_at_10_diff1 value: -8.887744228134533 - type: nauc_recall_at_10_max value: 20.281508980645132 - type: nauc_recall_at_10_std value: 9.440547884985136 - type: nauc_recall_at_1_diff1 value: 75.21290774225054 - type: nauc_recall_at_1_max value: 45.48693992183137 - type: nauc_recall_at_1_std value: -19.2367213399715 - type: nauc_recall_at_20_diff1 value: -12.365550499602412 - type: nauc_recall_at_20_max value: 22.516741913169824 - type: nauc_recall_at_20_std value: 17.85545013302977 - type: nauc_recall_at_3_diff1 value: 0.461006107528911 - type: nauc_recall_at_3_max value: 20.355559229507904 - type: nauc_recall_at_3_std value: -1.8950368204665768 - type: nauc_recall_at_5_diff1 value: -5.203212766693609 - type: nauc_recall_at_5_max value: 20.197292283256754 - type: nauc_recall_at_5_std value: 2.7834110269807857 - type: ndcg_at_1 value: 85.078 - type: ndcg_at_10 value: 75.685 - type: ndcg_at_100 value: 78.321 - type: ndcg_at_1000 value: 79.226 - type: ndcg_at_20 value: 76.89099999999999 - type: ndcg_at_3 value: 70.621 - type: ndcg_at_5 value: 73.64 - type: precision_at_1 value: 85.078 - type: precision_at_10 value: 15.762 - type: precision_at_100 value: 1.779 - type: precision_at_1000 value: 0.19 - type: precision_at_20 value: 8.266 - type: precision_at_3 value: 45.163 - type: precision_at_5 value: 29.476999999999997 - type: recall_at_1 value: 42.539 - type: recall_at_10 value: 78.812 - type: recall_at_100 value: 88.947 - type: recall_at_1000 value: 94.902 - type: recall_at_20 value: 82.66 - type: recall_at_3 value: 67.745 - type: recall_at_5 value: 73.693 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 96.4236 - type: ap value: 94.7476765500497 - type: ap_weighted value: 94.7476765500497 - type: f1 value: 96.42282467294943 - type: f1_weighted value: 96.42282467294943 - type: main_score value: 96.4236 - task: type: Retrieval dataset: name: MTEB MSMARCO type: mteb/msmarco config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: main_score value: 39.096 - type: map_at_1 value: 18.093 - type: map_at_10 value: 31.336000000000002 - type: map_at_100 value: 32.646 - type: map_at_1000 value: 32.689 - type: map_at_20 value: 32.177 - type: map_at_3 value: 26.865 - type: map_at_5 value: 29.387 - type: mrr_at_1 value: 18.595988538681947 - type: mrr_at_10 value: 31.883635784781777 - type: mrr_at_100 value: 33.114642504493105 - type: mrr_at_1000 value: 33.15236434886478 - type: mrr_at_20 value: 32.677353366745386 - type: mrr_at_3 value: 27.499999999999748 - type: mrr_at_5 value: 29.988538681948324 - type: nauc_map_at_1000_diff1 value: 26.422595414287752 - type: nauc_map_at_1000_max value: -2.088410151340385 - type: nauc_map_at_1000_std value: -16.696230361721536 - type: nauc_map_at_100_diff1 value: 26.42500720558264 - type: nauc_map_at_100_max value: -2.1023099232080336 - type: nauc_map_at_100_std value: -16.66575149689904 - type: nauc_map_at_10_diff1 value: 26.459404478808775 - type: nauc_map_at_10_max value: -2.269901449601564 - type: nauc_map_at_10_std value: -17.455458022751035 - type: nauc_map_at_1_diff1 value: 28.54432241866563 - type: nauc_map_at_1_max value: -0.40791474629986774 - type: nauc_map_at_1_std value: -14.404864547978569 - type: nauc_map_at_20_diff1 value: 26.38676225786991 - type: nauc_map_at_20_max value: -2.1891600295638067 - type: nauc_map_at_20_std value: -16.959644101605377 - type: nauc_map_at_3_diff1 value: 26.23330227912876 - type: nauc_map_at_3_max value: -1.8164148507831337 - type: nauc_map_at_3_std value: -16.99491855375184 - type: nauc_map_at_5_diff1 value: 26.325175925374637 - type: nauc_map_at_5_max value: -2.0180910030934056 - type: nauc_map_at_5_std value: -17.45283277022505 - type: nauc_mrr_at_1000_diff1 value: 26.169050161755454 - type: nauc_mrr_at_1000_max value: -1.8391740603373914 - type: nauc_mrr_at_1000_std value: -16.43861685620324 - type: nauc_mrr_at_100_diff1 value: 26.17221780652128 - type: nauc_mrr_at_100_max value: -1.8498065915909387 - type: nauc_mrr_at_100_std value: -16.409831493692746 - type: nauc_mrr_at_10_diff1 value: 26.189399153570548 - type: nauc_mrr_at_10_max value: -1.9764469588029125 - type: nauc_mrr_at_10_std value: -17.145818272121605 - type: nauc_mrr_at_1_diff1 value: 28.22126171647418 - type: nauc_mrr_at_1_max value: -0.11857961224466163 - type: nauc_mrr_at_1_std value: -14.05918102647804 - type: nauc_mrr_at_20_diff1 value: 26.14305738353977 - type: nauc_mrr_at_20_max value: -1.9124852659396923 - type: nauc_mrr_at_20_std value: -16.666262236151226 - type: nauc_mrr_at_3_diff1 value: 25.875828530480092 - type: nauc_mrr_at_3_max value: -1.6026086125908872 - type: nauc_mrr_at_3_std value: -16.71173696808467 - type: nauc_mrr_at_5_diff1 value: 26.08730765049035 - type: nauc_mrr_at_5_max value: -1.7222664412599995 - type: nauc_mrr_at_5_std value: -17.147775822889788 - type: nauc_ndcg_at_1000_diff1 value: 26.080706702114743 - type: nauc_ndcg_at_1000_max value: -2.0971743668729084 - type: nauc_ndcg_at_1000_std value: -15.7708671585612 - type: nauc_ndcg_at_100_diff1 value: 26.17654865824926 - type: nauc_ndcg_at_100_max value: -2.387244090421558 - type: nauc_ndcg_at_100_std value: -14.647668629565816 - type: nauc_ndcg_at_10_diff1 value: 26.079442516936325 - type: nauc_ndcg_at_10_max value: -3.1545015518425035 - type: nauc_ndcg_at_10_std value: -18.444956406266947 - type: nauc_ndcg_at_1_diff1 value: 28.22126171647418 - type: nauc_ndcg_at_1_max value: -0.11857961224466163 - type: nauc_ndcg_at_1_std value: -14.05918102647804 - type: nauc_ndcg_at_20_diff1 value: 25.82008991785661 - type: nauc_ndcg_at_20_max value: -2.9583771179238614 - type: nauc_ndcg_at_20_std value: -16.693055164836963 - type: nauc_ndcg_at_3_diff1 value: 25.5710298650636 - type: nauc_ndcg_at_3_max value: -2.218224936981852 - type: nauc_ndcg_at_3_std value: -17.694121753232615 - type: nauc_ndcg_at_5_diff1 value: 25.771066639196416 - type: nauc_ndcg_at_5_max value: -2.5332524565573666 - type: nauc_ndcg_at_5_std value: -18.481381062423043 - type: nauc_precision_at_1000_diff1 value: -2.463633213526286 - type: nauc_precision_at_1000_max value: 14.6662952419131 - type: nauc_precision_at_1000_std value: 10.633618922732419 - type: nauc_precision_at_100_diff1 value: 12.829443660572027 - type: nauc_precision_at_100_max value: 4.533516248969494 - type: nauc_precision_at_100_std value: 15.753867134166018 - type: nauc_precision_at_10_diff1 value: 23.14452771198422 - type: nauc_precision_at_10_max value: -4.889938548928317 - type: nauc_precision_at_10_std value: -19.47135474946216 - type: nauc_precision_at_1_diff1 value: 28.22126171647418 - type: nauc_precision_at_1_max value: -0.11857961224466163 - type: nauc_precision_at_1_std value: -14.05918102647804 - type: nauc_precision_at_20_diff1 value: 19.617469162922806 - type: nauc_precision_at_20_max value: -3.369261237383013 - type: nauc_precision_at_20_std value: -10.440733098930027 - type: nauc_precision_at_3_diff1 value: 23.562821287356147 - type: nauc_precision_at_3_max value: -3.050696929026444 - type: nauc_precision_at_3_std value: -19.256168898117743 - type: nauc_precision_at_5_diff1 value: 23.59237070693645 - type: nauc_precision_at_5_max value: -3.391495817446261 - type: nauc_precision_at_5_std value: -20.431384367763556 - type: nauc_recall_at_1000_diff1 value: 17.321277809623652 - type: nauc_recall_at_1000_max value: 28.35805826926937 - type: nauc_recall_at_1000_std value: 73.86793130411475 - type: nauc_recall_at_100_diff1 value: 26.886950291153394 - type: nauc_recall_at_100_max value: -4.561316272010665 - type: nauc_recall_at_100_std value: 20.563905398924636 - type: nauc_recall_at_10_diff1 value: 25.028406909428547 - type: nauc_recall_at_10_max value: -6.379843964294479 - type: nauc_recall_at_10_std value: -21.407672616024666 - type: nauc_recall_at_1_diff1 value: 28.54432241866563 - type: nauc_recall_at_1_max value: -0.40791474629986774 - type: nauc_recall_at_1_std value: -14.404864547978569 - type: nauc_recall_at_20_diff1 value: 23.501471852525228 - type: nauc_recall_at_20_max value: -6.707662803744487 - type: nauc_recall_at_20_std value: -13.994466479286649 - type: nauc_recall_at_3_diff1 value: 24.005389823573537 - type: nauc_recall_at_3_max value: -3.3942514176696026 - type: nauc_recall_at_3_std value: -19.525956754173976 - type: nauc_recall_at_5_diff1 value: 24.356739198783767 - type: nauc_recall_at_5_max value: -4.1454695177252034 - type: nauc_recall_at_5_std value: -21.2881986104369 - type: ndcg_at_1 value: 18.596 - type: ndcg_at_10 value: 39.096 - type: ndcg_at_100 value: 45.255 - type: ndcg_at_1000 value: 46.285 - type: ndcg_at_20 value: 42.05 - type: ndcg_at_3 value: 29.974 - type: ndcg_at_5 value: 34.475 - type: precision_at_1 value: 18.596 - type: precision_at_10 value: 6.617000000000001 - type: precision_at_100 value: 0.967 - type: precision_at_1000 value: 0.106 - type: precision_at_20 value: 3.923 - type: precision_at_3 value: 13.276 - type: precision_at_5 value: 10.255 - type: recall_at_1 value: 18.093 - type: recall_at_10 value: 63.19200000000001 - type: recall_at_100 value: 91.418 - type: recall_at_1000 value: 99.177 - type: recall_at_20 value: 74.619 - type: recall_at_3 value: 38.346000000000004 - type: recall_at_5 value: 49.156 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 99.02188782489742 - type: f1 value: 98.91843101772031 - type: f1_weighted value: 99.02275333864246 - type: main_score value: 99.02188782489742 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 91.1217510259918 - type: f1 value: 70.49499563988088 - type: f1_weighted value: 91.23538081145682 - type: main_score value: 91.1217510259918 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 81.90652320107598 - type: f1 value: 79.93778330619065 - type: f1_weighted value: 81.11001189722018 - type: main_score value: 81.90652320107598 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 89.21318090114325 - type: f1 value: 88.09390677800496 - type: f1_weighted value: 88.79037980610785 - type: main_score value: 89.21318090114325 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: main_score value: 40.97987057589157 - type: v_measure value: 40.97987057589157 - type: v_measure_std value: 0.9595500801375094 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: main_score value: 39.164547725954996 - type: v_measure value: 39.164547725954996 - type: v_measure_std value: 1.2824642026478994 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7 metrics: - type: main_score value: 32.151918733303695 - type: map value: 32.151918733303695 - type: mrr value: 33.436589720422084 - type: nAUC_map_diff1 value: 11.16356032762711 - type: nAUC_map_max value: -17.051714062653385 - type: nAUC_map_std value: 3.6166597896247756 - type: nAUC_mrr_diff1 value: 10.835983194949183 - type: nAUC_mrr_max value: -11.557478363717925 - type: nAUC_mrr_std value: 4.985178033763766 - task: type: Retrieval dataset: name: MTEB NFCorpus type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: main_score value: 40.336 - type: map_at_1 value: 7.034999999999999 - type: map_at_10 value: 15.671 - type: map_at_100 value: 20.22 - type: map_at_1000 value: 21.837999999999997 - type: map_at_20 value: 17.502000000000002 - type: map_at_3 value: 11.651 - type: map_at_5 value: 13.563 - type: mrr_at_1 value: 52.63157894736842 - type: mrr_at_10 value: 61.826871099316925 - type: mrr_at_100 value: 62.229063214736655 - type: mrr_at_1000 value: 62.25978745222259 - type: mrr_at_20 value: 62.017132302551516 - type: mrr_at_3 value: 60.06191950464399 - type: mrr_at_5 value: 61.33126934984522 - type: nauc_map_at_1000_diff1 value: 21.648670926546952 - type: nauc_map_at_1000_max value: 23.885175390129906 - type: nauc_map_at_1000_std value: 12.718321166322925 - type: nauc_map_at_100_diff1 value: 23.229468625665575 - type: nauc_map_at_100_max value: 23.468901411991062 - type: nauc_map_at_100_std value: 9.706516076964897 - type: nauc_map_at_10_diff1 value: 29.57657594345363 - type: nauc_map_at_10_max value: 17.963452106982118 - type: nauc_map_at_10_std value: -2.5294451126594124 - type: nauc_map_at_1_diff1 value: 48.350174879836096 - type: nauc_map_at_1_max value: 3.946334372368094 - type: nauc_map_at_1_std value: -17.849285033341584 - type: nauc_map_at_20_diff1 value: 26.65441763179981 - type: nauc_map_at_20_max value: 21.02777654571385 - type: nauc_map_at_20_std value: 2.7894705407486047 - type: nauc_map_at_3_diff1 value: 38.41834300826446 - type: nauc_map_at_3_max value: 10.74526856759504 - type: nauc_map_at_3_std value: -10.731985683904883 - type: nauc_map_at_5_diff1 value: 33.892749747271516 - type: nauc_map_at_5_max value: 14.045987583774153 - type: nauc_map_at_5_std value: -8.062293157967538 - type: nauc_mrr_at_1000_diff1 value: 30.52776986334351 - type: nauc_mrr_at_1000_max value: 40.151271324267725 - type: nauc_mrr_at_1000_std value: 26.692949936707038 - type: nauc_mrr_at_100_diff1 value: 30.541627760540933 - type: nauc_mrr_at_100_max value: 40.177283472965364 - type: nauc_mrr_at_100_std value: 26.732950728007122 - type: nauc_mrr_at_10_diff1 value: 30.68229234674581 - type: nauc_mrr_at_10_max value: 39.975164420469234 - type: nauc_mrr_at_10_std value: 26.499033999722098 - type: nauc_mrr_at_1_diff1 value: 30.760566388139708 - type: nauc_mrr_at_1_max value: 35.02398717712965 - type: nauc_mrr_at_1_std value: 19.679504342414695 - type: nauc_mrr_at_20_diff1 value: 30.56841620886074 - type: nauc_mrr_at_20_max value: 40.226142456190956 - type: nauc_mrr_at_20_std value: 26.730669827048477 - type: nauc_mrr_at_3_diff1 value: 31.106510163929784 - type: nauc_mrr_at_3_max value: 38.96643476207935 - type: nauc_mrr_at_3_std value: 25.21933048360791 - type: nauc_mrr_at_5_diff1 value: 30.831207752570815 - type: nauc_mrr_at_5_max value: 39.90213179154124 - type: nauc_mrr_at_5_std value: 25.898244714250108 - type: nauc_ndcg_at_1000_diff1 value: 19.96523616472766 - type: nauc_ndcg_at_1000_max value: 40.102450563469354 - type: nauc_ndcg_at_1000_std value: 31.780695178031092 - type: nauc_ndcg_at_100_diff1 value: 18.29141350584988 - type: nauc_ndcg_at_100_max value: 33.217946395720304 - type: nauc_ndcg_at_100_std value: 25.91953793041382 - type: nauc_ndcg_at_10_diff1 value: 14.50319706154019 - type: nauc_ndcg_at_10_max value: 31.77258320465841 - type: nauc_ndcg_at_10_std value: 23.612459300338852 - type: nauc_ndcg_at_1_diff1 value: 32.67478198272269 - type: nauc_ndcg_at_1_max value: 32.480893992251225 - type: nauc_ndcg_at_1_std value: 18.084577950595566 - type: nauc_ndcg_at_20_diff1 value: 15.262035505807503 - type: nauc_ndcg_at_20_max value: 31.342019352990913 - type: nauc_ndcg_at_20_std value: 24.197954778000465 - type: nauc_ndcg_at_3_diff1 value: 22.26310249149197 - type: nauc_ndcg_at_3_max value: 34.39233038678674 - type: nauc_ndcg_at_3_std value: 22.298962644610917 - type: nauc_ndcg_at_5_diff1 value: 16.20505408271046 - type: nauc_ndcg_at_5_max value: 32.472046662862134 - type: nauc_ndcg_at_5_std value: 22.12440390937827 - type: nauc_precision_at_1000_diff1 value: -18.1992280418387 - type: nauc_precision_at_1000_max value: -0.14504027624401056 - type: nauc_precision_at_1000_std value: 28.728686840822586 - type: nauc_precision_at_100_diff1 value: -18.890472953162003 - type: nauc_precision_at_100_max value: 11.13535749454528 - type: nauc_precision_at_100_std value: 39.25554880828357 - type: nauc_precision_at_10_diff1 value: -9.714649324902075 - type: nauc_precision_at_10_max value: 30.344283615773975 - type: nauc_precision_at_10_std value: 35.49664478004321 - type: nauc_precision_at_1_diff1 value: 30.760566388139708 - type: nauc_precision_at_1_max value: 35.02398717712965 - type: nauc_precision_at_1_std value: 19.679504342414695 - type: nauc_precision_at_20_diff1 value: -13.200665933477627 - type: nauc_precision_at_20_max value: 25.1207959687035 - type: nauc_precision_at_20_std value: 38.85776906396036 - type: nauc_precision_at_3_diff1 value: 8.220730025668981 - type: nauc_precision_at_3_max value: 36.22034319762123 - type: nauc_precision_at_3_std value: 28.12392324478213 - type: nauc_precision_at_5_diff1 value: -3.6321638567344396 - type: nauc_precision_at_5_max value: 33.227196141105445 - type: nauc_precision_at_5_std value: 29.907501305320068 - type: nauc_recall_at_1000_diff1 value: 7.967300491712526 - type: nauc_recall_at_1000_max value: 19.980165183771206 - type: nauc_recall_at_1000_std value: 19.140830234036876 - type: nauc_recall_at_100_diff1 value: 11.141369781846388 - type: nauc_recall_at_100_max value: 18.951402610508083 - type: nauc_recall_at_100_std value: 14.738952156631067 - type: nauc_recall_at_10_diff1 value: 23.90292148597915 - type: nauc_recall_at_10_max value: 17.751156184761655 - type: nauc_recall_at_10_std value: -0.6705078411610252 - type: nauc_recall_at_1_diff1 value: 48.350174879836096 - type: nauc_recall_at_1_max value: 3.946334372368094 - type: nauc_recall_at_1_std value: -17.849285033341584 - type: nauc_recall_at_20_diff1 value: 19.943354062055192 - type: nauc_recall_at_20_max value: 21.177985765604276 - type: nauc_recall_at_20_std value: 5.320291087740789 - type: nauc_recall_at_3_diff1 value: 35.30089980248878 - type: nauc_recall_at_3_max value: 10.395146596807242 - type: nauc_recall_at_3_std value: -8.838602447204481 - type: nauc_recall_at_5_diff1 value: 28.72944376709497 - type: nauc_recall_at_5_max value: 13.550632758927897 - type: nauc_recall_at_5_std value: -6.775215741511598 - type: ndcg_at_1 value: 50.619 - type: ndcg_at_10 value: 40.336 - type: ndcg_at_100 value: 37.624 - type: ndcg_at_1000 value: 45.796 - type: ndcg_at_20 value: 37.869 - type: ndcg_at_3 value: 46.221000000000004 - type: ndcg_at_5 value: 44.201 - type: precision_at_1 value: 52.632 - type: precision_at_10 value: 29.720999999999997 - type: precision_at_100 value: 9.625 - type: precision_at_1000 value: 2.246 - type: precision_at_20 value: 22.152 - type: precision_at_3 value: 43.137 - type: precision_at_5 value: 38.39 - type: recall_at_1 value: 7.034999999999999 - type: recall_at_10 value: 19.538 - type: recall_at_100 value: 38.146 - type: recall_at_1000 value: 67.726 - type: recall_at_20 value: 24.014 - type: recall_at_3 value: 12.933 - type: recall_at_5 value: 15.966 - task: type: Retrieval dataset: name: MTEB NQ type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: main_score value: 57.973 - type: map_at_1 value: 31.965 - type: map_at_10 value: 49.254999999999995 - type: map_at_100 value: 50.214000000000006 - type: map_at_1000 value: 50.229 - type: map_at_20 value: 49.928 - type: map_at_3 value: 44.119 - type: map_at_5 value: 47.179 - type: mrr_at_1 value: 35.92120509849362 - type: mrr_at_10 value: 51.672563869116495 - type: mrr_at_100 value: 52.313374300588634 - type: mrr_at_1000 value: 52.32344018159938 - type: mrr_at_20 value: 52.133239169739575 - type: mrr_at_3 value: 47.528003089996 - type: mrr_at_5 value: 50.05117806102738 - type: nauc_map_at_1000_diff1 value: 29.824242384334525 - type: nauc_map_at_1000_max value: 13.995693176688448 - type: nauc_map_at_1000_std value: -8.373469172548125 - type: nauc_map_at_100_diff1 value: 29.824783577565455 - type: nauc_map_at_100_max value: 14.014232791380248 - type: nauc_map_at_100_std value: -8.354567512476217 - type: nauc_map_at_10_diff1 value: 29.69721621452462 - type: nauc_map_at_10_max value: 14.013773981967729 - type: nauc_map_at_10_std value: -8.811618763746901 - type: nauc_map_at_1_diff1 value: 33.067072402069165 - type: nauc_map_at_1_max value: 9.692535049814342 - type: nauc_map_at_1_std value: -9.214265514134015 - type: nauc_map_at_20_diff1 value: 29.773661699973868 - type: nauc_map_at_20_max value: 14.103991415921719 - type: nauc_map_at_20_std value: -8.399614830370254 - type: nauc_map_at_3_diff1 value: 29.604244382973494 - type: nauc_map_at_3_max value: 12.726985941576045 - type: nauc_map_at_3_std value: -9.174499958794579 - type: nauc_map_at_5_diff1 value: 29.526178054166003 - type: nauc_map_at_5_max value: 13.485623539224505 - type: nauc_map_at_5_std value: -9.332715715777457 - type: nauc_mrr_at_1000_diff1 value: 29.990159094273984 - type: nauc_mrr_at_1000_max value: 14.798553638662531 - type: nauc_mrr_at_1000_std value: -6.536835639249748 - type: nauc_mrr_at_100_diff1 value: 29.99096196473329 - type: nauc_mrr_at_100_max value: 14.813647125488611 - type: nauc_mrr_at_100_std value: -6.5207559360795795 - type: nauc_mrr_at_10_diff1 value: 29.866060653972433 - type: nauc_mrr_at_10_max value: 14.932057781270828 - type: nauc_mrr_at_10_std value: -6.7199900977246045 - type: nauc_mrr_at_1_diff1 value: 33.522585698879105 - type: nauc_mrr_at_1_max value: 11.03359132659101 - type: nauc_mrr_at_1_std value: -7.065729635829634 - type: nauc_mrr_at_20_diff1 value: 29.937849347501995 - type: nauc_mrr_at_20_max value: 14.933127757332631 - type: nauc_mrr_at_20_std value: -6.5101713991165076 - type: nauc_mrr_at_3_diff1 value: 29.709282383756214 - type: nauc_mrr_at_3_max value: 14.293242212683008 - type: nauc_mrr_at_3_std value: -6.766453971049555 - type: nauc_mrr_at_5_diff1 value: 29.583175805360916 - type: nauc_mrr_at_5_max value: 14.73037078089027 - type: nauc_mrr_at_5_std value: -6.997762632641283 - type: nauc_ndcg_at_1000_diff1 value: 29.490454126425163 - type: nauc_ndcg_at_1000_max value: 15.419019530679389 - type: nauc_ndcg_at_1000_std value: -7.0484017992481744 - type: nauc_ndcg_at_100_diff1 value: 29.505190293936156 - type: nauc_ndcg_at_100_max value: 15.918909416610521 - type: nauc_ndcg_at_100_std value: -6.49127219891478 - type: nauc_ndcg_at_10_diff1 value: 28.833840829896463 - type: nauc_ndcg_at_10_max value: 16.373438787446197 - type: nauc_ndcg_at_10_std value: -7.9778251252372705 - type: nauc_ndcg_at_1_diff1 value: 33.60872721647907 - type: nauc_ndcg_at_1_max value: 11.064299819969465 - type: nauc_ndcg_at_1_std value: -6.985252399557631 - type: nauc_ndcg_at_20_diff1 value: 29.081791504309212 - type: nauc_ndcg_at_20_max value: 16.69651954979618 - type: nauc_ndcg_at_20_std value: -6.5711475047802335 - type: nauc_ndcg_at_3_diff1 value: 28.539570911163125 - type: nauc_ndcg_at_3_max value: 14.010530248113884 - type: nauc_ndcg_at_3_std value: -8.637239917621597 - type: nauc_ndcg_at_5_diff1 value: 28.27028574171474 - type: nauc_ndcg_at_5_max value: 15.230131680757033 - type: nauc_ndcg_at_5_std value: -9.050189391014829 - type: nauc_precision_at_1000_diff1 value: -5.456910590056672 - type: nauc_precision_at_1000_max value: 4.4296686382162385 - type: nauc_precision_at_1000_std value: 11.973463805098273 - type: nauc_precision_at_100_diff1 value: -2.9950499706062947 - type: nauc_precision_at_100_max value: 8.727441061794135 - type: nauc_precision_at_100_std value: 14.536609028727096 - type: nauc_precision_at_10_diff1 value: 8.79516862926112 - type: nauc_precision_at_10_max value: 17.471695342611472 - type: nauc_precision_at_10_std value: 4.0438514865197925 - type: nauc_precision_at_1_diff1 value: 33.60872721647907 - type: nauc_precision_at_1_max value: 11.064299819969465 - type: nauc_precision_at_1_std value: -6.985252399557631 - type: nauc_precision_at_20_diff1 value: 3.6059314683741928 - type: nauc_precision_at_20_max value: 15.944255335307014 - type: nauc_precision_at_20_std value: 11.625863542424076 - type: nauc_precision_at_3_diff1 value: 20.204302583527532 - type: nauc_precision_at_3_max value: 16.332566250985476 - type: nauc_precision_at_3_std value: -3.4702610490043777 - type: nauc_precision_at_5_diff1 value: 14.594065643339766 - type: nauc_precision_at_5_max value: 17.26474710654306 - type: nauc_precision_at_5_std value: -2.7233890637924625 - type: nauc_recall_at_1000_diff1 value: 51.01005353923607 - type: nauc_recall_at_1000_max value: 95.9468807413851 - type: nauc_recall_at_1000_std value: 96.43516709723872 - type: nauc_recall_at_100_diff1 value: 30.992518657749013 - type: nauc_recall_at_100_max value: 56.25345462048048 - type: nauc_recall_at_100_std value: 41.3102757318071 - type: nauc_recall_at_10_diff1 value: 23.025777269325026 - type: nauc_recall_at_10_max value: 26.314920590981533 - type: nauc_recall_at_10_std value: -6.936581744358684 - type: nauc_recall_at_1_diff1 value: 33.067072402069165 - type: nauc_recall_at_1_max value: 9.692535049814342 - type: nauc_recall_at_1_std value: -9.214265514134015 - type: nauc_recall_at_20_diff1 value: 22.81995680991129 - type: nauc_recall_at_20_max value: 36.01028848554346 - type: nauc_recall_at_20_std value: 6.03249054323601 - type: nauc_recall_at_3_diff1 value: 24.538095713395393 - type: nauc_recall_at_3_max value: 15.820241399506815 - type: nauc_recall_at_3_std value: -9.133686977749287 - type: nauc_recall_at_5_diff1 value: 22.72999021746731 - type: nauc_recall_at_5_max value: 19.12645303427032 - type: nauc_recall_at_5_std value: -10.59744542818235 - type: ndcg_at_1 value: 35.892 - type: ndcg_at_10 value: 57.973 - type: ndcg_at_100 value: 61.663999999999994 - type: ndcg_at_1000 value: 61.986 - type: ndcg_at_20 value: 60.061 - type: ndcg_at_3 value: 48.463 - type: ndcg_at_5 value: 53.502 - type: precision_at_1 value: 35.892 - type: precision_at_10 value: 9.774 - type: precision_at_100 value: 1.185 - type: precision_at_1000 value: 0.121 - type: precision_at_20 value: 5.4 - type: precision_at_3 value: 22.402 - type: precision_at_5 value: 16.309 - type: recall_at_1 value: 31.965 - type: recall_at_10 value: 82.12899999999999 - type: recall_at_100 value: 97.506 - type: recall_at_1000 value: 99.84100000000001 - type: recall_at_20 value: 89.75 - type: recall_at_3 value: 57.554 - type: recall_at_5 value: 69.16799999999999 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: mteb/quora config: default split: test revision: e4e08e0b7dbe3c8700f0daef558ff32256715259 metrics: - type: main_score value: 89.631 - type: map_at_1 value: 71.873 - type: map_at_10 value: 86.1 - type: map_at_100 value: 86.722 - type: map_at_1000 value: 86.733 - type: map_at_20 value: 86.52499999999999 - type: map_at_3 value: 83.159 - type: map_at_5 value: 85.042 - type: mrr_at_1 value: 82.67999999999999 - type: mrr_at_10 value: 88.6650476190474 - type: mrr_at_100 value: 88.74403876192653 - type: mrr_at_1000 value: 88.74442902286796 - type: mrr_at_20 value: 88.7274053321547 - type: mrr_at_3 value: 87.76666666666645 - type: mrr_at_5 value: 88.41616666666637 - type: nauc_map_at_1000_diff1 value: 77.30191824401264 - type: nauc_map_at_1000_max value: 13.034154761866718 - type: nauc_map_at_1000_std value: -52.91261514794493 - type: nauc_map_at_100_diff1 value: 77.30371920479556 - type: nauc_map_at_100_max value: 12.988179399770516 - type: nauc_map_at_100_std value: -52.97052916307793 - type: nauc_map_at_10_diff1 value: 77.58060662262783 - type: nauc_map_at_10_max value: 12.225416731922186 - type: nauc_map_at_10_std value: -55.3613786070821 - type: nauc_map_at_1_diff1 value: 81.28542355395726 - type: nauc_map_at_1_max value: 9.394601627274016 - type: nauc_map_at_1_std value: -46.26696310796872 - type: nauc_map_at_20_diff1 value: 77.4264543798561 - type: nauc_map_at_20_max value: 12.640046637669617 - type: nauc_map_at_20_std value: -53.97773942738519 - type: nauc_map_at_3_diff1 value: 78.15821432444955 - type: nauc_map_at_3_max value: 10.15912724311145 - type: nauc_map_at_3_std value: -57.22864996907624 - type: nauc_map_at_5_diff1 value: 77.79975399887553 - type: nauc_map_at_5_max value: 11.71325789204571 - type: nauc_map_at_5_std value: -57.107677263258495 - type: nauc_mrr_at_1000_diff1 value: 77.45264852637524 - type: nauc_mrr_at_1000_max value: 15.52341959282284 - type: nauc_mrr_at_1000_std value: -48.64447896830792 - type: nauc_mrr_at_100_diff1 value: 77.45217072333344 - type: nauc_mrr_at_100_max value: 15.521827007218691 - type: nauc_mrr_at_100_std value: -48.646922241709994 - type: nauc_mrr_at_10_diff1 value: 77.43456749114439 - type: nauc_mrr_at_10_max value: 15.529836831176164 - type: nauc_mrr_at_10_std value: -48.75875392088208 - type: nauc_mrr_at_1_diff1 value: 78.54537995037919 - type: nauc_mrr_at_1_max value: 16.640560984015902 - type: nauc_mrr_at_1_std value: -45.28482868966014 - type: nauc_mrr_at_20_diff1 value: 77.4579261494039 - type: nauc_mrr_at_20_max value: 15.530406990945803 - type: nauc_mrr_at_20_std value: -48.677032167317236 - type: nauc_mrr_at_3_diff1 value: 77.20998980632015 - type: nauc_mrr_at_3_max value: 15.126679640441187 - type: nauc_mrr_at_3_std value: -49.743271509326284 - type: nauc_mrr_at_5_diff1 value: 77.31365975119465 - type: nauc_mrr_at_5_max value: 15.286704108033772 - type: nauc_mrr_at_5_std value: -49.258038230371994 - type: nauc_ndcg_at_1000_diff1 value: 76.99868569886195 - type: nauc_ndcg_at_1000_max value: 14.067546676855178 - type: nauc_ndcg_at_1000_std value: -50.79545103564982 - type: nauc_ndcg_at_100_diff1 value: 76.97431479230265 - type: nauc_ndcg_at_100_max value: 13.790203746757465 - type: nauc_ndcg_at_100_std value: -51.06792832759592 - type: nauc_ndcg_at_10_diff1 value: 77.1479433270543 - type: nauc_ndcg_at_10_max value: 12.973183509342773 - type: nauc_ndcg_at_10_std value: -54.71505928977531 - type: nauc_ndcg_at_1_diff1 value: 78.50656620759376 - type: nauc_ndcg_at_1_max value: 16.543901338375292 - type: nauc_ndcg_at_1_std value: -45.228060755270924 - type: nauc_ndcg_at_20_diff1 value: 77.16983455784539 - type: nauc_ndcg_at_20_max value: 13.315620423480794 - type: nauc_ndcg_at_20_std value: -53.02984622667913 - type: nauc_ndcg_at_3_diff1 value: 76.55713182168297 - type: nauc_ndcg_at_3_max value: 12.081676808245932 - type: nauc_ndcg_at_3_std value: -55.18222046959792 - type: nauc_ndcg_at_5_diff1 value: 76.9006202244737 - type: nauc_ndcg_at_5_max value: 12.90360775727033 - type: nauc_ndcg_at_5_std value: -55.99445333353582 - type: nauc_precision_at_1000_diff1 value: -45.31975944341808 - type: nauc_precision_at_1000_max value: 6.29027160882043 - type: nauc_precision_at_1000_std value: 45.38096248837178 - type: nauc_precision_at_100_diff1 value: -45.30333307019884 - type: nauc_precision_at_100_max value: 4.798109392607744 - type: nauc_precision_at_100_std value: 44.17265435105678 - type: nauc_precision_at_10_diff1 value: -41.06166076037899 - type: nauc_precision_at_10_max value: 3.1383589635972946 - type: nauc_precision_at_10_std value: 29.793783541894808 - type: nauc_precision_at_1_diff1 value: 78.50656620759376 - type: nauc_precision_at_1_max value: 16.543901338375292 - type: nauc_precision_at_1_std value: -45.228060755270924 - type: nauc_precision_at_20_diff1 value: -43.652129476251716 - type: nauc_precision_at_20_max value: 3.2858069466648216 - type: nauc_precision_at_20_std value: 37.028312444009465 - type: nauc_precision_at_3_diff1 value: -22.417878997483122 - type: nauc_precision_at_3_max value: 4.357588406195106 - type: nauc_precision_at_3_std value: 5.548454556466125 - type: nauc_precision_at_5_diff1 value: -34.59346173557382 - type: nauc_precision_at_5_max value: 4.092275688412817 - type: nauc_precision_at_5_std value: 18.571795479923363 - type: nauc_recall_at_1000_diff1 value: 60.81475778096289 - type: nauc_recall_at_1000_max value: -31.05691334029901 - type: nauc_recall_at_1000_std value: 17.690001316678824 - type: nauc_recall_at_100_diff1 value: 66.92860112572923 - type: nauc_recall_at_100_max value: -20.096801559362024 - type: nauc_recall_at_100_std value: -68.9845088182372 - type: nauc_recall_at_10_diff1 value: 74.17807393588308 - type: nauc_recall_at_10_max value: 3.6718305333112307 - type: nauc_recall_at_10_std value: -80.75005939962519 - type: nauc_recall_at_1_diff1 value: 81.28542355395726 - type: nauc_recall_at_1_max value: 9.394601627274016 - type: nauc_recall_at_1_std value: -46.26696310796872 - type: nauc_recall_at_20_diff1 value: 75.23032147657926 - type: nauc_recall_at_20_max value: -0.03516363792685841 - type: nauc_recall_at_20_std value: -82.42013443698025 - type: nauc_recall_at_3_diff1 value: 74.33274649676034 - type: nauc_recall_at_3_max value: 4.764207227787686 - type: nauc_recall_at_3_std value: -67.89402783108405 - type: nauc_recall_at_5_diff1 value: 73.04544826821459 - type: nauc_recall_at_5_max value: 5.5335471808875205 - type: nauc_recall_at_5_std value: -75.37168632889185 - type: ndcg_at_1 value: 82.69999999999999 - type: ndcg_at_10 value: 89.631 - type: ndcg_at_100 value: 90.671 - type: ndcg_at_1000 value: 90.728 - type: ndcg_at_20 value: 90.251 - type: ndcg_at_3 value: 86.943 - type: ndcg_at_5 value: 88.506 - type: precision_at_1 value: 82.69999999999999 - type: precision_at_10 value: 13.619 - type: precision_at_100 value: 1.541 - type: precision_at_1000 value: 0.157 - type: precision_at_20 value: 7.23 - type: precision_at_3 value: 38.107 - type: precision_at_5 value: 25.096 - type: recall_at_1 value: 71.873 - type: recall_at_10 value: 96.414 - type: recall_at_100 value: 99.76899999999999 - type: recall_at_1000 value: 99.98 - type: recall_at_20 value: 98.35199999999999 - type: recall_at_3 value: 88.69399999999999 - type: recall_at_5 value: 93.098 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: main_score value: 62.06917394472442 - type: v_measure value: 62.06917394472442 - type: v_measure_std value: 4.943151033431419 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 385e3cb46b4cfa89021f56c4380204149d0efe33 metrics: - type: main_score value: 69.22733490519639 - type: v_measure value: 69.22733490519639 - type: v_measure_std value: 13.377934681081163 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: mteb/scidocs config: default split: test revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88 metrics: - type: main_score value: 23.05 - type: map_at_1 value: 5.453 - type: map_at_10 value: 14.044 - type: map_at_100 value: 16.552 - type: map_at_1000 value: 16.878 - type: map_at_20 value: 15.301 - type: map_at_3 value: 9.876999999999999 - type: map_at_5 value: 11.795 - type: mrr_at_1 value: 26.8 - type: mrr_at_10 value: 38.10575396825392 - type: mrr_at_100 value: 39.22960431676882 - type: mrr_at_1000 value: 39.27303645178868 - type: mrr_at_20 value: 38.79283461937093 - type: mrr_at_3 value: 34.93333333333331 - type: mrr_at_5 value: 36.833333333333265 - type: nauc_map_at_1000_diff1 value: 15.83509790600893 - type: nauc_map_at_1000_max value: 26.24412309285264 - type: nauc_map_at_1000_std value: 8.509912487483804 - type: nauc_map_at_100_diff1 value: 15.79290413461707 - type: nauc_map_at_100_max value: 26.29405340863185 - type: nauc_map_at_100_std value: 8.342652598208991 - type: nauc_map_at_10_diff1 value: 15.685067888518105 - type: nauc_map_at_10_max value: 25.55365734226509 - type: nauc_map_at_10_std value: 5.443581397457305 - type: nauc_map_at_1_diff1 value: 19.890826140704405 - type: nauc_map_at_1_max value: 18.571983014050776 - type: nauc_map_at_1_std value: 0.7282160023666692 - type: nauc_map_at_20_diff1 value: 15.604228999117606 - type: nauc_map_at_20_max value: 25.914082775189712 - type: nauc_map_at_20_std value: 6.618058124712935 - type: nauc_map_at_3_diff1 value: 17.243583831563896 - type: nauc_map_at_3_max value: 23.989306351982645 - type: nauc_map_at_3_std value: 2.750722234499615 - type: nauc_map_at_5_diff1 value: 16.416721826214868 - type: nauc_map_at_5_max value: 24.289258470596494 - type: nauc_map_at_5_std value: 3.5318278077707266 - type: nauc_mrr_at_1000_diff1 value: 18.159556434705603 - type: nauc_mrr_at_1000_max value: 21.85066952735879 - type: nauc_mrr_at_1000_std value: 4.877956024495391 - type: nauc_mrr_at_100_diff1 value: 18.147842867473464 - type: nauc_mrr_at_100_max value: 21.851576391218245 - type: nauc_mrr_at_100_std value: 4.914456023591578 - type: nauc_mrr_at_10_diff1 value: 18.402284894586295 - type: nauc_mrr_at_10_max value: 21.937638889135496 - type: nauc_mrr_at_10_std value: 4.795941003675795 - type: nauc_mrr_at_1_diff1 value: 20.00724187285097 - type: nauc_mrr_at_1_max value: 18.89430286994851 - type: nauc_mrr_at_1_std value: 0.832530264756033 - type: nauc_mrr_at_20_diff1 value: 18.166042536965495 - type: nauc_mrr_at_20_max value: 21.956527896385104 - type: nauc_mrr_at_20_std value: 4.953268517852472 - type: nauc_mrr_at_3_diff1 value: 17.439379075748157 - type: nauc_mrr_at_3_max value: 21.778191027575406 - type: nauc_mrr_at_3_std value: 3.9046873265275908 - type: nauc_mrr_at_5_diff1 value: 18.181749683051816 - type: nauc_mrr_at_5_max value: 21.75852211586367 - type: nauc_mrr_at_5_std value: 4.5573370913949205 - type: nauc_ndcg_at_1000_diff1 value: 16.26265940273677 - type: nauc_ndcg_at_1000_max value: 26.76405498342847 - type: nauc_ndcg_at_1000_std value: 15.305696457284704 - type: nauc_ndcg_at_100_diff1 value: 15.835715535652216 - type: nauc_ndcg_at_100_max value: 27.52544278395052 - type: nauc_ndcg_at_100_std value: 14.984129606447347 - type: nauc_ndcg_at_10_diff1 value: 16.305421877142873 - type: nauc_ndcg_at_10_max value: 26.04920150942696 - type: nauc_ndcg_at_10_std value: 7.3715098732860875 - type: nauc_ndcg_at_1_diff1 value: 20.00724187285097 - type: nauc_ndcg_at_1_max value: 18.89430286994851 - type: nauc_ndcg_at_1_std value: 0.832530264756033 - type: nauc_ndcg_at_20_diff1 value: 15.957812909225675 - type: nauc_ndcg_at_20_max value: 26.73874805693458 - type: nauc_ndcg_at_20_std value: 9.445743449181023 - type: nauc_ndcg_at_3_diff1 value: 16.907542932061347 - type: nauc_ndcg_at_3_max value: 24.10195208238332 - type: nauc_ndcg_at_3_std value: 4.2558628942284 - type: nauc_ndcg_at_5_diff1 value: 16.757400054919763 - type: nauc_ndcg_at_5_max value: 24.500001119288996 - type: nauc_ndcg_at_5_std value: 5.46600678624086 - type: nauc_precision_at_1000_diff1 value: 7.829614320017092 - type: nauc_precision_at_1000_max value: 18.552313928878853 - type: nauc_precision_at_1000_std value: 31.67901423674111 - type: nauc_precision_at_100_diff1 value: 9.564085128323068 - type: nauc_precision_at_100_max value: 24.80995247750652 - type: nauc_precision_at_100_std value: 27.019281458663453 - type: nauc_precision_at_10_diff1 value: 13.560218697417328 - type: nauc_precision_at_10_max value: 26.50289219410562 - type: nauc_precision_at_10_std value: 10.333452967470425 - type: nauc_precision_at_1_diff1 value: 20.00724187285097 - type: nauc_precision_at_1_max value: 18.89430286994851 - type: nauc_precision_at_1_std value: 0.832530264756033 - type: nauc_precision_at_20_diff1 value: 12.23792883716372 - type: nauc_precision_at_20_max value: 26.52003953582503 - type: nauc_precision_at_20_std value: 14.095312993321937 - type: nauc_precision_at_3_diff1 value: 15.790498950071271 - type: nauc_precision_at_3_max value: 26.217004704355695 - type: nauc_precision_at_3_std value: 6.00338370025878 - type: nauc_precision_at_5_diff1 value: 14.982885989652628 - type: nauc_precision_at_5_max value: 25.49696747450349 - type: nauc_precision_at_5_std value: 7.904034204757165 - type: nauc_recall_at_1000_diff1 value: 7.869779867534929 - type: nauc_recall_at_1000_max value: 18.447958241897062 - type: nauc_recall_at_1000_std value: 33.40550883180547 - type: nauc_recall_at_100_diff1 value: 9.276867449557107 - type: nauc_recall_at_100_max value: 24.7296081517642 - type: nauc_recall_at_100_std value: 27.51189589980202 - type: nauc_recall_at_10_diff1 value: 13.2948955685031 - type: nauc_recall_at_10_max value: 26.176157566779036 - type: nauc_recall_at_10_std value: 10.235160480354189 - type: nauc_recall_at_1_diff1 value: 19.890826140704405 - type: nauc_recall_at_1_max value: 18.571983014050776 - type: nauc_recall_at_1_std value: 0.7282160023666692 - type: nauc_recall_at_20_diff1 value: 12.045704204225952 - type: nauc_recall_at_20_max value: 26.26856701427816 - type: nauc_recall_at_20_std value: 14.18936905592523 - type: nauc_recall_at_3_diff1 value: 15.624488486823054 - type: nauc_recall_at_3_max value: 25.963467662344463 - type: nauc_recall_at_3_std value: 5.7459486903540125 - type: nauc_recall_at_5_diff1 value: 14.719691959242631 - type: nauc_recall_at_5_max value: 25.281392451119533 - type: nauc_recall_at_5_std value: 7.668697286095074 - type: ndcg_at_1 value: 26.8 - type: ndcg_at_10 value: 23.05 - type: ndcg_at_100 value: 32.281 - type: ndcg_at_1000 value: 37.449 - type: ndcg_at_20 value: 26.343 - type: ndcg_at_3 value: 21.813 - type: ndcg_at_5 value: 18.978 - type: precision_at_1 value: 26.8 - type: precision_at_10 value: 12.04 - type: precision_at_100 value: 2.5309999999999997 - type: precision_at_1000 value: 0.376 - type: precision_at_20 value: 7.920000000000001 - type: precision_at_3 value: 20.467 - type: precision_at_5 value: 16.66 - type: recall_at_1 value: 5.453 - type: recall_at_10 value: 24.407 - type: recall_at_100 value: 51.388 - type: recall_at_1000 value: 76.385 - type: recall_at_20 value: 32.132 - type: recall_at_3 value: 12.458 - type: recall_at_5 value: 16.883 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: cosine_pearson value: 83.30199374106634 - type: cosine_spearman value: 81.13661060651675 - type: euclidean_pearson value: 80.74756859182727 - type: euclidean_spearman value: 81.13661231617098 - type: main_score value: 81.13661060651675 - type: manhattan_pearson value: 80.79987665196892 - type: manhattan_spearman value: 81.19071318923478 - type: pearson value: 83.30199374106634 - type: spearman value: 81.13661060651675 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cosine_pearson value: 87.28127488429784 - type: cosine_spearman value: 80.84701244619681 - type: euclidean_pearson value: 84.63075827597196 - type: euclidean_spearman value: 80.84536982511581 - type: main_score value: 80.84701244619681 - type: manhattan_pearson value: 84.73599041680716 - type: manhattan_spearman value: 80.93999055513295 - type: pearson value: 87.28127488429784 - type: spearman value: 80.84701244619681 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cosine_pearson value: 80.66880014782137 - type: cosine_spearman value: 83.45193788333383 - type: euclidean_pearson value: 82.84711656880242 - type: euclidean_spearman value: 83.4519378091543 - type: main_score value: 83.45193788333383 - type: manhattan_pearson value: 83.20679773566451 - type: manhattan_spearman value: 83.68427989986384 - type: pearson value: 80.66880014782137 - type: spearman value: 83.45193788333383 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cosine_pearson value: 81.70473811658245 - type: cosine_spearman value: 81.37150133146272 - type: euclidean_pearson value: 81.82289045206721 - type: euclidean_spearman value: 81.37150250773698 - type: main_score value: 81.37150133146272 - type: manhattan_pearson value: 81.84018518966202 - type: manhattan_spearman value: 81.4791733102674 - type: pearson value: 81.70473811658245 - type: spearman value: 81.37150133146272 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cosine_pearson value: 85.23548514858807 - type: cosine_spearman value: 86.56697002494492 - type: euclidean_pearson value: 86.00739925740125 - type: euclidean_spearman value: 86.5669601560328 - type: main_score value: 86.56697002494492 - type: manhattan_pearson value: 86.01926247979789 - type: manhattan_spearman value: 86.58200443341161 - type: pearson value: 85.23548514858807 - type: spearman value: 86.56697002494492 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cosine_pearson value: 84.76487207857608 - type: cosine_spearman value: 85.96973829887335 - type: euclidean_pearson value: 85.39563735627405 - type: euclidean_spearman value: 85.96973768046821 - type: main_score value: 85.96973829887335 - type: manhattan_pearson value: 85.44181395460119 - type: manhattan_spearman value: 85.98361475342077 - type: pearson value: 84.76487207857608 - type: spearman value: 85.96973829887335 - task: type: STS dataset: name: MTEB STS17 (en-ar) type: mteb/sts17-crosslingual-sts config: en-ar split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 73.59778720153878 - type: cosine_spearman value: 73.21365573663648 - type: euclidean_pearson value: 74.61013811041204 - type: euclidean_spearman value: 73.21365573663648 - type: main_score value: 73.21365573663648 - type: manhattan_pearson value: 75.46428528424805 - type: manhattan_spearman value: 74.29181782091922 - type: pearson value: 73.59778720153878 - type: spearman value: 73.21365573663648 - task: type: STS dataset: name: MTEB STS17 (en-de) type: mteb/sts17-crosslingual-sts config: en-de split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 80.45215095199184 - type: cosine_spearman value: 80.18358296781457 - type: euclidean_pearson value: 81.11825325108214 - type: euclidean_spearman value: 80.18358296781457 - type: main_score value: 80.18358296781457 - type: manhattan_pearson value: 81.4591437652861 - type: manhattan_spearman value: 80.61195448433135 - type: pearson value: 80.45215095199184 - type: spearman value: 80.18358296781457 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 85.71499870763965 - type: cosine_spearman value: 85.87991701852383 - type: euclidean_pearson value: 86.26482803799405 - type: euclidean_spearman value: 85.87991701852383 - type: main_score value: 85.87991701852383 - type: manhattan_pearson value: 86.31138576225774 - type: manhattan_spearman value: 85.97213375112646 - type: pearson value: 85.71499870763965 - type: spearman value: 85.87991701852383 - task: type: STS dataset: name: MTEB STS17 (en-tr) type: mteb/sts17-crosslingual-sts config: en-tr split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 73.29542480691444 - type: cosine_spearman value: 71.47958526963733 - type: euclidean_pearson value: 73.93627613725454 - type: euclidean_spearman value: 71.47958526963733 - type: main_score value: 71.47958526963733 - type: manhattan_pearson value: 74.44025905945567 - type: manhattan_spearman value: 71.96624843850806 - type: pearson value: 73.29542480691444 - type: spearman value: 71.47958526963733 - task: type: STS dataset: name: MTEB STS17 (es-en) type: mteb/sts17-crosslingual-sts config: es-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 82.41531123241937 - type: cosine_spearman value: 82.4879904820364 - type: euclidean_pearson value: 83.27714045603713 - type: euclidean_spearman value: 82.4879904820364 - type: main_score value: 82.4879904820364 - type: manhattan_pearson value: 83.20321223974034 - type: manhattan_spearman value: 82.45108504740335 - type: pearson value: 82.41531123241937 - type: spearman value: 82.4879904820364 - task: type: STS dataset: name: MTEB STS17 (fr-en) type: mteb/sts17-crosslingual-sts config: fr-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 82.36534108108745 - type: cosine_spearman value: 82.60235982579208 - type: euclidean_pearson value: 83.38376484479176 - type: euclidean_spearman value: 82.60235982579208 - type: main_score value: 82.60235982579208 - type: manhattan_pearson value: 83.1266661207628 - type: manhattan_spearman value: 82.29914782630499 - type: pearson value: 82.36534108108745 - type: spearman value: 82.60235982579208 - task: type: STS dataset: name: MTEB STS17 (it-en) type: mteb/sts17-crosslingual-sts config: it-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 81.6455680038347 - type: cosine_spearman value: 82.30529817112216 - type: euclidean_pearson value: 82.64048244637631 - type: euclidean_spearman value: 82.30529817112216 - type: main_score value: 82.30529817112216 - type: manhattan_pearson value: 82.5841168628191 - type: manhattan_spearman value: 82.22315262815766 - type: pearson value: 81.6455680038347 - type: spearman value: 82.30529817112216 - task: type: STS dataset: name: MTEB STS17 (nl-en) type: mteb/sts17-crosslingual-sts config: nl-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 81.31015324957383 - type: cosine_spearman value: 81.67150513771149 - type: euclidean_pearson value: 82.18829538438011 - type: euclidean_spearman value: 81.67150513771149 - type: main_score value: 81.67150513771149 - type: manhattan_pearson value: 81.9426348184988 - type: manhattan_spearman value: 81.31839846589499 - type: pearson value: 81.31015324957383 - type: spearman value: 81.67150513771149 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 71.83303469703797 - type: cosine_spearman value: 71.17442108295238 - type: euclidean_pearson value: 71.99378163260577 - type: euclidean_spearman value: 71.17442108295238 - type: main_score value: 71.17442108295238 - type: manhattan_pearson value: 72.17433166481283 - type: manhattan_spearman value: 71.32848567021358 - type: pearson value: 71.83303469703797 - type: spearman value: 71.17442108295238 - task: type: STS dataset: name: MTEB STS22 (de-en) type: mteb/sts22-crosslingual-sts config: de-en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 71.5617971809721 - type: cosine_spearman value: 69.26497488645118 - type: euclidean_pearson value: 73.77290240232199 - type: euclidean_spearman value: 69.26497488645118 - type: main_score value: 69.26497488645118 - type: manhattan_pearson value: 74.6285666652718 - type: manhattan_spearman value: 70.29660365676885 - type: pearson value: 71.5617971809721 - type: spearman value: 69.26497488645118 - task: type: STS dataset: name: MTEB STS22 (es-en) type: mteb/sts22-crosslingual-sts config: es-en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 81.17428283241915 - type: cosine_spearman value: 83.15967976089405 - type: euclidean_pearson value: 82.11129224970894 - type: euclidean_spearman value: 83.15967976089405 - type: main_score value: 83.15967976089405 - type: manhattan_pearson value: 83.88320594891758 - type: manhattan_spearman value: 84.21150297680087 - type: pearson value: 81.17428283241915 - type: spearman value: 83.15967976089405 - task: type: STS dataset: name: MTEB STS22 (pl-en) type: mteb/sts22-crosslingual-sts config: pl-en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 84.79991447537422 - type: cosine_spearman value: 84.29259538220988 - type: euclidean_pearson value: 83.6515451078445 - type: euclidean_spearman value: 84.29259538220988 - type: main_score value: 84.29259538220988 - type: manhattan_pearson value: 83.34017347225922 - type: manhattan_spearman value: 85.22314841310823 - type: pearson value: 84.79991447537422 - type: spearman value: 84.29259538220988 - type: cosine_pearson value: 84.7999084116691 - type: cosine_spearman value: 84.29259538220988 - type: euclidean_pearson value: 83.65153743329672 - type: euclidean_spearman value: 84.29259538220988 - type: main_score value: 84.29259538220988 - type: manhattan_pearson value: 83.3401730943064 - type: manhattan_spearman value: 85.22314841310823 - type: pearson value: 84.7999084116691 - type: spearman value: 84.29259538220988 - task: type: STS dataset: name: MTEB STS22 (zh-en) type: mteb/sts22-crosslingual-sts config: zh-en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 77.20169600765146 - type: cosine_spearman value: 74.90653473871943 - type: euclidean_pearson value: 78.15249739396126 - type: euclidean_spearman value: 74.90653473871943 - type: main_score value: 74.90653473871943 - type: manhattan_pearson value: 78.28938036790484 - type: manhattan_spearman value: 75.05487827510268 - type: pearson value: 77.20169600765146 - type: spearman value: 74.90653473871943 - type: cosine_pearson value: 77.20169606146547 - type: cosine_spearman value: 74.90653473871943 - type: euclidean_pearson value: 78.15249735935164 - type: euclidean_spearman value: 74.90653473871943 - type: main_score value: 74.90653473871943 - type: manhattan_pearson value: 78.28938036790484 - type: manhattan_spearman value: 75.05487827510268 - type: pearson value: 77.20169606146547 - type: spearman value: 74.90653473871943 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cosine_pearson value: 82.02846752351698 - type: cosine_spearman value: 84.43251200064613 - type: euclidean_pearson value: 83.97505218523716 - type: euclidean_spearman value: 84.43251200064613 - type: main_score value: 84.43251200064613 - type: manhattan_pearson value: 83.99261500966325 - type: manhattan_spearman value: 84.47935243587095 - type: pearson value: 82.02846752351698 - type: spearman value: 84.43251200064613 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: main_score value: 86.07845920585898 - type: map value: 86.07845920585898 - type: mrr value: 96.41839641839643 - type: nAUC_map_diff1 value: -0.842643700986476 - type: nAUC_map_max value: 51.87683748536326 - type: nAUC_map_std value: 70.46131124609762 - type: nAUC_mrr_diff1 value: 48.46021089146518 - type: nAUC_mrr_max value: 83.92600322127902 - type: nAUC_mrr_std value: 84.54594067723419 - task: type: Retrieval dataset: name: MTEB SciFact type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: main_score value: 79.945 - type: map_at_1 value: 64.161 - type: map_at_10 value: 75.47 - type: map_at_100 value: 75.794 - type: map_at_1000 value: 75.797 - type: map_at_20 value: 75.70700000000001 - type: map_at_3 value: 73.152 - type: map_at_5 value: 74.26700000000001 - type: mrr_at_1 value: 67.33333333333333 - type: mrr_at_10 value: 76.38544973544973 - type: mrr_at_100 value: 76.59273460106952 - type: mrr_at_1000 value: 76.59569524196121 - type: mrr_at_20 value: 76.52124756335283 - type: mrr_at_3 value: 74.77777777777779 - type: mrr_at_5 value: 75.66111111111111 - type: nauc_map_at_1000_diff1 value: 76.15028048092202 - type: nauc_map_at_1000_max value: 56.538149672254875 - type: nauc_map_at_1000_std value: 3.704721784625868 - type: nauc_map_at_100_diff1 value: 76.15301570724966 - type: nauc_map_at_100_max value: 56.54022753605153 - type: nauc_map_at_100_std value: 3.710343234630538 - type: nauc_map_at_10_diff1 value: 75.95811880169259 - type: nauc_map_at_10_max value: 56.370110060103585 - type: nauc_map_at_10_std value: 3.1050633374399763 - type: nauc_map_at_1_diff1 value: 79.18280233077802 - type: nauc_map_at_1_max value: 49.80324907065242 - type: nauc_map_at_1_std value: -2.4529471800694576 - type: nauc_map_at_20_diff1 value: 76.06105794325309 - type: nauc_map_at_20_max value: 56.50983388527086 - type: nauc_map_at_20_std value: 3.5438509096689357 - type: nauc_map_at_3_diff1 value: 77.30131743846023 - type: nauc_map_at_3_max value: 54.88345820574091 - type: nauc_map_at_3_std value: -0.14414153724376336 - type: nauc_map_at_5_diff1 value: 76.3760021484074 - type: nauc_map_at_5_max value: 56.238991517151405 - type: nauc_map_at_5_std value: 2.032924236599453 - type: nauc_mrr_at_1000_diff1 value: 75.76788613755507 - type: nauc_mrr_at_1000_max value: 58.052755437812806 - type: nauc_mrr_at_1000_std value: 6.4693625323421395 - type: nauc_mrr_at_100_diff1 value: 75.77073741995821 - type: nauc_mrr_at_100_max value: 58.054659119201915 - type: nauc_mrr_at_100_std value: 6.474706478778545 - type: nauc_mrr_at_10_diff1 value: 75.54115735059217 - type: nauc_mrr_at_10_max value: 58.17265501482297 - type: nauc_mrr_at_10_std value: 6.251843373595271 - type: nauc_mrr_at_1_diff1 value: 77.57603990319609 - type: nauc_mrr_at_1_max value: 55.86220217467876 - type: nauc_mrr_at_1_std value: 7.101223682865022 - type: nauc_mrr_at_20_diff1 value: 75.65587300975086 - type: nauc_mrr_at_20_max value: 58.06955862304443 - type: nauc_mrr_at_20_std value: 6.426259261520951 - type: nauc_mrr_at_3_diff1 value: 76.09312522665512 - type: nauc_mrr_at_3_max value: 57.79116645551433 - type: nauc_mrr_at_3_std value: 5.340465414196046 - type: nauc_mrr_at_5_diff1 value: 75.45748931746186 - type: nauc_mrr_at_5_max value: 58.37483417758293 - type: nauc_mrr_at_5_std value: 6.583732482357576 - type: nauc_ndcg_at_1000_diff1 value: 75.63299082223676 - type: nauc_ndcg_at_1000_max value: 57.993614411068904 - type: nauc_ndcg_at_1000_std value: 5.468178341243107 - type: nauc_ndcg_at_100_diff1 value: 75.72790601940984 - type: nauc_ndcg_at_100_max value: 58.09005146018939 - type: nauc_ndcg_at_100_std value: 5.71991898098629 - type: nauc_ndcg_at_10_diff1 value: 74.51570123942263 - type: nauc_ndcg_at_10_max value: 58.1674126126442 - type: nauc_ndcg_at_10_std value: 3.5291957180471485 - type: nauc_ndcg_at_1_diff1 value: 77.57603990319609 - type: nauc_ndcg_at_1_max value: 55.86220217467876 - type: nauc_ndcg_at_1_std value: 7.101223682865022 - type: nauc_ndcg_at_20_diff1 value: 74.87370264715129 - type: nauc_ndcg_at_20_max value: 58.26479583945405 - type: nauc_ndcg_at_20_std value: 4.9410010121533485 - type: nauc_ndcg_at_3_diff1 value: 75.7799770695112 - type: nauc_ndcg_at_3_max value: 57.17058509382753 - type: nauc_ndcg_at_3_std value: 1.3057457066922815 - type: nauc_ndcg_at_5_diff1 value: 74.93409961910731 - type: nauc_ndcg_at_5_max value: 58.10546350113983 - type: nauc_ndcg_at_5_std value: 2.3728589558592525 - type: nauc_precision_at_1000_diff1 value: -36.988372487202895 - type: nauc_precision_at_1000_max value: 9.243703176379006 - type: nauc_precision_at_1000_std value: 50.62137699583042 - type: nauc_precision_at_100_diff1 value: -33.30632037370124 - type: nauc_precision_at_100_max value: 11.176117908274431 - type: nauc_precision_at_100_std value: 50.77711672892819 - type: nauc_precision_at_10_diff1 value: -13.462060179997415 - type: nauc_precision_at_10_max value: 24.57035350735441 - type: nauc_precision_at_10_std value: 38.3237594215549 - type: nauc_precision_at_1_diff1 value: 77.57603990319609 - type: nauc_precision_at_1_max value: 55.86220217467876 - type: nauc_precision_at_1_std value: 7.101223682865022 - type: nauc_precision_at_20_diff1 value: -20.905637069236803 - type: nauc_precision_at_20_max value: 19.222790681412974 - type: nauc_precision_at_20_std value: 42.69173843625813 - type: nauc_precision_at_3_diff1 value: 27.885276073619607 - type: nauc_precision_at_3_max value: 42.46319018404902 - type: nauc_precision_at_3_std value: 20.63803680981594 - type: nauc_precision_at_5_diff1 value: 10.021834061135383 - type: nauc_precision_at_5_max value: 40.31174187287723 - type: nauc_precision_at_5_std value: 33.500727802037865 - type: nauc_recall_at_1000_diff1 value: 100.0 - type: nauc_recall_at_1000_max value: 100.0 - type: nauc_recall_at_1000_std value: 100.0 - type: nauc_recall_at_100_diff1 value: 95.64270152505469 - type: nauc_recall_at_100_max value: 85.13849984438123 - type: nauc_recall_at_100_std value: 70.7594148770609 - type: nauc_recall_at_10_diff1 value: 63.07050183257385 - type: nauc_recall_at_10_max value: 65.22778265535068 - type: nauc_recall_at_10_std value: -4.821132433072802 - type: nauc_recall_at_1_diff1 value: 79.18280233077802 - type: nauc_recall_at_1_max value: 49.80324907065242 - type: nauc_recall_at_1_std value: -2.4529471800694576 - type: nauc_recall_at_20_diff1 value: 63.58865385234562 - type: nauc_recall_at_20_max value: 69.80424353649502 - type: nauc_recall_at_20_std value: 8.392092469171327 - type: nauc_recall_at_3_diff1 value: 72.47444041652938 - type: nauc_recall_at_3_max value: 56.89729952915068 - type: nauc_recall_at_3_std value: -8.254542768503438 - type: nauc_recall_at_5_diff1 value: 68.01094653591714 - type: nauc_recall_at_5_max value: 61.9124136345221 - type: nauc_recall_at_5_std value: -4.833220968920088 - type: ndcg_at_1 value: 67.333 - type: ndcg_at_10 value: 79.945 - type: ndcg_at_100 value: 81.328 - type: ndcg_at_1000 value: 81.413 - type: ndcg_at_20 value: 80.649 - type: ndcg_at_3 value: 76.29 - type: ndcg_at_5 value: 77.701 - type: precision_at_1 value: 67.333 - type: precision_at_10 value: 10.467 - type: precision_at_100 value: 1.1199999999999999 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_20 value: 5.4 - type: precision_at_3 value: 30.110999999999997 - type: precision_at_5 value: 19.2 - type: recall_at_1 value: 64.161 - type: recall_at_10 value: 92.55600000000001 - type: recall_at_100 value: 99.0 - type: recall_at_1000 value: 99.667 - type: recall_at_20 value: 95.167 - type: recall_at_3 value: 82.6 - type: recall_at_5 value: 86.244 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cosine_accuracy value: 99.81881188118813 - type: cosine_accuracy_threshold value: 85.84058284759521 - type: cosine_ap value: 95.78776332556504 - type: cosine_f1 value: 91.01620029455081 - type: cosine_f1_threshold value: 85.75928211212158 - type: cosine_precision value: 89.39247830279653 - type: cosine_recall value: 92.7 - type: dot_accuracy value: 99.81881188118813 - type: dot_accuracy_threshold value: 85.84058284759521 - type: dot_ap value: 95.78773347155844 - type: dot_f1 value: 91.01620029455081 - type: dot_f1_threshold value: 85.75928211212158 - type: dot_precision value: 89.39247830279653 - type: dot_recall value: 92.7 - type: euclidean_accuracy value: 99.81881188118813 - type: euclidean_accuracy_threshold value: 53.215450048446655 - type: euclidean_ap value: 95.78776332556505 - type: euclidean_f1 value: 91.01620029455081 - type: euclidean_f1_threshold value: 53.36800813674927 - type: euclidean_precision value: 89.39247830279653 - type: euclidean_recall value: 92.7 - type: main_score value: 95.91773920491504 - type: manhattan_accuracy value: 99.81881188118813 - type: manhattan_accuracy_threshold value: 2434.398651123047 - type: manhattan_ap value: 95.91773920491504 - type: manhattan_f1 value: 91.05928085519923 - type: manhattan_f1_threshold value: 2558.251953125 - type: manhattan_precision value: 88.5633270321361 - type: manhattan_recall value: 93.7 - type: max_ap value: 95.91773920491504 - type: max_f1 value: 91.05928085519923 - type: max_precision value: 89.39247830279653 - type: max_recall value: 93.7 - type: similarity_accuracy value: 99.81881188118813 - type: similarity_accuracy_threshold value: 85.84058284759521 - type: similarity_ap value: 95.78776332556504 - type: similarity_f1 value: 91.01620029455081 - type: similarity_f1_threshold value: 85.75928211212158 - type: similarity_precision value: 89.39247830279653 - type: similarity_recall value: 92.7 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: main_score value: 75.04678082457019 - type: v_measure value: 75.04678082457019 - type: v_measure_std value: 2.77895031549009 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: main_score value: 46.7480616077338 - type: v_measure value: 46.7480616077338 - type: v_measure_std value: 1.5247582475269905 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: main_score value: 53.066118142981225 - type: map value: 53.066118142981225 - type: mrr value: 53.96447404719464 - type: nAUC_map_diff1 value: 38.329026794054585 - type: nAUC_map_max value: 12.731823775227054 - type: nAUC_map_std value: 7.4769546414816315 - type: nAUC_mrr_diff1 value: 38.45132255702392 - type: nAUC_mrr_max value: 13.565204704342396 - type: nAUC_mrr_std value: 8.287911244819353 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cosine_pearson value: 28.5799749570802 - type: cosine_spearman value: 27.695727859698255 - type: dot_pearson value: 28.579989993905958 - type: dot_spearman value: 27.69484016531357 - type: main_score value: 27.695727859698255 - type: pearson value: 28.5799749570802 - type: spearman value: 27.695727859698255 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: mteb/trec-covid config: default split: test revision: bb9466bac8153a0349341eb1b22e06409e78ef4e metrics: - type: main_score value: 86.454 - type: map_at_1 value: 0.242 - type: map_at_10 value: 2.279 - type: map_at_100 value: 15.088 - type: map_at_1000 value: 37.029 - type: map_at_20 value: 4.275 - type: map_at_3 value: 0.722 - type: map_at_5 value: 1.195 - type: mrr_at_1 value: 92.0 - type: mrr_at_10 value: 96.0 - type: mrr_at_100 value: 96.0 - type: mrr_at_1000 value: 96.0 - type: mrr_at_20 value: 96.0 - type: mrr_at_3 value: 96.0 - type: mrr_at_5 value: 96.0 - type: nauc_map_at_1000_diff1 value: -30.293896217848655 - type: nauc_map_at_1000_max value: 23.85136121467542 - type: nauc_map_at_1000_std value: 63.35470337953567 - type: nauc_map_at_100_diff1 value: -14.703078563009154 - type: nauc_map_at_100_max value: 27.973629581077102 - type: nauc_map_at_100_std value: 46.95359110837345 - type: nauc_map_at_10_diff1 value: -4.124915890350429 - type: nauc_map_at_10_max value: 16.204061410353123 - type: nauc_map_at_10_std value: 22.43823882381022 - type: nauc_map_at_1_diff1 value: 3.871186260963678 - type: nauc_map_at_1_max value: 10.291074663078922 - type: nauc_map_at_1_std value: 15.473300411071794 - type: nauc_map_at_20_diff1 value: -3.9802237172610164 - type: nauc_map_at_20_max value: 18.11012767486046 - type: nauc_map_at_20_std value: 24.60833473170288 - type: nauc_map_at_3_diff1 value: 8.461807764072127 - type: nauc_map_at_3_max value: 11.691666667817504 - type: nauc_map_at_3_std value: 14.06895247661592 - type: nauc_map_at_5_diff1 value: 4.5975621880550985 - type: nauc_map_at_5_max value: 12.187323190544557 - type: nauc_map_at_5_std value: 14.757297772880342 - type: nauc_mrr_at_1000_diff1 value: -18.732492997198666 - type: nauc_mrr_at_1000_max value: 60.38748832866469 - type: nauc_mrr_at_1000_std value: 81.90943043884228 - type: nauc_mrr_at_100_diff1 value: -18.732492997198666 - type: nauc_mrr_at_100_max value: 60.38748832866469 - type: nauc_mrr_at_100_std value: 81.90943043884228 - type: nauc_mrr_at_10_diff1 value: -18.732492997198666 - type: nauc_mrr_at_10_max value: 60.38748832866469 - type: nauc_mrr_at_10_std value: 81.90943043884228 - type: nauc_mrr_at_1_diff1 value: -18.73249299719886 - type: nauc_mrr_at_1_max value: 60.38748832866479 - type: nauc_mrr_at_1_std value: 81.90943043884225 - type: nauc_mrr_at_20_diff1 value: -18.732492997198666 - type: nauc_mrr_at_20_max value: 60.38748832866469 - type: nauc_mrr_at_20_std value: 81.90943043884228 - type: nauc_mrr_at_3_diff1 value: -18.732492997198666 - type: nauc_mrr_at_3_max value: 60.38748832866469 - type: nauc_mrr_at_3_std value: 81.90943043884228 - type: nauc_mrr_at_5_diff1 value: -18.732492997198666 - type: nauc_mrr_at_5_max value: 60.38748832866469 - type: nauc_mrr_at_5_std value: 81.90943043884228 - type: nauc_ndcg_at_1000_diff1 value: -30.17247489441324 - type: nauc_ndcg_at_1000_max value: 25.053572521852125 - type: nauc_ndcg_at_1000_std value: 63.223787007068125 - type: nauc_ndcg_at_100_diff1 value: -49.44136749206699 - type: nauc_ndcg_at_100_max value: 31.726373553802734 - type: nauc_ndcg_at_100_std value: 65.63882146402028 - type: nauc_ndcg_at_10_diff1 value: -64.45463810632792 - type: nauc_ndcg_at_10_max value: 43.77927205228312 - type: nauc_ndcg_at_10_std value: 68.75779829097429 - type: nauc_ndcg_at_1_diff1 value: -39.08462033462035 - type: nauc_ndcg_at_1_max value: 46.987612612612565 - type: nauc_ndcg_at_1_std value: 78.56740669240665 - type: nauc_ndcg_at_20_diff1 value: -50.57831400886549 - type: nauc_ndcg_at_20_max value: 42.05734889491642 - type: nauc_ndcg_at_20_std value: 61.18625152995308 - type: nauc_ndcg_at_3_diff1 value: -27.863732677834065 - type: nauc_ndcg_at_3_max value: 49.33557531113745 - type: nauc_ndcg_at_3_std value: 62.84465354034654 - type: nauc_ndcg_at_5_diff1 value: -52.82815341518435 - type: nauc_ndcg_at_5_max value: 46.74682049734401 - type: nauc_ndcg_at_5_std value: 67.26600512166976 - type: nauc_precision_at_1000_diff1 value: -26.43642783284165 - type: nauc_precision_at_1000_max value: 9.053955764041222 - type: nauc_precision_at_1000_std value: 23.300426218758595 - type: nauc_precision_at_100_diff1 value: -40.51161576611829 - type: nauc_precision_at_100_max value: 33.10808318106693 - type: nauc_precision_at_100_std value: 62.83706604019853 - type: nauc_precision_at_10_diff1 value: -73.73649178751282 - type: nauc_precision_at_10_max value: 49.488775845923996 - type: nauc_precision_at_10_std value: 72.4356540885278 - type: nauc_precision_at_1_diff1 value: -18.73249299719886 - type: nauc_precision_at_1_max value: 60.38748832866479 - type: nauc_precision_at_1_std value: 81.90943043884225 - type: nauc_precision_at_20_diff1 value: -45.011441031577334 - type: nauc_precision_at_20_max value: 39.463752119955885 - type: nauc_precision_at_20_std value: 56.67644762699536 - type: nauc_precision_at_3_diff1 value: -17.377622377622178 - type: nauc_precision_at_3_max value: 65.49950049950061 - type: nauc_precision_at_3_std value: 65.98901098901096 - type: nauc_precision_at_5_diff1 value: -59.953430407975524 - type: nauc_precision_at_5_max value: 61.44562508198852 - type: nauc_precision_at_5_std value: 71.93362193362212 - type: nauc_recall_at_1000_diff1 value: -15.691623330456695 - type: nauc_recall_at_1000_max value: 15.829741919417781 - type: nauc_recall_at_1000_std value: 49.972394503360526 - type: nauc_recall_at_100_diff1 value: -1.98100959017737 - type: nauc_recall_at_100_max value: 18.16585160155718 - type: nauc_recall_at_100_std value: 33.70517511173555 - type: nauc_recall_at_10_diff1 value: 3.7343160902801453 - type: nauc_recall_at_10_max value: 9.582727867819985 - type: nauc_recall_at_10_std value: 14.43434213623839 - type: nauc_recall_at_1_diff1 value: 3.871186260963678 - type: nauc_recall_at_1_max value: 10.291074663078922 - type: nauc_recall_at_1_std value: 15.473300411071794 - type: nauc_recall_at_20_diff1 value: 6.080011926090639 - type: nauc_recall_at_20_max value: 10.276334837294632 - type: nauc_recall_at_20_std value: 14.638854755961765 - type: nauc_recall_at_3_diff1 value: 13.491492355604207 - type: nauc_recall_at_3_max value: 7.583143673445603 - type: nauc_recall_at_3_std value: 8.718723099698545 - type: nauc_recall_at_5_diff1 value: 9.84701641956667 - type: nauc_recall_at_5_max value: 6.865633176042521 - type: nauc_recall_at_5_std value: 8.495525728773917 - type: ndcg_at_1 value: 88.0 - type: ndcg_at_10 value: 86.454 - type: ndcg_at_100 value: 69.773 - type: ndcg_at_1000 value: 62.449 - type: ndcg_at_20 value: 83.828 - type: ndcg_at_3 value: 88.94999999999999 - type: ndcg_at_5 value: 89.008 - type: precision_at_1 value: 92.0 - type: precision_at_10 value: 91.4 - type: precision_at_100 value: 72.5 - type: precision_at_1000 value: 27.63 - type: precision_at_20 value: 88.3 - type: precision_at_3 value: 93.333 - type: precision_at_5 value: 93.60000000000001 - type: recall_at_1 value: 0.242 - type: recall_at_10 value: 2.398 - type: recall_at_100 value: 17.687 - type: recall_at_1000 value: 59.114 - type: recall_at_20 value: 4.595 - type: recall_at_3 value: 0.744 - type: recall_at_5 value: 1.242 - task: type: Retrieval dataset: name: MTEB Touche2020 type: mteb/touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: main_score value: 28.291 - type: map_at_1 value: 3.3070000000000004 - type: map_at_10 value: 11.583 - type: map_at_100 value: 19.431 - type: map_at_1000 value: 21.117 - type: map_at_20 value: 15.565000000000001 - type: map_at_3 value: 7.331 - type: map_at_5 value: 8.388 - type: mrr_at_1 value: 42.857142857142854 - type: mrr_at_10 value: 56.073858114674444 - type: mrr_at_100 value: 56.91700589659773 - type: mrr_at_1000 value: 56.91700589659773 - type: mrr_at_20 value: 56.74104806757868 - type: mrr_at_3 value: 52.38095238095237 - type: mrr_at_5 value: 55.238095238095234 - type: nauc_map_at_1000_diff1 value: 44.356213349156434 - type: nauc_map_at_1000_max value: -9.51945221851252 - type: nauc_map_at_1000_std value: -1.8070977193404478 - type: nauc_map_at_100_diff1 value: 43.78087877345666 - type: nauc_map_at_100_max value: -10.847966846757402 - type: nauc_map_at_100_std value: -4.891700065316397 - type: nauc_map_at_10_diff1 value: 34.27489592465229 - type: nauc_map_at_10_max value: -6.162529272432887 - type: nauc_map_at_10_std value: -22.281331588136577 - type: nauc_map_at_1_diff1 value: 29.01257972849859 - type: nauc_map_at_1_max value: -24.063714845829665 - type: nauc_map_at_1_std value: -27.78034952027059 - type: nauc_map_at_20_diff1 value: 40.558911376597514 - type: nauc_map_at_20_max value: -10.318831261038511 - type: nauc_map_at_20_std value: -19.52067901729213 - type: nauc_map_at_3_diff1 value: 25.194838760959527 - type: nauc_map_at_3_max value: -15.096493900206298 - type: nauc_map_at_3_std value: -25.517170624203906 - type: nauc_map_at_5_diff1 value: 28.037488854336395 - type: nauc_map_at_5_max value: -9.84712775315703 - type: nauc_map_at_5_std value: -25.457199540701193 - type: nauc_mrr_at_1000_diff1 value: 30.415287662773423 - type: nauc_mrr_at_1000_max value: -14.955789832238223 - type: nauc_mrr_at_1000_std value: -16.193031932456734 - type: nauc_mrr_at_100_diff1 value: 30.415287662773423 - type: nauc_mrr_at_100_max value: -14.955789832238223 - type: nauc_mrr_at_100_std value: -16.193031932456734 - type: nauc_mrr_at_10_diff1 value: 29.944404093422804 - type: nauc_mrr_at_10_max value: -14.600755940210425 - type: nauc_mrr_at_10_std value: -16.96874938128955 - type: nauc_mrr_at_1_diff1 value: 28.24168623646855 - type: nauc_mrr_at_1_max value: -26.473390810938223 - type: nauc_mrr_at_1_std value: -19.847904251987405 - type: nauc_mrr_at_20_diff1 value: 30.603321907235532 - type: nauc_mrr_at_20_max value: -15.160654418428182 - type: nauc_mrr_at_20_std value: -15.87155825394732 - type: nauc_mrr_at_3_diff1 value: 32.20974550537424 - type: nauc_mrr_at_3_max value: -13.359331637910362 - type: nauc_mrr_at_3_std value: -15.35616967360276 - type: nauc_mrr_at_5_diff1 value: 31.276346997827627 - type: nauc_mrr_at_5_max value: -13.990797683176472 - type: nauc_mrr_at_5_std value: -18.02229007347959 - type: nauc_ndcg_at_1000_diff1 value: 39.77616180280105 - type: nauc_ndcg_at_1000_max value: -13.365497309128537 - type: nauc_ndcg_at_1000_std value: 17.50934476685922 - type: nauc_ndcg_at_100_diff1 value: 43.020478240192034 - type: nauc_ndcg_at_100_max value: -24.398334067917666 - type: nauc_ndcg_at_100_std value: 14.340010824013635 - type: nauc_ndcg_at_10_diff1 value: 36.633307595982686 - type: nauc_ndcg_at_10_max value: -18.16760752311136 - type: nauc_ndcg_at_10_std value: -15.997445904209398 - type: nauc_ndcg_at_1_diff1 value: 23.50897611036144 - type: nauc_ndcg_at_1_max value: -28.8780581730975 - type: nauc_ndcg_at_1_std value: -17.956802591815965 - type: nauc_ndcg_at_20_diff1 value: 40.85273458033189 - type: nauc_ndcg_at_20_max value: -22.637229151669523 - type: nauc_ndcg_at_20_std value: -15.36108209125738 - type: nauc_ndcg_at_3_diff1 value: 26.38130973415932 - type: nauc_ndcg_at_3_max value: -17.8298646711695 - type: nauc_ndcg_at_3_std value: -15.209872297038867 - type: nauc_ndcg_at_5_diff1 value: 31.26935981147898 - type: nauc_ndcg_at_5_max value: -15.836371150882874 - type: nauc_ndcg_at_5_std value: -16.994309600153883 - type: nauc_precision_at_1000_diff1 value: -17.30286313876566 - type: nauc_precision_at_1000_max value: 44.37179979868095 - type: nauc_precision_at_1000_std value: 29.75831973979209 - type: nauc_precision_at_100_diff1 value: 30.789201196601184 - type: nauc_precision_at_100_max value: -3.6870457287567127 - type: nauc_precision_at_100_std value: 67.03237995133328 - type: nauc_precision_at_10_diff1 value: 43.12466785767051 - type: nauc_precision_at_10_max value: -10.719154994043603 - type: nauc_precision_at_10_std value: -8.136545413364837 - type: nauc_precision_at_1_diff1 value: 28.24168623646855 - type: nauc_precision_at_1_max value: -26.473390810938223 - type: nauc_precision_at_1_std value: -19.847904251987405 - type: nauc_precision_at_20_diff1 value: 52.8237598859693 - type: nauc_precision_at_20_max value: -15.964075352696169 - type: nauc_precision_at_20_std value: 2.3317371245526357 - type: nauc_precision_at_3_diff1 value: 29.43889942617868 - type: nauc_precision_at_3_max value: -9.45879416331275 - type: nauc_precision_at_3_std value: -17.368167617615043 - type: nauc_precision_at_5_diff1 value: 33.94543373423699 - type: nauc_precision_at_5_max value: -4.957278927627976 - type: nauc_precision_at_5_std value: -20.583725154303927 - type: nauc_recall_at_1000_diff1 value: 15.184063668664786 - type: nauc_recall_at_1000_max value: 21.47942517330889 - type: nauc_recall_at_1000_std value: 67.10902844029505 - type: nauc_recall_at_100_diff1 value: 34.31184666134344 - type: nauc_recall_at_100_max value: -19.59459765457681 - type: nauc_recall_at_100_std value: 38.97309991608786 - type: nauc_recall_at_10_diff1 value: 34.9024804167275 - type: nauc_recall_at_10_max value: -9.697688212953077 - type: nauc_recall_at_10_std value: -19.026416862546462 - type: nauc_recall_at_1_diff1 value: 29.01257972849859 - type: nauc_recall_at_1_max value: -24.063714845829665 - type: nauc_recall_at_1_std value: -27.78034952027059 - type: nauc_recall_at_20_diff1 value: 40.994356869986134 - type: nauc_recall_at_20_max value: -17.387720169060177 - type: nauc_recall_at_20_std value: -13.391920534091096 - type: nauc_recall_at_3_diff1 value: 23.982332098303335 - type: nauc_recall_at_3_max value: -13.23549015388994 - type: nauc_recall_at_3_std value: -24.967396496125627 - type: nauc_recall_at_5_diff1 value: 27.57909659591337 - type: nauc_recall_at_5_max value: -7.380015117336482 - type: nauc_recall_at_5_std value: -26.115325566585994 - type: ndcg_at_1 value: 40.816 - type: ndcg_at_10 value: 28.291 - type: ndcg_at_100 value: 41.814 - type: ndcg_at_1000 value: 52.762 - type: ndcg_at_20 value: 31.313999999999997 - type: ndcg_at_3 value: 35.892 - type: ndcg_at_5 value: 29.833 - type: precision_at_1 value: 42.857 - type: precision_at_10 value: 23.878 - type: precision_at_100 value: 8.449 - type: precision_at_1000 value: 1.592 - type: precision_at_20 value: 21.02 - type: precision_at_3 value: 37.415 - type: precision_at_5 value: 28.163 - type: recall_at_1 value: 3.3070000000000004 - type: recall_at_10 value: 17.412 - type: recall_at_100 value: 51.685 - type: recall_at_1000 value: 85.87 - type: recall_at_20 value: 29.047 - type: recall_at_3 value: 8.307 - type: recall_at_5 value: 10.395999999999999 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de metrics: - type: accuracy value: 88.0810546875 - type: ap value: 35.7418777987341 - type: ap_weighted value: 35.7418777987341 - type: f1 value: 73.74925430452048 - type: f1_weighted value: 90.07041976974219 - type: main_score value: 88.0810546875 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 78.23429541595924 - type: f1 value: 78.40457663589217 - type: f1_weighted value: 77.77448608245429 - type: main_score value: 78.23429541595924 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: main_score value: 52.78205531814135 - type: v_measure value: 52.78205531814135 - type: v_measure_std value: 1.165738532699205 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cosine_accuracy value: 86.51725576682364 - type: cosine_accuracy_threshold value: 87.56462335586548 - type: cosine_ap value: 75.93126232010206 - type: cosine_f1 value: 69.28154353896929 - type: cosine_f1_threshold value: 85.87252497673035 - type: cosine_precision value: 66.7563600782779 - type: cosine_recall value: 72.00527704485488 - type: dot_accuracy value: 86.51725576682364 - type: dot_accuracy_threshold value: 87.56462931632996 - type: dot_ap value: 75.93126248123106 - type: dot_f1 value: 69.28154353896929 - type: dot_f1_threshold value: 85.87252497673035 - type: dot_precision value: 66.7563600782779 - type: dot_recall value: 72.00527704485488 - type: euclidean_accuracy value: 86.51725576682364 - type: euclidean_accuracy_threshold value: 49.87057447433472 - type: euclidean_ap value: 75.93122690902605 - type: euclidean_f1 value: 69.28154353896929 - type: euclidean_f1_threshold value: 53.155386447906494 - type: euclidean_precision value: 66.7563600782779 - type: euclidean_recall value: 72.00527704485488 - type: main_score value: 75.93126248123106 - type: manhattan_accuracy value: 86.51129522560649 - type: manhattan_accuracy_threshold value: 2384.7103118896484 - type: manhattan_ap value: 75.90557012840495 - type: manhattan_f1 value: 69.18795851252213 - type: manhattan_f1_threshold value: 2518.6872482299805 - type: manhattan_precision value: 66.44800777453838 - type: manhattan_recall value: 72.16358839050132 - type: max_ap value: 75.93126248123106 - type: max_f1 value: 69.28154353896929 - type: max_precision value: 66.7563600782779 - type: max_recall value: 72.16358839050132 - type: similarity_accuracy value: 86.51725576682364 - type: similarity_accuracy_threshold value: 87.56462335586548 - type: similarity_ap value: 75.93126232010206 - type: similarity_f1 value: 69.28154353896929 - type: similarity_f1_threshold value: 85.87252497673035 - type: similarity_precision value: 66.7563600782779 - type: similarity_recall value: 72.00527704485488 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cosine_accuracy value: 89.4787907012846 - type: cosine_accuracy_threshold value: 84.99394059181213 - type: cosine_ap value: 87.36213612629781 - type: cosine_f1 value: 79.33653810869325 - type: cosine_f1_threshold value: 83.42517614364624 - type: cosine_precision value: 75.79050690598051 - type: cosine_recall value: 83.23067446874038 - type: dot_accuracy value: 89.4787907012846 - type: dot_accuracy_threshold value: 84.99394059181213 - type: dot_ap value: 87.36212758027688 - type: dot_f1 value: 79.33653810869325 - type: dot_f1_threshold value: 83.4251880645752 - type: dot_precision value: 75.79050690598051 - type: dot_recall value: 83.23067446874038 - type: euclidean_accuracy value: 89.4787907012846 - type: euclidean_accuracy_threshold value: 54.78330850601196 - type: euclidean_ap value: 87.36212210446135 - type: euclidean_f1 value: 79.33653810869325 - type: euclidean_f1_threshold value: 57.57572650909424 - type: euclidean_precision value: 75.79050690598051 - type: euclidean_recall value: 83.23067446874038 - type: main_score value: 87.40831622813965 - type: manhattan_accuracy value: 89.4787907012846 - type: manhattan_accuracy_threshold value: 2580.6427001953125 - type: manhattan_ap value: 87.40831622813965 - type: manhattan_f1 value: 79.41061043918799 - type: manhattan_f1_threshold value: 2771.9974517822266 - type: manhattan_precision value: 73.99109101788444 - type: manhattan_recall value: 85.68678780412688 - type: max_ap value: 87.40831622813965 - type: max_f1 value: 79.41061043918799 - type: max_precision value: 75.79050690598051 - type: max_recall value: 85.68678780412688 - type: similarity_accuracy value: 89.4787907012846 - type: similarity_accuracy_threshold value: 84.99394059181213 - type: similarity_ap value: 87.36213612629781 - type: similarity_f1 value: 79.33653810869325 - type: similarity_f1_threshold value: 83.42517614364624 - type: similarity_precision value: 75.79050690598051 - type: similarity_recall value: 83.23067446874038 - task: type: STS dataset: name: MTEB AFQMC type: C-MTEB/AFQMC config: default split: validation revision: b44c3b011063adb25877c13823db83bb193913c4 metrics: - type: cosine_pearson value: 41.64386835570561 - type: cosine_spearman value: 43.19379151087761 - type: euclidean_pearson value: 41.50918458775045 - type: euclidean_spearman value: 43.19379150765412 - type: main_score value: 43.19379151087761 - type: manhattan_pearson value: 41.44879311570844 - type: manhattan_spearman value: 43.1331569623375 - type: pearson value: 41.64386835570561 - type: spearman value: 43.19379151087761 - task: type: STS dataset: name: MTEB ATEC type: C-MTEB/ATEC config: default split: test revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865 metrics: - type: cosine_pearson value: 48.743301803415385 - type: cosine_spearman value: 50.1649346804881 - type: euclidean_pearson value: 52.18999372105992 - type: euclidean_spearman value: 50.16493130254488 - type: main_score value: 50.1649346804881 - type: manhattan_pearson value: 52.18395800985427 - type: manhattan_spearman value: 50.14763571495949 - type: pearson value: 48.743301803415385 - type: spearman value: 50.1649346804881 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 51.535999999999994 - type: f1 value: 47.4898954358022 - type: f1_weighted value: 47.48989543580219 - type: main_score value: 51.535999999999994 - task: type: STS dataset: name: MTEB BQ type: C-MTEB/BQ config: default split: test revision: e3dda5e115e487b39ec7e618c0c6a29137052a55 metrics: - type: cosine_pearson value: 55.419452799381105 - type: cosine_spearman value: 56.293792343775564 - type: euclidean_pearson value: 55.36536266265162 - type: euclidean_spearman value: 56.29378541472789 - type: main_score value: 56.293792343775564 - type: manhattan_pearson value: 55.49541403940816 - type: manhattan_spearman value: 56.44957645829305 - type: pearson value: 55.419452799381105 - type: spearman value: 56.293792343775564 - task: type: Clustering dataset: name: MTEB CLSClusteringP2P type: C-MTEB/CLSClusteringP2P config: default split: test revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476 metrics: - type: main_score value: 55.22891270992726 - type: v_measure value: 55.22891270992726 - type: v_measure_std value: 1.2285658700007676 - task: type: Clustering dataset: name: MTEB CLSClusteringS2S type: C-MTEB/CLSClusteringS2S config: default split: test revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f metrics: - type: main_score value: 50.63839978827497 - type: v_measure value: 50.63839978827497 - type: v_measure_std value: 1.242473805835589 - task: type: Reranking dataset: name: MTEB CMedQAv1 type: C-MTEB/CMedQAv1-reranking config: default split: test revision: 8d7f1e942507dac42dc58017c1a001c3717da7df metrics: - type: main_score value: 85.96024801465656 - type: map value: 85.96024801465656 - type: mrr value: 88.43456349206349 - type: nAUC_map_diff1 value: 57.337140940549446 - type: nAUC_map_max value: 62.9958193712711 - type: nAUC_map_std value: 31.11271008737696 - type: nAUC_mrr_diff1 value: 65.1415639393879 - type: nAUC_mrr_max value: 72.03136151651076 - type: nAUC_mrr_std value: 41.81297572680883 - task: type: Reranking dataset: name: MTEB CMedQAv2 type: C-MTEB/CMedQAv2-reranking config: default split: test revision: 23d186750531a14a0357ca22cd92d712fd512ea0 metrics: - type: main_score value: 86.16019791195917 - type: map value: 86.16019791195917 - type: mrr value: 88.43142857142857 - type: nAUC_map_diff1 value: 65.73941836563229 - type: nAUC_map_max value: 70.18844498133647 - type: nAUC_map_std value: 20.764350257887205 - type: nAUC_mrr_diff1 value: 72.29089490704929 - type: nAUC_mrr_max value: 79.06040041480205 - type: nAUC_mrr_std value: 29.68793685691943 - task: type: Retrieval dataset: name: MTEB CmedqaRetrieval type: C-MTEB/CmedqaRetrieval config: default split: dev revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301 metrics: - type: main_score value: 48.878 - type: map_at_1 value: 30.990000000000002 - type: map_at_10 value: 43.101 - type: map_at_100 value: 44.799 - type: map_at_1000 value: 44.917 - type: map_at_20 value: 44.024 - type: map_at_3 value: 39.495999999999995 - type: map_at_5 value: 41.619 - type: mrr_at_1 value: 46.036509127281825 - type: mrr_at_10 value: 52.58885157797379 - type: mrr_at_100 value: 53.491086020573874 - type: mrr_at_1000 value: 53.53388374466903 - type: mrr_at_20 value: 53.106963611093015 - type: mrr_at_3 value: 50.75435525548042 - type: mrr_at_5 value: 51.810869384012626 - type: nauc_map_at_1000_diff1 value: 52.41543525690992 - type: nauc_map_at_1000_max value: 41.553008933748075 - type: nauc_map_at_1000_std value: -10.32929204180765 - type: nauc_map_at_100_diff1 value: 52.381590955115 - type: nauc_map_at_100_max value: 41.528487983429805 - type: nauc_map_at_100_std value: -10.381249064468227 - type: nauc_map_at_10_diff1 value: 52.16869784800555 - type: nauc_map_at_10_max value: 40.50593347217273 - type: nauc_map_at_10_std value: -11.48440163831477 - type: nauc_map_at_1_diff1 value: 54.37950698425308 - type: nauc_map_at_1_max value: 27.99076263656578 - type: nauc_map_at_1_std value: -13.387743308583936 - type: nauc_map_at_20_diff1 value: 52.26486912651778 - type: nauc_map_at_20_max value: 41.1289112053278 - type: nauc_map_at_20_std value: -10.836952272087673 - type: nauc_map_at_3_diff1 value: 52.22837162881318 - type: nauc_map_at_3_max value: 37.28247882586101 - type: nauc_map_at_3_std value: -12.802844493692689 - type: nauc_map_at_5_diff1 value: 52.14901352070414 - type: nauc_map_at_5_max value: 39.30755835274481 - type: nauc_map_at_5_std value: -12.090080928908693 - type: nauc_mrr_at_1000_diff1 value: 61.29362223939591 - type: nauc_mrr_at_1000_max value: 49.504464268268734 - type: nauc_mrr_at_1000_std value: -6.192362955819179 - type: nauc_mrr_at_100_diff1 value: 61.27462778479297 - type: nauc_mrr_at_100_max value: 49.501426021534314 - type: nauc_mrr_at_100_std value: -6.187965501873083 - type: nauc_mrr_at_10_diff1 value: 61.26052149225271 - type: nauc_mrr_at_10_max value: 49.41033526947803 - type: nauc_mrr_at_10_std value: -6.480678335278449 - type: nauc_mrr_at_1_diff1 value: 65.17652550565293 - type: nauc_mrr_at_1_max value: 48.51010542543353 - type: nauc_mrr_at_1_std value: -7.368387510155559 - type: nauc_mrr_at_20_diff1 value: 61.21989112831903 - type: nauc_mrr_at_20_max value: 49.48689488743648 - type: nauc_mrr_at_20_std value: -6.243372597148973 - type: nauc_mrr_at_3_diff1 value: 61.9547565182502 - type: nauc_mrr_at_3_max value: 49.66360537204246 - type: nauc_mrr_at_3_std value: -6.720743933293509 - type: nauc_mrr_at_5_diff1 value: 61.496871352071125 - type: nauc_mrr_at_5_max value: 49.5678171266 - type: nauc_mrr_at_5_std value: -6.6874891389325315 - type: nauc_ndcg_at_1000_diff1 value: 54.01531878364172 - type: nauc_ndcg_at_1000_max value: 45.34209378824649 - type: nauc_ndcg_at_1000_std value: -6.944248444224854 - type: nauc_ndcg_at_100_diff1 value: 53.346748878441474 - type: nauc_ndcg_at_100_max value: 45.14003986050034 - type: nauc_ndcg_at_100_std value: -7.005085495055454 - type: nauc_ndcg_at_10_diff1 value: 52.810226490598126 - type: nauc_ndcg_at_10_max value: 43.07795669853919 - type: nauc_ndcg_at_10_std value: -10.034928499762781 - type: nauc_ndcg_at_1_diff1 value: 65.17652550565293 - type: nauc_ndcg_at_1_max value: 48.51010542543353 - type: nauc_ndcg_at_1_std value: -7.368387510155559 - type: nauc_ndcg_at_20_diff1 value: 52.804323719089496 - type: nauc_ndcg_at_20_max value: 43.997732911446015 - type: nauc_ndcg_at_20_std value: -8.676868642315817 - type: nauc_ndcg_at_3_diff1 value: 53.674179686012266 - type: nauc_ndcg_at_3_max value: 44.060837370301144 - type: nauc_ndcg_at_3_std value: -9.037885820033154 - type: nauc_ndcg_at_5_diff1 value: 53.07635969540409 - type: nauc_ndcg_at_5_max value: 43.25087811115596 - type: nauc_ndcg_at_5_std value: -9.846858466002635 - type: nauc_precision_at_1000_diff1 value: 2.084666373040924 - type: nauc_precision_at_1000_max value: 28.42640828471192 - type: nauc_precision_at_1000_std value: 22.933705383301913 - type: nauc_precision_at_100_diff1 value: 9.069908068584077 - type: nauc_precision_at_100_max value: 37.06160191646647 - type: nauc_precision_at_100_std value: 21.54927708468064 - type: nauc_precision_at_10_diff1 value: 24.20089272765347 - type: nauc_precision_at_10_max value: 46.03710227995257 - type: nauc_precision_at_10_std value: 7.738238301903013 - type: nauc_precision_at_1_diff1 value: 65.17652550565293 - type: nauc_precision_at_1_max value: 48.51010542543353 - type: nauc_precision_at_1_std value: -7.368387510155559 - type: nauc_precision_at_20_diff1 value: 19.201920174779982 - type: nauc_precision_at_20_max value: 44.13300802679899 - type: nauc_precision_at_20_std value: 13.160562176619225 - type: nauc_precision_at_3_diff1 value: 36.167789437136456 - type: nauc_precision_at_3_max value: 48.8924513883858 - type: nauc_precision_at_3_std value: 0.8689238709283229 - type: nauc_precision_at_5_diff1 value: 29.82427928985585 - type: nauc_precision_at_5_max value: 47.80109745837339 - type: nauc_precision_at_5_std value: 3.9881901859384796 - type: nauc_recall_at_1000_diff1 value: 33.90580711293753 - type: nauc_recall_at_1000_max value: 63.570522808962416 - type: nauc_recall_at_1000_std value: 51.2943861130984 - type: nauc_recall_at_100_diff1 value: 36.04779122344113 - type: nauc_recall_at_100_max value: 40.822667691791864 - type: nauc_recall_at_100_std value: 5.0429741472701135 - type: nauc_recall_at_10_diff1 value: 42.796036272531346 - type: nauc_recall_at_10_max value: 37.11160162276398 - type: nauc_recall_at_10_std value: -10.853453090588996 - type: nauc_recall_at_1_diff1 value: 54.37950698425308 - type: nauc_recall_at_1_max value: 27.99076263656578 - type: nauc_recall_at_1_std value: -13.387743308583936 - type: nauc_recall_at_20_diff1 value: 40.701617167157856 - type: nauc_recall_at_20_max value: 38.69709452685056 - type: nauc_recall_at_20_std value: -6.236014503299754 - type: nauc_recall_at_3_diff1 value: 47.008724772852986 - type: nauc_recall_at_3_max value: 36.18196717387915 - type: nauc_recall_at_3_std value: -12.56849547435393 - type: nauc_recall_at_5_diff1 value: 44.83401607708702 - type: nauc_recall_at_5_max value: 37.2376150434735 - type: nauc_recall_at_5_std value: -11.98576967557474 - type: ndcg_at_1 value: 46.037 - type: ndcg_at_10 value: 48.878 - type: ndcg_at_100 value: 55.559000000000005 - type: ndcg_at_1000 value: 57.609 - type: ndcg_at_20 value: 51.376999999999995 - type: ndcg_at_3 value: 45.115 - type: ndcg_at_5 value: 46.69 - type: precision_at_1 value: 46.037 - type: precision_at_10 value: 10.168000000000001 - type: precision_at_100 value: 1.5599999999999998 - type: precision_at_1000 value: 0.183 - type: precision_at_20 value: 5.923 - type: precision_at_3 value: 24.948 - type: precision_at_5 value: 17.444000000000003 - type: recall_at_1 value: 30.990000000000002 - type: recall_at_10 value: 56.45400000000001 - type: recall_at_100 value: 84.285 - type: recall_at_1000 value: 98.03699999999999 - type: recall_at_20 value: 64.936 - type: recall_at_3 value: 43.963 - type: recall_at_5 value: 49.71 - task: type: PairClassification dataset: name: MTEB Cmnli type: C-MTEB/CMNLI config: default split: validation revision: 41bc36f332156f7adc9e38f53777c959b2ae9766 metrics: - type: cosine_accuracy value: 72.39927841250751 - type: cosine_accuracy_threshold value: 75.96232295036316 - type: cosine_ap value: 80.23711282712038 - type: cosine_f1 value: 74.77399913904435 - type: cosine_f1_threshold value: 74.5398998260498 - type: cosine_precision value: 69.27218344965105 - type: cosine_recall value: 81.22515782090251 - type: dot_accuracy value: 72.39927841250751 - type: dot_accuracy_threshold value: 75.96232891082764 - type: dot_ap value: 80.2592745288548 - type: dot_f1 value: 74.77399913904435 - type: dot_f1_threshold value: 74.5398998260498 - type: dot_precision value: 69.27218344965105 - type: dot_recall value: 81.22515782090251 - type: euclidean_accuracy value: 72.39927841250751 - type: euclidean_accuracy_threshold value: 69.3363904953003 - type: euclidean_ap value: 80.23711023366968 - type: euclidean_f1 value: 74.77399913904435 - type: euclidean_f1_threshold value: 71.35838270187378 - type: euclidean_precision value: 69.27218344965105 - type: euclidean_recall value: 81.22515782090251 - type: main_score value: 80.2592745288548 - type: manhattan_accuracy value: 72.38725195429946 - type: manhattan_accuracy_threshold value: 3262.3924255371094 - type: manhattan_ap value: 80.20796281059799 - type: manhattan_f1 value: 74.78589922326229 - type: manhattan_f1_threshold value: 3522.083282470703 - type: manhattan_precision value: 65.13443191673895 - type: manhattan_recall value: 87.79518353986438 - type: max_ap value: 80.2592745288548 - type: max_f1 value: 74.78589922326229 - type: max_precision value: 69.27218344965105 - type: max_recall value: 87.79518353986438 - type: similarity_accuracy value: 72.39927841250751 - type: similarity_accuracy_threshold value: 75.96232295036316 - type: similarity_ap value: 80.23711282712038 - type: similarity_f1 value: 74.77399913904435 - type: similarity_f1_threshold value: 74.5398998260498 - type: similarity_precision value: 69.27218344965105 - type: similarity_recall value: 81.22515782090251 - task: type: Retrieval dataset: name: MTEB CovidRetrieval type: C-MTEB/CovidRetrieval config: default split: dev revision: 1271c7809071a13532e05f25fb53511ffce77117 metrics: - type: main_score value: 85.11800000000001 - type: map_at_1 value: 73.63 - type: map_at_10 value: 81.679 - type: map_at_100 value: 81.857 - type: map_at_1000 value: 81.85900000000001 - type: map_at_20 value: 81.797 - type: map_at_3 value: 80.137 - type: map_at_5 value: 81.185 - type: mrr_at_1 value: 73.97260273972603 - type: mrr_at_10 value: 81.75707093515315 - type: mrr_at_100 value: 81.93543323000621 - type: mrr_at_1000 value: 81.93756828328048 - type: mrr_at_20 value: 81.87548986547937 - type: mrr_at_3 value: 80.31260976466457 - type: mrr_at_5 value: 81.29785739374785 - type: nauc_map_at_1000_diff1 value: 81.93788057355742 - type: nauc_map_at_1000_max value: 35.99041105416496 - type: nauc_map_at_1000_std value: -48.78171089687064 - type: nauc_map_at_100_diff1 value: 81.93620480570421 - type: nauc_map_at_100_max value: 35.99750026667062 - type: nauc_map_at_100_std value: -48.77105969575747 - type: nauc_map_at_10_diff1 value: 81.91994980094535 - type: nauc_map_at_10_max value: 35.936389715002434 - type: nauc_map_at_10_std value: -49.17909322969262 - type: nauc_map_at_1_diff1 value: 84.01876408819771 - type: nauc_map_at_1_max value: 36.70512051150278 - type: nauc_map_at_1_std value: -43.39242709520668 - type: nauc_map_at_20_diff1 value: 81.89629060612107 - type: nauc_map_at_20_max value: 35.998722436607224 - type: nauc_map_at_20_std value: -48.795137145085114 - type: nauc_map_at_3_diff1 value: 81.65169701784126 - type: nauc_map_at_3_max value: 34.21369237086454 - type: nauc_map_at_3_std value: -51.38254219438024 - type: nauc_map_at_5_diff1 value: 81.89142627086459 - type: nauc_map_at_5_max value: 35.690016330033146 - type: nauc_map_at_5_std value: -50.19202102899405 - type: nauc_mrr_at_1000_diff1 value: 81.75999363957315 - type: nauc_mrr_at_1000_max value: 36.136685517402135 - type: nauc_mrr_at_1000_std value: -48.352638487245194 - type: nauc_mrr_at_100_diff1 value: 81.75833537458423 - type: nauc_mrr_at_100_max value: 36.14377951768674 - type: nauc_mrr_at_100_std value: -48.34200730825885 - type: nauc_mrr_at_10_diff1 value: 81.74393774612405 - type: nauc_mrr_at_10_max value: 36.08089403053739 - type: nauc_mrr_at_10_std value: -48.75600700693392 - type: nauc_mrr_at_1_diff1 value: 83.56151294191774 - type: nauc_mrr_at_1_max value: 36.82117748749014 - type: nauc_mrr_at_1_std value: -42.64032550449816 - type: nauc_mrr_at_20_diff1 value: 81.71893460337381 - type: nauc_mrr_at_20_max value: 36.144473698390016 - type: nauc_mrr_at_20_std value: -48.36772596598759 - type: nauc_mrr_at_3_diff1 value: 81.31323444477003 - type: nauc_mrr_at_3_max value: 34.749717583977876 - type: nauc_mrr_at_3_std value: -50.49999044146871 - type: nauc_mrr_at_5_diff1 value: 81.66194334976237 - type: nauc_mrr_at_5_max value: 35.93608825443919 - type: nauc_mrr_at_5_std value: -49.61915090103402 - type: nauc_ndcg_at_1000_diff1 value: 81.55763410469278 - type: nauc_ndcg_at_1000_max value: 36.42322037020392 - type: nauc_ndcg_at_1000_std value: -48.7742078811271 - type: nauc_ndcg_at_100_diff1 value: 81.48331573837318 - type: nauc_ndcg_at_100_max value: 36.71054353074742 - type: nauc_ndcg_at_100_std value: -48.369435549215076 - type: nauc_ndcg_at_10_diff1 value: 81.29394592276353 - type: nauc_ndcg_at_10_max value: 36.4517074035948 - type: nauc_ndcg_at_10_std value: -50.20090355449128 - type: nauc_ndcg_at_1_diff1 value: 83.56151294191774 - type: nauc_ndcg_at_1_max value: 36.82117748749014 - type: nauc_ndcg_at_1_std value: -42.64032550449816 - type: nauc_ndcg_at_20_diff1 value: 81.20254779696464 - type: nauc_ndcg_at_20_max value: 36.6482927189098 - type: nauc_ndcg_at_20_std value: -48.571825313722385 - type: nauc_ndcg_at_3_diff1 value: 80.66603026862907 - type: nauc_ndcg_at_3_max value: 33.240952475122505 - type: nauc_ndcg_at_3_std value: -54.35238318429462 - type: nauc_ndcg_at_5_diff1 value: 81.19993125865157 - type: nauc_ndcg_at_5_max value: 35.94971755293486 - type: nauc_ndcg_at_5_std value: -52.56998418921957 - type: nauc_precision_at_1000_diff1 value: -40.60462498343001 - type: nauc_precision_at_1000_max value: 15.136963270766103 - type: nauc_precision_at_1000_std value: 53.270315269342284 - type: nauc_precision_at_100_diff1 value: -14.678538901117824 - type: nauc_precision_at_100_max value: 31.227486523061042 - type: nauc_precision_at_100_std value: 44.407313386101016 - type: nauc_precision_at_10_diff1 value: 37.992508676096854 - type: nauc_precision_at_10_max value: 32.20617803639044 - type: nauc_precision_at_10_std value: -24.651272381791788 - type: nauc_precision_at_1_diff1 value: 83.56151294191774 - type: nauc_precision_at_1_max value: 36.82117748749014 - type: nauc_precision_at_1_std value: -42.64032550449816 - type: nauc_precision_at_20_diff1 value: 22.54300244947699 - type: nauc_precision_at_20_max value: 32.36876652389686 - type: nauc_precision_at_20_std value: 1.7015124554747025 - type: nauc_precision_at_3_diff1 value: 68.81478714821245 - type: nauc_precision_at_3_max value: 26.41457436423825 - type: nauc_precision_at_3_std value: -61.40398331777571 - type: nauc_precision_at_5_diff1 value: 58.69105759402332 - type: nauc_precision_at_5_max value: 32.97451532708787 - type: nauc_precision_at_5_std value: -50.69790705388475 - type: nauc_recall_at_1000_diff1 value: 72.21681648749731 - type: nauc_recall_at_1000_max value: 79.571695197873 - type: nauc_recall_at_1000_std value: -10.249054570729779 - type: nauc_recall_at_100_diff1 value: 69.85195714312137 - type: nauc_recall_at_100_max value: 82.29327419137651 - type: nauc_recall_at_100_std value: 9.272355364496024 - type: nauc_recall_at_10_diff1 value: 75.88961282317214 - type: nauc_recall_at_10_max value: 42.33799173568281 - type: nauc_recall_at_10_std value: -59.92110791808928 - type: nauc_recall_at_1_diff1 value: 84.01876408819771 - type: nauc_recall_at_1_max value: 36.70512051150278 - type: nauc_recall_at_1_std value: -43.39242709520668 - type: nauc_recall_at_20_diff1 value: 71.54902295882445 - type: nauc_recall_at_20_max value: 48.23402574935853 - type: nauc_recall_at_20_std value: -36.19907601808263 - type: nauc_recall_at_3_diff1 value: 76.77746829240562 - type: nauc_recall_at_3_max value: 27.957036148475822 - type: nauc_recall_at_3_std value: -68.61906130536217 - type: nauc_recall_at_5_diff1 value: 77.45476586755301 - type: nauc_recall_at_5_max value: 37.49706408332405 - type: nauc_recall_at_5_std value: -68.35743165578008 - type: ndcg_at_1 value: 73.973 - type: ndcg_at_10 value: 85.11800000000001 - type: ndcg_at_100 value: 85.918 - type: ndcg_at_1000 value: 85.994 - type: ndcg_at_20 value: 85.529 - type: ndcg_at_3 value: 82.185 - type: ndcg_at_5 value: 84.003 - type: precision_at_1 value: 73.973 - type: precision_at_10 value: 9.663 - type: precision_at_100 value: 1.002 - type: precision_at_1000 value: 0.101 - type: precision_at_20 value: 4.91 - type: precision_at_3 value: 29.505 - type: precision_at_5 value: 18.609 - type: recall_at_1 value: 73.63 - type: recall_at_10 value: 95.574 - type: recall_at_100 value: 99.157 - type: recall_at_1000 value: 99.789 - type: recall_at_20 value: 97.155 - type: recall_at_3 value: 87.908 - type: recall_at_5 value: 92.255 - task: type: Retrieval dataset: name: MTEB DuRetrieval type: C-MTEB/DuRetrieval config: default split: dev revision: a1a333e290fe30b10f3f56498e3a0d911a693ced metrics: - type: main_score value: 85.546 - type: map_at_1 value: 24.726 - type: map_at_10 value: 77.398 - type: map_at_100 value: 80.512 - type: map_at_1000 value: 80.542 - type: map_at_20 value: 79.89 - type: map_at_3 value: 52.294 - type: map_at_5 value: 66.737 - type: mrr_at_1 value: 84.65 - type: mrr_at_10 value: 90.20547619047615 - type: mrr_at_100 value: 90.27505685543193 - type: mrr_at_1000 value: 90.27765420779204 - type: mrr_at_20 value: 90.24865983066637 - type: mrr_at_3 value: 89.79166666666661 - type: mrr_at_5 value: 90.11666666666662 - type: nauc_map_at_1000_diff1 value: -1.215140973709367 - type: nauc_map_at_1000_max value: 33.108520516658615 - type: nauc_map_at_1000_std value: 6.758685957507468 - type: nauc_map_at_100_diff1 value: -1.207757544020437 - type: nauc_map_at_100_max value: 33.155285829506695 - type: nauc_map_at_100_std value: 6.754183039785769 - type: nauc_map_at_10_diff1 value: 3.5413573051903047 - type: nauc_map_at_10_max value: 29.006738480989004 - type: nauc_map_at_10_std value: -4.221060526808343 - type: nauc_map_at_1_diff1 value: 44.81310475715047 - type: nauc_map_at_1_max value: -8.316916162518954 - type: nauc_map_at_1_std value: -32.633488423702175 - type: nauc_map_at_20_diff1 value: -0.3252090899485349 - type: nauc_map_at_20_max value: 32.95421638619362 - type: nauc_map_at_20_std value: 4.790196784943749 - type: nauc_map_at_3_diff1 value: 27.674115634718188 - type: nauc_map_at_3_max value: 3.177400343231302 - type: nauc_map_at_3_std value: -29.847459692956424 - type: nauc_map_at_5_diff1 value: 16.298632315753334 - type: nauc_map_at_5_max value: 14.449275595437436 - type: nauc_map_at_5_std value: -21.725650182682045 - type: nauc_mrr_at_1000_diff1 value: 16.12774381335271 - type: nauc_mrr_at_1000_max value: 44.56306933921215 - type: nauc_mrr_at_1000_std value: 14.246686478153414 - type: nauc_mrr_at_100_diff1 value: 16.11003916190221 - type: nauc_mrr_at_100_max value: 44.57758602293314 - type: nauc_mrr_at_100_std value: 14.266539208498525 - type: nauc_mrr_at_10_diff1 value: 16.04707311153708 - type: nauc_mrr_at_10_max value: 44.88607103190051 - type: nauc_mrr_at_10_std value: 14.603834034677801 - type: nauc_mrr_at_1_diff1 value: 20.30116277958172 - type: nauc_mrr_at_1_max value: 35.534881794166715 - type: nauc_mrr_at_1_std value: 5.000207127083605 - type: nauc_mrr_at_20_diff1 value: 16.143921778805996 - type: nauc_mrr_at_20_max value: 44.707449455864804 - type: nauc_mrr_at_20_std value: 14.383713595447947 - type: nauc_mrr_at_3_diff1 value: 15.69790889754428 - type: nauc_mrr_at_3_max value: 46.045452572344196 - type: nauc_mrr_at_3_std value: 15.609671398185512 - type: nauc_mrr_at_5_diff1 value: 16.033204218512513 - type: nauc_mrr_at_5_max value: 45.05227844402774 - type: nauc_mrr_at_5_std value: 14.613736935489879 - type: nauc_ndcg_at_1000_diff1 value: -1.4216773326070282 - type: nauc_ndcg_at_1000_max value: 41.10910111412521 - type: nauc_ndcg_at_1000_std value: 16.86477734879313 - type: nauc_ndcg_at_100_diff1 value: -2.003960756412403 - type: nauc_ndcg_at_100_max value: 41.70519523020085 - type: nauc_ndcg_at_100_std value: 17.377814224364503 - type: nauc_ndcg_at_10_diff1 value: 0.12660867221458405 - type: nauc_ndcg_at_10_max value: 37.322966910455804 - type: nauc_ndcg_at_10_std value: 9.042664903565756 - type: nauc_ndcg_at_1_diff1 value: 20.30116277958172 - type: nauc_ndcg_at_1_max value: 35.534881794166715 - type: nauc_ndcg_at_1_std value: 5.000207127083605 - type: nauc_ndcg_at_20_diff1 value: -1.156115903648345 - type: nauc_ndcg_at_20_max value: 42.0674805149201 - type: nauc_ndcg_at_20_std value: 15.12731706664778 - type: nauc_ndcg_at_3_diff1 value: -0.49319667143901985 - type: nauc_ndcg_at_3_max value: 31.903791872436134 - type: nauc_ndcg_at_3_std value: 7.268897004663463 - type: nauc_ndcg_at_5_diff1 value: 1.7704403405480456 - type: nauc_ndcg_at_5_max value: 30.429016694320566 - type: nauc_ndcg_at_5_std value: 3.105284555570875 - type: nauc_precision_at_1000_diff1 value: -34.86243245110438 - type: nauc_precision_at_1000_max value: 16.55390220300698 - type: nauc_precision_at_1000_std value: 47.249390229064616 - type: nauc_precision_at_100_diff1 value: -35.49469618655004 - type: nauc_precision_at_100_max value: 18.01017124557696 - type: nauc_precision_at_100_std value: 48.24492886643511 - type: nauc_precision_at_10_diff1 value: -38.13696581770024 - type: nauc_precision_at_10_max value: 26.98185307554265 - type: nauc_precision_at_10_std value: 44.80668408168652 - type: nauc_precision_at_1_diff1 value: 20.30116277958172 - type: nauc_precision_at_1_max value: 35.534881794166715 - type: nauc_precision_at_1_std value: 5.000207127083605 - type: nauc_precision_at_20_diff1 value: -36.49678023696355 - type: nauc_precision_at_20_max value: 22.530591705844362 - type: nauc_precision_at_20_std value: 48.34192532928907 - type: nauc_precision_at_3_diff1 value: -32.719086262048954 - type: nauc_precision_at_3_max value: 35.39122694943844 - type: nauc_precision_at_3_std value: 28.91765225509027 - type: nauc_precision_at_5_diff1 value: -38.88429973444081 - type: nauc_precision_at_5_max value: 32.46086225329996 - type: nauc_precision_at_5_std value: 37.057698623627736 - type: nauc_recall_at_1000_diff1 value: -25.091814276951112 - type: nauc_recall_at_1000_max value: 79.28277293043296 - type: nauc_recall_at_1000_std value: 74.55138108628938 - type: nauc_recall_at_100_diff1 value: -32.687184978421854 - type: nauc_recall_at_100_max value: 69.17663327735013 - type: nauc_recall_at_100_std value: 57.63458684402335 - type: nauc_recall_at_10_diff1 value: 2.823797050791949 - type: nauc_recall_at_10_max value: 33.25819004443964 - type: nauc_recall_at_10_std value: -3.379510126507516 - type: nauc_recall_at_1_diff1 value: 44.81310475715047 - type: nauc_recall_at_1_max value: -8.316916162518954 - type: nauc_recall_at_1_std value: -32.633488423702175 - type: nauc_recall_at_20_diff1 value: -7.487267387128085 - type: nauc_recall_at_20_max value: 54.27294562215508 - type: nauc_recall_at_20_std value: 25.404864863592596 - type: nauc_recall_at_3_diff1 value: 27.290576205803678 - type: nauc_recall_at_3_max value: 0.18509949842986292 - type: nauc_recall_at_3_std value: -31.927894497312785 - type: nauc_recall_at_5_diff1 value: 17.436811980536145 - type: nauc_recall_at_5_max value: 9.619111814502137 - type: nauc_recall_at_5_std value: -26.80525027384896 - type: ndcg_at_1 value: 84.65 - type: ndcg_at_10 value: 85.546 - type: ndcg_at_100 value: 88.588 - type: ndcg_at_1000 value: 88.838 - type: ndcg_at_20 value: 87.414 - type: ndcg_at_3 value: 82.299 - type: ndcg_at_5 value: 82.309 - type: precision_at_1 value: 84.65 - type: precision_at_10 value: 41.660000000000004 - type: precision_at_100 value: 4.835 - type: precision_at_1000 value: 0.48900000000000005 - type: precision_at_20 value: 22.958000000000002 - type: precision_at_3 value: 73.983 - type: precision_at_5 value: 63.46000000000001 - type: recall_at_1 value: 24.726 - type: recall_at_10 value: 88.533 - type: recall_at_100 value: 98.084 - type: recall_at_1000 value: 99.362 - type: recall_at_20 value: 94.139 - type: recall_at_3 value: 55.559000000000005 - type: recall_at_5 value: 73.27900000000001 - task: type: Retrieval dataset: name: MTEB EcomRetrieval type: C-MTEB/EcomRetrieval config: default split: dev revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9 metrics: - type: main_score value: 69.569 - type: map_at_1 value: 55.2 - type: map_at_10 value: 64.73100000000001 - type: map_at_100 value: 65.212 - type: map_at_1000 value: 65.223 - type: map_at_20 value: 65.065 - type: map_at_3 value: 62.25000000000001 - type: map_at_5 value: 63.665000000000006 - type: mrr_at_1 value: 55.2 - type: mrr_at_10 value: 64.73107142857143 - type: mrr_at_100 value: 65.21168735476023 - type: mrr_at_1000 value: 65.22349810383741 - type: mrr_at_20 value: 65.06460730617853 - type: mrr_at_3 value: 62.250000000000014 - type: mrr_at_5 value: 63.66500000000001 - type: nauc_map_at_1000_diff1 value: 69.58058859314815 - type: nauc_map_at_1000_max value: 22.143965598479625 - type: nauc_map_at_1000_std value: -17.77717787393765 - type: nauc_map_at_100_diff1 value: 69.5786282929092 - type: nauc_map_at_100_max value: 22.15596641656083 - type: nauc_map_at_100_std value: -17.77087007615729 - type: nauc_map_at_10_diff1 value: 69.50984160111912 - type: nauc_map_at_10_max value: 22.077878838591417 - type: nauc_map_at_10_std value: -17.999280229699337 - type: nauc_map_at_1_diff1 value: 71.78490211309624 - type: nauc_map_at_1_max value: 17.493977056525587 - type: nauc_map_at_1_std value: -20.738626768887876 - type: nauc_map_at_20_diff1 value: 69.54446967958494 - type: nauc_map_at_20_max value: 22.190247858127446 - type: nauc_map_at_20_std value: -17.81055914472505 - type: nauc_map_at_3_diff1 value: 69.78902422886996 - type: nauc_map_at_3_max value: 20.724520905637984 - type: nauc_map_at_3_std value: -19.780102009399112 - type: nauc_map_at_5_diff1 value: 69.40598262452117 - type: nauc_map_at_5_max value: 22.00585257749536 - type: nauc_map_at_5_std value: -18.06926381339879 - type: nauc_mrr_at_1000_diff1 value: 69.58058859314815 - type: nauc_mrr_at_1000_max value: 22.143965598479625 - type: nauc_mrr_at_1000_std value: -17.77717787393765 - type: nauc_mrr_at_100_diff1 value: 69.5786282929092 - type: nauc_mrr_at_100_max value: 22.15596641656083 - type: nauc_mrr_at_100_std value: -17.77087007615729 - type: nauc_mrr_at_10_diff1 value: 69.50984160111912 - type: nauc_mrr_at_10_max value: 22.077878838591417 - type: nauc_mrr_at_10_std value: -17.999280229699337 - type: nauc_mrr_at_1_diff1 value: 71.78490211309624 - type: nauc_mrr_at_1_max value: 17.493977056525587 - type: nauc_mrr_at_1_std value: -20.738626768887876 - type: nauc_mrr_at_20_diff1 value: 69.54446967958494 - type: nauc_mrr_at_20_max value: 22.190247858127446 - type: nauc_mrr_at_20_std value: -17.81055914472505 - type: nauc_mrr_at_3_diff1 value: 69.78902422886996 - type: nauc_mrr_at_3_max value: 20.724520905637984 - type: nauc_mrr_at_3_std value: -19.780102009399112 - type: nauc_mrr_at_5_diff1 value: 69.40598262452117 - type: nauc_mrr_at_5_max value: 22.00585257749536 - type: nauc_mrr_at_5_std value: -18.06926381339879 - type: nauc_ndcg_at_1000_diff1 value: 69.00910876286885 - type: nauc_ndcg_at_1000_max value: 24.68612156136573 - type: nauc_ndcg_at_1000_std value: -14.678431088013632 - type: nauc_ndcg_at_100_diff1 value: 69.01038835136153 - type: nauc_ndcg_at_100_max value: 25.25855525568926 - type: nauc_ndcg_at_100_std value: -14.11646531874503 - type: nauc_ndcg_at_10_diff1 value: 68.56977946049157 - type: nauc_ndcg_at_10_max value: 24.889549656907388 - type: nauc_ndcg_at_10_std value: -15.492296297438838 - type: nauc_ndcg_at_1_diff1 value: 71.78490211309624 - type: nauc_ndcg_at_1_max value: 17.493977056525587 - type: nauc_ndcg_at_1_std value: -20.738626768887876 - type: nauc_ndcg_at_20_diff1 value: 68.68718093426592 - type: nauc_ndcg_at_20_max value: 25.39502134848165 - type: nauc_ndcg_at_20_std value: -14.577057269604682 - type: nauc_ndcg_at_3_diff1 value: 69.10352441444887 - type: nauc_ndcg_at_3_max value: 21.86447295626149 - type: nauc_ndcg_at_3_std value: -19.33990390741782 - type: nauc_ndcg_at_5_diff1 value: 68.32388314010336 - type: nauc_ndcg_at_5_max value: 24.374824352669094 - type: nauc_ndcg_at_5_std value: -16.067836170764927 - type: nauc_precision_at_1000_diff1 value: 56.43813080787855 - type: nauc_precision_at_1000_max value: 98.75505757858708 - type: nauc_precision_at_1000_std value: 98.75505757858708 - type: nauc_precision_at_100_diff1 value: 64.02348173311819 - type: nauc_precision_at_100_max value: 81.4907523293001 - type: nauc_precision_at_100_std value: 60.99191449629502 - type: nauc_precision_at_10_diff1 value: 63.42500048518244 - type: nauc_precision_at_10_max value: 41.736371222853954 - type: nauc_precision_at_10_std value: 0.1707842490342966 - type: nauc_precision_at_1_diff1 value: 71.78490211309624 - type: nauc_precision_at_1_max value: 17.493977056525587 - type: nauc_precision_at_1_std value: -20.738626768887876 - type: nauc_precision_at_20_diff1 value: 62.35848969277335 - type: nauc_precision_at_20_max value: 53.189362312669275 - type: nauc_precision_at_20_std value: 15.390026936712491 - type: nauc_precision_at_3_diff1 value: 66.8030158975833 - type: nauc_precision_at_3_max value: 25.701645772068343 - type: nauc_precision_at_3_std value: -17.82047596936932 - type: nauc_precision_at_5_diff1 value: 63.922276221850026 - type: nauc_precision_at_5_max value: 34.16942302673978 - type: nauc_precision_at_5_std value: -7.535736031778383 - type: nauc_recall_at_1000_diff1 value: 56.438130807878984 - type: nauc_recall_at_1000_max value: 98.75505757858693 - type: nauc_recall_at_1000_std value: 98.75505757858693 - type: nauc_recall_at_100_diff1 value: 64.02348173311884 - type: nauc_recall_at_100_max value: 81.49075232930095 - type: nauc_recall_at_100_std value: 60.991914496295465 - type: nauc_recall_at_10_diff1 value: 63.42500048518255 - type: nauc_recall_at_10_max value: 41.73637122285414 - type: nauc_recall_at_10_std value: 0.17078424903454548 - type: nauc_recall_at_1_diff1 value: 71.78490211309624 - type: nauc_recall_at_1_max value: 17.493977056525587 - type: nauc_recall_at_1_std value: -20.738626768887876 - type: nauc_recall_at_20_diff1 value: 62.35848969277351 - type: nauc_recall_at_20_max value: 53.189362312669466 - type: nauc_recall_at_20_std value: 15.390026936712559 - type: nauc_recall_at_3_diff1 value: 66.80301589758332 - type: nauc_recall_at_3_max value: 25.701645772068314 - type: nauc_recall_at_3_std value: -17.820475969369348 - type: nauc_recall_at_5_diff1 value: 63.92227622185005 - type: nauc_recall_at_5_max value: 34.16942302673986 - type: nauc_recall_at_5_std value: -7.535736031778269 - type: ndcg_at_1 value: 55.2 - type: ndcg_at_10 value: 69.569 - type: ndcg_at_100 value: 71.83800000000001 - type: ndcg_at_1000 value: 72.163 - type: ndcg_at_20 value: 70.817 - type: ndcg_at_3 value: 64.453 - type: ndcg_at_5 value: 66.984 - type: precision_at_1 value: 55.2 - type: precision_at_10 value: 8.49 - type: precision_at_100 value: 0.9530000000000001 - type: precision_at_1000 value: 0.098 - type: precision_at_20 value: 4.495 - type: precision_at_3 value: 23.599999999999998 - type: precision_at_5 value: 15.379999999999999 - type: recall_at_1 value: 55.2 - type: recall_at_10 value: 84.89999999999999 - type: recall_at_100 value: 95.3 - type: recall_at_1000 value: 97.89999999999999 - type: recall_at_20 value: 89.9 - type: recall_at_3 value: 70.8 - type: recall_at_5 value: 76.9 - task: type: Classification dataset: name: MTEB IFlyTek type: C-MTEB/IFlyTek-classification config: default split: validation revision: 421605374b29664c5fc098418fe20ada9bd55f8a metrics: - type: accuracy value: 52.4355521354367 - type: f1 value: 38.03881618808275 - type: f1_weighted value: 50.86348988322177 - type: main_score value: 52.4355521354367 - task: type: Classification dataset: name: MTEB JDReview type: C-MTEB/JDReview-classification config: default split: test revision: b7c64bd89eb87f8ded463478346f76731f07bf8b metrics: - type: accuracy value: 91.20075046904314 - type: ap value: 65.47881077590604 - type: ap_weighted value: 65.47881077590604 - type: f1 value: 86.78614598964556 - type: f1_weighted value: 91.58569437531001 - type: main_score value: 91.20075046904314 - task: type: STS dataset: name: MTEB LCQMC type: C-MTEB/LCQMC config: default split: test revision: 17f9b096f80380fce5ed12a9be8be7784b337daf metrics: - type: cosine_pearson value: 68.93064450573048 - type: cosine_spearman value: 73.87198381052167 - type: euclidean_pearson value: 72.14686791603229 - type: euclidean_spearman value: 73.87197272267323 - type: main_score value: 73.87198381052167 - type: manhattan_pearson value: 72.21248547981499 - type: manhattan_spearman value: 73.92674432585225 - type: pearson value: 68.93064450573048 - type: spearman value: 73.87198381052167 - task: type: Reranking dataset: name: MTEB MMarcoReranking type: C-MTEB/Mmarco-reranking config: default split: dev revision: 8e0c766dbe9e16e1d221116a3f36795fbade07f6 metrics: - type: main_score value: 33.05502130962135 - type: map value: 33.05502130962135 - type: mrr value: 31.870238095238097 - type: nAUC_map_diff1 value: 21.30927601937602 - type: nAUC_map_max value: 6.152397403288063 - type: nAUC_map_std value: -8.11993134822533 - type: nAUC_mrr_diff1 value: 20.818615722791936 - type: nAUC_mrr_max value: 7.019491834216984 - type: nAUC_mrr_std value: -7.151644031664517 - task: type: Retrieval dataset: name: MTEB MMarcoRetrieval type: C-MTEB/MMarcoRetrieval config: default split: dev revision: 539bbde593d947e2a124ba72651aafc09eb33fc2 metrics: - type: main_score value: 84.32300000000001 - type: map_at_1 value: 72.23 - type: map_at_10 value: 80.94 - type: map_at_100 value: 81.162 - type: map_at_1000 value: 81.169 - type: map_at_20 value: 81.098 - type: map_at_3 value: 79.255 - type: map_at_5 value: 80.329 - type: mrr_at_1 value: 74.5702005730659 - type: mrr_at_10 value: 81.4041649611132 - type: mrr_at_100 value: 81.60073576750769 - type: mrr_at_1000 value: 81.6066099076487 - type: mrr_at_20 value: 81.54407433488949 - type: mrr_at_3 value: 79.98089780324726 - type: mrr_at_5 value: 80.88920725883445 - type: nauc_map_at_1000_diff1 value: 81.31217091119935 - type: nauc_map_at_1000_max value: 28.087657539047374 - type: nauc_map_at_1000_std value: -28.95083977221089 - type: nauc_map_at_100_diff1 value: 81.31071887687743 - type: nauc_map_at_100_max value: 28.107899382506673 - type: nauc_map_at_100_std value: -28.92108199132502 - type: nauc_map_at_10_diff1 value: 81.22050812217063 - type: nauc_map_at_10_max value: 28.29491428259605 - type: nauc_map_at_10_std value: -28.957197443487132 - type: nauc_map_at_1_diff1 value: 82.70449426194655 - type: nauc_map_at_1_max value: 19.672801842341542 - type: nauc_map_at_1_std value: -33.27702223899887 - type: nauc_map_at_20_diff1 value: 81.28467538340381 - type: nauc_map_at_20_max value: 28.18940277607488 - type: nauc_map_at_20_std value: -28.863318173008523 - type: nauc_map_at_3_diff1 value: 80.88429397142289 - type: nauc_map_at_3_max value: 26.543555093114723 - type: nauc_map_at_3_std value: -30.790416203653926 - type: nauc_map_at_5_diff1 value: 80.93036946875752 - type: nauc_map_at_5_max value: 28.031556958900733 - type: nauc_map_at_5_std value: -29.49767094531952 - type: nauc_mrr_at_1000_diff1 value: 81.69471769330211 - type: nauc_mrr_at_1000_max value: 29.09674275057747 - type: nauc_mrr_at_1000_std value: -27.875472305538178 - type: nauc_mrr_at_100_diff1 value: 81.69288112951605 - type: nauc_mrr_at_100_max value: 29.115774092481495 - type: nauc_mrr_at_100_std value: -27.845507589689046 - type: nauc_mrr_at_10_diff1 value: 81.60924824377872 - type: nauc_mrr_at_10_max value: 29.324053238301612 - type: nauc_mrr_at_10_std value: -27.797789671462947 - type: nauc_mrr_at_1_diff1 value: 83.70861005499684 - type: nauc_mrr_at_1_max value: 24.177036797141792 - type: nauc_mrr_at_1_std value: -32.90870883927 - type: nauc_mrr_at_20_diff1 value: 81.67730641908086 - type: nauc_mrr_at_20_max value: 29.2055335007723 - type: nauc_mrr_at_20_std value: -27.764138083332764 - type: nauc_mrr_at_3_diff1 value: 81.33371095771679 - type: nauc_mrr_at_3_max value: 28.22690314379001 - type: nauc_mrr_at_3_std value: -29.108849274721898 - type: nauc_mrr_at_5_diff1 value: 81.3785005046575 - type: nauc_mrr_at_5_max value: 29.25582755643928 - type: nauc_mrr_at_5_std value: -28.072008000931525 - type: nauc_ndcg_at_1000_diff1 value: 81.12656608395162 - type: nauc_ndcg_at_1000_max value: 30.287391363506767 - type: nauc_ndcg_at_1000_std value: -26.155157782261835 - type: nauc_ndcg_at_100_diff1 value: 81.09435786660573 - type: nauc_ndcg_at_100_max value: 30.916098732032133 - type: nauc_ndcg_at_100_std value: -25.22277561042267 - type: nauc_ndcg_at_10_diff1 value: 80.69346276832745 - type: nauc_ndcg_at_10_max value: 31.98535095491109 - type: nauc_ndcg_at_10_std value: -25.104317985551035 - type: nauc_ndcg_at_1_diff1 value: 83.70861005499684 - type: nauc_ndcg_at_1_max value: 24.177036797141792 - type: nauc_ndcg_at_1_std value: -32.90870883927 - type: nauc_ndcg_at_20_diff1 value: 80.91974071480954 - type: nauc_ndcg_at_20_max value: 31.614893807002858 - type: nauc_ndcg_at_20_std value: -24.69632291564678 - type: nauc_ndcg_at_3_diff1 value: 80.04001256545276 - type: nauc_ndcg_at_3_max value: 28.547292599110683 - type: nauc_ndcg_at_3_std value: -28.897532640200925 - type: nauc_ndcg_at_5_diff1 value: 80.05238866710195 - type: nauc_ndcg_at_5_max value: 31.24429172243462 - type: nauc_ndcg_at_5_std value: -26.494233213869766 - type: nauc_precision_at_1000_diff1 value: -26.73016497672624 - type: nauc_precision_at_1000_max value: 17.963585830325385 - type: nauc_precision_at_1000_std value: 25.315232271556155 - type: nauc_precision_at_100_diff1 value: -17.09337502284148 - type: nauc_precision_at_100_max value: 25.141615739142942 - type: nauc_precision_at_100_std value: 28.899658331367927 - type: nauc_precision_at_10_diff1 value: 6.717016659823092 - type: nauc_precision_at_10_max value: 34.162000759373306 - type: nauc_precision_at_10_std value: 17.121503146588175 - type: nauc_precision_at_1_diff1 value: 83.70861005499684 - type: nauc_precision_at_1_max value: 24.177036797141792 - type: nauc_precision_at_1_std value: -32.90870883927 - type: nauc_precision_at_20_diff1 value: -3.8253696445249945 - type: nauc_precision_at_20_max value: 31.361141923329527 - type: nauc_precision_at_20_std value: 24.60858534691311 - type: nauc_precision_at_3_diff1 value: 37.97573566697423 - type: nauc_precision_at_3_max value: 30.49045135252249 - type: nauc_precision_at_3_std value: -8.818049676896731 - type: nauc_precision_at_5_diff1 value: 23.49407878583802 - type: nauc_precision_at_5_max value: 34.36426874527954 - type: nauc_precision_at_5_std value: 3.478846453531117 - type: nauc_recall_at_1000_diff1 value: 74.71755368814135 - type: nauc_recall_at_1000_max value: 90.39449112978433 - type: nauc_recall_at_1000_std value: 74.2215219421074 - type: nauc_recall_at_100_diff1 value: 77.15857602704533 - type: nauc_recall_at_100_max value: 87.48049469901645 - type: nauc_recall_at_100_std value: 62.346839599868865 - type: nauc_recall_at_10_diff1 value: 75.24805155222728 - type: nauc_recall_at_10_max value: 61.09625888351906 - type: nauc_recall_at_10_std value: 5.525512450230389 - type: nauc_recall_at_1_diff1 value: 82.70449426194655 - type: nauc_recall_at_1_max value: 19.672801842341542 - type: nauc_recall_at_1_std value: -33.27702223899887 - type: nauc_recall_at_20_diff1 value: 75.72692047994323 - type: nauc_recall_at_20_max value: 71.07253840190907 - type: nauc_recall_at_20_std value: 26.663998932906445 - type: nauc_recall_at_3_diff1 value: 75.83753448474154 - type: nauc_recall_at_3_max value: 33.98612603677782 - type: nauc_recall_at_3_std value: -23.549016128662213 - type: nauc_recall_at_5_diff1 value: 74.15448687221784 - type: nauc_recall_at_5_max value: 45.98971274412654 - type: nauc_recall_at_5_std value: -12.851903796502965 - type: ndcg_at_1 value: 74.57000000000001 - type: ndcg_at_10 value: 84.32300000000001 - type: ndcg_at_100 value: 85.247 - type: ndcg_at_1000 value: 85.402 - type: ndcg_at_20 value: 84.848 - type: ndcg_at_3 value: 81.19 - type: ndcg_at_5 value: 82.976 - type: precision_at_1 value: 74.57000000000001 - type: precision_at_10 value: 10.029 - type: precision_at_100 value: 1.047 - type: precision_at_1000 value: 0.106 - type: precision_at_20 value: 5.122 - type: precision_at_3 value: 30.325000000000003 - type: precision_at_5 value: 19.16 - type: recall_at_1 value: 72.23 - type: recall_at_10 value: 94.23899999999999 - type: recall_at_100 value: 98.25 - type: recall_at_1000 value: 99.42699999999999 - type: recall_at_20 value: 96.231 - type: recall_at_3 value: 86.016 - type: recall_at_5 value: 90.253 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 78.35574983187627 - type: f1 value: 74.89590452942404 - type: f1_weighted value: 77.87503023220823 - type: main_score value: 78.35574983187627 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-TW) type: mteb/amazon_massive_intent config: zh-TW split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 74.83187626092803 - type: f1 value: 73.83053337465574 - type: f1_weighted value: 74.02596858799131 - type: main_score value: 74.83187626092803 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 86.47276395427035 - type: f1 value: 85.32868126252416 - type: f1_weighted value: 86.13594825675301 - type: main_score value: 86.47276395427035 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-TW) type: mteb/amazon_massive_scenario config: zh-TW split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 83.76260928043038 - type: f1 value: 83.13185607007082 - type: f1_weighted value: 83.49785782817072 - type: main_score value: 83.76260928043038 - task: type: Retrieval dataset: name: MTEB MedicalRetrieval type: C-MTEB/MedicalRetrieval config: default split: dev revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6 metrics: - type: main_score value: 63.222 - type: map_at_1 value: 53.400000000000006 - type: map_at_10 value: 60.096000000000004 - type: map_at_100 value: 60.584 - type: map_at_1000 value: 60.632 - type: map_at_20 value: 60.348 - type: map_at_3 value: 58.599999999999994 - type: map_at_5 value: 59.57 - type: mrr_at_1 value: 53.400000000000006 - type: mrr_at_10 value: 60.096349206349245 - type: mrr_at_100 value: 60.584409312946654 - type: mrr_at_1000 value: 60.63165176971444 - type: mrr_at_20 value: 60.34772399942682 - type: mrr_at_3 value: 58.60000000000001 - type: mrr_at_5 value: 59.57000000000002 - type: nauc_map_at_1000_diff1 value: 80.78555103810349 - type: nauc_map_at_1000_max value: 59.730816931247624 - type: nauc_map_at_1000_std value: 14.983445588221494 - type: nauc_map_at_100_diff1 value: 80.76482497876087 - type: nauc_map_at_100_max value: 59.728736132776106 - type: nauc_map_at_100_std value: 14.989918013193673 - type: nauc_map_at_10_diff1 value: 80.84483299066605 - type: nauc_map_at_10_max value: 59.7439325140549 - type: nauc_map_at_10_std value: 15.027017818197002 - type: nauc_map_at_1_diff1 value: 85.25748803873987 - type: nauc_map_at_1_max value: 59.32812642354867 - type: nauc_map_at_1_std value: 10.183981160152523 - type: nauc_map_at_20_diff1 value: 80.75959911713551 - type: nauc_map_at_20_max value: 59.756884656613266 - type: nauc_map_at_20_std value: 15.0343495273408 - type: nauc_map_at_3_diff1 value: 81.60950970451415 - type: nauc_map_at_3_max value: 60.365630145530815 - type: nauc_map_at_3_std value: 14.859430627775755 - type: nauc_map_at_5_diff1 value: 80.96233267337713 - type: nauc_map_at_5_max value: 59.893862677678065 - type: nauc_map_at_5_std value: 15.122518441508895 - type: nauc_mrr_at_1000_diff1 value: 80.78555103810349 - type: nauc_mrr_at_1000_max value: 59.730816931247624 - type: nauc_mrr_at_1000_std value: 14.983445588221494 - type: nauc_mrr_at_100_diff1 value: 80.76482497876087 - type: nauc_mrr_at_100_max value: 59.728736132776106 - type: nauc_mrr_at_100_std value: 14.989918013193673 - type: nauc_mrr_at_10_diff1 value: 80.84483299066605 - type: nauc_mrr_at_10_max value: 59.7439325140549 - type: nauc_mrr_at_10_std value: 15.027017818197002 - type: nauc_mrr_at_1_diff1 value: 85.25748803873987 - type: nauc_mrr_at_1_max value: 59.32812642354867 - type: nauc_mrr_at_1_std value: 10.183981160152523 - type: nauc_mrr_at_20_diff1 value: 80.75959911713551 - type: nauc_mrr_at_20_max value: 59.756884656613266 - type: nauc_mrr_at_20_std value: 15.0343495273408 - type: nauc_mrr_at_3_diff1 value: 81.60950970451415 - type: nauc_mrr_at_3_max value: 60.365630145530815 - type: nauc_mrr_at_3_std value: 14.859430627775755 - type: nauc_mrr_at_5_diff1 value: 80.96233267337713 - type: nauc_mrr_at_5_max value: 59.893862677678065 - type: nauc_mrr_at_5_std value: 15.122518441508895 - type: nauc_ndcg_at_1000_diff1 value: 79.15372156157213 - type: nauc_ndcg_at_1000_max value: 59.5405544982214 - type: nauc_ndcg_at_1000_std value: 16.61759364757034 - type: nauc_ndcg_at_100_diff1 value: 78.5184668065885 - type: nauc_ndcg_at_100_max value: 59.34302969703257 - type: nauc_ndcg_at_100_std value: 16.756513719315905 - type: nauc_ndcg_at_10_diff1 value: 78.82425756639869 - type: nauc_ndcg_at_10_max value: 59.44271533942196 - type: nauc_ndcg_at_10_std value: 16.756768224013037 - type: nauc_ndcg_at_1_diff1 value: 85.25748803873987 - type: nauc_ndcg_at_1_max value: 59.32812642354867 - type: nauc_ndcg_at_1_std value: 10.183981160152523 - type: nauc_ndcg_at_20_diff1 value: 78.4441063027707 - type: nauc_ndcg_at_20_max value: 59.51056493727238 - type: nauc_ndcg_at_20_std value: 16.811613653269568 - type: nauc_ndcg_at_3_diff1 value: 80.4201082661855 - type: nauc_ndcg_at_3_max value: 60.622403161573914 - type: nauc_ndcg_at_3_std value: 16.487926871575237 - type: nauc_ndcg_at_5_diff1 value: 79.16882483328475 - type: nauc_ndcg_at_5_max value: 59.72508213074582 - type: nauc_ndcg_at_5_std value: 16.997051824850505 - type: nauc_precision_at_1000_diff1 value: 57.971188475389866 - type: nauc_precision_at_1000_max value: 60.52687741763361 - type: nauc_precision_at_1000_std value: 52.86647992530319 - type: nauc_precision_at_100_diff1 value: 62.68395866065123 - type: nauc_precision_at_100_max value: 55.92415353791602 - type: nauc_precision_at_100_std value: 28.85790679908329 - type: nauc_precision_at_10_diff1 value: 70.92276238966764 - type: nauc_precision_at_10_max value: 58.073876034520126 - type: nauc_precision_at_10_std value: 23.08635907920343 - type: nauc_precision_at_1_diff1 value: 85.25748803873987 - type: nauc_precision_at_1_max value: 59.32812642354867 - type: nauc_precision_at_1_std value: 10.183981160152523 - type: nauc_precision_at_20_diff1 value: 67.89972776669003 - type: nauc_precision_at_20_max value: 58.329253894664 - type: nauc_precision_at_20_std value: 24.137503294931122 - type: nauc_precision_at_3_diff1 value: 76.68957348655611 - type: nauc_precision_at_3_max value: 61.39858352035809 - type: nauc_precision_at_3_std value: 21.632948280855903 - type: nauc_precision_at_5_diff1 value: 72.916203679207 - type: nauc_precision_at_5_max value: 58.94721061079062 - type: nauc_precision_at_5_std value: 23.399650775173257 - type: nauc_recall_at_1000_diff1 value: 57.971188475390356 - type: nauc_recall_at_1000_max value: 60.52687741763392 - type: nauc_recall_at_1000_std value: 52.86647992530338 - type: nauc_recall_at_100_diff1 value: 62.68395866065127 - type: nauc_recall_at_100_max value: 55.92415353791599 - type: nauc_recall_at_100_std value: 28.857906799083217 - type: nauc_recall_at_10_diff1 value: 70.92276238966758 - type: nauc_recall_at_10_max value: 58.07387603452002 - type: nauc_recall_at_10_std value: 23.08635907920348 - type: nauc_recall_at_1_diff1 value: 85.25748803873987 - type: nauc_recall_at_1_max value: 59.32812642354867 - type: nauc_recall_at_1_std value: 10.183981160152523 - type: nauc_recall_at_20_diff1 value: 67.8997277666901 - type: nauc_recall_at_20_max value: 58.32925389466408 - type: nauc_recall_at_20_std value: 24.137503294931207 - type: nauc_recall_at_3_diff1 value: 76.68957348655606 - type: nauc_recall_at_3_max value: 61.39858352035811 - type: nauc_recall_at_3_std value: 21.632948280855853 - type: nauc_recall_at_5_diff1 value: 72.91620367920703 - type: nauc_recall_at_5_max value: 58.947210610790734 - type: nauc_recall_at_5_std value: 23.399650775173324 - type: ndcg_at_1 value: 53.400000000000006 - type: ndcg_at_10 value: 63.222 - type: ndcg_at_100 value: 65.95299999999999 - type: ndcg_at_1000 value: 67.208 - type: ndcg_at_20 value: 64.151 - type: ndcg_at_3 value: 60.175999999999995 - type: ndcg_at_5 value: 61.936 - type: precision_at_1 value: 53.400000000000006 - type: precision_at_10 value: 7.3 - type: precision_at_100 value: 0.8659999999999999 - type: precision_at_1000 value: 0.097 - type: precision_at_20 value: 3.8350000000000004 - type: precision_at_3 value: 21.567 - type: precision_at_5 value: 13.8 - type: recall_at_1 value: 53.400000000000006 - type: recall_at_10 value: 73.0 - type: recall_at_100 value: 86.6 - type: recall_at_1000 value: 96.5 - type: recall_at_20 value: 76.7 - type: recall_at_3 value: 64.7 - type: recall_at_5 value: 69.0 - task: type: Classification dataset: name: MTEB MultilingualSentiment type: C-MTEB/MultilingualSentiment-classification config: default split: test revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a metrics: - type: accuracy value: 79.70333333333333 - type: f1 value: 79.287530556871 - type: f1_weighted value: 79.287530556871 - type: main_score value: 79.70333333333333 - task: type: PairClassification dataset: name: MTEB Ocnli type: C-MTEB/OCNLI config: default split: validation revision: 66e76a618a34d6d565d5538088562851e6daa7ec metrics: - type: cosine_accuracy value: 78.23497563616677 - type: cosine_accuracy_threshold value: 77.55764722824097 - type: cosine_ap value: 82.50970164749991 - type: cosine_f1 value: 80.11336797354747 - type: cosine_f1_threshold value: 74.73551630973816 - type: cosine_precision value: 72.47863247863248 - type: cosine_recall value: 89.54593453009504 - type: dot_accuracy value: 78.23497563616677 - type: dot_accuracy_threshold value: 77.55765318870544 - type: dot_ap value: 82.50970164749991 - type: dot_f1 value: 80.11336797354747 - type: dot_f1_threshold value: 74.73551630973816 - type: dot_precision value: 72.47863247863248 - type: dot_recall value: 89.54593453009504 - type: euclidean_accuracy value: 78.23497563616677 - type: euclidean_accuracy_threshold value: 66.99604988098145 - type: euclidean_ap value: 82.50970164749991 - type: euclidean_f1 value: 80.11336797354747 - type: euclidean_f1_threshold value: 71.08373045921326 - type: euclidean_precision value: 72.47863247863248 - type: euclidean_recall value: 89.54593453009504 - type: main_score value: 82.50970164749991 - type: manhattan_accuracy value: 78.39740119112074 - type: manhattan_accuracy_threshold value: 3158.650016784668 - type: manhattan_ap value: 82.39923329722836 - type: manhattan_f1 value: 79.8283261802575 - type: manhattan_f1_threshold value: 3341.251754760742 - type: manhattan_precision value: 72.78260869565217 - type: manhattan_recall value: 88.3843717001056 - type: max_ap value: 82.50970164749991 - type: max_f1 value: 80.11336797354747 - type: max_precision value: 72.78260869565217 - type: max_recall value: 89.54593453009504 - type: similarity_accuracy value: 78.23497563616677 - type: similarity_accuracy_threshold value: 77.55764722824097 - type: similarity_ap value: 82.50970164749991 - type: similarity_f1 value: 80.11336797354747 - type: similarity_f1_threshold value: 74.73551630973816 - type: similarity_precision value: 72.47863247863248 - type: similarity_recall value: 89.54593453009504 - task: type: Classification dataset: name: MTEB OnlineShopping type: C-MTEB/OnlineShopping-classification config: default split: test revision: e610f2ebd179a8fda30ae534c3878750a96db120 metrics: - type: accuracy value: 95.38 - type: ap value: 93.75949863040576 - type: ap_weighted value: 93.75949863040576 - type: f1 value: 95.36976984629483 - type: f1_weighted value: 95.38009544948058 - type: main_score value: 95.38 - task: type: STS dataset: name: MTEB PAWSX type: C-MTEB/PAWSX config: default split: test revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1 metrics: - type: cosine_pearson value: 16.66214038012435 - type: cosine_spearman value: 18.933936531575885 - type: euclidean_pearson value: 21.339915417517258 - type: euclidean_spearman value: 18.9190906666892 - type: main_score value: 18.933936531575885 - type: manhattan_pearson value: 21.335797479057632 - type: manhattan_spearman value: 18.88599523491548 - type: pearson value: 16.66214038012435 - type: spearman value: 18.933936531575885 - task: type: STS dataset: name: MTEB QBQTC type: C-MTEB/QBQTC config: default split: test revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7 metrics: - type: cosine_pearson value: 34.73065943737971 - type: cosine_spearman value: 38.00564687145429 - type: euclidean_pearson value: 35.53617738939591 - type: euclidean_spearman value: 38.0065003207164 - type: main_score value: 38.00564687145429 - type: manhattan_pearson value: 35.807453588682655 - type: manhattan_spearman value: 38.24665614671376 - type: pearson value: 34.73065943737971 - type: spearman value: 38.00564687145429 - task: type: STS dataset: name: MTEB STS22 (zh) type: mteb/sts22-crosslingual-sts config: zh split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 75.15702162100195 - type: cosine_spearman value: 74.1317133929849 - type: euclidean_pearson value: 72.33985437269283 - type: euclidean_spearman value: 74.1317133929849 - type: main_score value: 74.1317133929849 - type: manhattan_pearson value: 72.30324170832067 - type: manhattan_spearman value: 74.1721924854986 - type: pearson value: 75.15702162100195 - type: spearman value: 74.1317133929849 - task: type: STS dataset: name: MTEB STSB type: C-MTEB/STSB config: default split: test revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0 metrics: - type: cosine_pearson value: 77.85985786159011 - type: cosine_spearman value: 79.43914109994013 - type: euclidean_pearson value: 78.72698853904203 - type: euclidean_spearman value: 79.438769611819 - type: main_score value: 79.43914109994013 - type: manhattan_pearson value: 78.71975662530679 - type: manhattan_spearman value: 79.4244580368928 - type: pearson value: 77.85985786159011 - type: spearman value: 79.43914109994013 - task: type: Reranking dataset: name: MTEB T2Reranking type: C-MTEB/T2Reranking config: default split: dev revision: 76631901a18387f85eaa53e5450019b87ad58ef9 metrics: - type: main_score value: 66.44032324834474 - type: map value: 66.44032324834474 - type: mrr value: 76.16718251281554 - type: nAUC_map_diff1 value: -11.245614893910917 - type: nAUC_map_max value: 34.20755460573018 - type: nAUC_map_std value: -2.0113484627679235 - type: nAUC_mrr_diff1 value: -9.337265343192676 - type: nAUC_mrr_max value: 27.169675999991284 - type: nAUC_mrr_std value: -4.291118906819815 - task: type: Retrieval dataset: name: MTEB T2Retrieval type: C-MTEB/T2Retrieval config: default split: dev revision: 8731a845f1bf500a4f111cf1070785c793d10e64 metrics: - type: main_score value: 87.241 - type: map_at_1 value: 28.418 - type: map_at_10 value: 80.43599999999999 - type: map_at_100 value: 83.903 - type: map_at_1000 value: 83.952 - type: map_at_20 value: 83.173 - type: map_at_3 value: 56.459 - type: map_at_5 value: 69.49300000000001 - type: mrr_at_1 value: 91.7017359284587 - type: mrr_at_10 value: 93.84601254143608 - type: mrr_at_100 value: 93.90984999385088 - type: mrr_at_1000 value: 93.91248708892668 - type: mrr_at_20 value: 93.88712450867396 - type: mrr_at_3 value: 93.4902682798526 - type: mrr_at_5 value: 93.72873926003862 - type: nauc_map_at_1000_diff1 value: 11.397688510464489 - type: nauc_map_at_1000_max value: 42.99465294143848 - type: nauc_map_at_1000_std value: 17.946353510844045 - type: nauc_map_at_100_diff1 value: 11.40721885559758 - type: nauc_map_at_100_max value: 42.92802593310739 - type: nauc_map_at_100_std value: 17.904049044856023 - type: nauc_map_at_10_diff1 value: 16.76177796419979 - type: nauc_map_at_10_max value: 29.05711008632582 - type: nauc_map_at_10_std value: -0.7888363626563157 - type: nauc_map_at_1_diff1 value: 55.76197416047851 - type: nauc_map_at_1_max value: -27.27596511680105 - type: nauc_map_at_1_std value: -40.180759050662004 - type: nauc_map_at_20_diff1 value: 12.07074726727466 - type: nauc_map_at_20_max value: 40.47195734060083 - type: nauc_map_at_20_std value: 14.32611525026554 - type: nauc_map_at_3_diff1 value: 40.522052911718 - type: nauc_map_at_3_max value: -16.819905730422125 - type: nauc_map_at_3_std value: -39.826056745546 - type: nauc_map_at_5_diff1 value: 31.34500214795733 - type: nauc_map_at_5_max value: -1.5456850415602872 - type: nauc_map_at_5_std value: -30.623980747805657 - type: nauc_mrr_at_1000_diff1 value: 47.54649647385489 - type: nauc_mrr_at_1000_max value: 75.35087140156472 - type: nauc_mrr_at_1000_std value: 41.06127337989305 - type: nauc_mrr_at_100_diff1 value: 47.54613905790605 - type: nauc_mrr_at_100_max value: 75.35918655596235 - type: nauc_mrr_at_100_std value: 41.078290257116805 - type: nauc_mrr_at_10_diff1 value: 47.52418003605644 - type: nauc_mrr_at_10_max value: 75.49771146396608 - type: nauc_mrr_at_10_std value: 41.205249132738686 - type: nauc_mrr_at_1_diff1 value: 47.81150011915281 - type: nauc_mrr_at_1_max value: 70.80968743133832 - type: nauc_mrr_at_1_std value: 34.12058910454593 - type: nauc_mrr_at_20_diff1 value: 47.551559003993276 - type: nauc_mrr_at_20_max value: 75.41924834238061 - type: nauc_mrr_at_20_std value: 41.153192702748235 - type: nauc_mrr_at_3_diff1 value: 47.53087817006066 - type: nauc_mrr_at_3_max value: 75.35450310484637 - type: nauc_mrr_at_3_std value: 40.73415507526735 - type: nauc_mrr_at_5_diff1 value: 47.53619911578793 - type: nauc_mrr_at_5_max value: 75.55210987806407 - type: nauc_mrr_at_5_std value: 41.227934129955955 - type: nauc_ndcg_at_1000_diff1 value: 15.791364228262008 - type: nauc_ndcg_at_1000_max value: 55.663940979628904 - type: nauc_ndcg_at_1000_std value: 31.173275086325276 - type: nauc_ndcg_at_100_diff1 value: 15.373348015886314 - type: nauc_ndcg_at_100_max value: 55.03390778310876 - type: nauc_ndcg_at_100_std value: 31.20120225577878 - type: nauc_ndcg_at_10_diff1 value: 15.163224929316108 - type: nauc_ndcg_at_10_max value: 44.39948453805145 - type: nauc_ndcg_at_10_std value: 18.059941776684493 - type: nauc_ndcg_at_1_diff1 value: 47.81150011915281 - type: nauc_ndcg_at_1_max value: 70.80968743133832 - type: nauc_ndcg_at_1_std value: 34.12058910454593 - type: nauc_ndcg_at_20_diff1 value: 15.318239987691932 - type: nauc_ndcg_at_20_max value: 49.71343698147648 - type: nauc_ndcg_at_20_std value: 24.518513987927275 - type: nauc_ndcg_at_3_diff1 value: 11.666959642695176 - type: nauc_ndcg_at_3_max value: 59.96185824971647 - type: nauc_ndcg_at_3_std value: 31.739929636013276 - type: nauc_ndcg_at_5_diff1 value: 11.544793400857731 - type: nauc_ndcg_at_5_max value: 52.20165452971689 - type: nauc_ndcg_at_5_std value: 25.673658377288916 - type: nauc_precision_at_1000_diff1 value: -37.623642405150974 - type: nauc_precision_at_1000_max value: 45.49074696820348 - type: nauc_precision_at_1000_std value: 61.76021370709046 - type: nauc_precision_at_100_diff1 value: -37.69887461085557 - type: nauc_precision_at_100_max value: 46.863075114374894 - type: nauc_precision_at_100_std value: 62.94204358016603 - type: nauc_precision_at_10_diff1 value: -38.91005052553271 - type: nauc_precision_at_10_max value: 51.0117902381813 - type: nauc_precision_at_10_std value: 58.53119866179844 - type: nauc_precision_at_1_diff1 value: 47.81150011915281 - type: nauc_precision_at_1_max value: 70.80968743133832 - type: nauc_precision_at_1_std value: 34.12058910454593 - type: nauc_precision_at_20_diff1 value: -38.10700896790525 - type: nauc_precision_at_20_max value: 48.83555083324882 - type: nauc_precision_at_20_std value: 61.99860636496578 - type: nauc_precision_at_3_diff1 value: -38.923728668047715 - type: nauc_precision_at_3_max value: 61.441692805173254 - type: nauc_precision_at_3_std value: 50.75068665381258 - type: nauc_precision_at_5_diff1 value: -41.47393696071159 - type: nauc_precision_at_5_max value: 56.42947594070733 - type: nauc_precision_at_5_std value: 54.30313019822085 - type: nauc_recall_at_1000_diff1 value: 2.9269496241172153 - type: nauc_recall_at_1000_max value: 64.00894291065217 - type: nauc_recall_at_1000_std value: 66.70262026919906 - type: nauc_recall_at_100_diff1 value: 4.531333195047854 - type: nauc_recall_at_100_max value: 54.090338584800804 - type: nauc_recall_at_100_std value: 49.29554998040776 - type: nauc_recall_at_10_diff1 value: 15.397260952283629 - type: nauc_recall_at_10_max value: 19.574107744593505 - type: nauc_recall_at_10_std value: -7.419934090674101 - type: nauc_recall_at_1_diff1 value: 55.76197416047851 - type: nauc_recall_at_1_max value: -27.27596511680105 - type: nauc_recall_at_1_std value: -40.180759050662004 - type: nauc_recall_at_20_diff1 value: 8.208554715639794 - type: nauc_recall_at_20_max value: 38.40692245671736 - type: nauc_recall_at_20_std value: 20.50141740592569 - type: nauc_recall_at_3_diff1 value: 39.09343846278106 - type: nauc_recall_at_3_max value: -20.657332761539436 - type: nauc_recall_at_3_std value: -41.94437239291942 - type: nauc_recall_at_5_diff1 value: 30.51405048742498 - type: nauc_recall_at_5_max value: -9.514750927716491 - type: nauc_recall_at_5_std value: -36.26089978353301 - type: ndcg_at_1 value: 91.702 - type: ndcg_at_10 value: 87.241 - type: ndcg_at_100 value: 90.29700000000001 - type: ndcg_at_1000 value: 90.769 - type: ndcg_at_20 value: 88.824 - type: ndcg_at_3 value: 88.346 - type: ndcg_at_5 value: 87.178 - type: precision_at_1 value: 91.702 - type: precision_at_10 value: 43.26 - type: precision_at_100 value: 5.059 - type: precision_at_1000 value: 0.517 - type: precision_at_20 value: 23.880000000000003 - type: precision_at_3 value: 77.199 - type: precision_at_5 value: 64.869 - type: recall_at_1 value: 28.418 - type: recall_at_10 value: 86.154 - type: recall_at_100 value: 96.279 - type: recall_at_1000 value: 98.688 - type: recall_at_20 value: 91.621 - type: recall_at_3 value: 57.945 - type: recall_at_5 value: 72.518 - task: type: Classification dataset: name: MTEB TNews type: C-MTEB/TNews-classification config: default split: validation revision: 317f262bf1e6126357bbe89e875451e4b0938fe4 metrics: - type: accuracy value: 54.49499999999999 - type: f1 value: 52.26536070254001 - type: f1_weighted value: 54.19215743051191 - type: main_score value: 54.49499999999999 - task: type: Clustering dataset: name: MTEB ThuNewsClusteringP2P type: C-MTEB/ThuNewsClusteringP2P config: default split: test revision: 5798586b105c0434e4f0fe5e767abe619442cf93 metrics: - type: main_score value: 75.7625870685052 - type: v_measure value: 75.7625870685052 - type: v_measure_std value: 1.3016476651336109 - task: type: Clustering dataset: name: MTEB ThuNewsClusteringS2S type: C-MTEB/ThuNewsClusteringS2S config: default split: test revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d metrics: - type: main_score value: 69.79861827229796 - type: v_measure value: 69.79861827229796 - type: v_measure_std value: 1.8259276351059668 - task: type: Retrieval dataset: name: MTEB VideoRetrieval type: C-MTEB/VideoRetrieval config: default split: dev revision: 58c2597a5943a2ba48f4668c3b90d796283c5639 metrics: - type: main_score value: 78.083 - type: map_at_1 value: 64.4 - type: map_at_10 value: 73.958 - type: map_at_100 value: 74.29 - type: map_at_1000 value: 74.298 - type: map_at_20 value: 74.199 - type: map_at_3 value: 72.217 - type: map_at_5 value: 73.347 - type: mrr_at_1 value: 64.4 - type: mrr_at_10 value: 73.95817460317473 - type: mrr_at_100 value: 74.28974110168443 - type: mrr_at_1000 value: 74.29824956959816 - type: mrr_at_20 value: 74.19886218313863 - type: mrr_at_3 value: 72.21666666666671 - type: mrr_at_5 value: 73.34666666666675 - type: nauc_map_at_1000_diff1 value: 73.71737021866177 - type: nauc_map_at_1000_max value: 12.468796372904206 - type: nauc_map_at_1000_std value: -38.38733946730631 - type: nauc_map_at_100_diff1 value: 73.71970758427913 - type: nauc_map_at_100_max value: 12.483638442192955 - type: nauc_map_at_100_std value: -38.3942712629503 - type: nauc_map_at_10_diff1 value: 73.6749409452003 - type: nauc_map_at_10_max value: 12.377557025269816 - type: nauc_map_at_10_std value: -38.83343830128936 - type: nauc_map_at_1_diff1 value: 75.30680275173268 - type: nauc_map_at_1_max value: 11.155747041372528 - type: nauc_map_at_1_std value: -36.293140617659745 - type: nauc_map_at_20_diff1 value: 73.69590129210275 - type: nauc_map_at_20_max value: 12.506795805663657 - type: nauc_map_at_20_std value: -38.41105721697836 - type: nauc_map_at_3_diff1 value: 73.39787383543842 - type: nauc_map_at_3_max value: 11.662192293430676 - type: nauc_map_at_3_std value: -39.43268460103242 - type: nauc_map_at_5_diff1 value: 73.70919058149413 - type: nauc_map_at_5_max value: 12.830113241179927 - type: nauc_map_at_5_std value: -38.8187110842045 - type: nauc_mrr_at_1000_diff1 value: 73.71737021866177 - type: nauc_mrr_at_1000_max value: 12.468796372904206 - type: nauc_mrr_at_1000_std value: -38.38733946730631 - type: nauc_mrr_at_100_diff1 value: 73.71970758427913 - type: nauc_mrr_at_100_max value: 12.483638442192955 - type: nauc_mrr_at_100_std value: -38.3942712629503 - type: nauc_mrr_at_10_diff1 value: 73.6749409452003 - type: nauc_mrr_at_10_max value: 12.377557025269816 - type: nauc_mrr_at_10_std value: -38.83343830128936 - type: nauc_mrr_at_1_diff1 value: 75.30680275173268 - type: nauc_mrr_at_1_max value: 11.155747041372528 - type: nauc_mrr_at_1_std value: -36.293140617659745 - type: nauc_mrr_at_20_diff1 value: 73.69590129210275 - type: nauc_mrr_at_20_max value: 12.506795805663657 - type: nauc_mrr_at_20_std value: -38.41105721697836 - type: nauc_mrr_at_3_diff1 value: 73.39787383543842 - type: nauc_mrr_at_3_max value: 11.662192293430676 - type: nauc_mrr_at_3_std value: -39.43268460103242 - type: nauc_mrr_at_5_diff1 value: 73.70919058149413 - type: nauc_mrr_at_5_max value: 12.830113241179927 - type: nauc_mrr_at_5_std value: -38.8187110842045 - type: nauc_ndcg_at_1000_diff1 value: 73.39386159674739 - type: nauc_ndcg_at_1000_max value: 13.650025454612095 - type: nauc_ndcg_at_1000_std value: -37.501873222969714 - type: nauc_ndcg_at_100_diff1 value: 73.46030287146141 - type: nauc_ndcg_at_100_max value: 14.242591376553515 - type: nauc_ndcg_at_100_std value: -37.37238503318863 - type: nauc_ndcg_at_10_diff1 value: 73.19041319656063 - type: nauc_ndcg_at_10_max value: 13.72149081437837 - type: nauc_ndcg_at_10_std value: -39.22330058267065 - type: nauc_ndcg_at_1_diff1 value: 75.30680275173268 - type: nauc_ndcg_at_1_max value: 11.155747041372528 - type: nauc_ndcg_at_1_std value: -36.293140617659745 - type: nauc_ndcg_at_20_diff1 value: 73.28277450494292 - type: nauc_ndcg_at_20_max value: 14.535663475990301 - type: nauc_ndcg_at_20_std value: -37.26046955059598 - type: nauc_ndcg_at_3_diff1 value: 72.72482798395563 - type: nauc_ndcg_at_3_max value: 12.444138243180628 - type: nauc_ndcg_at_3_std value: -40.33495436729538 - type: nauc_ndcg_at_5_diff1 value: 73.30133655147367 - type: nauc_ndcg_at_5_max value: 14.829522693370064 - type: nauc_ndcg_at_5_std value: -39.12862351718661 - type: nauc_precision_at_1000_diff1 value: 44.780578898224775 - type: nauc_precision_at_1000_max value: 76.57329598506085 - type: nauc_precision_at_1000_std value: 91.0830999066278 - type: nauc_precision_at_100_diff1 value: 70.9014161220036 - type: nauc_precision_at_100_max value: 62.76649548708496 - type: nauc_precision_at_100_std value: 2.0269218798636595 - type: nauc_precision_at_10_diff1 value: 70.04678683067425 - type: nauc_precision_at_10_max value: 23.381744001948547 - type: nauc_precision_at_10_std value: -41.29572118702558 - type: nauc_precision_at_1_diff1 value: 75.30680275173268 - type: nauc_precision_at_1_max value: 11.155747041372528 - type: nauc_precision_at_1_std value: -36.293140617659745 - type: nauc_precision_at_20_diff1 value: 69.43748259537705 - type: nauc_precision_at_20_max value: 40.7735023834091 - type: nauc_precision_at_20_std value: -16.96234049175245 - type: nauc_precision_at_3_diff1 value: 70.13132727097876 - type: nauc_precision_at_3_max value: 15.740305397347907 - type: nauc_precision_at_3_std value: -43.715738969684544 - type: nauc_precision_at_5_diff1 value: 71.48226384169207 - type: nauc_precision_at_5_max value: 25.61128105858808 - type: nauc_precision_at_5_std value: -40.11777006930588 - type: nauc_recall_at_1000_diff1 value: 44.78057889822695 - type: nauc_recall_at_1000_max value: 76.57329598506108 - type: nauc_recall_at_1000_std value: 91.08309990663042 - type: nauc_recall_at_100_diff1 value: 70.90141612200432 - type: nauc_recall_at_100_max value: 62.76649548708361 - type: nauc_recall_at_100_std value: 2.026921879863032 - type: nauc_recall_at_10_diff1 value: 70.04678683067425 - type: nauc_recall_at_10_max value: 23.381744001948622 - type: nauc_recall_at_10_std value: -41.29572118702555 - type: nauc_recall_at_1_diff1 value: 75.30680275173268 - type: nauc_recall_at_1_max value: 11.155747041372528 - type: nauc_recall_at_1_std value: -36.293140617659745 - type: nauc_recall_at_20_diff1 value: 69.43748259537757 - type: nauc_recall_at_20_max value: 40.773502383409244 - type: nauc_recall_at_20_std value: -16.96234049175241 - type: nauc_recall_at_3_diff1 value: 70.13132727097874 - type: nauc_recall_at_3_max value: 15.740305397347834 - type: nauc_recall_at_3_std value: -43.71573896968448 - type: nauc_recall_at_5_diff1 value: 71.48226384169207 - type: nauc_recall_at_5_max value: 25.611281058588304 - type: nauc_recall_at_5_std value: -40.11777006930557 - type: ndcg_at_1 value: 64.4 - type: ndcg_at_10 value: 78.083 - type: ndcg_at_100 value: 79.58800000000001 - type: ndcg_at_1000 value: 79.827 - type: ndcg_at_20 value: 78.965 - type: ndcg_at_3 value: 74.589 - type: ndcg_at_5 value: 76.616 - type: precision_at_1 value: 64.4 - type: precision_at_10 value: 9.08 - type: precision_at_100 value: 0.976 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.715 - type: precision_at_3 value: 27.133000000000003 - type: precision_at_5 value: 17.26 - type: recall_at_1 value: 64.4 - type: recall_at_10 value: 90.8 - type: recall_at_100 value: 97.6 - type: recall_at_1000 value: 99.5 - type: recall_at_20 value: 94.3 - type: recall_at_3 value: 81.39999999999999 - type: recall_at_5 value: 86.3 - task: type: Classification dataset: name: MTEB Waimai type: C-MTEB/waimai-classification config: default split: test revision: 339287def212450dcaa9df8c22bf93e9980c7023 metrics: - type: accuracy value: 89.73 - type: ap value: 75.94477616904837 - type: ap_weighted value: 75.94477616904837 - type: f1 value: 88.49503008728186 - type: f1_weighted value: 89.81243682011228 - type: main_score value: 89.73 - task: type: Classification dataset: name: MTEB AllegroReviews type: PL-MTEB/allegro-reviews config: default split: test revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 metrics: - type: accuracy value: 65.99403578528826 - type: f1 value: 53.25166089526729 - type: f1_weighted value: 62.93360409816966 - type: main_score value: 65.99403578528826 - task: type: Retrieval dataset: name: MTEB ArguAna-PL type: clarin-knext/arguana-pl config: default split: test revision: 63fc86750af76253e8c760fc9e534bbf24d260a2 metrics: - type: main_score value: 53.351000000000006 - type: map_at_1 value: 26.529000000000003 - type: map_at_10 value: 43.807 - type: map_at_100 value: 44.718999999999994 - type: map_at_1000 value: 44.723 - type: map_at_20 value: 44.525999999999996 - type: map_at_3 value: 38.644 - type: map_at_5 value: 41.496 - type: mrr_at_1 value: 26.884779516358464 - type: mrr_at_10 value: 43.95479125742281 - type: mrr_at_100 value: 44.86725549680827 - type: mrr_at_1000 value: 44.87116160838017 - type: mrr_at_20 value: 44.67345189919329 - type: mrr_at_3 value: 38.70317686107158 - type: mrr_at_5 value: 41.64414414414416 - type: nauc_map_at_1000_diff1 value: 9.232373982888875 - type: nauc_map_at_1000_max value: -4.068056083461815 - type: nauc_map_at_1000_std value: -12.17898160676414 - type: nauc_map_at_100_diff1 value: 9.236177757941253 - type: nauc_map_at_100_max value: -4.058604696622173 - type: nauc_map_at_100_std value: -12.174775347574824 - type: nauc_map_at_10_diff1 value: 8.823094417220325 - type: nauc_map_at_10_max value: -4.200133290204178 - type: nauc_map_at_10_std value: -12.499507328459753 - type: nauc_map_at_1_diff1 value: 12.75385271339225 - type: nauc_map_at_1_max value: -5.5298282139755575 - type: nauc_map_at_1_std value: -11.362582460965157 - type: nauc_map_at_20_diff1 value: 9.228136527232165 - type: nauc_map_at_20_max value: -3.950649951410435 - type: nauc_map_at_20_std value: -12.160403937450361 - type: nauc_map_at_3_diff1 value: 8.889303495985441 - type: nauc_map_at_3_max value: -4.630707806393413 - type: nauc_map_at_3_std value: -12.279766448071545 - type: nauc_map_at_5_diff1 value: 8.739844838218664 - type: nauc_map_at_5_max value: -4.512794992475515 - type: nauc_map_at_5_std value: -12.615578235387586 - type: nauc_mrr_at_1000_diff1 value: 7.859852535285099 - type: nauc_mrr_at_1000_max value: -4.496086648935649 - type: nauc_mrr_at_1000_std value: -12.215285116169484 - type: nauc_mrr_at_100_diff1 value: 7.8638221999422555 - type: nauc_mrr_at_100_max value: -4.486614354767044 - type: nauc_mrr_at_100_std value: -12.211089781990605 - type: nauc_mrr_at_10_diff1 value: 7.491608732231778 - type: nauc_mrr_at_10_max value: -4.612750259444103 - type: nauc_mrr_at_10_std value: -12.533709553688768 - type: nauc_mrr_at_1_diff1 value: 11.591587453294407 - type: nauc_mrr_at_1_max value: -5.108412151719679 - type: nauc_mrr_at_1_std value: -11.315732159028302 - type: nauc_mrr_at_20_diff1 value: 7.865681116401513 - type: nauc_mrr_at_20_max value: -4.375183790213436 - type: nauc_mrr_at_20_std value: -12.19638025273968 - type: nauc_mrr_at_3_diff1 value: 7.231687029446421 - type: nauc_mrr_at_3_max value: -5.4123411687548355 - type: nauc_mrr_at_3_std value: -12.398561644250819 - type: nauc_mrr_at_5_diff1 value: 7.468909154261347 - type: nauc_mrr_at_5_max value: -4.918205171124155 - type: nauc_mrr_at_5_std value: -12.550158596771954 - type: nauc_ndcg_at_1000_diff1 value: 8.873356105519054 - type: nauc_ndcg_at_1000_max value: -3.6038347222273663 - type: nauc_ndcg_at_1000_std value: -11.960763468098095 - type: nauc_ndcg_at_100_diff1 value: 8.963774517420468 - type: nauc_ndcg_at_100_max value: -3.386116175995973 - type: nauc_ndcg_at_100_std value: -11.8741082666588 - type: nauc_ndcg_at_10_diff1 value: 7.334374734540952 - type: nauc_ndcg_at_10_max value: -3.497929167790477 - type: nauc_ndcg_at_10_std value: -13.031985147192678 - type: nauc_ndcg_at_1_diff1 value: 12.75385271339225 - type: nauc_ndcg_at_1_max value: -5.5298282139755575 - type: nauc_ndcg_at_1_std value: -11.362582460965157 - type: nauc_ndcg_at_20_diff1 value: 8.988492318291843 - type: nauc_ndcg_at_20_max value: -2.420084878132361 - type: nauc_ndcg_at_20_std value: -11.648341662365178 - type: nauc_ndcg_at_3_diff1 value: 7.7688424441042585 - type: nauc_ndcg_at_3_max value: -4.544011121759494 - type: nauc_ndcg_at_3_std value: -12.554960539771004 - type: nauc_ndcg_at_5_diff1 value: 7.467185712528959 - type: nauc_ndcg_at_5_max value: -4.292286418745977 - type: nauc_ndcg_at_5_std value: -13.212953784536655 - type: nauc_precision_at_1000_diff1 value: -8.87724002766174 - type: nauc_precision_at_1000_max value: 1.1191140416885268 - type: nauc_precision_at_1000_std value: 61.15556351251649 - type: nauc_precision_at_100_diff1 value: 19.226839642026334 - type: nauc_precision_at_100_max value: 40.96524244310276 - type: nauc_precision_at_100_std value: 34.93790376379203 - type: nauc_precision_at_10_diff1 value: -1.5820920560168286 - type: nauc_precision_at_10_max value: 1.0112918643622166 - type: nauc_precision_at_10_std value: -16.265324019859303 - type: nauc_precision_at_1_diff1 value: 12.75385271339225 - type: nauc_precision_at_1_max value: -5.5298282139755575 - type: nauc_precision_at_1_std value: -11.362582460965157 - type: nauc_precision_at_20_diff1 value: 11.343841852266277 - type: nauc_precision_at_20_max value: 22.319702129641623 - type: nauc_precision_at_20_std value: -0.38946935027592583 - type: nauc_precision_at_3_diff1 value: 4.565527213195263 - type: nauc_precision_at_3_max value: -4.354252001141582 - type: nauc_precision_at_3_std value: -13.344816310957258 - type: nauc_precision_at_5_diff1 value: 3.224959087808944 - type: nauc_precision_at_5_max value: -3.561501622611176 - type: nauc_precision_at_5_std value: -15.33681368286641 - type: nauc_recall_at_1000_diff1 value: -8.877240027663182 - type: nauc_recall_at_1000_max value: 1.119114041683115 - type: nauc_recall_at_1000_std value: 61.15556351251611 - type: nauc_recall_at_100_diff1 value: 19.226839642023744 - type: nauc_recall_at_100_max value: 40.965242443100955 - type: nauc_recall_at_100_std value: 34.93790376379083 - type: nauc_recall_at_10_diff1 value: -1.5820920560167417 - type: nauc_recall_at_10_max value: 1.0112918643623434 - type: nauc_recall_at_10_std value: -16.265324019859182 - type: nauc_recall_at_1_diff1 value: 12.75385271339225 - type: nauc_recall_at_1_max value: -5.5298282139755575 - type: nauc_recall_at_1_std value: -11.362582460965157 - type: nauc_recall_at_20_diff1 value: 11.343841852266326 - type: nauc_recall_at_20_max value: 22.319702129641623 - type: nauc_recall_at_20_std value: -0.3894693502758765 - type: nauc_recall_at_3_diff1 value: 4.56552721319533 - type: nauc_recall_at_3_max value: -4.354252001141524 - type: nauc_recall_at_3_std value: -13.344816310957178 - type: nauc_recall_at_5_diff1 value: 3.224959087808929 - type: nauc_recall_at_5_max value: -3.561501622611073 - type: nauc_recall_at_5_std value: -15.336813682866316 - type: ndcg_at_1 value: 26.529000000000003 - type: ndcg_at_10 value: 53.351000000000006 - type: ndcg_at_100 value: 56.989999999999995 - type: ndcg_at_1000 value: 57.06099999999999 - type: ndcg_at_20 value: 55.832 - type: ndcg_at_3 value: 42.635 - type: ndcg_at_5 value: 47.798 - type: precision_at_1 value: 26.529000000000003 - type: precision_at_10 value: 8.385 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.6690000000000005 - type: precision_at_3 value: 18.065 - type: precision_at_5 value: 13.357 - type: recall_at_1 value: 26.529000000000003 - type: recall_at_10 value: 83.855 - type: recall_at_100 value: 99.14699999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_20 value: 93.38499999999999 - type: recall_at_3 value: 54.196 - type: recall_at_5 value: 66.78500000000001 - task: type: Classification dataset: name: MTEB CBD type: PL-MTEB/cbd config: default split: test revision: 36ddb419bcffe6a5374c3891957912892916f28d metrics: - type: accuracy value: 82.63999999999999 - type: ap value: 37.68239965062571 - type: ap_weighted value: 37.68239965062571 - type: f1 value: 72.69572425169251 - type: f1_weighted value: 84.72936692258361 - type: main_score value: 82.63999999999999 - task: type: PairClassification dataset: name: MTEB CDSC-E type: PL-MTEB/cdsce-pairclassification config: default split: test revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d metrics: - type: cosine_accuracy value: 88.2 - type: cosine_accuracy_threshold value: 94.6662962436676 - type: cosine_ap value: 74.22222496320697 - type: cosine_f1 value: 66.15384615384615 - type: cosine_f1_threshold value: 92.2417163848877 - type: cosine_precision value: 64.5 - type: cosine_recall value: 67.89473684210526 - type: dot_accuracy value: 88.2 - type: dot_accuracy_threshold value: 94.66629028320312 - type: dot_ap value: 74.22222496320697 - type: dot_f1 value: 66.15384615384615 - type: dot_f1_threshold value: 92.2417163848877 - type: dot_precision value: 64.5 - type: dot_recall value: 67.89473684210526 - type: euclidean_accuracy value: 88.2 - type: euclidean_accuracy_threshold value: 32.66091346740723 - type: euclidean_ap value: 74.22222496320697 - type: euclidean_f1 value: 66.15384615384615 - type: euclidean_f1_threshold value: 39.39100503921509 - type: euclidean_precision value: 64.5 - type: euclidean_recall value: 67.89473684210526 - type: main_score value: 74.36531507975964 - type: manhattan_accuracy value: 88.2 - type: manhattan_accuracy_threshold value: 1549.806785583496 - type: manhattan_ap value: 74.36531507975964 - type: manhattan_f1 value: 66.15384615384615 - type: manhattan_f1_threshold value: 1878.4736633300781 - type: manhattan_precision value: 64.5 - type: manhattan_recall value: 67.89473684210526 - type: max_ap value: 74.36531507975964 - type: max_f1 value: 66.15384615384615 - type: max_precision value: 64.5 - type: max_recall value: 67.89473684210526 - type: similarity_accuracy value: 88.2 - type: similarity_accuracy_threshold value: 94.6662962436676 - type: similarity_ap value: 74.22222496320697 - type: similarity_f1 value: 66.15384615384615 - type: similarity_f1_threshold value: 92.2417163848877 - type: similarity_precision value: 64.5 - type: similarity_recall value: 67.89473684210526 - task: type: STS dataset: name: MTEB CDSC-R type: PL-MTEB/cdscr-sts config: default split: test revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd metrics: - type: cosine_pearson value: 92.75973491857985 - type: cosine_spearman value: 92.4445246590692 - type: euclidean_pearson value: 90.98932706522189 - type: euclidean_spearman value: 92.44441114690339 - type: main_score value: 92.4445246590692 - type: manhattan_pearson value: 91.03239818337802 - type: manhattan_spearman value: 92.48485691295049 - type: pearson value: 92.75973491857985 - type: spearman value: 92.4445246590692 - task: type: Clustering dataset: name: MTEB 8TagsClustering type: PL-MTEB/8tags-clustering config: default split: test revision: 78b962b130c6690659c65abf67bf1c2f030606b6 metrics: - type: main_score value: 53.60989415215326 - type: v_measure value: 53.60989415215326 - type: v_measure_std value: 2.313378085094977 - task: type: Retrieval dataset: name: MTEB FiQA-PL type: clarin-knext/fiqa-pl config: default split: test revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e metrics: - type: main_score value: 43.111 - type: map_at_1 value: 21.705 - type: map_at_10 value: 35.185 - type: map_at_100 value: 37.24 - type: map_at_1000 value: 37.409 - type: map_at_20 value: 36.369 - type: map_at_3 value: 31.086999999999996 - type: map_at_5 value: 33.346 - type: mrr_at_1 value: 43.20987654320987 - type: mrr_at_10 value: 52.08112874779539 - type: mrr_at_100 value: 52.92693034067337 - type: mrr_at_1000 value: 52.96985638592845 - type: mrr_at_20 value: 52.63433256924181 - type: mrr_at_3 value: 50.12860082304527 - type: mrr_at_5 value: 51.40174897119337 - type: nauc_map_at_1000_diff1 value: 45.22359169866035 - type: nauc_map_at_1000_max value: 26.964225976378625 - type: nauc_map_at_1000_std value: -1.2633276428493687 - type: nauc_map_at_100_diff1 value: 45.17522741559718 - type: nauc_map_at_100_max value: 26.8170207755648 - type: nauc_map_at_100_std value: -1.3067151571124742 - type: nauc_map_at_10_diff1 value: 44.759040662243905 - type: nauc_map_at_10_max value: 25.2398999812798 - type: nauc_map_at_10_std value: -2.6300353727754704 - type: nauc_map_at_1_diff1 value: 50.712357721644544 - type: nauc_map_at_1_max value: 18.48175989564228 - type: nauc_map_at_1_std value: -2.5668616275083886 - type: nauc_map_at_20_diff1 value: 44.88634437373346 - type: nauc_map_at_20_max value: 26.07694343780731 - type: nauc_map_at_20_std value: -2.066632684864094 - type: nauc_map_at_3_diff1 value: 44.89927178718778 - type: nauc_map_at_3_max value: 21.75924665528133 - type: nauc_map_at_3_std value: -4.243935833641743 - type: nauc_map_at_5_diff1 value: 44.822591471849584 - type: nauc_map_at_5_max value: 24.457314644900432 - type: nauc_map_at_5_std value: -2.9604058866761034 - type: nauc_mrr_at_1000_diff1 value: 54.00470787189677 - type: nauc_mrr_at_1000_max value: 36.82223347638309 - type: nauc_mrr_at_1000_std value: 0.5677137777361332 - type: nauc_mrr_at_100_diff1 value: 54.00809810037448 - type: nauc_mrr_at_100_max value: 36.82057634428283 - type: nauc_mrr_at_100_std value: 0.5937776062605836 - type: nauc_mrr_at_10_diff1 value: 53.913976617266876 - type: nauc_mrr_at_10_max value: 36.78443629024914 - type: nauc_mrr_at_10_std value: 0.3156405683490351 - type: nauc_mrr_at_1_diff1 value: 59.548220722261 - type: nauc_mrr_at_1_max value: 36.480987777448576 - type: nauc_mrr_at_1_std value: -0.19083615874029042 - type: nauc_mrr_at_20_diff1 value: 53.81493087917239 - type: nauc_mrr_at_20_max value: 36.77603799391825 - type: nauc_mrr_at_20_std value: 0.44387937560742335 - type: nauc_mrr_at_3_diff1 value: 54.30581644430954 - type: nauc_mrr_at_3_max value: 36.3988298638316 - type: nauc_mrr_at_3_std value: -0.7870642848532561 - type: nauc_mrr_at_5_diff1 value: 54.134566429387846 - type: nauc_mrr_at_5_max value: 37.24697804792816 - type: nauc_mrr_at_5_std value: 0.6599484143161592 - type: nauc_ndcg_at_1000_diff1 value: 46.86756299523301 - type: nauc_ndcg_at_1000_max value: 32.47579882407152 - type: nauc_ndcg_at_1000_std value: 2.9212493033536395 - type: nauc_ndcg_at_100_diff1 value: 46.49674811422101 - type: nauc_ndcg_at_100_max value: 30.90807918981533 - type: nauc_ndcg_at_100_std value: 3.0639785859945508 - type: nauc_ndcg_at_10_diff1 value: 45.095057667243815 - type: nauc_ndcg_at_10_max value: 27.820331872338212 - type: nauc_ndcg_at_10_std value: -1.194673973265985 - type: nauc_ndcg_at_1_diff1 value: 59.548220722261 - type: nauc_ndcg_at_1_max value: 36.480987777448576 - type: nauc_ndcg_at_1_std value: -0.19083615874029042 - type: nauc_ndcg_at_20_diff1 value: 45.00142992123534 - type: nauc_ndcg_at_20_max value: 28.488501226554703 - type: nauc_ndcg_at_20_std value: -0.3191716639403193 - type: nauc_ndcg_at_3_diff1 value: 45.31439967160271 - type: nauc_ndcg_at_3_max value: 29.94608938092995 - type: nauc_ndcg_at_3_std value: -1.9253627902575856 - type: nauc_ndcg_at_5_diff1 value: 45.45846426730726 - type: nauc_ndcg_at_5_max value: 29.38932093491733 - type: nauc_ndcg_at_5_std value: -0.9085140563777799 - type: nauc_precision_at_1000_diff1 value: 3.7560699954595695 - type: nauc_precision_at_1000_max value: 35.240018162324894 - type: nauc_precision_at_1000_std value: 15.003533078217071 - type: nauc_precision_at_100_diff1 value: 11.077365718773592 - type: nauc_precision_at_100_max value: 37.20336505058565 - type: nauc_precision_at_100_std value: 17.346890083595074 - type: nauc_precision_at_10_diff1 value: 21.8215360274433 - type: nauc_precision_at_10_max value: 35.93379458870689 - type: nauc_precision_at_10_std value: 6.090338171659745 - type: nauc_precision_at_1_diff1 value: 59.548220722261 - type: nauc_precision_at_1_max value: 36.480987777448576 - type: nauc_precision_at_1_std value: -0.19083615874029042 - type: nauc_precision_at_20_diff1 value: 17.36401943339932 - type: nauc_precision_at_20_max value: 37.069187376602926 - type: nauc_precision_at_20_std value: 10.266419255060816 - type: nauc_precision_at_3_diff1 value: 31.39256423859058 - type: nauc_precision_at_3_max value: 34.15678019686601 - type: nauc_precision_at_3_std value: 0.02756542022676699 - type: nauc_precision_at_5_diff1 value: 26.23362958027557 - type: nauc_precision_at_5_max value: 37.9855390258922 - type: nauc_precision_at_5_std value: 5.470421998388935 - type: nauc_recall_at_1000_diff1 value: 31.350193187513618 - type: nauc_recall_at_1000_max value: 33.95845031501462 - type: nauc_recall_at_1000_std value: 35.21124266753162 - type: nauc_recall_at_100_diff1 value: 33.30267303607164 - type: nauc_recall_at_100_max value: 21.433003016848104 - type: nauc_recall_at_100_std value: 18.222213857455774 - type: nauc_recall_at_10_diff1 value: 32.89154280626735 - type: nauc_recall_at_10_max value: 18.810546084237014 - type: nauc_recall_at_10_std value: -1.2240791994400735 - type: nauc_recall_at_1_diff1 value: 50.712357721644544 - type: nauc_recall_at_1_max value: 18.48175989564228 - type: nauc_recall_at_1_std value: -2.5668616275083886 - type: nauc_recall_at_20_diff1 value: 29.873966057145047 - type: nauc_recall_at_20_max value: 16.89336942784055 - type: nauc_recall_at_20_std value: 0.21329104768110707 - type: nauc_recall_at_3_diff1 value: 35.59346624099742 - type: nauc_recall_at_3_max value: 17.84711771266179 - type: nauc_recall_at_3_std value: -4.199925899836503 - type: nauc_recall_at_5_diff1 value: 34.7713738660007 - type: nauc_recall_at_5_max value: 21.272448666890547 - type: nauc_recall_at_5_std value: -0.5108237688536543 - type: ndcg_at_1 value: 43.21 - type: ndcg_at_10 value: 43.111 - type: ndcg_at_100 value: 50.259 - type: ndcg_at_1000 value: 53.007000000000005 - type: ndcg_at_20 value: 46.06 - type: ndcg_at_3 value: 40.17 - type: ndcg_at_5 value: 40.952 - type: precision_at_1 value: 43.21 - type: precision_at_10 value: 11.744 - type: precision_at_100 value: 1.9009999999999998 - type: precision_at_1000 value: 0.24 - type: precision_at_20 value: 7.106 - type: precision_at_3 value: 27.006000000000004 - type: precision_at_5 value: 19.506 - type: recall_at_1 value: 21.705 - type: recall_at_10 value: 49.275000000000006 - type: recall_at_100 value: 75.638 - type: recall_at_1000 value: 91.81899999999999 - type: recall_at_20 value: 58.35900000000001 - type: recall_at_3 value: 36.636 - type: recall_at_5 value: 42.143 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 79.99327505043712 - type: f1 value: 77.0333311593554 - type: f1_weighted value: 79.28333714977292 - type: main_score value: 79.99327505043712 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 87.955615332885 - type: f1 value: 86.41797268341179 - type: f1_weighted value: 87.5309539428662 - type: main_score value: 87.955615332885 - task: type: Retrieval dataset: name: MTEB NFCorpus-PL type: clarin-knext/nfcorpus-pl config: default split: test revision: 9a6f9567fda928260afed2de480d79c98bf0bec0 metrics: - type: main_score value: 37.181999999999995 - type: map_at_1 value: 6.005 - type: map_at_10 value: 14.035 - type: map_at_100 value: 17.738 - type: map_at_1000 value: 19.255 - type: map_at_20 value: 15.568000000000001 - type: map_at_3 value: 10.358 - type: map_at_5 value: 11.913 - type: mrr_at_1 value: 47.6780185758514 - type: mrr_at_10 value: 57.203425229741015 - type: mrr_at_100 value: 57.56621767702782 - type: mrr_at_1000 value: 57.60137092760998 - type: mrr_at_20 value: 57.383094631546626 - type: mrr_at_3 value: 54.79876160990713 - type: mrr_at_5 value: 56.25386996904025 - type: nauc_map_at_1000_diff1 value: 30.15920658261017 - type: nauc_map_at_1000_max value: 18.697844335779195 - type: nauc_map_at_1000_std value: 17.504145978979857 - type: nauc_map_at_100_diff1 value: 31.69485366172891 - type: nauc_map_at_100_max value: 17.070627131201345 - type: nauc_map_at_100_std value: 13.976975783039036 - type: nauc_map_at_10_diff1 value: 34.59687778994698 - type: nauc_map_at_10_max value: 10.736547255226872 - type: nauc_map_at_10_std value: 2.876051483299374 - type: nauc_map_at_1_diff1 value: 50.74344329574154 - type: nauc_map_at_1_max value: 0.9762792036654588 - type: nauc_map_at_1_std value: -7.655812444165831 - type: nauc_map_at_20_diff1 value: 32.93670297540166 - type: nauc_map_at_20_max value: 13.528817285383326 - type: nauc_map_at_20_std value: 7.845597968128404 - type: nauc_map_at_3_diff1 value: 40.731103498765044 - type: nauc_map_at_3_max value: 5.530076642266395 - type: nauc_map_at_3_std value: -4.307688798634782 - type: nauc_map_at_5_diff1 value: 37.08822769221841 - type: nauc_map_at_5_max value: 6.864140042218396 - type: nauc_map_at_5_std value: -2.5076272546091527 - type: nauc_mrr_at_1000_diff1 value: 37.81105836490243 - type: nauc_mrr_at_1000_max value: 34.49642655690039 - type: nauc_mrr_at_1000_std value: 29.413769393575844 - type: nauc_mrr_at_100_diff1 value: 37.81676746183447 - type: nauc_mrr_at_100_max value: 34.53403764840245 - type: nauc_mrr_at_100_std value: 29.445462765835423 - type: nauc_mrr_at_10_diff1 value: 37.8833741267051 - type: nauc_mrr_at_10_max value: 34.29866132661342 - type: nauc_mrr_at_10_std value: 29.26732666777465 - type: nauc_mrr_at_1_diff1 value: 39.23520764077875 - type: nauc_mrr_at_1_max value: 28.28649166679672 - type: nauc_mrr_at_1_std value: 23.527226416504867 - type: nauc_mrr_at_20_diff1 value: 37.76721134632054 - type: nauc_mrr_at_20_max value: 34.422720184681076 - type: nauc_mrr_at_20_std value: 29.3572353131936 - type: nauc_mrr_at_3_diff1 value: 38.22286912669354 - type: nauc_mrr_at_3_max value: 33.11028988697281 - type: nauc_mrr_at_3_std value: 28.16159032000311 - type: nauc_mrr_at_5_diff1 value: 37.26666908285567 - type: nauc_mrr_at_5_max value: 33.658380105419745 - type: nauc_mrr_at_5_std value: 28.489355304113534 - type: nauc_ndcg_at_1000_diff1 value: 29.28755066222835 - type: nauc_ndcg_at_1000_max value: 38.04786730883246 - type: nauc_ndcg_at_1000_std value: 35.155681103285744 - type: nauc_ndcg_at_100_diff1 value: 26.83688959797987 - type: nauc_ndcg_at_100_max value: 31.007017064637775 - type: nauc_ndcg_at_100_std value: 30.72150464678299 - type: nauc_ndcg_at_10_diff1 value: 24.56743424095603 - type: nauc_ndcg_at_10_max value: 27.90661269311946 - type: nauc_ndcg_at_10_std value: 28.7666082225066 - type: nauc_ndcg_at_1_diff1 value: 38.70206364179928 - type: nauc_ndcg_at_1_max value: 27.190297115274358 - type: nauc_ndcg_at_1_std value: 22.95639446904536 - type: nauc_ndcg_at_20_diff1 value: 24.18819142115177 - type: nauc_ndcg_at_20_max value: 27.703281828683686 - type: nauc_ndcg_at_20_std value: 29.439571642165376 - type: nauc_ndcg_at_3_diff1 value: 30.1805938823191 - type: nauc_ndcg_at_3_max value: 28.137969889145666 - type: nauc_ndcg_at_3_std value: 26.67201910505581 - type: nauc_ndcg_at_5_diff1 value: 26.36616187102256 - type: nauc_ndcg_at_5_max value: 27.064033602387582 - type: nauc_ndcg_at_5_std value: 27.20083837477969 - type: nauc_precision_at_1000_diff1 value: -15.762617643754536 - type: nauc_precision_at_1000_max value: 12.086256164314872 - type: nauc_precision_at_1000_std value: 37.36805458026991 - type: nauc_precision_at_100_diff1 value: -8.924734504714456 - type: nauc_precision_at_100_max value: 20.867005238645664 - type: nauc_precision_at_100_std value: 44.79218976079051 - type: nauc_precision_at_10_diff1 value: 5.623748010617045 - type: nauc_precision_at_10_max value: 29.959820187901148 - type: nauc_precision_at_10_std value: 39.254005672411154 - type: nauc_precision_at_1_diff1 value: 39.23520764077875 - type: nauc_precision_at_1_max value: 28.28649166679672 - type: nauc_precision_at_1_std value: 23.527226416504867 - type: nauc_precision_at_20_diff1 value: 0.7933677746897775 - type: nauc_precision_at_20_max value: 28.309442094141613 - type: nauc_precision_at_20_std value: 42.99301197517682 - type: nauc_precision_at_3_diff1 value: 21.397369218557998 - type: nauc_precision_at_3_max value: 30.09568921070654 - type: nauc_precision_at_3_std value: 31.27635832902314 - type: nauc_precision_at_5_diff1 value: 11.718010513653386 - type: nauc_precision_at_5_max value: 29.17333558510002 - type: nauc_precision_at_5_std value: 34.16919196896968 - type: nauc_recall_at_1000_diff1 value: 16.074024810442996 - type: nauc_recall_at_1000_max value: 24.8803187111926 - type: nauc_recall_at_1000_std value: 22.112351910557678 - type: nauc_recall_at_100_diff1 value: 18.640353798770423 - type: nauc_recall_at_100_max value: 22.022461574613477 - type: nauc_recall_at_100_std value: 21.275712625249728 - type: nauc_recall_at_10_diff1 value: 24.57564999742599 - type: nauc_recall_at_10_max value: 10.159393639399559 - type: nauc_recall_at_10_std value: 1.715146528962189 - type: nauc_recall_at_1_diff1 value: 50.74344329574154 - type: nauc_recall_at_1_max value: 0.9762792036654588 - type: nauc_recall_at_1_std value: -7.655812444165831 - type: nauc_recall_at_20_diff1 value: 23.410240763178415 - type: nauc_recall_at_20_max value: 14.59215011515735 - type: nauc_recall_at_20_std value: 7.87344552929364 - type: nauc_recall_at_3_diff1 value: 35.52933766101892 - type: nauc_recall_at_3_max value: 4.567057901941034 - type: nauc_recall_at_3_std value: -4.83364773944478 - type: nauc_recall_at_5_diff1 value: 28.71866842031599 - type: nauc_recall_at_5_max value: 5.501045118217177 - type: nauc_recall_at_5_std value: -4.12703909487824 - type: ndcg_at_1 value: 45.356 - type: ndcg_at_10 value: 37.181999999999995 - type: ndcg_at_100 value: 33.759 - type: ndcg_at_1000 value: 42.369 - type: ndcg_at_20 value: 34.437 - type: ndcg_at_3 value: 42.692 - type: ndcg_at_5 value: 40.467 - type: precision_at_1 value: 47.678 - type: precision_at_10 value: 27.647 - type: precision_at_100 value: 8.563 - type: precision_at_1000 value: 2.157 - type: precision_at_20 value: 20.341 - type: precision_at_3 value: 40.351 - type: precision_at_5 value: 35.356 - type: recall_at_1 value: 6.005 - type: recall_at_10 value: 18.302 - type: recall_at_100 value: 33.742 - type: recall_at_1000 value: 64.893 - type: recall_at_20 value: 21.741 - type: recall_at_3 value: 11.44 - type: recall_at_5 value: 14.069999999999999 - task: type: Classification dataset: name: MTEB PAC type: laugustyniak/abusive-clauses-pl config: default split: test revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 metrics: - type: accuracy value: 68.87054735013032 - type: ap value: 77.08014124599376 - type: ap_weighted value: 77.08014124599376 - type: f1 value: 66.18723905427973 - type: f1_weighted value: 69.37126957872458 - type: main_score value: 68.87054735013032 - task: type: PairClassification dataset: name: MTEB PSC type: PL-MTEB/psc-pairclassification config: default split: test revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 metrics: - type: cosine_accuracy value: 98.88682745825604 - type: cosine_accuracy_threshold value: 74.04214143753052 - type: cosine_ap value: 99.19691317424578 - type: cosine_f1 value: 98.17629179331307 - type: cosine_f1_threshold value: 74.04214143753052 - type: cosine_precision value: 97.87878787878788 - type: cosine_recall value: 98.47560975609755 - type: dot_accuracy value: 98.88682745825604 - type: dot_accuracy_threshold value: 74.04214143753052 - type: dot_ap value: 99.19691317424578 - type: dot_f1 value: 98.17629179331307 - type: dot_f1_threshold value: 74.04214143753052 - type: dot_precision value: 97.87878787878788 - type: dot_recall value: 98.47560975609755 - type: euclidean_accuracy value: 98.88682745825604 - type: euclidean_accuracy_threshold value: 72.0522403717041 - type: euclidean_ap value: 99.19691317424578 - type: euclidean_f1 value: 98.17629179331307 - type: euclidean_f1_threshold value: 72.0522403717041 - type: euclidean_precision value: 97.87878787878788 - type: euclidean_recall value: 98.47560975609755 - type: main_score value: 99.19691317424578 - type: manhattan_accuracy value: 98.88682745825604 - type: manhattan_accuracy_threshold value: 3419.777297973633 - type: manhattan_ap value: 99.16455633817671 - type: manhattan_f1 value: 98.18181818181819 - type: manhattan_f1_threshold value: 3466.407012939453 - type: manhattan_precision value: 97.59036144578313 - type: manhattan_recall value: 98.78048780487805 - type: max_ap value: 99.19691317424578 - type: max_f1 value: 98.18181818181819 - type: max_precision value: 97.87878787878788 - type: max_recall value: 98.78048780487805 - type: similarity_accuracy value: 98.88682745825604 - type: similarity_accuracy_threshold value: 74.04214143753052 - type: similarity_ap value: 99.19691317424578 - type: similarity_f1 value: 98.17629179331307 - type: similarity_f1_threshold value: 74.04214143753052 - type: similarity_precision value: 97.87878787878788 - type: similarity_recall value: 98.47560975609755 - task: type: Classification dataset: name: MTEB PolEmo2.0-IN type: PL-MTEB/polemo2_in config: default split: test revision: d90724373c70959f17d2331ad51fb60c71176b03 metrics: - type: accuracy value: 89.69529085872577 - type: f1 value: 85.95689330902374 - type: f1_weighted value: 88.81737709614171 - type: main_score value: 89.69529085872577 - task: type: Classification dataset: name: MTEB PolEmo2.0-OUT type: PL-MTEB/polemo2_out config: default split: test revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 metrics: - type: accuracy value: 70.54655870445343 - type: f1 value: 53.119395993492425 - type: f1_weighted value: 69.8273475674514 - type: main_score value: 70.54655870445343 - task: type: PairClassification dataset: name: MTEB PPC type: PL-MTEB/ppc-pairclassification config: default split: test revision: 2c7d2df57801a591f6b1e3aaf042e7a04ec7d9f2 metrics: - type: cosine_accuracy value: 84.2 - type: cosine_accuracy_threshold value: 92.7859902381897 - type: cosine_ap value: 93.91073870222274 - type: cosine_f1 value: 87.14632174616007 - type: cosine_f1_threshold value: 91.77231788635254 - type: cosine_precision value: 85.15007898894154 - type: cosine_recall value: 89.23841059602648 - type: dot_accuracy value: 84.2 - type: dot_accuracy_threshold value: 92.78599619865417 - type: dot_ap value: 93.91072112420935 - type: dot_f1 value: 87.14632174616007 - type: dot_f1_threshold value: 91.77231788635254 - type: dot_precision value: 85.15007898894154 - type: dot_recall value: 89.23841059602648 - type: euclidean_accuracy value: 84.2 - type: euclidean_accuracy_threshold value: 37.98415660858154 - type: euclidean_ap value: 93.91072112420935 - type: euclidean_f1 value: 87.14632174616007 - type: euclidean_f1_threshold value: 40.56519865989685 - type: euclidean_precision value: 85.15007898894154 - type: euclidean_recall value: 89.23841059602648 - type: main_score value: 93.94349693540352 - type: manhattan_accuracy value: 84.2 - type: manhattan_accuracy_threshold value: 1767.9145812988281 - type: manhattan_ap value: 93.94349693540352 - type: manhattan_f1 value: 87.18775181305399 - type: manhattan_f1_threshold value: 1931.8328857421875 - type: manhattan_precision value: 84.92935635792779 - type: manhattan_recall value: 89.56953642384106 - type: max_ap value: 93.94349693540352 - type: max_f1 value: 87.18775181305399 - type: max_precision value: 85.15007898894154 - type: max_recall value: 89.56953642384106 - type: similarity_accuracy value: 84.2 - type: similarity_accuracy_threshold value: 92.7859902381897 - type: similarity_ap value: 93.91073870222274 - type: similarity_f1 value: 87.14632174616007 - type: similarity_f1_threshold value: 91.77231788635254 - type: similarity_precision value: 85.15007898894154 - type: similarity_recall value: 89.23841059602648 - task: type: Retrieval dataset: name: MTEB Quora-PL type: clarin-knext/quora-pl config: default split: test revision: 0be27e93455051e531182b85e85e425aba12e9d4 metrics: - type: main_score value: 82.811 - type: map_at_1 value: 64.22200000000001 - type: map_at_10 value: 78.337 - type: map_at_100 value: 79.104 - type: map_at_1000 value: 79.125 - type: map_at_20 value: 78.828 - type: map_at_3 value: 75.10900000000001 - type: map_at_5 value: 77.14999999999999 - type: mrr_at_1 value: 73.9 - type: mrr_at_10 value: 81.5056150793645 - type: mrr_at_100 value: 81.72450500846445 - type: mrr_at_1000 value: 81.72739086777847 - type: mrr_at_20 value: 81.6637575872693 - type: mrr_at_3 value: 80.19166666666617 - type: mrr_at_5 value: 81.07366666666597 - type: nauc_map_at_1000_diff1 value: 72.15073993574799 - type: nauc_map_at_1000_max value: 14.104308830790208 - type: nauc_map_at_1000_std value: -35.93579764240557 - type: nauc_map_at_100_diff1 value: 72.15491641674454 - type: nauc_map_at_100_max value: 14.070622274367626 - type: nauc_map_at_100_std value: -35.98582215332103 - type: nauc_map_at_10_diff1 value: 72.37131745356795 - type: nauc_map_at_10_max value: 13.325706583324425 - type: nauc_map_at_10_std value: -37.813076604830236 - type: nauc_map_at_1_diff1 value: 76.20667245908288 - type: nauc_map_at_1_max value: 8.916795322813984 - type: nauc_map_at_1_std value: -35.04320029862817 - type: nauc_map_at_20_diff1 value: 72.23763551805725 - type: nauc_map_at_20_max value: 13.841503910593367 - type: nauc_map_at_20_std value: -36.67602262879847 - type: nauc_map_at_3_diff1 value: 72.92560407123295 - type: nauc_map_at_3_max value: 11.235639767513021 - type: nauc_map_at_3_std value: -39.647816514177855 - type: nauc_map_at_5_diff1 value: 72.48073336959445 - type: nauc_map_at_5_max value: 12.220438295805565 - type: nauc_map_at_5_std value: -39.023289819654636 - type: nauc_mrr_at_1000_diff1 value: 72.51583622774653 - type: nauc_mrr_at_1000_max value: 16.98774390273646 - type: nauc_mrr_at_1000_std value: -31.065900159207715 - type: nauc_mrr_at_100_diff1 value: 72.51452567150513 - type: nauc_mrr_at_100_max value: 16.99225053754663 - type: nauc_mrr_at_100_std value: -31.06024377902469 - type: nauc_mrr_at_10_diff1 value: 72.459717564441 - type: nauc_mrr_at_10_max value: 17.047894710156335 - type: nauc_mrr_at_10_std value: -31.163397837325657 - type: nauc_mrr_at_1_diff1 value: 74.34962315595017 - type: nauc_mrr_at_1_max value: 16.232428597703564 - type: nauc_mrr_at_1_std value: -30.33439982860624 - type: nauc_mrr_at_20_diff1 value: 72.50100848926384 - type: nauc_mrr_at_20_max value: 17.044600109831716 - type: nauc_mrr_at_20_std value: -31.041591300581707 - type: nauc_mrr_at_3_diff1 value: 72.15293314544314 - type: nauc_mrr_at_3_max value: 16.423892823355416 - type: nauc_mrr_at_3_std value: -31.62951902905381 - type: nauc_mrr_at_5_diff1 value: 72.26948977798959 - type: nauc_mrr_at_5_max value: 16.83836470409536 - type: nauc_mrr_at_5_std value: -31.469685462677237 - type: nauc_ndcg_at_1000_diff1 value: 71.73241720245922 - type: nauc_ndcg_at_1000_max value: 15.960634740217778 - type: nauc_ndcg_at_1000_std value: -33.08822605963674 - type: nauc_ndcg_at_100_diff1 value: 71.7048577282244 - type: nauc_ndcg_at_100_max value: 15.914787426350644 - type: nauc_ndcg_at_100_std value: -33.09915467535268 - type: nauc_ndcg_at_10_diff1 value: 71.72935687862247 - type: nauc_ndcg_at_10_max value: 15.285595262376422 - type: nauc_ndcg_at_10_std value: -36.114147550342466 - type: nauc_ndcg_at_1_diff1 value: 74.3095060122446 - type: nauc_ndcg_at_1_max value: 15.896165818869049 - type: nauc_ndcg_at_1_std value: -30.577849344412915 - type: nauc_ndcg_at_20_diff1 value: 71.7481385200199 - type: nauc_ndcg_at_20_max value: 15.93738616300547 - type: nauc_ndcg_at_20_std value: -34.397287767250276 - type: nauc_ndcg_at_3_diff1 value: 70.94783415394954 - type: nauc_ndcg_at_3_max value: 13.427132709599332 - type: nauc_ndcg_at_3_std value: -36.69816452579935 - type: nauc_ndcg_at_5_diff1 value: 71.19193194345956 - type: nauc_ndcg_at_5_max value: 14.063914250461268 - type: nauc_ndcg_at_5_std value: -36.931896151825406 - type: nauc_precision_at_1000_diff1 value: -40.1635795214179 - type: nauc_precision_at_1000_max value: 11.09721147807486 - type: nauc_precision_at_1000_std value: 41.5274172621687 - type: nauc_precision_at_100_diff1 value: -39.35624983064254 - type: nauc_precision_at_100_max value: 10.674814349756847 - type: nauc_precision_at_100_std value: 40.07174786563651 - type: nauc_precision_at_10_diff1 value: -27.963679171519495 - type: nauc_precision_at_10_max value: 10.689992039048768 - type: nauc_precision_at_10_std value: 22.521441809013464 - type: nauc_precision_at_1_diff1 value: 74.3095060122446 - type: nauc_precision_at_1_max value: 15.896165818869049 - type: nauc_precision_at_1_std value: -30.577849344412915 - type: nauc_precision_at_20_diff1 value: -34.28581750438746 - type: nauc_precision_at_20_max value: 10.854822103124798 - type: nauc_precision_at_20_std value: 31.646781189681008 - type: nauc_precision_at_3_diff1 value: -2.0635597236958936 - type: nauc_precision_at_3_max value: 11.135678269032631 - type: nauc_precision_at_3_std value: -0.05834309285657541 - type: nauc_precision_at_5_diff1 value: -17.3761733557746 - type: nauc_precision_at_5_max value: 10.737670282318144 - type: nauc_precision_at_5_std value: 11.533938351666164 - type: nauc_recall_at_1000_diff1 value: 58.41007062940435 - type: nauc_recall_at_1000_max value: 40.284040101660565 - type: nauc_recall_at_1000_std value: 27.268352137548433 - type: nauc_recall_at_100_diff1 value: 62.50837127034779 - type: nauc_recall_at_100_max value: 25.688183932525877 - type: nauc_recall_at_100_std value: -7.428850363161603 - type: nauc_recall_at_10_diff1 value: 66.38504668345963 - type: nauc_recall_at_10_max value: 15.817414768095706 - type: nauc_recall_at_10_std value: -43.98475863850886 - type: nauc_recall_at_1_diff1 value: 76.20667245908288 - type: nauc_recall_at_1_max value: 8.916795322813984 - type: nauc_recall_at_1_std value: -35.04320029862817 - type: nauc_recall_at_20_diff1 value: 65.48693395392615 - type: nauc_recall_at_20_max value: 21.67319398834831 - type: nauc_recall_at_20_std value: -33.8694441912123 - type: nauc_recall_at_3_diff1 value: 68.015634348564 - type: nauc_recall_at_3_max value: 8.765242984124635 - type: nauc_recall_at_3_std value: -44.21955965332914 - type: nauc_recall_at_5_diff1 value: 66.64261362254948 - type: nauc_recall_at_5_max value: 10.666104789889985 - type: nauc_recall_at_5_std value: -45.92972699563297 - type: ndcg_at_1 value: 73.92 - type: ndcg_at_10 value: 82.811 - type: ndcg_at_100 value: 84.65700000000001 - type: ndcg_at_1000 value: 84.852 - type: ndcg_at_20 value: 83.78999999999999 - type: ndcg_at_3 value: 79.182 - type: ndcg_at_5 value: 81.227 - type: precision_at_1 value: 73.92 - type: precision_at_10 value: 12.787 - type: precision_at_100 value: 1.508 - type: precision_at_1000 value: 0.155 - type: precision_at_20 value: 6.876 - type: precision_at_3 value: 34.813 - type: precision_at_5 value: 23.183999999999997 - type: recall_at_1 value: 64.22200000000001 - type: recall_at_10 value: 92.00399999999999 - type: recall_at_100 value: 98.725 - type: recall_at_1000 value: 99.811 - type: recall_at_20 value: 95.27799999999999 - type: recall_at_3 value: 81.943 - type: recall_at_5 value: 87.382 - task: type: Retrieval dataset: name: MTEB SCIDOCS-PL type: clarin-knext/scidocs-pl config: default split: test revision: 45452b03f05560207ef19149545f168e596c9337 metrics: - type: main_score value: 19.64 - type: map_at_1 value: 4.593 - type: map_at_10 value: 11.802999999999999 - type: map_at_100 value: 13.956 - type: map_at_1000 value: 14.262 - type: map_at_20 value: 12.805 - type: map_at_3 value: 8.488 - type: map_at_5 value: 10.039 - type: mrr_at_1 value: 22.6 - type: mrr_at_10 value: 33.0490079365079 - type: mrr_at_100 value: 34.13187495754542 - type: mrr_at_1000 value: 34.19881538448373 - type: mrr_at_20 value: 33.650772133782915 - type: mrr_at_3 value: 29.616666666666664 - type: mrr_at_5 value: 31.56666666666663 - type: nauc_map_at_1000_diff1 value: 14.889338930379067 - type: nauc_map_at_1000_max value: 29.678699779362898 - type: nauc_map_at_1000_std value: 15.209595818976002 - type: nauc_map_at_100_diff1 value: 14.987116811160131 - type: nauc_map_at_100_max value: 29.62387929406472 - type: nauc_map_at_100_std value: 14.872204942602071 - type: nauc_map_at_10_diff1 value: 15.149980088427023 - type: nauc_map_at_10_max value: 27.405319036412028 - type: nauc_map_at_10_std value: 11.305864303389066 - type: nauc_map_at_1_diff1 value: 19.115149776029494 - type: nauc_map_at_1_max value: 23.93855447858197 - type: nauc_map_at_1_std value: 4.954064604034524 - type: nauc_map_at_20_diff1 value: 15.076788502929347 - type: nauc_map_at_20_max value: 28.611637620849983 - type: nauc_map_at_20_std value: 12.902423005768132 - type: nauc_map_at_3_diff1 value: 15.470906644032095 - type: nauc_map_at_3_max value: 24.127557731914553 - type: nauc_map_at_3_std value: 7.102116140964592 - type: nauc_map_at_5_diff1 value: 15.23030610697015 - type: nauc_map_at_5_max value: 25.928658097022193 - type: nauc_map_at_5_std value: 8.582103580584825 - type: nauc_mrr_at_1000_diff1 value: 16.006011062512478 - type: nauc_mrr_at_1000_max value: 25.49590927977452 - type: nauc_mrr_at_1000_std value: 10.883368749777574 - type: nauc_mrr_at_100_diff1 value: 16.017226265493793 - type: nauc_mrr_at_100_max value: 25.53366020211784 - type: nauc_mrr_at_100_std value: 10.945496539449021 - type: nauc_mrr_at_10_diff1 value: 15.981763814841592 - type: nauc_mrr_at_10_max value: 25.319085276117963 - type: nauc_mrr_at_10_std value: 10.745702565030294 - type: nauc_mrr_at_1_diff1 value: 18.830268815056357 - type: nauc_mrr_at_1_max value: 24.091784960247836 - type: nauc_mrr_at_1_std value: 5.0689785519575 - type: nauc_mrr_at_20_diff1 value: 15.98142761028242 - type: nauc_mrr_at_20_max value: 25.414996424490027 - type: nauc_mrr_at_20_std value: 10.870249434505775 - type: nauc_mrr_at_3_diff1 value: 15.935825577016082 - type: nauc_mrr_at_3_max value: 24.975416181958167 - type: nauc_mrr_at_3_std value: 8.397054982253701 - type: nauc_mrr_at_5_diff1 value: 16.049933283409178 - type: nauc_mrr_at_5_max value: 25.011164358798332 - type: nauc_mrr_at_5_std value: 9.712583082431806 - type: nauc_ndcg_at_1000_diff1 value: 13.97312251549139 - type: nauc_ndcg_at_1000_max value: 32.480212255495594 - type: nauc_ndcg_at_1000_std value: 24.719537001693475 - type: nauc_ndcg_at_100_diff1 value: 14.98523996304762 - type: nauc_ndcg_at_100_max value: 32.83092196243769 - type: nauc_ndcg_at_100_std value: 22.774175882137225 - type: nauc_ndcg_at_10_diff1 value: 14.979597636735898 - type: nauc_ndcg_at_10_max value: 28.83154499526071 - type: nauc_ndcg_at_10_std value: 14.886915986702858 - type: nauc_ndcg_at_1_diff1 value: 18.830268815056357 - type: nauc_ndcg_at_1_max value: 24.091784960247836 - type: nauc_ndcg_at_1_std value: 5.0689785519575 - type: nauc_ndcg_at_20_diff1 value: 15.044456487807839 - type: nauc_ndcg_at_20_max value: 30.352751765564474 - type: nauc_ndcg_at_20_std value: 17.164045241846495 - type: nauc_ndcg_at_3_diff1 value: 14.821924589514419 - type: nauc_ndcg_at_3_max value: 25.104174834821553 - type: nauc_ndcg_at_3_std value: 8.885623249512804 - type: nauc_ndcg_at_5_diff1 value: 15.110572704518827 - type: nauc_ndcg_at_5_max value: 26.76255729581363 - type: nauc_ndcg_at_5_std value: 11.045567973978304 - type: nauc_precision_at_1000_diff1 value: 4.592941553248459 - type: nauc_precision_at_1000_max value: 28.91742506301694 - type: nauc_precision_at_1000_std value: 42.80815708996671 - type: nauc_precision_at_100_diff1 value: 11.02867951986886 - type: nauc_precision_at_100_max value: 33.754597322029554 - type: nauc_precision_at_100_std value: 34.60409801344883 - type: nauc_precision_at_10_diff1 value: 12.886287930179115 - type: nauc_precision_at_10_max value: 30.26046149017384 - type: nauc_precision_at_10_std value: 20.20421800803912 - type: nauc_precision_at_1_diff1 value: 18.830268815056357 - type: nauc_precision_at_1_max value: 24.091784960247836 - type: nauc_precision_at_1_std value: 5.0689785519575 - type: nauc_precision_at_20_diff1 value: 12.422965088689745 - type: nauc_precision_at_20_max value: 31.51632250639963 - type: nauc_precision_at_20_std value: 23.43856563765108 - type: nauc_precision_at_3_diff1 value: 13.289539531613256 - type: nauc_precision_at_3_max value: 25.889785950931497 - type: nauc_precision_at_3_std value: 11.09559992651764 - type: nauc_precision_at_5_diff1 value: 13.50817046296161 - type: nauc_precision_at_5_max value: 27.698708549131336 - type: nauc_precision_at_5_std value: 14.234534631242227 - type: nauc_recall_at_1000_diff1 value: 5.300698253784848 - type: nauc_recall_at_1000_max value: 29.512940206910077 - type: nauc_recall_at_1000_std value: 44.1202381373532 - type: nauc_recall_at_100_diff1 value: 11.387402837406217 - type: nauc_recall_at_100_max value: 33.86033221972651 - type: nauc_recall_at_100_std value: 34.81866892882947 - type: nauc_recall_at_10_diff1 value: 12.864590539302249 - type: nauc_recall_at_10_max value: 30.057799171898708 - type: nauc_recall_at_10_std value: 20.034456607727808 - type: nauc_recall_at_1_diff1 value: 19.115149776029494 - type: nauc_recall_at_1_max value: 23.93855447858197 - type: nauc_recall_at_1_std value: 4.954064604034524 - type: nauc_recall_at_20_diff1 value: 12.461041262909534 - type: nauc_recall_at_20_max value: 31.361291615365595 - type: nauc_recall_at_20_std value: 23.398932591021687 - type: nauc_recall_at_3_diff1 value: 13.514453581030756 - type: nauc_recall_at_3_max value: 25.891366383279905 - type: nauc_recall_at_3_std value: 11.040082969426374 - type: nauc_recall_at_5_diff1 value: 13.550650593088967 - type: nauc_recall_at_5_max value: 27.42420920157467 - type: nauc_recall_at_5_std value: 13.937212248229283 - type: ndcg_at_1 value: 22.6 - type: ndcg_at_10 value: 19.64 - type: ndcg_at_100 value: 27.938000000000002 - type: ndcg_at_1000 value: 33.183 - type: ndcg_at_20 value: 22.399 - type: ndcg_at_3 value: 18.667 - type: ndcg_at_5 value: 16.226 - type: precision_at_1 value: 22.6 - type: precision_at_10 value: 10.16 - type: precision_at_100 value: 2.212 - type: precision_at_1000 value: 0.347 - type: precision_at_20 value: 6.675000000000001 - type: precision_at_3 value: 17.467 - type: precision_at_5 value: 14.180000000000001 - type: recall_at_1 value: 4.593 - type: recall_at_10 value: 20.607 - type: recall_at_100 value: 44.95 - type: recall_at_1000 value: 70.378 - type: recall_at_20 value: 27.075 - type: recall_at_3 value: 10.628 - type: recall_at_5 value: 14.402999999999999 - task: type: PairClassification dataset: name: MTEB SICK-E-PL type: PL-MTEB/sicke-pl-pairclassification config: default split: test revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 metrics: - type: cosine_accuracy value: 84.18263350998777 - type: cosine_accuracy_threshold value: 93.94426345825195 - type: cosine_ap value: 79.28096611204025 - type: cosine_f1 value: 71.58974358974358 - type: cosine_f1_threshold value: 92.83782243728638 - type: cosine_precision value: 68.83629191321499 - type: cosine_recall value: 74.57264957264957 - type: dot_accuracy value: 84.18263350998777 - type: dot_accuracy_threshold value: 93.94427537918091 - type: dot_ap value: 79.28096474865434 - type: dot_f1 value: 71.58974358974358 - type: dot_f1_threshold value: 92.83782243728638 - type: dot_precision value: 68.83629191321499 - type: dot_recall value: 74.57264957264957 - type: euclidean_accuracy value: 84.18263350998777 - type: euclidean_accuracy_threshold value: 34.80154275894165 - type: euclidean_ap value: 79.2809495685454 - type: euclidean_f1 value: 71.58974358974358 - type: euclidean_f1_threshold value: 37.84753084182739 - type: euclidean_precision value: 68.83629191321499 - type: euclidean_recall value: 74.57264957264957 - type: main_score value: 79.28410418410681 - type: manhattan_accuracy value: 84.12148389726865 - type: manhattan_accuracy_threshold value: 1590.3039932250977 - type: manhattan_ap value: 79.28410418410681 - type: manhattan_f1 value: 71.56462585034014 - type: manhattan_f1_threshold value: 1802.0807266235352 - type: manhattan_precision value: 68.48958333333334 - type: manhattan_recall value: 74.92877492877493 - type: max_ap value: 79.28410418410681 - type: max_f1 value: 71.58974358974358 - type: max_precision value: 68.83629191321499 - type: max_recall value: 74.92877492877493 - type: similarity_accuracy value: 84.18263350998777 - type: similarity_accuracy_threshold value: 93.94426345825195 - type: similarity_ap value: 79.28096611204025 - type: similarity_f1 value: 71.58974358974358 - type: similarity_f1_threshold value: 92.83782243728638 - type: similarity_precision value: 68.83629191321499 - type: similarity_recall value: 74.57264957264957 - task: type: STS dataset: name: MTEB SICK-R-PL type: PL-MTEB/sickr-pl-sts config: default split: test revision: fd5c2441b7eeff8676768036142af4cfa42c1339 metrics: - type: cosine_pearson value: 79.73051490860105 - type: cosine_spearman value: 76.47752563673201 - type: euclidean_pearson value: 77.33537446268512 - type: euclidean_spearman value: 76.47750123747478 - type: main_score value: 76.47752563673201 - type: manhattan_pearson value: 77.36069879391584 - type: manhattan_spearman value: 76.51402354965752 - type: pearson value: 79.73051490860105 - type: spearman value: 76.47752563673201 - task: type: STS dataset: name: MTEB STS22 (pl) type: mteb/sts22-crosslingual-sts config: pl split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 47.431980546750964 - type: cosine_spearman value: 49.746157076230524 - type: euclidean_pearson value: 32.36421008785651 - type: euclidean_spearman value: 49.63851830055781 - type: main_score value: 49.746157076230524 - type: manhattan_pearson value: 32.363921235575155 - type: manhattan_spearman value: 49.69047212448613 - type: pearson value: 47.431980546750964 - type: spearman value: 49.746157076230524 - task: type: STS dataset: name: MTEB STS22 (de-pl) type: mteb/sts22-crosslingual-sts config: de-pl split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 56.06369132662453 - type: cosine_spearman value: 62.943147553238774 - type: euclidean_pearson value: 57.49805169961923 - type: euclidean_spearman value: 62.943147553238774 - type: main_score value: 62.943147553238774 - type: manhattan_pearson value: 57.25940410817918 - type: manhattan_spearman value: 62.204089069247715 - type: pearson value: 56.06369132662453 - type: spearman value: 62.943147553238774 - task: type: STS dataset: name: MTEB STS22 (fr-pl) type: mteb/sts22-crosslingual-sts config: fr-pl split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 72.12905800148313 - type: cosine_spearman value: 84.51542547285167 - type: euclidean_pearson value: 73.97515141793754 - type: euclidean_spearman value: 84.51542547285167 - type: main_score value: 84.51542547285167 - type: manhattan_pearson value: 72.84864088669735 - type: manhattan_spearman value: 73.24670207647144 - type: pearson value: 72.12905800148313 - type: spearman value: 84.51542547285167 - task: type: Retrieval dataset: name: MTEB SciFact-PL type: clarin-knext/scifact-pl config: default split: test revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e metrics: - type: main_score value: 76.91799999999999 - type: map_at_1 value: 61.760999999999996 - type: map_at_10 value: 72.191 - type: map_at_100 value: 72.57900000000001 - type: map_at_1000 value: 72.598 - type: map_at_20 value: 72.465 - type: map_at_3 value: 69.587 - type: map_at_5 value: 71.04899999999999 - type: mrr_at_1 value: 65.0 - type: mrr_at_10 value: 73.34603174603174 - type: mrr_at_100 value: 73.62227068925023 - type: mrr_at_1000 value: 73.64183781066299 - type: mrr_at_20 value: 73.54541985791985 - type: mrr_at_3 value: 71.33333333333334 - type: mrr_at_5 value: 72.61666666666666 - type: nauc_map_at_1000_diff1 value: 66.21840705486845 - type: nauc_map_at_1000_max value: 45.44324551344031 - type: nauc_map_at_1000_std value: -1.0814688355317972 - type: nauc_map_at_100_diff1 value: 66.22462678734968 - type: nauc_map_at_100_max value: 45.45863454979088 - type: nauc_map_at_100_std value: -1.0771139272521286 - type: nauc_map_at_10_diff1 value: 66.08826385237604 - type: nauc_map_at_10_max value: 45.31753671800737 - type: nauc_map_at_10_std value: -1.6275799430529103 - type: nauc_map_at_1_diff1 value: 70.42285626474877 - type: nauc_map_at_1_max value: 39.50179057561205 - type: nauc_map_at_1_std value: -8.851136260159194 - type: nauc_map_at_20_diff1 value: 66.25643144656225 - type: nauc_map_at_20_max value: 45.47864675571612 - type: nauc_map_at_20_std value: -1.0906184744215628 - type: nauc_map_at_3_diff1 value: 66.40622522282507 - type: nauc_map_at_3_max value: 43.1139944072993 - type: nauc_map_at_3_std value: -3.2097290531891627 - type: nauc_map_at_5_diff1 value: 66.17920924370715 - type: nauc_map_at_5_max value: 43.818417943365134 - type: nauc_map_at_5_std value: -3.5794735442648937 - type: nauc_mrr_at_1000_diff1 value: 65.83327739369703 - type: nauc_mrr_at_1000_max value: 48.372009488041705 - type: nauc_mrr_at_1000_std value: 2.115743603667452 - type: nauc_mrr_at_100_diff1 value: 65.83999237085304 - type: nauc_mrr_at_100_max value: 48.38524889954653 - type: nauc_mrr_at_100_std value: 2.1174309444353456 - type: nauc_mrr_at_10_diff1 value: 65.65190877083664 - type: nauc_mrr_at_10_max value: 48.46794744845911 - type: nauc_mrr_at_10_std value: 2.042910402700398 - type: nauc_mrr_at_1_diff1 value: 68.767732660645 - type: nauc_mrr_at_1_max value: 46.44641549353079 - type: nauc_mrr_at_1_std value: 0.9406786557083794 - type: nauc_mrr_at_20_diff1 value: 65.82687213531457 - type: nauc_mrr_at_20_max value: 48.41104900320748 - type: nauc_mrr_at_20_std value: 2.1047509246823237 - type: nauc_mrr_at_3_diff1 value: 65.3897227050321 - type: nauc_mrr_at_3_max value: 47.410165649594184 - type: nauc_mrr_at_3_std value: 2.0699855791392148 - type: nauc_mrr_at_5_diff1 value: 65.33605265864311 - type: nauc_mrr_at_5_max value: 48.19481590143297 - type: nauc_mrr_at_5_std value: 1.5028135894972638 - type: nauc_ndcg_at_1000_diff1 value: 65.57863304614911 - type: nauc_ndcg_at_1000_max value: 47.43852629649108 - type: nauc_ndcg_at_1000_std value: 0.93602139257168 - type: nauc_ndcg_at_100_diff1 value: 65.78957036924776 - type: nauc_ndcg_at_100_max value: 47.879373313112254 - type: nauc_ndcg_at_100_std value: 1.3268984011569667 - type: nauc_ndcg_at_10_diff1 value: 65.13084629929057 - type: nauc_ndcg_at_10_max value: 47.80105065332093 - type: nauc_ndcg_at_10_std value: -0.0425708066962233 - type: nauc_ndcg_at_1_diff1 value: 68.767732660645 - type: nauc_ndcg_at_1_max value: 46.44641549353079 - type: nauc_ndcg_at_1_std value: 0.9406786557083794 - type: nauc_ndcg_at_20_diff1 value: 65.79904145752494 - type: nauc_ndcg_at_20_max value: 48.13275387467153 - type: nauc_ndcg_at_20_std value: 1.286404066666757 - type: nauc_ndcg_at_3_diff1 value: 64.67185918398093 - type: nauc_ndcg_at_3_max value: 44.745883157812365 - type: nauc_ndcg_at_3_std value: -0.8556077804449875 - type: nauc_ndcg_at_5_diff1 value: 64.80551066348806 - type: nauc_ndcg_at_5_max value: 45.269268808180975 - type: nauc_ndcg_at_5_std value: -3.292440817014038 - type: nauc_precision_at_1000_diff1 value: -34.5747313468253 - type: nauc_precision_at_1000_max value: 19.86711413946244 - type: nauc_precision_at_1000_std value: 52.43703176378871 - type: nauc_precision_at_100_diff1 value: -21.848860427047484 - type: nauc_precision_at_100_max value: 25.508925305655005 - type: nauc_precision_at_100_std value: 49.06309774363955 - type: nauc_precision_at_10_diff1 value: -5.3289091663210435 - type: nauc_precision_at_10_max value: 34.20157610949811 - type: nauc_precision_at_10_std value: 38.95534356421951 - type: nauc_precision_at_1_diff1 value: 68.767732660645 - type: nauc_precision_at_1_max value: 46.44641549353079 - type: nauc_precision_at_1_std value: 0.9406786557083794 - type: nauc_precision_at_20_diff1 value: -11.795047749756987 - type: nauc_precision_at_20_max value: 31.411334133928044 - type: nauc_precision_at_20_std value: 46.83822267970486 - type: nauc_precision_at_3_diff1 value: 26.976524876425863 - type: nauc_precision_at_3_max value: 39.68201829904695 - type: nauc_precision_at_3_std value: 21.805216340932333 - type: nauc_precision_at_5_diff1 value: 12.808365852688642 - type: nauc_precision_at_5_max value: 37.20668677470474 - type: nauc_precision_at_5_std value: 24.753903742707926 - type: nauc_recall_at_1000_diff1 value: 12.278244631182748 - type: nauc_recall_at_1000_max value: 86.92810457516407 - type: nauc_recall_at_1000_std value: 35.8076563958937 - type: nauc_recall_at_100_diff1 value: 69.1923436041082 - type: nauc_recall_at_100_max value: 70.49953314659221 - type: nauc_recall_at_100_std value: 24.505135387488444 - type: nauc_recall_at_10_diff1 value: 59.73881286537836 - type: nauc_recall_at_10_max value: 56.13328320089889 - type: nauc_recall_at_10_std value: 0.9891720350741177 - type: nauc_recall_at_1_diff1 value: 70.42285626474877 - type: nauc_recall_at_1_max value: 39.50179057561205 - type: nauc_recall_at_1_std value: -8.851136260159194 - type: nauc_recall_at_20_diff1 value: 65.15817548141362 - type: nauc_recall_at_20_max value: 62.649878433221375 - type: nauc_recall_at_20_std value: 13.313642288598446 - type: nauc_recall_at_3_diff1 value: 61.331617713628084 - type: nauc_recall_at_3_max value: 41.81707619521235 - type: nauc_recall_at_3_std value: -5.674855869735362 - type: nauc_recall_at_5_diff1 value: 58.787683194756035 - type: nauc_recall_at_5_max value: 43.71237378650552 - type: nauc_recall_at_5_std value: -10.631002232674465 - type: ndcg_at_1 value: 65.0 - type: ndcg_at_10 value: 76.91799999999999 - type: ndcg_at_100 value: 78.402 - type: ndcg_at_1000 value: 78.805 - type: ndcg_at_20 value: 77.735 - type: ndcg_at_3 value: 72.437 - type: ndcg_at_5 value: 74.591 - type: precision_at_1 value: 65.0 - type: precision_at_10 value: 10.2 - type: precision_at_100 value: 1.097 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_20 value: 5.283 - type: precision_at_3 value: 28.555999999999997 - type: precision_at_5 value: 18.6 - type: recall_at_1 value: 61.760999999999996 - type: recall_at_10 value: 90.256 - type: recall_at_100 value: 96.667 - type: recall_at_1000 value: 99.667 - type: recall_at_20 value: 93.267 - type: recall_at_3 value: 77.878 - type: recall_at_5 value: 83.439 - task: type: Retrieval dataset: name: MTEB TRECCOVID-PL type: clarin-knext/trec-covid-pl config: default split: test revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd metrics: - type: main_score value: 81.144 - type: map_at_1 value: 0.248 - type: map_at_10 value: 2.157 - type: map_at_100 value: 13.716000000000001 - type: map_at_1000 value: 32.65 - type: map_at_20 value: 4.0009999999999994 - type: map_at_3 value: 0.711 - type: map_at_5 value: 1.162 - type: mrr_at_1 value: 92.0 - type: mrr_at_10 value: 96.0 - type: mrr_at_100 value: 96.0 - type: mrr_at_1000 value: 96.0 - type: mrr_at_20 value: 96.0 - type: mrr_at_3 value: 96.0 - type: mrr_at_5 value: 96.0 - type: nauc_map_at_1000_diff1 value: -28.250203594556584 - type: nauc_map_at_1000_max value: 52.08454072557133 - type: nauc_map_at_1000_std value: 79.48446545419355 - type: nauc_map_at_100_diff1 value: -16.3684798554974 - type: nauc_map_at_100_max value: 39.64355871007487 - type: nauc_map_at_100_std value: 58.517394150771196 - type: nauc_map_at_10_diff1 value: -0.9960431819837273 - type: nauc_map_at_10_max value: 19.77211920343006 - type: nauc_map_at_10_std value: 19.760464568209567 - type: nauc_map_at_1_diff1 value: 6.222121514853931 - type: nauc_map_at_1_max value: 9.04523920801192 - type: nauc_map_at_1_std value: 11.10141876326312 - type: nauc_map_at_20_diff1 value: -9.138051626246725 - type: nauc_map_at_20_max value: 22.419227957131067 - type: nauc_map_at_20_std value: 26.9756119734311 - type: nauc_map_at_3_diff1 value: 8.220277189546465 - type: nauc_map_at_3_max value: 12.162504412238127 - type: nauc_map_at_3_std value: 12.2742063914476 - type: nauc_map_at_5_diff1 value: 7.867830794993266 - type: nauc_map_at_5_max value: 14.872903453579243 - type: nauc_map_at_5_std value: 11.882541228741477 - type: nauc_mrr_at_1000_diff1 value: -4.843604108309577 - type: nauc_mrr_at_1000_max value: 13.130252100840279 - type: nauc_mrr_at_1000_std value: 61.65966386554632 - type: nauc_mrr_at_100_diff1 value: -4.843604108309577 - type: nauc_mrr_at_100_max value: 13.130252100840279 - type: nauc_mrr_at_100_std value: 61.65966386554632 - type: nauc_mrr_at_10_diff1 value: -4.843604108309577 - type: nauc_mrr_at_10_max value: 13.130252100840279 - type: nauc_mrr_at_10_std value: 61.65966386554632 - type: nauc_mrr_at_1_diff1 value: -4.843604108309911 - type: nauc_mrr_at_1_max value: 13.1302521008403 - type: nauc_mrr_at_1_std value: 61.659663865546264 - type: nauc_mrr_at_20_diff1 value: -4.843604108309577 - type: nauc_mrr_at_20_max value: 13.130252100840279 - type: nauc_mrr_at_20_std value: 61.65966386554632 - type: nauc_mrr_at_3_diff1 value: -4.843604108309577 - type: nauc_mrr_at_3_max value: 13.130252100840279 - type: nauc_mrr_at_3_std value: 61.65966386554632 - type: nauc_mrr_at_5_diff1 value: -4.843604108309577 - type: nauc_mrr_at_5_max value: 13.130252100840279 - type: nauc_mrr_at_5_std value: 61.65966386554632 - type: nauc_ndcg_at_1000_diff1 value: -25.16717752216515 - type: nauc_ndcg_at_1000_max value: 53.3418359198012 - type: nauc_ndcg_at_1000_std value: 80.65175228145466 - type: nauc_ndcg_at_100_diff1 value: -40.31785420558625 - type: nauc_ndcg_at_100_max value: 45.09546071865451 - type: nauc_ndcg_at_100_std value: 75.40895234974869 - type: nauc_ndcg_at_10_diff1 value: -28.19826901025097 - type: nauc_ndcg_at_10_max value: 43.078646933310615 - type: nauc_ndcg_at_10_std value: 53.111454343871614 - type: nauc_ndcg_at_1_diff1 value: -9.493493773611297 - type: nauc_ndcg_at_1_max value: 35.20008395130819 - type: nauc_ndcg_at_1_std value: 57.887925003498 - type: nauc_ndcg_at_20_diff1 value: -38.26836193971236 - type: nauc_ndcg_at_20_max value: 45.20766663960982 - type: nauc_ndcg_at_20_std value: 62.27601136797132 - type: nauc_ndcg_at_3_diff1 value: -7.28345892959394 - type: nauc_ndcg_at_3_max value: 24.7974010818429 - type: nauc_ndcg_at_3_std value: 41.70371109282937 - type: nauc_ndcg_at_5_diff1 value: -11.429562197815999 - type: nauc_ndcg_at_5_max value: 30.656429493871055 - type: nauc_ndcg_at_5_std value: 39.02726195692732 - type: nauc_precision_at_1000_diff1 value: -26.345687068428212 - type: nauc_precision_at_1000_max value: 22.393157270986734 - type: nauc_precision_at_1000_std value: 31.02496274402397 - type: nauc_precision_at_100_diff1 value: -36.815660048516264 - type: nauc_precision_at_100_max value: 42.9109935968304 - type: nauc_precision_at_100_std value: 71.79298255172685 - type: nauc_precision_at_10_diff1 value: -27.611688427036103 - type: nauc_precision_at_10_max value: 49.60209610089694 - type: nauc_precision_at_10_std value: 56.93578470556877 - type: nauc_precision_at_1_diff1 value: -4.843604108309911 - type: nauc_precision_at_1_max value: 13.1302521008403 - type: nauc_precision_at_1_std value: 61.659663865546264 - type: nauc_precision_at_20_diff1 value: -39.83848492009805 - type: nauc_precision_at_20_max value: 45.76206269914346 - type: nauc_precision_at_20_std value: 64.84887501731686 - type: nauc_precision_at_3_diff1 value: 9.845318472475325 - type: nauc_precision_at_3_max value: 13.932054442182206 - type: nauc_precision_at_3_std value: 36.0518701103848 - type: nauc_precision_at_5_diff1 value: 2.9725469322580262 - type: nauc_precision_at_5_max value: 39.185620406575865 - type: nauc_precision_at_5_std value: 37.863123630929465 - type: nauc_recall_at_1000_diff1 value: -17.20671458178181 - type: nauc_recall_at_1000_max value: 45.294402552613526 - type: nauc_recall_at_1000_std value: 69.28596985796021 - type: nauc_recall_at_100_diff1 value: -6.428084883700268 - type: nauc_recall_at_100_max value: 29.391445783546548 - type: nauc_recall_at_100_std value: 45.86330770057512 - type: nauc_recall_at_10_diff1 value: 1.6670104066851845 - type: nauc_recall_at_10_max value: 16.585464661642966 - type: nauc_recall_at_10_std value: 16.699588067711847 - type: nauc_recall_at_1_diff1 value: 6.222121514853931 - type: nauc_recall_at_1_max value: 9.04523920801192 - type: nauc_recall_at_1_std value: 11.10141876326312 - type: nauc_recall_at_20_diff1 value: -4.139029194223187 - type: nauc_recall_at_20_max value: 17.943621407808084 - type: nauc_recall_at_20_std value: 22.421389850193094 - type: nauc_recall_at_3_diff1 value: 10.102326826491833 - type: nauc_recall_at_3_max value: 10.384746210117815 - type: nauc_recall_at_3_std value: 9.619148014633025 - type: nauc_recall_at_5_diff1 value: 9.430903751671327 - type: nauc_recall_at_5_max value: 13.105245727063336 - type: nauc_recall_at_5_std value: 9.32079115867563 - type: ndcg_at_1 value: 86.0 - type: ndcg_at_10 value: 81.144 - type: ndcg_at_100 value: 65.38199999999999 - type: ndcg_at_1000 value: 58.163 - type: ndcg_at_20 value: 78.398 - type: ndcg_at_3 value: 86.419 - type: ndcg_at_5 value: 84.30199999999999 - type: precision_at_1 value: 92.0 - type: precision_at_10 value: 86.6 - type: precision_at_100 value: 67.75999999999999 - type: precision_at_1000 value: 25.480000000000004 - type: precision_at_20 value: 83.1 - type: precision_at_3 value: 90.667 - type: precision_at_5 value: 89.60000000000001 - type: recall_at_1 value: 0.248 - type: recall_at_10 value: 2.299 - type: recall_at_100 value: 16.668 - type: recall_at_1000 value: 54.629000000000005 - type: recall_at_20 value: 4.367 - type: recall_at_3 value: 0.732 - type: recall_at_5 value: 1.212 - task: type: MultilabelClassification dataset: name: MTEB CEDRClassification type: ai-forever/cedr-classification config: default split: test revision: c0ba03d058e3e1b2f3fd20518875a4563dd12db4 metrics: - type: accuracy value: 52.093517534537725 - type: f1 value: 56.37281380517539 - type: lrap value: 82.77763018065957 - type: main_score value: 52.093517534537725 - task: type: Classification dataset: name: MTEB GeoreviewClassification type: ai-forever/georeview-classification config: default split: test revision: 3765c0d1de6b7d264bc459433c45e5a75513839c metrics: - type: accuracy value: 59.67285156249999 - type: f1 value: 56.92752001367594 - type: f1_weighted value: 56.92222807825205 - type: main_score value: 59.67285156249999 - task: type: Clustering dataset: name: MTEB GeoreviewClusteringP2P type: ai-forever/georeview-clustering-p2p config: default split: test revision: 97a313c8fc85b47f13f33e7e9a95c1ad888c7fec metrics: - type: main_score value: 76.71995309518435 - type: v_measure value: 76.71995309518435 - type: v_measure_std value: 0.5051437256482365 - task: type: Classification dataset: name: MTEB HeadlineClassification type: ai-forever/headline-classification config: default split: test revision: 2fe05ee6b5832cda29f2ef7aaad7b7fe6a3609eb metrics: - type: accuracy value: 85.7421875 - type: f1 value: 85.76906486452225 - type: f1_weighted value: 85.76722902514517 - type: main_score value: 85.7421875 - task: type: Classification dataset: name: MTEB InappropriatenessClassification type: ai-forever/inappropriateness-classification config: default split: test revision: 601651fdc45ef243751676e62dd7a19f491c0285 metrics: - type: accuracy value: 79.0380859375 - type: ap value: 73.32419205954841 - type: ap_weighted value: 73.32419205954841 - type: f1 value: 79.0122702123596 - type: f1_weighted value: 79.0122702123596 - type: main_score value: 79.0380859375 - task: type: Classification dataset: name: MTEB KinopoiskClassification type: ai-forever/kinopoisk-sentiment-classification config: default split: test revision: 5911f26666ac11af46cb9c6849d0dc80a378af24 metrics: - type: accuracy value: 71.39333333333333 - type: f1 value: 68.0515088454225 - type: f1_weighted value: 68.05150884542248 - type: main_score value: 71.39333333333333 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ru) type: mteb/amazon_massive_intent config: ru split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 79.77807666442501 - type: f1 value: 77.05503875836447 - type: f1_weighted value: 79.10885935880363 - type: main_score value: 79.77807666442501 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ru) type: mteb/amazon_massive_scenario config: ru split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 88.42299932750505 - type: f1 value: 87.15721677058616 - type: f1_weighted value: 87.95844060171521 - type: main_score value: 88.42299932750505 - task: type: STS dataset: name: MTEB RUParaPhraserSTS type: merionum/ru_paraphraser config: default split: test revision: 43265056790b8f7c59e0139acb4be0a8dad2c8f4 metrics: - type: cosine_pearson value: 64.12402100059082 - type: cosine_spearman value: 72.1041223475043 - type: euclidean_pearson value: 68.38609067818044 - type: euclidean_spearman value: 72.10401766318856 - type: main_score value: 72.1041223475043 - type: manhattan_pearson value: 68.46796000117776 - type: manhattan_spearman value: 72.13215489094416 - type: pearson value: 64.12402100059082 - type: spearman value: 72.1041223475043 - task: type: Reranking dataset: name: MTEB RuBQReranking type: ai-forever/rubq-reranking config: default split: test revision: 2e96b8f098fa4b0950fc58eacadeb31c0d0c7fa2 metrics: - type: main_score value: 73.88576271940713 - type: map value: 73.88576271940713 - type: mrr value: 78.00960465854084 - type: nAUC_map_diff1 value: 39.6518603225463 - type: nAUC_map_max value: 4.350383965854549 - type: nAUC_map_std value: -0.014969899892212745 - type: nAUC_mrr_diff1 value: 42.13162353960397 - type: nAUC_mrr_max value: 8.922658395240406 - type: nAUC_mrr_std value: 2.152891873019869 - task: type: Retrieval dataset: name: MTEB RuBQRetrieval type: ai-forever/rubq-retrieval config: default split: test revision: e19b6ffa60b3bc248e0b41f4cc37c26a55c2a67b metrics: - type: main_score value: 74.435 - type: map_at_1 value: 42.76 - type: map_at_10 value: 66.264 - type: map_at_100 value: 67.042 - type: map_at_1000 value: 67.05 - type: map_at_20 value: 66.85900000000001 - type: map_at_3 value: 59.74 - type: map_at_5 value: 63.993 - type: mrr_at_1 value: 60.047281323877066 - type: mrr_at_10 value: 72.82716987504223 - type: mrr_at_100 value: 73.02773497247094 - type: mrr_at_1000 value: 73.02962009865962 - type: mrr_at_20 value: 72.97035759772903 - type: mrr_at_3 value: 70.46887312844763 - type: mrr_at_5 value: 72.12076438140275 - type: nauc_map_at_1000_diff1 value: 34.383768967728635 - type: nauc_map_at_1000_max value: 13.703443395724472 - type: nauc_map_at_1000_std value: -22.72754510223835 - type: nauc_map_at_100_diff1 value: 34.37773445189401 - type: nauc_map_at_100_max value: 13.715120051009938 - type: nauc_map_at_100_std value: -22.71739582208026 - type: nauc_map_at_10_diff1 value: 34.128639545018224 - type: nauc_map_at_10_max value: 13.481023445729216 - type: nauc_map_at_10_std value: -22.841295424013143 - type: nauc_map_at_1_diff1 value: 37.58345298713193 - type: nauc_map_at_1_max value: 9.068626061733989 - type: nauc_map_at_1_std value: -19.34669422079028 - type: nauc_map_at_20_diff1 value: 34.21234363490007 - type: nauc_map_at_20_max value: 13.812265438057898 - type: nauc_map_at_20_std value: -22.744547074381728 - type: nauc_map_at_3_diff1 value: 35.178065640657465 - type: nauc_map_at_3_max value: 12.26694588496597 - type: nauc_map_at_3_std value: -23.876661383660725 - type: nauc_map_at_5_diff1 value: 34.97286590065426 - type: nauc_map_at_5_max value: 12.39449233232647 - type: nauc_map_at_5_std value: -24.179149585732894 - type: nauc_mrr_at_1000_diff1 value: 38.51708954025975 - type: nauc_mrr_at_1000_max value: 16.27687115188748 - type: nauc_mrr_at_1000_std value: -24.317991962455277 - type: nauc_mrr_at_100_diff1 value: 38.51579649813754 - type: nauc_mrr_at_100_max value: 16.282318186103982 - type: nauc_mrr_at_100_std value: -24.313115676201193 - type: nauc_mrr_at_10_diff1 value: 38.374513617518524 - type: nauc_mrr_at_10_max value: 16.411158436434583 - type: nauc_mrr_at_10_std value: -24.214190672272338 - type: nauc_mrr_at_1_diff1 value: 41.11744654145736 - type: nauc_mrr_at_1_max value: 14.857906263383727 - type: nauc_mrr_at_1_std value: -23.05045201335754 - type: nauc_mrr_at_20_diff1 value: 38.42720946112707 - type: nauc_mrr_at_20_max value: 16.333926957304225 - type: nauc_mrr_at_20_std value: -24.2666181277299 - type: nauc_mrr_at_3_diff1 value: 38.54947076552065 - type: nauc_mrr_at_3_max value: 16.28785626102837 - type: nauc_mrr_at_3_std value: -25.404928347060103 - type: nauc_mrr_at_5_diff1 value: 38.23381985227932 - type: nauc_mrr_at_5_max value: 16.29686368315855 - type: nauc_mrr_at_5_std value: -24.88784013864183 - type: nauc_ndcg_at_1000_diff1 value: 34.59545258977158 - type: nauc_ndcg_at_1000_max value: 15.284635200887825 - type: nauc_ndcg_at_1000_std value: -22.257301616758433 - type: nauc_ndcg_at_100_diff1 value: 34.44786100359039 - type: nauc_ndcg_at_100_max value: 15.57235196792877 - type: nauc_ndcg_at_100_std value: -21.856425612245342 - type: nauc_ndcg_at_10_diff1 value: 33.174757528590206 - type: nauc_ndcg_at_10_max value: 15.63435305829791 - type: nauc_ndcg_at_10_std value: -22.08142460589985 - type: nauc_ndcg_at_1_diff1 value: 41.11744654145736 - type: nauc_ndcg_at_1_max value: 14.857906263383727 - type: nauc_ndcg_at_1_std value: -23.05045201335754 - type: nauc_ndcg_at_20_diff1 value: 33.386333672237086 - type: nauc_ndcg_at_20_max value: 16.259547378249156 - type: nauc_ndcg_at_20_std value: -21.834061142760262 - type: nauc_ndcg_at_3_diff1 value: 35.3096569182989 - type: nauc_ndcg_at_3_max value: 13.564724249968299 - type: nauc_ndcg_at_3_std value: -25.112930090907355 - type: nauc_ndcg_at_5_diff1 value: 34.402178469695905 - type: nauc_ndcg_at_5_max value: 13.454254056986617 - type: nauc_ndcg_at_5_std value: -25.099270446248735 - type: nauc_precision_at_1000_diff1 value: -12.741330236539095 - type: nauc_precision_at_1000_max value: 4.404400635687311 - type: nauc_precision_at_1000_std value: 8.389300135369483 - type: nauc_precision_at_100_diff1 value: -12.851044558742647 - type: nauc_precision_at_100_max value: 5.680330188544991 - type: nauc_precision_at_100_std value: 9.489202238591542 - type: nauc_precision_at_10_diff1 value: -9.945369846060753 - type: nauc_precision_at_10_max value: 8.504415247865312 - type: nauc_precision_at_10_std value: 4.494521946889061 - type: nauc_precision_at_1_diff1 value: 41.11744654145736 - type: nauc_precision_at_1_max value: 14.857906263383727 - type: nauc_precision_at_1_std value: -23.05045201335754 - type: nauc_precision_at_20_diff1 value: -12.578957278247266 - type: nauc_precision_at_20_max value: 8.188355833278354 - type: nauc_precision_at_20_std value: 7.448331416027387 - type: nauc_precision_at_3_diff1 value: 8.117030877871983 - type: nauc_precision_at_3_max value: 11.646516155855124 - type: nauc_precision_at_3_std value: -12.527645037478171 - type: nauc_precision_at_5_diff1 value: -0.8567617401390368 - type: nauc_precision_at_5_max value: 8.683018924706662 - type: nauc_precision_at_5_std value: -5.808788866497016 - type: nauc_recall_at_1000_diff1 value: -28.762266898258215 - type: nauc_recall_at_1000_max value: 21.917410784648858 - type: nauc_recall_at_1000_std value: 53.72265532186225 - type: nauc_recall_at_100_diff1 value: -0.23838251752936382 - type: nauc_recall_at_100_max value: 45.959987172148885 - type: nauc_recall_at_100_std value: 45.34588951064591 - type: nauc_recall_at_10_diff1 value: 13.665193847690487 - type: nauc_recall_at_10_max value: 22.3683736077389 - type: nauc_recall_at_10_std value: -10.283709692040667 - type: nauc_recall_at_1_diff1 value: 37.58345298713193 - type: nauc_recall_at_1_max value: 9.068626061733989 - type: nauc_recall_at_1_std value: -19.34669422079028 - type: nauc_recall_at_20_diff1 value: 4.853737371483111 - type: nauc_recall_at_20_max value: 34.92618513489909 - type: nauc_recall_at_20_std value: -1.2868509314659222 - type: nauc_recall_at_3_diff1 value: 28.7908251906051 - type: nauc_recall_at_3_max value: 11.900913295288518 - type: nauc_recall_at_3_std value: -24.462530634963496 - type: nauc_recall_at_5_diff1 value: 25.173125475364177 - type: nauc_recall_at_5_max value: 11.315686078181972 - type: nauc_recall_at_5_std value: -25.091887815136914 - type: ndcg_at_1 value: 60.047 - type: ndcg_at_10 value: 74.435 - type: ndcg_at_100 value: 76.594 - type: ndcg_at_1000 value: 76.725 - type: ndcg_at_20 value: 75.773 - type: ndcg_at_3 value: 65.975 - type: ndcg_at_5 value: 70.81 - type: precision_at_1 value: 60.047 - type: precision_at_10 value: 14.988000000000001 - type: precision_at_100 value: 1.656 - type: precision_at_1000 value: 0.167 - type: precision_at_20 value: 7.9399999999999995 - type: precision_at_3 value: 36.623 - type: precision_at_5 value: 26.277 - type: recall_at_1 value: 42.76 - type: recall_at_10 value: 90.889 - type: recall_at_100 value: 98.834 - type: recall_at_1000 value: 99.663 - type: recall_at_20 value: 95.184 - type: recall_at_3 value: 70.62 - type: recall_at_5 value: 81.652 - task: type: Classification dataset: name: MTEB RuReviewsClassification type: ai-forever/ru-reviews-classification config: default split: test revision: f6d2c31f4dc6b88f468552750bfec05b4b41b05a metrics: - type: accuracy value: 74.8095703125 - type: f1 value: 73.91967376784037 - type: f1_weighted value: 73.9189948366255 - type: main_score value: 74.8095703125 - task: type: STS dataset: name: MTEB RuSTSBenchmarkSTS type: ai-forever/ru-stsbenchmark-sts config: default split: test revision: 7cf24f325c6da6195df55bef3d86b5e0616f3018 metrics: - type: cosine_pearson value: 79.888528971486 - type: cosine_spearman value: 81.61889430378866 - type: euclidean_pearson value: 79.94703459875922 - type: euclidean_spearman value: 81.61980863924033 - type: main_score value: 81.61889430378866 - type: manhattan_pearson value: 79.95415547515567 - type: manhattan_spearman value: 81.61130692072074 - type: pearson value: 79.888528971486 - type: spearman value: 81.61889430378866 - task: type: Classification dataset: name: MTEB RuSciBenchGRNTIClassification type: ai-forever/ru-scibench-grnti-classification config: default split: test revision: 673a610d6d3dd91a547a0d57ae1b56f37ebbf6a1 metrics: - type: accuracy value: 71.6552734375 - type: f1 value: 70.63908761566744 - type: f1_weighted value: 70.64734045044828 - type: main_score value: 71.6552734375 - task: type: Clustering dataset: name: MTEB RuSciBenchGRNTIClusteringP2P type: ai-forever/ru-scibench-grnti-classification config: default split: test revision: 673a610d6d3dd91a547a0d57ae1b56f37ebbf6a1 metrics: - type: main_score value: 64.79686240363448 - type: v_measure value: 64.79686240363448 - type: v_measure_std value: 0.6119665206236284 - task: type: Classification dataset: name: MTEB RuSciBenchOECDClassification type: ai-forever/ru-scibench-oecd-classification config: default split: test revision: 26c88e99dcaba32bb45d0e1bfc21902337f6d471 metrics: - type: accuracy value: 56.7626953125 - type: f1 value: 54.62202402640944 - type: f1_weighted value: 54.62367865280833 - type: main_score value: 56.7626953125 - task: type: Clustering dataset: name: MTEB RuSciBenchOECDClusteringP2P type: ai-forever/ru-scibench-oecd-classification config: default split: test revision: 26c88e99dcaba32bb45d0e1bfc21902337f6d471 metrics: - type: main_score value: 54.818142832015695 - type: v_measure value: 54.818142832015695 - type: v_measure_std value: 0.7494689058177785 - task: type: STS dataset: name: MTEB STS22 (ru) type: mteb/sts22-crosslingual-sts config: ru split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 69.46416898707648 - type: cosine_spearman value: 71.7236490731324 - type: euclidean_pearson value: 69.26978478998248 - type: euclidean_spearman value: 71.7236490731324 - type: main_score value: 71.7236490731324 - type: manhattan_pearson value: 69.31349929375952 - type: manhattan_spearman value: 71.75161736759956 - type: pearson value: 69.46416898707648 - type: spearman value: 71.7236490731324 - task: type: MultilabelClassification dataset: name: MTEB SensitiveTopicsClassification type: ai-forever/sensitive-topics-classification config: default split: test revision: 416b34a802308eac30e4192afc0ff99bb8dcc7f2 metrics: - type: accuracy value: 32.3974609375 - type: f1 value: 36.8155212473576 - type: lrap value: 50.2943929036452 - type: main_score value: 32.3974609375 - task: type: PairClassification dataset: name: MTEB TERRa type: ai-forever/terra-pairclassification config: default split: dev revision: 7b58f24536063837d644aab9a023c62199b2a612 metrics: - type: cosine_accuracy value: 57.00325732899023 - type: cosine_accuracy_threshold value: 75.34879446029663 - type: cosine_ap value: 56.8594077887683 - type: cosine_f1 value: 67.72727272727272 - type: cosine_f1_threshold value: 65.37638306617737 - type: cosine_precision value: 51.91637630662021 - type: cosine_recall value: 97.38562091503267 - type: dot_accuracy value: 57.00325732899023 - type: dot_accuracy_threshold value: 75.34880638122559 - type: dot_ap value: 56.8594077887683 - type: dot_f1 value: 67.72727272727272 - type: dot_f1_threshold value: 65.37638306617737 - type: dot_precision value: 51.91637630662021 - type: dot_recall value: 97.38562091503267 - type: euclidean_accuracy value: 57.00325732899023 - type: euclidean_accuracy_threshold value: 70.2156662940979 - type: euclidean_ap value: 56.8594077887683 - type: euclidean_f1 value: 67.72727272727272 - type: euclidean_f1_threshold value: 83.21480751037598 - type: euclidean_precision value: 51.91637630662021 - type: euclidean_recall value: 97.38562091503267 - type: main_score value: 57.47570140269883 - type: manhattan_accuracy value: 57.65472312703584 - type: manhattan_accuracy_threshold value: 3097.412109375 - type: manhattan_ap value: 57.47570140269883 - type: manhattan_f1 value: 67.88990825688074 - type: manhattan_f1_threshold value: 3821.0716247558594 - type: manhattan_precision value: 52.29681978798587 - type: manhattan_recall value: 96.73202614379085 - type: max_ap value: 57.47570140269883 - type: max_f1 value: 67.88990825688074 - type: max_precision value: 52.29681978798587 - type: max_recall value: 97.38562091503267 - type: similarity_accuracy value: 57.00325732899023 - type: similarity_accuracy_threshold value: 75.34879446029663 - type: similarity_ap value: 56.8594077887683 - type: similarity_f1 value: 67.72727272727272 - type: similarity_f1_threshold value: 65.37638306617737 - type: similarity_precision value: 51.91637630662021 - type: similarity_recall value: 97.38562091503267 --- Development Version: Scheduled for Release Post-Optimization
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
BioNLP
Linq-AI-Research/Linq-Embed-Mistral
Linq-AI-Research
feature-extraction
[ "sentence-transformers", "safetensors", "mistral", "feature-extraction", "mteb", "transformers", "en", "arxiv:2210.07316", "arxiv:2310.06825", "arxiv:2401.00368", "arxiv:2104.08663", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,716
1,717
19,366
69
--- language: - en license: cc-by-nc-4.0 tags: - mteb - transformers - sentence-transformers model-index: - name: Linq-Embed-Mistral results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 84.43283582089552 - type: ap value: 50.39222584035829 - type: f1 value: 78.47906270064071 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 95.70445 - type: ap value: 94.28273900595173 - type: f1 value: 95.70048412173735 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 57.644000000000005 - type: f1 value: 56.993648296704876 - task: type: Retrieval dataset: name: MTEB ArguAna type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 45.804 - type: map_at_10 value: 61.742 - type: map_at_100 value: 62.07899999999999 - type: map_at_1000 value: 62.08 - type: map_at_3 value: 57.717 - type: map_at_5 value: 60.27 - type: mrr_at_1 value: 47.226 - type: mrr_at_10 value: 62.256 - type: mrr_at_100 value: 62.601 - type: mrr_at_1000 value: 62.601 - type: mrr_at_3 value: 58.203 - type: mrr_at_5 value: 60.767 - type: ndcg_at_1 value: 45.804 - type: ndcg_at_10 value: 69.649 - type: ndcg_at_100 value: 70.902 - type: ndcg_at_1000 value: 70.91199999999999 - type: ndcg_at_3 value: 61.497 - type: ndcg_at_5 value: 66.097 - type: precision_at_1 value: 45.804 - type: precision_at_10 value: 9.452 - type: precision_at_100 value: 0.996 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 24.135 - type: precision_at_5 value: 16.714000000000002 - type: recall_at_1 value: 45.804 - type: recall_at_10 value: 94.523 - type: recall_at_100 value: 99.57300000000001 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 72.404 - type: recall_at_5 value: 83.57 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 51.47612678878609 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 47.2977392340418 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 66.82016765243456 - type: mrr value: 79.55227982236292 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 89.15068664186332 - type: cos_sim_spearman value: 86.4013663041054 - type: euclidean_pearson value: 87.36391302921588 - type: euclidean_spearman value: 86.4013663041054 - type: manhattan_pearson value: 87.46116676558589 - type: manhattan_spearman value: 86.78149544753352 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 87.88311688311688 - type: f1 value: 87.82368154811464 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 42.72860396750569 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 39.58412067938718 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: mteb/cqadupstack config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 30.082666666666665 - type: map_at_10 value: 41.13875 - type: map_at_100 value: 42.45525 - type: map_at_1000 value: 42.561249999999994 - type: map_at_3 value: 37.822750000000006 - type: map_at_5 value: 39.62658333333333 - type: mrr_at_1 value: 35.584 - type: mrr_at_10 value: 45.4675 - type: mrr_at_100 value: 46.31016666666667 - type: mrr_at_1000 value: 46.35191666666666 - type: mrr_at_3 value: 42.86674999999999 - type: mrr_at_5 value: 44.31341666666666 - type: ndcg_at_1 value: 35.584 - type: ndcg_at_10 value: 47.26516666666667 - type: ndcg_at_100 value: 52.49108333333332 - type: ndcg_at_1000 value: 54.24575 - type: ndcg_at_3 value: 41.83433333333334 - type: ndcg_at_5 value: 44.29899999999999 - type: precision_at_1 value: 35.584 - type: precision_at_10 value: 8.390333333333334 - type: precision_at_100 value: 1.2941666666666667 - type: precision_at_1000 value: 0.16308333333333336 - type: precision_at_3 value: 19.414583333333333 - type: precision_at_5 value: 13.751 - type: recall_at_1 value: 30.082666666666665 - type: recall_at_10 value: 60.88875 - type: recall_at_100 value: 83.35141666666667 - type: recall_at_1000 value: 95.0805 - type: recall_at_3 value: 45.683749999999996 - type: recall_at_5 value: 52.08208333333333 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 16.747 - type: map_at_10 value: 29.168 - type: map_at_100 value: 31.304 - type: map_at_1000 value: 31.496000000000002 - type: map_at_3 value: 24.57 - type: map_at_5 value: 26.886 - type: mrr_at_1 value: 37.524 - type: mrr_at_10 value: 50.588 - type: mrr_at_100 value: 51.28 - type: mrr_at_1000 value: 51.29899999999999 - type: mrr_at_3 value: 47.438 - type: mrr_at_5 value: 49.434 - type: ndcg_at_1 value: 37.524 - type: ndcg_at_10 value: 39.11 - type: ndcg_at_100 value: 46.373999999999995 - type: ndcg_at_1000 value: 49.370999999999995 - type: ndcg_at_3 value: 32.964 - type: ndcg_at_5 value: 35.028 - type: precision_at_1 value: 37.524 - type: precision_at_10 value: 12.137 - type: precision_at_100 value: 1.9929999999999999 - type: precision_at_1000 value: 0.256 - type: precision_at_3 value: 24.886 - type: precision_at_5 value: 18.762 - type: recall_at_1 value: 16.747 - type: recall_at_10 value: 45.486 - type: recall_at_100 value: 69.705 - type: recall_at_1000 value: 86.119 - type: recall_at_3 value: 30.070999999999998 - type: recall_at_5 value: 36.565 - task: type: Retrieval dataset: name: MTEB DBPedia type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 10.495000000000001 - type: map_at_10 value: 24.005000000000003 - type: map_at_100 value: 34.37 - type: map_at_1000 value: 36.268 - type: map_at_3 value: 16.694 - type: map_at_5 value: 19.845 - type: mrr_at_1 value: 75.5 - type: mrr_at_10 value: 82.458 - type: mrr_at_100 value: 82.638 - type: mrr_at_1000 value: 82.64 - type: mrr_at_3 value: 81.25 - type: mrr_at_5 value: 82.125 - type: ndcg_at_1 value: 64.625 - type: ndcg_at_10 value: 51.322 - type: ndcg_at_100 value: 55.413999999999994 - type: ndcg_at_1000 value: 62.169 - type: ndcg_at_3 value: 56.818999999999996 - type: ndcg_at_5 value: 54.32900000000001 - type: precision_at_1 value: 75.5 - type: precision_at_10 value: 40.849999999999994 - type: precision_at_100 value: 12.882 - type: precision_at_1000 value: 2.394 - type: precision_at_3 value: 59.667 - type: precision_at_5 value: 52.2 - type: recall_at_1 value: 10.495000000000001 - type: recall_at_10 value: 29.226000000000003 - type: recall_at_100 value: 59.614 - type: recall_at_1000 value: 81.862 - type: recall_at_3 value: 17.97 - type: recall_at_5 value: 22.438 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 51.82 - type: f1 value: 47.794956731921054 - task: type: Retrieval dataset: name: MTEB FEVER type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 82.52199999999999 - type: map_at_10 value: 89.794 - type: map_at_100 value: 89.962 - type: map_at_1000 value: 89.972 - type: map_at_3 value: 88.95100000000001 - type: map_at_5 value: 89.524 - type: mrr_at_1 value: 88.809 - type: mrr_at_10 value: 93.554 - type: mrr_at_100 value: 93.577 - type: mrr_at_1000 value: 93.577 - type: mrr_at_3 value: 93.324 - type: mrr_at_5 value: 93.516 - type: ndcg_at_1 value: 88.809 - type: ndcg_at_10 value: 92.419 - type: ndcg_at_100 value: 92.95 - type: ndcg_at_1000 value: 93.10000000000001 - type: ndcg_at_3 value: 91.45299999999999 - type: ndcg_at_5 value: 92.05 - type: precision_at_1 value: 88.809 - type: precision_at_10 value: 10.911999999999999 - type: precision_at_100 value: 1.143 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 34.623 - type: precision_at_5 value: 21.343999999999998 - type: recall_at_1 value: 82.52199999999999 - type: recall_at_10 value: 96.59400000000001 - type: recall_at_100 value: 98.55699999999999 - type: recall_at_1000 value: 99.413 - type: recall_at_3 value: 94.02199999999999 - type: recall_at_5 value: 95.582 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 32.842 - type: map_at_10 value: 53.147 - type: map_at_100 value: 55.265 - type: map_at_1000 value: 55.37 - type: map_at_3 value: 46.495 - type: map_at_5 value: 50.214999999999996 - type: mrr_at_1 value: 61.574 - type: mrr_at_10 value: 68.426 - type: mrr_at_100 value: 68.935 - type: mrr_at_1000 value: 68.95400000000001 - type: mrr_at_3 value: 66.307 - type: mrr_at_5 value: 67.611 - type: ndcg_at_1 value: 61.574 - type: ndcg_at_10 value: 61.205 - type: ndcg_at_100 value: 67.25999999999999 - type: ndcg_at_1000 value: 68.657 - type: ndcg_at_3 value: 56.717 - type: ndcg_at_5 value: 58.196999999999996 - type: precision_at_1 value: 61.574 - type: precision_at_10 value: 16.852 - type: precision_at_100 value: 2.33 - type: precision_at_1000 value: 0.256 - type: precision_at_3 value: 37.5 - type: precision_at_5 value: 27.468999999999998 - type: recall_at_1 value: 32.842 - type: recall_at_10 value: 68.157 - type: recall_at_100 value: 89.5 - type: recall_at_1000 value: 97.68599999999999 - type: recall_at_3 value: 50.783 - type: recall_at_5 value: 58.672000000000004 - task: type: Retrieval dataset: name: MTEB HotpotQA type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 39.068000000000005 - type: map_at_10 value: 69.253 - type: map_at_100 value: 70.036 - type: map_at_1000 value: 70.081 - type: map_at_3 value: 65.621 - type: map_at_5 value: 67.976 - type: mrr_at_1 value: 78.13600000000001 - type: mrr_at_10 value: 84.328 - type: mrr_at_100 value: 84.515 - type: mrr_at_1000 value: 84.52300000000001 - type: mrr_at_3 value: 83.52199999999999 - type: mrr_at_5 value: 84.019 - type: ndcg_at_1 value: 78.13600000000001 - type: ndcg_at_10 value: 76.236 - type: ndcg_at_100 value: 78.891 - type: ndcg_at_1000 value: 79.73400000000001 - type: ndcg_at_3 value: 71.258 - type: ndcg_at_5 value: 74.129 - type: precision_at_1 value: 78.13600000000001 - type: precision_at_10 value: 16.347 - type: precision_at_100 value: 1.839 - type: precision_at_1000 value: 0.19499999999999998 - type: precision_at_3 value: 47.189 - type: precision_at_5 value: 30.581999999999997 - type: recall_at_1 value: 39.068000000000005 - type: recall_at_10 value: 81.735 - type: recall_at_100 value: 91.945 - type: recall_at_1000 value: 97.44800000000001 - type: recall_at_3 value: 70.783 - type: recall_at_5 value: 76.455 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 94.7764 - type: ap value: 92.67841294818406 - type: f1 value: 94.77375157383646 - task: type: Retrieval dataset: name: MTEB MSMARCO type: mteb/msmarco config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 24.624 - type: map_at_10 value: 37.861 - type: map_at_100 value: 39.011 - type: map_at_1000 value: 39.052 - type: map_at_3 value: 33.76 - type: map_at_5 value: 36.153 - type: mrr_at_1 value: 25.358000000000004 - type: mrr_at_10 value: 38.5 - type: mrr_at_100 value: 39.572 - type: mrr_at_1000 value: 39.607 - type: mrr_at_3 value: 34.491 - type: mrr_at_5 value: 36.83 - type: ndcg_at_1 value: 25.358000000000004 - type: ndcg_at_10 value: 45.214999999999996 - type: ndcg_at_100 value: 50.56 - type: ndcg_at_1000 value: 51.507999999999996 - type: ndcg_at_3 value: 36.925999999999995 - type: ndcg_at_5 value: 41.182 - type: precision_at_1 value: 25.358000000000004 - type: precision_at_10 value: 7.090000000000001 - type: precision_at_100 value: 0.9740000000000001 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 15.697 - type: precision_at_5 value: 11.599 - type: recall_at_1 value: 24.624 - type: recall_at_10 value: 67.78699999999999 - type: recall_at_100 value: 92.11200000000001 - type: recall_at_1000 value: 99.208 - type: recall_at_3 value: 45.362 - type: recall_at_5 value: 55.58 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 96.83310533515733 - type: f1 value: 96.57069781347995 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 89.5690834473324 - type: f1 value: 73.7275204564728 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 82.67316745124411 - type: f1 value: 79.70626515721662 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 85.01344989912575 - type: f1 value: 84.45181022816965 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 37.843426126777295 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 36.651728547241476 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 32.05750522793288 - type: mrr value: 33.28067556869468 - task: type: Retrieval dataset: name: MTEB NFCorpus type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 6.744 - type: map_at_10 value: 16.235 - type: map_at_100 value: 20.767 - type: map_at_1000 value: 22.469 - type: map_at_3 value: 11.708 - type: map_at_5 value: 13.924 - type: mrr_at_1 value: 55.728 - type: mrr_at_10 value: 63.869 - type: mrr_at_100 value: 64.322 - type: mrr_at_1000 value: 64.342 - type: mrr_at_3 value: 62.022999999999996 - type: mrr_at_5 value: 63.105999999999995 - type: ndcg_at_1 value: 53.096 - type: ndcg_at_10 value: 41.618 - type: ndcg_at_100 value: 38.562999999999995 - type: ndcg_at_1000 value: 47.006 - type: ndcg_at_3 value: 47.657 - type: ndcg_at_5 value: 45.562999999999995 - type: precision_at_1 value: 55.108000000000004 - type: precision_at_10 value: 30.464000000000002 - type: precision_at_100 value: 9.737 - type: precision_at_1000 value: 2.2720000000000002 - type: precision_at_3 value: 44.376 - type: precision_at_5 value: 39.505 - type: recall_at_1 value: 6.744 - type: recall_at_10 value: 21.11 - type: recall_at_100 value: 39.69 - type: recall_at_1000 value: 70.44 - type: recall_at_3 value: 13.120000000000001 - type: recall_at_5 value: 16.669 - task: type: Retrieval dataset: name: MTEB NQ type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 46.263 - type: map_at_10 value: 63.525 - type: map_at_100 value: 64.142 - type: map_at_1000 value: 64.14800000000001 - type: map_at_3 value: 59.653 - type: map_at_5 value: 62.244 - type: mrr_at_1 value: 51.796 - type: mrr_at_10 value: 65.764 - type: mrr_at_100 value: 66.155 - type: mrr_at_1000 value: 66.158 - type: mrr_at_3 value: 63.05500000000001 - type: mrr_at_5 value: 64.924 - type: ndcg_at_1 value: 51.766999999999996 - type: ndcg_at_10 value: 70.626 - type: ndcg_at_100 value: 72.905 - type: ndcg_at_1000 value: 73.021 - type: ndcg_at_3 value: 63.937999999999995 - type: ndcg_at_5 value: 68.00699999999999 - type: precision_at_1 value: 51.766999999999996 - type: precision_at_10 value: 10.768 - type: precision_at_100 value: 1.203 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 28.409000000000002 - type: precision_at_5 value: 19.502 - type: recall_at_1 value: 46.263 - type: recall_at_10 value: 89.554 - type: recall_at_100 value: 98.914 - type: recall_at_1000 value: 99.754 - type: recall_at_3 value: 72.89999999999999 - type: recall_at_5 value: 82.1 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: mteb/quora config: default split: test revision: e4e08e0b7dbe3c8700f0daef558ff32256715259 metrics: - type: map_at_1 value: 72.748 - type: map_at_10 value: 86.87700000000001 - type: map_at_100 value: 87.46199999999999 - type: map_at_1000 value: 87.47399999999999 - type: map_at_3 value: 83.95700000000001 - type: map_at_5 value: 85.82300000000001 - type: mrr_at_1 value: 83.62 - type: mrr_at_10 value: 89.415 - type: mrr_at_100 value: 89.484 - type: mrr_at_1000 value: 89.484 - type: mrr_at_3 value: 88.633 - type: mrr_at_5 value: 89.176 - type: ndcg_at_1 value: 83.62 - type: ndcg_at_10 value: 90.27 - type: ndcg_at_100 value: 91.23599999999999 - type: ndcg_at_1000 value: 91.293 - type: ndcg_at_3 value: 87.69500000000001 - type: ndcg_at_5 value: 89.171 - type: precision_at_1 value: 83.62 - type: precision_at_10 value: 13.683 - type: precision_at_100 value: 1.542 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 38.363 - type: precision_at_5 value: 25.196 - type: recall_at_1 value: 72.748 - type: recall_at_10 value: 96.61699999999999 - type: recall_at_100 value: 99.789 - type: recall_at_1000 value: 99.997 - type: recall_at_3 value: 89.21 - type: recall_at_5 value: 93.418 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 61.51909029379199 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 385e3cb46b4cfa89021f56c4380204149d0efe33 metrics: - type: v_measure value: 68.24483162045645 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: mteb/scidocs config: default split: test revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88 metrics: - type: map_at_1 value: 4.793 - type: map_at_10 value: 13.092 - type: map_at_100 value: 15.434000000000001 - type: map_at_1000 value: 15.748999999999999 - type: map_at_3 value: 9.139 - type: map_at_5 value: 11.033 - type: mrr_at_1 value: 23.599999999999998 - type: mrr_at_10 value: 35.892 - type: mrr_at_100 value: 36.962 - type: mrr_at_1000 value: 37.009 - type: mrr_at_3 value: 32.550000000000004 - type: mrr_at_5 value: 34.415 - type: ndcg_at_1 value: 23.599999999999998 - type: ndcg_at_10 value: 21.932 - type: ndcg_at_100 value: 30.433 - type: ndcg_at_1000 value: 35.668 - type: ndcg_at_3 value: 20.483999999999998 - type: ndcg_at_5 value: 17.964 - type: precision_at_1 value: 23.599999999999998 - type: precision_at_10 value: 11.63 - type: precision_at_100 value: 2.383 - type: precision_at_1000 value: 0.363 - type: precision_at_3 value: 19.567 - type: precision_at_5 value: 16.06 - type: recall_at_1 value: 4.793 - type: recall_at_10 value: 23.558 - type: recall_at_100 value: 48.376999999999995 - type: recall_at_1000 value: 73.75699999999999 - type: recall_at_3 value: 11.903 - type: recall_at_5 value: 16.278000000000002 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: cos_sim_pearson value: 87.31937967632581 - type: cos_sim_spearman value: 84.30523596401186 - type: euclidean_pearson value: 84.19537987069458 - type: euclidean_spearman value: 84.30522052876 - type: manhattan_pearson value: 84.16420807244911 - type: manhattan_spearman value: 84.28515410219309 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.17180810119646 - type: cos_sim_spearman value: 78.44413657529002 - type: euclidean_pearson value: 81.69054139101816 - type: euclidean_spearman value: 78.44412412142488 - type: manhattan_pearson value: 82.04975789626462 - type: manhattan_spearman value: 78.78390856857253 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 88.35737871089687 - type: cos_sim_spearman value: 88.26850223126127 - type: euclidean_pearson value: 87.44100858335746 - type: euclidean_spearman value: 88.26850223126127 - type: manhattan_pearson value: 87.61572015772133 - type: manhattan_spearman value: 88.56229552813319 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 86.8395966764906 - type: cos_sim_spearman value: 84.49441798385489 - type: euclidean_pearson value: 85.3259176121388 - type: euclidean_spearman value: 84.49442124804686 - type: manhattan_pearson value: 85.35153862806513 - type: manhattan_spearman value: 84.60094577432503 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 90.14048269057345 - type: cos_sim_spearman value: 90.27866978947013 - type: euclidean_pearson value: 89.35308361940393 - type: euclidean_spearman value: 90.27866978947013 - type: manhattan_pearson value: 89.37601244066997 - type: manhattan_spearman value: 90.42707449698062 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 86.8522678865688 - type: cos_sim_spearman value: 87.37396401580446 - type: euclidean_pearson value: 86.37219665505377 - type: euclidean_spearman value: 87.37396385867791 - type: manhattan_pearson value: 86.44628823799896 - type: manhattan_spearman value: 87.49116026788859 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 92.94248481968916 - type: cos_sim_spearman value: 92.68185242943188 - type: euclidean_pearson value: 92.33802342092979 - type: euclidean_spearman value: 92.68185242943188 - type: manhattan_pearson value: 92.2011323340474 - type: manhattan_spearman value: 92.43364757640346 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 70.2918782293091 - type: cos_sim_spearman value: 68.61986257003369 - type: euclidean_pearson value: 70.51920905899138 - type: euclidean_spearman value: 68.61986257003369 - type: manhattan_pearson value: 70.64673843811433 - type: manhattan_spearman value: 68.86711466517345 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 88.62956838105524 - type: cos_sim_spearman value: 88.80650007123052 - type: euclidean_pearson value: 88.37976252122822 - type: euclidean_spearman value: 88.80650007123052 - type: manhattan_pearson value: 88.49866938476616 - type: manhattan_spearman value: 89.02489665452616 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 86.40175229911527 - type: mrr value: 96.61958230585682 - task: type: Retrieval dataset: name: MTEB SciFact type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 63.05 - type: map_at_10 value: 73.844 - type: map_at_100 value: 74.313 - type: map_at_1000 value: 74.321 - type: map_at_3 value: 71.17999999999999 - type: map_at_5 value: 72.842 - type: mrr_at_1 value: 65.667 - type: mrr_at_10 value: 74.772 - type: mrr_at_100 value: 75.087 - type: mrr_at_1000 value: 75.095 - type: mrr_at_3 value: 72.944 - type: mrr_at_5 value: 74.078 - type: ndcg_at_1 value: 65.667 - type: ndcg_at_10 value: 78.31700000000001 - type: ndcg_at_100 value: 79.969 - type: ndcg_at_1000 value: 80.25 - type: ndcg_at_3 value: 74.099 - type: ndcg_at_5 value: 76.338 - type: precision_at_1 value: 65.667 - type: precision_at_10 value: 10.233 - type: precision_at_100 value: 1.107 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 28.889 - type: precision_at_5 value: 19.0 - type: recall_at_1 value: 63.05 - type: recall_at_10 value: 90.822 - type: recall_at_100 value: 97.667 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 79.489 - type: recall_at_5 value: 85.161 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.83564356435643 - type: cos_sim_ap value: 96.10619363017767 - type: cos_sim_f1 value: 91.61225514816677 - type: cos_sim_precision value: 92.02825428859738 - type: cos_sim_recall value: 91.2 - type: dot_accuracy value: 99.83564356435643 - type: dot_ap value: 96.10619363017767 - type: dot_f1 value: 91.61225514816677 - type: dot_precision value: 92.02825428859738 - type: dot_recall value: 91.2 - type: euclidean_accuracy value: 99.83564356435643 - type: euclidean_ap value: 96.10619363017769 - type: euclidean_f1 value: 91.61225514816677 - type: euclidean_precision value: 92.02825428859738 - type: euclidean_recall value: 91.2 - type: manhattan_accuracy value: 99.84158415841584 - type: manhattan_ap value: 96.27527798658713 - type: manhattan_f1 value: 92.0 - type: manhattan_precision value: 92.0 - type: manhattan_recall value: 92.0 - type: max_accuracy value: 99.84158415841584 - type: max_ap value: 96.27527798658713 - type: max_f1 value: 92.0 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 76.93753872885304 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 46.044085080870126 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 55.885129730227256 - type: mrr value: 56.95062494694848 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.202047940935508 - type: cos_sim_spearman value: 30.984832035722228 - type: dot_pearson value: 31.20204247226978 - type: dot_spearman value: 30.984832035722228 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: mteb/trec-covid config: default split: test revision: bb9466bac8153a0349341eb1b22e06409e78ef4e metrics: - type: map_at_1 value: 0.245 - type: map_at_10 value: 2.249 - type: map_at_100 value: 14.85 - type: map_at_1000 value: 36.596000000000004 - type: map_at_3 value: 0.717 - type: map_at_5 value: 1.18 - type: mrr_at_1 value: 94.0 - type: mrr_at_10 value: 96.167 - type: mrr_at_100 value: 96.167 - type: mrr_at_1000 value: 96.167 - type: mrr_at_3 value: 95.667 - type: mrr_at_5 value: 96.167 - type: ndcg_at_1 value: 91.0 - type: ndcg_at_10 value: 87.09700000000001 - type: ndcg_at_100 value: 69.637 - type: ndcg_at_1000 value: 62.257 - type: ndcg_at_3 value: 90.235 - type: ndcg_at_5 value: 89.51400000000001 - type: precision_at_1 value: 94.0 - type: precision_at_10 value: 90.60000000000001 - type: precision_at_100 value: 71.38 - type: precision_at_1000 value: 27.400000000000002 - type: precision_at_3 value: 94.0 - type: precision_at_5 value: 93.2 - type: recall_at_1 value: 0.245 - type: recall_at_10 value: 2.366 - type: recall_at_100 value: 17.491 - type: recall_at_1000 value: 58.772999999999996 - type: recall_at_3 value: 0.7270000000000001 - type: recall_at_5 value: 1.221 - task: type: Retrieval dataset: name: MTEB Touche2020 type: mteb/touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 3.435 - type: map_at_10 value: 12.147 - type: map_at_100 value: 18.724 - type: map_at_1000 value: 20.426 - type: map_at_3 value: 6.526999999999999 - type: map_at_5 value: 9.198 - type: mrr_at_1 value: 48.980000000000004 - type: mrr_at_10 value: 62.970000000000006 - type: mrr_at_100 value: 63.288999999999994 - type: mrr_at_1000 value: 63.288999999999994 - type: mrr_at_3 value: 59.184000000000005 - type: mrr_at_5 value: 61.224000000000004 - type: ndcg_at_1 value: 46.939 - type: ndcg_at_10 value: 30.61 - type: ndcg_at_100 value: 41.683 - type: ndcg_at_1000 value: 53.144000000000005 - type: ndcg_at_3 value: 36.284 - type: ndcg_at_5 value: 34.345 - type: precision_at_1 value: 48.980000000000004 - type: precision_at_10 value: 26.122 - type: precision_at_100 value: 8.204 - type: precision_at_1000 value: 1.6019999999999999 - type: precision_at_3 value: 35.374 - type: precision_at_5 value: 32.653 - type: recall_at_1 value: 3.435 - type: recall_at_10 value: 18.953 - type: recall_at_100 value: 50.775000000000006 - type: recall_at_1000 value: 85.858 - type: recall_at_3 value: 7.813000000000001 - type: recall_at_5 value: 11.952 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de metrics: - type: accuracy value: 71.2938 - type: ap value: 15.090139095602268 - type: f1 value: 55.23862650598296 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 64.7623089983022 - type: f1 value: 65.07617131099336 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 57.2988222684939 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 88.6034451928235 - type: cos_sim_ap value: 81.51815279166863 - type: cos_sim_f1 value: 74.43794671864849 - type: cos_sim_precision value: 73.34186939820742 - type: cos_sim_recall value: 75.56728232189973 - type: dot_accuracy value: 88.6034451928235 - type: dot_ap value: 81.51816956866841 - type: dot_f1 value: 74.43794671864849 - type: dot_precision value: 73.34186939820742 - type: dot_recall value: 75.56728232189973 - type: euclidean_accuracy value: 88.6034451928235 - type: euclidean_ap value: 81.51817015121485 - type: euclidean_f1 value: 74.43794671864849 - type: euclidean_precision value: 73.34186939820742 - type: euclidean_recall value: 75.56728232189973 - type: manhattan_accuracy value: 88.5736424867378 - type: manhattan_ap value: 81.37610101292196 - type: manhattan_f1 value: 74.2504182215931 - type: manhattan_precision value: 72.46922883697563 - type: manhattan_recall value: 76.12137203166228 - type: max_accuracy value: 88.6034451928235 - type: max_ap value: 81.51817015121485 - type: max_f1 value: 74.43794671864849 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.53118329646446 - type: cos_sim_ap value: 87.41972033060013 - type: cos_sim_f1 value: 79.4392523364486 - type: cos_sim_precision value: 75.53457372951958 - type: cos_sim_recall value: 83.7696335078534 - type: dot_accuracy value: 89.53118329646446 - type: dot_ap value: 87.41971646088945 - type: dot_f1 value: 79.4392523364486 - type: dot_precision value: 75.53457372951958 - type: dot_recall value: 83.7696335078534 - type: euclidean_accuracy value: 89.53118329646446 - type: euclidean_ap value: 87.41972415605997 - type: euclidean_f1 value: 79.4392523364486 - type: euclidean_precision value: 75.53457372951958 - type: euclidean_recall value: 83.7696335078534 - type: manhattan_accuracy value: 89.5855163581325 - type: manhattan_ap value: 87.51158697451964 - type: manhattan_f1 value: 79.54455087655883 - type: manhattan_precision value: 74.96763643796416 - type: manhattan_recall value: 84.71666153372344 - type: max_accuracy value: 89.5855163581325 - type: max_ap value: 87.51158697451964 - type: max_f1 value: 79.54455087655883 --- <h1 align="center">Linq-AI-Research/Linq-Embed-Mistral</h1> **Linq-Embed-Mistral** Linq-Embed-Mistral has been developed by building upon the foundations of the [E5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) and [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) models. We focus on improving text retrieval using advanced data refinement methods, including sophisticated data crafting, data filtering, and negative mining guided by teacher models, which are highly tailored to each task, to improve the quality of the synthetic data generated by LLM. These methods are applied to both existing benchmark dataset and highly tailored synthetic dataset generated via LLMs. Our efforts primarily aim to create high-quality triplet datasets (query, positive example, negative example), significantly improving text retrieval performance. Linq-Embed-Mistral performs well in the MTEB benchmarks (as of May 29, 2024). The model excels in retrieval tasks, ranking <ins>**`1st`**</ins> among all models listed on the MTEB leaderboard with a performance score of <ins>**`60.2`**</ins>. This outstanding performance underscores its superior capability in enhancing search precision and reliability. The model achieves an average score of <ins>**`68.2`**</ins> across 56 datasets in the MTEB benchmarks, making it the highest-ranking publicly accessible model and third overall. (Please note that [NV-Emb-v1](https://huggingface.co/nvidia/NV-Embed-v1) and [voyage-large-2-instruct](https://docs.voyageai.com/embeddings/), ranked 1st and 2nd on the leaderboard as of May 29, reported their performance without releasing their models.) This project is for research purposes only. Third-party datasets may be subject to additional terms and conditions under their associated licenses. Please refer to specific papers for more details: - [MTEB benchmark](https://arxiv.org/abs/2210.07316) - [Mistral](https://arxiv.org/abs/2310.06825) - [E5-mistral-7b-instruct](https://arxiv.org/pdf/2401.00368.pdf) For more details, refer to [this blog post](https://getlinq.com/blog/linq-embed-mistral/) and [this report](https://huggingface.co/Linq-AI-Research/Linq-Embed-Mistral/blob/main/LinqAIResearch2024_Linq-Embed-Mistral.pdf). ## How to use Here is an example of how to encode queries and passages from the Mr.TyDi training dataset, both with Sentence Transformers or Transformers directly. ### Sentence Transformers ```python from sentence_transformers import SentenceTransformer # Load the model model = SentenceTransformer("Linq-AI-Research/Linq-Embed-Mistral") # Each query must come with a one-sentence instruction that describes the task task = 'Given a question, retrieve Wikipedia passages that answer the question' prompt = f"Instruct: {task}\nQuery: " queries = [ "최초의 원자력 발전소는 무엇인가?", "Who invented Hangul?" ] passages = [ "현재 사용되는 핵분열 방식을 이용한 전력생산은 1948년 9월 미국 테네시주 오크리지에 설치된 X-10 흑연원자로에서 전구의 불을 밝히는 데 사용되면서 시작되었다. 그리고 1954년 6월에 구소련의 오브닌스크에 건설된 흑연감속 비등경수 압력관형 원자로를 사용한 오브닌스크 원자력 발전소가 시험적으로 전력생산을 시작하였고, 최초의 상업용 원자력 엉더이로를 사용한 영국 셀라필드 원자력 단지에 위치한 콜더 홀(Calder Hall) 원자력 발전소로, 1956년 10월 17일 상업 운전을 시작하였다.", "Hangul was personally created and promulgated by the fourth king of the Joseon dynasty, Sejong the Great.[1][2] Sejong's scholarly institute, the Hall of Worthies, is often credited with the work, and at least one of its scholars was heavily involved in its creation, but it appears to have also been a personal project of Sejong." ] # Encode the queries and passages. We only use the prompt for the queries query_embeddings = model.encode(queries, prompt=prompt) passage_embeddings = model.encode(passages) # Compute the (cosine) similarity scores scores = model.similarity(query_embeddings, passage_embeddings) * 100 print(scores.tolist()) # [[73.72908782958984, 30.122787475585938], [29.15508460998535, 79.25375366210938]] ``` ### Transformers ```python import torch import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def last_token_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0]) if left_padding: return last_hidden_states[:, -1] else: sequence_lengths = attention_mask.sum(dim=1) - 1 batch_size = last_hidden_states.shape[0] return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths] def get_detailed_instruct(task_description: str, query: str) -> str: return f'Instruct: {task_description}\nQuery: {query}' # Each query must come with a one-sentence instruction that describes the task task = 'Given a question, retrieve Wikipedia passages that answer the question' queries = [ get_detailed_instruct(task, '최초의 원자력 발전소는 무엇인가?'), get_detailed_instruct(task, 'Who invented Hangul?') ] # No need to add instruction for retrieval documents passages = [ "현재 사용되는 핵분열 방식을 이용한 전력생산은 1948년 9월 미국 테네시주 오크리지에 설치된 X-10 흑연원자로에서 전구의 불을 밝히는 데 사용되면서 시작되었다. 그리고 1954년 6월에 구소련의 오브닌스크에 건설된 흑연감속 비등경수 압력관형 원자로를 사용한 오브닌스크 원자력 발전소가 시험적으로 전력생산을 시작하였고, 최초의 상업용 원자력 엉더이로를 사용한 영국 셀라필드 원자력 단지에 위치한 콜더 홀(Calder Hall) 원자력 발전소로, 1956년 10월 17일 상업 운전을 시작하였다.", "Hangul was personally created and promulgated by the fourth king of the Joseon dynasty, Sejong the Great.[1][2] Sejong's scholarly institute, the Hall of Worthies, is often credited with the work, and at least one of its scholars was heavily involved in its creation, but it appears to have also been a personal project of Sejong." ] # Load model and tokenizer tokenizer = AutoTokenizer.from_pretrained('Linq-AI-Research/Linq-Embed-Mistral') model = AutoModel.from_pretrained('Linq-AI-Research/Linq-Embed-Mistral') max_length = 4096 input_texts = [*queries, *passages] # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=max_length, padding=True, truncation=True, return_tensors="pt") outputs = model(**batch_dict) embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # Normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) # [[73.72909545898438, 30.122783660888672], [29.155078887939453, 79.25374603271484]] ``` ### MTEB Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB](https://arxiv.org/abs/2210.07316) benchmark. ## Evaluation Result ### MTEB (as of May 29, 2024) | Model Name | Retrieval (15) | Average (56) | | :------------------------------------------------------------------------------: | :------------: | :----------: | | [Linq-Embed-Mistral](https://huggingface.co/Linq-AI-Research/Linq-Embed-Mistral) | 60.2 | 68.2 | | [NV-Embed-v1](https://huggingface.co/nvidia/NV-Embed-v1) | 59.4 | 69.3 | | [SFR-Embedding-Mistral](https://huggingface.co/Salesforce/SFR-Embedding-Mistral) | 59.0 | 67.6 | | [voyage-large-2-instruct](https://docs.voyageai.com/docs/embeddings) | 58.3 | 68.3 | | [GritLM-7B](https://huggingface.co/GritLM/GritLM-7B) | 57.4 | 66.8 | | [voyage-lite-02-instruct](https://docs.voyageai.com/docs/embeddings) | 56.6 | 67.1 | |[gte-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct)| 56.2 | 67.3 | | [e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) | 56.9 | 66.6 | |[google-gecko.text-embedding-preview-0409](https://cloud.google.com/vertex-ai/generative-ai/docs/embeddings/get-text-embeddings?hl=ko#latest_models)| 55.7 | 66.3 | |[text-embedding-3-large](https://openai.com/index/new-embedding-models-and-api-updates/)| 55.4 | 64.6 | |[Cohere-embed-english-v3.0](https://huggingface.co/Cohere/Cohere-embed-english-v3.0)| 55.0 | 64.5 | # Linq Research Team. - [Junseong Kim](https://huggingface.co/Junseong) - [Seolhwa Lee](https://huggingface.co/Seolhwa) - [Jihoon Kwon](https://huggingface.co/Mayfull) - [Sangmo Gu](https://huggingface.co/karma-os) - Yejin Kim - Minkyung Cho - [Jy-yong Sohn](https://itml.yonsei.ac.kr/professor) - [Chanyeol Choi](https://www.linkedin.com/in/chanyeolchoi) # Citation ```bibtex @misc{LinqAIResearch2024, title={Linq-Embed-Mistral:Elevating Text Retrieval with Improved GPT Data Through Task-Specific Control and Quality Refinement}, author={Junseong Kim, Seolhwa Lee, Jihoon Kwon, Sangmo Gu, Yejin Kim, Minkyung Cho, Jy-yong Sohn, Chanyeol Choi}, howpublished={Linq AI Research Blog}, year={2024}, url={https://getlinq.com/blog/linq-embed-mistral/} } ```
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
ekorman-strive/bge-large-en-v1.5
ekorman-strive
feature-extraction
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "mteb", "en", "arxiv:2401.03462", "arxiv:2312.15503", "arxiv:2311.13534", "arxiv:2310.07554", "arxiv:2309.07597", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,716
1,716
17
0
--- language: - en license: mit tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - mteb model-index: - name: bge-large-en-v1.5 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 75.8507462686567 - type: ap value: 38.566457320228245 - type: f1 value: 69.69386648043475 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 92.416675 - type: ap value: 89.1928861155922 - type: f1 value: 92.39477019574215 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.175999999999995 - type: f1 value: 47.80712792870253 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 40.184999999999995 - type: map_at_10 value: 55.654 - type: map_at_100 value: 56.25 - type: map_at_1000 value: 56.255 - type: map_at_3 value: 51.742999999999995 - type: map_at_5 value: 54.129000000000005 - type: mrr_at_1 value: 40.967 - type: mrr_at_10 value: 55.96 - type: mrr_at_100 value: 56.54900000000001 - type: mrr_at_1000 value: 56.554 - type: mrr_at_3 value: 51.980000000000004 - type: mrr_at_5 value: 54.44 - type: ndcg_at_1 value: 40.184999999999995 - type: ndcg_at_10 value: 63.542 - type: ndcg_at_100 value: 65.96499999999999 - type: ndcg_at_1000 value: 66.08699999999999 - type: ndcg_at_3 value: 55.582 - type: ndcg_at_5 value: 59.855000000000004 - type: precision_at_1 value: 40.184999999999995 - type: precision_at_10 value: 8.841000000000001 - type: precision_at_100 value: 0.987 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.238 - type: precision_at_5 value: 15.405 - type: recall_at_1 value: 40.184999999999995 - type: recall_at_10 value: 88.407 - type: recall_at_100 value: 98.72 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 66.714 - type: recall_at_5 value: 77.027 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 48.567077926750066 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 43.19453389182364 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 64.46555939623092 - type: mrr value: 77.82361605768807 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 84.9554128814735 - type: cos_sim_spearman value: 84.65373612172036 - type: euclidean_pearson value: 83.2905059954138 - type: euclidean_spearman value: 84.52240782811128 - type: manhattan_pearson value: 82.99533802997436 - type: manhattan_spearman value: 84.20673798475734 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 87.78896103896103 - type: f1 value: 87.77189310964883 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 39.714538337650495 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 36.90108349284447 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 32.795 - type: map_at_10 value: 43.669000000000004 - type: map_at_100 value: 45.151 - type: map_at_1000 value: 45.278 - type: map_at_3 value: 40.006 - type: map_at_5 value: 42.059999999999995 - type: mrr_at_1 value: 39.771 - type: mrr_at_10 value: 49.826 - type: mrr_at_100 value: 50.504000000000005 - type: mrr_at_1000 value: 50.549 - type: mrr_at_3 value: 47.115 - type: mrr_at_5 value: 48.832 - type: ndcg_at_1 value: 39.771 - type: ndcg_at_10 value: 50.217999999999996 - type: ndcg_at_100 value: 55.454 - type: ndcg_at_1000 value: 57.37 - type: ndcg_at_3 value: 44.885000000000005 - type: ndcg_at_5 value: 47.419 - type: precision_at_1 value: 39.771 - type: precision_at_10 value: 9.642000000000001 - type: precision_at_100 value: 1.538 - type: precision_at_1000 value: 0.198 - type: precision_at_3 value: 21.268 - type: precision_at_5 value: 15.536 - type: recall_at_1 value: 32.795 - type: recall_at_10 value: 62.580999999999996 - type: recall_at_100 value: 84.438 - type: recall_at_1000 value: 96.492 - type: recall_at_3 value: 47.071000000000005 - type: recall_at_5 value: 54.079 - type: map_at_1 value: 32.671 - type: map_at_10 value: 43.334 - type: map_at_100 value: 44.566 - type: map_at_1000 value: 44.702999999999996 - type: map_at_3 value: 40.343 - type: map_at_5 value: 41.983 - type: mrr_at_1 value: 40.764 - type: mrr_at_10 value: 49.382 - type: mrr_at_100 value: 49.988 - type: mrr_at_1000 value: 50.03300000000001 - type: mrr_at_3 value: 47.293 - type: mrr_at_5 value: 48.51 - type: ndcg_at_1 value: 40.764 - type: ndcg_at_10 value: 49.039 - type: ndcg_at_100 value: 53.259 - type: ndcg_at_1000 value: 55.253 - type: ndcg_at_3 value: 45.091 - type: ndcg_at_5 value: 46.839999999999996 - type: precision_at_1 value: 40.764 - type: precision_at_10 value: 9.191 - type: precision_at_100 value: 1.476 - type: precision_at_1000 value: 0.19499999999999998 - type: precision_at_3 value: 21.72 - type: precision_at_5 value: 15.299 - type: recall_at_1 value: 32.671 - type: recall_at_10 value: 58.816 - type: recall_at_100 value: 76.654 - type: recall_at_1000 value: 89.05999999999999 - type: recall_at_3 value: 46.743 - type: recall_at_5 value: 51.783 - type: map_at_1 value: 40.328 - type: map_at_10 value: 53.32599999999999 - type: map_at_100 value: 54.37499999999999 - type: map_at_1000 value: 54.429 - type: map_at_3 value: 49.902 - type: map_at_5 value: 52.002 - type: mrr_at_1 value: 46.332 - type: mrr_at_10 value: 56.858 - type: mrr_at_100 value: 57.522 - type: mrr_at_1000 value: 57.54899999999999 - type: mrr_at_3 value: 54.472 - type: mrr_at_5 value: 55.996 - type: ndcg_at_1 value: 46.332 - type: ndcg_at_10 value: 59.313 - type: ndcg_at_100 value: 63.266999999999996 - type: ndcg_at_1000 value: 64.36 - type: ndcg_at_3 value: 53.815000000000005 - type: ndcg_at_5 value: 56.814 - type: precision_at_1 value: 46.332 - type: precision_at_10 value: 9.53 - type: precision_at_100 value: 1.238 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 24.054000000000002 - type: precision_at_5 value: 16.589000000000002 - type: recall_at_1 value: 40.328 - type: recall_at_10 value: 73.421 - type: recall_at_100 value: 90.059 - type: recall_at_1000 value: 97.81 - type: recall_at_3 value: 59.009 - type: recall_at_5 value: 66.352 - type: map_at_1 value: 27.424 - type: map_at_10 value: 36.332 - type: map_at_100 value: 37.347 - type: map_at_1000 value: 37.422 - type: map_at_3 value: 33.743 - type: map_at_5 value: 35.176 - type: mrr_at_1 value: 29.153000000000002 - type: mrr_at_10 value: 38.233 - type: mrr_at_100 value: 39.109 - type: mrr_at_1000 value: 39.164 - type: mrr_at_3 value: 35.876000000000005 - type: mrr_at_5 value: 37.169000000000004 - type: ndcg_at_1 value: 29.153000000000002 - type: ndcg_at_10 value: 41.439 - type: ndcg_at_100 value: 46.42 - type: ndcg_at_1000 value: 48.242000000000004 - type: ndcg_at_3 value: 36.362 - type: ndcg_at_5 value: 38.743 - type: precision_at_1 value: 29.153000000000002 - type: precision_at_10 value: 6.315999999999999 - type: precision_at_100 value: 0.927 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 15.443000000000001 - type: precision_at_5 value: 10.644 - type: recall_at_1 value: 27.424 - type: recall_at_10 value: 55.364000000000004 - type: recall_at_100 value: 78.211 - type: recall_at_1000 value: 91.74600000000001 - type: recall_at_3 value: 41.379 - type: recall_at_5 value: 47.14 - type: map_at_1 value: 19.601 - type: map_at_10 value: 27.826 - type: map_at_100 value: 29.017 - type: map_at_1000 value: 29.137 - type: map_at_3 value: 25.125999999999998 - type: map_at_5 value: 26.765 - type: mrr_at_1 value: 24.005000000000003 - type: mrr_at_10 value: 32.716 - type: mrr_at_100 value: 33.631 - type: mrr_at_1000 value: 33.694 - type: mrr_at_3 value: 29.934 - type: mrr_at_5 value: 31.630999999999997 - type: ndcg_at_1 value: 24.005000000000003 - type: ndcg_at_10 value: 33.158 - type: ndcg_at_100 value: 38.739000000000004 - type: ndcg_at_1000 value: 41.495 - type: ndcg_at_3 value: 28.185 - type: ndcg_at_5 value: 30.796 - type: precision_at_1 value: 24.005000000000003 - type: precision_at_10 value: 5.908 - type: precision_at_100 value: 1.005 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 13.391 - type: precision_at_5 value: 9.876 - type: recall_at_1 value: 19.601 - type: recall_at_10 value: 44.746 - type: recall_at_100 value: 68.82300000000001 - type: recall_at_1000 value: 88.215 - type: recall_at_3 value: 31.239 - type: recall_at_5 value: 37.695 - type: map_at_1 value: 30.130000000000003 - type: map_at_10 value: 40.96 - type: map_at_100 value: 42.282 - type: map_at_1000 value: 42.392 - type: map_at_3 value: 37.889 - type: map_at_5 value: 39.661 - type: mrr_at_1 value: 36.958999999999996 - type: mrr_at_10 value: 46.835 - type: mrr_at_100 value: 47.644 - type: mrr_at_1000 value: 47.688 - type: mrr_at_3 value: 44.562000000000005 - type: mrr_at_5 value: 45.938 - type: ndcg_at_1 value: 36.958999999999996 - type: ndcg_at_10 value: 47.06 - type: ndcg_at_100 value: 52.345 - type: ndcg_at_1000 value: 54.35 - type: ndcg_at_3 value: 42.301 - type: ndcg_at_5 value: 44.635999999999996 - type: precision_at_1 value: 36.958999999999996 - type: precision_at_10 value: 8.479000000000001 - type: precision_at_100 value: 1.284 - type: precision_at_1000 value: 0.163 - type: precision_at_3 value: 20.244 - type: precision_at_5 value: 14.224999999999998 - type: recall_at_1 value: 30.130000000000003 - type: recall_at_10 value: 59.27 - type: recall_at_100 value: 81.195 - type: recall_at_1000 value: 94.21199999999999 - type: recall_at_3 value: 45.885 - type: recall_at_5 value: 52.016 - type: map_at_1 value: 26.169999999999998 - type: map_at_10 value: 36.451 - type: map_at_100 value: 37.791000000000004 - type: map_at_1000 value: 37.897 - type: map_at_3 value: 33.109 - type: map_at_5 value: 34.937000000000005 - type: mrr_at_1 value: 32.877 - type: mrr_at_10 value: 42.368 - type: mrr_at_100 value: 43.201 - type: mrr_at_1000 value: 43.259 - type: mrr_at_3 value: 39.763999999999996 - type: mrr_at_5 value: 41.260000000000005 - type: ndcg_at_1 value: 32.877 - type: ndcg_at_10 value: 42.659000000000006 - type: ndcg_at_100 value: 48.161 - type: ndcg_at_1000 value: 50.345 - type: ndcg_at_3 value: 37.302 - type: ndcg_at_5 value: 39.722 - type: precision_at_1 value: 32.877 - type: precision_at_10 value: 7.9 - type: precision_at_100 value: 1.236 - type: precision_at_1000 value: 0.158 - type: precision_at_3 value: 17.846 - type: precision_at_5 value: 12.9 - type: recall_at_1 value: 26.169999999999998 - type: recall_at_10 value: 55.35 - type: recall_at_100 value: 78.755 - type: recall_at_1000 value: 93.518 - type: recall_at_3 value: 40.176 - type: recall_at_5 value: 46.589000000000006 - type: map_at_1 value: 27.15516666666667 - type: map_at_10 value: 36.65741666666667 - type: map_at_100 value: 37.84991666666666 - type: map_at_1000 value: 37.96316666666667 - type: map_at_3 value: 33.74974999999999 - type: map_at_5 value: 35.3765 - type: mrr_at_1 value: 32.08233333333334 - type: mrr_at_10 value: 41.033833333333334 - type: mrr_at_100 value: 41.84524999999999 - type: mrr_at_1000 value: 41.89983333333333 - type: mrr_at_3 value: 38.62008333333333 - type: mrr_at_5 value: 40.03441666666666 - type: ndcg_at_1 value: 32.08233333333334 - type: ndcg_at_10 value: 42.229 - type: ndcg_at_100 value: 47.26716666666667 - type: ndcg_at_1000 value: 49.43466666666667 - type: ndcg_at_3 value: 37.36408333333333 - type: ndcg_at_5 value: 39.6715 - type: precision_at_1 value: 32.08233333333334 - type: precision_at_10 value: 7.382583333333334 - type: precision_at_100 value: 1.16625 - type: precision_at_1000 value: 0.15408333333333332 - type: precision_at_3 value: 17.218 - type: precision_at_5 value: 12.21875 - type: recall_at_1 value: 27.15516666666667 - type: recall_at_10 value: 54.36683333333333 - type: recall_at_100 value: 76.37183333333333 - type: recall_at_1000 value: 91.26183333333333 - type: recall_at_3 value: 40.769916666666674 - type: recall_at_5 value: 46.702333333333335 - type: map_at_1 value: 25.749 - type: map_at_10 value: 33.001999999999995 - type: map_at_100 value: 33.891 - type: map_at_1000 value: 33.993 - type: map_at_3 value: 30.703999999999997 - type: map_at_5 value: 31.959 - type: mrr_at_1 value: 28.834 - type: mrr_at_10 value: 35.955 - type: mrr_at_100 value: 36.709 - type: mrr_at_1000 value: 36.779 - type: mrr_at_3 value: 33.947 - type: mrr_at_5 value: 35.089 - type: ndcg_at_1 value: 28.834 - type: ndcg_at_10 value: 37.329 - type: ndcg_at_100 value: 41.79 - type: ndcg_at_1000 value: 44.169000000000004 - type: ndcg_at_3 value: 33.184999999999995 - type: ndcg_at_5 value: 35.107 - type: precision_at_1 value: 28.834 - type: precision_at_10 value: 5.7669999999999995 - type: precision_at_100 value: 0.876 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 14.213000000000001 - type: precision_at_5 value: 9.754999999999999 - type: recall_at_1 value: 25.749 - type: recall_at_10 value: 47.791 - type: recall_at_100 value: 68.255 - type: recall_at_1000 value: 85.749 - type: recall_at_3 value: 36.199 - type: recall_at_5 value: 41.071999999999996 - type: map_at_1 value: 17.777 - type: map_at_10 value: 25.201 - type: map_at_100 value: 26.423999999999996 - type: map_at_1000 value: 26.544 - type: map_at_3 value: 22.869 - type: map_at_5 value: 24.023 - type: mrr_at_1 value: 21.473 - type: mrr_at_10 value: 29.12 - type: mrr_at_100 value: 30.144 - type: mrr_at_1000 value: 30.215999999999998 - type: mrr_at_3 value: 26.933 - type: mrr_at_5 value: 28.051 - type: ndcg_at_1 value: 21.473 - type: ndcg_at_10 value: 30.003 - type: ndcg_at_100 value: 35.766 - type: ndcg_at_1000 value: 38.501000000000005 - type: ndcg_at_3 value: 25.773000000000003 - type: ndcg_at_5 value: 27.462999999999997 - type: precision_at_1 value: 21.473 - type: precision_at_10 value: 5.482 - type: precision_at_100 value: 0.975 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 12.205 - type: precision_at_5 value: 8.692 - type: recall_at_1 value: 17.777 - type: recall_at_10 value: 40.582 - type: recall_at_100 value: 66.305 - type: recall_at_1000 value: 85.636 - type: recall_at_3 value: 28.687 - type: recall_at_5 value: 33.089 - type: map_at_1 value: 26.677 - type: map_at_10 value: 36.309000000000005 - type: map_at_100 value: 37.403999999999996 - type: map_at_1000 value: 37.496 - type: map_at_3 value: 33.382 - type: map_at_5 value: 34.98 - type: mrr_at_1 value: 31.343 - type: mrr_at_10 value: 40.549 - type: mrr_at_100 value: 41.342 - type: mrr_at_1000 value: 41.397 - type: mrr_at_3 value: 38.029 - type: mrr_at_5 value: 39.451 - type: ndcg_at_1 value: 31.343 - type: ndcg_at_10 value: 42.1 - type: ndcg_at_100 value: 47.089999999999996 - type: ndcg_at_1000 value: 49.222 - type: ndcg_at_3 value: 36.836999999999996 - type: ndcg_at_5 value: 39.21 - type: precision_at_1 value: 31.343 - type: precision_at_10 value: 7.164 - type: precision_at_100 value: 1.0959999999999999 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 16.915 - type: precision_at_5 value: 11.940000000000001 - type: recall_at_1 value: 26.677 - type: recall_at_10 value: 55.54599999999999 - type: recall_at_100 value: 77.094 - type: recall_at_1000 value: 92.01 - type: recall_at_3 value: 41.191 - type: recall_at_5 value: 47.006 - type: map_at_1 value: 24.501 - type: map_at_10 value: 33.102 - type: map_at_100 value: 34.676 - type: map_at_1000 value: 34.888000000000005 - type: map_at_3 value: 29.944 - type: map_at_5 value: 31.613999999999997 - type: mrr_at_1 value: 29.447000000000003 - type: mrr_at_10 value: 37.996 - type: mrr_at_100 value: 38.946 - type: mrr_at_1000 value: 38.995000000000005 - type: mrr_at_3 value: 35.079 - type: mrr_at_5 value: 36.69 - type: ndcg_at_1 value: 29.447000000000003 - type: ndcg_at_10 value: 39.232 - type: ndcg_at_100 value: 45.247 - type: ndcg_at_1000 value: 47.613 - type: ndcg_at_3 value: 33.922999999999995 - type: ndcg_at_5 value: 36.284 - type: precision_at_1 value: 29.447000000000003 - type: precision_at_10 value: 7.648000000000001 - type: precision_at_100 value: 1.516 - type: precision_at_1000 value: 0.23900000000000002 - type: precision_at_3 value: 16.008 - type: precision_at_5 value: 11.779 - type: recall_at_1 value: 24.501 - type: recall_at_10 value: 51.18899999999999 - type: recall_at_100 value: 78.437 - type: recall_at_1000 value: 92.842 - type: recall_at_3 value: 35.808 - type: recall_at_5 value: 42.197 - type: map_at_1 value: 22.039 - type: map_at_10 value: 30.377 - type: map_at_100 value: 31.275 - type: map_at_1000 value: 31.379 - type: map_at_3 value: 27.98 - type: map_at_5 value: 29.358 - type: mrr_at_1 value: 24.03 - type: mrr_at_10 value: 32.568000000000005 - type: mrr_at_100 value: 33.403 - type: mrr_at_1000 value: 33.475 - type: mrr_at_3 value: 30.436999999999998 - type: mrr_at_5 value: 31.796000000000003 - type: ndcg_at_1 value: 24.03 - type: ndcg_at_10 value: 35.198 - type: ndcg_at_100 value: 39.668 - type: ndcg_at_1000 value: 42.296 - type: ndcg_at_3 value: 30.709999999999997 - type: ndcg_at_5 value: 33.024 - type: precision_at_1 value: 24.03 - type: precision_at_10 value: 5.564 - type: precision_at_100 value: 0.828 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 13.309000000000001 - type: precision_at_5 value: 9.39 - type: recall_at_1 value: 22.039 - type: recall_at_10 value: 47.746 - type: recall_at_100 value: 68.23599999999999 - type: recall_at_1000 value: 87.852 - type: recall_at_3 value: 35.852000000000004 - type: recall_at_5 value: 41.410000000000004 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 15.692999999999998 - type: map_at_10 value: 26.903 - type: map_at_100 value: 28.987000000000002 - type: map_at_1000 value: 29.176999999999996 - type: map_at_3 value: 22.137 - type: map_at_5 value: 24.758 - type: mrr_at_1 value: 35.57 - type: mrr_at_10 value: 47.821999999999996 - type: mrr_at_100 value: 48.608000000000004 - type: mrr_at_1000 value: 48.638999999999996 - type: mrr_at_3 value: 44.452000000000005 - type: mrr_at_5 value: 46.546 - type: ndcg_at_1 value: 35.57 - type: ndcg_at_10 value: 36.567 - type: ndcg_at_100 value: 44.085 - type: ndcg_at_1000 value: 47.24 - type: ndcg_at_3 value: 29.964000000000002 - type: ndcg_at_5 value: 32.511 - type: precision_at_1 value: 35.57 - type: precision_at_10 value: 11.485 - type: precision_at_100 value: 1.9619999999999997 - type: precision_at_1000 value: 0.256 - type: precision_at_3 value: 22.237000000000002 - type: precision_at_5 value: 17.471999999999998 - type: recall_at_1 value: 15.692999999999998 - type: recall_at_10 value: 43.056 - type: recall_at_100 value: 68.628 - type: recall_at_1000 value: 86.075 - type: recall_at_3 value: 26.918999999999997 - type: recall_at_5 value: 34.14 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 9.53 - type: map_at_10 value: 20.951 - type: map_at_100 value: 30.136000000000003 - type: map_at_1000 value: 31.801000000000002 - type: map_at_3 value: 15.021 - type: map_at_5 value: 17.471999999999998 - type: mrr_at_1 value: 71.0 - type: mrr_at_10 value: 79.176 - type: mrr_at_100 value: 79.418 - type: mrr_at_1000 value: 79.426 - type: mrr_at_3 value: 78.125 - type: mrr_at_5 value: 78.61200000000001 - type: ndcg_at_1 value: 58.5 - type: ndcg_at_10 value: 44.106 - type: ndcg_at_100 value: 49.268 - type: ndcg_at_1000 value: 56.711999999999996 - type: ndcg_at_3 value: 48.934 - type: ndcg_at_5 value: 45.826 - type: precision_at_1 value: 71.0 - type: precision_at_10 value: 35.0 - type: precision_at_100 value: 11.360000000000001 - type: precision_at_1000 value: 2.046 - type: precision_at_3 value: 52.833 - type: precision_at_5 value: 44.15 - type: recall_at_1 value: 9.53 - type: recall_at_10 value: 26.811 - type: recall_at_100 value: 55.916999999999994 - type: recall_at_1000 value: 79.973 - type: recall_at_3 value: 16.413 - type: recall_at_5 value: 19.980999999999998 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 51.519999999999996 - type: f1 value: 46.36601294761231 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 74.413 - type: map_at_10 value: 83.414 - type: map_at_100 value: 83.621 - type: map_at_1000 value: 83.635 - type: map_at_3 value: 82.337 - type: map_at_5 value: 83.039 - type: mrr_at_1 value: 80.19800000000001 - type: mrr_at_10 value: 87.715 - type: mrr_at_100 value: 87.778 - type: mrr_at_1000 value: 87.779 - type: mrr_at_3 value: 87.106 - type: mrr_at_5 value: 87.555 - type: ndcg_at_1 value: 80.19800000000001 - type: ndcg_at_10 value: 87.182 - type: ndcg_at_100 value: 87.90299999999999 - type: ndcg_at_1000 value: 88.143 - type: ndcg_at_3 value: 85.60600000000001 - type: ndcg_at_5 value: 86.541 - type: precision_at_1 value: 80.19800000000001 - type: precision_at_10 value: 10.531 - type: precision_at_100 value: 1.113 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 32.933 - type: precision_at_5 value: 20.429 - type: recall_at_1 value: 74.413 - type: recall_at_10 value: 94.363 - type: recall_at_100 value: 97.165 - type: recall_at_1000 value: 98.668 - type: recall_at_3 value: 90.108 - type: recall_at_5 value: 92.52 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 22.701 - type: map_at_10 value: 37.122 - type: map_at_100 value: 39.178000000000004 - type: map_at_1000 value: 39.326 - type: map_at_3 value: 32.971000000000004 - type: map_at_5 value: 35.332 - type: mrr_at_1 value: 44.753 - type: mrr_at_10 value: 53.452 - type: mrr_at_100 value: 54.198 - type: mrr_at_1000 value: 54.225 - type: mrr_at_3 value: 50.952 - type: mrr_at_5 value: 52.464 - type: ndcg_at_1 value: 44.753 - type: ndcg_at_10 value: 45.021 - type: ndcg_at_100 value: 52.028 - type: ndcg_at_1000 value: 54.596000000000004 - type: ndcg_at_3 value: 41.622 - type: ndcg_at_5 value: 42.736000000000004 - type: precision_at_1 value: 44.753 - type: precision_at_10 value: 12.284 - type: precision_at_100 value: 1.955 - type: precision_at_1000 value: 0.243 - type: precision_at_3 value: 27.828999999999997 - type: precision_at_5 value: 20.061999999999998 - type: recall_at_1 value: 22.701 - type: recall_at_10 value: 51.432 - type: recall_at_100 value: 77.009 - type: recall_at_1000 value: 92.511 - type: recall_at_3 value: 37.919000000000004 - type: recall_at_5 value: 44.131 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 40.189 - type: map_at_10 value: 66.24600000000001 - type: map_at_100 value: 67.098 - type: map_at_1000 value: 67.149 - type: map_at_3 value: 62.684 - type: map_at_5 value: 64.974 - type: mrr_at_1 value: 80.378 - type: mrr_at_10 value: 86.127 - type: mrr_at_100 value: 86.29299999999999 - type: mrr_at_1000 value: 86.297 - type: mrr_at_3 value: 85.31400000000001 - type: mrr_at_5 value: 85.858 - type: ndcg_at_1 value: 80.378 - type: ndcg_at_10 value: 74.101 - type: ndcg_at_100 value: 76.993 - type: ndcg_at_1000 value: 77.948 - type: ndcg_at_3 value: 69.232 - type: ndcg_at_5 value: 72.04599999999999 - type: precision_at_1 value: 80.378 - type: precision_at_10 value: 15.595999999999998 - type: precision_at_100 value: 1.7840000000000003 - type: precision_at_1000 value: 0.191 - type: precision_at_3 value: 44.884 - type: precision_at_5 value: 29.145 - type: recall_at_1 value: 40.189 - type: recall_at_10 value: 77.981 - type: recall_at_100 value: 89.21 - type: recall_at_1000 value: 95.48299999999999 - type: recall_at_3 value: 67.326 - type: recall_at_5 value: 72.863 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 92.84599999999999 - type: ap value: 89.4710787567357 - type: f1 value: 92.83752676932258 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 23.132 - type: map_at_10 value: 35.543 - type: map_at_100 value: 36.702 - type: map_at_1000 value: 36.748999999999995 - type: map_at_3 value: 31.737 - type: map_at_5 value: 33.927 - type: mrr_at_1 value: 23.782 - type: mrr_at_10 value: 36.204 - type: mrr_at_100 value: 37.29 - type: mrr_at_1000 value: 37.330999999999996 - type: mrr_at_3 value: 32.458999999999996 - type: mrr_at_5 value: 34.631 - type: ndcg_at_1 value: 23.782 - type: ndcg_at_10 value: 42.492999999999995 - type: ndcg_at_100 value: 47.985 - type: ndcg_at_1000 value: 49.141 - type: ndcg_at_3 value: 34.748000000000005 - type: ndcg_at_5 value: 38.651 - type: precision_at_1 value: 23.782 - type: precision_at_10 value: 6.665 - type: precision_at_100 value: 0.941 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.776 - type: precision_at_5 value: 10.84 - type: recall_at_1 value: 23.132 - type: recall_at_10 value: 63.794 - type: recall_at_100 value: 89.027 - type: recall_at_1000 value: 97.807 - type: recall_at_3 value: 42.765 - type: recall_at_5 value: 52.11 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 94.59188326493388 - type: f1 value: 94.3842594786827 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 79.49384404924761 - type: f1 value: 59.7580539534629 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 77.56220578345663 - type: f1 value: 75.27228165561478 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 80.53463349024884 - type: f1 value: 80.4893958236536 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 32.56100273484962 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 31.470380028839607 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 32.06102792457849 - type: mrr value: 33.30709199672238 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.776999999999999 - type: map_at_10 value: 14.924000000000001 - type: map_at_100 value: 18.955 - type: map_at_1000 value: 20.538999999999998 - type: map_at_3 value: 10.982 - type: map_at_5 value: 12.679000000000002 - type: mrr_at_1 value: 47.988 - type: mrr_at_10 value: 57.232000000000006 - type: mrr_at_100 value: 57.818999999999996 - type: mrr_at_1000 value: 57.847 - type: mrr_at_3 value: 54.901999999999994 - type: mrr_at_5 value: 56.481 - type: ndcg_at_1 value: 46.594 - type: ndcg_at_10 value: 38.129000000000005 - type: ndcg_at_100 value: 35.54 - type: ndcg_at_1000 value: 44.172 - type: ndcg_at_3 value: 43.025999999999996 - type: ndcg_at_5 value: 41.052 - type: precision_at_1 value: 47.988 - type: precision_at_10 value: 28.111000000000004 - type: precision_at_100 value: 8.929 - type: precision_at_1000 value: 2.185 - type: precision_at_3 value: 40.144000000000005 - type: precision_at_5 value: 35.232 - type: recall_at_1 value: 6.776999999999999 - type: recall_at_10 value: 19.289 - type: recall_at_100 value: 36.359 - type: recall_at_1000 value: 67.54 - type: recall_at_3 value: 11.869 - type: recall_at_5 value: 14.999 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 31.108000000000004 - type: map_at_10 value: 47.126000000000005 - type: map_at_100 value: 48.171 - type: map_at_1000 value: 48.199 - type: map_at_3 value: 42.734 - type: map_at_5 value: 45.362 - type: mrr_at_1 value: 34.936 - type: mrr_at_10 value: 49.571 - type: mrr_at_100 value: 50.345 - type: mrr_at_1000 value: 50.363 - type: mrr_at_3 value: 45.959 - type: mrr_at_5 value: 48.165 - type: ndcg_at_1 value: 34.936 - type: ndcg_at_10 value: 55.028999999999996 - type: ndcg_at_100 value: 59.244 - type: ndcg_at_1000 value: 59.861 - type: ndcg_at_3 value: 46.872 - type: ndcg_at_5 value: 51.217999999999996 - type: precision_at_1 value: 34.936 - type: precision_at_10 value: 9.099 - type: precision_at_100 value: 1.145 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 21.456 - type: precision_at_5 value: 15.411 - type: recall_at_1 value: 31.108000000000004 - type: recall_at_10 value: 76.53999999999999 - type: recall_at_100 value: 94.39 - type: recall_at_1000 value: 98.947 - type: recall_at_3 value: 55.572 - type: recall_at_5 value: 65.525 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 71.56400000000001 - type: map_at_10 value: 85.482 - type: map_at_100 value: 86.114 - type: map_at_1000 value: 86.13 - type: map_at_3 value: 82.607 - type: map_at_5 value: 84.405 - type: mrr_at_1 value: 82.42 - type: mrr_at_10 value: 88.304 - type: mrr_at_100 value: 88.399 - type: mrr_at_1000 value: 88.399 - type: mrr_at_3 value: 87.37 - type: mrr_at_5 value: 88.024 - type: ndcg_at_1 value: 82.45 - type: ndcg_at_10 value: 89.06500000000001 - type: ndcg_at_100 value: 90.232 - type: ndcg_at_1000 value: 90.305 - type: ndcg_at_3 value: 86.375 - type: ndcg_at_5 value: 87.85300000000001 - type: precision_at_1 value: 82.45 - type: precision_at_10 value: 13.486999999999998 - type: precision_at_100 value: 1.534 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.813 - type: precision_at_5 value: 24.773999999999997 - type: recall_at_1 value: 71.56400000000001 - type: recall_at_10 value: 95.812 - type: recall_at_100 value: 99.7 - type: recall_at_1000 value: 99.979 - type: recall_at_3 value: 87.966 - type: recall_at_5 value: 92.268 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 57.241876648614145 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 64.66212576446223 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 5.308 - type: map_at_10 value: 13.803 - type: map_at_100 value: 16.176 - type: map_at_1000 value: 16.561 - type: map_at_3 value: 9.761000000000001 - type: map_at_5 value: 11.802 - type: mrr_at_1 value: 26.200000000000003 - type: mrr_at_10 value: 37.621 - type: mrr_at_100 value: 38.767 - type: mrr_at_1000 value: 38.815 - type: mrr_at_3 value: 34.117 - type: mrr_at_5 value: 36.107 - type: ndcg_at_1 value: 26.200000000000003 - type: ndcg_at_10 value: 22.64 - type: ndcg_at_100 value: 31.567 - type: ndcg_at_1000 value: 37.623 - type: ndcg_at_3 value: 21.435000000000002 - type: ndcg_at_5 value: 18.87 - type: precision_at_1 value: 26.200000000000003 - type: precision_at_10 value: 11.74 - type: precision_at_100 value: 2.465 - type: precision_at_1000 value: 0.391 - type: precision_at_3 value: 20.033 - type: precision_at_5 value: 16.64 - type: recall_at_1 value: 5.308 - type: recall_at_10 value: 23.794999999999998 - type: recall_at_100 value: 50.015 - type: recall_at_1000 value: 79.283 - type: recall_at_3 value: 12.178 - type: recall_at_5 value: 16.882 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 84.93231134675553 - type: cos_sim_spearman value: 81.68319292603205 - type: euclidean_pearson value: 81.8396814380367 - type: euclidean_spearman value: 81.24641903349945 - type: manhattan_pearson value: 81.84698799204274 - type: manhattan_spearman value: 81.24269997904105 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.73241671587446 - type: cos_sim_spearman value: 79.05091082971826 - type: euclidean_pearson value: 83.91146869578044 - type: euclidean_spearman value: 79.87978465370936 - type: manhattan_pearson value: 83.90888338917678 - type: manhattan_spearman value: 79.87482848584241 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 85.14970731146177 - type: cos_sim_spearman value: 86.37363490084627 - type: euclidean_pearson value: 83.02154218530433 - type: euclidean_spearman value: 83.80258761957367 - type: manhattan_pearson value: 83.01664495119347 - type: manhattan_spearman value: 83.77567458007952 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 83.40474139886784 - type: cos_sim_spearman value: 82.77768789165984 - type: euclidean_pearson value: 80.7065877443695 - type: euclidean_spearman value: 81.375940662505 - type: manhattan_pearson value: 80.6507552270278 - type: manhattan_spearman value: 81.32782179098741 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.08585968722274 - type: cos_sim_spearman value: 88.03110031451399 - type: euclidean_pearson value: 85.74012019602384 - type: euclidean_spearman value: 86.13592849438209 - type: manhattan_pearson value: 85.74404842369206 - type: manhattan_spearman value: 86.14492318960154 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.95069052788875 - type: cos_sim_spearman value: 86.4867991595147 - type: euclidean_pearson value: 84.31013325754635 - type: euclidean_spearman value: 85.01529258006482 - type: manhattan_pearson value: 84.26995570085374 - type: manhattan_spearman value: 84.96982104986162 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.54617647971897 - type: cos_sim_spearman value: 87.49834181751034 - type: euclidean_pearson value: 86.01015322577122 - type: euclidean_spearman value: 84.63362652063199 - type: manhattan_pearson value: 86.13807574475706 - type: manhattan_spearman value: 84.7772370721132 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 67.20047755786615 - type: cos_sim_spearman value: 67.05324077987636 - type: euclidean_pearson value: 66.91930642976601 - type: euclidean_spearman value: 65.21491856099105 - type: manhattan_pearson value: 66.78756851976624 - type: manhattan_spearman value: 65.12356257740728 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 86.19852871539686 - type: cos_sim_spearman value: 87.5161895296395 - type: euclidean_pearson value: 84.59848645207485 - type: euclidean_spearman value: 85.26427328757919 - type: manhattan_pearson value: 84.59747366996524 - type: manhattan_spearman value: 85.24045855146915 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.63320317811032 - type: mrr value: 96.26242947321379 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 60.928000000000004 - type: map_at_10 value: 70.112 - type: map_at_100 value: 70.59299999999999 - type: map_at_1000 value: 70.623 - type: map_at_3 value: 66.846 - type: map_at_5 value: 68.447 - type: mrr_at_1 value: 64.0 - type: mrr_at_10 value: 71.212 - type: mrr_at_100 value: 71.616 - type: mrr_at_1000 value: 71.64500000000001 - type: mrr_at_3 value: 68.77799999999999 - type: mrr_at_5 value: 70.094 - type: ndcg_at_1 value: 64.0 - type: ndcg_at_10 value: 74.607 - type: ndcg_at_100 value: 76.416 - type: ndcg_at_1000 value: 77.102 - type: ndcg_at_3 value: 69.126 - type: ndcg_at_5 value: 71.41300000000001 - type: precision_at_1 value: 64.0 - type: precision_at_10 value: 9.933 - type: precision_at_100 value: 1.077 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.556 - type: precision_at_5 value: 17.467 - type: recall_at_1 value: 60.928000000000004 - type: recall_at_10 value: 87.322 - type: recall_at_100 value: 94.833 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 72.628 - type: recall_at_5 value: 78.428 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.86237623762376 - type: cos_sim_ap value: 96.72586477206649 - type: cos_sim_f1 value: 93.01858362631845 - type: cos_sim_precision value: 93.4409687184662 - type: cos_sim_recall value: 92.60000000000001 - type: dot_accuracy value: 99.78019801980199 - type: dot_ap value: 93.72748205246228 - type: dot_f1 value: 89.04109589041096 - type: dot_precision value: 87.16475095785441 - type: dot_recall value: 91.0 - type: euclidean_accuracy value: 99.85445544554456 - type: euclidean_ap value: 96.6661459876145 - type: euclidean_f1 value: 92.58337481333997 - type: euclidean_precision value: 92.17046580773042 - type: euclidean_recall value: 93.0 - type: manhattan_accuracy value: 99.85445544554456 - type: manhattan_ap value: 96.6883549244056 - type: manhattan_f1 value: 92.57598405580468 - type: manhattan_precision value: 92.25422045680239 - type: manhattan_recall value: 92.9 - type: max_accuracy value: 99.86237623762376 - type: max_ap value: 96.72586477206649 - type: max_f1 value: 93.01858362631845 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 66.39930057069995 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 34.96398659903402 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 55.946944700355395 - type: mrr value: 56.97151398438164 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.541657650692905 - type: cos_sim_spearman value: 31.605804192286303 - type: dot_pearson value: 28.26905996736398 - type: dot_spearman value: 27.864801765851187 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.22599999999999998 - type: map_at_10 value: 1.8870000000000002 - type: map_at_100 value: 9.78 - type: map_at_1000 value: 22.514 - type: map_at_3 value: 0.6669999999999999 - type: map_at_5 value: 1.077 - type: mrr_at_1 value: 82.0 - type: mrr_at_10 value: 89.86699999999999 - type: mrr_at_100 value: 89.86699999999999 - type: mrr_at_1000 value: 89.86699999999999 - type: mrr_at_3 value: 89.667 - type: mrr_at_5 value: 89.667 - type: ndcg_at_1 value: 79.0 - type: ndcg_at_10 value: 74.818 - type: ndcg_at_100 value: 53.715999999999994 - type: ndcg_at_1000 value: 47.082 - type: ndcg_at_3 value: 82.134 - type: ndcg_at_5 value: 79.81899999999999 - type: precision_at_1 value: 82.0 - type: precision_at_10 value: 78.0 - type: precision_at_100 value: 54.48 - type: precision_at_1000 value: 20.518 - type: precision_at_3 value: 87.333 - type: precision_at_5 value: 85.2 - type: recall_at_1 value: 0.22599999999999998 - type: recall_at_10 value: 2.072 - type: recall_at_100 value: 13.013 - type: recall_at_1000 value: 43.462 - type: recall_at_3 value: 0.695 - type: recall_at_5 value: 1.139 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.328 - type: map_at_10 value: 9.795 - type: map_at_100 value: 15.801000000000002 - type: map_at_1000 value: 17.23 - type: map_at_3 value: 4.734 - type: map_at_5 value: 6.644 - type: mrr_at_1 value: 30.612000000000002 - type: mrr_at_10 value: 46.902 - type: mrr_at_100 value: 47.495 - type: mrr_at_1000 value: 47.495 - type: mrr_at_3 value: 41.156 - type: mrr_at_5 value: 44.218 - type: ndcg_at_1 value: 28.571 - type: ndcg_at_10 value: 24.806 - type: ndcg_at_100 value: 36.419000000000004 - type: ndcg_at_1000 value: 47.272999999999996 - type: ndcg_at_3 value: 25.666 - type: ndcg_at_5 value: 25.448999999999998 - type: precision_at_1 value: 30.612000000000002 - type: precision_at_10 value: 23.061 - type: precision_at_100 value: 7.714 - type: precision_at_1000 value: 1.484 - type: precision_at_3 value: 26.531 - type: precision_at_5 value: 26.122 - type: recall_at_1 value: 2.328 - type: recall_at_10 value: 16.524 - type: recall_at_100 value: 47.179 - type: recall_at_1000 value: 81.22200000000001 - type: recall_at_3 value: 5.745 - type: recall_at_5 value: 9.339 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.9142 - type: ap value: 14.335574772555415 - type: f1 value: 54.62839595194111 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 59.94340690435768 - type: f1 value: 60.286487936731916 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.26597708987974 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.48882398521786 - type: cos_sim_ap value: 79.04326607602204 - type: cos_sim_f1 value: 71.64566826860633 - type: cos_sim_precision value: 70.55512918905092 - type: cos_sim_recall value: 72.77044854881267 - type: dot_accuracy value: 84.19264469213805 - type: dot_ap value: 67.96360043562528 - type: dot_f1 value: 64.06418393006827 - type: dot_precision value: 58.64941898706424 - type: dot_recall value: 70.58047493403694 - type: euclidean_accuracy value: 87.45902127913214 - type: euclidean_ap value: 78.9742237648272 - type: euclidean_f1 value: 71.5553235908142 - type: euclidean_precision value: 70.77955601445535 - type: euclidean_recall value: 72.34828496042216 - type: manhattan_accuracy value: 87.41729749061214 - type: manhattan_ap value: 78.90073137580596 - type: manhattan_f1 value: 71.3942611553533 - type: manhattan_precision value: 68.52705653967483 - type: manhattan_recall value: 74.51187335092348 - type: max_accuracy value: 87.48882398521786 - type: max_ap value: 79.04326607602204 - type: max_f1 value: 71.64566826860633 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.68125897465751 - type: cos_sim_ap value: 85.6003454431979 - type: cos_sim_f1 value: 77.6957163958641 - type: cos_sim_precision value: 73.0110366307807 - type: cos_sim_recall value: 83.02279026793964 - type: dot_accuracy value: 87.7672992587418 - type: dot_ap value: 82.4971301112899 - type: dot_f1 value: 75.90528233151184 - type: dot_precision value: 72.0370626469368 - type: dot_recall value: 80.21250384970742 - type: euclidean_accuracy value: 88.4503434625684 - type: euclidean_ap value: 84.91949884748384 - type: euclidean_f1 value: 76.92365018444684 - type: euclidean_precision value: 74.53245721712759 - type: euclidean_recall value: 79.47336002463813 - type: manhattan_accuracy value: 88.47556952691427 - type: manhattan_ap value: 84.8963689101517 - type: manhattan_f1 value: 76.85901249256395 - type: manhattan_precision value: 74.31693989071039 - type: manhattan_recall value: 79.58115183246073 - type: max_accuracy value: 88.68125897465751 - type: max_ap value: 85.6003454431979 - type: max_f1 value: 77.6957163958641 --- <h1 align="center">FlagEmbedding</h1> <h4 align="center"> <p> <a href=#model-list>Model List</a> | <a href=#frequently-asked-questions>FAQ</a> | <a href=#usage>Usage</a> | <a href="#evaluation">Evaluation</a> | <a href="#train">Train</a> | <a href="#contact">Contact</a> | <a href="#citation">Citation</a> | <a href="#license">License</a> <p> </h4> For more details please refer to our Github: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding). If you are looking for a model that supports more languages, longer texts, and other retrieval methods, you can try using [bge-m3](https://huggingface.co/BAAI/bge-m3). [English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md) FlagEmbedding focuses on retrieval-augmented LLMs, consisting of the following projects currently: - **Long-Context LLM**: [Activation Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon) - **Fine-tuning of LM** : [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail) - **Dense Retrieval**: [BGE-M3](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3), [LLM Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), [BGE Embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding) - **Reranker Model**: [BGE Reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker) - **Benchmark**: [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) ## News - 1/30/2024: Release **BGE-M3**, a new member to BGE model series! M3 stands for **M**ulti-linguality (100+ languages), **M**ulti-granularities (input length up to 8192), **M**ulti-Functionality (unification of dense, lexical, multi-vec/colbert retrieval). It is the first embedding model that supports all three retrieval methods, achieving new SOTA on multi-lingual (MIRACL) and cross-lingual (MKQA) benchmarks. [Technical Report](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/BGE_M3/BGE_M3.pdf) and [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3). :fire: - 1/9/2024: Release [Activation-Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon), an effective, efficient, compatible, and low-cost (training) method to extend the context length of LLM. [Technical Report](https://arxiv.org/abs/2401.03462) :fire: - 12/24/2023: Release **LLaRA**, a LLaMA-7B based dense retriever, leading to state-of-the-art performances on MS MARCO and BEIR. Model and code will be open-sourced. Please stay tuned. [Technical Report](https://arxiv.org/abs/2312.15503) :fire: - 11/23/2023: Release [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail), a method to maintain general capabilities during fine-tuning by merging multiple language models. [Technical Report](https://arxiv.org/abs/2311.13534) :fire: - 10/12/2023: Release [LLM-Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Technical Report](https://arxiv.org/pdf/2310.07554.pdf) - 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) and [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released - 09/12/2023: New models: - **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than embedding model. We recommend to use/fine-tune them to re-rank top-k documents returned by embedding models. - **update embedding model**: release `bge-*-v1.5` embedding model to alleviate the issue of the similarity distribution, and enhance its retrieval ability without instruction. <details> <summary>More</summary> <!-- ### More --> - 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add script to mine hard negatives and support adding instruction during fine-tuning. - 08/09/2023: BGE Models are integrated into **Langchain**, you can use it like [this](#using-langchain); C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard). - 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗** - 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada: - 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test dataset. </details> ## Model List `bge` is short for `BAAI general embedding`. | Model | Language | | Description | query instruction for retrieval [1] | |:-------------------------------|:--------:| :--------:| :--------:|:--------:| | [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | [Inference](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3#usage) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3) | Multi-Functionality(dense retrieval, sparse retrieval, multi-vector(colbert)), Multi-Linguality, and Multi-Granularity(8192 tokens) | | | [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) | | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | | | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | | | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` | [1\]: If you need to search the relevant passages to a query, we suggest to add the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages. [2\]: Different from embedding model, reranker uses question and document as input and directly output similarity instead of embedding. To balance the accuracy and time cost, cross-encoder is widely used to re-rank top-k documents retrieved by other simple models. For examples, use bge embedding model to retrieve top 100 relevant documents, and then use bge reranker to re-rank the top 100 document to get the final top-3 results. All models have been uploaded to Huggingface Hub, and you can see them at https://huggingface.co/BAAI. If you cannot open the Huggingface Hub, you also can download the models at https://model.baai.ac.cn/models . ## Frequently asked questions <details> <summary>1. How to fine-tune bge embedding model?</summary> <!-- ### How to fine-tune bge embedding model? --> Following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model. Some suggestions: - Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance. - If you pre-train bge on your data, the pre-trained model cannot be directly used to calculate similarity, and it must be fine-tuned with contrastive learning before computing similarity. - If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank top-k results. Hard negatives also are needed to fine-tune reranker. </details> <details> <summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary> <!-- ### The similarity score between two dissimilar sentences is higher than 0.5 --> **Suggest to use bge v1.5, which alleviates the issue of the similarity distribution.** Since we finetune the models by contrastive learning with a temperature of 0.01, the similarity distribution of the current BGE model is about in the interval \[0.6, 1\]. So a similarity score greater than 0.5 does not indicate that the two sentences are similar. For downstream tasks, such as passage retrieval or semantic similarity, **what matters is the relative order of the scores, not the absolute value.** If you need to filter similar sentences based on a similarity threshold, please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9). </details> <details> <summary>3. When does the query instruction need to be used</summary> <!-- ### When does the query instruction need to be used --> For the `bge-*-v1.5`, we improve its retrieval ability when not using instruction. No instruction only has a slight degradation in retrieval performance compared with using instruction. So you can generate embedding without instruction in all cases for convenience. For a retrieval task that uses short queries to find long related documents, it is recommended to add instructions for these short queries. **The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.** In all cases, the documents/passages do not need to add the instruction. </details> ## Usage ### Usage for Embedding Model Here are some examples for using `bge` models with [FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers). #### Using FlagEmbedding ``` pip install -U FlagEmbedding ``` If it doesn't work for you, you can see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more methods to install FlagEmbedding. ```python from FlagEmbedding import FlagModel sentences_1 = ["样例数据-1", "样例数据-2"] sentences_2 = ["样例数据-3", "样例数据-4"] model = FlagModel('BAAI/bge-large-zh-v1.5', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:", use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation embeddings_1 = model.encode(sentences_1) embeddings_2 = model.encode(sentences_2) similarity = embeddings_1 @ embeddings_2.T print(similarity) # for s2p(short query to long passage) retrieval task, suggest to use encode_queries() which will automatically add the instruction to each query # corpus in retrieval task can still use encode() or encode_corpus(), since they don't need instruction queries = ['query_1', 'query_2'] passages = ["样例文档-1", "样例文档-2"] q_embeddings = model.encode_queries(queries) p_embeddings = model.encode(passages) scores = q_embeddings @ p_embeddings.T ``` For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list). By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs. You also can set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable. #### Using Sentence-Transformers You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net): ``` pip install -U sentence-transformers ``` ```python from sentence_transformers import SentenceTransformer sentences_1 = ["样例数据-1", "样例数据-2"] sentences_2 = ["样例数据-3", "样例数据-4"] model = SentenceTransformer('BAAI/bge-large-zh-v1.5') embeddings_1 = model.encode(sentences_1, normalize_embeddings=True) embeddings_2 = model.encode(sentences_2, normalize_embeddings=True) similarity = embeddings_1 @ embeddings_2.T print(similarity) ``` For s2p(short query to long passage) retrieval task, each short query should start with an instruction (instructions see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list)). But the instruction is not needed for passages. ```python from sentence_transformers import SentenceTransformer queries = ['query_1', 'query_2'] passages = ["样例文档-1", "样例文档-2"] instruction = "为这个句子生成表示以用于检索相关文章:" model = SentenceTransformer('BAAI/bge-large-zh-v1.5') q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True) p_embeddings = model.encode(passages, normalize_embeddings=True) scores = q_embeddings @ p_embeddings.T ``` #### Using Langchain You can use `bge` in langchain like this: ```python from langchain.embeddings import HuggingFaceBgeEmbeddings model_name = "BAAI/bge-large-en-v1.5" model_kwargs = {'device': 'cuda'} encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity model = HuggingFaceBgeEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs, query_instruction="为这个句子生成表示以用于检索相关文章:" ) model.query_instruction = "为这个句子生成表示以用于检索相关文章:" ``` #### Using HuggingFace Transformers With the transformers package, you can use the model like this: First, you pass your input through the transformer model, then you select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding. ```python from transformers import AutoTokenizer, AutoModel import torch # Sentences we want sentence embeddings for sentences = ["样例数据-1", "样例数据-2"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5') model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5') model.eval() # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages) # encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = model_output[0][:, 0] # normalize embeddings sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:", sentence_embeddings) ``` #### Usage of the ONNX files ```python from optimum.onnxruntime import ORTModelForFeatureExtraction # type: ignore import torch from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-en-v1.5') model = AutoModel.from_pretrained('BAAI/bge-large-en-v1.5', revision="refs/pr/13") model_ort = ORTModelForFeatureExtraction.from_pretrained('BAAI/bge-large-en-v1.5', revision="refs/pr/13",file_name="onnx/model.onnx") # Sentences we want sentence embeddings for sentences = ["样例数据-1", "样例数据-2"] # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages) # encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt') model_output_ort = model_ort(**encoded_input) # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # model_output and model_output_ort are identical ``` Its also possible to deploy the onnx files with the [infinity_emb](https://github.com/michaelfeil/infinity) pip package. ```python import asyncio from infinity_emb import AsyncEmbeddingEngine, EngineArgs sentences = ["Embed this is sentence via Infinity.", "Paris is in France."] engine = AsyncEmbeddingEngine.from_args( EngineArgs(model_name_or_path = "BAAI/bge-large-en-v1.5", device="cpu", engine="optimum" # or engine="torch" )) async def main(): async with engine: embeddings, usage = await engine.embed(sentences=sentences) asyncio.run(main()) ``` ### Usage for Reranker Different from embedding model, reranker uses question and document as input and directly output similarity instead of embedding. You can get a relevance score by inputting query and passage to the reranker. The reranker is optimized based cross-entropy loss, so the relevance score is not bounded to a specific range. #### Using FlagEmbedding ``` pip install -U FlagEmbedding ``` Get relevance scores (higher scores indicate more relevance): ```python from FlagEmbedding import FlagReranker reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation score = reranker.compute_score(['query', 'passage']) print(score) scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]) print(scores) ``` #### Using Huggingface transformers ```python import torch from transformers import AutoModelForSequenceClassification, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large') model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large') model.eval() pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']] with torch.no_grad(): inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512) scores = model(**inputs, return_dict=True).logits.view(-1, ).float() print(scores) ``` ## Evaluation `baai-general-embedding` models achieve **state-of-the-art performance on both MTEB and C-MTEB leaderboard!** For more details and evaluation tools see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md). - **MTEB**: | Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) | |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 | | [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 | | [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 | | [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 | | [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 | | [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 | | [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 | | [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 | | [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 | | [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 | | [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 | | [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 | | [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 | | [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 | | [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 | - **C-MTEB**: We create the benchmark C-MTEB for Chinese text embedding which consists of 31 datasets from 6 tasks. Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction. | Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering | |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 | | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 | | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 | | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 | | [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 | | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 | | [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 | | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 | | [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 | | [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 | | [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 | | [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 | | [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 | | [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 | | [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 | | [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 | - **Reranking**: See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script. | Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg | |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 | | multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 | | multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 | | multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 | | m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 | | m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 | | bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 | | bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 | | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 | | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 | \* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks ## Train ### BAAI Embedding We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and train them on large-scale pairs data using contrastive learning. **You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).** We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain). Note that the goal of pre-training is to reconstruct the text, and the pre-trained model cannot be used for similarity calculation directly, it needs to be fine-tuned. More training details for bge see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md). ### BGE Reranker Cross-encoder will perform full-attention over the input pair, which is more accurate than embedding model (i.e., bi-encoder) but more time-consuming than embedding model. Therefore, it can be used to re-rank the top-k documents returned by embedding model. We train the cross-encoder on a multilingual pair data, The data format is the same as embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker). More details please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker) ## Contact If you have any question or suggestion related to this project, feel free to open an issue or pull request. You also can email Shitao Xiao([email protected]) and Zheng Liu([email protected]). ## Citation If you find this repository useful, please consider giving a star :star: and citation ``` @misc{bge_embedding, title={C-Pack: Packaged Resources To Advance General Chinese Embedding}, author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff}, year={2023}, eprint={2309.07597}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## License FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
[ "SEMANTIC_SIMILARITY", "SUMMARIZATION" ]
[ "BEAR", "BIOSSES", "SCIFACT" ]
Non_BioNLP
lcampillos/roberta-es-clinical-trials-ner
lcampillos
token-classification
[ "transformers", "pytorch", "safetensors", "roberta", "token-classification", "generated_from_trainer", "es", "arxiv:1910.09700", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,656
1,679
58
9
--- language: - es license: cc-by-nc-4.0 metrics: - precision - recall - f1 - accuracy tags: - generated_from_trainer widget: - text: El ensayo clínico con vacunas promete buenos resultados para la infección por SARS-CoV-2. - text: El paciente toma aspirina para el dolor de cabeza y porque la garganta también le duele mucho. - text: El mejor tratamiento actual contra la COVID es la vacunación. model-index: - name: roberta-es-clinical-trials-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-es-clinical-trials-ner This medical named entity recognition model detects 4 types of semantic groups from the Unified Medical Language System (UMLS) (Bodenreider 2004): - ANAT: body parts and anatomy (e.g. *garganta*, 'throat') - CHEM: chemical entities and pharmacological substances (e.g. *aspirina*,'aspirin') - DISO: pathologic conditions (e.g. *dolor*, 'pain') - PROC: diagnostic and therapeutic procedures, laboratory analyses and medical research activities (e.g. *cirugía*, 'surgery') The model achieves the following results on the evaluation set: - Loss: 0.1580 - Precision: 0.8495 - Recall: 0.8806 - F1: 0.8647 - Accuracy: 0.9583 ## Model description This model adapts the pre-trained model [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es), presented in [Pio Carriño et al. (2022)](https://aclanthology.org/2022.bionlp-1.19/). It is fine-tuned to conduct medical named entity recognition on Spanish texts about clinical trials. The model is fine-tuned on the [CT-EBM-SP corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z). ## Intended uses & limitations **Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision* This model is intended for a generalist purpose, and may have bias and/or any other undesirable distortions. Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence. The owner or creator of the models (CSIC – Consejo Superior de Investigaciones Científicas) will in no event be liable for any results arising from the use made by third parties of these models. **Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas* La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables. Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial. El propietario o creador de los modelos (CSIC – Consejo Superior de Investigaciones Científicas) de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos. ## Training and evaluation data The data used for fine-tuning is the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/). It is a collection of 1200 texts about clinical trials studies and clinical trials announcements: - 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO) - 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos If you use this resource, please, cite as follows: ``` @article{campillosetal-midm2021,         title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},         author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},         journal = {BMC Medical Informatics and Decision Making},         volume={21}, number={1}, pages={1--19}, year={2021}, publisher={BioMed Central} } ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0771 | 1.0 | 785 | 0.1274 | 0.8449 | 0.8797 | 0.8619 | 0.9608 | | 0.0415 | 2.0 | 1570 | 0.1356 | 0.8569 | 0.8856 | 0.8710 | 0.9528 | | 0.0262 | 3.0 | 2355 | 0.1562 | 0.8619 | 0.8798 | 0.8707 | 0.9526 | | 0.0186 | 4.0 | 3140 | 0.1582 | 0.8609 | 0.8846 | 0.8726 | 0.9527 | **Results per class (test set)** | Class | Precision | Recall | F1 | Support | |:-----:|:---------:|:------:|:------:|:--------:| | ANAT | 0.7069 | 0.6518 | 0.6783 | 359 | | CHEM | 0.9162 | 0.9228 | 0.9195 | 2929 | | DISO | 0.8805 | 0.8918 | 0.8861 | 3042 | | PROC | 0.8198 | 0.8720 | 0.8450 | 3954 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4 - Tokenizers 0.11.6 ## Environmental Impact Carbon emissions are estimated with the [Machine Learning Impact calculator](https://mlco2.github.io/impact/#compute) by [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The carbon impact is estimated by specifying the hardware, runtime, cloud provider, and compute region. - Hardware Type: 1 GPU 24 GB RTX 3090 - Time used: 4' (0.07 hours) - Compute Region: Spain, Europe - Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid): 0.01 kg eq. CO2 (Carbon offset: 0) ## Funding This model was created with the annotated dataset from the [NLPMedTerm project](http://www.lllf.uam.es/ESP/nlpmedterm_en.html), funded by InterTalentum UAM, Marie Skłodowska-Curie COFUND grant (2019-2021) (H2020 program, contract number 713366) and by the Computational Linguistics Chair from the Knowledge Engineering Institute (IIC-UAM). We thank the [Computational Linguistics Laboratory (LLI)](http://www.lllf.uam.es) at the Autonomous Universidad of Madrid (Universidad Autónoma de Madrid) for the computational facilities we used to fine-tune the model. # License Attribution-NonCommercial 4.0 International (CC BY 4.0)
[ "NAMED_ENTITY_RECOGNITION" ]
[ "CT-EBM-SP", "SCIELO" ]
BioNLP
tomaarsen/mpnet-base-natural-questions-mnrl
tomaarsen
sentence-similarity
[ "sentence-transformers", "safetensors", "mpnet", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:100231", "loss:MultipleNegativesRankingLoss", "en", "dataset:sentence-transformers/natural-questions", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:microsoft/mpnet-base", "base_model:finetune:microsoft/mpnet-base", "license:apache-2.0", "model-index", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,719
1,720
6
0
--- base_model: microsoft/mpnet-base datasets: - sentence-transformers/natural-questions language: - en library_name: sentence-transformers license: apache-2.0 metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 - dot_accuracy@1 - dot_accuracy@3 - dot_accuracy@5 - dot_accuracy@10 - dot_precision@1 - dot_precision@3 - dot_precision@5 - dot_precision@10 - dot_recall@1 - dot_recall@3 - dot_recall@5 - dot_recall@10 - dot_ndcg@10 - dot_mrr@10 - dot_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:100231 - loss:MultipleNegativesRankingLoss widget: - source_sentence: when did the british leave new york city sentences: - Golden State Warriors The Golden State Warriors are an American professional basketball team based in Oakland, California. The Warriors compete in the National Basketball Association (NBA) as a member of the league's Western Conference Pacific Division. The Warriors play their home games at the Oracle Arena in Oakland. The Warriors have reached nine NBA Finals, winning five NBA championships in 1947,[b] 1956, 1975, 2015 and 2017. Golden State's five NBA championships are tied for fourth-most in NBA history with the San Antonio Spurs, and behind only the Boston Celtics (17), Los Angeles Lakers (16) and Chicago Bulls (6). As of 2017, the Warriors are the third most valuable NBA franchise according to Forbes, with an estimated value of $2.6 billion.[6] - Evacuation Day (New York) Evacuation Day on November 25 marks the day in 1783 when British troops departed from New York City on Manhattan Island, after the end of the American Revolutionary War. After this British Army evacuation, General George Washington triumphantly led the Continental Army from his former headquarters, north of the city, across the Harlem River south down Manhattan through the town to The Battery at the foot of Broadway.[1] - Biochemical oxygen demand BOD can be used as a gauge of the effectiveness of wastewater treatment plants. It is listed as a conventional pollutant in the U.S. Clean Water Act.[2] - source_sentence: what is the newest generation of the ipad sentences: - Alex Karev Alex is fired by Dr. Lebackes when Maggie Pierce accidentally reveals to him that Karev was thinking about leaving the job. Webber recommended Bailey to fill Yang's board seat after she left, so Bailey and Alex fight over the chair. They both make presentations to the board and eventually Bailey wins, with a unanimous vote in her favor. He is hired back as an attending Peds surgeon and takes over full-time as Arizona pursues a fellowship with Dr. Herman. Alex continues to date Jo and his friendship with Meredith grows stronger than ever, with him taking on the role of her new person. When Derek dies and Meredith runs away, Alex is upset by her leaving without telling him where she went and calls her everyday. Eventually she calls him, tells him she is okay, and to stop calling. When she goes into labor and gives birth to Ellis Shepherd, Alex goes to see her since he is her emergency contact. He brings Meredith and her kids back to her house. She asks to move back in with him in her old house. Alex sells Meredith back the house and he and Jo rent a loft. - List of presidents of the United States by age The median age upon accession to the presidency is 55 years and 3 months. This is how old Lyndon B. Johnson was at the time of his inauguration. The youngest person to assume the office was Theodore Roosevelt, who became president at the age of 42 years, 322 days, following William McKinley's assassination; the oldest was Donald Trump, who was 70 years, 220 days old at his inauguration. The youngest person to be elected president was John F. Kennedy, at 43 years, 163 days of age on election day; the oldest was Ronald Reagan, who was 73 years, 274 days old at the time of his election to a second term. - iPad (2018) The iPad (officially sixth-generation iPad) is a 9.7-inch (25cm) tablet computer designed, developed, and marketed by Apple Inc. It was announced on March 27, 2018 during an education-focused event in Chicago and it is a revision of the 2017 model, upgraded with the Apple A10 Fusion SoC and support for styluses such as Apple Pencil.[2] The iPad is marketed towards educators and schools. - source_sentence: what is the average speed of passenger airplane sentences: - Fixed exchange-rate system In the 21st century, the currencies associated with large economies typically do not fix or peg exchange rates to other currencies. The last large economy to use a fixed exchange rate system was the People's Republic of China which, in July 2005, adopted a slightly more flexible exchange rate system called a managed exchange rate.[2] The European Exchange Rate Mechanism is also used on a temporary basis to establish a final conversion rate against the Euro (€) from the local currencies of countries joining the Eurozone. - Tenth Doctor The Tenth Doctor is an incarnation of the Doctor, the protagonist of the BBC science fiction television programme Doctor Who, who is played by David Tennant in three series as well as nine specials. As with previous incarnations of the Doctor, the character has also appeared in other Doctor Who spin-offs. In the programme's narrative, the Doctor is a centuries-old Time Lord alien from the planet Gallifrey who travels in time in his TARDIS, frequently with companions. When the Doctor is critically injured beyond medical repair, he can regenerate his body; in doing so, his physical appearance and personality change, and a new actor assumes the role. Tennant's portrayal of the Doctor is of an outwardly charismatic and charming adventurer whose likable and easygoing attitude can quickly turn to righteous fury when provoked. - Cruise (aeronautics) The typical cruising airspeed for a long-distance commercial passenger aircraft is approximately 475–500 knots (878–926 km/h; 546–575 mph). - source_sentence: when is cars three going to be released sentences: - Benedict's reagent The color of the obtained precipitate gives an idea about the quantity of sugar present in the solution, hence the test is semi-quantitative. A greenish precipitate indicates about 0.5 g% concentration; yellow precipitate indicates 1 g% concentration; orange indicates 1.5 g% and red indicates 2 g% or higher concentration. - Cars 3 The film was released on June 16, 2017, has grossed over $362 million worldwide and received generally positive reviews, with many critics considering it an improvement over its predecessor, as well as praising its emotional story and animation.[7] - Sleeping Beauty At the christening of a king and queen's long-wished-for child, seven good fairies are invited to be godmothers to the infant princess. The fairies attend the banquet at the palace. Each fairy is presented with a golden plate and drinking cups adorned with jewels. Soon after, an old fairy enters the palace and is seated with a plate of fine china and a crystal drinking glass. This old fairy is overlooked because she has been within a tower for many years and everyone had believed her to be deceased. Six of the other seven fairies then offer their gifts of beauty, wit, grace, dance, song, and goodness to the infant princess. The evil fairy is very angry about having been forgotten, and as her gift, enchants the infant princess so that she will one day prick her finger on a spindle of a spinning wheel and die. The seventh fairy, who hasn't yet given her gift, attempts to reverse the evil fairy's curse. However, she can only do so partially. Instead of dying, the Princess will fall into a deep sleep for 100 years and be awakened by a kiss from a king's son. - source_sentence: who was ancient china's main enemy that lived to the north sentences: - Betty Lynn Elizabeth Ann Theresa "Betty" Lynn[1] (born August 29, 1926) is a former American actress. She is best known for her role as Thelma Lou, Deputy Barney Fife's girlfriend, on The Andy Griffith Show. - Sampath Bank Sampath Bank PLC is a licensed commercial bank incorporated in Sri Lanka in 1986 with 229 branches and 373 ATMs island wide. It has won the "Bank of the Year" award by "The Banker" of Financial Times Limited – London, for the second consecutive year and the "National Business Excellence Awards 2010".[citation needed] It has become the third largest private sector bank in Sri Lanka with Rs. 453 billion in deposits as of 30 June 2016.[1] - 'Sui dynasty The Sui Dynasty (Chinese: 隋朝; pinyin: Suí cháo) was a short-lived imperial dynasty of China of pivotal significance. The Sui unified the Northern and Southern dynasties and reinstalled the rule of ethnic Han Chinese in the entirety of China proper, along with sinicization of former nomadic ethnic minorities (the Five Barbarians) within its territory. It was succeeded by the Tang dynasty, which largely inherited its foundation.' co2_eq_emissions: emissions: 176.27316538970194 energy_consumed: 0.45349178905614573 source: codecarbon training_type: fine-tuning on_cloud: false cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K ram_total_size: 31.777088165283203 hours_used: 1.208 hardware_used: 1 x NVIDIA GeForce RTX 3090 model-index: - name: MPNet base trained on Natural Questions pairs results: - task: type: information-retrieval name: Information Retrieval dataset: name: natural questions dev type: natural-questions-dev metrics: - type: cosine_accuracy@1 value: 0.5898739126185124 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.8181018473267521 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8875965203792395 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9434072915648519 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.5898739126185124 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.272700615775584 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.17751930407584793 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.0943407291564852 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.5898739126185124 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.8181018473267521 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.8875965203792395 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9434072915648519 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.7718587241150588 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.7161706640105541 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7187822446150476 name: Cosine Map@100 - type: dot_accuracy@1 value: 0.5610399765418825 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.7970872837454794 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.866386472485583 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.9327533965399277 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.5610399765418825 name: Dot Precision@1 - type: dot_precision@3 value: 0.2656957612484931 name: Dot Precision@3 - type: dot_precision@5 value: 0.17327729449711662 name: Dot Precision@5 - type: dot_precision@10 value: 0.09327533965399278 name: Dot Precision@10 - type: dot_recall@1 value: 0.5610399765418825 name: Dot Recall@1 - type: dot_recall@3 value: 0.7970872837454794 name: Dot Recall@3 - type: dot_recall@5 value: 0.866386472485583 name: Dot Recall@5 - type: dot_recall@10 value: 0.9327533965399277 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.7508293948042275 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.6920021317098743 name: Dot Mrr@10 - type: dot_map@100 value: 0.6951493585515177 name: Dot Map@100 --- # MPNet base trained on Natural Questions pairs This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) <!-- at revision 6996ce1e91bd2a9c7d7f61daec37463394f73f09 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("tomaarsen/mpnet-base-natural-questions-mnrl") # Run inference sentences = [ "who was ancient china's main enemy that lived to the north", 'Sui dynasty The Sui Dynasty (Chinese: 隋朝; pinyin: Suí cháo) was a short-lived imperial dynasty of China of pivotal significance. The Sui unified the Northern and Southern dynasties and reinstalled the rule of ethnic Han Chinese in the entirety of China proper, along with sinicization of former nomadic ethnic minorities (the Five Barbarians) within its territory. It was succeeded by the Tang dynasty, which largely inherited its foundation.', 'Sampath Bank Sampath Bank PLC is a licensed commercial bank incorporated in Sri Lanka in 1986 with 229 branches and 373 ATMs island wide. It has won the "Bank of the Year" award by "The Banker" of Financial Times Limited – London, for the second consecutive year and the "National Business Excellence Awards 2010".[citation needed] It has become the third largest private sector bank in Sri Lanka with Rs. 453 billion in deposits as of 30 June 2016.[1]', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `natural-questions-dev` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.5899 | | cosine_accuracy@3 | 0.8181 | | cosine_accuracy@5 | 0.8876 | | cosine_accuracy@10 | 0.9434 | | cosine_precision@1 | 0.5899 | | cosine_precision@3 | 0.2727 | | cosine_precision@5 | 0.1775 | | cosine_precision@10 | 0.0943 | | cosine_recall@1 | 0.5899 | | cosine_recall@3 | 0.8181 | | cosine_recall@5 | 0.8876 | | cosine_recall@10 | 0.9434 | | cosine_ndcg@10 | 0.7719 | | cosine_mrr@10 | 0.7162 | | **cosine_map@100** | **0.7188** | | dot_accuracy@1 | 0.561 | | dot_accuracy@3 | 0.7971 | | dot_accuracy@5 | 0.8664 | | dot_accuracy@10 | 0.9328 | | dot_precision@1 | 0.561 | | dot_precision@3 | 0.2657 | | dot_precision@5 | 0.1733 | | dot_precision@10 | 0.0933 | | dot_recall@1 | 0.561 | | dot_recall@3 | 0.7971 | | dot_recall@5 | 0.8664 | | dot_recall@10 | 0.9328 | | dot_ndcg@10 | 0.7508 | | dot_mrr@10 | 0.692 | | dot_map@100 | 0.6951 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### natural-questions * Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17) * Size: 100,231 training samples * Columns: <code>query</code> and <code>answer</code> * Approximate statistics based on the first 1000 samples: | | query | answer | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 11.74 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 135.66 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | query | answer | |:----------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>when did richmond last play in a preliminary final</code> | <code>Richmond Football Club Richmond began 2017 with 5 straight wins, a feat it had not achieved since 1995. A series of close losses hampered the Tigers throughout the middle of the season, including a 5-point loss to the Western Bulldogs, 2-point loss to Fremantle, and a 3-point loss to the Giants. Richmond ended the season strongly with convincing victories over Fremantle and St Kilda in the final two rounds, elevating the club to 3rd on the ladder. Richmond's first final of the season against the Cats at the MCG attracted a record qualifying final crowd of 95,028; the Tigers won by 51 points. Having advanced to the first preliminary finals for the first time since 2001, Richmond defeated Greater Western Sydney by 36 points in front of a crowd of 94,258 to progress to the Grand Final against Adelaide, their first Grand Final appearance since 1982. The attendance was 100,021, the largest crowd to a grand final since 1986. The Crows led at quarter time and led by as many as 13, but the Tigers took over the game as it progressed and scored seven straight goals at one point. They eventually would win by 48 points – 16.12 (108) to Adelaide's 8.12 (60) – to end their 37-year flag drought.[22] Dustin Martin also became the first player to win a Premiership medal, the Brownlow Medal and the Norm Smith Medal in the same season, while Damien Hardwick was named AFL Coaches Association Coach of the Year. Richmond's jump from 13th to premiers also marked the biggest jump from one AFL season to the next.</code> | | <code>who sang what in the world's come over you</code> | <code>Jack Scott (singer) At the beginning of 1960, Scott again changed record labels, this time to Top Rank Records.[1] He then recorded four Billboard Hot 100 hits – "What in the World's Come Over You" (#5), "Burning Bridges" (#3) b/w "Oh Little One" (#34), and "It Only Happened Yesterday" (#38).[1] "What in the World's Come Over You" was Scott's second gold disc winner.[6] Scott continued to record and perform during the 1960s and 1970s.[1] His song "You're Just Gettin' Better" reached the country charts in 1974.[1] In May 1977, Scott recorded a Peel session for BBC Radio 1 disc jockey, John Peel.</code> | | <code>who produces the most wool in the world</code> | <code>Wool Global wool production is about 2 million tonnes per year, of which 60% goes into apparel. Wool comprises ca 3% of the global textile market, but its value is higher owing to dying and other modifications of the material.[1] Australia is a leading producer of wool which is mostly from Merino sheep but has been eclipsed by China in terms of total weight.[30] New Zealand (2016) is the third-largest producer of wool, and the largest producer of crossbred wool. Breeds such as Lincoln, Romney, Drysdale, and Elliotdale produce coarser fibers, and wool from these sheep is usually used for making carpets.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Evaluation Dataset #### natural-questions * Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17) * Size: 100,231 evaluation samples * Columns: <code>query</code> and <code>answer</code> * Approximate statistics based on the first 1000 samples: | | query | answer | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 11.79 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 142.78 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | query | answer | |:--------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>who betrayed siraj ud daula in the battle of plassey in 1757</code> | <code>Siraj ud-Daulah The Battle of Plassey (or Palashi) is widely considered the turning point in the history of the subcontinent, and opened the way to eventual British domination. After Siraj-ud-Daulah's conquest of Calcutta, the British sent fresh troops from Madras to recapture the fort and avenge the attack. A retreating Siraj-ud-Daulah met the British at Plassey. He had to make camp 27 miles away from Murshidabad. On 23 June 1757 Siraj-ud-Daulah called on Mir Jafar because he was saddened by the sudden fall of Mir Mardan who was a very dear companion of Siraj in battles. The Nawab asked for help from Mir Jafar. Mir Jafar advised Siraj to retreat for that day. The Nawab made the blunder in giving the order to stop the fight. Following his command, the soldiers of the Nawab were returning to their camps. At that time, Robert Clive attacked the soldiers with his army. At such a sudden attack, the army of Siraj became indisciplined and could think of no way to fight. So all fled away in such a situation. Betrayed by a conspiracy plotted by Jagat Seth, Mir Jafar, Krishna Chandra, Omichund etc., he lost the battle and had to escape. He went first to Murshidabad and then to Patna by boat, but was eventually arrested by Mir Jafar's soldiers.</code> | | <code>what is the meaning of single malt whisky</code> | <code>Single malt whisky Single malt whisky is malt whisky from a single distillery, that is, whisky distilled from fermented mash made exclusively with malted grain (usually barley), as distinguished from unmalted grain.</code> | | <code>when is despicable me 3 going to release</code> | <code>Despicable Me 3 Despicable Me 3 premiered on June 14, 2017, at the Annecy International Animated Film Festival, and was released in the United States on June 30, 2017, by Universal Pictures in 3D, RealD 3D, Dolby Cinema, and IMAX 3D. The film received mixed reviews from critics[7] and has grossed over $1 billion worldwide, making it the third highest-grossing film of 2017, the fifth highest-grossing animated film of all time and the 28th highest-grossing overall. It is Illumination's second film to gross over $1 billion, after Minions in 2015, becoming the first ever animated franchise to do so.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `learning_rate`: 2e-05 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `bf16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | natural-questions-dev_cosine_map@100 | |:------:|:----:|:-------------:|:------:|:------------------------------------:| | 0 | 0 | - | - | 0.1228 | | 0.0004 | 1 | 3.0833 | - | - | | 0.0355 | 100 | 1.3516 | 0.1545 | 0.5151 | | 0.0711 | 200 | 0.1189 | 0.0607 | 0.6299 | | 0.1066 | 300 | 0.0641 | 0.0450 | 0.6535 | | 0.1422 | 400 | 0.0529 | 0.0436 | 0.6532 | | 0.1777 | 500 | 0.0601 | 0.0349 | 0.6716 | | 0.2133 | 600 | 0.0453 | 0.0308 | 0.6771 | | 0.2488 | 700 | 0.0478 | 0.0298 | 0.6769 | | 0.2844 | 800 | 0.0404 | 0.0309 | 0.6834 | | 0.3199 | 900 | 0.0377 | 0.0275 | 0.6855 | | 0.3555 | 1000 | 0.0391 | 0.0248 | 0.6929 | | 0.3910 | 1100 | 0.026 | 0.0265 | 0.6919 | | 0.4266 | 1200 | 0.0343 | 0.0247 | 0.6985 | | 0.4621 | 1300 | 0.0359 | 0.0245 | 0.6951 | | 0.4977 | 1400 | 0.0283 | 0.0213 | 0.6993 | | 0.5332 | 1500 | 0.027 | 0.0207 | 0.7072 | | 0.5688 | 1600 | 0.0313 | 0.0223 | 0.6980 | | 0.6043 | 1700 | 0.0373 | 0.0203 | 0.7042 | | 0.6399 | 1800 | 0.0245 | 0.0199 | 0.7049 | | 0.6754 | 1900 | 0.0294 | 0.0186 | 0.7143 | | 0.7110 | 2000 | 0.0185 | 0.0185 | 0.7116 | | 0.7465 | 2100 | 0.0247 | 0.0181 | 0.7118 | | 0.7821 | 2200 | 0.0221 | 0.0183 | 0.7142 | | 0.8176 | 2300 | 0.0178 | 0.0182 | 0.7141 | | 0.8532 | 2400 | 0.0235 | 0.0170 | 0.7172 | | 0.8887 | 2500 | 0.0279 | 0.0168 | 0.7190 | | 0.9243 | 2600 | 0.0278 | 0.0167 | 0.7188 | | 0.9598 | 2700 | 0.022 | 0.0166 | 0.7179 | | 0.9954 | 2800 | 0.0191 | 0.0166 | 0.7173 | | 1.0 | 2813 | - | - | 0.7188 | ### Environmental Impact Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon). - **Energy Consumed**: 0.453 kWh - **Carbon Emitted**: 0.176 kg of CO2 - **Hours Used**: 1.208 hours ### Training Hardware - **On Cloud**: No - **GPU Model**: 1 x NVIDIA GeForce RTX 3090 - **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K - **RAM Size**: 31.78 GB ### Framework Versions - Python: 3.11.6 - Sentence Transformers: 3.1.0.dev0 - Transformers: 4.41.2 - PyTorch: 2.3.1+cu121 - Accelerate: 0.31.0 - Datasets: 2.20.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION" ]
[ "MEDAL" ]
Non_BioNLP
medspaner/roberta-es-clinical-trials-cases-umls-7sgs
medspaner
token-classification
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,696
1,727
17
0
--- license: cc-by-nc-4.0 metrics: - precision - recall - f1 - accuracy tags: - generated_from_trainer widget: - text: "Criterios de inclusión: 18 a 65 años; necrosis avascular de cadera; sintomática\ \ de menos de 6 meses; capaz de otorgar consentimiento informado.\n Criterios\ \ de exclusión: embarazo, lactancia, mujer fértil sin métodos anticonceptivos\ \ adecuados; tratamiento activo con bifosfonatos; infección por VIH, hepatitis\ \ B o hepatitis C; historia de neoplasia en cualquier organo." - text: 'Recuperación de daño hepático relacionado con nutrición parenteral con ácidos omega-3 en adultos críticos: ensayo clínico aleatorizado.' - text: 'Título público: Análisis del dolor tras inyección intramuscular de penicilina con agujas de mayor calibre y anestésico local, frente a aguja tradicional sin anestésico en pacientes con sífilis' model-index: - name: roberta-es-clinical-trials-cases-umls-7sgs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-es-clinical-trials-cases-umls-7sgs This medical named entity recognition model detects 7 types of semantic groups from the [Unified Medical Language System (UMLS)](https://www.nlm.nih.gov/research/umls/index.html) ([Bodenreider 2004](https://academic.oup.com/nar/article/32/suppl_1/D267/2505235)): - ANAT: body parts and anatomy (e.g. *garganta*, 'throat') - CHEM: chemical entities and pharmacological substances (e.g. *aspirina*,'aspirin') - DEVI: medical devices (e.g. *catéter*, 'catheter') - DISO: pathologic conditions (e.g. *dolor*, 'pain') - LIVB: living beings (e.g. *paciente*, 'patient') - PHYS: physiological processes (e.g. *respiración*, 'breathing') - PROC: diagnostic and therapeutic procedures, laboratory analyses and medical research activities (e.g. *cirugía*, 'surgery') The model achieves the following results on the test set (when trained with the training and development set; results are averaged over 5 evaluation rounds): - Precision: 0.877 (±0.004) - Recall: 0.890 (±0.001) - F1: 0.884 (±0.002) - Accuracy: 0.960 (±0.001) ## Model description This model adapts the pre-trained model [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es), presented in [Pio Carriño et al. (2022)](https://aclanthology.org/2022.bionlp-1.19/). It is fine-tuned to conduct medical named entity recognition on Spanish texts about clinical trials and clinical cases. The model is fine-tuned on the [CT-EBM-ES corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z) and 100 clinical cases with Creative Commons license. If you use this model, please, cite as follows: ``` @article{campillosetal2024,         title = {{Hybrid tool for semantic annotation and concept extraction of medical texts in Spanish}},         author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n},         journal = {BMC Bioinformatics}, year={2024}, publisher={BioMed Central} } ``` ## Intended uses & limitations **Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision* This model is intended for a generalist purpose, and may have bias and/or any other undesirable distortions. Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence. The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models. **Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas* La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables. Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial. El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos. ## Training and evaluation data To fine-tune this model we used the [Clinical Trials for Evidence-Based-Medicine in Spanish (CT-EBM-SP) corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/) and 100 clinical cases with Creative Commons License. The CT-EBM-SP corpus is a collection of 1200 texts about clinical trials studies and clinical trials announcements: - 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO) - 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos If you use the CT-EBM-ES resource, please, cite as follows: ``` @article{campillosetal-midm2021,         title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},         author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},         journal = {BMC Medical Informatics and Decision Making},         volume={21}, number={1}, pages={1--19}, year={2021}, publisher={BioMed Central} } ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: average 15.5 epochs (±4.65); trained with early stopping if no improvement after 5 epochs (early stopping patience: 5) ### Training results (test set; average and standard deviation of 5 rounds with different seeds) | Precision | Recall | F1 | Accuracy | |:--------------:|:--------------:|:--------------:|:--------------:| | 0.877 (±0.004) | 0.890 (±0.001) | 0.884 (±0.002) | 0.960 (±0.001) | **Results per class (test set; average and standard deviation of 5 rounds with different seeds)** | Class | Precision | Recall | F1 | Support | |:----------:|:--------------:|:--------------:|:--------------:|:---------:| | ANAT | 0.702 (±0.024) | 0.727 (±0.040) | 0.713 (±0.011) | 308 | | CHEM | 0.913 (±0.006) | 0.924 (±0.005) | 0.918 (±0.001) | 2932 | | DEVI | 0.656 (±0.026) | 0.773 (±0.034) | 0.709 (±0.016) | 134 | | DISO | 0.893 (±0.008) | 0.894 (±0.006) | 0.893 (±0.004) | 3065 | | LIVB | 0.944 (±0.010) | 0.957 (±0.003) | 0.951 (±0.005) | 1685 | | PHYS | 0.764 (±0.028) | 0.749 (±0.020) | 0.756 (±0.016) | 308 | | PROC | 0.843 (±0.007) | 0.866 (±0.003) | 0.855 (±0.003) | 4154 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4 - Tokenizers 0.11.6
[ "NAMED_ENTITY_RECOGNITION" ]
[ "CT-EBM-SP", "SCIELO" ]
BioNLP
BenevolenceMessiah/nomic-embed-text-v1.5-Q8_0-GGUF
BenevolenceMessiah
sentence-similarity
[ "sentence-transformers", "gguf", "feature-extraction", "sentence-similarity", "mteb", "transformers", "transformers.js", "llama-cpp", "gguf-my-repo", "en", "base_model:nomic-ai/nomic-embed-text-v1.5", "base_model:quantized:nomic-ai/nomic-embed-text-v1.5", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,734
1,734
44
0
--- base_model: nomic-ai/nomic-embed-text-v1.5 language: - en library_name: sentence-transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - feature-extraction - sentence-similarity - mteb - transformers - transformers.js - llama-cpp - gguf-my-repo model-index: - name: epoch_0_model results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 75.20895522388058 - type: ap value: 38.57605549557802 - type: f1 value: 69.35586565857854 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 91.8144 - type: ap value: 88.65222882032363 - type: f1 value: 91.80426301643274 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.162000000000006 - type: f1 value: 46.59329642263158 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 24.253 - type: map_at_10 value: 38.962 - type: map_at_100 value: 40.081 - type: map_at_1000 value: 40.089000000000006 - type: map_at_3 value: 33.499 - type: map_at_5 value: 36.351 - type: mrr_at_1 value: 24.609 - type: mrr_at_10 value: 39.099000000000004 - type: mrr_at_100 value: 40.211000000000006 - type: mrr_at_1000 value: 40.219 - type: mrr_at_3 value: 33.677 - type: mrr_at_5 value: 36.469 - type: ndcg_at_1 value: 24.253 - type: ndcg_at_10 value: 48.010999999999996 - type: ndcg_at_100 value: 52.756 - type: ndcg_at_1000 value: 52.964999999999996 - type: ndcg_at_3 value: 36.564 - type: ndcg_at_5 value: 41.711999999999996 - type: precision_at_1 value: 24.253 - type: precision_at_10 value: 7.738 - type: precision_at_100 value: 0.98 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 15.149000000000001 - type: precision_at_5 value: 11.593 - type: recall_at_1 value: 24.253 - type: recall_at_10 value: 77.383 - type: recall_at_100 value: 98.009 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 45.448 - type: recall_at_5 value: 57.965999999999994 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 45.69069567851087 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 36.35185490976283 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 61.71274951450321 - type: mrr value: 76.06032625423207 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 86.73980520022269 - type: cos_sim_spearman value: 84.24649792685918 - type: euclidean_pearson value: 85.85197641158186 - type: euclidean_spearman value: 84.24649792685918 - type: manhattan_pearson value: 86.26809552711346 - type: manhattan_spearman value: 84.56397504030865 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 84.25324675324674 - type: f1 value: 84.17872280892557 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 38.770253446400886 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 32.94307095497281 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 32.164 - type: map_at_10 value: 42.641 - type: map_at_100 value: 43.947 - type: map_at_1000 value: 44.074999999999996 - type: map_at_3 value: 39.592 - type: map_at_5 value: 41.204 - type: mrr_at_1 value: 39.628 - type: mrr_at_10 value: 48.625 - type: mrr_at_100 value: 49.368 - type: mrr_at_1000 value: 49.413000000000004 - type: mrr_at_3 value: 46.400000000000006 - type: mrr_at_5 value: 47.68 - type: ndcg_at_1 value: 39.628 - type: ndcg_at_10 value: 48.564 - type: ndcg_at_100 value: 53.507000000000005 - type: ndcg_at_1000 value: 55.635999999999996 - type: ndcg_at_3 value: 44.471 - type: ndcg_at_5 value: 46.137 - type: precision_at_1 value: 39.628 - type: precision_at_10 value: 8.856 - type: precision_at_100 value: 1.429 - type: precision_at_1000 value: 0.191 - type: precision_at_3 value: 21.268 - type: precision_at_5 value: 14.649000000000001 - type: recall_at_1 value: 32.164 - type: recall_at_10 value: 59.609 - type: recall_at_100 value: 80.521 - type: recall_at_1000 value: 94.245 - type: recall_at_3 value: 46.521 - type: recall_at_5 value: 52.083999999999996 - type: map_at_1 value: 31.526 - type: map_at_10 value: 41.581 - type: map_at_100 value: 42.815999999999995 - type: map_at_1000 value: 42.936 - type: map_at_3 value: 38.605000000000004 - type: map_at_5 value: 40.351 - type: mrr_at_1 value: 39.489999999999995 - type: mrr_at_10 value: 47.829 - type: mrr_at_100 value: 48.512 - type: mrr_at_1000 value: 48.552 - type: mrr_at_3 value: 45.754 - type: mrr_at_5 value: 46.986 - type: ndcg_at_1 value: 39.489999999999995 - type: ndcg_at_10 value: 47.269 - type: ndcg_at_100 value: 51.564 - type: ndcg_at_1000 value: 53.53099999999999 - type: ndcg_at_3 value: 43.301 - type: ndcg_at_5 value: 45.239000000000004 - type: precision_at_1 value: 39.489999999999995 - type: precision_at_10 value: 8.93 - type: precision_at_100 value: 1.415 - type: precision_at_1000 value: 0.188 - type: precision_at_3 value: 20.892 - type: precision_at_5 value: 14.865999999999998 - type: recall_at_1 value: 31.526 - type: recall_at_10 value: 56.76 - type: recall_at_100 value: 75.029 - type: recall_at_1000 value: 87.491 - type: recall_at_3 value: 44.786 - type: recall_at_5 value: 50.254 - type: map_at_1 value: 40.987 - type: map_at_10 value: 52.827 - type: map_at_100 value: 53.751000000000005 - type: map_at_1000 value: 53.81 - type: map_at_3 value: 49.844 - type: map_at_5 value: 51.473 - type: mrr_at_1 value: 46.833999999999996 - type: mrr_at_10 value: 56.389 - type: mrr_at_100 value: 57.003 - type: mrr_at_1000 value: 57.034 - type: mrr_at_3 value: 54.17999999999999 - type: mrr_at_5 value: 55.486999999999995 - type: ndcg_at_1 value: 46.833999999999996 - type: ndcg_at_10 value: 58.372 - type: ndcg_at_100 value: 62.068 - type: ndcg_at_1000 value: 63.288 - type: ndcg_at_3 value: 53.400000000000006 - type: ndcg_at_5 value: 55.766000000000005 - type: precision_at_1 value: 46.833999999999996 - type: precision_at_10 value: 9.191 - type: precision_at_100 value: 1.192 - type: precision_at_1000 value: 0.134 - type: precision_at_3 value: 23.448 - type: precision_at_5 value: 15.862000000000002 - type: recall_at_1 value: 40.987 - type: recall_at_10 value: 71.146 - type: recall_at_100 value: 87.035 - type: recall_at_1000 value: 95.633 - type: recall_at_3 value: 58.025999999999996 - type: recall_at_5 value: 63.815999999999995 - type: map_at_1 value: 24.587 - type: map_at_10 value: 33.114 - type: map_at_100 value: 34.043 - type: map_at_1000 value: 34.123999999999995 - type: map_at_3 value: 30.45 - type: map_at_5 value: 31.813999999999997 - type: mrr_at_1 value: 26.554 - type: mrr_at_10 value: 35.148 - type: mrr_at_100 value: 35.926 - type: mrr_at_1000 value: 35.991 - type: mrr_at_3 value: 32.599000000000004 - type: mrr_at_5 value: 33.893 - type: ndcg_at_1 value: 26.554 - type: ndcg_at_10 value: 38.132 - type: ndcg_at_100 value: 42.78 - type: ndcg_at_1000 value: 44.919 - type: ndcg_at_3 value: 32.833 - type: ndcg_at_5 value: 35.168 - type: precision_at_1 value: 26.554 - type: precision_at_10 value: 5.921 - type: precision_at_100 value: 0.8659999999999999 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 13.861 - type: precision_at_5 value: 9.605 - type: recall_at_1 value: 24.587 - type: recall_at_10 value: 51.690000000000005 - type: recall_at_100 value: 73.428 - type: recall_at_1000 value: 89.551 - type: recall_at_3 value: 37.336999999999996 - type: recall_at_5 value: 43.047000000000004 - type: map_at_1 value: 16.715 - type: map_at_10 value: 24.251 - type: map_at_100 value: 25.326999999999998 - type: map_at_1000 value: 25.455 - type: map_at_3 value: 21.912000000000003 - type: map_at_5 value: 23.257 - type: mrr_at_1 value: 20.274 - type: mrr_at_10 value: 28.552 - type: mrr_at_100 value: 29.42 - type: mrr_at_1000 value: 29.497 - type: mrr_at_3 value: 26.14 - type: mrr_at_5 value: 27.502 - type: ndcg_at_1 value: 20.274 - type: ndcg_at_10 value: 29.088 - type: ndcg_at_100 value: 34.293 - type: ndcg_at_1000 value: 37.271 - type: ndcg_at_3 value: 24.708 - type: ndcg_at_5 value: 26.809 - type: precision_at_1 value: 20.274 - type: precision_at_10 value: 5.361 - type: precision_at_100 value: 0.915 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 11.733 - type: precision_at_5 value: 8.556999999999999 - type: recall_at_1 value: 16.715 - type: recall_at_10 value: 39.587 - type: recall_at_100 value: 62.336000000000006 - type: recall_at_1000 value: 83.453 - type: recall_at_3 value: 27.839999999999996 - type: recall_at_5 value: 32.952999999999996 - type: map_at_1 value: 28.793000000000003 - type: map_at_10 value: 38.582 - type: map_at_100 value: 39.881 - type: map_at_1000 value: 39.987 - type: map_at_3 value: 35.851 - type: map_at_5 value: 37.289 - type: mrr_at_1 value: 34.455999999999996 - type: mrr_at_10 value: 43.909 - type: mrr_at_100 value: 44.74 - type: mrr_at_1000 value: 44.786 - type: mrr_at_3 value: 41.659 - type: mrr_at_5 value: 43.010999999999996 - type: ndcg_at_1 value: 34.455999999999996 - type: ndcg_at_10 value: 44.266 - type: ndcg_at_100 value: 49.639 - type: ndcg_at_1000 value: 51.644 - type: ndcg_at_3 value: 39.865 - type: ndcg_at_5 value: 41.887 - type: precision_at_1 value: 34.455999999999996 - type: precision_at_10 value: 7.843999999999999 - type: precision_at_100 value: 1.243 - type: precision_at_1000 value: 0.158 - type: precision_at_3 value: 18.831999999999997 - type: precision_at_5 value: 13.147 - type: recall_at_1 value: 28.793000000000003 - type: recall_at_10 value: 55.68300000000001 - type: recall_at_100 value: 77.99000000000001 - type: recall_at_1000 value: 91.183 - type: recall_at_3 value: 43.293 - type: recall_at_5 value: 48.618 - type: map_at_1 value: 25.907000000000004 - type: map_at_10 value: 35.519 - type: map_at_100 value: 36.806 - type: map_at_1000 value: 36.912 - type: map_at_3 value: 32.748 - type: map_at_5 value: 34.232 - type: mrr_at_1 value: 31.621 - type: mrr_at_10 value: 40.687 - type: mrr_at_100 value: 41.583 - type: mrr_at_1000 value: 41.638999999999996 - type: mrr_at_3 value: 38.527 - type: mrr_at_5 value: 39.612 - type: ndcg_at_1 value: 31.621 - type: ndcg_at_10 value: 41.003 - type: ndcg_at_100 value: 46.617999999999995 - type: ndcg_at_1000 value: 48.82 - type: ndcg_at_3 value: 36.542 - type: ndcg_at_5 value: 38.368 - type: precision_at_1 value: 31.621 - type: precision_at_10 value: 7.396999999999999 - type: precision_at_100 value: 1.191 - type: precision_at_1000 value: 0.153 - type: precision_at_3 value: 17.39 - type: precision_at_5 value: 12.1 - type: recall_at_1 value: 25.907000000000004 - type: recall_at_10 value: 52.115 - type: recall_at_100 value: 76.238 - type: recall_at_1000 value: 91.218 - type: recall_at_3 value: 39.417 - type: recall_at_5 value: 44.435 - type: map_at_1 value: 25.732166666666668 - type: map_at_10 value: 34.51616666666667 - type: map_at_100 value: 35.67241666666666 - type: map_at_1000 value: 35.78675 - type: map_at_3 value: 31.953416666666662 - type: map_at_5 value: 33.333 - type: mrr_at_1 value: 30.300166666666673 - type: mrr_at_10 value: 38.6255 - type: mrr_at_100 value: 39.46183333333334 - type: mrr_at_1000 value: 39.519999999999996 - type: mrr_at_3 value: 36.41299999999999 - type: mrr_at_5 value: 37.6365 - type: ndcg_at_1 value: 30.300166666666673 - type: ndcg_at_10 value: 39.61466666666667 - type: ndcg_at_100 value: 44.60808333333334 - type: ndcg_at_1000 value: 46.91708333333334 - type: ndcg_at_3 value: 35.26558333333333 - type: ndcg_at_5 value: 37.220000000000006 - type: precision_at_1 value: 30.300166666666673 - type: precision_at_10 value: 6.837416666666667 - type: precision_at_100 value: 1.10425 - type: precision_at_1000 value: 0.14875 - type: precision_at_3 value: 16.13716666666667 - type: precision_at_5 value: 11.2815 - type: recall_at_1 value: 25.732166666666668 - type: recall_at_10 value: 50.578916666666665 - type: recall_at_100 value: 72.42183333333334 - type: recall_at_1000 value: 88.48766666666667 - type: recall_at_3 value: 38.41325 - type: recall_at_5 value: 43.515750000000004 - type: map_at_1 value: 23.951 - type: map_at_10 value: 30.974 - type: map_at_100 value: 31.804 - type: map_at_1000 value: 31.900000000000002 - type: map_at_3 value: 28.762 - type: map_at_5 value: 29.94 - type: mrr_at_1 value: 26.534000000000002 - type: mrr_at_10 value: 33.553 - type: mrr_at_100 value: 34.297 - type: mrr_at_1000 value: 34.36 - type: mrr_at_3 value: 31.391000000000002 - type: mrr_at_5 value: 32.525999999999996 - type: ndcg_at_1 value: 26.534000000000002 - type: ndcg_at_10 value: 35.112 - type: ndcg_at_100 value: 39.28 - type: ndcg_at_1000 value: 41.723 - type: ndcg_at_3 value: 30.902 - type: ndcg_at_5 value: 32.759 - type: precision_at_1 value: 26.534000000000002 - type: precision_at_10 value: 5.445 - type: precision_at_100 value: 0.819 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 12.986 - type: precision_at_5 value: 9.049 - type: recall_at_1 value: 23.951 - type: recall_at_10 value: 45.24 - type: recall_at_100 value: 64.12299999999999 - type: recall_at_1000 value: 82.28999999999999 - type: recall_at_3 value: 33.806000000000004 - type: recall_at_5 value: 38.277 - type: map_at_1 value: 16.829 - type: map_at_10 value: 23.684 - type: map_at_100 value: 24.683 - type: map_at_1000 value: 24.81 - type: map_at_3 value: 21.554000000000002 - type: map_at_5 value: 22.768 - type: mrr_at_1 value: 20.096 - type: mrr_at_10 value: 27.230999999999998 - type: mrr_at_100 value: 28.083999999999996 - type: mrr_at_1000 value: 28.166000000000004 - type: mrr_at_3 value: 25.212 - type: mrr_at_5 value: 26.32 - type: ndcg_at_1 value: 20.096 - type: ndcg_at_10 value: 27.989000000000004 - type: ndcg_at_100 value: 32.847 - type: ndcg_at_1000 value: 35.896 - type: ndcg_at_3 value: 24.116 - type: ndcg_at_5 value: 25.964 - type: precision_at_1 value: 20.096 - type: precision_at_10 value: 5 - type: precision_at_100 value: 0.8750000000000001 - type: precision_at_1000 value: 0.131 - type: precision_at_3 value: 11.207 - type: precision_at_5 value: 8.08 - type: recall_at_1 value: 16.829 - type: recall_at_10 value: 37.407000000000004 - type: recall_at_100 value: 59.101000000000006 - type: recall_at_1000 value: 81.024 - type: recall_at_3 value: 26.739 - type: recall_at_5 value: 31.524 - type: map_at_1 value: 24.138 - type: map_at_10 value: 32.275999999999996 - type: map_at_100 value: 33.416000000000004 - type: map_at_1000 value: 33.527 - type: map_at_3 value: 29.854000000000003 - type: map_at_5 value: 31.096 - type: mrr_at_1 value: 28.450999999999997 - type: mrr_at_10 value: 36.214 - type: mrr_at_100 value: 37.134 - type: mrr_at_1000 value: 37.198 - type: mrr_at_3 value: 34.001999999999995 - type: mrr_at_5 value: 35.187000000000005 - type: ndcg_at_1 value: 28.450999999999997 - type: ndcg_at_10 value: 37.166 - type: ndcg_at_100 value: 42.454 - type: ndcg_at_1000 value: 44.976 - type: ndcg_at_3 value: 32.796 - type: ndcg_at_5 value: 34.631 - type: precision_at_1 value: 28.450999999999997 - type: precision_at_10 value: 6.241 - type: precision_at_100 value: 0.9950000000000001 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 14.801 - type: precision_at_5 value: 10.280000000000001 - type: recall_at_1 value: 24.138 - type: recall_at_10 value: 48.111 - type: recall_at_100 value: 71.245 - type: recall_at_1000 value: 88.986 - type: recall_at_3 value: 36.119 - type: recall_at_5 value: 40.846 - type: map_at_1 value: 23.244 - type: map_at_10 value: 31.227 - type: map_at_100 value: 33.007 - type: map_at_1000 value: 33.223 - type: map_at_3 value: 28.924 - type: map_at_5 value: 30.017 - type: mrr_at_1 value: 27.668 - type: mrr_at_10 value: 35.524 - type: mrr_at_100 value: 36.699 - type: mrr_at_1000 value: 36.759 - type: mrr_at_3 value: 33.366 - type: mrr_at_5 value: 34.552 - type: ndcg_at_1 value: 27.668 - type: ndcg_at_10 value: 36.381 - type: ndcg_at_100 value: 43.062 - type: ndcg_at_1000 value: 45.656 - type: ndcg_at_3 value: 32.501999999999995 - type: ndcg_at_5 value: 34.105999999999995 - type: precision_at_1 value: 27.668 - type: precision_at_10 value: 6.798 - type: precision_at_100 value: 1.492 - type: precision_at_1000 value: 0.234 - type: precision_at_3 value: 15.152 - type: precision_at_5 value: 10.791 - type: recall_at_1 value: 23.244 - type: recall_at_10 value: 45.979 - type: recall_at_100 value: 74.822 - type: recall_at_1000 value: 91.078 - type: recall_at_3 value: 34.925 - type: recall_at_5 value: 39.126 - type: map_at_1 value: 19.945 - type: map_at_10 value: 27.517999999999997 - type: map_at_100 value: 28.588 - type: map_at_1000 value: 28.682000000000002 - type: map_at_3 value: 25.345000000000002 - type: map_at_5 value: 26.555 - type: mrr_at_1 value: 21.996 - type: mrr_at_10 value: 29.845 - type: mrr_at_100 value: 30.775999999999996 - type: mrr_at_1000 value: 30.845 - type: mrr_at_3 value: 27.726 - type: mrr_at_5 value: 28.882 - type: ndcg_at_1 value: 21.996 - type: ndcg_at_10 value: 32.034 - type: ndcg_at_100 value: 37.185 - type: ndcg_at_1000 value: 39.645 - type: ndcg_at_3 value: 27.750999999999998 - type: ndcg_at_5 value: 29.805999999999997 - type: precision_at_1 value: 21.996 - type: precision_at_10 value: 5.065 - type: precision_at_100 value: 0.819 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 12.076 - type: precision_at_5 value: 8.392 - type: recall_at_1 value: 19.945 - type: recall_at_10 value: 43.62 - type: recall_at_100 value: 67.194 - type: recall_at_1000 value: 85.7 - type: recall_at_3 value: 32.15 - type: recall_at_5 value: 37.208999999999996 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 18.279 - type: map_at_10 value: 31.052999999999997 - type: map_at_100 value: 33.125 - type: map_at_1000 value: 33.306000000000004 - type: map_at_3 value: 26.208 - type: map_at_5 value: 28.857 - type: mrr_at_1 value: 42.671 - type: mrr_at_10 value: 54.557 - type: mrr_at_100 value: 55.142 - type: mrr_at_1000 value: 55.169000000000004 - type: mrr_at_3 value: 51.488 - type: mrr_at_5 value: 53.439 - type: ndcg_at_1 value: 42.671 - type: ndcg_at_10 value: 41.276 - type: ndcg_at_100 value: 48.376000000000005 - type: ndcg_at_1000 value: 51.318 - type: ndcg_at_3 value: 35.068 - type: ndcg_at_5 value: 37.242 - type: precision_at_1 value: 42.671 - type: precision_at_10 value: 12.638 - type: precision_at_100 value: 2.045 - type: precision_at_1000 value: 0.26 - type: precision_at_3 value: 26.08 - type: precision_at_5 value: 19.805 - type: recall_at_1 value: 18.279 - type: recall_at_10 value: 46.946 - type: recall_at_100 value: 70.97200000000001 - type: recall_at_1000 value: 87.107 - type: recall_at_3 value: 31.147999999999996 - type: recall_at_5 value: 38.099 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.573 - type: map_at_10 value: 19.747 - type: map_at_100 value: 28.205000000000002 - type: map_at_1000 value: 29.831000000000003 - type: map_at_3 value: 14.109 - type: map_at_5 value: 16.448999999999998 - type: mrr_at_1 value: 71 - type: mrr_at_10 value: 77.68599999999999 - type: mrr_at_100 value: 77.995 - type: mrr_at_1000 value: 78.00200000000001 - type: mrr_at_3 value: 76.292 - type: mrr_at_5 value: 77.029 - type: ndcg_at_1 value: 59.12500000000001 - type: ndcg_at_10 value: 43.9 - type: ndcg_at_100 value: 47.863 - type: ndcg_at_1000 value: 54.848 - type: ndcg_at_3 value: 49.803999999999995 - type: ndcg_at_5 value: 46.317 - type: precision_at_1 value: 71 - type: precision_at_10 value: 34.4 - type: precision_at_100 value: 11.063 - type: precision_at_1000 value: 1.989 - type: precision_at_3 value: 52.333 - type: precision_at_5 value: 43.7 - type: recall_at_1 value: 8.573 - type: recall_at_10 value: 25.615 - type: recall_at_100 value: 53.385000000000005 - type: recall_at_1000 value: 75.46000000000001 - type: recall_at_3 value: 15.429 - type: recall_at_5 value: 19.357 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 47.989999999999995 - type: f1 value: 42.776314451497555 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 74.13499999999999 - type: map_at_10 value: 82.825 - type: map_at_100 value: 83.096 - type: map_at_1000 value: 83.111 - type: map_at_3 value: 81.748 - type: map_at_5 value: 82.446 - type: mrr_at_1 value: 79.553 - type: mrr_at_10 value: 86.654 - type: mrr_at_100 value: 86.774 - type: mrr_at_1000 value: 86.778 - type: mrr_at_3 value: 85.981 - type: mrr_at_5 value: 86.462 - type: ndcg_at_1 value: 79.553 - type: ndcg_at_10 value: 86.345 - type: ndcg_at_100 value: 87.32 - type: ndcg_at_1000 value: 87.58200000000001 - type: ndcg_at_3 value: 84.719 - type: ndcg_at_5 value: 85.677 - type: precision_at_1 value: 79.553 - type: precision_at_10 value: 10.402000000000001 - type: precision_at_100 value: 1.1119999999999999 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 32.413 - type: precision_at_5 value: 20.138 - type: recall_at_1 value: 74.13499999999999 - type: recall_at_10 value: 93.215 - type: recall_at_100 value: 97.083 - type: recall_at_1000 value: 98.732 - type: recall_at_3 value: 88.79 - type: recall_at_5 value: 91.259 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 18.298000000000002 - type: map_at_10 value: 29.901 - type: map_at_100 value: 31.528 - type: map_at_1000 value: 31.713 - type: map_at_3 value: 25.740000000000002 - type: map_at_5 value: 28.227999999999998 - type: mrr_at_1 value: 36.728 - type: mrr_at_10 value: 45.401 - type: mrr_at_100 value: 46.27 - type: mrr_at_1000 value: 46.315 - type: mrr_at_3 value: 42.978 - type: mrr_at_5 value: 44.29 - type: ndcg_at_1 value: 36.728 - type: ndcg_at_10 value: 37.456 - type: ndcg_at_100 value: 43.832 - type: ndcg_at_1000 value: 47 - type: ndcg_at_3 value: 33.694 - type: ndcg_at_5 value: 35.085 - type: precision_at_1 value: 36.728 - type: precision_at_10 value: 10.386 - type: precision_at_100 value: 1.701 - type: precision_at_1000 value: 0.22599999999999998 - type: precision_at_3 value: 22.479 - type: precision_at_5 value: 16.605 - type: recall_at_1 value: 18.298000000000002 - type: recall_at_10 value: 44.369 - type: recall_at_100 value: 68.098 - type: recall_at_1000 value: 87.21900000000001 - type: recall_at_3 value: 30.215999999999998 - type: recall_at_5 value: 36.861 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 39.568 - type: map_at_10 value: 65.061 - type: map_at_100 value: 65.896 - type: map_at_1000 value: 65.95100000000001 - type: map_at_3 value: 61.831 - type: map_at_5 value: 63.849000000000004 - type: mrr_at_1 value: 79.136 - type: mrr_at_10 value: 84.58200000000001 - type: mrr_at_100 value: 84.765 - type: mrr_at_1000 value: 84.772 - type: mrr_at_3 value: 83.684 - type: mrr_at_5 value: 84.223 - type: ndcg_at_1 value: 79.136 - type: ndcg_at_10 value: 72.622 - type: ndcg_at_100 value: 75.539 - type: ndcg_at_1000 value: 76.613 - type: ndcg_at_3 value: 68.065 - type: ndcg_at_5 value: 70.58 - type: precision_at_1 value: 79.136 - type: precision_at_10 value: 15.215 - type: precision_at_100 value: 1.7500000000000002 - type: precision_at_1000 value: 0.189 - type: precision_at_3 value: 44.011 - type: precision_at_5 value: 28.388999999999996 - type: recall_at_1 value: 39.568 - type: recall_at_10 value: 76.077 - type: recall_at_100 value: 87.481 - type: recall_at_1000 value: 94.56400000000001 - type: recall_at_3 value: 66.01599999999999 - type: recall_at_5 value: 70.97200000000001 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 85.312 - type: ap value: 80.36296867333715 - type: f1 value: 85.26613311552218 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 23.363999999999997 - type: map_at_10 value: 35.711999999999996 - type: map_at_100 value: 36.876999999999995 - type: map_at_1000 value: 36.923 - type: map_at_3 value: 32.034 - type: map_at_5 value: 34.159 - type: mrr_at_1 value: 24.04 - type: mrr_at_10 value: 36.345 - type: mrr_at_100 value: 37.441 - type: mrr_at_1000 value: 37.480000000000004 - type: mrr_at_3 value: 32.713 - type: mrr_at_5 value: 34.824 - type: ndcg_at_1 value: 24.026 - type: ndcg_at_10 value: 42.531 - type: ndcg_at_100 value: 48.081 - type: ndcg_at_1000 value: 49.213 - type: ndcg_at_3 value: 35.044 - type: ndcg_at_5 value: 38.834 - type: precision_at_1 value: 24.026 - type: precision_at_10 value: 6.622999999999999 - type: precision_at_100 value: 0.941 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.909 - type: precision_at_5 value: 10.871 - type: recall_at_1 value: 23.363999999999997 - type: recall_at_10 value: 63.426 - type: recall_at_100 value: 88.96300000000001 - type: recall_at_1000 value: 97.637 - type: recall_at_3 value: 43.095 - type: recall_at_5 value: 52.178000000000004 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.0095759233926 - type: f1 value: 92.78387794667408 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 75.0296397628819 - type: f1 value: 58.45699589820874 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.45662407531944 - type: f1 value: 71.42364781421813 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.07800941492937 - type: f1 value: 77.22799045640845 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 34.531234379250606 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 30.941490381193802 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.3115090856725 - type: mrr value: 31.290667638675757 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.465 - type: map_at_10 value: 13.03 - type: map_at_100 value: 16.057 - type: map_at_1000 value: 17.49 - type: map_at_3 value: 9.553 - type: map_at_5 value: 11.204 - type: mrr_at_1 value: 43.653 - type: mrr_at_10 value: 53.269 - type: mrr_at_100 value: 53.72 - type: mrr_at_1000 value: 53.761 - type: mrr_at_3 value: 50.929 - type: mrr_at_5 value: 52.461 - type: ndcg_at_1 value: 42.26 - type: ndcg_at_10 value: 34.673 - type: ndcg_at_100 value: 30.759999999999998 - type: ndcg_at_1000 value: 39.728 - type: ndcg_at_3 value: 40.349000000000004 - type: ndcg_at_5 value: 37.915 - type: precision_at_1 value: 43.653 - type: precision_at_10 value: 25.789 - type: precision_at_100 value: 7.754999999999999 - type: precision_at_1000 value: 2.07 - type: precision_at_3 value: 38.596000000000004 - type: precision_at_5 value: 33.251 - type: recall_at_1 value: 5.465 - type: recall_at_10 value: 17.148 - type: recall_at_100 value: 29.768 - type: recall_at_1000 value: 62.239 - type: recall_at_3 value: 10.577 - type: recall_at_5 value: 13.315 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 37.008 - type: map_at_10 value: 52.467 - type: map_at_100 value: 53.342999999999996 - type: map_at_1000 value: 53.366 - type: map_at_3 value: 48.412 - type: map_at_5 value: 50.875 - type: mrr_at_1 value: 41.541 - type: mrr_at_10 value: 54.967 - type: mrr_at_100 value: 55.611 - type: mrr_at_1000 value: 55.627 - type: mrr_at_3 value: 51.824999999999996 - type: mrr_at_5 value: 53.763000000000005 - type: ndcg_at_1 value: 41.541 - type: ndcg_at_10 value: 59.724999999999994 - type: ndcg_at_100 value: 63.38700000000001 - type: ndcg_at_1000 value: 63.883 - type: ndcg_at_3 value: 52.331 - type: ndcg_at_5 value: 56.327000000000005 - type: precision_at_1 value: 41.541 - type: precision_at_10 value: 9.447 - type: precision_at_100 value: 1.1520000000000001 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 23.262 - type: precision_at_5 value: 16.314999999999998 - type: recall_at_1 value: 37.008 - type: recall_at_10 value: 79.145 - type: recall_at_100 value: 94.986 - type: recall_at_1000 value: 98.607 - type: recall_at_3 value: 60.277 - type: recall_at_5 value: 69.407 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.402 - type: map_at_10 value: 84.181 - type: map_at_100 value: 84.796 - type: map_at_1000 value: 84.81400000000001 - type: map_at_3 value: 81.209 - type: map_at_5 value: 83.085 - type: mrr_at_1 value: 81.02000000000001 - type: mrr_at_10 value: 87.263 - type: mrr_at_100 value: 87.36 - type: mrr_at_1000 value: 87.36 - type: mrr_at_3 value: 86.235 - type: mrr_at_5 value: 86.945 - type: ndcg_at_1 value: 81.01 - type: ndcg_at_10 value: 87.99900000000001 - type: ndcg_at_100 value: 89.217 - type: ndcg_at_1000 value: 89.33 - type: ndcg_at_3 value: 85.053 - type: ndcg_at_5 value: 86.703 - type: precision_at_1 value: 81.01 - type: precision_at_10 value: 13.336 - type: precision_at_100 value: 1.52 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 37.14 - type: precision_at_5 value: 24.44 - type: recall_at_1 value: 70.402 - type: recall_at_10 value: 95.214 - type: recall_at_100 value: 99.438 - type: recall_at_1000 value: 99.928 - type: recall_at_3 value: 86.75699999999999 - type: recall_at_5 value: 91.44099999999999 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 56.51721502758904 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 61.054808572333016 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.578 - type: map_at_10 value: 11.036999999999999 - type: map_at_100 value: 12.879999999999999 - type: map_at_1000 value: 13.150999999999998 - type: map_at_3 value: 8.133 - type: map_at_5 value: 9.559 - type: mrr_at_1 value: 22.6 - type: mrr_at_10 value: 32.68 - type: mrr_at_100 value: 33.789 - type: mrr_at_1000 value: 33.854 - type: mrr_at_3 value: 29.7 - type: mrr_at_5 value: 31.480000000000004 - type: ndcg_at_1 value: 22.6 - type: ndcg_at_10 value: 18.616 - type: ndcg_at_100 value: 25.883 - type: ndcg_at_1000 value: 30.944 - type: ndcg_at_3 value: 18.136 - type: ndcg_at_5 value: 15.625 - type: precision_at_1 value: 22.6 - type: precision_at_10 value: 9.48 - type: precision_at_100 value: 1.991 - type: precision_at_1000 value: 0.321 - type: precision_at_3 value: 16.8 - type: precision_at_5 value: 13.54 - type: recall_at_1 value: 4.578 - type: recall_at_10 value: 19.213 - type: recall_at_100 value: 40.397 - type: recall_at_1000 value: 65.2 - type: recall_at_3 value: 10.208 - type: recall_at_5 value: 13.718 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.44288351714071 - type: cos_sim_spearman value: 79.37995604564952 - type: euclidean_pearson value: 81.1078874670718 - type: euclidean_spearman value: 79.37995905980499 - type: manhattan_pearson value: 81.03697527288986 - type: manhattan_spearman value: 79.33490235296236 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 84.95557650436523 - type: cos_sim_spearman value: 78.5190672399868 - type: euclidean_pearson value: 81.58064025904707 - type: euclidean_spearman value: 78.5190672399868 - type: manhattan_pearson value: 81.52857930619889 - type: manhattan_spearman value: 78.50421361308034 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 84.79128416228737 - type: cos_sim_spearman value: 86.05402451477147 - type: euclidean_pearson value: 85.46280267054289 - type: euclidean_spearman value: 86.05402451477147 - type: manhattan_pearson value: 85.46278563858236 - type: manhattan_spearman value: 86.08079590861004 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 83.20623089568763 - type: cos_sim_spearman value: 81.53786907061009 - type: euclidean_pearson value: 82.82272250091494 - type: euclidean_spearman value: 81.53786907061009 - type: manhattan_pearson value: 82.78850494027013 - type: manhattan_spearman value: 81.5135618083407 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 85.46366618397936 - type: cos_sim_spearman value: 86.96566013336908 - type: euclidean_pearson value: 86.62651697548931 - type: euclidean_spearman value: 86.96565526364454 - type: manhattan_pearson value: 86.58812160258009 - type: manhattan_spearman value: 86.9336484321288 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 82.51858358641559 - type: cos_sim_spearman value: 84.7652527954999 - type: euclidean_pearson value: 84.23914783766861 - type: euclidean_spearman value: 84.7652527954999 - type: manhattan_pearson value: 84.22749648503171 - type: manhattan_spearman value: 84.74527996746386 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.28026563313065 - type: cos_sim_spearman value: 87.46928143824915 - type: euclidean_pearson value: 88.30558762000372 - type: euclidean_spearman value: 87.46928143824915 - type: manhattan_pearson value: 88.10513330809331 - type: manhattan_spearman value: 87.21069787834173 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.376497134587375 - type: cos_sim_spearman value: 65.0159550112516 - type: euclidean_pearson value: 65.64572120879598 - type: euclidean_spearman value: 65.0159550112516 - type: manhattan_pearson value: 65.88143604989976 - type: manhattan_spearman value: 65.17547297222434 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 84.22876368947644 - type: cos_sim_spearman value: 85.46935577445318 - type: euclidean_pearson value: 85.32830231392005 - type: euclidean_spearman value: 85.46935577445318 - type: manhattan_pearson value: 85.30353211758495 - type: manhattan_spearman value: 85.42821085956945 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 80.60986667767133 - type: mrr value: 94.29432314236236 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 54.528 - type: map_at_10 value: 65.187 - type: map_at_100 value: 65.62599999999999 - type: map_at_1000 value: 65.657 - type: map_at_3 value: 62.352 - type: map_at_5 value: 64.025 - type: mrr_at_1 value: 57.333 - type: mrr_at_10 value: 66.577 - type: mrr_at_100 value: 66.88 - type: mrr_at_1000 value: 66.908 - type: mrr_at_3 value: 64.556 - type: mrr_at_5 value: 65.739 - type: ndcg_at_1 value: 57.333 - type: ndcg_at_10 value: 70.275 - type: ndcg_at_100 value: 72.136 - type: ndcg_at_1000 value: 72.963 - type: ndcg_at_3 value: 65.414 - type: ndcg_at_5 value: 67.831 - type: precision_at_1 value: 57.333 - type: precision_at_10 value: 9.5 - type: precision_at_100 value: 1.057 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 25.778000000000002 - type: precision_at_5 value: 17.2 - type: recall_at_1 value: 54.528 - type: recall_at_10 value: 84.356 - type: recall_at_100 value: 92.833 - type: recall_at_1000 value: 99.333 - type: recall_at_3 value: 71.283 - type: recall_at_5 value: 77.14999999999999 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.74158415841585 - type: cos_sim_ap value: 92.90048959850317 - type: cos_sim_f1 value: 86.35650810245687 - type: cos_sim_precision value: 90.4709748083242 - type: cos_sim_recall value: 82.6 - type: dot_accuracy value: 99.74158415841585 - type: dot_ap value: 92.90048959850317 - type: dot_f1 value: 86.35650810245687 - type: dot_precision value: 90.4709748083242 - type: dot_recall value: 82.6 - type: euclidean_accuracy value: 99.74158415841585 - type: euclidean_ap value: 92.90048959850317 - type: euclidean_f1 value: 86.35650810245687 - type: euclidean_precision value: 90.4709748083242 - type: euclidean_recall value: 82.6 - type: manhattan_accuracy value: 99.74158415841585 - type: manhattan_ap value: 92.87344692947894 - type: manhattan_f1 value: 86.38497652582159 - type: manhattan_precision value: 90.29443838604145 - type: manhattan_recall value: 82.8 - type: max_accuracy value: 99.74158415841585 - type: max_ap value: 92.90048959850317 - type: max_f1 value: 86.38497652582159 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 63.191648770424216 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 34.02944668730218 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 50.466386167525265 - type: mrr value: 51.19071492233257 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.198022505886435 - type: cos_sim_spearman value: 30.40170257939193 - type: dot_pearson value: 30.198015316402614 - type: dot_spearman value: 30.40170257939193 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.242 - type: map_at_10 value: 2.17 - type: map_at_100 value: 12.221 - type: map_at_1000 value: 28.63 - type: map_at_3 value: 0.728 - type: map_at_5 value: 1.185 - type: mrr_at_1 value: 94 - type: mrr_at_10 value: 97 - type: mrr_at_100 value: 97 - type: mrr_at_1000 value: 97 - type: mrr_at_3 value: 97 - type: mrr_at_5 value: 97 - type: ndcg_at_1 value: 89 - type: ndcg_at_10 value: 82.30499999999999 - type: ndcg_at_100 value: 61.839999999999996 - type: ndcg_at_1000 value: 53.381 - type: ndcg_at_3 value: 88.877 - type: ndcg_at_5 value: 86.05199999999999 - type: precision_at_1 value: 94 - type: precision_at_10 value: 87 - type: precision_at_100 value: 63.38 - type: precision_at_1000 value: 23.498 - type: precision_at_3 value: 94 - type: precision_at_5 value: 92 - type: recall_at_1 value: 0.242 - type: recall_at_10 value: 2.302 - type: recall_at_100 value: 14.979000000000001 - type: recall_at_1000 value: 49.638 - type: recall_at_3 value: 0.753 - type: recall_at_5 value: 1.226 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 3.006 - type: map_at_10 value: 11.805 - type: map_at_100 value: 18.146 - type: map_at_1000 value: 19.788 - type: map_at_3 value: 5.914 - type: map_at_5 value: 8.801 - type: mrr_at_1 value: 40.816 - type: mrr_at_10 value: 56.36600000000001 - type: mrr_at_100 value: 56.721999999999994 - type: mrr_at_1000 value: 56.721999999999994 - type: mrr_at_3 value: 52.041000000000004 - type: mrr_at_5 value: 54.796 - type: ndcg_at_1 value: 37.755 - type: ndcg_at_10 value: 29.863 - type: ndcg_at_100 value: 39.571 - type: ndcg_at_1000 value: 51.385999999999996 - type: ndcg_at_3 value: 32.578 - type: ndcg_at_5 value: 32.351 - type: precision_at_1 value: 40.816 - type: precision_at_10 value: 26.531 - type: precision_at_100 value: 7.796 - type: precision_at_1000 value: 1.555 - type: precision_at_3 value: 32.653 - type: precision_at_5 value: 33.061 - type: recall_at_1 value: 3.006 - type: recall_at_10 value: 18.738 - type: recall_at_100 value: 48.058 - type: recall_at_1000 value: 83.41300000000001 - type: recall_at_3 value: 7.166 - type: recall_at_5 value: 12.102 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.4178 - type: ap value: 14.648781342150446 - type: f1 value: 55.07299194946378 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 60.919637804187886 - type: f1 value: 61.24122013967399 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 49.207896583685695 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.23114978840078 - type: cos_sim_ap value: 74.26624727825818 - type: cos_sim_f1 value: 68.72377190817083 - type: cos_sim_precision value: 64.56400742115028 - type: cos_sim_recall value: 73.45646437994723 - type: dot_accuracy value: 86.23114978840078 - type: dot_ap value: 74.26624032659652 - type: dot_f1 value: 68.72377190817083 - type: dot_precision value: 64.56400742115028 - type: dot_recall value: 73.45646437994723 - type: euclidean_accuracy value: 86.23114978840078 - type: euclidean_ap value: 74.26624714480556 - type: euclidean_f1 value: 68.72377190817083 - type: euclidean_precision value: 64.56400742115028 - type: euclidean_recall value: 73.45646437994723 - type: manhattan_accuracy value: 86.16558383501221 - type: manhattan_ap value: 74.2091943976357 - type: manhattan_f1 value: 68.64221520524654 - type: manhattan_precision value: 63.59135913591359 - type: manhattan_recall value: 74.5646437994723 - type: max_accuracy value: 86.23114978840078 - type: max_ap value: 74.26624727825818 - type: max_f1 value: 68.72377190817083 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.3681841114604 - type: cos_sim_ap value: 86.65166387498546 - type: cos_sim_f1 value: 79.02581944698774 - type: cos_sim_precision value: 75.35796605434099 - type: cos_sim_recall value: 83.06898675700647 - type: dot_accuracy value: 89.3681841114604 - type: dot_ap value: 86.65166019802056 - type: dot_f1 value: 79.02581944698774 - type: dot_precision value: 75.35796605434099 - type: dot_recall value: 83.06898675700647 - type: euclidean_accuracy value: 89.3681841114604 - type: euclidean_ap value: 86.65166462876266 - type: euclidean_f1 value: 79.02581944698774 - type: euclidean_precision value: 75.35796605434099 - type: euclidean_recall value: 83.06898675700647 - type: manhattan_accuracy value: 89.36624364497226 - type: manhattan_ap value: 86.65076471274106 - type: manhattan_f1 value: 79.07408783532733 - type: manhattan_precision value: 76.41102972856527 - type: manhattan_recall value: 81.92947336002464 - type: max_accuracy value: 89.3681841114604 - type: max_ap value: 86.65166462876266 - type: max_f1 value: 79.07408783532733 --- # BenevolenceMessiah/nomic-embed-text-v1.5-Q8_0-GGUF This model was converted to GGUF format from [`nomic-ai/nomic-embed-text-v1.5`](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo BenevolenceMessiah/nomic-embed-text-v1.5-Q8_0-GGUF --hf-file nomic-embed-text-v1.5-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo BenevolenceMessiah/nomic-embed-text-v1.5-Q8_0-GGUF --hf-file nomic-embed-text-v1.5-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo BenevolenceMessiah/nomic-embed-text-v1.5-Q8_0-GGUF --hf-file nomic-embed-text-v1.5-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo BenevolenceMessiah/nomic-embed-text-v1.5-Q8_0-GGUF --hf-file nomic-embed-text-v1.5-q8_0.gguf -c 2048 ```
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
RichardErkhov/Alibaba-NLP_-_gte-Qwen2-1.5B-instruct-4bits
RichardErkhov
null
[ "safetensors", "qwen2", "custom_code", "arxiv:2308.03281", "4-bit", "bitsandbytes", "region:us" ]
1,730
1,730
4
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gte-Qwen2-1.5B-instruct - bnb 4bits - Model creator: https://huggingface.co/Alibaba-NLP/ - Original model: https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct/ Original model description: --- tags: - mteb - sentence-transformers - transformers - Qwen2 - sentence-similarity license: apache-2.0 model-index: - name: gte-qwen2-7B-instruct results: - dataset: config: en name: MTEB AmazonCounterfactualClassification (en) revision: e8379541af4e31359cca9fbcf4b00f2671dba205 split: test type: mteb/amazon_counterfactual metrics: - type: accuracy value: 83.98507462686567 - type: ap value: 50.93015252587014 - type: f1 value: 78.50416599051215 task: type: Classification - dataset: config: default name: MTEB AmazonPolarityClassification revision: e2d317d38cd51312af73b3d32a06d1a08b442046 split: test type: mteb/amazon_polarity metrics: - type: accuracy value: 96.61065 - type: ap value: 94.89174052954196 - type: f1 value: 96.60942596940565 task: type: Classification - dataset: config: en name: MTEB AmazonReviewsClassification (en) revision: 1399c76144fd37290681b995c656ef9b2e06e26d split: test type: mteb/amazon_reviews_multi metrics: - type: accuracy value: 55.614000000000004 - type: f1 value: 54.90553480294904 task: type: Classification - dataset: config: default name: MTEB ArguAna revision: c22ab2a51041ffd869aaddef7af8d8215647e41a split: test type: mteb/arguana metrics: - type: map_at_1 value: 45.164 - type: map_at_10 value: 61.519 - type: map_at_100 value: 61.769 - type: map_at_1000 value: 61.769 - type: map_at_3 value: 57.443999999999996 - type: map_at_5 value: 60.058 - type: mrr_at_1 value: 46.088 - type: mrr_at_10 value: 61.861 - type: mrr_at_100 value: 62.117999999999995 - type: mrr_at_1000 value: 62.117999999999995 - type: mrr_at_3 value: 57.729 - type: mrr_at_5 value: 60.392 - type: ndcg_at_1 value: 45.164 - type: ndcg_at_10 value: 69.72 - type: ndcg_at_100 value: 70.719 - type: ndcg_at_1000 value: 70.719 - type: ndcg_at_3 value: 61.517999999999994 - type: ndcg_at_5 value: 66.247 - type: precision_at_1 value: 45.164 - type: precision_at_10 value: 9.545 - type: precision_at_100 value: 0.996 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 24.443 - type: precision_at_5 value: 16.97 - type: recall_at_1 value: 45.164 - type: recall_at_10 value: 95.448 - type: recall_at_100 value: 99.644 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 73.329 - type: recall_at_5 value: 84.851 task: type: Retrieval - dataset: config: default name: MTEB ArxivClusteringP2P revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d split: test type: mteb/arxiv-clustering-p2p metrics: - type: v_measure value: 50.511868162026175 task: type: Clustering - dataset: config: default name: MTEB ArxivClusteringS2S revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 split: test type: mteb/arxiv-clustering-s2s metrics: - type: v_measure value: 45.007803189284004 task: type: Clustering - dataset: config: default name: MTEB AskUbuntuDupQuestions revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 split: test type: mteb/askubuntudupquestions-reranking metrics: - type: map value: 64.55292107723382 - type: mrr value: 77.66158818097877 task: type: Reranking - dataset: config: default name: MTEB BIOSSES revision: d3fb88f8f02e40887cd149695127462bbcf29b4a split: test type: mteb/biosses-sts metrics: - type: cos_sim_pearson value: 85.65459047085452 - type: cos_sim_spearman value: 82.10729255710761 - type: euclidean_pearson value: 82.78079159312476 - type: euclidean_spearman value: 80.50002701880933 - type: manhattan_pearson value: 82.41372641383016 - type: manhattan_spearman value: 80.57412509272639 task: type: STS - dataset: config: default name: MTEB Banking77Classification revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 split: test type: mteb/banking77 metrics: - type: accuracy value: 87.30844155844156 - type: f1 value: 87.25307322443255 task: type: Classification - dataset: config: default name: MTEB BiorxivClusteringP2P revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 split: test type: mteb/biorxiv-clustering-p2p metrics: - type: v_measure value: 43.20754608934859 task: type: Clustering - dataset: config: default name: MTEB BiorxivClusteringS2S revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 split: test type: mteb/biorxiv-clustering-s2s metrics: - type: v_measure value: 38.818037697335505 task: type: Clustering - dataset: config: default name: MTEB CQADupstackAndroidRetrieval revision: f46a197baaae43b4f621051089b82a364682dfeb split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 35.423 - type: map_at_10 value: 47.198 - type: map_at_100 value: 48.899 - type: map_at_1000 value: 49.004 - type: map_at_3 value: 43.114999999999995 - type: map_at_5 value: 45.491 - type: mrr_at_1 value: 42.918 - type: mrr_at_10 value: 53.299 - type: mrr_at_100 value: 54.032000000000004 - type: mrr_at_1000 value: 54.055 - type: mrr_at_3 value: 50.453 - type: mrr_at_5 value: 52.205999999999996 - type: ndcg_at_1 value: 42.918 - type: ndcg_at_10 value: 53.98 - type: ndcg_at_100 value: 59.57 - type: ndcg_at_1000 value: 60.879000000000005 - type: ndcg_at_3 value: 48.224000000000004 - type: ndcg_at_5 value: 50.998 - type: precision_at_1 value: 42.918 - type: precision_at_10 value: 10.299999999999999 - type: precision_at_100 value: 1.687 - type: precision_at_1000 value: 0.211 - type: precision_at_3 value: 22.842000000000002 - type: precision_at_5 value: 16.681 - type: recall_at_1 value: 35.423 - type: recall_at_10 value: 66.824 - type: recall_at_100 value: 89.564 - type: recall_at_1000 value: 97.501 - type: recall_at_3 value: 50.365 - type: recall_at_5 value: 57.921 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackEnglishRetrieval revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 33.205 - type: map_at_10 value: 44.859 - type: map_at_100 value: 46.135 - type: map_at_1000 value: 46.259 - type: map_at_3 value: 41.839 - type: map_at_5 value: 43.662 - type: mrr_at_1 value: 41.146 - type: mrr_at_10 value: 50.621 - type: mrr_at_100 value: 51.207 - type: mrr_at_1000 value: 51.246 - type: mrr_at_3 value: 48.535000000000004 - type: mrr_at_5 value: 49.818 - type: ndcg_at_1 value: 41.146 - type: ndcg_at_10 value: 50.683 - type: ndcg_at_100 value: 54.82 - type: ndcg_at_1000 value: 56.69 - type: ndcg_at_3 value: 46.611000000000004 - type: ndcg_at_5 value: 48.66 - type: precision_at_1 value: 41.146 - type: precision_at_10 value: 9.439 - type: precision_at_100 value: 1.465 - type: precision_at_1000 value: 0.194 - type: precision_at_3 value: 22.59 - type: precision_at_5 value: 15.86 - type: recall_at_1 value: 33.205 - type: recall_at_10 value: 61.028999999999996 - type: recall_at_100 value: 78.152 - type: recall_at_1000 value: 89.59700000000001 - type: recall_at_3 value: 49.05 - type: recall_at_5 value: 54.836 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackGamingRetrieval revision: 4885aa143210c98657558c04aaf3dc47cfb54340 split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 41.637 - type: map_at_10 value: 55.162 - type: map_at_100 value: 56.142 - type: map_at_1000 value: 56.188 - type: map_at_3 value: 51.564 - type: map_at_5 value: 53.696 - type: mrr_at_1 value: 47.524 - type: mrr_at_10 value: 58.243 - type: mrr_at_100 value: 58.879999999999995 - type: mrr_at_1000 value: 58.9 - type: mrr_at_3 value: 55.69499999999999 - type: mrr_at_5 value: 57.284 - type: ndcg_at_1 value: 47.524 - type: ndcg_at_10 value: 61.305 - type: ndcg_at_100 value: 65.077 - type: ndcg_at_1000 value: 65.941 - type: ndcg_at_3 value: 55.422000000000004 - type: ndcg_at_5 value: 58.516 - type: precision_at_1 value: 47.524 - type: precision_at_10 value: 9.918000000000001 - type: precision_at_100 value: 1.276 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 24.765 - type: precision_at_5 value: 17.204 - type: recall_at_1 value: 41.637 - type: recall_at_10 value: 76.185 - type: recall_at_100 value: 92.149 - type: recall_at_1000 value: 98.199 - type: recall_at_3 value: 60.856 - type: recall_at_5 value: 68.25099999999999 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackGisRetrieval revision: 5003b3064772da1887988e05400cf3806fe491f2 split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 26.27 - type: map_at_10 value: 37.463 - type: map_at_100 value: 38.434000000000005 - type: map_at_1000 value: 38.509 - type: map_at_3 value: 34.226 - type: map_at_5 value: 36.161 - type: mrr_at_1 value: 28.588 - type: mrr_at_10 value: 39.383 - type: mrr_at_100 value: 40.23 - type: mrr_at_1000 value: 40.281 - type: mrr_at_3 value: 36.422 - type: mrr_at_5 value: 38.252 - type: ndcg_at_1 value: 28.588 - type: ndcg_at_10 value: 43.511 - type: ndcg_at_100 value: 48.274 - type: ndcg_at_1000 value: 49.975 - type: ndcg_at_3 value: 37.319 - type: ndcg_at_5 value: 40.568 - type: precision_at_1 value: 28.588 - type: precision_at_10 value: 6.893000000000001 - type: precision_at_100 value: 0.9900000000000001 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 16.347 - type: precision_at_5 value: 11.661000000000001 - type: recall_at_1 value: 26.27 - type: recall_at_10 value: 60.284000000000006 - type: recall_at_100 value: 81.902 - type: recall_at_1000 value: 94.43 - type: recall_at_3 value: 43.537 - type: recall_at_5 value: 51.475 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackMathematicaRetrieval revision: 90fceea13679c63fe563ded68f3b6f06e50061de split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 18.168 - type: map_at_10 value: 28.410000000000004 - type: map_at_100 value: 29.78 - type: map_at_1000 value: 29.892999999999997 - type: map_at_3 value: 25.238 - type: map_at_5 value: 26.96 - type: mrr_at_1 value: 23.507 - type: mrr_at_10 value: 33.382 - type: mrr_at_100 value: 34.404 - type: mrr_at_1000 value: 34.467999999999996 - type: mrr_at_3 value: 30.637999999999998 - type: mrr_at_5 value: 32.199 - type: ndcg_at_1 value: 23.507 - type: ndcg_at_10 value: 34.571000000000005 - type: ndcg_at_100 value: 40.663 - type: ndcg_at_1000 value: 43.236000000000004 - type: ndcg_at_3 value: 29.053 - type: ndcg_at_5 value: 31.563999999999997 - type: precision_at_1 value: 23.507 - type: precision_at_10 value: 6.654 - type: precision_at_100 value: 1.113 - type: precision_at_1000 value: 0.146 - type: precision_at_3 value: 14.427999999999999 - type: precision_at_5 value: 10.498000000000001 - type: recall_at_1 value: 18.168 - type: recall_at_10 value: 48.443000000000005 - type: recall_at_100 value: 74.47 - type: recall_at_1000 value: 92.494 - type: recall_at_3 value: 33.379999999999995 - type: recall_at_5 value: 39.76 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackPhysicsRetrieval revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 32.39 - type: map_at_10 value: 44.479 - type: map_at_100 value: 45.977000000000004 - type: map_at_1000 value: 46.087 - type: map_at_3 value: 40.976 - type: map_at_5 value: 43.038 - type: mrr_at_1 value: 40.135 - type: mrr_at_10 value: 50.160000000000004 - type: mrr_at_100 value: 51.052 - type: mrr_at_1000 value: 51.087 - type: mrr_at_3 value: 47.818 - type: mrr_at_5 value: 49.171 - type: ndcg_at_1 value: 40.135 - type: ndcg_at_10 value: 50.731 - type: ndcg_at_100 value: 56.452000000000005 - type: ndcg_at_1000 value: 58.123000000000005 - type: ndcg_at_3 value: 45.507 - type: ndcg_at_5 value: 48.11 - type: precision_at_1 value: 40.135 - type: precision_at_10 value: 9.192 - type: precision_at_100 value: 1.397 - type: precision_at_1000 value: 0.169 - type: precision_at_3 value: 21.816 - type: precision_at_5 value: 15.476 - type: recall_at_1 value: 32.39 - type: recall_at_10 value: 63.597 - type: recall_at_100 value: 86.737 - type: recall_at_1000 value: 97.039 - type: recall_at_3 value: 48.906 - type: recall_at_5 value: 55.659000000000006 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackProgrammersRetrieval revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 28.397 - type: map_at_10 value: 39.871 - type: map_at_100 value: 41.309000000000005 - type: map_at_1000 value: 41.409 - type: map_at_3 value: 36.047000000000004 - type: map_at_5 value: 38.104 - type: mrr_at_1 value: 34.703 - type: mrr_at_10 value: 44.773 - type: mrr_at_100 value: 45.64 - type: mrr_at_1000 value: 45.678999999999995 - type: mrr_at_3 value: 41.705 - type: mrr_at_5 value: 43.406 - type: ndcg_at_1 value: 34.703 - type: ndcg_at_10 value: 46.271 - type: ndcg_at_100 value: 52.037 - type: ndcg_at_1000 value: 53.81700000000001 - type: ndcg_at_3 value: 39.966 - type: ndcg_at_5 value: 42.801 - type: precision_at_1 value: 34.703 - type: precision_at_10 value: 8.744 - type: precision_at_100 value: 1.348 - type: precision_at_1000 value: 0.167 - type: precision_at_3 value: 19.102 - type: precision_at_5 value: 13.836 - type: recall_at_1 value: 28.397 - type: recall_at_10 value: 60.299 - type: recall_at_100 value: 84.595 - type: recall_at_1000 value: 96.155 - type: recall_at_3 value: 43.065 - type: recall_at_5 value: 50.371 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackRetrieval revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 28.044333333333338 - type: map_at_10 value: 38.78691666666666 - type: map_at_100 value: 40.113 - type: map_at_1000 value: 40.22125 - type: map_at_3 value: 35.52966666666667 - type: map_at_5 value: 37.372749999999996 - type: mrr_at_1 value: 33.159083333333335 - type: mrr_at_10 value: 42.913583333333335 - type: mrr_at_100 value: 43.7845 - type: mrr_at_1000 value: 43.830333333333336 - type: mrr_at_3 value: 40.29816666666667 - type: mrr_at_5 value: 41.81366666666667 - type: ndcg_at_1 value: 33.159083333333335 - type: ndcg_at_10 value: 44.75750000000001 - type: ndcg_at_100 value: 50.13658333333334 - type: ndcg_at_1000 value: 52.037 - type: ndcg_at_3 value: 39.34258333333334 - type: ndcg_at_5 value: 41.93708333333333 - type: precision_at_1 value: 33.159083333333335 - type: precision_at_10 value: 7.952416666666667 - type: precision_at_100 value: 1.2571666666666668 - type: precision_at_1000 value: 0.16099999999999998 - type: precision_at_3 value: 18.303833333333337 - type: precision_at_5 value: 13.057083333333333 - type: recall_at_1 value: 28.044333333333338 - type: recall_at_10 value: 58.237249999999996 - type: recall_at_100 value: 81.35391666666666 - type: recall_at_1000 value: 94.21283333333334 - type: recall_at_3 value: 43.32341666666667 - type: recall_at_5 value: 49.94908333333333 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackStatsRetrieval revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 27.838 - type: map_at_10 value: 36.04 - type: map_at_100 value: 37.113 - type: map_at_1000 value: 37.204 - type: map_at_3 value: 33.585 - type: map_at_5 value: 34.845 - type: mrr_at_1 value: 30.982 - type: mrr_at_10 value: 39.105000000000004 - type: mrr_at_100 value: 39.98 - type: mrr_at_1000 value: 40.042 - type: mrr_at_3 value: 36.912 - type: mrr_at_5 value: 38.062000000000005 - type: ndcg_at_1 value: 30.982 - type: ndcg_at_10 value: 40.982 - type: ndcg_at_100 value: 46.092 - type: ndcg_at_1000 value: 48.25 - type: ndcg_at_3 value: 36.41 - type: ndcg_at_5 value: 38.379999999999995 - type: precision_at_1 value: 30.982 - type: precision_at_10 value: 6.534 - type: precision_at_100 value: 0.9820000000000001 - type: precision_at_1000 value: 0.124 - type: precision_at_3 value: 15.745999999999999 - type: precision_at_5 value: 10.828 - type: recall_at_1 value: 27.838 - type: recall_at_10 value: 52.971000000000004 - type: recall_at_100 value: 76.357 - type: recall_at_1000 value: 91.973 - type: recall_at_3 value: 40.157 - type: recall_at_5 value: 45.147999999999996 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackTexRetrieval revision: 46989137a86843e03a6195de44b09deda022eec7 split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 19.059 - type: map_at_10 value: 27.454 - type: map_at_100 value: 28.736 - type: map_at_1000 value: 28.865000000000002 - type: map_at_3 value: 24.773999999999997 - type: map_at_5 value: 26.266000000000002 - type: mrr_at_1 value: 23.125 - type: mrr_at_10 value: 31.267 - type: mrr_at_100 value: 32.32 - type: mrr_at_1000 value: 32.394 - type: mrr_at_3 value: 28.894 - type: mrr_at_5 value: 30.281000000000002 - type: ndcg_at_1 value: 23.125 - type: ndcg_at_10 value: 32.588 - type: ndcg_at_100 value: 38.432 - type: ndcg_at_1000 value: 41.214 - type: ndcg_at_3 value: 27.938000000000002 - type: ndcg_at_5 value: 30.127 - type: precision_at_1 value: 23.125 - type: precision_at_10 value: 5.9639999999999995 - type: precision_at_100 value: 1.047 - type: precision_at_1000 value: 0.148 - type: precision_at_3 value: 13.294 - type: precision_at_5 value: 9.628 - type: recall_at_1 value: 19.059 - type: recall_at_10 value: 44.25 - type: recall_at_100 value: 69.948 - type: recall_at_1000 value: 89.35300000000001 - type: recall_at_3 value: 31.114000000000004 - type: recall_at_5 value: 36.846000000000004 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackUnixRetrieval revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 28.355999999999998 - type: map_at_10 value: 39.055 - type: map_at_100 value: 40.486 - type: map_at_1000 value: 40.571 - type: map_at_3 value: 35.69 - type: map_at_5 value: 37.605 - type: mrr_at_1 value: 33.302 - type: mrr_at_10 value: 42.986000000000004 - type: mrr_at_100 value: 43.957 - type: mrr_at_1000 value: 43.996 - type: mrr_at_3 value: 40.111999999999995 - type: mrr_at_5 value: 41.735 - type: ndcg_at_1 value: 33.302 - type: ndcg_at_10 value: 44.962999999999994 - type: ndcg_at_100 value: 50.917 - type: ndcg_at_1000 value: 52.622 - type: ndcg_at_3 value: 39.182 - type: ndcg_at_5 value: 41.939 - type: precision_at_1 value: 33.302 - type: precision_at_10 value: 7.779999999999999 - type: precision_at_100 value: 1.203 - type: precision_at_1000 value: 0.145 - type: precision_at_3 value: 18.035 - type: precision_at_5 value: 12.873000000000001 - type: recall_at_1 value: 28.355999999999998 - type: recall_at_10 value: 58.782000000000004 - type: recall_at_100 value: 84.02199999999999 - type: recall_at_1000 value: 95.511 - type: recall_at_3 value: 43.126999999999995 - type: recall_at_5 value: 50.14999999999999 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackWebmastersRetrieval revision: 160c094312a0e1facb97e55eeddb698c0abe3571 split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 27.391 - type: map_at_10 value: 37.523 - type: map_at_100 value: 39.312000000000005 - type: map_at_1000 value: 39.54 - type: map_at_3 value: 34.231 - type: map_at_5 value: 36.062 - type: mrr_at_1 value: 32.016 - type: mrr_at_10 value: 41.747 - type: mrr_at_100 value: 42.812 - type: mrr_at_1000 value: 42.844 - type: mrr_at_3 value: 39.129999999999995 - type: mrr_at_5 value: 40.524 - type: ndcg_at_1 value: 32.016 - type: ndcg_at_10 value: 43.826 - type: ndcg_at_100 value: 50.373999999999995 - type: ndcg_at_1000 value: 52.318 - type: ndcg_at_3 value: 38.479 - type: ndcg_at_5 value: 40.944 - type: precision_at_1 value: 32.016 - type: precision_at_10 value: 8.280999999999999 - type: precision_at_100 value: 1.6760000000000002 - type: precision_at_1000 value: 0.25 - type: precision_at_3 value: 18.05 - type: precision_at_5 value: 13.083 - type: recall_at_1 value: 27.391 - type: recall_at_10 value: 56.928999999999995 - type: recall_at_100 value: 85.169 - type: recall_at_1000 value: 96.665 - type: recall_at_3 value: 42.264 - type: recall_at_5 value: 48.556 task: type: Retrieval - dataset: config: default name: MTEB CQADupstackWordpressRetrieval revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 split: test type: BeIR/cqadupstack metrics: - type: map_at_1 value: 18.398 - type: map_at_10 value: 27.929 - type: map_at_100 value: 29.032999999999998 - type: map_at_1000 value: 29.126 - type: map_at_3 value: 25.070999999999998 - type: map_at_5 value: 26.583000000000002 - type: mrr_at_1 value: 19.963 - type: mrr_at_10 value: 29.997 - type: mrr_at_100 value: 30.9 - type: mrr_at_1000 value: 30.972 - type: mrr_at_3 value: 27.264 - type: mrr_at_5 value: 28.826 - type: ndcg_at_1 value: 19.963 - type: ndcg_at_10 value: 33.678999999999995 - type: ndcg_at_100 value: 38.931 - type: ndcg_at_1000 value: 41.379 - type: ndcg_at_3 value: 28.000000000000004 - type: ndcg_at_5 value: 30.637999999999998 - type: precision_at_1 value: 19.963 - type: precision_at_10 value: 5.7299999999999995 - type: precision_at_100 value: 0.902 - type: precision_at_1000 value: 0.122 - type: precision_at_3 value: 12.631 - type: precision_at_5 value: 9.057 - type: recall_at_1 value: 18.398 - type: recall_at_10 value: 49.254 - type: recall_at_100 value: 73.182 - type: recall_at_1000 value: 91.637 - type: recall_at_3 value: 34.06 - type: recall_at_5 value: 40.416000000000004 task: type: Retrieval - dataset: config: default name: MTEB ClimateFEVER revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 split: test type: mteb/climate-fever metrics: - type: map_at_1 value: 19.681 - type: map_at_10 value: 32.741 - type: map_at_100 value: 34.811 - type: map_at_1000 value: 35.003 - type: map_at_3 value: 27.697 - type: map_at_5 value: 30.372 - type: mrr_at_1 value: 44.951 - type: mrr_at_10 value: 56.34400000000001 - type: mrr_at_100 value: 56.961 - type: mrr_at_1000 value: 56.987 - type: mrr_at_3 value: 53.681 - type: mrr_at_5 value: 55.407 - type: ndcg_at_1 value: 44.951 - type: ndcg_at_10 value: 42.905 - type: ndcg_at_100 value: 49.95 - type: ndcg_at_1000 value: 52.917 - type: ndcg_at_3 value: 36.815 - type: ndcg_at_5 value: 38.817 - type: precision_at_1 value: 44.951 - type: precision_at_10 value: 12.989999999999998 - type: precision_at_100 value: 2.068 - type: precision_at_1000 value: 0.263 - type: precision_at_3 value: 27.275 - type: precision_at_5 value: 20.365 - type: recall_at_1 value: 19.681 - type: recall_at_10 value: 48.272999999999996 - type: recall_at_100 value: 71.87400000000001 - type: recall_at_1000 value: 87.929 - type: recall_at_3 value: 32.653999999999996 - type: recall_at_5 value: 39.364 task: type: Retrieval - dataset: config: default name: MTEB DBPedia revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 split: test type: mteb/dbpedia metrics: - type: map_at_1 value: 10.231 - type: map_at_10 value: 22.338 - type: map_at_100 value: 31.927 - type: map_at_1000 value: 33.87 - type: map_at_3 value: 15.559999999999999 - type: map_at_5 value: 18.239 - type: mrr_at_1 value: 75.0 - type: mrr_at_10 value: 81.303 - type: mrr_at_100 value: 81.523 - type: mrr_at_1000 value: 81.53 - type: mrr_at_3 value: 80.083 - type: mrr_at_5 value: 80.758 - type: ndcg_at_1 value: 64.625 - type: ndcg_at_10 value: 48.687000000000005 - type: ndcg_at_100 value: 52.791 - type: ndcg_at_1000 value: 60.041999999999994 - type: ndcg_at_3 value: 53.757999999999996 - type: ndcg_at_5 value: 50.76500000000001 - type: precision_at_1 value: 75.0 - type: precision_at_10 value: 38.3 - type: precision_at_100 value: 12.025 - type: precision_at_1000 value: 2.3970000000000002 - type: precision_at_3 value: 55.417 - type: precision_at_5 value: 47.5 - type: recall_at_1 value: 10.231 - type: recall_at_10 value: 27.697 - type: recall_at_100 value: 57.409 - type: recall_at_1000 value: 80.547 - type: recall_at_3 value: 16.668 - type: recall_at_5 value: 20.552 task: type: Retrieval - dataset: config: default name: MTEB EmotionClassification revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 split: test type: mteb/emotion metrics: - type: accuracy value: 61.365 - type: f1 value: 56.7540827912991 task: type: Classification - dataset: config: default name: MTEB FEVER revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 split: test type: mteb/fever metrics: - type: map_at_1 value: 83.479 - type: map_at_10 value: 88.898 - type: map_at_100 value: 89.11 - type: map_at_1000 value: 89.12400000000001 - type: map_at_3 value: 88.103 - type: map_at_5 value: 88.629 - type: mrr_at_1 value: 89.934 - type: mrr_at_10 value: 93.91000000000001 - type: mrr_at_100 value: 93.937 - type: mrr_at_1000 value: 93.938 - type: mrr_at_3 value: 93.62700000000001 - type: mrr_at_5 value: 93.84599999999999 - type: ndcg_at_1 value: 89.934 - type: ndcg_at_10 value: 91.574 - type: ndcg_at_100 value: 92.238 - type: ndcg_at_1000 value: 92.45 - type: ndcg_at_3 value: 90.586 - type: ndcg_at_5 value: 91.16300000000001 - type: precision_at_1 value: 89.934 - type: precision_at_10 value: 10.555 - type: precision_at_100 value: 1.1159999999999999 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 33.588 - type: precision_at_5 value: 20.642 - type: recall_at_1 value: 83.479 - type: recall_at_10 value: 94.971 - type: recall_at_100 value: 97.397 - type: recall_at_1000 value: 98.666 - type: recall_at_3 value: 92.24799999999999 - type: recall_at_5 value: 93.797 task: type: Retrieval - dataset: config: default name: MTEB FiQA2018 revision: 27a168819829fe9bcd655c2df245fb19452e8e06 split: test type: mteb/fiqa metrics: - type: map_at_1 value: 27.16 - type: map_at_10 value: 45.593 - type: map_at_100 value: 47.762 - type: map_at_1000 value: 47.899 - type: map_at_3 value: 39.237 - type: map_at_5 value: 42.970000000000006 - type: mrr_at_1 value: 52.623 - type: mrr_at_10 value: 62.637 - type: mrr_at_100 value: 63.169 - type: mrr_at_1000 value: 63.185 - type: mrr_at_3 value: 59.928000000000004 - type: mrr_at_5 value: 61.702999999999996 - type: ndcg_at_1 value: 52.623 - type: ndcg_at_10 value: 54.701 - type: ndcg_at_100 value: 61.263 - type: ndcg_at_1000 value: 63.134 - type: ndcg_at_3 value: 49.265 - type: ndcg_at_5 value: 51.665000000000006 - type: precision_at_1 value: 52.623 - type: precision_at_10 value: 15.185 - type: precision_at_100 value: 2.202 - type: precision_at_1000 value: 0.254 - type: precision_at_3 value: 32.767 - type: precision_at_5 value: 24.722 - type: recall_at_1 value: 27.16 - type: recall_at_10 value: 63.309000000000005 - type: recall_at_100 value: 86.722 - type: recall_at_1000 value: 97.505 - type: recall_at_3 value: 45.045 - type: recall_at_5 value: 54.02400000000001 task: type: Retrieval - dataset: config: default name: MTEB HotpotQA revision: ab518f4d6fcca38d87c25209f94beba119d02014 split: test type: mteb/hotpotqa metrics: - type: map_at_1 value: 42.573 - type: map_at_10 value: 59.373 - type: map_at_100 value: 60.292 - type: map_at_1000 value: 60.358999999999995 - type: map_at_3 value: 56.159000000000006 - type: map_at_5 value: 58.123999999999995 - type: mrr_at_1 value: 85.14500000000001 - type: mrr_at_10 value: 89.25999999999999 - type: mrr_at_100 value: 89.373 - type: mrr_at_1000 value: 89.377 - type: mrr_at_3 value: 88.618 - type: mrr_at_5 value: 89.036 - type: ndcg_at_1 value: 85.14500000000001 - type: ndcg_at_10 value: 68.95 - type: ndcg_at_100 value: 71.95 - type: ndcg_at_1000 value: 73.232 - type: ndcg_at_3 value: 64.546 - type: ndcg_at_5 value: 66.945 - type: precision_at_1 value: 85.14500000000001 - type: precision_at_10 value: 13.865 - type: precision_at_100 value: 1.619 - type: precision_at_1000 value: 0.179 - type: precision_at_3 value: 39.703 - type: precision_at_5 value: 25.718000000000004 - type: recall_at_1 value: 42.573 - type: recall_at_10 value: 69.325 - type: recall_at_100 value: 80.932 - type: recall_at_1000 value: 89.446 - type: recall_at_3 value: 59.553999999999995 - type: recall_at_5 value: 64.294 task: type: Retrieval - dataset: config: default name: MTEB ImdbClassification revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 split: test type: mteb/imdb metrics: - type: accuracy value: 95.8336 - type: ap value: 93.78862962194073 - type: f1 value: 95.83192650728371 task: type: Classification - dataset: config: default name: MTEB MSMARCO revision: c5a29a104738b98a9e76336939199e264163d4a0 split: dev type: mteb/msmarco metrics: - type: map_at_1 value: 23.075000000000003 - type: map_at_10 value: 36.102000000000004 - type: map_at_100 value: 37.257 - type: map_at_1000 value: 37.3 - type: map_at_3 value: 32.144 - type: map_at_5 value: 34.359 - type: mrr_at_1 value: 23.711 - type: mrr_at_10 value: 36.671 - type: mrr_at_100 value: 37.763999999999996 - type: mrr_at_1000 value: 37.801 - type: mrr_at_3 value: 32.775 - type: mrr_at_5 value: 34.977000000000004 - type: ndcg_at_1 value: 23.711 - type: ndcg_at_10 value: 43.361 - type: ndcg_at_100 value: 48.839 - type: ndcg_at_1000 value: 49.88 - type: ndcg_at_3 value: 35.269 - type: ndcg_at_5 value: 39.224 - type: precision_at_1 value: 23.711 - type: precision_at_10 value: 6.866999999999999 - type: precision_at_100 value: 0.96 - type: precision_at_1000 value: 0.105 - type: precision_at_3 value: 15.096000000000002 - type: precision_at_5 value: 11.083 - type: recall_at_1 value: 23.075000000000003 - type: recall_at_10 value: 65.756 - type: recall_at_100 value: 90.88199999999999 - type: recall_at_1000 value: 98.739 - type: recall_at_3 value: 43.691 - type: recall_at_5 value: 53.15800000000001 task: type: Retrieval - dataset: config: en name: MTEB MTOPDomainClassification (en) revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf split: test type: mteb/mtop_domain metrics: - type: accuracy value: 97.69493844049248 - type: f1 value: 97.55048089616261 task: type: Classification - dataset: config: en name: MTEB MTOPIntentClassification (en) revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba split: test type: mteb/mtop_intent metrics: - type: accuracy value: 88.75968992248062 - type: f1 value: 72.26321223399123 task: type: Classification - dataset: config: en name: MTEB MassiveIntentClassification (en) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 82.40080699394754 - type: f1 value: 79.62590029057968 task: type: Classification - dataset: config: en name: MTEB MassiveScenarioClassification (en) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 84.49562878278414 - type: f1 value: 84.0040193313333 task: type: Classification - dataset: config: default name: MTEB MedrxivClusteringP2P revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 split: test type: mteb/medrxiv-clustering-p2p metrics: - type: v_measure value: 39.386760057101945 task: type: Clustering - dataset: config: default name: MTEB MedrxivClusteringS2S revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 split: test type: mteb/medrxiv-clustering-s2s metrics: - type: v_measure value: 37.89687154075537 task: type: Clustering - dataset: config: default name: MTEB MindSmallReranking revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 split: test type: mteb/mind_small metrics: - type: map value: 33.94151656057482 - type: mrr value: 35.32684700746953 task: type: Reranking - dataset: config: default name: MTEB NFCorpus revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 split: test type: mteb/nfcorpus metrics: - type: map_at_1 value: 6.239999999999999 - type: map_at_10 value: 14.862 - type: map_at_100 value: 18.955 - type: map_at_1000 value: 20.694000000000003 - type: map_at_3 value: 10.683 - type: map_at_5 value: 12.674 - type: mrr_at_1 value: 50.15500000000001 - type: mrr_at_10 value: 59.697 - type: mrr_at_100 value: 60.095 - type: mrr_at_1000 value: 60.129999999999995 - type: mrr_at_3 value: 58.35900000000001 - type: mrr_at_5 value: 58.839 - type: ndcg_at_1 value: 48.452 - type: ndcg_at_10 value: 39.341 - type: ndcg_at_100 value: 35.866 - type: ndcg_at_1000 value: 45.111000000000004 - type: ndcg_at_3 value: 44.527 - type: ndcg_at_5 value: 42.946 - type: precision_at_1 value: 50.15500000000001 - type: precision_at_10 value: 29.536 - type: precision_at_100 value: 9.142 - type: precision_at_1000 value: 2.2849999999999997 - type: precision_at_3 value: 41.899 - type: precision_at_5 value: 37.647000000000006 - type: recall_at_1 value: 6.239999999999999 - type: recall_at_10 value: 19.278000000000002 - type: recall_at_100 value: 36.074 - type: recall_at_1000 value: 70.017 - type: recall_at_3 value: 12.066 - type: recall_at_5 value: 15.254000000000001 task: type: Retrieval - dataset: config: default name: MTEB NQ revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 split: test type: mteb/nq metrics: - type: map_at_1 value: 39.75 - type: map_at_10 value: 56.443 - type: map_at_100 value: 57.233999999999995 - type: map_at_1000 value: 57.249 - type: map_at_3 value: 52.032999999999994 - type: map_at_5 value: 54.937999999999995 - type: mrr_at_1 value: 44.728 - type: mrr_at_10 value: 58.939 - type: mrr_at_100 value: 59.489000000000004 - type: mrr_at_1000 value: 59.499 - type: mrr_at_3 value: 55.711999999999996 - type: mrr_at_5 value: 57.89 - type: ndcg_at_1 value: 44.728 - type: ndcg_at_10 value: 63.998999999999995 - type: ndcg_at_100 value: 67.077 - type: ndcg_at_1000 value: 67.40899999999999 - type: ndcg_at_3 value: 56.266000000000005 - type: ndcg_at_5 value: 60.88 - type: precision_at_1 value: 44.728 - type: precision_at_10 value: 10.09 - type: precision_at_100 value: 1.1809999999999998 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 25.145 - type: precision_at_5 value: 17.822 - type: recall_at_1 value: 39.75 - type: recall_at_10 value: 84.234 - type: recall_at_100 value: 97.055 - type: recall_at_1000 value: 99.517 - type: recall_at_3 value: 64.851 - type: recall_at_5 value: 75.343 task: type: Retrieval - dataset: config: default name: MTEB QuoraRetrieval revision: None split: test type: mteb/quora metrics: - type: map_at_1 value: 72.085 - type: map_at_10 value: 86.107 - type: map_at_100 value: 86.727 - type: map_at_1000 value: 86.74 - type: map_at_3 value: 83.21 - type: map_at_5 value: 85.06 - type: mrr_at_1 value: 82.94 - type: mrr_at_10 value: 88.845 - type: mrr_at_100 value: 88.926 - type: mrr_at_1000 value: 88.927 - type: mrr_at_3 value: 87.993 - type: mrr_at_5 value: 88.62299999999999 - type: ndcg_at_1 value: 82.97 - type: ndcg_at_10 value: 89.645 - type: ndcg_at_100 value: 90.717 - type: ndcg_at_1000 value: 90.78 - type: ndcg_at_3 value: 86.99900000000001 - type: ndcg_at_5 value: 88.52600000000001 - type: precision_at_1 value: 82.97 - type: precision_at_10 value: 13.569 - type: precision_at_100 value: 1.539 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 38.043 - type: precision_at_5 value: 24.992 - type: recall_at_1 value: 72.085 - type: recall_at_10 value: 96.262 - type: recall_at_100 value: 99.77000000000001 - type: recall_at_1000 value: 99.997 - type: recall_at_3 value: 88.652 - type: recall_at_5 value: 93.01899999999999 task: type: Retrieval - dataset: config: default name: MTEB RedditClustering revision: 24640382cdbf8abc73003fb0fa6d111a705499eb split: test type: mteb/reddit-clustering metrics: - type: v_measure value: 55.82153952668092 task: type: Clustering - dataset: config: default name: MTEB RedditClusteringP2P revision: 282350215ef01743dc01b456c7f5241fa8937f16 split: test type: mteb/reddit-clustering-p2p metrics: - type: v_measure value: 62.094465801879295 task: type: Clustering - dataset: config: default name: MTEB SCIDOCS revision: None split: test type: mteb/scidocs metrics: - type: map_at_1 value: 5.688 - type: map_at_10 value: 15.201999999999998 - type: map_at_100 value: 18.096 - type: map_at_1000 value: 18.481 - type: map_at_3 value: 10.734 - type: map_at_5 value: 12.94 - type: mrr_at_1 value: 28.000000000000004 - type: mrr_at_10 value: 41.101 - type: mrr_at_100 value: 42.202 - type: mrr_at_1000 value: 42.228 - type: mrr_at_3 value: 37.683 - type: mrr_at_5 value: 39.708 - type: ndcg_at_1 value: 28.000000000000004 - type: ndcg_at_10 value: 24.976000000000003 - type: ndcg_at_100 value: 35.129 - type: ndcg_at_1000 value: 40.77 - type: ndcg_at_3 value: 23.787 - type: ndcg_at_5 value: 20.816000000000003 - type: precision_at_1 value: 28.000000000000004 - type: precision_at_10 value: 13.04 - type: precision_at_100 value: 2.761 - type: precision_at_1000 value: 0.41000000000000003 - type: precision_at_3 value: 22.6 - type: precision_at_5 value: 18.52 - type: recall_at_1 value: 5.688 - type: recall_at_10 value: 26.43 - type: recall_at_100 value: 56.02 - type: recall_at_1000 value: 83.21 - type: recall_at_3 value: 13.752 - type: recall_at_5 value: 18.777 task: type: Retrieval - dataset: config: default name: MTEB SICK-R revision: a6ea5a8cab320b040a23452cc28066d9beae2cee split: test type: mteb/sickr-sts metrics: - type: cos_sim_pearson value: 85.15084859283178 - type: cos_sim_spearman value: 80.49030614009419 - type: euclidean_pearson value: 81.84574978672468 - type: euclidean_spearman value: 79.89787150656818 - type: manhattan_pearson value: 81.63076538567131 - type: manhattan_spearman value: 79.69867352121841 task: type: STS - dataset: config: default name: MTEB STS12 revision: a0d554a64d88156834ff5ae9920b964011b16384 split: test type: mteb/sts12-sts metrics: - type: cos_sim_pearson value: 84.64097921490992 - type: cos_sim_spearman value: 77.25370084896514 - type: euclidean_pearson value: 82.71210826468788 - type: euclidean_spearman value: 78.50445584994826 - type: manhattan_pearson value: 82.92580164330298 - type: manhattan_spearman value: 78.69686891301019 task: type: STS - dataset: config: default name: MTEB STS13 revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca split: test type: mteb/sts13-sts metrics: - type: cos_sim_pearson value: 87.24596417308994 - type: cos_sim_spearman value: 87.79454220555091 - type: euclidean_pearson value: 87.40242561671164 - type: euclidean_spearman value: 88.25955597373556 - type: manhattan_pearson value: 87.25160240485849 - type: manhattan_spearman value: 88.155794979818 task: type: STS - dataset: config: default name: MTEB STS14 revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 split: test type: mteb/sts14-sts metrics: - type: cos_sim_pearson value: 84.44914233422564 - type: cos_sim_spearman value: 82.91015471820322 - type: euclidean_pearson value: 84.7206656630327 - type: euclidean_spearman value: 83.86408872059216 - type: manhattan_pearson value: 84.72816725158454 - type: manhattan_spearman value: 84.01603388572788 task: type: STS - dataset: config: default name: MTEB STS15 revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 split: test type: mteb/sts15-sts metrics: - type: cos_sim_pearson value: 87.6168026237477 - type: cos_sim_spearman value: 88.45414278092397 - type: euclidean_pearson value: 88.57023240882022 - type: euclidean_spearman value: 89.04102190922094 - type: manhattan_pearson value: 88.66695535796354 - type: manhattan_spearman value: 89.19898476680969 task: type: STS - dataset: config: default name: MTEB STS16 revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 split: test type: mteb/sts16-sts metrics: - type: cos_sim_pearson value: 84.27925826089424 - type: cos_sim_spearman value: 85.45291099550461 - type: euclidean_pearson value: 83.63853036580834 - type: euclidean_spearman value: 84.33468035821484 - type: manhattan_pearson value: 83.72778773251596 - type: manhattan_spearman value: 84.51583132445376 task: type: STS - dataset: config: en-en name: MTEB STS17 (en-en) revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d split: test type: mteb/sts17-crosslingual-sts metrics: - type: cos_sim_pearson value: 89.67375185692552 - type: cos_sim_spearman value: 90.32542469203855 - type: euclidean_pearson value: 89.63513717951847 - type: euclidean_spearman value: 89.87760271003745 - type: manhattan_pearson value: 89.28381452982924 - type: manhattan_spearman value: 89.53568197785721 task: type: STS - dataset: config: en name: MTEB STS22 (en) revision: eea2b4fe26a775864c896887d910b76a8098ad3f split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 66.24644693819846 - type: cos_sim_spearman value: 66.09889420525377 - type: euclidean_pearson value: 63.72551583520747 - type: euclidean_spearman value: 63.01385470780679 - type: manhattan_pearson value: 64.09258157214097 - type: manhattan_spearman value: 63.080517752822594 task: type: STS - dataset: config: default name: MTEB STSBenchmark revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 split: test type: mteb/stsbenchmark-sts metrics: - type: cos_sim_pearson value: 86.27321463839989 - type: cos_sim_spearman value: 86.37572865993327 - type: euclidean_pearson value: 86.36268020198149 - type: euclidean_spearman value: 86.31089339478922 - type: manhattan_pearson value: 86.4260445761947 - type: manhattan_spearman value: 86.45885895320457 task: type: STS - dataset: config: default name: MTEB SciDocsRR revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab split: test type: mteb/scidocs-reranking metrics: - type: map value: 86.52456702387798 - type: mrr value: 96.34556529164372 task: type: Reranking - dataset: config: default name: MTEB SciFact revision: 0228b52cf27578f30900b9e5271d331663a030d7 split: test type: mteb/scifact metrics: - type: map_at_1 value: 61.99400000000001 - type: map_at_10 value: 73.38799999999999 - type: map_at_100 value: 73.747 - type: map_at_1000 value: 73.75 - type: map_at_3 value: 70.04599999999999 - type: map_at_5 value: 72.095 - type: mrr_at_1 value: 65.0 - type: mrr_at_10 value: 74.42800000000001 - type: mrr_at_100 value: 74.722 - type: mrr_at_1000 value: 74.725 - type: mrr_at_3 value: 72.056 - type: mrr_at_5 value: 73.60600000000001 - type: ndcg_at_1 value: 65.0 - type: ndcg_at_10 value: 78.435 - type: ndcg_at_100 value: 79.922 - type: ndcg_at_1000 value: 80.00500000000001 - type: ndcg_at_3 value: 73.05199999999999 - type: ndcg_at_5 value: 75.98 - type: precision_at_1 value: 65.0 - type: precision_at_10 value: 10.5 - type: precision_at_100 value: 1.123 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 28.555999999999997 - type: precision_at_5 value: 19.0 - type: recall_at_1 value: 61.99400000000001 - type: recall_at_10 value: 92.72200000000001 - type: recall_at_100 value: 99.333 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 78.739 - type: recall_at_5 value: 85.828 task: type: Retrieval - dataset: config: default name: MTEB SprintDuplicateQuestions revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 split: test type: mteb/sprintduplicatequestions-pairclassification metrics: - type: cos_sim_accuracy value: 99.79009900990098 - type: cos_sim_ap value: 95.3203137438653 - type: cos_sim_f1 value: 89.12386706948641 - type: cos_sim_precision value: 89.75659229208925 - type: cos_sim_recall value: 88.5 - type: dot_accuracy value: 99.67821782178218 - type: dot_ap value: 89.94069840000675 - type: dot_f1 value: 83.45902463549521 - type: dot_precision value: 83.9231547017189 - type: dot_recall value: 83.0 - type: euclidean_accuracy value: 99.78613861386138 - type: euclidean_ap value: 95.10648259135526 - type: euclidean_f1 value: 88.77338877338877 - type: euclidean_precision value: 92.42424242424242 - type: euclidean_recall value: 85.39999999999999 - type: manhattan_accuracy value: 99.7950495049505 - type: manhattan_ap value: 95.29987661320946 - type: manhattan_f1 value: 89.21313183949972 - type: manhattan_precision value: 93.14472252448314 - type: manhattan_recall value: 85.6 - type: max_accuracy value: 99.7950495049505 - type: max_ap value: 95.3203137438653 - type: max_f1 value: 89.21313183949972 task: type: PairClassification - dataset: config: default name: MTEB StackExchangeClustering revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 split: test type: mteb/stackexchange-clustering metrics: - type: v_measure value: 67.65446577183913 task: type: Clustering - dataset: config: default name: MTEB StackExchangeClusteringP2P revision: 815ca46b2622cec33ccafc3735d572c266efdb44 split: test type: mteb/stackexchange-clustering-p2p metrics: - type: v_measure value: 46.30749237193961 task: type: Clustering - dataset: config: default name: MTEB StackOverflowDupQuestions revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 split: test type: mteb/stackoverflowdupquestions-reranking metrics: - type: map value: 54.91481849959949 - type: mrr value: 55.853506175197346 task: type: Reranking - dataset: config: default name: MTEB SummEval revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c split: test type: mteb/summeval metrics: - type: cos_sim_pearson value: 30.08196549170419 - type: cos_sim_spearman value: 31.16661390597077 - type: dot_pearson value: 29.892258410943466 - type: dot_spearman value: 30.51328811965085 task: type: Summarization - dataset: config: default name: MTEB TRECCOVID revision: None split: test type: mteb/trec-covid metrics: - type: map_at_1 value: 0.23900000000000002 - type: map_at_10 value: 2.173 - type: map_at_100 value: 14.24 - type: map_at_1000 value: 35.309000000000005 - type: map_at_3 value: 0.7100000000000001 - type: map_at_5 value: 1.163 - type: mrr_at_1 value: 92.0 - type: mrr_at_10 value: 96.0 - type: mrr_at_100 value: 96.0 - type: mrr_at_1000 value: 96.0 - type: mrr_at_3 value: 96.0 - type: mrr_at_5 value: 96.0 - type: ndcg_at_1 value: 90.0 - type: ndcg_at_10 value: 85.382 - type: ndcg_at_100 value: 68.03 - type: ndcg_at_1000 value: 61.021 - type: ndcg_at_3 value: 89.765 - type: ndcg_at_5 value: 88.444 - type: precision_at_1 value: 92.0 - type: precision_at_10 value: 88.0 - type: precision_at_100 value: 70.02000000000001 - type: precision_at_1000 value: 26.984 - type: precision_at_3 value: 94.0 - type: precision_at_5 value: 92.80000000000001 - type: recall_at_1 value: 0.23900000000000002 - type: recall_at_10 value: 2.313 - type: recall_at_100 value: 17.049 - type: recall_at_1000 value: 57.489999999999995 - type: recall_at_3 value: 0.737 - type: recall_at_5 value: 1.221 task: type: Retrieval - dataset: config: default name: MTEB Touche2020 revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f split: test type: mteb/touche2020 metrics: - type: map_at_1 value: 2.75 - type: map_at_10 value: 11.29 - type: map_at_100 value: 18.032999999999998 - type: map_at_1000 value: 19.746 - type: map_at_3 value: 6.555 - type: map_at_5 value: 8.706999999999999 - type: mrr_at_1 value: 34.694 - type: mrr_at_10 value: 50.55 - type: mrr_at_100 value: 51.659 - type: mrr_at_1000 value: 51.659 - type: mrr_at_3 value: 47.278999999999996 - type: mrr_at_5 value: 49.728 - type: ndcg_at_1 value: 32.653 - type: ndcg_at_10 value: 27.894000000000002 - type: ndcg_at_100 value: 39.769 - type: ndcg_at_1000 value: 51.495999999999995 - type: ndcg_at_3 value: 32.954 - type: ndcg_at_5 value: 31.502999999999997 - type: precision_at_1 value: 34.694 - type: precision_at_10 value: 23.265 - type: precision_at_100 value: 7.898 - type: precision_at_1000 value: 1.58 - type: precision_at_3 value: 34.694 - type: precision_at_5 value: 31.429000000000002 - type: recall_at_1 value: 2.75 - type: recall_at_10 value: 16.953 - type: recall_at_100 value: 48.68 - type: recall_at_1000 value: 85.18599999999999 - type: recall_at_3 value: 7.710999999999999 - type: recall_at_5 value: 11.484 task: type: Retrieval - dataset: config: default name: MTEB ToxicConversationsClassification revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c split: test type: mteb/toxic_conversations_50k metrics: - type: accuracy value: 82.66099999999999 - type: ap value: 25.555698090238337 - type: f1 value: 66.48402012461622 task: type: Classification - dataset: config: default name: MTEB TweetSentimentExtractionClassification revision: d604517c81ca91fe16a244d1248fc021f9ecee7a split: test type: mteb/tweet_sentiment_extraction metrics: - type: accuracy value: 72.94567062818335 - type: f1 value: 73.28139189595674 task: type: Classification - dataset: config: default name: MTEB TwentyNewsgroupsClustering revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 split: test type: mteb/twentynewsgroups-clustering metrics: - type: v_measure value: 49.581627240203474 task: type: Clustering - dataset: config: default name: MTEB TwitterSemEval2015 revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 split: test type: mteb/twittersemeval2015-pairclassification metrics: - type: cos_sim_accuracy value: 87.78089050485785 - type: cos_sim_ap value: 79.64487116574168 - type: cos_sim_f1 value: 72.46563021970964 - type: cos_sim_precision value: 70.62359128474831 - type: cos_sim_recall value: 74.40633245382587 - type: dot_accuracy value: 86.2609524944865 - type: dot_ap value: 75.513046857613 - type: dot_f1 value: 68.58213616489695 - type: dot_precision value: 65.12455516014235 - type: dot_recall value: 72.42744063324538 - type: euclidean_accuracy value: 87.6080348095607 - type: euclidean_ap value: 79.00204933649795 - type: euclidean_f1 value: 72.14495342605589 - type: euclidean_precision value: 69.85421299728193 - type: euclidean_recall value: 74.5910290237467 - type: manhattan_accuracy value: 87.59611372712642 - type: manhattan_ap value: 78.78523756706264 - type: manhattan_f1 value: 71.86499137718648 - type: manhattan_precision value: 67.39833641404806 - type: manhattan_recall value: 76.96569920844327 - type: max_accuracy value: 87.78089050485785 - type: max_ap value: 79.64487116574168 - type: max_f1 value: 72.46563021970964 task: type: PairClassification - dataset: config: default name: MTEB TwitterURLCorpus revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf split: test type: mteb/twitterurlcorpus-pairclassification metrics: - type: cos_sim_accuracy value: 89.98719292117825 - type: cos_sim_ap value: 87.58146137353202 - type: cos_sim_f1 value: 80.28543232369239 - type: cos_sim_precision value: 79.1735289714029 - type: cos_sim_recall value: 81.42901139513397 - type: dot_accuracy value: 88.9199363526992 - type: dot_ap value: 84.98499998630417 - type: dot_f1 value: 78.21951400757969 - type: dot_precision value: 75.58523624874336 - type: dot_recall value: 81.04404065291038 - type: euclidean_accuracy value: 89.77374160748244 - type: euclidean_ap value: 87.35151562835209 - type: euclidean_f1 value: 79.92160922940393 - type: euclidean_precision value: 76.88531587933979 - type: euclidean_recall value: 83.20757622420696 - type: manhattan_accuracy value: 89.72717041176699 - type: manhattan_ap value: 87.34065592142515 - type: manhattan_f1 value: 79.85603419187943 - type: manhattan_precision value: 77.82243332115455 - type: manhattan_recall value: 81.99876809362489 - type: max_accuracy value: 89.98719292117825 - type: max_ap value: 87.58146137353202 - type: max_f1 value: 80.28543232369239 task: type: PairClassification - dataset: config: default name: MTEB AFQMC revision: b44c3b011063adb25877c13823db83bb193913c4 split: validation type: C-MTEB/AFQMC metrics: - type: cos_sim_pearson value: 53.45954203592337 - type: cos_sim_spearman value: 58.42154680418638 - type: euclidean_pearson value: 56.41543791722753 - type: euclidean_spearman value: 58.39328016640146 - type: manhattan_pearson value: 56.318510356833876 - type: manhattan_spearman value: 58.28423447818184 task: type: STS - dataset: config: default name: MTEB ATEC revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865 split: test type: C-MTEB/ATEC metrics: - type: cos_sim_pearson value: 50.78356460675945 - type: cos_sim_spearman value: 55.6530411663269 - type: euclidean_pearson value: 56.50763660417816 - type: euclidean_spearman value: 55.733823335669065 - type: manhattan_pearson value: 56.45323093512866 - type: manhattan_spearman value: 55.63248619032702 task: type: STS - dataset: config: zh name: MTEB AmazonReviewsClassification (zh) revision: 1399c76144fd37290681b995c656ef9b2e06e26d split: test type: mteb/amazon_reviews_multi metrics: - type: accuracy value: 47.209999999999994 - type: f1 value: 46.08892432018655 task: type: Classification - dataset: config: default name: MTEB BQ revision: e3dda5e115e487b39ec7e618c0c6a29137052a55 split: test type: C-MTEB/BQ metrics: - type: cos_sim_pearson value: 70.25573992001478 - type: cos_sim_spearman value: 73.85247134951433 - type: euclidean_pearson value: 72.60033082168442 - type: euclidean_spearman value: 73.72445893756499 - type: manhattan_pearson value: 72.59932284620231 - type: manhattan_spearman value: 73.68002490614583 task: type: STS - dataset: config: default name: MTEB CLSClusteringP2P revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476 split: test type: C-MTEB/CLSClusteringP2P metrics: - type: v_measure value: 45.21317724305628 task: type: Clustering - dataset: config: default name: MTEB CLSClusteringS2S revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f split: test type: C-MTEB/CLSClusteringS2S metrics: - type: v_measure value: 42.49825170976724 task: type: Clustering - dataset: config: default name: MTEB CMedQAv1 revision: 8d7f1e942507dac42dc58017c1a001c3717da7df split: test type: C-MTEB/CMedQAv1-reranking metrics: - type: map value: 88.15661686810597 - type: mrr value: 90.11222222222223 task: type: Reranking - dataset: config: default name: MTEB CMedQAv2 revision: 23d186750531a14a0357ca22cd92d712fd512ea0 split: test type: C-MTEB/CMedQAv2-reranking metrics: - type: map value: 88.1204726064383 - type: mrr value: 90.20142857142858 task: type: Reranking - dataset: config: default name: MTEB CmedqaRetrieval revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301 split: dev type: C-MTEB/CmedqaRetrieval metrics: - type: map_at_1 value: 27.224999999999998 - type: map_at_10 value: 40.169 - type: map_at_100 value: 42.0 - type: map_at_1000 value: 42.109 - type: map_at_3 value: 35.76 - type: map_at_5 value: 38.221 - type: mrr_at_1 value: 40.56 - type: mrr_at_10 value: 49.118 - type: mrr_at_100 value: 50.092999999999996 - type: mrr_at_1000 value: 50.133 - type: mrr_at_3 value: 46.507 - type: mrr_at_5 value: 47.973 - type: ndcg_at_1 value: 40.56 - type: ndcg_at_10 value: 46.972 - type: ndcg_at_100 value: 54.04 - type: ndcg_at_1000 value: 55.862 - type: ndcg_at_3 value: 41.36 - type: ndcg_at_5 value: 43.704 - type: precision_at_1 value: 40.56 - type: precision_at_10 value: 10.302999999999999 - type: precision_at_100 value: 1.606 - type: precision_at_1000 value: 0.184 - type: precision_at_3 value: 23.064 - type: precision_at_5 value: 16.764000000000003 - type: recall_at_1 value: 27.224999999999998 - type: recall_at_10 value: 58.05200000000001 - type: recall_at_100 value: 87.092 - type: recall_at_1000 value: 99.099 - type: recall_at_3 value: 41.373 - type: recall_at_5 value: 48.453 task: type: Retrieval - dataset: config: default name: MTEB Cmnli revision: 41bc36f332156f7adc9e38f53777c959b2ae9766 split: validation type: C-MTEB/CMNLI metrics: - type: cos_sim_accuracy value: 77.40228502705953 - type: cos_sim_ap value: 86.22359172956327 - type: cos_sim_f1 value: 78.96328293736501 - type: cos_sim_precision value: 73.36945615091311 - type: cos_sim_recall value: 85.48047696983868 - type: dot_accuracy value: 75.53818400481059 - type: dot_ap value: 83.70164011305312 - type: dot_f1 value: 77.67298719348754 - type: dot_precision value: 67.49482401656314 - type: dot_recall value: 91.46598082768296 - type: euclidean_accuracy value: 77.94347564642213 - type: euclidean_ap value: 86.4652108728609 - type: euclidean_f1 value: 79.15555555555555 - type: euclidean_precision value: 75.41816641964853 - type: euclidean_recall value: 83.28267477203647 - type: manhattan_accuracy value: 77.45039085989175 - type: manhattan_ap value: 86.09986583900665 - type: manhattan_f1 value: 78.93669264438988 - type: manhattan_precision value: 72.63261296660117 - type: manhattan_recall value: 86.43909282207154 - type: max_accuracy value: 77.94347564642213 - type: max_ap value: 86.4652108728609 - type: max_f1 value: 79.15555555555555 task: type: PairClassification - dataset: config: default name: MTEB CovidRetrieval revision: 1271c7809071a13532e05f25fb53511ffce77117 split: dev type: C-MTEB/CovidRetrieval metrics: - type: map_at_1 value: 69.336 - type: map_at_10 value: 77.16 - type: map_at_100 value: 77.47500000000001 - type: map_at_1000 value: 77.482 - type: map_at_3 value: 75.42999999999999 - type: map_at_5 value: 76.468 - type: mrr_at_1 value: 69.44200000000001 - type: mrr_at_10 value: 77.132 - type: mrr_at_100 value: 77.43299999999999 - type: mrr_at_1000 value: 77.44 - type: mrr_at_3 value: 75.395 - type: mrr_at_5 value: 76.459 - type: ndcg_at_1 value: 69.547 - type: ndcg_at_10 value: 80.794 - type: ndcg_at_100 value: 82.245 - type: ndcg_at_1000 value: 82.40899999999999 - type: ndcg_at_3 value: 77.303 - type: ndcg_at_5 value: 79.168 - type: precision_at_1 value: 69.547 - type: precision_at_10 value: 9.305 - type: precision_at_100 value: 0.9979999999999999 - type: precision_at_1000 value: 0.101 - type: precision_at_3 value: 27.749000000000002 - type: precision_at_5 value: 17.576 - type: recall_at_1 value: 69.336 - type: recall_at_10 value: 92.097 - type: recall_at_100 value: 98.736 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 82.64 - type: recall_at_5 value: 87.144 task: type: Retrieval - dataset: config: default name: MTEB DuRetrieval revision: a1a333e290fe30b10f3f56498e3a0d911a693ced split: dev type: C-MTEB/DuRetrieval metrics: - type: map_at_1 value: 26.817999999999998 - type: map_at_10 value: 82.67 - type: map_at_100 value: 85.304 - type: map_at_1000 value: 85.334 - type: map_at_3 value: 57.336 - type: map_at_5 value: 72.474 - type: mrr_at_1 value: 91.45 - type: mrr_at_10 value: 94.272 - type: mrr_at_100 value: 94.318 - type: mrr_at_1000 value: 94.32000000000001 - type: mrr_at_3 value: 94.0 - type: mrr_at_5 value: 94.17699999999999 - type: ndcg_at_1 value: 91.45 - type: ndcg_at_10 value: 89.404 - type: ndcg_at_100 value: 91.724 - type: ndcg_at_1000 value: 91.973 - type: ndcg_at_3 value: 88.104 - type: ndcg_at_5 value: 87.25699999999999 - type: precision_at_1 value: 91.45 - type: precision_at_10 value: 42.585 - type: precision_at_100 value: 4.838 - type: precision_at_1000 value: 0.49 - type: precision_at_3 value: 78.8 - type: precision_at_5 value: 66.66 - type: recall_at_1 value: 26.817999999999998 - type: recall_at_10 value: 90.67 - type: recall_at_100 value: 98.36200000000001 - type: recall_at_1000 value: 99.583 - type: recall_at_3 value: 59.614999999999995 - type: recall_at_5 value: 77.05199999999999 task: type: Retrieval - dataset: config: default name: MTEB EcomRetrieval revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9 split: dev type: C-MTEB/EcomRetrieval metrics: - type: map_at_1 value: 47.699999999999996 - type: map_at_10 value: 57.589999999999996 - type: map_at_100 value: 58.226 - type: map_at_1000 value: 58.251 - type: map_at_3 value: 55.233 - type: map_at_5 value: 56.633 - type: mrr_at_1 value: 47.699999999999996 - type: mrr_at_10 value: 57.589999999999996 - type: mrr_at_100 value: 58.226 - type: mrr_at_1000 value: 58.251 - type: mrr_at_3 value: 55.233 - type: mrr_at_5 value: 56.633 - type: ndcg_at_1 value: 47.699999999999996 - type: ndcg_at_10 value: 62.505 - type: ndcg_at_100 value: 65.517 - type: ndcg_at_1000 value: 66.19800000000001 - type: ndcg_at_3 value: 57.643 - type: ndcg_at_5 value: 60.181 - type: precision_at_1 value: 47.699999999999996 - type: precision_at_10 value: 7.8 - type: precision_at_100 value: 0.919 - type: precision_at_1000 value: 0.097 - type: precision_at_3 value: 21.532999999999998 - type: precision_at_5 value: 14.16 - type: recall_at_1 value: 47.699999999999996 - type: recall_at_10 value: 78.0 - type: recall_at_100 value: 91.9 - type: recall_at_1000 value: 97.3 - type: recall_at_3 value: 64.60000000000001 - type: recall_at_5 value: 70.8 task: type: Retrieval - dataset: config: default name: MTEB IFlyTek revision: 421605374b29664c5fc098418fe20ada9bd55f8a split: validation type: C-MTEB/IFlyTek-classification metrics: - type: accuracy value: 44.84801846864178 - type: f1 value: 37.47347897956339 task: type: Classification - dataset: config: default name: MTEB JDReview revision: b7c64bd89eb87f8ded463478346f76731f07bf8b split: test type: C-MTEB/JDReview-classification metrics: - type: accuracy value: 85.81613508442777 - type: ap value: 52.68244615477374 - type: f1 value: 80.0445640948843 task: type: Classification - dataset: config: default name: MTEB LCQMC revision: 17f9b096f80380fce5ed12a9be8be7784b337daf split: test type: C-MTEB/LCQMC metrics: - type: cos_sim_pearson value: 69.57786502217138 - type: cos_sim_spearman value: 75.39106054489906 - type: euclidean_pearson value: 73.72082954602402 - type: euclidean_spearman value: 75.14421475913619 - type: manhattan_pearson value: 73.62463076633642 - type: manhattan_spearman value: 75.01301565104112 task: type: STS - dataset: config: default name: MTEB MMarcoReranking revision: None split: dev type: C-MTEB/Mmarco-reranking metrics: - type: map value: 29.143797057999134 - type: mrr value: 28.08174603174603 task: type: Reranking - dataset: config: default name: MTEB MMarcoRetrieval revision: 539bbde593d947e2a124ba72651aafc09eb33fc2 split: dev type: C-MTEB/MMarcoRetrieval metrics: - type: map_at_1 value: 70.492 - type: map_at_10 value: 79.501 - type: map_at_100 value: 79.728 - type: map_at_1000 value: 79.735 - type: map_at_3 value: 77.77 - type: map_at_5 value: 78.851 - type: mrr_at_1 value: 72.822 - type: mrr_at_10 value: 80.001 - type: mrr_at_100 value: 80.19 - type: mrr_at_1000 value: 80.197 - type: mrr_at_3 value: 78.484 - type: mrr_at_5 value: 79.42099999999999 - type: ndcg_at_1 value: 72.822 - type: ndcg_at_10 value: 83.013 - type: ndcg_at_100 value: 84.013 - type: ndcg_at_1000 value: 84.20400000000001 - type: ndcg_at_3 value: 79.728 - type: ndcg_at_5 value: 81.542 - type: precision_at_1 value: 72.822 - type: precision_at_10 value: 9.917 - type: precision_at_100 value: 1.042 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 29.847 - type: precision_at_5 value: 18.871 - type: recall_at_1 value: 70.492 - type: recall_at_10 value: 93.325 - type: recall_at_100 value: 97.822 - type: recall_at_1000 value: 99.319 - type: recall_at_3 value: 84.636 - type: recall_at_5 value: 88.93100000000001 task: type: Retrieval - dataset: config: zh-CN name: MTEB MassiveIntentClassification (zh-CN) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 76.88298587760592 - type: f1 value: 73.89001762017176 task: type: Classification - dataset: config: zh-CN name: MTEB MassiveScenarioClassification (zh-CN) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 80.76328177538669 - type: f1 value: 80.24718532423358 task: type: Classification - dataset: config: default name: MTEB MedicalRetrieval revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6 split: dev type: C-MTEB/MedicalRetrieval metrics: - type: map_at_1 value: 49.6 - type: map_at_10 value: 55.620999999999995 - type: map_at_100 value: 56.204 - type: map_at_1000 value: 56.251 - type: map_at_3 value: 54.132999999999996 - type: map_at_5 value: 54.933 - type: mrr_at_1 value: 49.7 - type: mrr_at_10 value: 55.67100000000001 - type: mrr_at_100 value: 56.254000000000005 - type: mrr_at_1000 value: 56.301 - type: mrr_at_3 value: 54.18300000000001 - type: mrr_at_5 value: 54.983000000000004 - type: ndcg_at_1 value: 49.6 - type: ndcg_at_10 value: 58.645 - type: ndcg_at_100 value: 61.789 - type: ndcg_at_1000 value: 63.219 - type: ndcg_at_3 value: 55.567 - type: ndcg_at_5 value: 57.008 - type: precision_at_1 value: 49.6 - type: precision_at_10 value: 6.819999999999999 - type: precision_at_100 value: 0.836 - type: precision_at_1000 value: 0.095 - type: precision_at_3 value: 19.900000000000002 - type: precision_at_5 value: 12.64 - type: recall_at_1 value: 49.6 - type: recall_at_10 value: 68.2 - type: recall_at_100 value: 83.6 - type: recall_at_1000 value: 95.3 - type: recall_at_3 value: 59.699999999999996 - type: recall_at_5 value: 63.2 task: type: Retrieval - dataset: config: default name: MTEB MultilingualSentiment revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a split: validation type: C-MTEB/MultilingualSentiment-classification metrics: - type: accuracy value: 74.45666666666666 - type: f1 value: 74.32582402190089 task: type: Classification - dataset: config: default name: MTEB Ocnli revision: 66e76a618a34d6d565d5538088562851e6daa7ec split: validation type: C-MTEB/OCNLI metrics: - type: cos_sim_accuracy value: 80.67135896047645 - type: cos_sim_ap value: 87.60421240712051 - type: cos_sim_f1 value: 82.1304131408661 - type: cos_sim_precision value: 77.68361581920904 - type: cos_sim_recall value: 87.11721224920802 - type: dot_accuracy value: 79.04710341093666 - type: dot_ap value: 85.6370059719336 - type: dot_f1 value: 80.763723150358 - type: dot_precision value: 73.69337979094077 - type: dot_recall value: 89.33474128827878 - type: euclidean_accuracy value: 81.05035192203573 - type: euclidean_ap value: 87.7880240053663 - type: euclidean_f1 value: 82.50244379276637 - type: euclidean_precision value: 76.7970882620564 - type: euclidean_recall value: 89.1235480464625 - type: manhattan_accuracy value: 80.61721710882512 - type: manhattan_ap value: 87.43568120591175 - type: manhattan_f1 value: 81.89526184538653 - type: manhattan_precision value: 77.5992438563327 - type: manhattan_recall value: 86.6948257655755 - type: max_accuracy value: 81.05035192203573 - type: max_ap value: 87.7880240053663 - type: max_f1 value: 82.50244379276637 task: type: PairClassification - dataset: config: default name: MTEB OnlineShopping revision: e610f2ebd179a8fda30ae534c3878750a96db120 split: test type: C-MTEB/OnlineShopping-classification metrics: - type: accuracy value: 93.5 - type: ap value: 91.31357903446782 - type: f1 value: 93.48088994006616 task: type: Classification - dataset: config: default name: MTEB PAWSX revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1 split: test type: C-MTEB/PAWSX metrics: - type: cos_sim_pearson value: 36.93293453538077 - type: cos_sim_spearman value: 42.45972506308574 - type: euclidean_pearson value: 42.34945133152159 - type: euclidean_spearman value: 42.331610303674644 - type: manhattan_pearson value: 42.31455070249498 - type: manhattan_spearman value: 42.19887982891834 task: type: STS - dataset: config: default name: MTEB QBQTC revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7 split: test type: C-MTEB/QBQTC metrics: - type: cos_sim_pearson value: 33.683290790043785 - type: cos_sim_spearman value: 35.149171171202994 - type: euclidean_pearson value: 32.33806561267862 - type: euclidean_spearman value: 34.483576387347966 - type: manhattan_pearson value: 32.47629754599608 - type: manhattan_spearman value: 34.66434471867615 task: type: STS - dataset: config: zh name: MTEB STS22 (zh) revision: eea2b4fe26a775864c896887d910b76a8098ad3f split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 66.46322760516104 - type: cos_sim_spearman value: 67.398478319726 - type: euclidean_pearson value: 64.7223480293625 - type: euclidean_spearman value: 66.83118568812951 - type: manhattan_pearson value: 64.88440039828305 - type: manhattan_spearman value: 66.80429458952257 task: type: STS - dataset: config: default name: MTEB STSB revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0 split: test type: C-MTEB/STSB metrics: - type: cos_sim_pearson value: 79.08991383232105 - type: cos_sim_spearman value: 79.39715677296854 - type: euclidean_pearson value: 78.63201279320496 - type: euclidean_spearman value: 79.40262660785731 - type: manhattan_pearson value: 78.98138363146906 - type: manhattan_spearman value: 79.79968413014194 task: type: STS - dataset: config: default name: MTEB T2Reranking revision: 76631901a18387f85eaa53e5450019b87ad58ef9 split: dev type: C-MTEB/T2Reranking metrics: - type: map value: 67.43289278789972 - type: mrr value: 77.53012460908535 task: type: Reranking - dataset: config: default name: MTEB T2Retrieval revision: 8731a845f1bf500a4f111cf1070785c793d10e64 split: dev type: C-MTEB/T2Retrieval metrics: - type: map_at_1 value: 27.733999999999998 - type: map_at_10 value: 78.24799999999999 - type: map_at_100 value: 81.765 - type: map_at_1000 value: 81.824 - type: map_at_3 value: 54.92 - type: map_at_5 value: 67.61399999999999 - type: mrr_at_1 value: 90.527 - type: mrr_at_10 value: 92.843 - type: mrr_at_100 value: 92.927 - type: mrr_at_1000 value: 92.93 - type: mrr_at_3 value: 92.45100000000001 - type: mrr_at_5 value: 92.693 - type: ndcg_at_1 value: 90.527 - type: ndcg_at_10 value: 85.466 - type: ndcg_at_100 value: 88.846 - type: ndcg_at_1000 value: 89.415 - type: ndcg_at_3 value: 86.768 - type: ndcg_at_5 value: 85.46000000000001 - type: precision_at_1 value: 90.527 - type: precision_at_10 value: 42.488 - type: precision_at_100 value: 5.024 - type: precision_at_1000 value: 0.516 - type: precision_at_3 value: 75.907 - type: precision_at_5 value: 63.727000000000004 - type: recall_at_1 value: 27.733999999999998 - type: recall_at_10 value: 84.346 - type: recall_at_100 value: 95.536 - type: recall_at_1000 value: 98.42999999999999 - type: recall_at_3 value: 56.455 - type: recall_at_5 value: 70.755 task: type: Retrieval - dataset: config: default name: MTEB TNews revision: 317f262bf1e6126357bbe89e875451e4b0938fe4 split: validation type: C-MTEB/TNews-classification metrics: - type: accuracy value: 49.952000000000005 - type: f1 value: 48.264617195258054 task: type: Classification - dataset: config: default name: MTEB ThuNewsClusteringP2P revision: 5798586b105c0434e4f0fe5e767abe619442cf93 split: test type: C-MTEB/ThuNewsClusteringP2P metrics: - type: v_measure value: 68.23769904483508 task: type: Clustering - dataset: config: default name: MTEB ThuNewsClusteringS2S revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d split: test type: C-MTEB/ThuNewsClusteringS2S metrics: - type: v_measure value: 62.50294403136556 task: type: Clustering - dataset: config: default name: MTEB VideoRetrieval revision: 58c2597a5943a2ba48f4668c3b90d796283c5639 split: dev type: C-MTEB/VideoRetrieval metrics: - type: map_at_1 value: 54.0 - type: map_at_10 value: 63.668 - type: map_at_100 value: 64.217 - type: map_at_1000 value: 64.23100000000001 - type: map_at_3 value: 61.7 - type: map_at_5 value: 62.870000000000005 - type: mrr_at_1 value: 54.0 - type: mrr_at_10 value: 63.668 - type: mrr_at_100 value: 64.217 - type: mrr_at_1000 value: 64.23100000000001 - type: mrr_at_3 value: 61.7 - type: mrr_at_5 value: 62.870000000000005 - type: ndcg_at_1 value: 54.0 - type: ndcg_at_10 value: 68.11399999999999 - type: ndcg_at_100 value: 70.723 - type: ndcg_at_1000 value: 71.123 - type: ndcg_at_3 value: 64.074 - type: ndcg_at_5 value: 66.178 - type: precision_at_1 value: 54.0 - type: precision_at_10 value: 8.200000000000001 - type: precision_at_100 value: 0.941 - type: precision_at_1000 value: 0.097 - type: precision_at_3 value: 23.633000000000003 - type: precision_at_5 value: 15.2 - type: recall_at_1 value: 54.0 - type: recall_at_10 value: 82.0 - type: recall_at_100 value: 94.1 - type: recall_at_1000 value: 97.3 - type: recall_at_3 value: 70.89999999999999 - type: recall_at_5 value: 76.0 task: type: Retrieval - dataset: config: default name: MTEB Waimai revision: 339287def212450dcaa9df8c22bf93e9980c7023 split: test type: C-MTEB/waimai-classification metrics: - type: accuracy value: 86.63000000000001 - type: ap value: 69.99457882599567 - type: f1 value: 85.07735617998541 task: type: Classification - dataset: config: default name: MTEB 8TagsClustering revision: None split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 44.594104491193555 task: type: Clustering - dataset: config: default name: MTEB AllegroReviews revision: None split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 63.97614314115309 - type: f1 value: 52.15634261679283 task: type: Classification - dataset: config: default name: MTEB ArguAna-PL revision: 63fc86750af76253e8c760fc9e534bbf24d260a2 split: test type: clarin-knext/arguana-pl metrics: - type: map_at_1 value: 32.646 - type: map_at_10 value: 47.963 - type: map_at_100 value: 48.789 - type: map_at_1000 value: 48.797000000000004 - type: map_at_3 value: 43.196 - type: map_at_5 value: 46.016 - type: mrr_at_1 value: 33.073 - type: mrr_at_10 value: 48.126000000000005 - type: mrr_at_100 value: 48.946 - type: mrr_at_1000 value: 48.953 - type: mrr_at_3 value: 43.374 - type: mrr_at_5 value: 46.147 - type: ndcg_at_1 value: 32.646 - type: ndcg_at_10 value: 56.481 - type: ndcg_at_100 value: 59.922 - type: ndcg_at_1000 value: 60.07 - type: ndcg_at_3 value: 46.675 - type: ndcg_at_5 value: 51.76500000000001 - type: precision_at_1 value: 32.646 - type: precision_at_10 value: 8.371 - type: precision_at_100 value: 0.9860000000000001 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 18.919 - type: precision_at_5 value: 13.825999999999999 - type: recall_at_1 value: 32.646 - type: recall_at_10 value: 83.71300000000001 - type: recall_at_100 value: 98.578 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 56.757000000000005 - type: recall_at_5 value: 69.132 task: type: Retrieval - dataset: config: default name: MTEB CBD revision: None split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 68.56 - type: ap value: 23.310493680488513 - type: f1 value: 58.85369533105693 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: None split: test type: PL-MTEB/cdsce-pairclassification metrics: - type: cos_sim_accuracy value: 88.5 - type: cos_sim_ap value: 72.42140924378361 - type: cos_sim_f1 value: 66.0919540229885 - type: cos_sim_precision value: 72.78481012658227 - type: cos_sim_recall value: 60.526315789473685 - type: dot_accuracy value: 88.5 - type: dot_ap value: 72.42140924378361 - type: dot_f1 value: 66.0919540229885 - type: dot_precision value: 72.78481012658227 - type: dot_recall value: 60.526315789473685 - type: euclidean_accuracy value: 88.5 - type: euclidean_ap value: 72.42140924378361 - type: euclidean_f1 value: 66.0919540229885 - type: euclidean_precision value: 72.78481012658227 - type: euclidean_recall value: 60.526315789473685 - type: manhattan_accuracy value: 88.5 - type: manhattan_ap value: 72.49745515311696 - type: manhattan_f1 value: 66.0968660968661 - type: manhattan_precision value: 72.04968944099379 - type: manhattan_recall value: 61.05263157894737 - type: max_accuracy value: 88.5 - type: max_ap value: 72.49745515311696 - type: max_f1 value: 66.0968660968661 task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: None split: test type: PL-MTEB/cdscr-sts metrics: - type: cos_sim_pearson value: 90.32269765590145 - type: cos_sim_spearman value: 89.73666311491672 - type: euclidean_pearson value: 88.2933868516544 - type: euclidean_spearman value: 89.73666311491672 - type: manhattan_pearson value: 88.33474590219448 - type: manhattan_spearman value: 89.8548364866583 task: type: STS - dataset: config: default name: MTEB DBPedia-PL revision: 76afe41d9af165cc40999fcaa92312b8b012064a split: test type: clarin-knext/dbpedia-pl metrics: - type: map_at_1 value: 7.632999999999999 - type: map_at_10 value: 16.426 - type: map_at_100 value: 22.651 - type: map_at_1000 value: 24.372 - type: map_at_3 value: 11.706 - type: map_at_5 value: 13.529 - type: mrr_at_1 value: 60.75000000000001 - type: mrr_at_10 value: 68.613 - type: mrr_at_100 value: 69.001 - type: mrr_at_1000 value: 69.021 - type: mrr_at_3 value: 67.0 - type: mrr_at_5 value: 67.925 - type: ndcg_at_1 value: 49.875 - type: ndcg_at_10 value: 36.978 - type: ndcg_at_100 value: 40.031 - type: ndcg_at_1000 value: 47.566 - type: ndcg_at_3 value: 41.148 - type: ndcg_at_5 value: 38.702 - type: precision_at_1 value: 60.75000000000001 - type: precision_at_10 value: 29.7 - type: precision_at_100 value: 9.278 - type: precision_at_1000 value: 2.099 - type: precision_at_3 value: 44.0 - type: precision_at_5 value: 37.6 - type: recall_at_1 value: 7.632999999999999 - type: recall_at_10 value: 22.040000000000003 - type: recall_at_100 value: 44.024 - type: recall_at_1000 value: 67.848 - type: recall_at_3 value: 13.093 - type: recall_at_5 value: 15.973 task: type: Retrieval - dataset: config: default name: MTEB FiQA-PL revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e split: test type: clarin-knext/fiqa-pl metrics: - type: map_at_1 value: 15.473 - type: map_at_10 value: 24.579 - type: map_at_100 value: 26.387 - type: map_at_1000 value: 26.57 - type: map_at_3 value: 21.278 - type: map_at_5 value: 23.179 - type: mrr_at_1 value: 30.709999999999997 - type: mrr_at_10 value: 38.994 - type: mrr_at_100 value: 39.993 - type: mrr_at_1000 value: 40.044999999999995 - type: mrr_at_3 value: 36.342999999999996 - type: mrr_at_5 value: 37.846999999999994 - type: ndcg_at_1 value: 30.709999999999997 - type: ndcg_at_10 value: 31.608999999999998 - type: ndcg_at_100 value: 38.807 - type: ndcg_at_1000 value: 42.208 - type: ndcg_at_3 value: 28.086 - type: ndcg_at_5 value: 29.323 - type: precision_at_1 value: 30.709999999999997 - type: precision_at_10 value: 8.688 - type: precision_at_100 value: 1.608 - type: precision_at_1000 value: 0.22100000000000003 - type: precision_at_3 value: 18.724 - type: precision_at_5 value: 13.950999999999999 - type: recall_at_1 value: 15.473 - type: recall_at_10 value: 38.361000000000004 - type: recall_at_100 value: 65.2 - type: recall_at_1000 value: 85.789 - type: recall_at_3 value: 25.401 - type: recall_at_5 value: 30.875999999999998 task: type: Retrieval - dataset: config: default name: MTEB HotpotQA-PL revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907 split: test type: clarin-knext/hotpotqa-pl metrics: - type: map_at_1 value: 38.096000000000004 - type: map_at_10 value: 51.44499999999999 - type: map_at_100 value: 52.325 - type: map_at_1000 value: 52.397000000000006 - type: map_at_3 value: 48.626999999999995 - type: map_at_5 value: 50.342 - type: mrr_at_1 value: 76.19200000000001 - type: mrr_at_10 value: 81.191 - type: mrr_at_100 value: 81.431 - type: mrr_at_1000 value: 81.443 - type: mrr_at_3 value: 80.30199999999999 - type: mrr_at_5 value: 80.85900000000001 - type: ndcg_at_1 value: 76.19200000000001 - type: ndcg_at_10 value: 60.9 - type: ndcg_at_100 value: 64.14699999999999 - type: ndcg_at_1000 value: 65.647 - type: ndcg_at_3 value: 56.818000000000005 - type: ndcg_at_5 value: 59.019999999999996 - type: precision_at_1 value: 76.19200000000001 - type: precision_at_10 value: 12.203 - type: precision_at_100 value: 1.478 - type: precision_at_1000 value: 0.168 - type: precision_at_3 value: 34.616 - type: precision_at_5 value: 22.515 - type: recall_at_1 value: 38.096000000000004 - type: recall_at_10 value: 61.013 - type: recall_at_100 value: 73.90299999999999 - type: recall_at_1000 value: 83.91 - type: recall_at_3 value: 51.92400000000001 - type: recall_at_5 value: 56.286 task: type: Retrieval - dataset: config: default name: MTEB MSMARCO-PL revision: 8634c07806d5cce3a6138e260e59b81760a0a640 split: test type: clarin-knext/msmarco-pl metrics: - type: map_at_1 value: 1.548 - type: map_at_10 value: 11.049000000000001 - type: map_at_100 value: 28.874 - type: map_at_1000 value: 34.931 - type: map_at_3 value: 4.162 - type: map_at_5 value: 6.396 - type: mrr_at_1 value: 90.69800000000001 - type: mrr_at_10 value: 92.093 - type: mrr_at_100 value: 92.345 - type: mrr_at_1000 value: 92.345 - type: mrr_at_3 value: 91.86 - type: mrr_at_5 value: 91.86 - type: ndcg_at_1 value: 74.031 - type: ndcg_at_10 value: 63.978 - type: ndcg_at_100 value: 53.101 - type: ndcg_at_1000 value: 60.675999999999995 - type: ndcg_at_3 value: 71.421 - type: ndcg_at_5 value: 68.098 - type: precision_at_1 value: 90.69800000000001 - type: precision_at_10 value: 71.86 - type: precision_at_100 value: 31.395 - type: precision_at_1000 value: 5.981 - type: precision_at_3 value: 84.49600000000001 - type: precision_at_5 value: 79.07 - type: recall_at_1 value: 1.548 - type: recall_at_10 value: 12.149000000000001 - type: recall_at_100 value: 40.794999999999995 - type: recall_at_1000 value: 67.974 - type: recall_at_3 value: 4.244 - type: recall_at_5 value: 6.608 task: type: Retrieval - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 73.55413584398119 - type: f1 value: 69.65610882318181 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 76.37188971082716 - type: f1 value: 75.64847309941361 task: type: Classification - dataset: config: default name: MTEB NFCorpus-PL revision: 9a6f9567fda928260afed2de480d79c98bf0bec0 split: test type: clarin-knext/nfcorpus-pl metrics: - type: map_at_1 value: 4.919 - type: map_at_10 value: 10.834000000000001 - type: map_at_100 value: 13.38 - type: map_at_1000 value: 14.581 - type: map_at_3 value: 8.198 - type: map_at_5 value: 9.428 - type: mrr_at_1 value: 41.176 - type: mrr_at_10 value: 50.083 - type: mrr_at_100 value: 50.559 - type: mrr_at_1000 value: 50.604000000000006 - type: mrr_at_3 value: 47.936 - type: mrr_at_5 value: 49.407000000000004 - type: ndcg_at_1 value: 39.628 - type: ndcg_at_10 value: 30.098000000000003 - type: ndcg_at_100 value: 27.061 - type: ndcg_at_1000 value: 35.94 - type: ndcg_at_3 value: 35.135 - type: ndcg_at_5 value: 33.335 - type: precision_at_1 value: 41.176 - type: precision_at_10 value: 22.259999999999998 - type: precision_at_100 value: 6.712 - type: precision_at_1000 value: 1.9060000000000001 - type: precision_at_3 value: 33.23 - type: precision_at_5 value: 29.04 - type: recall_at_1 value: 4.919 - type: recall_at_10 value: 14.196 - type: recall_at_100 value: 26.948 - type: recall_at_1000 value: 59.211000000000006 - type: recall_at_3 value: 9.44 - type: recall_at_5 value: 11.569 task: type: Retrieval - dataset: config: default name: MTEB NQ-PL revision: f171245712cf85dd4700b06bef18001578d0ca8d split: test type: clarin-knext/nq-pl metrics: - type: map_at_1 value: 25.35 - type: map_at_10 value: 37.884 - type: map_at_100 value: 38.955 - type: map_at_1000 value: 39.007999999999996 - type: map_at_3 value: 34.239999999999995 - type: map_at_5 value: 36.398 - type: mrr_at_1 value: 28.737000000000002 - type: mrr_at_10 value: 39.973 - type: mrr_at_100 value: 40.844 - type: mrr_at_1000 value: 40.885 - type: mrr_at_3 value: 36.901 - type: mrr_at_5 value: 38.721 - type: ndcg_at_1 value: 28.708 - type: ndcg_at_10 value: 44.204 - type: ndcg_at_100 value: 48.978 - type: ndcg_at_1000 value: 50.33 - type: ndcg_at_3 value: 37.36 - type: ndcg_at_5 value: 40.912 - type: precision_at_1 value: 28.708 - type: precision_at_10 value: 7.367 - type: precision_at_100 value: 1.0030000000000001 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 17.034 - type: precision_at_5 value: 12.293999999999999 - type: recall_at_1 value: 25.35 - type: recall_at_10 value: 61.411 - type: recall_at_100 value: 82.599 - type: recall_at_1000 value: 92.903 - type: recall_at_3 value: 43.728 - type: recall_at_5 value: 51.854 task: type: Retrieval - dataset: config: default name: MTEB PAC revision: None split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 69.04141326382856 - type: ap value: 77.49422763833996 - type: f1 value: 66.73472657783407 task: type: Classification - dataset: config: default name: MTEB PPC revision: None split: test type: PL-MTEB/ppc-pairclassification metrics: - type: cos_sim_accuracy value: 81.0 - type: cos_sim_ap value: 91.47194213011349 - type: cos_sim_f1 value: 84.73767885532592 - type: cos_sim_precision value: 81.49847094801224 - type: cos_sim_recall value: 88.24503311258279 - type: dot_accuracy value: 81.0 - type: dot_ap value: 91.47194213011349 - type: dot_f1 value: 84.73767885532592 - type: dot_precision value: 81.49847094801224 - type: dot_recall value: 88.24503311258279 - type: euclidean_accuracy value: 81.0 - type: euclidean_ap value: 91.47194213011349 - type: euclidean_f1 value: 84.73767885532592 - type: euclidean_precision value: 81.49847094801224 - type: euclidean_recall value: 88.24503311258279 - type: manhattan_accuracy value: 81.0 - type: manhattan_ap value: 91.46464475050571 - type: manhattan_f1 value: 84.48687350835321 - type: manhattan_precision value: 81.31699846860643 - type: manhattan_recall value: 87.91390728476821 - type: max_accuracy value: 81.0 - type: max_ap value: 91.47194213011349 - type: max_f1 value: 84.73767885532592 task: type: PairClassification - dataset: config: default name: MTEB PSC revision: None split: test type: PL-MTEB/psc-pairclassification metrics: - type: cos_sim_accuracy value: 97.6808905380334 - type: cos_sim_ap value: 99.27948611836348 - type: cos_sim_f1 value: 96.15975422427034 - type: cos_sim_precision value: 96.90402476780186 - type: cos_sim_recall value: 95.42682926829268 - type: dot_accuracy value: 97.6808905380334 - type: dot_ap value: 99.2794861183635 - type: dot_f1 value: 96.15975422427034 - type: dot_precision value: 96.90402476780186 - type: dot_recall value: 95.42682926829268 - type: euclidean_accuracy value: 97.6808905380334 - type: euclidean_ap value: 99.2794861183635 - type: euclidean_f1 value: 96.15975422427034 - type: euclidean_precision value: 96.90402476780186 - type: euclidean_recall value: 95.42682926829268 - type: manhattan_accuracy value: 97.6808905380334 - type: manhattan_ap value: 99.28715055268721 - type: manhattan_f1 value: 96.14791987673343 - type: manhattan_precision value: 97.19626168224299 - type: manhattan_recall value: 95.1219512195122 - type: max_accuracy value: 97.6808905380334 - type: max_ap value: 99.28715055268721 - type: max_f1 value: 96.15975422427034 task: type: PairClassification - dataset: config: default name: MTEB PolEmo2.0-IN revision: None split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 86.16343490304708 - type: f1 value: 83.3442579486744 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: None split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 68.40080971659918 - type: f1 value: 53.13720751142237 task: type: Classification - dataset: config: default name: MTEB Quora-PL revision: 0be27e93455051e531182b85e85e425aba12e9d4 split: test type: clarin-knext/quora-pl metrics: - type: map_at_1 value: 63.322 - type: map_at_10 value: 76.847 - type: map_at_100 value: 77.616 - type: map_at_1000 value: 77.644 - type: map_at_3 value: 73.624 - type: map_at_5 value: 75.603 - type: mrr_at_1 value: 72.88 - type: mrr_at_10 value: 80.376 - type: mrr_at_100 value: 80.604 - type: mrr_at_1000 value: 80.61 - type: mrr_at_3 value: 78.92 - type: mrr_at_5 value: 79.869 - type: ndcg_at_1 value: 72.89999999999999 - type: ndcg_at_10 value: 81.43 - type: ndcg_at_100 value: 83.394 - type: ndcg_at_1000 value: 83.685 - type: ndcg_at_3 value: 77.62599999999999 - type: ndcg_at_5 value: 79.656 - type: precision_at_1 value: 72.89999999999999 - type: precision_at_10 value: 12.548 - type: precision_at_100 value: 1.4869999999999999 - type: precision_at_1000 value: 0.155 - type: precision_at_3 value: 34.027 - type: precision_at_5 value: 22.654 - type: recall_at_1 value: 63.322 - type: recall_at_10 value: 90.664 - type: recall_at_100 value: 97.974 - type: recall_at_1000 value: 99.636 - type: recall_at_3 value: 80.067 - type: recall_at_5 value: 85.526 task: type: Retrieval - dataset: config: default name: MTEB SCIDOCS-PL revision: 45452b03f05560207ef19149545f168e596c9337 split: test type: clarin-knext/scidocs-pl metrics: - type: map_at_1 value: 3.95 - type: map_at_10 value: 9.658999999999999 - type: map_at_100 value: 11.384 - type: map_at_1000 value: 11.677 - type: map_at_3 value: 7.055 - type: map_at_5 value: 8.244 - type: mrr_at_1 value: 19.5 - type: mrr_at_10 value: 28.777 - type: mrr_at_100 value: 29.936 - type: mrr_at_1000 value: 30.009999999999998 - type: mrr_at_3 value: 25.55 - type: mrr_at_5 value: 27.284999999999997 - type: ndcg_at_1 value: 19.5 - type: ndcg_at_10 value: 16.589000000000002 - type: ndcg_at_100 value: 23.879 - type: ndcg_at_1000 value: 29.279 - type: ndcg_at_3 value: 15.719 - type: ndcg_at_5 value: 13.572000000000001 - type: precision_at_1 value: 19.5 - type: precision_at_10 value: 8.62 - type: precision_at_100 value: 1.924 - type: precision_at_1000 value: 0.322 - type: precision_at_3 value: 14.6 - type: precision_at_5 value: 11.78 - type: recall_at_1 value: 3.95 - type: recall_at_10 value: 17.477999999999998 - type: recall_at_100 value: 38.99 - type: recall_at_1000 value: 65.417 - type: recall_at_3 value: 8.883000000000001 - type: recall_at_5 value: 11.933 task: type: Retrieval - dataset: config: default name: MTEB SICK-E-PL revision: None split: test type: PL-MTEB/sicke-pl-pairclassification metrics: - type: cos_sim_accuracy value: 83.48960456583775 - type: cos_sim_ap value: 76.31522115825375 - type: cos_sim_f1 value: 70.35573122529645 - type: cos_sim_precision value: 70.9934735315446 - type: cos_sim_recall value: 69.72934472934473 - type: dot_accuracy value: 83.48960456583775 - type: dot_ap value: 76.31522115825373 - type: dot_f1 value: 70.35573122529645 - type: dot_precision value: 70.9934735315446 - type: dot_recall value: 69.72934472934473 - type: euclidean_accuracy value: 83.48960456583775 - type: euclidean_ap value: 76.31522115825373 - type: euclidean_f1 value: 70.35573122529645 - type: euclidean_precision value: 70.9934735315446 - type: euclidean_recall value: 69.72934472934473 - type: manhattan_accuracy value: 83.46922136159804 - type: manhattan_ap value: 76.18474601388084 - type: manhattan_f1 value: 70.34779490856937 - type: manhattan_precision value: 70.83032490974729 - type: manhattan_recall value: 69.87179487179486 - type: max_accuracy value: 83.48960456583775 - type: max_ap value: 76.31522115825375 - type: max_f1 value: 70.35573122529645 task: type: PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: None split: test type: PL-MTEB/sickr-pl-sts metrics: - type: cos_sim_pearson value: 77.95374883876302 - type: cos_sim_spearman value: 73.77630219171942 - type: euclidean_pearson value: 75.81927069594934 - type: euclidean_spearman value: 73.7763211303831 - type: manhattan_pearson value: 76.03126859057528 - type: manhattan_spearman value: 73.96528138013369 task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: eea2b4fe26a775864c896887d910b76a8098ad3f split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 37.388282764841826 - type: cos_sim_spearman value: 40.83477184710897 - type: euclidean_pearson value: 26.754737044177805 - type: euclidean_spearman value: 40.83477184710897 - type: manhattan_pearson value: 26.760453110872458 - type: manhattan_spearman value: 41.034477441383856 task: type: STS - dataset: config: default name: MTEB SciFact-PL revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e split: test type: clarin-knext/scifact-pl metrics: - type: map_at_1 value: 49.15 - type: map_at_10 value: 61.690999999999995 - type: map_at_100 value: 62.348000000000006 - type: map_at_1000 value: 62.38 - type: map_at_3 value: 58.824 - type: map_at_5 value: 60.662000000000006 - type: mrr_at_1 value: 51.333 - type: mrr_at_10 value: 62.731 - type: mrr_at_100 value: 63.245 - type: mrr_at_1000 value: 63.275000000000006 - type: mrr_at_3 value: 60.667 - type: mrr_at_5 value: 61.93300000000001 - type: ndcg_at_1 value: 51.333 - type: ndcg_at_10 value: 67.168 - type: ndcg_at_100 value: 69.833 - type: ndcg_at_1000 value: 70.56700000000001 - type: ndcg_at_3 value: 62.40599999999999 - type: ndcg_at_5 value: 65.029 - type: precision_at_1 value: 51.333 - type: precision_at_10 value: 9.333 - type: precision_at_100 value: 1.0699999999999998 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 25.333 - type: precision_at_5 value: 17.067 - type: recall_at_1 value: 49.15 - type: recall_at_10 value: 82.533 - type: recall_at_100 value: 94.167 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 69.917 - type: recall_at_5 value: 76.356 task: type: Retrieval - dataset: config: default name: MTEB TRECCOVID-PL revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd split: test type: clarin-knext/trec-covid-pl metrics: - type: map_at_1 value: 0.261 - type: map_at_10 value: 2.1260000000000003 - type: map_at_100 value: 12.171999999999999 - type: map_at_1000 value: 26.884999999999998 - type: map_at_3 value: 0.695 - type: map_at_5 value: 1.134 - type: mrr_at_1 value: 96.0 - type: mrr_at_10 value: 96.952 - type: mrr_at_100 value: 96.952 - type: mrr_at_1000 value: 96.952 - type: mrr_at_3 value: 96.667 - type: mrr_at_5 value: 96.667 - type: ndcg_at_1 value: 92.0 - type: ndcg_at_10 value: 81.193 - type: ndcg_at_100 value: 61.129 - type: ndcg_at_1000 value: 51.157 - type: ndcg_at_3 value: 85.693 - type: ndcg_at_5 value: 84.129 - type: precision_at_1 value: 96.0 - type: precision_at_10 value: 85.39999999999999 - type: precision_at_100 value: 62.03999999999999 - type: precision_at_1000 value: 22.224 - type: precision_at_3 value: 88.0 - type: precision_at_5 value: 88.0 - type: recall_at_1 value: 0.261 - type: recall_at_10 value: 2.262 - type: recall_at_100 value: 14.981 - type: recall_at_1000 value: 46.837 - type: recall_at_3 value: 0.703 - type: recall_at_5 value: 1.172 task: type: Retrieval - dataset: config: default name: MTEB AlloProfClusteringP2P revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b split: test type: lyon-nlp/alloprof metrics: - type: v_measure value: 70.55290063940157 task: type: Clustering - dataset: config: default name: MTEB AlloProfClusteringS2S revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b split: test type: lyon-nlp/alloprof metrics: - type: v_measure value: 55.41500719337263 task: type: Clustering - dataset: config: default name: MTEB AlloprofReranking revision: 666fdacebe0291776e86f29345663dfaf80a0db9 split: test type: lyon-nlp/mteb-fr-reranking-alloprof-s2p metrics: - type: map value: 73.48697375332002 - type: mrr value: 75.01836585523822 task: type: Reranking - dataset: config: default name: MTEB AlloprofRetrieval revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b split: test type: lyon-nlp/alloprof metrics: - type: map_at_1 value: 38.454 - type: map_at_10 value: 51.605000000000004 - type: map_at_100 value: 52.653000000000006 - type: map_at_1000 value: 52.697 - type: map_at_3 value: 48.304 - type: map_at_5 value: 50.073 - type: mrr_at_1 value: 43.307 - type: mrr_at_10 value: 54.400000000000006 - type: mrr_at_100 value: 55.147999999999996 - type: mrr_at_1000 value: 55.174 - type: mrr_at_3 value: 51.77 - type: mrr_at_5 value: 53.166999999999994 - type: ndcg_at_1 value: 43.307 - type: ndcg_at_10 value: 57.891000000000005 - type: ndcg_at_100 value: 62.161 - type: ndcg_at_1000 value: 63.083 - type: ndcg_at_3 value: 51.851 - type: ndcg_at_5 value: 54.605000000000004 - type: precision_at_1 value: 43.307 - type: precision_at_10 value: 9.033 - type: precision_at_100 value: 1.172 - type: precision_at_1000 value: 0.127 - type: precision_at_3 value: 22.798 - type: precision_at_5 value: 15.492 - type: recall_at_1 value: 38.454 - type: recall_at_10 value: 74.166 - type: recall_at_100 value: 92.43599999999999 - type: recall_at_1000 value: 99.071 - type: recall_at_3 value: 58.087 - type: recall_at_5 value: 64.568 task: type: Retrieval - dataset: config: fr name: MTEB AmazonReviewsClassification (fr) revision: 1399c76144fd37290681b995c656ef9b2e06e26d split: test type: mteb/amazon_reviews_multi metrics: - type: accuracy value: 53.474 - type: f1 value: 50.38275392350236 task: type: Classification - dataset: config: default name: MTEB BSARDRetrieval revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59 split: test type: maastrichtlawtech/bsard metrics: - type: map_at_1 value: 2.252 - type: map_at_10 value: 4.661 - type: map_at_100 value: 5.271 - type: map_at_1000 value: 5.3629999999999995 - type: map_at_3 value: 3.604 - type: map_at_5 value: 4.3020000000000005 - type: mrr_at_1 value: 2.252 - type: mrr_at_10 value: 4.661 - type: mrr_at_100 value: 5.271 - type: mrr_at_1000 value: 5.3629999999999995 - type: mrr_at_3 value: 3.604 - type: mrr_at_5 value: 4.3020000000000005 - type: ndcg_at_1 value: 2.252 - type: ndcg_at_10 value: 6.3020000000000005 - type: ndcg_at_100 value: 10.342 - type: ndcg_at_1000 value: 13.475999999999999 - type: ndcg_at_3 value: 4.0649999999999995 - type: ndcg_at_5 value: 5.344 - type: precision_at_1 value: 2.252 - type: precision_at_10 value: 1.171 - type: precision_at_100 value: 0.333 - type: precision_at_1000 value: 0.059000000000000004 - type: precision_at_3 value: 1.802 - type: precision_at_5 value: 1.712 - type: recall_at_1 value: 2.252 - type: recall_at_10 value: 11.712 - type: recall_at_100 value: 33.333 - type: recall_at_1000 value: 59.458999999999996 - type: recall_at_3 value: 5.405 - type: recall_at_5 value: 8.559 task: type: Retrieval - dataset: config: default name: MTEB HALClusteringS2S revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915 split: test type: lyon-nlp/clustering-hal-s2s metrics: - type: v_measure value: 28.301882091023288 task: type: Clustering - dataset: config: default name: MTEB MLSUMClusteringP2P revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7 split: test type: mlsum metrics: - type: v_measure value: 45.26992995191701 task: type: Clustering - dataset: config: default name: MTEB MLSUMClusteringS2S revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7 split: test type: mlsum metrics: - type: v_measure value: 42.773174876871145 task: type: Clustering - dataset: config: fr name: MTEB MTOPDomainClassification (fr) revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf split: test type: mteb/mtop_domain metrics: - type: accuracy value: 93.47635452552458 - type: f1 value: 93.19922617577213 task: type: Classification - dataset: config: fr name: MTEB MTOPIntentClassification (fr) revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba split: test type: mteb/mtop_intent metrics: - type: accuracy value: 80.2317569683683 - type: f1 value: 56.18060418621901 task: type: Classification - dataset: config: fra name: MTEB MasakhaNEWSClassification (fra) revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 split: test type: masakhane/masakhanews metrics: - type: accuracy value: 85.18957345971565 - type: f1 value: 80.829981537394 task: type: Classification - dataset: config: fra name: MTEB MasakhaNEWSClusteringP2P (fra) revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 split: test type: masakhane/masakhanews metrics: - type: v_measure value: 71.04138999801822 task: type: Clustering - dataset: config: fra name: MTEB MasakhaNEWSClusteringS2S (fra) revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 split: test type: masakhane/masakhanews metrics: - type: v_measure value: 71.7056263158008 task: type: Clustering - dataset: config: fr name: MTEB MassiveIntentClassification (fr) revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 76.65097511768661 - type: f1 value: 73.82441070598712 task: type: Classification - dataset: config: fr name: MTEB MassiveScenarioClassification (fr) revision: 7d571f92784cd94a019292a1f45445077d0ef634 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 79.09885675857431 - type: f1 value: 78.28407777434224 task: type: Classification - dataset: config: fr name: MTEB MintakaRetrieval (fr) revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e split: test type: jinaai/mintakaqa metrics: - type: map_at_1 value: 25.307000000000002 - type: map_at_10 value: 36.723 - type: map_at_100 value: 37.713 - type: map_at_1000 value: 37.769000000000005 - type: map_at_3 value: 33.77 - type: map_at_5 value: 35.463 - type: mrr_at_1 value: 25.307000000000002 - type: mrr_at_10 value: 36.723 - type: mrr_at_100 value: 37.713 - type: mrr_at_1000 value: 37.769000000000005 - type: mrr_at_3 value: 33.77 - type: mrr_at_5 value: 35.463 - type: ndcg_at_1 value: 25.307000000000002 - type: ndcg_at_10 value: 42.559999999999995 - type: ndcg_at_100 value: 47.457 - type: ndcg_at_1000 value: 49.162 - type: ndcg_at_3 value: 36.461 - type: ndcg_at_5 value: 39.504 - type: precision_at_1 value: 25.307000000000002 - type: precision_at_10 value: 6.106 - type: precision_at_100 value: 0.8420000000000001 - type: precision_at_1000 value: 0.098 - type: precision_at_3 value: 14.741999999999999 - type: precision_at_5 value: 10.319 - type: recall_at_1 value: 25.307000000000002 - type: recall_at_10 value: 61.056999999999995 - type: recall_at_100 value: 84.152 - type: recall_at_1000 value: 98.03399999999999 - type: recall_at_3 value: 44.226 - type: recall_at_5 value: 51.597 task: type: Retrieval - dataset: config: fr name: MTEB OpusparcusPC (fr) revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a split: test type: GEM/opusparcus metrics: - type: cos_sim_accuracy value: 99.90069513406156 - type: cos_sim_ap value: 100.0 - type: cos_sim_f1 value: 99.95032290114257 - type: cos_sim_precision value: 100.0 - type: cos_sim_recall value: 99.90069513406156 - type: dot_accuracy value: 99.90069513406156 - type: dot_ap value: 100.0 - type: dot_f1 value: 99.95032290114257 - type: dot_precision value: 100.0 - type: dot_recall value: 99.90069513406156 - type: euclidean_accuracy value: 99.90069513406156 - type: euclidean_ap value: 100.0 - type: euclidean_f1 value: 99.95032290114257 - type: euclidean_precision value: 100.0 - type: euclidean_recall value: 99.90069513406156 - type: manhattan_accuracy value: 99.90069513406156 - type: manhattan_ap value: 100.0 - type: manhattan_f1 value: 99.95032290114257 - type: manhattan_precision value: 100.0 - type: manhattan_recall value: 99.90069513406156 - type: max_accuracy value: 99.90069513406156 - type: max_ap value: 100.0 - type: max_f1 value: 99.95032290114257 task: type: PairClassification - dataset: config: fr name: MTEB PawsX (fr) revision: 8a04d940a42cd40658986fdd8e3da561533a3646 split: test type: paws-x metrics: - type: cos_sim_accuracy value: 70.8 - type: cos_sim_ap value: 73.7671529695957 - type: cos_sim_f1 value: 68.80964339527875 - type: cos_sim_precision value: 62.95955882352941 - type: cos_sim_recall value: 75.85825027685493 - type: dot_accuracy value: 70.8 - type: dot_ap value: 73.78345265366947 - type: dot_f1 value: 68.80964339527875 - type: dot_precision value: 62.95955882352941 - type: dot_recall value: 75.85825027685493 - type: euclidean_accuracy value: 70.8 - type: euclidean_ap value: 73.7671529695957 - type: euclidean_f1 value: 68.80964339527875 - type: euclidean_precision value: 62.95955882352941 - type: euclidean_recall value: 75.85825027685493 - type: manhattan_accuracy value: 70.75 - type: manhattan_ap value: 73.78996383615953 - type: manhattan_f1 value: 68.79432624113475 - type: manhattan_precision value: 63.39869281045751 - type: manhattan_recall value: 75.1937984496124 - type: max_accuracy value: 70.8 - type: max_ap value: 73.78996383615953 - type: max_f1 value: 68.80964339527875 task: type: PairClassification - dataset: config: default name: MTEB SICKFr revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a split: test type: Lajavaness/SICK-fr metrics: - type: cos_sim_pearson value: 84.03253762760392 - type: cos_sim_spearman value: 79.68280105762004 - type: euclidean_pearson value: 80.98265050044444 - type: euclidean_spearman value: 79.68233242682867 - type: manhattan_pearson value: 80.9678911810704 - type: manhattan_spearman value: 79.70264097683109 task: type: STS - dataset: config: fr name: MTEB STS22 (fr) revision: eea2b4fe26a775864c896887d910b76a8098ad3f split: test type: mteb/sts22-crosslingual-sts metrics: - type: cos_sim_pearson value: 80.56896987572884 - type: cos_sim_spearman value: 81.84352499523287 - type: euclidean_pearson value: 80.40831759421305 - type: euclidean_spearman value: 81.84352499523287 - type: manhattan_pearson value: 80.74333857561238 - type: manhattan_spearman value: 82.41503246733892 task: type: STS - dataset: config: fr name: MTEB STSBenchmarkMultilingualSTS (fr) revision: 93d57ef91790589e3ce9c365164337a8a78b7632 split: test type: stsb_multi_mt metrics: - type: cos_sim_pearson value: 82.71826762276979 - type: cos_sim_spearman value: 82.25433354916042 - type: euclidean_pearson value: 81.87115571724316 - type: euclidean_spearman value: 82.25322342890107 - type: manhattan_pearson value: 82.11174867527224 - type: manhattan_spearman value: 82.55905365203084 task: type: STS - dataset: config: default name: MTEB SummEvalFr revision: b385812de6a9577b6f4d0f88c6a6e35395a94054 split: test type: lyon-nlp/summarization-summeval-fr-p2p metrics: - type: cos_sim_pearson value: 30.659441623392887 - type: cos_sim_spearman value: 30.501134097353315 - type: dot_pearson value: 30.659444768851056 - type: dot_spearman value: 30.501134097353315 task: type: Summarization - dataset: config: default name: MTEB SyntecReranking revision: b205c5084a0934ce8af14338bf03feb19499c84d split: test type: lyon-nlp/mteb-fr-reranking-syntec-s2p metrics: - type: map value: 94.03333333333333 - type: mrr value: 94.03333333333333 task: type: Reranking - dataset: config: default name: MTEB SyntecRetrieval revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff split: test type: lyon-nlp/mteb-fr-retrieval-syntec-s2p metrics: - type: map_at_1 value: 79.0 - type: map_at_10 value: 87.61 - type: map_at_100 value: 87.655 - type: map_at_1000 value: 87.655 - type: map_at_3 value: 87.167 - type: map_at_5 value: 87.36699999999999 - type: mrr_at_1 value: 79.0 - type: mrr_at_10 value: 87.61 - type: mrr_at_100 value: 87.655 - type: mrr_at_1000 value: 87.655 - type: mrr_at_3 value: 87.167 - type: mrr_at_5 value: 87.36699999999999 - type: ndcg_at_1 value: 79.0 - type: ndcg_at_10 value: 90.473 - type: ndcg_at_100 value: 90.694 - type: ndcg_at_1000 value: 90.694 - type: ndcg_at_3 value: 89.464 - type: ndcg_at_5 value: 89.851 - type: precision_at_1 value: 79.0 - type: precision_at_10 value: 9.9 - type: precision_at_100 value: 1.0 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 32.0 - type: precision_at_5 value: 19.400000000000002 - type: recall_at_1 value: 79.0 - type: recall_at_10 value: 99.0 - type: recall_at_100 value: 100.0 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 96.0 - type: recall_at_5 value: 97.0 task: type: Retrieval - dataset: config: fr name: MTEB XPQARetrieval (fr) revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f split: test type: jinaai/xpqa metrics: - type: map_at_1 value: 39.395 - type: map_at_10 value: 59.123999999999995 - type: map_at_100 value: 60.704 - type: map_at_1000 value: 60.760000000000005 - type: map_at_3 value: 53.187 - type: map_at_5 value: 56.863 - type: mrr_at_1 value: 62.083 - type: mrr_at_10 value: 68.87299999999999 - type: mrr_at_100 value: 69.46900000000001 - type: mrr_at_1000 value: 69.48299999999999 - type: mrr_at_3 value: 66.8 - type: mrr_at_5 value: 67.928 - type: ndcg_at_1 value: 62.083 - type: ndcg_at_10 value: 65.583 - type: ndcg_at_100 value: 70.918 - type: ndcg_at_1000 value: 71.72800000000001 - type: ndcg_at_3 value: 60.428000000000004 - type: ndcg_at_5 value: 61.853 - type: precision_at_1 value: 62.083 - type: precision_at_10 value: 15.033 - type: precision_at_100 value: 1.9529999999999998 - type: precision_at_1000 value: 0.207 - type: precision_at_3 value: 36.315 - type: precision_at_5 value: 25.955000000000002 - type: recall_at_1 value: 39.395 - type: recall_at_10 value: 74.332 - type: recall_at_100 value: 94.729 - type: recall_at_1000 value: 99.75500000000001 - type: recall_at_3 value: 57.679 - type: recall_at_5 value: 65.036 task: type: Retrieval --- ## gte-Qwen2-1.5B-instruct **gte-Qwen2-1.5B-instruct** is the latest model in the gte (General Text Embedding) model family. The model is built on [Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B) LLM model and use the same training data and strategies as the [gte-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) model. The model incorporates several key advancements: - Integration of bidirectional attention mechanisms, enriching its contextual understanding. - Instruction tuning, applied solely on the query side for streamlined efficiency - Comprehensive training across a vast, multilingual text corpus spanning diverse domains and scenarios. This training leverages both weakly supervised and supervised data, ensuring the model's applicability across numerous languages and a wide array of downstream tasks. ## Model Information - Model Size: 1.5B - Embedding Dimension: 1536 - Max Input Tokens: 32k ## Requirements ``` transformers>=4.39.2 flash_attn>=2.5.6 ``` ## Usage ### Sentence Transformers ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-1.5B-instruct", trust_remote_code=True) # In case you want to reduce the maximum length: model.max_seq_length = 8192 queries = [ "how much protein should a female eat", "summit define", ] documents = [ "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.", ] query_embeddings = model.encode(queries, prompt_name="query") document_embeddings = model.encode(documents) scores = (query_embeddings @ document_embeddings.T) * 100 print(scores.tolist()) ``` Observe the [config_sentence_transformers.json](config_sentence_transformers.json) to see all pre-built prompt names. Otherwise, you can use `model.encode(queries, prompt="Instruct: ...\nQuery: "` to use a custom prompt of your choice. ### Transformers ```python import torch import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def last_token_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0]) if left_padding: return last_hidden_states[:, -1] else: sequence_lengths = attention_mask.sum(dim=1) - 1 batch_size = last_hidden_states.shape[0] return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths] def get_detailed_instruct(task_description: str, query: str) -> str: return f'Instruct: {task_description}\nQuery: {query}' # Each query must come with a one-sentence instruction that describes the task task = 'Given a web search query, retrieve relevant passages that answer the query' queries = [ get_detailed_instruct(task, 'how much protein should a female eat'), get_detailed_instruct(task, 'summit define') ] # No need to add instruction for retrieval documents documents = [ "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments." ] input_texts = queries + documents tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/gte-Qwen2-1.5B-instruct', trust_remote_code=True) model = AutoModel.from_pretrained('Alibaba-NLP/gte-Qwen2-1.5B-instruct', trust_remote_code=True) max_length = 8192 # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=max_length, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Evaluation ### MTEB & C-MTEB You can use the [scripts/eval_mteb.py](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct/blob/main/scripts/eval_mteb.py) to reproduce the following result of **gte-Qwen2-1.5B-instruct** on MTEB(English)/C-MTEB(Chinese): | Model Name | MTEB(56) | C-MTEB(35) | MTEB-fr(26) | MTEB-pl(26) | |:----:|:---------:|:----------:|:----------:|:----------:| | [bge-base-en-1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 64.23 | - | - | - | | [bge-large-en-1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 63.55 | - | - | - | | [gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | 65.39 | - | - | - | | [gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | 64.11 | - | - | - | | [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) | 64.68 | - | - | - | | [acge_text_embedding](https://huggingface.co/aspire/acge_text_embedding) | - | 69.07 | - | - | | [stella-mrl-large-zh-v3.5-1792d](https://huggingface.co/infgrad/stella-mrl-large-zh-v3.5-1792d) | - | 68.55 | - | - | | [gte-large-zh](https://huggingface.co/thenlper/gte-large-zh) | - | 66.72 | - | - | | [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 59.45 | 56.21 | - | - | | [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 61.50 | 58.81 | - | - | | [e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) | 66.63 | 60.81 | - | - | | [gte-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | 67.34 | 69.52 | - | - | | [NV-Embed-v1](https://huggingface.co/nvidia/NV-Embed-v1) | 69.32 | - | - | - | | [**gte-Qwen2-7B-instruct**](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | **70.24** | **72.05** | **68.25** | **67.86** | | [**gte-Qwen2-1.5B-instruct**](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | **67.16** | **67.65** | **66.60** | **64.04** | ### GTE Models The gte series models have consistently released two types of models: encoder-only models (based on the BERT architecture) and decode-only models (based on the LLM architecture). | Models | Language | Max Sequence Length | Dimension | Model Size (Memory Usage, fp32) | |:-------------------------------------------------------------------------------------:|:--------:|:-----: |:---------:|:-------------------------------:| | [GTE-large-zh](https://huggingface.co/thenlper/gte-large-zh) | Chinese | 512 | 1024 | 1.25GB | | [GTE-base-zh](https://huggingface.co/thenlper/gte-base-zh) | Chinese | 512 | 512 | 0.41GB | | [GTE-small-zh](https://huggingface.co/thenlper/gte-small-zh) | Chinese | 512 | 512 | 0.12GB | | [GTE-large](https://huggingface.co/thenlper/gte-large) | English | 512 | 1024 | 1.25GB | | [GTE-base](https://huggingface.co/thenlper/gte-base) | English | 512 | 512 | 0.21GB | | [GTE-small](https://huggingface.co/thenlper/gte-small) | English | 512 | 384 | 0.10GB | | [GTE-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | English | 8192 | 1024 | 1.74GB | | [GTE-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | English | 8192 | 768 | 0.51GB | | [GTE-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | Multilingual | 32000 | 4096 | 26.45GB | | [GTE-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | Multilingual | 32000 | 3584 | 26.45GB | | [GTE-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | Multilingual | 32000 | 1536 | 6.62GB | ## Cloud API Services In addition to the open-source [GTE](https://huggingface.co/collections/Alibaba-NLP/gte-models-6680f0b13f885cb431e6d469) series models, GTE series models are also available as commercial API services on Alibaba Cloud. - [Embedding Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-embedding/): Rhree versions of the text embedding models are available: text-embedding-v1/v2/v3, with v3 being the latest API service. - [ReRank Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-sorting-model/): The gte-rerank model service is available. Note that the models behind the commercial APIs are not entirely identical to the open-source models. ## Citation If you find our paper or models helpful, please consider cite: ``` @article{li2023towards, title={Towards general text embeddings with multi-stage contrastive learning}, author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan}, journal={arXiv preprint arXiv:2308.03281}, year={2023} } ```
[ "SUMMARIZATION" ]
[ "BIOSSES", "SCIFACT" ]
Non_BioNLP
RomainDarous/large_directTwoEpoch_additivePooling_randomInit_mistranslationModel
RomainDarous
sentence-similarity
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:4460010", "loss:CoSENTLoss", "dataset:RomainDarous/corrupted_os_by_language", "arxiv:1908.10084", "base_model:RomainDarous/large_directOneEpoch_additivePooling_randomInit_mistranslationModel", "base_model:finetune:RomainDarous/large_directOneEpoch_additivePooling_randomInit_mistranslationModel", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
1,739
1,739
26
0
--- base_model: RomainDarous/large_directOneEpoch_additivePooling_randomInit_mistranslationModel datasets: - RomainDarous/corrupted_os_by_language library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:4460010 - loss:CoSENTLoss widget: - source_sentence: Malformed target specific variable definition sentences: - Hedefe özgü değişken tanımı bozuk - Kan alle data in die gids lees - "слава Украине! героям слава!\uFEFF" - source_sentence: Can't write an inode bitmap sentences: - Skontrolujte stav aktualizácií alebo to skúste znova neskôr. - Malsukcesis skribi i nodan bitmapon - Zastępuje wersję GL obsługiwaną przez sterownik - source_sentence: Optimize soft proofing color transformations sentences: - 'arkadaslar biz artik her an kirmizi kart yiyecek,bencil,pas yapamayan,isabetsiz orta yapani istemiyoruz. sozde efsaneniz bu sezon Besiktasa en cok zarar verenlerden biriydi. kendini dusunmeden once Besiktasi dusunecek adam lazim bize. o yuzden #GoHomeQuaresma' - Yav bizim dedikodusunu yaptığımız insanın bile bi vizyonu var. Senin hakkında neden oturup konuşalım? - Ik ben een transgender. - source_sentence: 'Pass 1: Checking @is, @bs, and sizes' sentences: - Bu adam cidden kurabiye gibi ben bunu çayın yanında yerim - sagnat. errada. invisible. justificació. idioma - Wilt u echt de primaire sleutel verplaatsen? (j N) - source_sentence: Search for matching log entries sentences: - quem te lembra? caralho tô assustada aqui kkkkk - sendotasunik gabeko\ egoera bistaratuko den ala ez adierazten du - En aquest cas, hem d'incloure les imatges del contenidor )sr iov per a càrregues de treball de telco (per exemple, com a referència, es podrien obtenir des de valors de helm chart) model-index: - name: SentenceTransformer based on RomainDarous/large_directOneEpoch_additivePooling_randomInit_mistranslationModel results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts eval type: sts-eval metrics: - type: pearson_cosine value: 0.9792971292767451 name: Pearson Cosine - type: spearman_cosine value: 0.8655911199085211 name: Spearman Cosine - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test type: sts-test metrics: - type: pearson_cosine value: 0.9793536482242442 name: Pearson Cosine - type: spearman_cosine value: 0.8656172072948024 name: Spearman Cosine --- # SentenceTransformer based on RomainDarous/large_directOneEpoch_additivePooling_randomInit_mistranslationModel This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [RomainDarous/large_directOneEpoch_additivePooling_randomInit_mistranslationModel](https://huggingface.co/RomainDarous/large_directOneEpoch_additivePooling_randomInit_mistranslationModel) on the [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [RomainDarous/large_directOneEpoch_additivePooling_randomInit_mistranslationModel](https://huggingface.co/RomainDarous/large_directOneEpoch_additivePooling_randomInit_mistranslationModel) <!-- at revision abc7233cc26cb0cd449fd9335c741917d03f3bd4 --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): MultiHeadGeneralizedPooling( (P): ModuleList( (0-7): 8 x Linear(in_features=768, out_features=96, bias=True) ) (W1): ModuleList( (0-7): 8 x Linear(in_features=96, out_features=384, bias=True) ) (W2): ModuleList( (0-7): 8 x Linear(in_features=384, out_features=96, bias=True) ) ) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("RomainDarous/large_directTwoEpoch_additivePooling_randomInit_mistranslationModel") # Run inference sentences = [ 'Search for matching log entries', 'quem te lembra? caralho tô assustada aqui kkkkk', 'sendotasunik gabeko\\ egoera bistaratuko den ala ez adierazten du', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Datasets: `sts-eval` and `sts-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | sts-eval | sts-test | |:--------------------|:-----------|:-----------| | pearson_cosine | 0.9793 | 0.9794 | | **spearman_cosine** | **0.8656** | **0.8656** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### corrupted_open_os_by_language * Dataset: [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) at [9d25780](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language/tree/9d25780e2032b1e8f06af6a4ff55124d7a930c3c) * Size: 4,460,010 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 6 tokens</li><li>mean: 18.33 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 26.47 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>0: ~50.60%</li><li>1: ~49.40%</li></ul> | * Samples: | sentence1 | sentence2 | score | |:--------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------|:---------------| | <code>Check spelling. Print the document. Show completion window. General. Show help</code> | <code>Kontrolli õigekirja. присоединяюсь. </code> | <code>0</code> | | <code>EXIF not supported for this file format.</code> | <code>Šiam failo formatui EXIF nepalaikomas.</code> | <code>1</code> | | <code>This package includes the documentation for texlive everyhook</code> | <code>Paket ini menyertakan dokumentasi untuk texlive everyhook</code> | <code>1</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Evaluation Dataset #### corrupted_open_os_by_language * Dataset: [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) at [9d25780](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language/tree/9d25780e2032b1e8f06af6a4ff55124d7a930c3c) * Size: 4,460,010 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 5 tokens</li><li>mean: 17.71 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 26.95 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>0: ~50.60%</li><li>1: ~49.40%</li></ul> | * Samples: | sentence1 | sentence2 | score | |:----------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>Could not identify the current seat.</code> | <code> 天天花着男人的钱还这这创造新词汇男权你可真牛批,你也就这一出了一问男权,就说是我是吧,到现在我也没听到你给我们讲的男权,你也就是在网上喷喷,现实走道都不敢探头自卑,你现实要把你女权的劲拿出来总低啥头,您老应该去国家教育局把男权加上是吧,你们女权天天说自己生活不好没地位,给你们地位了你们能干啥?用你们的女权打到全世界男性是吧,能相出男权这一词您老也是人才呀,是不是庆幸自己是个女的,活在自己想想的世界里不觉得孤单吗,假象有男权是吧,自己假象和男权还说自己不是田园女权,田园女权能连自己都骂说自己妈是驴爸是大鼎的也是奇葩呀,那我们国家大肆宣扬过你们这么田园女权吗,国家要的是女性人群自主自理,你们可好看看你们女权干的啥事,给你们女权地位高了,看看你们女权干的事n绿地集团高管怎么都不说呀,人家可是有钱有地位,也不是我们说三从四德洗衣做饭你们女权会吗?,那我问问你们女权干过啥惊天大事,还甩锅给孔子,还封建社会,那我问问你们女权在福利面前为啥说自己是女性呀不是社会主义社会吗不应该男女平等吗,天天自己也不知道是不是抱个手机天天欧巴欧巴,你家那位要是不陪你看一会就会问你是不是不爱我了是吧大姐,您老也就赚这白菜钱操心国家事,中国五千年的历史被您老一句否决,还嘲讽人家日本女性,好意思说自己不是女权,三从四德流传这么久到您这变成日本文化了,我就想问问男权您老是怎么想的,那你问孔子老人家呗为什么女人要三从四德,我说的是女权你干嘛自己对号入座,连中华人民传承的东西都不认跟我这谈男权,还男权您老给我举个例子呗,让我们男权听听都是h啥,这些不都是你们女权的标准吗?,还男权,您老醒醒吧这里是现实,不是你的公主世界,总觉得自己多么多么重要,地球没你是不能转了还是人类要灭亡呀,我真的想问一句你给我找一条男权的新闻,咋了我们男人不能提女权呗你老授权了呗,那我们谈论田园女权你老对号入座干嘛,天天过节要礼物,还嫌弃自己男朋友没有钱,我寻思你找个有钱人包养你呗,对了有钱人怎么可能看上你这种女权的呢,还要孩子跟女方姓我也没看见你没跟你妈姓呀,年年过节男人给你们送礼物你们女人给男人送过礼物吗?,一问我不是陪着他吗我对他说我爱你了这不是最好的礼物吗?,男人只要不送礼物就是不爱你们了呗,人家国际女权讲的男人能做的我们女人也能做,田园女权男人能做的我们女人为啥要做,还男权我笑了,以前结婚几头牛换个衣服原装的,现在几十万彩...</code> | <code>0</code> | | <code>Undoing Date and Time Adjustment</code> | <code>正在取消日期和时间调整</code> | <code>1</code> | | <code>Dependency package for gsl_2_6 gnu hpc</code> | <code>Pacotes de desenvolvimento do KDE</code> | <code>1</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | corrupted open os by language loss | sts-eval_spearman_cosine | sts-test_spearman_cosine | |:-----:|:-----:|:-------------:|:----------------------------------:|:------------------------:|:------------------------:| | 1.0 | 55751 | 0.2668 | 0.2711 | 0.8656 | - | | -1 | -1 | - | - | - | 0.8656 | ### Framework Versions - Python: 3.10.13 - Sentence Transformers: 3.4.1 - Transformers: 4.48.2 - PyTorch: 2.1.2+cu121 - Accelerate: 1.3.0 - Datasets: 2.16.1 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CoSENTLoss ```bibtex @online{kexuefm-8847, title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT}, author={Su Jianlin}, year={2022}, month={Jan}, url={https://kexue.fm/archives/8847}, } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "TEXT_CLASSIFICATION", "SEMANTIC_SIMILARITY", "TRANSLATION" ]
[ "CAS" ]
Non_BioNLP
pruas/BENT-PubMedBERT-NER-Disease
pruas
token-classification
[ "transformers", "pytorch", "bert", "token-classification", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,670
1,709
238
7
--- language: - en license: apache-2.0 pipeline_tag: token-classification --- Named Entity Recognition (NER) model to recognize disease entities. Please cite our work: ``` @article{NILNKER2022, title = {NILINKER: Attention-based approach to NIL Entity Linking}, journal = {Journal of Biomedical Informatics}, volume = {132}, pages = {104137}, year = {2022}, issn = {1532-0464}, doi = {https://doi.org/10.1016/j.jbi.2022.104137}, url = {https://www.sciencedirect.com/science/article/pii/S1532046422001526}, author = {Pedro Ruas and Francisco M. Couto}, } ``` [PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) fine-tuned on the following datasets: - [NCBI Disease Corpus](https://www.ncbi.nlm.nih.gov/research/bionlp/Data/disease/) (train and dev sets) - [PHAEDRA](http://www.nactem.ac.uk/PHAEDRA/) (train, dev, test sets): entity type "Disorder" - [Corpus for Disease Names and Adverse Effects](https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/corpus-for-disease-names-and-adverse-effects.html) (train, dev, test sets): entity types "DISEASE", "ADVERSE" - [RareDis corpus](https://github.com/isegura/NLP4RARE-CM-UC3M/tree/main/corpus) (train, dev, test sets): entity types "DISEASE", "RAREDISEASE", "SYMPTOM" - [CoMAGC](https://github.com/isegura/NLP4RARE-CM-UC3M/tree/main/corpus) (train, dev, test sets): entity type "cancer_term" - [PGxCorpus](https://www.nature.com/articles/s41597-019-0342-9) (train, dev, test sets): - [miRNA-Test-Corpus](https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/download-mirna-test-corpus.html) (train, dev, test sets): entity type "Diseases" - [BC5CDR]() (train and dev sets): entity type "Disease" - [Mantra](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4986661/pdf/ocv037.pdf) (train, dev, test sets): entity type "DISO"
[ "NAMED_ENTITY_RECOGNITION" ]
[ "BC5CDR", "NCBI DISEASE", "MIRNA" ]
BioNLP
croissantllm/base_135k
croissantllm
text2text-generation
[ "transformers", "pytorch", "llama", "text-generation", "legal", "code", "text-generation-inference", "art", "text2text-generation", "fr", "en", "dataset:cerebras/SlimPajama-627B", "dataset:uonlp/CulturaX", "dataset:pg19", "dataset:bigcode/starcoderdata", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
1,705
1,706
5
0
--- datasets: - cerebras/SlimPajama-627B - uonlp/CulturaX - pg19 - bigcode/starcoderdata language: - fr - en license: mit pipeline_tag: text2text-generation tags: - legal - code - text-generation-inference - art --- # CroissantLLM - Base (135k steps) This model is part of the CroissantLLM initiative, and corresponds to the checkpoint after 135k steps (2.12 T) tokens. To play with the final model, we recommend using the Chat version: https://huggingface.co/croissantllm/CroissantLLMChat-v0.1. ## Abstract We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware. To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources. To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French Language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases, and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models, and strong translation models. We evaluate our model through the FMTI framework, and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives. This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models. ## Citation Our work can be cited as: ```bash Coming soon ``` ## Usage This model is a base model, that is, it is not finetuned for Chat function and works best with few-shot prompting strategies. ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "croissantllm/base_135k" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto") inputs = tokenizer("I am so tired I could sleep right now. -> Je suis si fatigué que je pourrais m'endormir maintenant. He is heading to the market. -> Il va au marché. We are running on the beach. ->", return_tensors="pt").to(model.device) tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60, temperature=0.5) print(tokenizer.decode(tokens[0])) # remove bos token inputs = tokenizer("Capitales: France -> Paris, Italie -> Rome, Allemagne -> Berlin, Espagne ->", return_tensors="pt", add_special_tokens=True).to(model.device) tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60) print(tokenizer.decode(tokens[0])) ```
[ "TRANSLATION" ]
[ "CRAFT" ]
Non_BioNLP